Intracardiac electrophysiology study (EPS) involves placing wire electrodes within the heart to determine the characteristics of heart arrhythmias.
How test is performed
The study is performed in a hospital laboratory by a trained staff that includes cardiologists, technicians, and nurses. The environment is safe and controlled to minimize any danger or risk to the patient.
The cardiologist inserts a catheter into a vein through a small incision in the groin after cleansing the area and numbing it with a local anesthetic. This catheter is equipped with an electrode connected to electrocardiographic monitors.
The catheter is then carefully threaded into the heart using an x-ray imaging technique called fluoroscopy to guide the insertion. Electrodes are placed in the heart to measure electrical activity along the heart's conduction system and within heart muscle cells themselves.
Normal electrical activity is signaled from the heart's natural pacemaker known as the sinoatrial (SA) node. It then travels through the atria (the two chambers on the top of the heart), the atrioventricular (AV) node (connecting the atria to the ventricles), and the ventricles (the lower chambers of the heart).
Abnormal electrical activity can occur anywhere along this conduction system, including in the muscle cells of the atria or ventricles. The electrodes inserted during EPS will map the type of arrhythmia you have and where the problem arises in your heart. This information will allow your cardiologist to determine the severity of the problem (including whether you are at risk for sudden cardiac death) as well as appropriate treatment.
How to prepare for test
Test preparations are similar to those for a cardiac catheterization. Food and fluid will be restricted for 6 - 8 hours before the test. The procedure will take place in a hospital, and you will wear hospital clothing. You must sign a consent form for the procedure.
Your health care provider will give you instructions regarding any changes to your normal medications. Do not stop taking or change any medications without consulting your health care provider.
A mild sedative is usually given 30 minutes before the procedure. You may not be able to drive home yourself if you are discharged the same day.
How test will feel
During the test, you may be awake and able to follow instructions, or you may be sedated.
A simple EPS generally lasts from 30 minutes to an hour. It may take longer if other procedures are involved.
Why test is performed
Before performing EPS, your cardiologist will try to identify a suspected arrhythmia using other, less invasive tests such as ambulatory cardiac monitoring. If the abnormal rhythm is not detected by these other methods, and your symptoms suggest you have an arrhythmia, EPS may be recommended. Additional reasons for EPS may include the need:
• To find the location of a known arrhythmia and determine the best therapy
• To assess the severity of the arrhythmia and determine if you are at risk for future cardiac events, especially sudden cardiac death
• To evaluate the effectiveness of medication in controlling an arrhythmia
• To determine if the focus (the place from where the arrhythmia is coming) should be ablated. If ablation is appropriate, it will be performed immediately.
• To evaluate the need for a permanent pacemaker or an implantable cardioverter-defibrillator (ICD)
What abnormal results mean
The exact location and type of the arrhythmia must be determined so that specific therapy can be applied appropriately. The arrhythmia may originate from any area of the heart's electrical conduction system. For example:
• Sick sinus syndrome occurs when the SA node of the heart malfunctions
• Wolff-Parkinson-White syndrome happens when there is an extra electrical pathway, causing the signal to bypass the normal path through the AV node
• Ventricular fibrillation and ventricular tachycardia can occur when muscle cells in the ventricles inappropriately take over the electrical activity of the heart
The procedure is generally safe. Possible risks include but are not limited to the following:
• Cardiac arrest
• Trauma to the vein
• Low blood pressure
• Cardiac tamponade
• Embolism caused by blood clots developing at the tip of the catheter
Heart Rhythm Specialists of South Florida gives special thanks to the National Library of Medicine and National Heart Lung and Blood Institute whose Web sites aided in the research of the patient educational material provided above.
|
How did life start? There may not be a bigger question. To learn the secret of our origins means going back beyond the earliest forms of biological life, past simple bacteria, and down to the chemistry of the building blocks that came earlier.
Most people have heard DNA’s double helix described as the blueprint for life, but its single-stranded relative RNA is also critical for transmitting genetic information. Both are present in the cells of all living organisms, and many scientists suspect that RNA was the original genetic material, coming on the scene before DNA, more than four billion years ago during a period scientists call “RNA world.”
But to build the RNA world, RNA and other biomolecules had to come together in the first place. Their constituent parts have a distinctive chemical property called chirality that’s related to how their atoms are arranged. And a debate has broken out about how life’s chirality got started: is it the product of the chemical environment of the early Earth, or did life inherit its chirality from space?
For some scientists, homing in on how a chain of genetic material was able to come together to start terrestrial life now involves looking away from Earth. One idea being explored in astrobiology is whether some prebiotic organic molecules could have been delivered to Earth by meteorites or dust grains. Recent discoveries in interstellar space may be providing some support for this.
In 2011, NASA published a study of meteorites suggesting that they contain nucleobases, chemicals that are components of both DNA and RNA. Thus, a critical starting material for life may have been seeded to early Earth from space. A year later, a team at the University of Copenhagen reported finding a sugar molecule in interstellar space that can be chemically transformed into ribose—the “R” in RNA. Last year, the same team uncovered a more complex molecule (methyl isocyanate) in a star-forming region more than 400 light years away from Earth.
And in 2016, two postdoctoral researchers, Brett McGuire (National Radio Astronomy Observatory, Virginia) and Brandon Carroll (California Institute of Technology), working with astronomers at the Parkes Observatory in Australia, reported the detection of a molecule in interstellar space, near the center of the Milky Way, that could have distinct consequences for the narrative of terrestrial life.
Where no chiral molecule has gone before
McGuire and Carroll discovered a molecule called propylene oxide (molecular formula: C3H6O) 25,000 light years away from Earth, in a star-forming region of our galaxy called Sagittarius B2. But it wasn’t the chemical itself that was surprising; this propylene oxide bears a property that has been associated exclusively with life on Earth.
Propylene oxide is what is known as a “chiral” molecule (pronounced KY-ral, from the Greek word cheir for hand), which means that it comes in two forms: right- and left-handed. Chiral molecules have the same chemical formula, and their structures are nearly identical except for certain atoms that are attached on different sides of the three-dimensional molecule. In the case of propylene oxide, it’s the methyl group (CH3) that can attach to one of two carbons, producing the two mirror-image forms.
The two forms of a chiral molecule cannot be superimposed on each other on a level plane, much like when you place one hand on top of the other and a thumb sticks out at either end—the hands are mirror images of each other. The French microbiologist Louis Pasteur discovered this quirk of nature more than 150 years ago.
What he didn’t realize was that he happened upon a fundamental feature of organic matter: as molecules get more complex, chirality is all but guaranteed. While it doesn’t change the number or types of atoms in that molecule, the differences in how those atoms attach can impact a molecule’s function. One example is limonene, a key component of the scent of citrus fruit. The right-handed version smells like orange, while the left-handed one smells like lemon. Ditto for the molecule carvone: in caraway seeds, the left-handed version binds to a receptor in neurons lining your nose that sends a signal to your brain telling it that it has smelled rye bread; the right-handed carvone signals your brain that it has smelled spearmint.
Beyond smell and taste, chirality determines the shape of our large-scale biological structures. The famous double helix of a DNA strand twists right, along with the sugars that comprise its backbone; the amino acids in proteins twist left. Despite the fact that these molecules naturally occur in both orientations, all the living organisms on Earth appear to have DNA that is built on the blueprint of it twisting right—perhaps descended from a single right-handed twist in the ancient RNA world.
The enzymes that help our body use amino acids and DNA bases work because they recognize the specific shapes of these molecules. An amino acid with a different chirality would have a different shape, keeping those enzymes from interacting properly with it. If you were served a burger of protein that had right-handed amino acids, your body would not be able to break it down.
This deep bias that permeates all life must have had a beginning. And McGuire and Carroll suggest that their discovery of chiral propylene oxide—as well as the earlier discoveries of methyl isocyanate and glycolaldehyde—shows that space may have had a “hand” in life’s origins.
“This is the first chiral molecule detected in outer space,” said McGuire, who is the Jansky Postdoctoral Fellow with the National Radio Astronomy Observatory. Its detection suggests that a bias toward one form of chirality is not limited to life on Earth, as has been previously thought, and lends evidence to the idea that material from elsewhere in the Solar System—possibly including some much older than Earth or even our Solar System—may have seeded the earliest chemicals necessary to form life on our planet.
Of course, chirality isn’t the only problem you have to solve—the chiral molecules we’ve seen in space are much less complex than most biomolecules.
|
Waterbirds have died of lead poisoning from ingesting lead fishing sinkers in the United States and Europe. Estimating abundance and distribution of sinkers in the environment will help researchers to understand the potential effects of lead poisoning from sinker ingestion. We used a metal detector to test how environmental conditions and sinker characteristics affected detection of sinkers. Odds of detecting a lead sinker depended on the interaction of sinker mass and depth where it was buried (P=0.002). The odds of detecting a sinker increased with mass and decreased with depth buried. Lead split-shot sinkers were less detectable than tin, brass, and stainless steel sinkers. Detecting lead sinkers was not influenced by sinker shape, substrate type, or whether we searched underwater or on land. We developed a model to determine the proportion of sinkers detected when this detector is used to search for sinkers, so sinker abundance can be estimated. The log odds (logit) of detecting a lead sinker with mass M g buried D cm below the surface was Logit(Y) = -1.63 + 4.20M - 0.45D - 0.27MD + 0.0002D². The probability of detecting a lead sinker was e^Logit(Y) / (1 + e^Logit(Y)). At the surface, 90% of sinkers with mass 0.9 g will be detected.
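The fitted model is easy to evaluate directly. Below is a minimal sketch (in Python, our choice of language; the abstract reports only the coefficients) that plugs mass and burial depth into the logit and applies the inverse-logit:

```python
import math

def detection_probability(mass_g: float, depth_cm: float) -> float:
    """Probability that the detector finds a lead sinker of mass_g grams
    buried depth_cm below the surface, per the fitted logistic model."""
    logit = (-1.63
             + 4.20 * mass_g
             - 0.45 * depth_cm
             - 0.27 * mass_g * depth_cm
             + 0.0002 * depth_cm ** 2)
    return math.exp(logit) / (1 + math.exp(logit))  # inverse logit

# Sanity check against the abstract: a 0.9 g sinker at the surface
# (depth 0 cm) should be detected about 90% of the time.
print(round(detection_probability(0.9, 0.0), 2))  # -> 0.9
```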
Using a metal detector to determine lead sinker abundance in waterbird habitat
|
Skeletal muscles only pull in one direction. … For this reason they always come in pairs. When one muscle in a pair contracts, to bend a joint for example, its counterpart then contracts and pulls in the opposite direction to straighten the joint out again.
Which muscles don't work in pairs?
In an antagonistic muscle pair, as one muscle contracts, the other muscle relaxes or lengthens. The muscle that is contracting is called the agonist and the muscle that is relaxing or lengthening is called the antagonist.
Antagonistic muscle pairs.
- Pectoralis major; Latissimus dorsi
Which muscles work in pairs?
MUSCLES WORKING IN PAIRS
Muscles usually work in pairs or groups, e.g. the biceps flexes the elbow and the triceps extends it. This is called antagonistic muscle action. The working muscle is called the prime mover or agonist.
Do muscles work in pairs Yes or no?
Muscles are attached to bones by tendons and help them to move. … Therefore muscles have to work in pairs to move a joint. One muscle will contract and pull a joint one way and another muscle will contract and pull it the other.
Do smooth muscles work in pairs?
It is all done this way to produce smooth movement. Muscles work in pairs, and sometimes in groups of more than two, because this makes the movement smooth. The muscle that is making the move is called the prime mover, while another is called the antagonist and it resists the move.
What muscle is opposite bicep?
The triceps serve as an antagonist, or opposing, muscle of the biceps. Typically, the triceps are the bigger of the upper arm muscles. The biceps and triceps are each unique in their makeup and function.
What muscles are antagonists?
The muscle that is contracting is called the agonist and the muscle that is relaxing or lengthening is called the antagonist.
Antagonistic muscle pairs.
- Antagonistic pair: Biceps; triceps
- Movements produced: Flexion; extension
- Sport example: Chest pass in netball; badminton smash
What happens to biceps when arm is straightened?
The biceps and triceps act against one another to bend and straighten the elbow joint. To bend the elbow, the biceps contracts and the triceps relaxes. To straighten the elbow, the triceps contracts and the biceps relaxes.
Why heart is not joined to any bones?
Our heart is a muscle that pumps blood through our body; this muscle is not attached to bones and does not have tendons. The muscles attached to our bones are voluntary muscles: we have to think and decide to move them.
Which type of muscle never gets tired?
Cardiac muscle resists fatigue so well because it’s got more mitochondria than skeletal muscle. With so many power plants at its disposal, the heart doesn’t need to stop and chill out. It also has a steady supply of blood bringing it oxygen and nutrients.
Can bones move without muscles?
Joints make the skeleton flexible; without them, movement would be impossible. Joints allow our bodies to move in many ways, but it is the muscles, attached to the bones by tendons, that actually pull the bones to move them.
Why do muscles always work in antagonistic pairs?
Muscles work in antagonistic pairs since they can only shorten, causing movement in one direction. There needs to be another muscle that shortens in order to cause movement in the opposite direction.
Why is it important to exercise both muscles in a pair?
Since the antagonistic muscles work in synergy, it is important that both muscles are equally trained. … An imbalance of strength in one of the two muscles of the pair can cause muscle imbalances that then affect both the quality of movement and the flexibility and stability of the joint.
How many pairs of muscles are in the body?
There are around 650 skeletal muscles within the typical human body. Almost every muscle constitutes one part of a pair of identical bilateral muscles, found on both sides, resulting in approximately 320 pairs of muscles, as presented in this article.
How do muscles work in pairs Class 6?
Ans: The muscles work in pairs. When one of them contracts, the bone is pulled in that direction, the other muscle of the pair relaxes. To move the bone in the opposite direction, the relaxed muscle contracts to pull the bone towards its original position, while the first relaxes. A muscle can only pull.
Why do muscles work in pairs gizmo answer key?
Why must muscles work in pairs? Because muscle cells can only contract, not extend, skeletal muscles must work in pairs. While one muscle contracts, the other muscle in the pair relaxes to its original length. … once the muscle relaxes, its pair has to contract to bring it back to its original place.
|
This week we will use our core text, The Tiger Who Came to Tea, to inspire our writing. We will:
- read the story and answer questions about it
- look at how questions are used in the story and write some of our own
- learn how to change singular nouns to plurals by adding -s or -es - for example egg becomes eggs, sandwich becomes sandwiches
- write an invitation to an animal we would like to invite to tea
- look at the use of speech in the story
- add speech bubbles to illustrations
|
Diabetes diet - type 1
The American Diabetes Association and the American Dietetic Association have developed specific dietary guidelines for people with diabetes.
This article focuses on diet guidelines for people with type 1 diabetes.
Diet - diabetes - type 1; Type 1 diabetes diet
If you have type 1 diabetes, it is important to know how many carbohydrates you eat at a meal. This information helps you determine how much insulin you should take with your meal to maintain blood sugar (glucose) control.
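As a purely illustrative sketch of the carbohydrate-counting arithmetic (the insulin-to-carbohydrate ratio below is a made-up number; real ratios and rounding rules are prescribed individually by your care team):

```python
def mealtime_insulin_units(carbs_g: float, grams_per_unit: float) -> float:
    """Illustrative arithmetic only: estimated units of mealtime insulin
    for a given carbohydrate load. Actual dosing must follow the ratio
    set by your doctor or dietitian."""
    return carbs_g / grams_per_unit

# Hypothetical example: a meal with 60 g of carbohydrate and a
# hypothetical ratio of 1 unit of insulin per 10 g of carbohydrate.
print(mealtime_insulin_units(60, 10))  # -> 6.0 units
```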
The other two major nutrients, protein and fat, also have an effect on blood glucose levels, though it is not as rapid or great as carbohydrates.
A delicate balance of carbohydrate intake, insulin, and physical activity is necessary for the best blood sugar (glucose) levels. Eating carbohydrates increases your blood sugar (glucose) level. Exercise tends to decrease it (although not always). If the three factors are not in balance, you can have wide swings in blood sugar (glucose) levels.
CHILDREN AND DIABETES
Weight and growth patterns can help determine if a child with type 1 diabetes is getting enough nutrition.
Changes in eating habits and more physical activity help improve blood sugar (glucose) control. For children with diabetes, special occasions (like birthdays or Halloween) require additional planning because of the extra sweets. You may allow your child to eat sugary foods, but then have fewer carbohydrates during other parts of that day. For example, if a child eats birthday cake, Halloween candy, or other sweets, they should NOT have the usual daily amount of potatoes, pasta, or rice. This substitution helps keep calories and carbohydrates in better balance.
One of the most challenging aspects of managing diabetes is meal planning. Work closely with your doctor and dietitian to design a meal plan that maintains near-normal blood sugar (glucose) levels. The meal plan should give you or your child the proper amount of calories to maintain a healthy body weight.
The food you eat increases the amount of glucose in your blood. Insulin decreases blood sugar (glucose). By balancing food and insulin together, you can keep your blood sugar (glucose) within a normal range. Keep these points in mind:
- Your doctor or dietitian should review the types of food you or your child usually eats and build a meal plan from there. Insulin use should be a part of the meal plan. Understand how to time meals for when insulin will start to work in the body.
- Be consistent. Meals and snacks should be eaten at the same times each day. Do not skip meals and snacks. Keep the amount and types of food (carbohydrates, fats, and proteins) consistent from day to day.
- Learn how to read food labels to help plan your or your child’s carbohydrate intake.
- Use insulin at the same time each day, as directed by the doctor.
Monitor blood sugar (glucose) levels. The doctor will tell you if you need to adjust insulin doses based on blood sugar (glucose) levels and the amount of food eaten.
Having diabetes does not mean you or your child must completely give up any specific food, but it does change the kinds of foods one should eat routinely. Choose foods that keep blood sugar (glucose) levels in good control. Foods should also provide enough calories to maintain a healthy weight.
A registered dietitian can help you best decide how to balance your diet with carbohydrates, protein, and fat. Here are some general guidelines:
The amount of each type of food you should eat depends on your diet, your weight, how often you exercise, and other existing health risks. Everyone has individual needs, which is why you should work with your doctor and, possibly, a dietitian to develop a meal plan that works for you.
But there are some reliable general recommendations to guide you. The Diabetes Food Pyramid, which resembles the old USDA food guide pyramid, splits foods into six groups in a range of serving sizes. In the Diabetes Food Pyramid, food groups are based on carbohydrate and protein content instead of their food classification type. A person with diabetes should eat more of the foods in the bottom of the pyramid (grains, beans, vegetables) than those on the top (fats and sweets). This diet will help keep your heart and body systems healthy.
GRAINS, BEANS, AND STARCHY VEGETABLES
(6 or more servings a day)
Foods like bread, grains, beans, rice, pasta, and starchy vegetables are at the bottom of the pyramid because they should serve as the foundation of your diet. As a group, these foods are loaded with vitamins, minerals, fiber, and healthy carbohydrates.
It is important, however, to eat foods with plenty of fiber. Choose whole-grain foods such as whole-grain bread or crackers, tortillas, bran cereal, brown rice, or beans. Use whole-wheat or other whole-grain flours in cooking and baking. Choose low-fat breads, such as bagels, tortillas, English muffins, and pita bread.
VEGETABLES
(3 - 5 servings a day)
Choose fresh or frozen vegetables without added sauces, fats, or salt. You should opt for more dark green and deep yellow vegetables, such as spinach, broccoli, romaine, carrots, and peppers.
FRUITS
(2 - 4 servings a day)
Choose whole fruits more often than juices. Fruits have more fiber. Citrus fruits, such as oranges, grapefruits, and tangerines, are best. Drink fruit juices that do NOT have added sweeteners or syrups.
MILK AND DAIRY
(2 - 3 servings a day)
Choose low-fat or nonfat milk or yogurt. Yogurt has natural sugar in it, but it can also contain added sugar or artificial sweeteners. Yogurt with artificial sweeteners has fewer calories than yogurt with added sugar.
MEAT AND FISH
(2 - 3 servings a day)
Eat fish and poultry more often. Remove the skin from chicken and turkey. Select lean cuts of beef, veal, pork, or wild game. Trim all visible fat from meat. Bake, roast, broil, grill, or boil instead of frying.
FATS, ALCOHOL, AND SWEETS
In general, you should limit your intake of fatty foods, especially those high in saturated fat, such as hamburger, cheese, bacon, and butter.
If you choose to drink alcohol, limit the amount and have it with a meal. Check with your health care provider about a safe amount for you.
Sweets are high in fat and sugar, so keep portion sizes small. Other tips to avoid eating too many sweets:
- Ask for extra spoons and forks and split your dessert with others.
- Eat sweets that are sugar-free.
- Always ask for the small serving size.
You should also know how to read food labels, and consult them when making food decisions.
|
Bats are nocturnal, but some need sunlight to set their internal compass.
"Recent evidence suggests that bats can detect the geomagnetic field," wrote Max Planck Institute ornithologists Richard Holland, Ivailo Borissov and Bjorn Siemers in an article published March 29 in the Proceedings of the National Academy of Sciences. "We demonstrate that homing greater mouse-eared bats calibrate a magnetic compass with sunset cues."
Previously, Holland showed that interfering with the magnetic field around bats impaired their long-distance navigation abilities. Those findings suggested that while bats used echolocation for short-distance steering, they rely on some geomagnetic sense to guide nocturnal flights that take them dozens of miles from home. The details, however, were hazy.
In the new study, Holland's team captured 32 greater mouse-eared bats. Half of them were placed inside a pair of giant, coiled magnets that created a geomagnetic field misaligned with Earth's, temporarily scrambling their own geomagnetic sense. All were released in an unfamiliar location 15 miles from their home cave.
Bats that were captured at night flew home unerringly, regardless of what the researchers had done. They'd already set their compasses by the sun. But if the bats were captured and magnetically disoriented at twilight, when they would normally be flying around calibrating their compasses, they could no longer find their way home. The bats appear to use the twilight as a point of reference while setting their compasses for the rest of the night.
How the compass works is still a mystery. Some birds use sunset for navigational calibration, but the similarities likely end there. While birds' eyes contain geomagnetically sensitive molecules that are activated by photons, Holland has previously shown that bats don't have this system. Instead, some of their cells appear to be laden with magnetite.
Bats that fly only in the dead of night, such as vampire bats, could provide an interesting comparison, wrote the researchers.
"The cues used by the bats to indicate their position can only be speculated on at this stage," they wrote, noting that ornithologists have argued over the bird compass for decades. "For animals that occupy ecological niches where the sunset is rarely observed, this is a surprising finding."
Citation: "A nocturnal mammal, the greater mouse-eared bat, calibrates a magnetic compass by the sun." By Richard A. Holland, Ivailo Borissov, and Björn M. Siemers. Proceedings of the National Academy of Sciences, Vol. 107 No. 13, March 30, 2010.
|
“OOBLECK” or “GAK” is the term usually applied to the sticky, oozy, non-Newtonian fluid created by mixing together cornstarch and water. And playing with it really never gets old. This activity is a creative twist on typical oobleck play that allows kids to use the substance as both a canvas and a medium to create colorful works of art!
LEARN SCIENCE VOCABULARY:
States of Matter – There are 3 states of matter that everyone is familiar with: solids, liquids, and gases. They each have different characteristics. For example, solids keep their shapes, liquids flow and drip into containers, and gases expand to fill the volume of the room. (There is also a fourth state of matter called plasma, but it only occurs naturally inside super-hot stars.)
Non-Newtonian Fluid – Certain types of liquids that also have characteristics of solids: for example, cream becomes thicker with continuous stirring, or oobleck feels like a solid if you smack it. (Other examples of non-Newtonian fluids are honey and tomato sauce, which both get thinner as they are vigorously stirred.)
- Bowl and spoon for mixing
- Tray, lid, or plate to hold your oobleck canvas (I used the plastic lids that come with lunchmeat containers.)
- Washable paint and brushes
- Plain paper (optional)
HOW TO MAKE IT:
- Mix up your oobleck by combining 1 cup cornstarch with 3/4 cup water. This is a bit thicker than traditional oobleck recipes, but the paint will thin out your solution over time.
- Pour a thin layer of oobleck into your tray or lid.
- Provide your child with paint and a brush and watch what they create! They will have fun painting on the surface of the oobleck and also dragging the brush through it to swirl the colors. They can stick their fingers in the ooze and make handprints on the paper as well. I set the plain paper off to the side, and my son eventually began dipping his fingers into the colored oobleck and creating a drip painting on the paper. (This activity is a good way to practice letter writing skills with your preschooler as well!)
THE SCIENCE BEHIND IT:
Oobleck itself provides a great opportunity for toddlers and young children to learn through sensory play. They can let it drip through their fingers or chop it apart with a spoon. By introducing paints and brushes to oobleck exploration, we are adding the “ART” component to a classic science “STEM” activity. This gives kids a chance to be creative and manipulate the different substances to discover on their own. For example, my 3-year-old son was marveling at how the oobleck dried quickly on the paper and became hard and powdery while the paint remained wet. He used this new discovery as he designed his artwork using both mediums.
As more paint is added to the oobleck canvas, it begins to lose some of its non-Newtonian fluid characteristics and behaves more like a regular liquid. We experimented with this as we did handprints on our paper, first with plain oobleck, and then with the mixture as it got more and more diluted with paint.
|
The Origin of Life in Membraneless Protocells
How life arose from non-living chemicals more than 3.5 billion years ago on Earth is still an unanswered question. The RNA world hypothesis assumes that RNA biomolecules were key players during this time, as they carry genetic information and act as enzymes. However, one requirement for RNA activity is that a certain number of molecules are within close enough proximity to one another. This would be possible if RNA was contained within a compartment, such as membraneless microdroplets (coacervates). Researchers at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden and the Max Planck Institute of Biochemistry in Martinsried have shown for the first time that simple RNA is active within membraneless microdroplets, enabling a suitable environment for the beginning of life.
The RNA world hypothesis assumes that life originates from self-replicating RNA, a biomolecule which was present before the evolution of DNA and proteins. However, researchers assume that on early Earth, concentrations of RNA and its building blocks may have been too dilute for a reaction to take place. Therefore, the scattered RNA molecules needed to find a way to one another to create a reaction and start life. Suitable places for accumulating RNA could have been within compartments. Compartments can be formed with a membrane, such as the cell, or without a membrane, where molecules can exchange readily with their environment. Membraneless compartments can be formed by phase separation of oppositely charged molecules, a process that is similar to the separation of oil drops in water.
In their study, the researchers proved for the first time that RNA is active within such membraneless microdroplets, supporting the earlier hypothesis that coacervates act as protocells and could therefore be a precursor of the cell that exists today. The ability of coacervates to accumulate RNA would have helped to overcome the dilution problem of biomolecules and offered a suitable environment for reactions with each other. Furthermore, these membraneless droplets allow free transfer of RNA between the droplets. Björn Drobot, the first author of this study, explains: “One of the really exciting things is that we have shown that coacervates act as a controlled genetic transfer system, in which shorter RNA pieces can shuttle between droplets while longer pieces are trapped in its hosting microdroplet. In this way, these protocells (coacervates) have the ability to transfer genetic information between other protocells which would have been an important criterion for starting life.”
Those findings show that membraneless microdroplets are beneficial for a selective accumulation of RNA. Dora Tang, who led the project with Hannes Mutschler, points out: “It was hypothesized by a Russian scientist (Oparin) in the 1920s that coacervate droplets could have been the first compartments on earth and existed before cells with a membrane evolved. They provide a way for biomolecules to concentrate and create the first life on Earth. The study from my lab adds to a body of work from us and others where there is increasing evidence that coacervates are interesting systems for compartmentalization in origin of life studies as well as studies in modern biology and synthetic biology.”
|
Maize dwarf mosaic virus (MDMV) has been reported in most regions of the United States and in countries around the world. The disease is caused by one of two major viruses: sugarcane mosaic virus and maize dwarf mosaic virus.
About Dwarf Mosaic Virus in Corn
The disease may also affect a number of other plants, including oats, millet, sugarcane, and sorghum, all of which can also serve as host plants for the virus. However, Johnson grass is the primary culprit.
Maize dwarf mosaic virus is known by various names including European maize mosaic virus, Indian maize mosaic virus, and sorghum red stripe virus.
Symptoms of Dwarf Mosaic Virus in Corn
Plants with maize dwarf mosaic virus typically display small, discolored specks followed by yellow or pale green stripes or streaks running along the veins of young leaves. As temperatures rise, entire leaves may turn yellow. However, when nights are cool, affected plants display reddish blotches or streaks.
The corn plant may take on a bunchy, stunted appearance and usually won’t exceed a height of 3 feet (1 m.). Dwarf mosaic virus in corn may also result in root rot. Plants may be barren. If ears develop, they may be unusually small or may lack kernels.
Symptoms of infected Johnson grass are similar, with greenish-yellow or reddish-purple streaks running along the veins. Symptoms are most apparent on the top two or three leaves.
Treating Plants with Dwarf Mosaic Virus
Preventing maize dwarf mosaic virus is your best line of defense.
Plant resistant hybrid varieties.
Control Johnson grass as soon as it emerges. Encourage your neighbors to control the weed too; Johnson grass in the surrounding environment increases the risk of disease in your garden.
Check plants carefully after an aphid infestation. Spray aphids with insecticidal soap spray as soon as they appear and repeat as needed. Large crops or severe infestations may require use of a systemic insecticide.
|
During the fall, the leaves of deciduous trees change color and then fall off in preparation for the winter season. In early autumn, when the season's first cold temperatures arrive, the forest canopy throughout the New England region transforms into a kaleidoscope of colors--rich hues of yellows, reds, oranges and browns.
These two images were collected by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite. The top image shows the region of North America spanning from the Canadian provinces of Quebec (upper left) and New Brunswick (upper right) down through the states of Maine, New Hampshire, Vermont, and Massachusetts (bottom). MODIS acquired the top image on October 9, 2001, after the autumn change in foliage began. The bottom image was acquired September 12, 2001, just before the trees began to change color.
Temperate deciduous forests experience different seasons. Their leaves change color and fall off in autumn and winter, and grow back in the spring to serve as the forests' mechanism for absorbing sunlight and carbon dioxide (for photosynthesis) throughout the spring and summer. This adaptation allows plants to survive cold winters.
[Teachers: Help your students learn more about the Earth's different land ecosystems, visit the Earth Observatory's Mission: Biome on-line activity.]
Images courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA Goddard Space Flight Center
|
Circuit protection is a wise choice for anything subject to surges such as over-current, short circuits, dirty power, and overloads. Most of us have already seen a fuse since they can be found in cars, appliances, and thermostats. But did you know that there are current-limiting fuses in your smartphone, smartwatch, ebook readers, laptop, and anything with a rechargeable battery pack? Electronic devices that work off of DC power are subject to Electrostatic Discharge (ESD). ESD is that snappy charge that you get when you walk across a carpet in your socks on a winter day; low humidity and movement builds up static electricity that can discharge a high-voltage spark into a device when you plug in a USB cable (or any other I/O cable, for that matter). Fuses are placed in series between an external port and the circuitry they protect.
Some very small fuses go into these devices to protect against a current surge. Some have to be manually replaced when they blow; these are one-time fuses. A Positive Temperature Coefficient (PTC) fuse protects against overcurrent and then automatically resets itself since it’s not possible to replace a fuse or the battery inside many small devices anymore.
PTC fuses don’t blow into an open circuit like one-time fuses. PTCs suddenly turn into high-impedance (high resistance) devices, blocking current flow, when they sense a specified level of heat. For example, if a motor (a load) starts to draw too much current (an overload), it can start to overheat circuit traces or wires. A PTC fuse will block any surge in current to a source or load. PTCs are not meant for high-voltage DC uses like a Tesla car’s power train. But they are often used in consumer electronics, telecommunications, appliances, and many unreachable places in aerospace or avionics.
How does a PTC fuse work?
Most PTCs are thermistors: passive components made of a material that has a positive temperature coefficient. Too much current flowing through a device can cause it to heat up, which, like the elements in a toaster, raises the PTC’s temperature. It is the rise in temperature that causes the PTC to increase resistance exponentially and thus limit current flow through the PTC. The internal structure of a polymer PTC is made of conductive and non-conductive polymer composite particles. As the temperature increases, the crystalline structure changes, creating a very high resistance to current flow.
Under normal operation, a PTC only adds a tiny amount of resistance in series with a circuit; on the order of 1mΩ (Eaton) to 250Ω. An over-current condition will rapidly trip a PTC into a high-resistance state.
Depending on the PTC, after the PTC cools down — which can take a few seconds to several minutes — the PTC returns to its low resistance state.
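To make the trip behavior concrete, here is a toy model (our own sketch with made-up numbers, not a datasheet characteristic of any real part) of a PTC whose resistance stays near its hold value until a trip temperature and then rises exponentially:

```python
import math

def ptc_resistance_ohms(temp_c: float,
                        r_hold_ohms: float = 0.05,
                        trip_temp_c: float = 120.0,
                        steepness: float = 0.5) -> float:
    """Toy model: resistance holds near r_hold_ohms below the trip
    temperature, then rises exponentially with the excess temperature.
    All parameter values are illustrative, not from any datasheet."""
    excess_c = max(0.0, temp_c - trip_temp_c)
    return r_hold_ohms * math.exp(steepness * excess_c)

# Below the trip point the PTC adds only a tiny series resistance;
# a little past it, resistance has jumped by orders of magnitude.
for t in (25, 100, 125, 140):
    print(f"{t:>3} degC -> {ptc_resistance_ohms(t):.3g} ohm")
```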
Many PTC fuses are available from several manufacturers using proprietary materials under different trade names. As of this writing, Eaton makes PolyTron™ PTC resettable fuses, Bourns® makes Multifuse PTC fuses, and Littelfuse makes Poly-fuse Polymeric PTCs and PolySwitch® (formerly a TE Connectivity trade name) PTCs. Another term often used is Polymeric PTC fuse or Polymer PPTC fuse. (There’s a difference between PTCs used as fuses versus PTCs used in heaters, however, here we discuss only PTCs as fuses.)
Selecting the right fuse
When you start to look for a fuse for your design, you’ll need to know most of the following information to narrow down the parameters in your selection:
- Ease of service (e.g., resettable or one-time fuse?)
- AC (RMS) or DC voltage rating
- Current under full load
- The maximum available current if there’s ever a short circuit
- Form factor, mounting, and space requirements (e.g., radial, SMD, through-hole mounting, etc.)
- Ambient temperatures under all operating circumstances (e.g., if located in a server room, will the server room AC never break down?)
- Type of mounting required
- Expected changes in current loads (e.g., inrush current when starting a motor)
- Any additional requirements (e.g., electrical code, UL, AEC-Q200 automotive, or other safety requirements for your application)
Advice for selecting any component
Purchase only authentic or genuine components from an authorized distributor to prevent using counterfeit parts that don’t live up to specifications. Look on the vendor’s website for whether that vendor is authorized (e.g., vetted, certified) by the manufacturer to sell that manufacturer’s parts. (There is a “chain of custody” that’s followed with an authorized seller or distributor.) Legitimate distributors can be found on a site operated by the Electronic Components Industry Association (ECIA) at www.eciaauthorized.com.
|
How can students learn about abstraction by creating a movie scene? Or make an interactive map using lists? You'll learn (and do it yourself) in this course! This class teaches the concepts of abstraction (methods and parameters) and lists. For each concept, we'll start by helping you connect real-world experiences you are already familiar with to the programming concept you are about to learn. Next, through a cognitively scaffolded process we'll engage you in developing your fluency with problem solving with abstraction and lists in a way that keeps frustration at a minimum. Along the way you will learn about the common challenges or "bugs" students have with these concepts as well as ways to help them find and fix those bugs. You'll also be guided in running classroom discussions to help students develop deeper understanding of these concepts. Finally, you'll learn about the importance and logistics of assigning creative, student-designed programming projects. Additionally, you will create a personal plan for increasing your skills in supporting a culturally responsive learning environment in your classroom.
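As a quick illustration of those two concepts together (a sketch of our own in Python; the course itself may use a different, block-based programming environment):

```python
def describe_scene(character: str, action: str) -> str:
    """Abstraction: one reusable method; the parameters supply the details."""
    return f"{character} {action}."

# A list holds the whole cast, so one loop can drive an entire movie scene.
cast = ["The hero", "The sidekick", "The robot"]
for character in cast:
    print(describe_scene(character, "enters the scene"))
```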
|
Slavery in the Caribbean
Demand for slaves to cultivate sugarcane and other crops caused what came to be known as the triangle trade. Ships leaving Europe first stopped in Africa where they traded weapons, ammunition, metal, liquor, and cloth for captives taken in wars or raids. The ships then traveled to America, where slaves were exchanged for sugar, rum, salt, and other island products. The ships returned home loaded with products popular with the European people, and ready to begin their journey again.
An estimated 8 to 15 million Africans reached the Americas from the 16th through the 19th century. Only the youngest and healthiest people were taken for what was called the middle passage of the triangle trade, partly because they would be worth more in America, and partly because they were the most likely to reach their destination alive. Conditions aboard the ship were dreadful. Slaves were jammed into the hull and chained to one another to prevent revolts; as many as one in five passengers did not survive the journey. When one of the enslaved people was stricken with dysentery or smallpox, they were cast overboard.
Those who survived the middle passage faced more abuses on the plantations. Many of the plantation owners had returned to Europe, leaving their holdings in America to be managed by overseers who were often unstable or unsavory. Families were split up, and the Africans were not allowed to learn to read or write. African men, women, and children were forced to work with little to eat or drink.
The African slave population quickly began to outnumber the Europeans and Native Americans. The proportion of slaves ranged from about one third in Cuba to more than ninety percent in many of the islands. Slave rebellions were common. As slave rebellions became more frequent, European investors lost money. The costs of maintaining slavery grew higher when the European governments sent in armed forces to quell the revolts.
Many Europeans began to pressure their governments to abolish slavery. The first organized opposition to slavery came in 1724 from the Quakers, a Christian sect also known as the Society of Friends. Great Britain outlawed slavery in all of its territories in 1833, but the practice continued for almost fifty years on some of the islands of the Caribbean.
Once slavery was abolished, the plantation owners hired hundreds of thousands of people from India and other places in Asia. In Trinidad, about forty percent of the population is Asian.
Timeline of the Abolition of Slavery
|
- Topic: Demonstrative pronouns – this, that, those, these and School Supplies
- Vocabulary: crayons, pencils, ruler, eraser, draw, write, pen, sharpener, notebook, pencil box, backpack
- Grammar: In this lesson, we use contractions when asking and answering questions. We also use demonstrative pronouns (showing words) in sentences and questions.
When asking about things that are near or far away, we use demonstrative pronouns.
Demonstrative pronouns singular
- This: What’s this? (for singular objects that are near)
- That: What’s that? (for singular objects that are far)
Demonstrative pronouns plural
- These: What are these? (for plural objects that are near)
- Those: What are those? (for plural objects that are at a distance)
|
In the search for extraterrestrial life forms, intelligent or not, the next destinations Earthlings have their eyes on are Europa, Titan, Ganymede, Callisto, and (possibly further out in the future) Enceladus — but there is just one thing that might get in the way: Every single one of them is frozen.
What could, at least hypothetically, survive on a world that doesn’t seem too different from the icy wasteland where that abomination in The Thing emerged? Something might be lurking underneath all that ice if you ask a team of researchers from UC Berkeley. These moons are thought to be hiding vast salty oceans beneath the surface, and salty water has a lower freezing point than pure water, meaning it can stay liquid at temperatures that would freeze water without any salt. These cold but possibly habitable oceans could have spawned life.
When oceans are able to stay liquid, they remain stable, and stable oceans mean a higher chance for habitability. There could be many types of salts in the waters of the faraway moons that spacecraft will soon explore. The researchers tested five salts at pressures up to three thousand times Earth’s atmospheric pressure, and they recently published their findings in Cell Reports Physical Science.
“The low-temperature equilibria of aqueous solutions play a fundamental role in fields from cryopreservation to astrobiology and beyond and are particularly integral to investigation of icy moons (e.g., Europa, Ganymede, Titan, Enceladus), prime candidates to find habitable environments in our solar system,” the researchers said in the study.
Spacecraft may not have landed on Europa or any other frozen moons yet, but they have taken measurements of magnetic fields, surface composition, and geological features on the surface that suggest the water beneath is salty. They have even hinted at the existence of cryovolcanoes. Eutectoids are made of substances that freeze at one temperature and melt at another. If there is really salt water in the depths of these moons, salt will cause the liquid to freeze at a temperature lower than the freezing point of pure water. Something could swim in there.
What the research team wanted to find out was the lowest temperature at which salty water could still remain liquid under extreme pressure. The experiment actually started as a quest to figure out methods of cryopreservation for food storage and organs needed for transplants, but it was easily adapted to give an idea of what can be expected in alien oceans. Of the salts that might exist on Europa, Titan, or Ganymede, the sodium chloride found in seas on Earth is believed to be one; magnesium sulfate and other sodium salts may be present as well. If anything lives on these bodies, it might have evolved to tolerate these.
Until we find out whether anything actually lives in the oceans of these mysterious moons, we only have life on our own planet to go off of. There are things that thrive in the Mariana Trench despite the intense pressure. The pressure in seas such as Europa’s is comparable to much shallower waters. Zombie worms, vampire squid, and other creatures that could pass for aliens can survive at these depths.
There are still unanswered questions. What could survive in salts other than NaCl? Also, the deepest place on Earth seems positively warm compared to the unexplored oceans in space. The average temperature of the Mariana Trench is about 34-39 degrees Fahrenheit; compare that with the far colder waters expected beneath the ice of these moons. It is possible that there are hydrothermal vents deep inside Europa, Titan, Ganymede, and Callisto that gush water heated by magma in the mantle. Right now, that can only be guessed at.
We’re going to have to wait for missions to touch down on these moons if we really want to find out what may or may not be under the ice. ESA’s JUICE (Jupiter Icy Moons Explorer) will be launching in 2023, with NASA’s Europa Clipper in 2024. NASA is also behind the Dragonfly mission that will head for Titan in 2027. Maybe there is an alien anglerfish somewhere?
|
Why is meningitis so dangerous? - Melvin Sanicas
Meningitis is the inflammation of the meninges, the protective membranes that cover the brain and spinal cord. Meningitis may be caused by a fungus, bacteria, or virus. Viral meningitis is more common, but is not usually life-threatening. Fungal meningitis is rare and only happens in people with weakened immune systems. Bacterial meningitis, on the other hand, is serious and can be life-threatening.
The 3 most common causes of bacterial meningitis are Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae—all respiratory pathogens spread from person to person through mucus and saliva expelled when a person sneezes, coughs, talks or laughs. Once acquired, bacteria can colonize the nasal cavity and the throat, including the base of the tongue, tonsils, and soft palate—this is known as pharyngeal carriage. From there, bacteria may cross the membrane that lines various cavities in the body and enter the blood.
While in the blood, the bacteria can reach the meninges, thereby causing inflammation of the brain and spinal cord. A headache is an early warning sign. When there is swelling in the brain, the senses go haywire. Swelling of the temporal lobe can cause ringing in the ears (tinnitus) or partial hearing loss; swelling of the occipital lobe may cause light sensitivity. Swelling of the medulla oblongata causes nausea and vomiting.
A stiff neck and back are common too. In severe cases, spasms of the muscles cause backward arching of the head, neck, and spine (a state called opisthotonus). Babies and young children are more likely to experience opisthotonus. A baby with meningitis may produce a high-pitched scream when you try to pick them up. For infants, a tight or bulging fontanel (the soft spot on top of a baby’s head) is a sign of inflammation of the brain. Excessive sleepiness is also a common symptom, and it may be hard to wake a sleeping child.
Bleeding also occurs under the skin and starts off looking like a mild rash. As the infection worsens, the rash spreads and gets darker, eventually looking like large bruises. The “glass test” is used to check for meningitis: if you press a drinking glass against an ordinary rash, the rash fades away, but if it’s meningitis, the rash can still be seen through the glass. The glass test is not 100% accurate, so if the other symptoms described are also present, seek medical attention immediately.
Bacterial meningitis is common in regions such as the “meningitis belt” in Sub-Saharan Africa. People traveling to or residing in these areas are at high risk of getting infected.
Outbreaks are most likely to happen in places where people live close to each other, including college dorms or military barracks.
Fortunately, the most serious forms of bacterial meningitis can be prevented with the following vaccines: (1) Haemophilus influenzae type b (Hib) vaccine, (2) Pneumococcal vaccine, (3) Meningococcal conjugate vaccine. Getting vaccinated against measles, mumps, rubella, and chickenpox can help prevent diseases that can lead to viral meningitis. New vaccines are being developed to protect against other common causes of meningitis. Since 2010, a special vaccine called MenAfriVac has been designed for use throughout the African “meningitis belt.”
Learn the symptoms to protect yourself and your loved ones.
|
The mixed forest is, as its name suggests, a plant formation in which trees predominate over other types of plants. In the specific case of the mixed forest, the predominant trees include both gymnosperms and angiosperms, which is why large broad-leaved, bushy, deciduous trees are combined with evergreen conifers. Regarding the animals found in this biome, we can mention wolves, some bears and felines, among many others.
If you want to learn more about the mixed forest, its characteristics, and its flora and fauna, join us in this interesting AgroCorrn article.
Mixed forest characteristics: types and climate
As we have just defined, among the main characteristics of the mixed forest it stands out that there are both deciduous and evergreen trees. In addition, it has a canopy, that is, an upper layer that reaches heights of between 25 and 45 meters, although in some areas there are mixed forests of lesser or greater height.
Different types of mixed forest can be differentiated according to the location in which they are found. They are as follows:
- Transitional with the taiga: these are found in the northern United States, Canada and Europe. In these areas, the mixed forest is a transition between the temperate deciduous forests to the south and the taiga to the north. In Asia it also acts as a transition between the monsoon forest and the taiga, and presents different characteristics and greater complexity, such as the presence of lianas.
- Mixed temperate rainforest: These formations can be found in southern New Zealand and some parts of eastern Japan and China, as well as southern Chile and the northwestern Pacific coast of North America. They show very high levels of humidity, with rainfall that reaches over 8,500 mm per year.
- Mediterranean mixed forest: located in the Mediterranean area of Europe and in the Middle East, these forests have adapted to the sometimes prolonged droughts of this climate in the warm months.
- Mixed transition forest with Central American pines: they are located in Central America and Mexico, where the evergreen broadleaf forest meets the Central American conifers.
- Transitional with Araucarias and Podocarpáceas: these can be found in Argentina and Chile, as well as in small extensions of New Zealand.
Mixed forest climate
We can find 3 main climates in this type of forest:
- Mediterranean climate: here we find dry, hot summers with little rainfall, and mild, humid winters. The average temperature is around 20ºC and the transitional seasons tend to be warm or temperate.
- Oceanic climate: these are climates that are temperate due to the proximity of large masses of ocean or sea water. They are characterized by high humidity, an attenuated temperature variation between day and night and average temperatures of between 0ºC and 22ºC.
- Humid continental climate: of the three climates, this is the one that reaches the lowest average temperatures, which can drop to -10 ºC. It is a climate with precipitation throughout the year, in the form of rain in the warm months and snow in the cold months.
Mixed forest flora
Given the great difference in the areas in which we can find these forests, the diversity of plants in the mixed forest is also very great. Therefore, it is necessary to differentiate according to the main locations.
With regard to gymnosperm plants, in the northern hemisphere we find, above all, members of the Pinaceae and Cupressaceae families. In Japan we see members of the Podocarpaceae, and in California we cannot ignore the famous Sequoia sempervirens and the Douglas fir. In the Mediterranean areas there are Pinus sylvestris, Juniperus thurifera and Pinus nigra. In the southern hemisphere the Podocarpaceae and Araucariaceae predominate. Here you can discover more about Araucarias or coniferous trees: types, names and characteristics. In addition, to learn more about Gymnosperm Plants: what they are, characteristics and examples, we invite you to read this other post.
Among angiosperm plants the diversity is even greater, which is why we mention some species according to their location.
Northern and central Europe and North America
- Quercus robur
- Fagus sylvatica
- Betula spp.
- Carpinus betulus
- Quercus pyrenaica
- Quercus suber
- Pistacia lentiscus
- Arbutus unedo
Mixed forest flora in Asia
- Quercus acutissima
- Quercus dentata
- Liquidambar formosana
- Pistacia chinensis
- Castanea japonica
- Albizia macrophylla
Mixed forest plants in Oceania
- Atherosperma moschatum
- Acacia melanoxylon
Expand your knowledge about these mixed forest plants by clicking on the link to this other AgroCorrn article on Angiosperm Plants: what they are, characteristics and examples.
Mixed forest fauna
Again, given the great diversity of areas in which these forests are located, it is necessary to distinguish between them to talk about the animals of the mixed forest. Even so, we can say that, in general, these forests are home to wolves (Canis lupus) and different species of cats and bears.
- Black bears
- Canadian lynx
- Patagonian skunks
- Black-necked swans
Mixed forest fauna in Europe
Mixed forest animals in Asia
It should be noted that some subspecies of Asian tigers and bears are disappearing from these biomes.
- Barbary Leopard
- Barbary Macaque
- Barbary Deer
|
Tinnitus is the perception of sound in the head or ear that has no external source. The sound may be a hissing, humming, whooshing or roaring sound. It may be constant or intermittent, and it may vary in volume.
It is not a disease or illness; it is a symptom generated by the rewiring or reorganization of the hearing system.
There are two types of tinnitus:
• Subjective tinnitus-sounds in the head or ear perceived only by the patient. This is the majority of cases.
• Objective tinnitus-sounds in the head or ear that are perceived by both the patient and other people. These are the minority of cases. The sounds are usually caused by internal body processes like blood flow or musculoskeletal movement.
Tinnitus can be annoying; however, in the vast majority of patients it is not a sign of a serious problem. There are ways to mask and adapt to the symptoms to minimize the impact of tinnitus on daily life.
Tinnitus is common and can be present in any age group, from very young children to the elderly.
The most common and prevalent risk factor is exposure to loud sounds and noise, for example:
1. People exposed to loud music
2. People exposed to loud machinery
3. People exposed to loud bangs
4. People who listen to their headphones at high volume
It’s difficult to pinpoint the exact cause of tinnitus, but it’s generally agreed that it results from some type of change, either mental or physical, but not necessarily related to the ear.
Most commonly, tinnitus is caused by damage to cells in the inner ear. These cells, when damaged or stressed, change the signal sent to the brain. This then produces a sound heard only by the patient.
Please note that tinnitus is not a disease by itself and neither does its presence necessarily indicate one of the causes below.
1. Hearing loss
a. Age-related hearing loss (presbyacusis)
b. Noise-induced hearing loss
2. Exposure to loud sounds
3. Stress and anxiety
4. Ear infections
5. Ear wax build up (cerumen impaction)
6. Ménière's disease (increased fluid in the inner ear)
7. Glue ear (otitis media with effusion)
8. Otosclerosis (stiffening of bones in the middle ear)
9. Perforated ear drum
10. Head and neck trauma
11. Temporomandibular joint disorder (disease of the hinge of the jaw)
12. Sinus pressure
13. Barometric trauma (diving, snorkelling, flying)
14. Certain medications
15. Others (cardiovascular problems, neurologic disease, genetic and inherited ear disorders, metabolic conditions, autoimmune disease and tumours specifically acoustic neuroma)
The most common form is a high-pitched tone described as ringing. Other symptoms can include a pulsation that is rushing or humming and varies in intensity with exercise or a change of body position, or even beats in time with the patient's heartbeat. A clicking sound may indicate a nerve or muscle abnormality.
A visit to the ENT doctor or the audiologist is warranted. A full history and physical examination of the ear will be performed. A hearing test might be ordered. Other tests, like brain imaging via MRI or CT scan, might be helpful, depending on the history and physical examination.
What are the treatment options for tinnitus?
There is currently no scientifically proven cure for most cases of chronic tinnitus.
The main aim for all currently-available tinnitus treatment options is to lower the perceived burden of tinnitus, allowing the patient to live a more comfortable, unencumbered, and content life.
Manage the underlying cause:
• Hearing loss-if tinnitus is accompanied by hearing loss, then correcting it might be helpful.
• Depression-depression is common in patients with tinnitus and safe and effective treatment options are available for major depressive episodes.
• Insomnia-difficulty sleeping due to tinnitus might be treated by medications or behavioural means.
Behavioural therapies help patients live with longstanding tinnitus.
• Tinnitus retraining therapy (TRT)- This involves retraining the subconscious part of the auditory system to accept the sounds associated with tinnitus as normal, natural sounds rather than annoying sounds. The goal is for the person to become unaware of their tinnitus unless they consciously choose to focus on it.
• Masking-using a real external sound to counteract the perception of and reaction to tinnitus.
• Biofeedback and stress reduction-biofeedback is a relaxation technique that helps you control inner body functions like heartbeat or breathing rate. It helps to manage tinnitus-related stress by changing your reaction to it.
• Cognitive behavioural therapy (CBT)-you learn to control your emotional response and therefore dissociate tinnitus from painful negative behavioural responses. It involves using coping strategies, distraction skills, and relaxation techniques.
Other therapies have been studied, although none has been found to be more effective than placebo.
• Vitamins and minerals-B vitamins, Zinc and Copper
• Herbal medications-e.g. Ginkgo biloba
• Exercise-30 minutes a day, 3 days a week.
What is the prognosis of tinnitus?
The impact of tinnitus on daily life depends not only on the perceived sound but also on the reaction to it. Most long-term tinnitus is unlikely to go away. However, with coping strategies, it might become less bothersome.
How does tinnitus affect my general wellbeing?
The content on the Nairobi ENT website is not intended nor recommended as a substitute for medical advice, diagnosis, or treatment. Always seek the advice of your own physician or other qualified health care professional regarding any medical questions or conditions.
|
Pollen from maize or sweetcorn is known to be an important food source for the larvae of Anopheles arabiensis, and consequently cultivation of the crop can increase malaria transmission in endemic areas. However, maize is an economically important food crop and farmers cannot just stop growing the crop because of the malaria risk. Therefore, it is important to understand how the interaction between maize and the mosquito works…and then disrupt it.
Mosquitoes take olfactory cues from plants and microbes when determining where they lay their eggs (oviposition). In 2016, Betelehem Wondwosen and colleagues identified volatiles produced by rice plants that are involved in attracting gravid (egg-carrying) mosquitoes to rice fields. They found rice headspace (the air immediately above the rice plant) extracts, which included compounds such as ß-caryophyllene, decanal, sulcatone and limonene, attracted mosquitoes but only at medium doses. High doses of the headspace extract put the mosquitoes off.
In their paper published in Malaria Journal this week, Betelehem Wondwosen and colleagues from Ethiopia and Sweden looked specifically at how gravid mosquitoes detect where the maize fields are. They found that the gravid mosquitoes took their oviposition cues from headspace volatiles produced by the pollen: the volatiles attract gravid mosquitoes by smell and then stimulate them to lay their eggs. They identified five biological compounds in the headspace extract: benzaldehyde, nonanal, p-cymene, limonene and α-pinene.
Now that we know what the mosquitoes use to home in on breeding sites, we can use this information against the mosquito. The five-compound extract identified by Betelehem Wondwosen in the Malaria Journal paper can be synthetically produced. At the very least, using the synthetic odours would confuse gravid mosquitoes so that they don't lay their eggs in maize fields (and a ready source of food for their larvae), but potentially the synthetic compound can be used in malaria elimination programmes, for example to lure gravid mosquitoes to traps. What's more, farmers can continue to grow maize for food.
|
Bacillus cereus and other Bacillus species
This pathogen can cause two types of foodborne illness: the diarrhoeal type and the emetic or vomiting type. The illnesses are generally mild, but unpleasant nevertheless. Symptoms can be more severe for young, elderly and immunocompromised consumers.
The diarrhoeal type of illness usually occurs within 8 to 16 hours of eating the food and lasts for about 24 hours. Foods involved include starchy vegetables, meat products, cereal foods, sauces, puddings and spices. A much shorter time is required for symptoms of the vomiting type to appear (30 minutes to five hours). One of the most common foods associated with the vomiting type is rice. Cooked rice should always be cooled and stored in the refrigerator.
Bacillus cereus forms heat-resistant spores that are widespread in our environment and common in soil and dust. The spores are dormant, but they germinate in warm, moist and nutritious environments, producing cells that can grow; given enough time, some of these cells produce a heat-resistant toxin.
Why does food poisoning occur? The two types of illness mentioned above come about as follows. If spores in dried foods (e.g. rice, dried spices, powders) survive cooking and the cooked food is cooled slowly and left unrefrigerated, the spores germinate, cells grow to large numbers and, in some cases, the cells produce a toxin.
- Consuming large numbers of cells in the food causes the diarrhoeal type of illness.
- Consuming the toxin formed in the food causes the vomiting type illness. Reheating or cooking the food will not destroy this toxin as it is resistant to heat. Although this bacterium can grow and produce toxin at refrigeration temperatures, it does so much more slowly than at room temperature. Precooked food should not be stored in the refrigerator for more than two to three days.
To prevent Bacillus cereus food poisoning, store, handle and cool food safely.
|
International Mother Language Day was celebrated for the first time in 2000 following the 1999 UNESCO initiative. The aim is to preserve cultural and linguistic diversity as well as multilingualism.
According to UNESCO, differences in language and culture must be maintained in order to promote tolerance and respect. Every mother tongue contains unique ways of thinking and expressing itself and provides access to the special culture and traditions of a language community. Every two weeks, a language and its cultural and intellectual heritage disappear, leaving at least 43% of the approximately 6,000 to 7,000 languages spoken in the world threatened. A similar proportion of people's known possibilities for structuring their thoughts and their world are thus in danger, and if they disappear, …
|
We invite you to join us each week for Did you know? articles which adhere to preselected themes. Knowledge and appreciation of these subjects helps to preserve, diffuse, and promote elements of our common heritage of the Silk Roads.
Situated on the eastern coast of the Arabian Peninsula, facing the Arabian Sea and the Indian Ocean, Oman held a crucial position along the maritime Silk Routes for centuries. Thanks to their outstanding navigational knowledge, the inhabitants of Oman were excellent sailors and used the maritime routes from at least the third millennium B.C. This owed much to the secure maritime routes and to Oman's location at the crossroads of South East Asia, the Middle East and Africa, with long coasts stretching from the Strait of Hormuz, at the junction of the Persian Gulf and the Arabian Sea, to the Indian Ocean coasts. In addition, the people of this coastal region were great shipbuilders, mainly because of the timber they imported from India and sometimes re-exported to the Gulf of Aden region through these maritime routes.
There is evidence of boats that embarked in the Dhofar region, in the southern Arabian Peninsula, and headed to Egypt around 3000 B.C. During the reigns of the Egyptian Pharaohs, ships from the Arabian Peninsula transported goods from Dhofar to Egypt, including frankincense, used at the time for perfumes, for the embalming of Egyptian mummies, and for religious purposes. Furthermore, according to the Bible, the frankincense, myrrh and gold carried by the Three Wise Men to the holy child came from Dhofar. Consequently, the region became an important meeting place for merchants of diverse horizons, who bought these goods and continued their journey towards South East Asia.
In this way, goods and merchants travelling from the Far East to the Middle East and Africa – with final stops at Basra in Iraq and Alexandria in Egypt, respectively – had to stop in Sohar or Muscat (in the northern part). The region was advantageous for sailors because of its suitable winds, and it also allowed merchants to take on fresh supplies.
Moreover, by the mid-9th century, Omani vessels from the Arabian Peninsula had begun sailing towards southern China. The Chinese port of Quanzhou (Zaitun) was one of the major destinations of Omani sailors, and it still preserves evidence of these exchanges, especially from the 14th century. Furthermore, Omani merchants played an important role in the expansion of Islam towards South East Asia.
On the other hand, archaeological evidence such as silk, ceramics, ivory and textiles found in Sohar shows a Chinese presence in the Arabian Peninsula, and there is evidence that Omani ships carried such products from China to the Arabian Peninsula by the 4th century A.D. Sohar was therefore at the heart of East–West trade, and this powerful position made it one of the most prosperous cities of the region.
Through these maritime routes, boats from the Arabian Peninsula also reached East Africa. Indeed, sailors used to carry East Asian goods to these lands, and some of them established commercial settlements and lived in the region. Zanzibar Island, in modern Tanzania, preserves outstanding traces of these centuries of interaction between the Arabian Peninsula and Africa.
To conclude, Oman is an example of how different civilisations and their cultural elements moved from one place to another through trade. Travellers would settle in other lands, living amongst and mingling with the local people in different ways, including intercultural marriage, resulting in cultural exchanges and unions. The maritime trade routes were definitely a junction between different worlds.
|
Continuous forest cover
Before people arrived, more than 80% of New Zealand was covered in dense forest. Pollen and charcoal records from over 150 lake, swamp or peat bog sediment core samples give a clear picture of the country’s vegetation and fire history since the end of the last ice age, about 14,000 years ago.
Only small, occasional fires occurred in the forests. These are shown by minor, temporary declines in pollen from tall forest and shrub species, along with increases in charcoal, and in spores and pollen from bracken, grasses and other pioneer species (the first plants to grow in a cleared area).
Some scientists believe that these minor bracken and charcoal peaks from 100 CE are evidence of human-lit fires, and that there were people in New Zealand before the 13th century. However, there are equally convincing natural explanations, including lightning strikes after droughts and volcanic eruptions. There is no other supporting archaeological evidence for human occupation.
Deforestation by fire
Sediment records show a huge increase in charcoal and bracken spores around the 13th and 14th centuries. At the same time, there was a massive decline in pollen from forest trees, marking a striking and devastating change: up to 40% of the forest was burnt within 200 years of Māori settling in New Zealand. Radiocarbon dating shows that deforestation began at the same time throughout the country. There are also many archaeological sites dating from this time.
The forests in the drier eastern regions burnt rapidly, and were cleared quickly and completely. In wetter or mountainous areas, clearance occurred later and was more piecemeal.
In most sites, charcoal continues to appear after the first forest clearance, suggesting that Māori used fire to stop tall forest and scrub from regenerating.
Purpose of burning
Reasons for clearing the forest included opening up the landscape to make it more habitable. Crops could be grown, and bracken fern (Pteridium esculentum) was encouraged for its edible starchy rhizomes. Burning also kept tracks clear and made travel easier. The first forested areas to be cleared must have been easy to burn, particularly during or after a drought.
Swamp and lake sediments
Pollen records from swamp and lake sediments show major changes in the plants that grew around wetland edges. Raupō (bulrush, Typha orientalis) became more abundant around swamps, because of increased nutrients and water flow. Māori used the pollen from raupō flower spikes to make cakes, so they may have encouraged its growth by burning lake-edge vegetation.
Erosion did not always follow deforestation, even in areas that are now very erosion-prone. The dense bracken with its network of rhizomes, and the tree stumps that remained after burning, would have protected against the landslides and soil erosion caused by heavy rain.
|
A recent study on mitochondrial DNA revealed that the female line of Ashkenazi Jewish ancestry closely resembles that of Southern and Western Europe, rather than the ancient Near East, as many scholars proposed in the past. Ashkenazim, a Jewish group who migrated to Central and Eastern Europe, make up the majority of the world's Jewish population today. A recent article in Nature Communications discusses the results of the mtDNA tests. The article, written by a team of scientists led by the University of Huddersfield's Martin B. Richards, includes the following in its abstract:
Like Judaism, mitochondrial DNA is passed along the maternal line. Its variation in the Ashkenazim is highly distinctive, with four major and numerous minor founders … we show that all four major founders, ~40% of Ashkenazi mtDNA variation, have ancestry in prehistoric Europe, rather than the Near East or Caucasus. Furthermore, most of the remaining minor founders share a similar deep European ancestry. Thus the great majority of Ashkenazi maternal lineages were not brought from the Levant, as commonly supposed, nor recruited in the Caucasus, as sometimes suggested, but assimilated within Europe. These results point to a significant role for the conversion of women in the formation of Ashkenazi communities, and provide the foundation for a detailed reconstruction of Ashkenazi genealogical history.
Earlier DNA tests performed on male chromosomes in Jewish communities generally reveal Near Eastern DNA patterns. Population migration and conversion to Judaism may have led to the growth of the Ashkenazi population. Ashkenazi Jewish ancestry may, in fact, stem from the Jewish community of the early Roman Empire. The authors write that "a substantial Jewish community was present in Rome from at least the mid-second century BCE, maintaining links to Jerusalem and numbering 30,000–50,000 by the first half of the first century C.E. By the end of the first millennium CE, Ashkenazi communities were historically visible along the Rhine valley in Germany."
Read more in Nature Communications.
Related Content in Bible History Daily
Crete’s Minoan civilization is Europe’s first great Bronze Age society. A new DNA study suggests that the Minoans were ethnically European, rather than North African as suggested by Minoan archaeologist Arthur Evans.
A DNA study that compared the genetic makeup of Jewish populations from around the world with African populations has found that modern Jews can attribute about 3 to 5 percent of their ancestry to sub-Saharan Africans.
|
Concept 19 The DNA molecule is shaped like a twisted ladder.
Earlier work had shown that DNA is composed of building blocks called nucleotides consisting of a deoxyribose sugar, a phosphate group, and one of four nitrogen bases — adenine (A), thymine (T), guanine (G), and cytosine (C). Phosphates and sugars of adjacent nucleotides link to form a long polymer. Other key experiments showed that the ratios of A-to-T and G-to-C are constant in all living things. X-ray crystallography provided the final clue that the DNA molecule is a double helix, shaped like a twisted ladder.
In 1953, the race to determine how these pieces fit together in a three-dimensional structure was won by James Watson and Francis Crick at the Cavendish Laboratory in Cambridge, England. They showed that alternating deoxyribose and phosphate molecules form the twisted uprights of the DNA ladder. The rungs of the ladder are formed by complementary pairs of nitrogen bases — A always paired with T and G always paired with C.
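To make the pairing rule concrete, here is a minimal Python sketch; the example sequence is arbitrary and not taken from the experiments described above.

```python
# Complementary base pairing in the double helix: A pairs with T, G pairs with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand for a DNA sequence."""
    return "".join(PAIRS[base] for base in strand)

strand = "ATGC"              # arbitrary example sequence
paired = complement(strand)  # -> "TACG"

# The constant A-to-T and G-to-C ratios follow directly: taking both strands
# together, the count of A equals T and the count of G equals C.
both = strand + paired
assert both.count("A") == both.count("T") and both.count("G") == both.count("C")
print(paired)
```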
|
Trace and Color: Pumpkin
In this fall-themed coloring instructional activity, students color the picture of a pumpkin and then practice printing the word pumpkin by tracing the dotted lines.
Take Five: Writing a Color-Coded Paragraph
Use a traffic light to model a very basic paragraph plan. The Go, or topic sentence, is written in green and expresses an opinion about the topic. Information that supports the opinion of the Go sentence is written in yellow and the...
K - 5th English Language Arts CCSS: Adaptable
Preschool Alphabet Coloring Book with Letters to Trace
Help your class learn the alphabet with this resource that asks learners to collate a 26-page alphabet coloring book! Youngsters color each letter of the alphabet and a corresponding animal; for example, they would color in an octopus...
Pre-K - 1st English Language Arts CCSS: Designed
Color Worksheets for Preschool and Kindergarten
In this color worksheet, students trace the color words by following the dots. Words are printed in the color they represent. Other color recognition activities are included with this page, such as a rainbow flash card game.
Pre-K - K Visual & Performing Arts
Preschool Color Fun and Activities
I can't stress enough the importance of songs and rhyming chants to the development of early literacy skills. To reinforce color recognition, little ones sing, count, move, and create. All you have to do is choose which of these great...
Pre-K English Language Arts
|
Betelgeuse, a bright reddish star in the constellation Orion. It is about 495 light-years from the earth, though some measurements put it at about 640 light-years away. Its color indicates a relatively low surface temperature of about 5,000° F. (2,800° C.), about half that of the sun. Betelgeuse is a variable star that fluctuates irregularly in size and brightness. At its smallest, Betelgeuse has a diameter some 300 times that of the sun; its greatest diameter is about one-third larger than this. Betelgeuse is classified in the group of largest known stars, the supergiants.
|
- Today, India ranks second worldwide in farm output, and in 2010 it was among the world's five largest producers of over 80% of agricultural produce items, including many cash crops such as coffee and cotton.
- Agriculture and allied sectors like forestry and fisheries accounted for 16.6% of GDP in 2009 and about 50% of the total workforce. The people responsible for our food security are not themselves secure!
Q. What are the features of agriculture in India? What problems/advantages do these features bring in for agriculture?
These are some characteristics of agriculture in India:
- Self-Reliance: India has been able to produce sufficient grains (mainly rice and wheat) for its population. Credit for this mainly goes to the Green Revolution. However, it led to the diversion of land to these two crops only, which in turn led to insufficient production of other important crops like pulses. India is an importer of pulses.
- Productivity: India has the second-largest area of land under cultivation in the world, which is the primary reason for its high production. However, India consistently performs poorly in terms of productivity (harvest per unit area) in almost all major crops. The reasons for this include small landholdings, lack of technology use, conventional cultivation methods, and poor irrigation practices that waste water and degrade land.
- Subsistence: The majority of households undertake agricultural activities for their own survival. They use their crops to obtain essential goods and services. As they do not have enough land and capital, and as they constitute the majority of farmers in India, agriculture in India is known as subsistence agriculture. In contrast, the USA practices commercial agriculture, which involves large landholdings and capital and results in enhanced productivity and large profits for a relatively small farmer class.
- Decreasing share in economy: Between 1970 and 2011, the GDP share of agriculture has fallen from 43 to 16%. This isn’t because of reduced importance of agriculture, or a consequence of agricultural policy. This is largely because of the rapid economic growth in services, industrial output, and non-agricultural sectors in India especially from 2000 to 2010. Agriculture grows at a lesser pace than overall growth of Indian economy and hence this downward trajectory will continue at least in near future.
- Irrigation: India has reached about 80% of its irrigation potential, reducing its dependence on the monsoon. However, irrigation practices in India lead to water wastage and land degradation. Moreover, due to excessive use of groundwater through tubewells running on subsidized electricity, the water table is steadily falling, and many borewells have already run dry in the last few years.
- Losses and Middlemen: The Indian farmer receives just 10 to 23% of the price the Indian consumer pays for exactly the same produce, the difference going to losses, inefficiencies and middlemen. Farmers in the developed economies of Europe and the United States, in contrast, receive 64 to 81% (according to some reports).
- Farmer Suicides: Following the economic reforms of 1991, the government withdrew support from the agricultural sector. These reforms, along with other factors, led to a rise in farmer suicides. Various studies identify the important factors as the withdrawal of government support, insufficient or risky credit systems, the difficulty of farming semi-arid regions, poor agricultural income, absence of alternative income opportunities, a downturn in the urban economy which forced non-farmers into farming, and the absence of suitable counseling services.
- Credit Facilities: Credit availability for farmers remains low and skewed towards particular regions. Bankers cite the creditworthiness of farmers as the main reason for this problem. The government, under its mission for financial inclusion, has started various schemes to provide credit facilities at favorable interest rates to farmers. These schemes, along with declining poverty, should increase farmers' creditworthiness.
- Seed: The Green Revolution in India involved the use of high-yielding seed varieties, and thus improved seeds entered India. The penetration of improved seed varieties remains low, however, especially due to the controversy over genetically modified seeds and their potential impacts on the environment.
- Other issues: India has very poor rural roads, affecting the timely supply of inputs and transfer of outputs from farms. Irrigation systems are inadequate, leading to crop failures in some parts of the country for lack of water. Elsewhere, regional floods, poor seed quality, inefficient farming practices, lack of cold storage and harvest spoilage cause over 30% of farmers' produce to go to waste, while the lack of organized retail and competing buyers limits Indian farmers' ability to sell surplus and commercial crops.
Q. Why should I know about agriculture?
Agriculture is considered an economic activity globally, but in India it is a means of survival for the majority of the population. Due to rapid urbanization and the technology revolution, attention has been diverted from the sector to the more attractive service sector. This lack of attention has already weakened Indian agriculture, and India may face food security problems in the not-very-distant future. Youth form a major electorate and will shape public policy in the coming years. If we do not think of our farmers, nobody will.
|
Hermit Warblers are small, yellow-headed birds with distinctive, unstreaked flanks. They are white below and gray above, with white outer tail feathers and two white wing-bars. Males have black throats. Females' throats are grayish, with some black. In parts of Washington, Hermit Warblers hybridize with Townsend's Warblers, resulting in birds with plumage intermediate between the two species.
Hermit Warblers are most often found in mature coniferous forests, from sea level to the mountains. During breeding season, they are most common in stands over 30 years old, and are generally absent from stands under 20 years old. They are generally found in the interior of large forests, high in the canopy.
During migration and post-breeding, Hermit Warblers are commonly found in mixed flocks. When foraging they hop about the foliage, moving from the trunk outward to branch tips and then starting back at the trunk. They also glean items from the foliage while hovering, and will fly out to catch aerial prey. Hermit Warblers can hang upside-down to glean from the undersides of leaves and twigs. Their preference for high, dense foliage makes them difficult to spot, but they can be heard singing regularly during the breeding season.
Insects, spiders, and other invertebrates make up most of the Hermit Warbler's diet. Young birds are fed many caterpillars.
Males arrive on the breeding grounds before females. They establish and defend territories by singing. Monogamous pairs form shortly after the females arrive. The female builds the nest, which is saddled across high limbs and concealed by overhanging branches. The nest is an open cup of weeds, needles, twigs, moss, rootlets, and spider webbing, lined with feathers, hair, and other soft material. The female incubates 4 to 5 eggs for about 12 days, and both parents feed the young. The young leave the nest 8 to 10 days after hatching, and the parents continue to feed them for at least a few days following fledging.
Most Hermit Warblers winter in the mountains of Mexico and Guatemala, although some winter on the California coast and north to Oregon in small numbers. In spring, they return along the coast in a fairly quick northward migration. Fall migration is generally through the mountains and is usually more drawn out than the spring movement.
Hermit Warblers formerly bred as far north as British Columbia. However as Townsend's Warblers expand their range, Hermit Warblers are being supplanted, that is, they are slowly being extirpated. The northernmost edge of their current range is a zone of hybridization between Hermit and Townsend's Warblers. Hermit Warblers require specialized habitat, and that habitat (mature coniferous forest) is at risk from logging within their range. Hermit Warblers will use forests that have been lightly thinned, but will not inhabit heavily thinned or clear-cut stands. They have a relatively small range that is decreasing due to logging and the expansion of Townsend's Warblers. Although still common in many areas, Hermit Warblers are listed as a species-at-risk by Partners in Flight.
When and Where to Find in Washington
Hermit Warblers are common from mid-April to early August in the southern Cascades and southeastern Olympic Mountains. Pure Hermit Warblers are found in southern Washington, from Mount Adams and Mount Rainier west, north to the southern Olympic Peninsula. Hybrid zones are approximately 50-mile-wide bands in the southern Cascades and the Olympic Peninsula. Within the hybrid zone in the Olympic Mountains, Hermit Warblers can be found at mid-elevations between higher- and lower-elevation Townsend's Warblers.
Washington Range Map
North American Range Map
- Blue-winged Warbler (Vermivora pinus)
- Golden-winged Warbler (Vermivora chrysoptera)
- Tennessee Warbler (Vermivora peregrina)
- Orange-crowned Warbler (Vermivora celata)
- Nashville Warbler (Vermivora ruficapilla)
- Northern Parula (Parula americana)
- Yellow Warbler (Dendroica petechia)
- Chestnut-sided Warbler (Dendroica pensylvanica)
- Magnolia Warbler (Dendroica magnolia)
- Cape May Warbler (Dendroica tigrina)
- Black-throated Blue Warbler (Dendroica caerulescens)
- Yellow-rumped Warbler (Dendroica coronata)
- Black-throated Gray Warbler (Dendroica nigrescens)
- Black-throated Green Warbler (Dendroica virens)
- Townsend's Warbler (Dendroica townsendi)
- Hermit Warbler (Dendroica occidentalis)
- Blackburnian Warbler (Dendroica fusca)
- Yellow-throated Warbler (Dendroica dominica)
- Prairie Warbler (Dendroica discolor)
- Palm Warbler (Dendroica palmarum)
- Bay-breasted Warbler (Dendroica castanea)
- Blackpoll Warbler (Dendroica striata)
- Black-and-white Warbler (Mniotilta varia)
- American Redstart (Setophaga ruticilla)
- Prothonotary Warbler (Protonotaria citrea)
- Ovenbird (Seiurus aurocapilla)
- Northern Waterthrush (Seiurus noveboracensis)
- Kentucky Warbler (Oporornis formosus)
- Mourning Warbler (Oporornis philadelphia)
- MacGillivray's Warbler (Oporornis tolmiei)
- Common Yellowthroat (Geothlypis trichas)
- Hooded Warbler (Wilsonia citrina)
- Wilson's Warbler (Wilsonia pusilla)
- Yellow-breasted Chat (Icteria virens)
|
The heating rate in ultrasonic welding is the product of the loss modulus of the material, the frequency, the clamp force, and the square of the amplitude. Loss modulus is the ability of the material to convert repetitive variation in compressive load into heat; in other words, it indicates how easy it is to heat the material using ultrasound. Think of it as inversely proportional to intermolecular lubricity: slippery materials are harder to heat up than less slippery materials. This is an oversimplification, but it will work for now.

Loss modulus "is what it is" when you go to weld a part, but knowing it is a factor can be useful in rectifying a troublesome application. Frequency is determined by the equipment and tooling, and there is not much you can do about it if you are standing next to the machine with the tooling installed, but careful thought should go into selecting the right frequency for the job. More on that later. Amplitude, as we have seen, is determined electrically and acoustically. Clamp force is generally provided by an air cylinder and is adjusted by changing the pressure in the cylinder.

The ultimate temperature of the joint is determined by the heating rate and the exposure time. In the simplest terms, for any given weld, one can increase the heating rate by increasing clamp force or amplitude and decreasing exposure time, or decrease the heating rate by decreasing amplitude or clamp force and increasing exposure time. When changing the heating rate, remember that it is affected much more strongly by changes in amplitude than by changes in clamp force, as the sketch below illustrates.
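This Python sketch only illustrates the proportionalities; the constant k and the numeric values (20 kHz, 300 N clamp force, 30 µm amplitude) are made-up assumptions, not material data.

```python
def heating_rate(loss_modulus, frequency, clamp_force, amplitude, k=1.0):
    """Relative heating rate: proportional to loss modulus, frequency,
    clamp force, and the SQUARE of amplitude (all units arbitrary here)."""
    return k * loss_modulus * frequency * clamp_force * amplitude**2

# Baseline with illustrative values.
base = heating_rate(1.0, 20e3, 300.0, 30e-6)

print(heating_rate(1.0, 20e3, 300.0, 60e-6) / base)  # doubling amplitude -> 4.0x
print(heating_rate(1.0, 20e3, 600.0, 30e-6) / base)  # doubling clamp force -> 2.0x
```

The squared amplitude term is why amplitude dominates: doubling it quadruples the heating rate, while doubling clamp force merely doubles it.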
|
Benjamin Hudson, a professor of history and medieval studies, was recently quoted in a story comparing President Donald Trump's promise to build a border wall between the United States and Mexico to Roman Emperor Hadrian, who built a border wall starting in 122 A.D. Here's an excerpt from the Smithsonian Magazine article:
“Hadrian’s Wall wasn’t just built to keep the Picts out. It likely served another important function—generating revenue for the empire. Historians think it established a customs barrier where Romans could tax anyone who entered. Similar barriers were discovered at other Roman frontier walls, like that at Porolissum in Dacia.
“The wall may also have helped control the flow of people between north and south, making it easier for a few Romans to fight off a lot of Picts. ‘A handful of men could hold off a much larger force by using Hadrian’s Wall as a shield,’ Benjamin Hudson, a professor of history at Pennsylvania State University and author of The Picts, says via email. ‘Delaying an attack for even a day or two would enable other troops to come to that area.’ Because the Wall had limited checkpoints and gates, Collins notes, it would be difficult for mounted raiders to get too close. And because would-be invaders couldn’t take their horses over the Wall with them, a successful getaway would be that much harder.
“The Romans had already controlled the area around their new wall for a generation, so its construction didn’t precipitate much cultural change. However, they would have had to confiscate massive tracts of land.
“Most building materials, like stone and turf, were probably obtained locally. Special materials, like lead, were likely privately purchased, but paid for by the provincial governor. And no one had to worry about hiring extra men—either they would be Roman soldiers, who received regular wages, or conscripted, unpaid local men.
“ ‘Building the Wall would not have been “cheap,” but the Romans probably did it as inexpensively as could be expected,’ says Hudson. ‘Most of the funds would have come from tax revenues in Britain, although the indirect costs (such as the salaries for the garrisons) would have been part of operating expenses,’ he adds.”
Read the full article at SmithsonianMag.com.
|
Klingon originally had a ternary number system; that is, one based on three. Counting proceeded as follows: 1, 2, 3; 3+1, 3+2, 3+3; 2*3+1, 2*3+2, 2*3+3; 3*3+1, 3*3+2, 3*3+3; and then it got complicated. In accordance with the more accepted practice, the Klingon Empire sometime back adopted a decimal number system, one based on ten. Though no one knows for sure, it is likely that this change was made more out of concern for understanding the scientific data of other civilizations than out of a spirit of cooperation.
The Klingon numbers are:

1 wa'
2 cha'
3 wej
4 loS
5 vagh
6 jav
7 Soch
8 chorgh
9 Hut
Higher numbers are formed by adding special number- forming elements to the basic set of numbers (1--9). Thus, wa'maH ten consists of wa' one plus the number-forming element for ten, maH. Counting continues as follows:
11 wa'maH wa' (that is, ten and one)
12 wa'maH cha' (that is, ten and two)
Higher numbers are based on maH ten, vatlh hundred, and SaD or SanID thousand. Both SaD and SanID are equally correct for thousand, and both are used with roughly equal frequency. It is not known why this number alone has two variants.
20 cha'maH (that is, two tens)
30 wejmaH (that is, three tens)
100 wa'vatlh (that is, one hundred)
200 cha'vatlh (that is, two hundreds)
1,000 wa'SaD or wa'SanID (that is, one thousand)
2,000 cha'SaD or cha'SanID (that is, two thousands)
Numbers are combined as in English:
5,347 vaghSaD wejvatlh loSmaH Soch or vaghSanID wejvatlh loSmaH Soch
604 javvatlh loS
31 wejmaH wa'
Some of the number-forming elements for higher numbers are:
ten thousand netlh
hundred thousand bIp
Zero is pagh.
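As an illustration of these formation rules, here is a small Python sketch using the spellings given above. It covers 0 through 9,999, uses SaD rather than the equally correct SanID, and omits the higher elements netlh and bIp for brevity; it is an informal aid, not an official converter.

```python
DIGITS = ["", "wa'", "cha'", "wej", "loS", "vagh", "jav", "Soch", "chorgh", "Hut"]
ELEMENTS = [("SaD", 1000), ("vatlh", 100), ("maH", 10)]  # SanID also valid for 1000

def klingon(n: int) -> str:
    """Form a Klingon number word for 0 <= n < 10,000 per the rules above."""
    if n == 0:
        return "pagh"
    parts = []
    for element, value in ELEMENTS:
        digit, n = divmod(n, value)
        if digit:
            parts.append(DIGITS[digit] + element)  # e.g. cha' + maH -> cha'maH
    if n:
        parts.append(DIGITS[n])
    return " ".join(parts)

print(klingon(5347))  # -> vaghSaD wejvatlh loSmaH Soch
print(klingon(604))   # -> javvatlh loS
print(klingon(31))    # -> wejmaH wa'
```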
Numbers are used as nouns. As such, they may stand alone as subjects or objects or they may modify another noun.
mulegh cha' Two (of them) see me.
(mulegh they see me, cha' two)
wa' yIHoH Kill one (of them)!
(wa' one, yIHoH kill him/her!)
The preceding sentence is grammatically correct even without the wa' because the prefix yI- indicates a singular object. The wa', therefore, is used for emphasis only.
Numbers used as modifiers precede the noun they modify.
loS puqpu' or loS puq four children
vaghmaH yuQmey or vaghmaH yuQ fifty planets
The plural suffixes (-pu', -mey) are not necessary when a number is used.
When a number is used for numbering, as opposed to counting, it follows the noun. Compare:
DuS wa' torpedo tube number 1
wa' DuS one torpedo tube
Ordinal numbers (first, second, etc.) are formed by adding -DIch to the numbers.
Ordinal numbers follow the noun.
meb cha'DIch second guest
Adding -logh to a number gives the notion of repetitions.
Hutlogh nine times
These numbers function in the sentence as adverbials (section 5.4).
wej loch cha' 2/3 (two thirds)
vagh loch wej 3/5 (three fifths)
loS loch jav 6/4 (six quarters)
In theory, if appropriate in a mathematical discussion, one could say wa' loch wej "three one-ths". (Though perhaps a little grammatically aberrant, this would not be wa' luloch wej.)
Use Dop to create negative numbers. wej Dop "minus three" or "negative three". Compare this to 'u' Dop "mirror universe".
For random numbers, as when throwing dice, use the verb 'al "float" instead of a verb meaning "be random".
mI' 'al (a) random number
'al mI' the number is random
There is also a slang expression Du'Hom mI' "random number" (literally "garden number").
|
Strictly speaking, a power lithium-ion battery is a lithium-ion battery with a capacity of more than 3 Ah. In current usage, it refers to a lithium-ion battery whose discharge can drive equipment, instruments, models, vehicles, and the like. Power lithium-ion batteries are divided into high-capacity and high-power types. High-capacity batteries can be used in power tools, bicycles, scooters, miner's lamps, medical equipment, etc.; high-power batteries are mainly used in hybrid vehicles and other applications that require charging and discharging at high current. According to their internal materials, power lithium-ion batteries are divided into liquid lithium-ion batteries and polymer lithium-ion batteries, collectively referred to as power lithium-ion batteries.
High-performance lithium-ion battery: to break through the storage bottleneck of traditional lithium batteries, it is necessary to develop a new iron-carbon storage material that can store more energy in a small storage unit. The obvious shortcoming of this material is that its charging cycle is unstable: the storage capacity drops significantly after the battery has been charged and discharged many times. To this end, a new synthetic method was developed: mixing and heating a few raw materials with a lithium salt creates a new nanostructured material containing carbon nanotubes, which forms both a storage unit and a conductive circuit at the nanoscale.
|
Dyslexia, also known as alexia or developmental reading disorder, is a specific learning difficulty that mainly affects the development of literacy and language related skills. It is likely to be present at birth and to be life-long in its effects. It is characterised by difficulties with phonological processing, rapid naming, working memory, processing speed, and the automatic development of skills that may not match up to an individual’s other cognitive abilities.
Dyslexia is the most common learning difficulty. Some see dyslexia as distinct from reading difficulties resulting from other causes, such as a non-neurological deficiency with hearing or vision, or poor reading instruction. There are three proposed cognitive subtypes of dyslexia (auditory, visual and attentional), although individual cases of dyslexia are better explained by specific underlying neuropsychological deficits (e.g. attention deficit hyperactivity disorder, a visual processing disorder / visual stress) and co-occurring learning difficulties (e.g. dyscalculia and dysgraphia). Although it is considered to be a receptive (afferent) language-based learning disability, dyslexia also affects one’s expressive (efferent) language skills.
|
Avoidance behaviour, type of activity seen in animals exposed to adverse stimuli, in which the tendency to act defensively is stronger than the tendency to attack. The underlying implication that a single neural mechanism is involved (such as a specific part of the brain which, under electrical stimulation, seems to inflict punishment) remains only a hypothesis. Clearly, the same kinds of avoidance behaviour might result from different underlying physiological mechanisms. Thus, although the various dichotomies, or polarities, of behaviour, such as positive and negative, psychoanalytic life and death instincts, and approach and withdrawal concepts, may be logical or philosophical conveniences, they seem, nevertheless, to lack clear physiological meaning.
Alternative usage defines avoidance behaviour by describing a number of patterns: active avoidance (fleeing), passive avoidance (freezing stock-still or hiding), and a pattern of protective reflexes, as seen in the startle response. There is good reason to suppose that, in cats, for example, each of these patterns is coordinated separately by the brain. One kind of fleeing, in which the cat moves continuously and shows much upward climbing, is produced by electrical stimulation of specific parts of the brain (hypothalamic sites). Stimulation of other sites (in the thalamus) generates other types of fleeing movements, causing the animal to crouch, look around, move, slink close to the floor, and hide, if possible. In general, among birds and mammals, brain sites for fleeing of the first type occur in hypothalamic and mesencephalic zones.
Protective reflexes in mammals include ear retraction to a position of safety—pressed against and somewhat behind the skull—as when a horse is seen to lay its ears back. Among the monkey-like bush babies (Galagos) the outer ear folds up laterally and longitudinally at the same time, under threat. The eyes are closed, and the muscles around the eye are contracted, adding to the protection. During this so-called startle reflex, breathing is checked, and the mouth corners are pulled back to expose the teeth; this prepares both for biting in defense and also for movements of the tongue and for head shaking to free the mouth of any dangerous or distasteful substance that may have been taken in. In most mammals, the limbs flex as if ready for a leap; in the human startle reflex, the arms are thrust outward as if ready to grasp at a support.
It is helpful to consider avoidance behaviour in terms of factors that elicit it (e.g., specific stimuli) and regulate it (e.g., hormones).
Factors in avoidance behaviour
Warning calls and visual signals that are unique to different species of birds and mammals effectively and specifically evoke avoidance patterns. In some cases, learning clearly emerges as a factor; thus, members of a colony of birds seem to learn to respond to the alarm calls of all species present in the colony. Among ducklings, a visual model to evoke fleeing and hiding can be fashioned as a cardboard cutout. When moved overhead in one direction, the model resembles a short-necked, long-tailed hawk, and the ducklings flee from it; when moved in the other direction, the model looks like a harmless, long-necked goose, and the ducklings tend to stay calm. The model is effective, however, in eliciting the two kinds of behaviour only when the ducklings are accustomed to geese flying over but not hawks.
Innate factors also contribute to such responses (see instinct). Domestic chicks, for example, show crouching and freezing in response to the long alarm call of their species. Many of the perching birds (passerines) will gather to mob when stimulated by the sight of an owl. The eyes in the characteristic owl face have been found to be especially important; even birds reared in isolation respond to man-made models with appropriate eyespots painted on. It has been suggested that many human beings are specifically (and perhaps instinctively) disturbed by the sight of snakes—the notion of a legless object perhaps being a key stimulus. Human responses to spiders and centipedes with conspicuous legs also may be intense. In the reaction to snakes at least, notwithstanding Freudian explanations that they symbolize male sex organs, the behaviour of people may be compared with owl mobbing among passerine birds.
Specific chemical signals can induce avoidance behaviour; some are released by minnows and tadpoles when their skin is damaged (usually indicating to fellows that there is danger). These chemicals appear to be specific for each species of fish and are highly effective in producing fleeing (see chemoreception). Many ants produce volatile alarm substances (terpenes) that are attractants to other ants at low concentrations and, in high concentrations near their source, produce rapid locomotion, defense postures, and, sometimes, fleeing. Some invertebrate avoidance responses are reflexes evoked by very specific stimuli; rapid swimming by cockles clapping their shells, for example, is elicited by starfish extract. Shell jerking is produced in a freshwater snail (Physa) by contact with a leech, another specific response to a major predator.
Pain, startle, and novelty
Painful stimuli are preeminent among those that produce avoidance. Among mammals (including man) many such responses are patently inborn, as is the reflex withdrawal of one’s finger from a hot griddle.
To classify a stimulus as startling or novel requires some comparison with previous stimulation. Human responses (orientation reflex) to startling or interesting stimuli may be studied by presenting a series of repeated tones; the orientation reflex tends to appear at the moment at which some change in the usual sequence (such as a longer or shorter tone) occurs. There is some evidence that the hippocampus (a brain structure) is involved in the human experience of novelty. Surgical removal of the hippocampus in many animals makes avoidance responses to strange objects far more persistent; a comparable operation in small parrots (lovebirds) greatly increases the persistence of calls that gather others for mobbing. Probably the hippocampus takes part in establishing memory of any new stimulus, and once this has occurred, the stimulus is no longer novel. Removal of other brain structures (the amygdala) reduces avoidance of strange objects (e.g., in lovebirds) and also makes fleeing and defensive attack less likely.
|
Rome is popularly called 'the city of seven hills'. These seven hills, namely the Viminal, Quirinal, Palatine, Esquiline, Capitoline, Caelian, and Aventine, were separated by marshy land and the River Tiber. Of these, the Caelian, Esquiline, Quirinal and Viminal hills were portions of a volcanic ridge, while the Aventine, Capitoline, and Palatine hills formed the western group. In ancient Rome, each of the seven hills had its own walled settlement.
The Tiber River flows from the Apennine Mountains south-westwards to the Tyrrhenian Sea, passing by Rome on the way. This 405-km-long river has played a significant role in shaping Rome's history and culture.
Rome's climate is, very broadly, of the 'Mediterranean' variety. The summer months are warm to mild, and the winters are cold. Rainfall occurs during the winter months, between October and January. The summer season lasts from June to September, with temperatures ranging from a maximum of 30°C to a minimum of 14°C; the daily temperature range averages 14°C. The winter season extends from December to March, with temperatures varying between 3°C and 16°C. The months of April, May, October and November are very pleasant, with temperatures between 7°C and 23°C.
Rome, being in the Mediterranean climatic region, receives moderate rainfall throughout the year. Rainfall is heaviest between October and January, totalling about 40 cm. Because its Mediterranean climate is conducive to travel throughout the year, Rome is always thronged by visitors.
|
- hardening of the arteries
Atherosclerosis refers to fatty deposits formed under the inner lining of the blood vessels. The walls of the vessels become thick and less elastic. The thickened areas are called plaque.
What is going on in the body?
Atherosclerosis occurs when fatty substances, cholesterol, cellular waste products, calcium, and other materials build up on the inside lining of the arteries. The buildup is more likely to be in parts of the artery that have been injured. The injury usually occurs where the artery bends or branches. Once plaque builds up, it may cause the cells in the artery lining to make chemicals that cause more plaque buildup.
Two problems can result from the plaque.
- First, the blood vessel can become narrow, preventing blood flow to the area served by the artery. For example, if an artery to the heart becomes 80% to 90% blocked, a person can develop angina.
- Second, the plaque can rupture and send a blood clot streaming through the artery. A blood clot that travels to other parts of the body is called an embolus. The embolus can lodge in a smaller section of the artery or in another artery, completely cutting off the blood supply. This blockage can cause a heart attack, stroke, pulmonary embolus, or other serious medical problem.
What are the causes and risks of the disease?
There are several factors that increase a person's risk of developing atherosclerosis, such as:
- smoking and secondhand smoke
- diabetes
- high blood cholesterol, especially a high level of LDL ("the bad" or "lethal" cholesterol)
- high blood pressure
- high levels of triglycerides in the blood
- increased age
- lack of exercise
- male gender
What can be done to prevent the disease?
In some cases, atherosclerosis cannot be prevented. A person may be able to reduce his or her risk for developing atherosclerosis in the following ways:
- Eat a diet for preventing heart disease.
- Follow the American Heart Association, or AHA, recommendations for controlling cholesterol.
- Get 30 minutes of physical activity every day or almost every day.
- Maintain a healthy body weight.
- Seek effective treatment for high blood pressure.
How is the disease diagnosed?
Diagnosis of atherosclerosis begins with a medical history and physical exam. A variety of special tests can be used to check the width of the openings in the arteries that supply the affected areas.
Long Term Effects
What are the long-term effects of the disease?
Unchecked atherosclerosis will continue to narrow the large and medium arteries supplying the body's vital organs. This can result in serious medical problems, such as heart attack, kidney failure, and stroke.
What are the risks to others?
Atherosclerosis is not contagious. It does, however, seem to run in families. If one or both parents have atherosclerosis, a person should make every effort to reduce his or her coronary risk factors. This is especially true for people whose parents developed atherosclerosis early in life.
What are the treatments for the disease?
Treatment of atherosclerosis focuses on lowering a person's coronary risk factors. Lowering blood cholesterol, controlling high blood pressure, and stopping smoking can stabilize plaque. However, these steps may not reverse the process.
A low dose of aspirin taken on a regular basis seems to reduce the development of clots in atherosclerotic vessels, which in turn significantly reduces the chances of a stroke or heart attack.
Atherosclerosis that progresses far enough to cause symptoms may require surgery or interventional therapy. Surgery can remove or bypass plaque in the arteries that supply the brain, heart, kidneys, or legs.
What are the side effects of the treatments?
Medicines used to treat medical conditions may cause allergic reactions. Surgery can be complicated by bleeding, infection, or an allergic reaction to the anesthetic.
What happens after treatment for the disease?
Most people who have atherosclerosis are encouraged to begin a regular exercise program. A person who has atherosclerosis should make every effort to reduce coronary risk factors. This may include smoking cessation, control of chronic diseases and conditions, and a diet for preventing heart disease. Medicines may need to be adjusted to achieve the best response.
How is the disease monitored?
A person will have regular visits to the healthcare provider, along with tests to monitor the progress of the atherosclerosis. Any new or worsening symptoms should be reported to the healthcare provider.
|
Aurora Australis: Southern lights
Auroras are natural colored light displays in the sky, usually observed at night, particularly in the polar zone. They typically occur in the ionosphere. In northern latitudes, the effect is known as the aurora borealis, named after the Roman goddess of dawn, Aurora, and the Greek name for north wind, Boreas.
It often appears as a greenish glow, or sometimes a faint red, as if the sun were rising from an unusual direction. The aurora borealis is also called the northern polar lights, as it is visible only in the northern sky from the Northern Hemisphere. It most often occurs from September to October and from March to April. The Cree call this phenomenon the Dance of the Spirits.
|
3.1.5 How large a key should be used in the RSA cryptosystem?
The size of a key in the RSA algorithm typically refers to the size of the modulus n. The two primes, p and q, which compose the modulus, should be of roughly equal length; this makes the modulus harder to factor than if one of the primes is much smaller than the other. If one chooses to use a 768-bit modulus, the primes should each have length approximately 384 bits. If the two primes are extremely close [1] or their difference is close to any predetermined amount, then there is a potential security risk, but the probability that two randomly chosen primes are so close is negligible.
The best size for a modulus depends on one's security needs. The larger the modulus, the greater the security, but also the slower the RSA algorithm operations. One should choose a modulus length upon consideration, first, of the value of the protected data and how long it needs to be protected, and, second, of how powerful one's potential threats might be.
A good analysis of the security obtained by a given modulus length is given by Rivest [Riv92a], in the context of discrete logarithms modulo a prime, but it applies to the RSA algorithm as well. A more recent study of RSA key-size security can be found in an article by Odlyzko [Odl95]. Odlyzko considers the security of RSA key sizes based on factoring techniques available in 1995 and on potential future developments, and he also considers the ability to tap large computational resources via computer networks. In 1997, a specific assessment of the security of 512-bit RSA keys showed that one could be factored for less than $1,000,000 in cost and eight months of effort [Rob95c]. Indeed, the 512-bit number RSA-155 was factored in seven months during 1999 (see Question 2.3.6). This means that 512-bit keys no longer provide sufficient security for anything more than very short-term security needs.
RSA Laboratories currently recommends key sizes of 1024 bits for corporate use and 2048 bits for extremely valuable keys like the root key pair used by a certifying authority (see Question 22.214.171.124). Several recent standards specify a 1024-bit minimum for corporate use. Less valuable information may well be encrypted using a 768-bit key, as such a key is still beyond the reach of all known key breaking algorithms. Lenstra and Verheul [LV00] give a model for estimating security levels for different key sizes, which may also be considered.
It is typical to ensure that the key of an individual user expires after a certain time, say, two years (see Question 126.96.36.199). This gives an opportunity to change keys regularly and to maintain a given level of security. Upon expiration, the user should generate a new key being sure to ascertain whether any changes in cryptanalytic skills make a move to longer key lengths appropriate. Of course, changing a key does not defend against attacks that attempt to recover messages encrypted with an old key, so key size should always be chosen according to the expected lifetime of the data. The opportunity to change keys allows one to adapt to new key size recommendations. RSA Laboratories publishes recommended key lengths on a regular basis.
Users should keep in mind that the estimated times to break the RSA system are averages only. A large factoring effort, attacking many thousands of moduli, may succeed in factoring at least one in a reasonable time. Although the security of any individual key is still strong, with some factoring methods there is always a small chance the attacker may get lucky and factor some key quickly.
As for the slowdown caused by increasing the key size (see Question 3.1.2), doubling the modulus length will, on average, increase the time required for public key operations (encryption and signature verification) by a factor of four, and increase the time taken by private key operations (decryption and signing) by a factor of eight. The reason public key operations are affected less than private key operations is that the public exponent can remain fixed while the modulus is increased, whereas the length of the private exponent increases proportionally. Key generation time would increase by a factor of 16 upon doubling the modulus, but this is a relatively infrequent operation for most users.
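These scaling factors are easy to sanity-check empirically. The following Python sketch is not part of the original FAQ; the stand-in modulus, exponents, and function name are invented for illustration, and the exact ratios will vary with the bignum implementation:

```python
import time

def time_modexp(bits, exponent, trials=20):
    """Rough average time of one modular exponentiation with a `bits`-bit modulus."""
    n = (1 << bits) - 159          # stand-in odd modulus; not a real RSA key
    m = 0x1234567890ABCDEF % n     # arbitrary message representative
    t0 = time.perf_counter()
    for _ in range(trials):
        pow(m, exponent, n)
    return (time.perf_counter() - t0) / trials

for bits in (1024, 2048):
    pub = time_modexp(bits, 65537)             # fixed public exponent
    priv = time_modexp(bits, (1 << bits) - 1)  # private exponent grows with the modulus
    print(f"{bits}-bit modulus: public {pub:.2e} s, private {priv:.2e} s")
```

Doubling `bits` should roughly quadruple the public-key time and multiply the private-key time by about eight, in line with the paragraph above.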
It should be noted that the key sizes for the RSA system (and other public-key techniques) are much larger than those for block ciphers like DES (see Section 3.2), but the security of an RSA key cannot be compared to the security of a key in another system purely in terms of length.
[1] Put m = (p+q)/2. With p < q, we have 0 ≤ m − √n ≤ (q−p)²/(8p). Since p = m − √(m²−n) and q = m + √(m²−n), the primes p and q can be easily determined if the difference q−p is small.
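To make the footnote concrete, here is a minimal Python sketch of Fermat's method (not part of the original FAQ; the primes and function name are invented for illustration). It recovers p and q almost immediately when q - p is small, which is why the primes should be balanced in length but not nearly equal in value:

```python
from math import isqrt

def fermat_factor(n):
    """Factor n = p*q by writing n = m^2 - d^2 with m = (p+q)/2, d = (q-p)/2.

    The search succeeds after roughly (q-p)^2 / (8*sqrt(n)) iterations,
    i.e. almost instantly when the primes are close together.
    """
    m = isqrt(n)
    if m * m < n:
        m += 1                       # start at ceil(sqrt(n))
    while True:
        d2 = m * m - n
        d = isqrt(d2)
        if d * d == d2:              # m^2 - n is a perfect square
            return m - d, m + d      # p, q
        m += 1

# Toy example with deliberately close primes (far too small for real RSA):
p, q = 1000003, 1000033
print(fermat_factor(p * q))          # -> (1000003, 1000033)
```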
|
Anatomy gave some grasses the upper hand
BROWN (US) — Anatomy explains why some grasses evolved a more efficient means of photosynthesis than others, scientists report.
Biologists refer to the grasses that have evolved this better means of making their food in warm, sunny, and dry conditions with the designation “C4.” Grasses without that trait are labeled “C3.”
What scientists already knew is that while all of the grasses in the two broad groups, or clades, known as BEP and PACMAD have the basic metabolic infrastructure to become C4 grasses, the species that have actually done so are entirely in the PACMAD clade.
An international team of scientists wondered why that disparity exists.
To find out, Brown University postdoctoral researcher and lead author Pascal-Antoine Christin spent two years closely examining the cellular anatomy of 157 living species of BEP and PACMAD grasses.
Using genetic data, the team also organized the species into their evolutionary tree, which they then used to infer the anatomical traits of ancestral grasses that no longer exist today, a common analytical technique known as ancestral state reconstruction.
That allowed them to consider how anatomical differences likely evolved among species over time.
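The article does not spell out the reconstruction method, and likelihood-based approaches are the norm for studies like this one. As a minimal illustration of the underlying idea, the Python sketch below applies Fitch parsimony to an invented four-species tree; the species names and trait states are hypothetical:

```python
# Minimal Fitch-parsimony sketch of ancestral state reconstruction.
# The tree shape, species names, and trait states are invented for
# illustration; the study itself analyzed 157 species with genetic data.

def fitch(node, states):
    """Return the candidate state set for `node` via post-order traversal.

    `node` is either a leaf name (str) or a (left, right) tuple;
    `states` maps leaf names to observed trait states.
    """
    if isinstance(node, str):                  # leaf: its observed state
        return {states[node]}
    left, right = (fitch(child, states) for child in node)
    common = left & right
    return common if common else left | right  # intersect, else union

# Hypothetical tree ((A, B), (C, D)) with a bundle-sheath trait:
tree = (("A", "B"), ("C", "D"))
observed = {"A": "large-sheath", "B": "large-sheath",
            "C": "small-sheath", "D": "large-sheath"}
print(fitch(tree, observed))   # candidate ancestral state(s) at the root
```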
Paradoxically, to understand C4 evolution, the researchers focused on the anatomy of C3 grasses in each clade.
In general what they found was that in the leaves of many PACMAD C3 grasses the veins were closer together, and that the veins themselves were surrounded by larger cells (“bundle sheath” cells) than in BEP C3 grasses.
‘Evolutionary stepping stone’
Ultimately, PACMAD grasses had a higher ratio of bundle sheath cells to mesophyll cells (the cells that fill in the area between veins). Their findings appear this week in the Proceedings of the National Academy of Sciences.
In C4 plants, such an anatomical arrangement facilitates a more efficient transfer and processing of CO2 in the bundle sheath cells when CO2 is in relatively short supply. When temperatures get hot or plants become stressed, they stop taking in as much CO2, creating just such a shortage within the leaf.
So PACMADs as a group had developed an anatomical predisposition to C4 photosynthesis that BEP grasses didn’t, says senior author Erika Edwards, an assistant professor of ecology and evolutionary biology at Brown.
“We found that consistently these PACMAD C3s are very different anatomically than the C3 BEPs,” she adds. “We think that was the evolutionary stepping stone to C4-like physiology.”
60 million years ago
It wasn’t always this way. Around 60 million years ago, BEP and PACMAD grasses were more similar and both headed in the same direction.
In both clades, the leaf veins had been growing closer together. But then the clades started to diverge in a key way: the bundle sheath cells surrounding the veins in BEP grasses started to shrink, while those in PACMAD grasses stayed larger.
For a long time the climate didn’t particularly punish or reward either of those directions. But then the climate changed and opportunity knocked, Edwards says. Only PACMAD was near the proverbial door.
“When atmospheric CO2 decreased tens of millions of years after the split of the BEP and PACMAD clades, a combination of shorter [distances between veins] and large [sheath] cells existed only in members of the PACMAD clade, limiting C4 evolution to this lineage,” Christin and co-authors write in the paper.
The researchers also found that evolution among C4 grasses was anatomically nuanced.
Some C4 grasses evolved because of advantageous changes in outer sheath cells, while others saw the improvement in inner sheath cells.
Ultimately, Edwards says, studies like this one show that plant biologists have made important progress in understanding the big picture of when and where important plant traits evolved. That could lead to further advances in both basic science and perhaps agriculture as well.
“Now that we have this increasingly detailed bird’s-eye view, we can start to become a more predictive science,” she says. “Now we have the raw goods to ask interesting questions about why, for example, one trait evolves 10 times in this region of the tree but never over here.
“In terms of genetic engineering we’re going to be able to provide some useful information to people who want to improve species, such as important crops.”
Scientists from the University of Sheffield, Claremont Graduate University, the Universite Paul Sabatier-Ecole Nationale de Formation Agronomique, Trinity College, and the Royal Botanic Gardens contributed to the study.
The National Science Foundation, the Marie Curie International Outgoing Fellowship, and the Agence Nationale de la Recherche supported the research.
Source: Brown University
You are free to share this article under the Creative Commons Attribution-NoDerivs 3.0 Unported license.
|
SASI Behavior In The Formation Of Supernovae And Neutron Stars
redOrbit Staff & Wire Reports – Your Universe Online
Researchers from the Max Planck Institute for Astrophysics (MPA) have, for the first time, created three-dimensional computer models in order to study the formation of neutron stars at the center of collapsing stars, officials from the German research center announced earlier this week.
By creating what they call the most expensive and elaborate computer simulations of the process to date, the team of investigators confirmed that “extremely violent, hugely asymmetric sloshing and spiral motions occur when the stellar matter falls towards the center,” MPA said. “The results of the simulations thus lend support to basic perceptions of the dynamical processes that are involved when a star explodes as supernova.”
When stars with at least eight times the mass of our Sun end their lives in a massive explosion, the stellar gas is forcefully expelled into the surrounding space. These supernovae are among the most energetic and brightest phenomena in the entire universe, the researchers said, and can shine brighter than an entire galaxy for a period of several weeks.
“Supernovae are also the birth places of neutron stars, those extraordinarily exotic, compact stellar remnants, in which about 1.5 times the mass of our Sun is compressed to a sphere with the diameter of Munich,” the Institute explained. “This happens within fractions of a second when the stellar core implodes due to the strong gravity of its own mass. The catastrophic collapse is stopped only when the density of atomic nuclei – a gargantuan 300 million tons per sugar cube – is exceeded.”
The exact processes that cause the disruption of the star, and how the implosion of a stellar core can be reversed into an explosion, are still up for debate, they said. In many favored scenarios, a tremendous number of the electrically neutral subatomic particles known as neutrinos are produced at the extreme temperatures and densities of the collapsing stellar core and nascent neutron star.
Under this scenario, experts believe the neutrinos heat the gas surrounding the hot neutron star, essentially igniting the explosion. The particles pump energy into the stellar gas, building up pressure until a shock wave is accelerated to disrupt the star in a supernova, the researchers said. Since the processes involved cannot be replicated under laboratory conditions, the MPI researchers turned to computer simulations to investigate.
However, since exceptionally complex mathematical equations are required to describe the motion of the stellar gas and the physical processes that occur within the collapsing stellar core, they needed the aid of some of the planet's most powerful supercomputers. They gained access to those machines thanks to the Rechenzentrum Garching (RZG), the computing center of the Max Planck Institute for Plasmaphysics (IPP) and the Max Planck Society.
With access to these supercomputers, the MPA researchers “could now for the first time simulate the processes in collapsing stars in three dimensions and with a sophisticated description of all relevant physics,” the Institute said. According to Florian Hanke, a PhD student who performed the simulations, the researchers used “nearly 16,000 processor cores in parallel mode, but still a single model run took about 4.5 months of continuous computing.”
After analyzing several terabytes worth of data, the researchers discovered the stellar gas exhibited “the violent bubbling and seething with the characteristic rising mushroom-like plumes driven by neutrino heating in close similarity to what can be observed in boiling water.” In addition, they discovered “powerful, large sloshing motions, which temporarily switch over to rapid, strong rotational motions” – a behavior which had been observed previously and dubbed Standing Accretion Shock Instability (SASI).
When SASI occurs, the initial spherical nature of the supernova shock is suddenly broken because the shock develops “large-amplitude, pulsating asymmetries by the oscillatory growth of initially small, random seed perturbations,” the Institute researchers said.
Previously, the phenomenon had only been observed in less detailed, incomplete computer models, but it was also demonstrated in this most recent research, proving SASI plays an important role in the processes behind neutron star formation – even in more realistic computer simulations.
“It not only governs the mass motions in the supernova core but also imposes characteristic signatures on the neutrino and gravitational-wave emission, which will be measurable for a future Galactic supernova,” research team member Bernhard Mueller explained. “Moreover, it may lead to strong asymmetries of the stellar explosion, in the course of which the newly formed neutron star will receive a large kick and spin.”
The Institute investigators now intend to take a closer look at the measurable effects connected to SASI, and to enhance their predictions of the associated signals, they said. Furthermore, they plan to perform additional and longer-term simulations in order to understand how the instability acts together with neutrino heating, making the latter process more efficient. Ultimately, they hope to determine whether or not this process is the long-sought mechanism that triggers a supernova explosion and results in a neutron star.
Their research is detailed in the study “SASI Activity in Three-Dimensional Neutrino-Hydrodynamics Simulations of Supernova Cores”, published in the Astrophysical Journal. A related study, “Shallow Water Analogue of the Standing Accretion Shock Instability: Experimental Demonstration and a Two-Dimensional Model,” was published in 2011 by the journal Physical Review Letters.
|
Here is a schematic outline of Curiosity's MMRTG, which carries 4.8 kg of Pu-238 dioxide and provides 110 watts of electrical power (about twice the maximum power of an Apple MacBook Pro).
The MMRTG, from Idaho National Laboratories, also provides heat to maintain a proper operating temperature for all the instruments and systems in the rover. It weighs 45 kg and is capable of producing power for almost 14 years. A great advantage of these thermoelectric generators is that they have no moving parts. The generator produces power by directly converting the heat of radioactive decay into electrical energy using thermocouples, at an operational efficiency of only 6 to 7%, which seems quite low.
The generator is fueled with a ceramic form of plutonium dioxide encased in multiple layers of protective materials including iridium capsules and high-strength graphite blocks. As the plutonium naturally decays, it gives off heat, which is circulated through the rover by heat transfer fluid plumbed throughout the system. Electric voltage is produced by using thermocouples, which exploit the temperature difference between the heat source and the cold exterior (i.e. Martian atmosphere).
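A rough back-of-the-envelope check of these figures: 110 W of electricity at 6-7% conversion efficiency implies roughly 1.7 kW of decay heat, and plutonium-238's half-life of about 87.7 years means the heat source itself fades only slightly over the 14-year design life. The Python sketch below makes the arithmetic explicit; it is illustrative only and ignores thermocouple degradation, which in practice also reduces output:

```python
# Illustrative decay arithmetic for the MMRTG figures quoted above.
HALF_LIFE_YEARS = 87.7        # Pu-238
EFFICIENCY = 0.065            # midpoint of the quoted 6-7%
P_ELECTRIC_0 = 110.0          # watts of electricity at start of mission

thermal_0 = P_ELECTRIC_0 / EFFICIENCY
print(f"Implied initial decay heat: ~{thermal_0:.0f} W")

for years in (0, 14):
    remaining = 0.5 ** (years / HALF_LIFE_YEARS)
    print(f"After {years:2d} years: ~{P_ELECTRIC_0 * remaining:.0f} W electrical "
          "(fuel decay only)")
```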
As @Beckett says, most of this efficiency loss is due to thermal conduction to the Martian atmosphere.
A nuclear power source was chosen because solar panels did not meet the full range of the mission's requirements. Only the radioisotope power system allows full-time communication with the rover during its atmospheric entry, descent and landing, regardless of the landing site. The nuclear-powered rover can also go farther, travel to more places, last longer, and power and heat a larger (car-sized) and more capable scientific payload than the solar-power alternative. The radioisotope power system gives Curiosity the potential to be the longest-operating, farthest-traveling, most productive Mars surface mission in history.
|
A scientist from The Scripps Research Institute has won a four-year, $1.9 million grant from the National Institutes of Health to better understand the parasite that causes malaria, laying the groundwork to develop better drugs to combat the widespread and deadly disease.
"Many antimalarial drugs alleviate symptoms, but do not necessarily result in a complete cure because some malaria parasites are able to persist asymptomatically in the liver for months or years," said Scripps Research Professor Elizabeth Winzeler, who is principal investigator for the new grant. "We hope to find new targets that are critical to the liver stages as well as the blood stages with the long-term aim of designing better drugs."
A Complicated Lifecycle
Malaria is a nasty and often fatal disease, affecting about 250 million people every year in Africa, Asia, and the Americas, according to the World Health Organization (WHO). Each year, more than 1 million people die of malaria, mostly children under the age of five. While significant strides had been made in curtailing the disease, for the last two decades malaria has again been on the rise due to the emergence of drug-resistant parasites.
The parasite Plasmodium, which causes malaria, has a complicated lifecycle in two hosts: mosquitoes and humans (or other vertebrates). When a malaria-infected mosquito feeds on a person, the parasite enters the human body. Within 30 minutes, the parasite has infected liver cells, where it remains anywhere from eight days to several months without causing noticeable symptoms.
When this period is over, however, the parasite (now in a different form) leaves the liver and enters red blood cells, where it grows and multiplies. When the infected red blood cells eventually burst, the parasite and Plasmodium toxins are released into the bloodstream, and the person feels sick. Symptoms include fever, chills, headache, and other flulike symptoms; in severe cases, the disease can be fatal.
|
Insects are intimately related to man, and they play an important part in the transmission of disease. They constitute a group of arthropods with bilaterally symmetrical bodies and jointed appendages, with the heart situated dorsally and the nervous system ventrally. Their bodies are covered with a tough skin called the exoskeleton and are divided into three parts: the head, provided with two antennae, eyes and a mouth; the thorax, composed of three segments bearing three pairs of legs and two pairs of wings; and the abdomen, composed of nine to eleven segments, the last two being modified into the external genitalia. The sexes are distinct, and insects reproduce from eggs. They have visual organs in the form of compound and simple eyes. They are not provided with lungs; they breathe by means of special tubular organs called tracheae, which communicate with the external air through lateral openings called spiracles.
Insects of concern may be wingless (e.g. fleas, lice, bedbugs) or winged (e.g. mosquitoes and flies).
These insects are frequently though not necessarily associated with dirt. They abound in unhygienic conditions and their entry into clean places may be entirely accidental.
Fleas – There are many different kinds of fleas, and each has a preference for a particular kind of host, e.g. human, cat, dog, or vermin. Any of these hosts may introduce fleas into an establishment. Fleas bite their host, causing annoyance; in humans, large red itching spots appear on the skin. Flea-borne epidemic diseases include plague and endemic (murine) typhus.
These are wingless insects 2-3 mm long, with laterally flattened bodies, hard thorax and abdomen, and three pairs of legs. They are brownish in colour, and both male and female suck blood.
Life history – The female lays 8-12 eggs at a time. Within 2-4 days in summer (about half a week in winter) the eggs hatch and hairy larvae appear. The larva develops into a pupa in about 2 weeks, spinning a cocoon covered with dirt and dust in which it pupates; in a further 2 weeks it develops into an adult flea.
Habits – Fleas prefer darkness and are sensitive to light. In the absence of rats, when starved, they bite man. They travel about 20-30 yards and can jump up to 3 inches.
Spraying with insecticide is a suitable way of eradicating them.
Lice – They are small wingless ectoparasites with a hard chitinous covering and three pairs of legs, each leg provided with a single claw. They live entirely on mammalian blood. They have oval grayish bodies that become brownish when filled with blood.
Head lice, which live in the hair of the head, are probably the most common of all lice. They cause intense irritation and suck blood. Their eggs, called 'nits', are numerous, stick firmly to the hair, and cannot be removed by brushing.
Diseases conveyed – There is no disease that can be directly attributed to lice, but they cause irritation and annoyance and loss of sleep.
Life history – A female louse produces eggs within 48 hours of assuming the adult form, and lays about 300 eggs during its lifetime. In the seams of clothing, the eggs may remain alive for 30 days. The male is about 3 mm long and the female 3.3 mm. The larva emerges in 6-7 days; three molts then occur and the insect becomes an adult. A louse takes about 15 days to complete its cycle from egg to adult.
Average life span of lice – 36 to 58 days.
Anti-lice measures include general cleanliness of body, hair, clothes and articles of the room.
Bedbugs – No insect is more difficult to eradicate from a building than the bed bug; the main difficulty is getting at it. Gammexane, D.D.T or kerosene oil containing pyrethrum may be sprayed to exterminate bed bugs. Cyanic acid, if used for the purpose, gives very good results, but its use requires great care on account of its poisonous effect on human beings. It must also be emphasized that if articles are removed from the room, they should be thoroughly inspected first.
A bed bug measures 3-5 mm in length and 1.5-2.5 mm in width. It is a dark brown, thin, compressed creature, which allows it to make its way into narrow cracks. Both male and female bugs bite, causing considerable irritation, which may result in large red patches with swelling.
They prefer human blood and are able to survive, sometimes for many months, without food. They are nocturnal by habit and deposit their eggs in crevices and cracks of woodwork and behind wallpaper. The eggs are stuck to these surfaces by a cement-like substance exuded by the bug and are therefore difficult to brush off.
Bedbugs cover considerable distances although they cannot fly. They also give off an unpleasant smell.
Life history – Females lay 1-12 eggs at a time, several times a year. The eggs hatch in 7-10 days; the larva molts soon after a blood meal and reaches the adult stage after 4 subsequent molts, becoming sexually mature in another 2 weeks. The lifecycle takes about 8-10 weeks to complete.
Silverfish – They are wingless insects, silver-gray in colour and about 1 cm long. The young closely resemble the adults; both are rounded in front and tapered towards the rear. Silverfish require a moist place in which to live. They leave their hiding places in search of food of a cellulose nature: they feed on starchy foods and the paste in wallpaper and books, and may attack clothing made of cotton or rayon, especially if starched.
Gammexane, D.D.T or pyrethrum may be sprayed to exterminate silverfish.
Cockroaches – They are more likely to be found in kitchens and restaurants than in accommodation areas, although cockroaches do not strictly require human food and will feed on whitewash, hair and books if no other food is available.
Hygienic storage and disposal of food and waste and the cleanliness of all areas where food is handled are important points in the prevention of infestation.
Cockroaches are difficult to eradicate, but a residual insecticide, e.g. chlordecone, may be used in cracks, crevices and holes, especially in brick or plasterwork through which warm pipes pass.
Mosquitoes – They are often referred to as 'biting flies', but they are in fact piercing insects, for the jaws of the female are transformed into a needle-like organ with which to penetrate the skin when a blood meal is required. The initial stages of their life, viz. the egg, larval and pupal stages, are spent in water; the presence of water is therefore essential for their existence. The male mosquito rarely lives more than 1-3 weeks; the female may live up to 4 months or more. Mosquitoes prefer dark colours to light ones. A blood meal is essential for the female mosquito before she lays a batch of eggs.
Life cycle – The female lays 100-250 (up to 500) eggs on the surface of water. In 2-3 days the eggs hatch and small worm-like larvae appear. These larvae feed on vegetable matter and reach full size in 6-10 days, when each changes into a comma-shaped creature called a pupa. In 2-3 days the pupal skin splits and an adult mosquito emerges.
Methods of prevention and control:
a) To do away with the conditions which render possible the breeding of mosquitoes.
b) To destroy the mosquito at some period of its life.
c) To prevent the mosquitoes from biting man.
A. To do away with conditions which render possible the breeding of mosquitoes -
i. Proper drainage
ii. Proper water disposal
iii. No stagnant water
B. To destroy the mosquitoes at some period of their life –
- Kerosene oil / diesel is sprayed on the surface of water once a week
- Pyrethrum extract 2%
- Pine oil
- All the above are mixed in liquid soft soap and a concentrated stock solution is made. This is diluted 10 times with water and stirred thoroughly before spraying.
- Paris green (copper aceto-arsenite) is mixed with 100 parts of fine road dust, slaked lime, sawdust, etc. and blown by machine or manually. It is effective in dense vegetation.
- D.D.T (dichlorodiphenyltrichloroethane) – This is a white crystalline powder. It is used as a 5-10% oily solution for spraying, or in 10% concentration if used as a dust. It is used in the following forms –
a. D.D.T aromax emulsion consisting of D.D.T, aromax, soap flakes and water. It is sprayed.
b. D.D.T kerosene oil
c. Pyrethrums extract 4%, D.D.T and kerosene oil. This is also sprayed.
d. D.D.T in aerosols. The aerosol contains 0.4% pyrethrum, 3% D.D.T, 5% cyclohexane and 5% sesame oil.
e. Gammexane, or benzene hexachloride (B.H.C.). This is Gammexane P520 (water-dispersible powder) suspended in water and sprayed.
Flies – Fly control proceeds along three lines:
I. Prevention of breeding of flies – This aims at the prompt removal and disposal of all refuse. All garbage, kitchen waste and similar refuse should be placed in garbage receptacles. For the destruction of the eggs, larvae and pupae of flies, powdered borax can be applied in solution.
II. Protection of food from flies.
III. Destruction of adult flies
a) By flytraps
b) Poisonous baits may be used, e.g. a 2% formalin solution with sugar and milk, or a sodium arsenate solution.
c) Spraying D.D.T, pyrethrum in kerosene, dieldrin, chlordane or B.H.C will readily kill flies.
Carpet beetles – These are 2-4 mm long, resembling small mottled brown, gray and cream ladybirds. Adults are often seen from April to June, seeking places to lay their eggs; the larvae are most active in October, before they hibernate. The adult beetle feeds on the pollen and nectar of flowers but lays its eggs in old birds' nests, fabrics and accumulated fluff in buildings. The larvae that hatch from the eggs do the damage by feeding on feathers, fur, hair, wool and articles made from those substances. Carpet beetles are now major textile pests and do more damage than moths.
The life cycle takes about a year and the larvae can survive for several months without food.
Frequent vacuum cleaning of fluff and debris from storage cupboards, floorboards, carpets and upholstery is the main means of control. Insecticide may be sprayed between floorboards, under carpets, under felts and in crevices.
Wood-boring beetles – The common furniture beetle lays about 20-60 eggs in cracks and crevices of unpolished wood. On hatching, the grub eats its way through the wood, and this tunneling, which weakens the wood, may go on for 2-3 years. Eventually the grub matures, bores towards the surface of the wood and changes into a pupa. From this emerges the beetle, which bites its way into the open air through an exit hole about 0.15 cm across. The beetles have a very short life of 2-3 weeks.
Small piles of bore dust beneath the holes indicate the presence of active worms in the wood, in which case treatment is necessary.
Eggs are laid on unpolished wooden surfaces, so the use of shellac, varnish, lacquer or polish acts as a deterrent. To kill woodworm, the exit holes should be sprayed, brushed or injected several times with antokil, usprinol, pyrethrum, etc. There are other treatments, such as blowing in poisonous gases. A badly infested piece of wood is better burnt.
Moths – Clothes and house moths are of pale buff colour and are seen flying mainly between June and October. They rarely live longer than a month.
The female lays its eggs (approx. 200 at a time) in some dark, warm place, on material that the grubs (larvae) can later eat. Once the eggs hatch, the grubs immediately feed on the material as they move about. When fully grown, they crawl into sheltered places, spin cocoons around themselves and become chrysalises (pupae). They later emerge as moths and start another lifecycle. The entire lifecycle varies from 1 month to 2 years.
The materials attacked by the moth grubs are wool, fur, skin and feather; they do not attack rubber or man-made or vegetable fibers. While feeding on these materials, the grubs form small holes in the articles, and damage occurs most frequently during storage because of the warmth, darkness and lack of disturbance.
It is always advisable that articles to be stored should be clean, protected by a moth deterrent and inspected frequently. Commonly used moth deterrents are naphthalene and camphor tablets.
Rats and mice – Rats and mice are more likely to be found in kitchens and dining rooms than in bedrooms. Scraps of food, candles, soap, etc. attract them. Hygienic storage and disposal of food and of all kinds of waste, and cleanliness of all areas where food is handled, are important to prevent an infestation.
Destruction of rats:
a) By poisonous baits – A bait consists of a base to which some poison is added. The common bases are flour, bread, sugar, etc. The most common poison used is barium carbonate; others are white arsenic, phosphorus, zinc phosphide, alpha-naphthyl thiourea, sodium fluoroacetate (1080) and dicoumarin (warfarin).
b) By fumigation – This is a very effective method and should be carried out by a trained squad. Cyanogas 'A' dust or Cymag is used. Other gases used are carbon monoxide, carbon dioxide and sulphur dioxide.
c) By trapping – Generally wire cage traps are used for the purpose. Trapped rats are transferred everyday to the collecting cage, which is taken to the disposal station where the rats are drowned by immersing the whole cage in a tub containing phenyl or water.
Wood Rot –
Dry Rot – This is the term used for the decay of timber by a fungus which grows and lives on wood, finally reducing it to a dry, crumbling state – hence the name dry rot. It starts in damp (more than 20% moisture), unventilated places and spreads by sending out thin, root-like strands that creep over brickwork to attack the surrounding wood. Once the fungus takes hold, it produces fruiting bodies. The spores are produced in enormous numbers and are so small that they move as a reddish-brown dust, which may be blown about easily over great distances.
Dry rot can be recognized by its offensive, mouldy smell, by its friable condition and by the 'dead' sound when the wood is hit with a hammer. When rot occurs, it is necessary to find the reason for the dampness of the wood.
Having ascertained and cured the cause of dampness, all rotten wood must be cut out to 20-30 cm beyond the infested area and burnt. All brickwork near the infested wood should be sterilized by the use of a blowlamp and, when cool, treated with a preservative before repairing.
Wet Rot – This is the name given to fungal decay of timber in very damp situations. The fungus usually involved is the cellar fungus, which attacks timber when it is wet. It requires considerably more moisture for development than the dry rot fungus (approx. 40-50% of the dry weight of the wood).
The fungus causes the darkening of the wood, which breaks up into small rectangular pieces on drying. There is usually a thin skin of sound wood left on the surface of the timber, but rarely is there any evidence of fungus growth.
Since the fungus requires relatively wet timber, its eradication is much simpler than in the case of dry rot. Growth can be checked at once if the timber is thoroughly dried and the source of moisture removed. Badly decayed wood should be cut out and replaced, and the remaining timber treated with a fungicide.
APPLICATION OF PESTICIDES
The application of pesticides must be closely monitored and controlled. Only those personnel properly trained in the storage, dilution, and application of pesticides and properly licensed by the appropriate state agency should be authorized to apply pesticides.
Types of Pesticides
Pesticides may be classified in a number of ways:
1. By their effectiveness against certain kinds of pests:
Insecticides versus insects
2. By how they are formulated and applied:
3. By the chemistry of the pesticide:
Chlorinated hydrocarbons (Chlordane)
Organic phosphates (Malathion)
Natural organic insecticides (Pyrethrum)
Effectiveness against a particular pest species, safety, potential hazard to property, the types of formulation available, the equipment required, and the cost of the material must all be taken into account when choosing a pesticide for a particular job. Recommendations change with experience, the development of new materials, and new governmental regulations. However, there is a degree of stability, and most recommendations last over a period of years.
Chlordane is a chlorinated hydrocarbon. It is a wide-spectrum, long-residual insecticide widely used against household pests, termites, and turf pests. It is regarded as moderately toxic; however, certain formulations commonly used for termite control have a high percentage of the active compound and should be regarded as quite hazardous to non-professionals. Preformulated 2 to 3 percent chlordane oil solutions are available to the nonprofessional for cockroach control. Generally, the nonprofessional lacks the equipment and knowledge to do a satisfactory job of controlling cockroaches.
Diazinon (spectracide) is an organophosphate-type, broad-spectrum insecticide that has a rather long residual and is fairly toxic. It is widely used to control cockroaches, ticks, ants, silverfish, spiders, and many other household pests. Diazinon is formulated as a 50 percent wettable powder or 25 percent emulsion. If used by a nonprofessional, considerable care should be exercised and directions followed precisely.
DDVP (vapona, dichlorvos) is an organophosphate, volatile insecticide-acaricide which is used under special conditions. Although it is quite toxic, DDVP breaks down rapidly. It is used in cockroach control programs by professional pest control operators and is widely used against flies. It is formulated as a resin strip which is hung from the ceiling. In many cases, however, these resin strips are used in an ineffective manner: one or two strips cannot possibly protect a huge room that has a constant source of fresh air entering from outside.
Kelthane (dicofol) is a chlorinated hydrocarbon-type miticide that is relatively safe when used according to directions. It is widely employed for the control of mites. It is available as a 35 percent wettable powder and is recommended for use by nonprofessionals.
Malathion is an organophosphate-type, broad-spectrum insecticide that presents a very low hazard when used according to directions. Although only slightly toxic to man and other mammals, it is highly toxic to fish and birds. It is effective against the two-spotted spider mite.
Methoxychlor (marlate) is a chlorinated hydrocarbon-type, slightly toxic insecticide that is being used as a replacement for DDT. Methoxychlor is not accumulated in human body fat and does not contaminate the environment as DDT does. It is available as a 50 percent wettable powder and is commonly sold as marlate. It is safe for use by nonprofessionals.
DDT is a chlorinated hydrocarbon type, broad spectrum insecticide that is very stable and persistent. It is only moderately toxic to man. However, because of its cumulative and persistent qualities it is no longer widely used.
Dimethoate (cygon) is a moderately toxic, organophosphate-type insecticide used for fly control. It is not recommended for use by nonprofessionals.
The environmental concern with insects (pests) is primarily preventive in nature. Clean-out and clean-up will probably do more to control insects in areas where they are not wanted than any other prevention that can be adopted.
|
The blood pressure is the force exerted by the blood on the wall of the blood vessel;
P = F/A
where P is the pressure, F is the force, and A is the area.
Figure 1: The blood pressure is the force exerted by the blood on the wall of the blood vessel.
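As a quick numerical illustration of P = F/A (the 1 cm² wall patch below is an arbitrary choice for the example), a typical systolic pressure of 120 mmHg works out to about 16 kPa:

```python
# Illustrative unit conversion and force calculation for P = F/A.
MMHG_TO_PA = 133.322          # pascals per mmHg

systolic_mmhg = 120
pressure_pa = systolic_mmhg * MMHG_TO_PA
print(f"{systolic_mmhg} mmHg = {pressure_pa / 1000:.1f} kPa")

# Rearranging P = F/A: force on a 1 cm^2 patch of vessel wall.
area_m2 = 1e-4                # 1 cm^2 in square metres
force_n = pressure_pa * area_m2
print(f"Force on 1 cm^2 of wall: {force_n:.2f} N")
```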
In other words:
The blood inside the blood vessel exerts a force that tends to keep the blood vessel open, i.e. not collapsed.
Therefore, the blood vessel itself plays an important role in determining the blood pressure: when the vessel is more malleable, i.e. it offers less resistance to the force exerted by the blood, the resulting blood pressure is lower for the same force F than when the vessel is stiff.
To sum up, the blood pressure is the outcome of two main and interacting parameters:
1- the force exerted by the blood on the vessel wall to keep it open and
2- the blood vessel malleability or distensibility, i.e. malleable or stiff.
The blood pressure is an important parameter of the condition of the blood circulation and is conveniently measured by an indirect method using an apparatus called a sphygmomanometer.
This method relies on detecting the sounds caused by blood flow turbulence as the compressing cuff is released: the first appearance of sound marks the highest level of blood pressure, i.e. the systolic blood pressure, and the disappearance of sound marks the lowest level, i.e. the diastolic blood pressure.
The following diagram shows this indirect measurement of blood pressure.
Figure 2: The indirect measurement of blood pressure using a sphygmomanometer.
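The first-sound/last-sound rule can be written as a small algorithm. The Python sketch below runs it on fabricated cuff-deflation data; real automated monitors typically use the oscillometric method instead, so this only illustrates the auscultatory principle described above:

```python
# Toy auscultatory reading: scan cuff pressures from high to low and find
# where Korotkoff sounds first appear (systolic) and disappear (diastolic).
# The pressure sweep and sound amplitudes below are fabricated.

cuff_mmhg   = [180, 170, 160, 150, 140, 130, 120, 110, 100, 90, 80, 70]
sound_level = [  0,   0,   0,   0,   2,   5,   7,   6,   4,  2,  0,  0]
THRESHOLD = 1    # minimum amplitude counted as an audible sound

audible = [p for p, s in zip(cuff_mmhg, sound_level) if s > THRESHOLD]
systolic, diastolic = audible[0], audible[-1]
print(f"Systolic ~{systolic} mmHg, diastolic ~{diastolic} mmHg")
```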
The blood pressure alternates between a maximum level (systolic) and a minimum level (diastolic) with a gradient slope in between!
Can the behaviour of blood pressure be approximated to the behaviour of a regular wave?
Figure 3: Can the behaviour of blood pressure be approximated to the behaviour of a regular wave?
Critique/evaluation of the blood pressure as a conventional parameter for assessing the condition of the blood circulation.
The blood pressure is dynamic. It changes continuously to reflect many physiological processes and conditions.
However, such changes in blood pressure are kept within a certain range, which can vary significantly from one person to another and still be considered normal or physiological.
The human body may be described as being in one of three different conditions of activity or stress, considering both physical and emotional aspects:
1- Resting
2- Working or exercising
3- Demanding stress
Because the blood vessel is living and viable, it can respond to dozens of signals and/or signal combinations.
Examples of signals/ conditions:
1- Circulating chemical/ biological molecules.
2- Cold/ hot
3- Full stomach; full rectum
4- Thirst; hunger; smells; oxygen
5- Mood condition; accept/refuse feeling
And as such the blood pressure changes over time and condition-wise to meet the body’s need of sufficient and effective tissue supply of nutrients and oxygen and to eliminate waste products and CO2.
Why should the blood pressure be changing or dynamic?
Gross changes and perceptible phenomena can find their origin in subtle underlying changes.
There could be subtle, i.e. very fine, changes in blood pressure that are needed for proper and smooth blood flow within the blood vessel.
These may be due to the continuous (breathing-like) changes in vessel wall diameter (mainly in small blood vessels) that would help stir the blood into a largely homogeneous mixture. Otherwise, the blood in the vessels could separate into fractions.
Figure 4: Subtle changes in small blood vessel diameter (wall contraction) that would help blood mixture stirring or mixing. The figure shows one cycle (intrinsic mini-hearts).
Here, we may appreciate how the smoothness of the blood vessel lining helps both quiet blood flow and blood mixing.
The blood pressure is dynamic and changes to meet the needs of the body according to each situation.
The blood pressure changes are kept within a range that may differ significantly from one person to another and would be considered normal or physiological for that person.
The blood pressure is the outcome of a few other parameters, e.g.:
1- blood kinetic energy
2- blood vessel condition (malleable, i.e. healthy, vs. stiff; relaxed vs. contracted) and
3- blood volume
which in turn are influenced by body condition.
Interpretation and handling of blood pressure.
General concepts:
1. The blood pressure knows what to do. It corrects itself by itself. However, it may need some help.
2. Gross changes in blood pressure can be understood and are in the most part temporary and harmless.
3. Low blood pressure and high blood pressure could be considered two faces of one coin as they share much in their conservative handling.
4. In the handling and evaluation of blood pressure, every person should know his/her unique normal.
5. The blood pressure reading should be interpreted in context, not on its own.
6. Gross blood pressure changes may be
1) constitutional and need only conservative handling;
2) constitutional and need both conservative and medical handling;
3) causal and need to know and handle the cause.
Conservative handling of blood pressure.
There could be a paradigm for both low and high blood pressure.
Conservatively, low and high blood pressure are handled in the same way, because the aim is to help the body correct itself by simply filling the gaps.
First: enumerate the gaps that would be relevant and then order them according to their weight/influence/contribution to the present condition.
Figure 5: Example of factor chart analysis.
Second: choose the gap-filling order according to your best intuition, e.g. an incremental correction model that respects the body's energy budget. For the above factor chart analysis, the filling order could be:
To sit on toilet (3) — to rest for a while/walk for a while (1) — to drink sugary water or sugary juice (2) — to do gentle exercise (4) — to rest/lie down for some time if you think so (3’).
This could be a matter of trial and error, i.e. learning process. And one can know how things would be accurate and correct for him/her personally.
Incremental and logic correction is needed to achieve a smooth and satisfactory result.
Medical handling of blood pressure.
Figure 6: Blood pressure fluctuates normally according to body activity and/or time of day.
Figure 7: Arbitrary blood pressure value (line) without an antihypertensive drug (a) and with the drug (b). A model suggested for drug monotherapy, i.e. one drug taken once daily. Notice the dip in blood pressure after the drug dose (arrow).
Figure 8: Arbitrary blood pressure value (line) without medication (a), with only one medication (b), and with another medication added (c). In the double or multi-drug therapy model, one medication is suggested to produce a background blood pressure lowering (here drug b) that is more durable and more effective, while the other drug(s) produce a further lowering of blood pressure for a shorter time; i.e. the BP-lowering effect of the helping drug(s) may not be fully appreciated without considering the effect of the principal drug that produces the background lowering, i.e. the new BP baseline.
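To make Figures 6-8 concrete, here is a toy numerical model; every constant in it is invented for illustration, and it is not a pharmacokinetic model. It combines a circadian baseline with a once-daily drug whose effect is strongest just after the dose and then decays:

```python
import math

def bp(hour, dose_hour=8, drug_drop=20.0, tau=8.0):
    """Toy systolic BP (mmHg): circadian baseline minus a decaying drug effect."""
    baseline = 115 + 10 * math.sin(2 * math.pi * (hour - 10) / 24)
    since_dose = (hour - dose_hour) % 24          # hours since the daily dose
    drug_effect = drug_drop * math.exp(-since_dose / tau)
    return baseline - drug_effect

for h in (6, 8, 9, 12, 18, 23):
    print(f"t = {h:2d} h: ~{bp(h):.0f} mmHg systolic")
```

The dip just after the dose and the higher background level late in the dosing interval correspond to the arrows and baselines sketched in the figures.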
For further discussion and perspectives:
1- role of each B.P. variable on either systolic or diastolic blood pressure.
B.P. variable        | Systolic B.P. | Diastolic B.P.
Blood vessel wall    | ↑↑?           | ↑?
Blood kinetic energy | ↑↑?           | ↑↑?
Blood volume         | ↑?            | ↑↑?
Heart rate           | ↑↑?           | ↑↑?
2- what are the considerations needed when assessing blood circulation condition using the blood pressure?
3- what could be the underlying parameter of blood circulation (Ɛ) that the blood pressure serves to shadow?
4- could there be measurable parameters/variables other than blood pressure that can serve to assess the blood circulation condition?
5- molecular markers for assessing blood circulation; perspective of blood circulation assessment.
|
Synchrotron X-Ray Research Being Used to Advance LASIK
The scientists have found that the healthy cornea has a unique fibrous collagen structure. However, in a disorder called keratoconus, in which the center of the eyeball starts to bulge out in a cone shape, causing severe vision disturbance, this structure breaks down. Surgery can correct keratoconus, but it entails risks.
“The significance of this research is that, with a greater understanding of the structure of the cornea at the molecular level, we are able to suggest methods for improving corneal surgery by increasing our understanding of how physical disruption of the cornea’s structure can lead to refractive changes,” Meek said.
Another benefit of the work, he added, is that it’s a step toward developing an artificial cornea that could satisfy the huge demand for donor corneas in many parts of the world.
“In theory, it would take months to scan a single cornea using a conventional laboratory source,” Meek explained, “but due to the high intensity of Diamond’s X-rays, a cornea can be scanned in just a few hours. In addition, because synchrotron X-rays can be focused to a tiny spot, we can generate more detailed maps of corneal structure than ever before. In 12 months time at Diamond, the achievable spot size on the non-crystalline diffraction beamline will be in the order of 10 microns, which is less than a 10th the size of a human hair.
“This means that, within a few years, the work will be at a stage where it can feed into the development of artificial biological corneal constructs that mimic the remarkable natural properties of this extraordinary tissue.”
|
OUR CHEMICAL SENSES: 2. TASTE
Experiment: How Taste and Smell Work Together
Developed by Marjorie A. Murray, Ph.D.; Neuroscience for Kids Staff Writer
FEATURING: A "CLASS EXPERIMENT"
Students learn how to investigate the sense of taste and then find out how to plan and carry out their own experiments. In the "CLASS EXPERIMENT," students find that the ability to identify a flavor depends on the sense of smell as well as the sense of taste. They learn basic facts about sensory receptors, nerve connections, and brain centers.
In "TRY YOUR OWN EXPERIMENT," students design experiments to further explore the sense of taste. They can extend the Class Experiment by finding the chemical categories that taste receptors can detect without olfactory input.
In other self-directed investigations, they can learn whether mixing substances makes it harder to identify the components, or whether identifying flavors is difficult when other sensory input interferes.
SUGGESTED TIMES for these activities: 45 minutes for introducing and discussing the activity, 45 minutes for the "Class Experiment," and 45 minutes for "Try Your Own Experiment."
1. Overview of the smell and taste systems
Odor and food molecules activate membrane receptors
Reports from our noses and mouths alert us to pleasure, danger, food and drink in the environment. The complicated processes of smelling and tasting begin when molecules detach from substances and float into noses or are put into mouths. In both cases, the molecules must dissolve in watery mucous in order to bind to and stimulate special cells. These cells transmit messages to brain centers where we perceive odors or tastes, and where we remember people, places, or events associated with these olfactory (smell) and gustatory (taste) sensations.
The neural systems for these two chemical senses can distinguish thousands of different odors and flavors. Identification begins at membrane receptors on sensory cells, where odorant or taste molecules fit into molecular slots or pockets with the right "lock and key" fit. This latching together of binding molecule or ligand and membrane receptor leads to the production of an electrical signal, which speeds along a pathway formed by nerve cells (neurons) and their extensions called axons. In this way, information reaches brain areas that perceive and interpret the stimulus.
A membrane receptor will respond to several structurally related molecules
The activation of receptors by discrete chemical structures is not absolute, because a given membrane receptor will accept a number of structurally similar ligands. Nevertheless, we can discriminate many thousands of smells and tastes, even though some chemicals stimulate the same receptor. How are we able to distinguish these? Our ability results from the fact that most substances we encounter are complex mixtures, which activate different combinations of odor and taste receptors simultaneously. Thus, each substance we smell or taste has a unique chemical signature. In the laboratory, researchers frequently test people or animals with pure individual chemicals in order to find the best stimulus for a receptor, but in the real world we seldom encounter these molecules alone.
Although we do have overlap in the response of taste and smell receptors to ligands, scientists have identified quite a number of receptor types. Humans probably have hundreds of kinds of odor membrane receptors, and on the order of 50 to 100 different kinds of taste receptors. It is true that we typically describe only five categories of tastes (see below): this means that each of the categories probably has more than one type of receptor. Further research will show how this puzzle fits together.
The neural systems for taste and smell share several characteristics
Although the neural systems (sensory cells, nerve pathways, and primary brain centers) for taste and smell are distinct from one another, the sensations of flavors and aromas often work together, especially during eating. Much of what we normally describe as flavor comes from food molecules wafting up our noses. Furthermore, these two senses both have connections to brain centers that control emotions, regulate food and water intake, and form certain types of memories.
Another similarity between these systems is the constant turnover of olfactory and gustatory receptor cells. After ten or so days, taste sensory cells die and are replaced by progeny of stem cells in the taste bud. More surprising is the story of olfactory sensory cells. These are not epithelial cells like taste cells, but rather neurons, which are not commonly regenerated in adults (although recent evidence shows that new neurons are produced, even in the brain). Researchers are investigating how taste perception and odor recognition are maintained when cells die and new connections to the nervous system must be generated.
2. Taste sensory cells are found in taste buds
How do these cells begin the process that leads to recognizing tastes? As mentioned in Section 1, the membrane receptors on sensory cells contain molecular pockets that accommodate only compounds with certain chemical structures. According to current research, humans can detect five basic taste qualities: salt, sour, sweet, bitter, and umami (the taste of monosodium glutamate and similar molecules). Investigations of the molecular workings of the first four show that salt and sour receptors are types of ion channels, which allow certain ions to enter the cell, a process that results directly in the generation of an electrical signal.
Sweet and bitter receptors are not themselves ion channels, but instead, like olfactory receptors, accommodate parts of complex molecules in their molecular pockets. When a food or drink molecule binds to a sweet or a bitter receptor, an intracellular "second messenger" system (usually using cyclic AMP) is engaged. After several steps, concluding with the opening of an ion channel, the membrane of the taste receptor cell produces an electrical signal. (The second messenger system is a signaling mechanism used in many sensory nerve cells as well as in other cells in the body.)
Although humans can distinguish only five taste qualities, more than one receptor probably exists for some of these. This is supported by the finding that some people cannot detect certain bitter substances but do respond to others, indicating that only one kind or class of bitter receptor is missing, probably as the result of a small genetic change. (You can demonstrate this with phenylthiourea-impregnated papers in the classroom, as described in the Teacher Guide.)
3. Taste signals go to the limbic system and to the cerebral cortex
Where do taste messages go once they activate the receptor cells in the taste bud? The electrical message from a taste receptor goes directly to the terminal of a primary taste sensory neuron (Figure 2), which is in contact with the receptor cell right in the taste bud. The cell bodies of these neurons are in the brainstem (the lower part of the brain, below the cerebrum), and their axons form pathways in several cranial nerves. Once these nerve cells get electrical messages from the taste cells, they in turn pass the messages on through relay neurons to two major centers, the limbic system and the cerebral cortex, as shown in Figure 3.
The limbic system (which includes the hippocampus, hypothalamus and amygdala) is important in emotional states and in memory formation, so when taste messages arrive here, we experience pleasant, or aversive, or perhaps nostalgic feelings. In the frontal cerebral cortex, conscious identification of messages and other related thought processes take place. The messages from the limbic system and the frontal cortex may be at odds with each other. For example, if you are eating dinner at a friend's home and the first bite of a food item is bitter, you may feel an aversion to eating more. But if you know the food is merely from another culture and not harmful, you may make a conscious decision to continue eating and not offend your hosts. Thus, taste messages go to more primitive brain centers where they influence emotions and memories, and to "higher" centers where they influence conscious thought.
Figure 3. Central taste pathways. (See text for explanation)
4. Patterns of nerve activity encode taste sensations
In other sensory systems, stimulation often activates nerve cells in a spatial pattern that reflects the body area reporting the sensory input. For example, a pointed object touching the thumb activates neurons in a defined patch of the cerebral cortex. When the object touches the pointer finger, neurons right next to the "thumb cortical neuron" are activated; that is, neighborhood relationships are maintained from the skin to the cortex.
But how is taste encoded? How does the brain know that something is sweet? Here we will consider only taste sensations, and not the additional flavors that odors add to an eating experience. Researchers have detected some mapping of tastes in higher areas such as the taste (gustatory) nucleus in the brainstem, the thalamus, and the cerebral cortex. The mapping appears to be geographical, as in the touch system: messages from the tip of the tongue go to different areas in the brain than messages from the sides of the tongue. Further, some evidence indicates that cells receiving "sweet" messages in the brainstem may be grouped together, as are cells receiving salt, sour, and bitter messages.
Researchers have also found that while each taste sensory cell responds best to one type of taste, say sweet, it can also respond weakly to another, perhaps bitter. This happens because taste cells carry more than one type of membrane receptor, not because bitter compounds are squeezing into sweet receptor molecular slots. To complicate matters further, a given nerve axon forms branches in the tongue and sends terminals to several different taste cells. Thus, it will carry different messages to the brain, depending on which of the taste cells it serves are reporting at the time. These complicated connections make it likely that neurons in the brain detect a complex taste, such as sweet-sour or bittersweet, by a pattern of activated sensory nerve axons rather than by dedicated, pre-defined groups of sensory and central neurons.
As you might expect, researchers are still trying to sort out this system, and no final answers are at hand. But as researchers have found for the visual and olfactory systems, patterns of axon activation probably give rise to our distinct taste sensations.
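To make pattern coding concrete, here is a toy sketch. The axons and their response profiles are invented for illustration; the point is only that a mixture such as sweet-sour produces a distinctive pattern of activity across several axons, rather than firing one dedicated "sweet-sour" neuron:

```python
# Toy sketch of pattern coding: each axon responds to several taste
# qualities with different strengths, and a taste is identified by the
# overall pattern of activation. All numbers here are invented.
profiles = {
    "axon_A": {"sweet": 0.9, "sour": 0.1, "bitter": 0.2},
    "axon_B": {"sweet": 0.2, "sour": 0.8, "bitter": 0.1},
    "axon_C": {"sweet": 0.3, "sour": 0.2, "bitter": 0.9},
}

def activation_pattern(stimulus):
    """Map a mix of taste qualities to a vector of axon activations."""
    return {axon: sum(weights[q] * amount for q, amount in stimulus.items())
            for axon, weights in profiles.items()}

# A sweet-sour mixture drives axons A and B strongly and C weakly; that
# combined pattern, not any single axon, is the code the brain reads.
print(activation_pattern({"sweet": 1.0, "sour": 0.7}))
```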
5. Sensory processing allows us to interpret flavors
To summarize how we perceive and interpret flavors, let's follow some food into your mouth. It's a warm June day and as you drive through the countryside, you see a roadside stand ahead. Stopping, you buy a flat of freshly picked strawberries to make jam, but you grab a few to sample. As you bite into the first one, the tart but sweet juice squishes out and floods your mouth; escaping molecules waft into your nose and assail your odor receptors. Many types of molecules are present, and each fits into a slot on a taste or odor membrane receptor that can accommodate only that class of molecular structures.
As soon as the molecules stick to their receptors, both ion channels and second messenger systems go into gear, quickly causing each stimulated cell to produce an electrical signal. The signals flash through the axons of taste and olfactory sensory neurons and on to cells in the brain. The messages zip to several places by way of axons from secondary or relay neurons. Messages to the limbic system give you that "aahhh" feeling; others activate memories of previous strawberries, warm summer days, and steaming pots of bubbling jam. Still other pathways stimulate motor centers to cause salivation, chewing, and swallowing. The signals to your frontal cortex activate motor neurons that allow you to say, "Wow!" and you turn around to buy a second flat of berries.
The experiences of perceiving and interpreting the strawberry flavor are the result of activating a pattern of neural components, and in turn, a pattern of specific memories, feelings, and thoughts.
6. Genes determine the kinds of taste receptors that we have, and experiences shape our perceptions
Taste preferences and perceptions vary widely among individuals; we all know someone who hates bananas, or loves rhubarb, or is unusually fond of chocolate. Studies have shown that people who are unable to perceive one type of taste stimulus frequently have small genetic differences from the general population. Thus, in some cases, foods really do not taste the same to everyone. In fact, researchers have found that some people are "supertasters," to whom sweet things taste much sweeter and bitter things much more bitter than to the average person. These supertasters have more papillae on their tongues than usual, so they probably have more taste receptors.
Other differences in taste perception may be temporary. A temporary general inability to taste foods can result from a cold or certain medicines, and it usually is caused by the blocking of olfactory rather than taste receptors. (You can test this when you have a bad cold: strawberries, for example, will still taste sweet and tart, even though you cannot discern many of the other qualities you usually do.)
Experiences as well as genetics influence our food preferences. Anyone who has become memorably ill after eating a particular food seldom wants to eat it again, perhaps for years or forever. Animal experiments on this aversion phenomenon showed that pairing food of a specific flavor with a mild poison that induced vomiting caused permanent refusal to eat anything with that flavor. Pairing the poison with an auditory tone, however, did not result in aversion to the tone, even though the animal became ill this time as well. Scientists believe the close association of stomach illness with taste and odor is a survival trait that many animals have evolved.
7. Taste disorders may be genetic, or may result from illness or injury
Although genetic differences account for some cases of ageusia (the complete loss of taste) and other disorders of taste, most are caused by illnesses or accidents. Other taste disorders include hypogeusia (diminished taste sensitivity), hypergeusia (heightened sense of taste), and dysgeusia (distortions in the sense of taste). Small growths in the nasal cavities (polyps), dental problems, hormonal disturbances, or sinus infections, as well as common colds, may cause chemosensory losses. Injury to the head may damage nerve centers or break axons. Patients who receive radiation therapy for cancers of the head or neck often develop changes in their senses of taste and smell.
Are these disorders serious? Our sense of taste can warn us that something we put into our mouths may be spoiled or dangerous. Further, eating is much more than just "food intake" for humans; it is an important part of our social lives and a source of pleasure. People should see a doctor if they realize something is wrong with their sense of taste.
First, prepare students for lab activities by giving background information according to your teaching practices (e.g., lecture, discussion, handouts, models). Because students have no way of discovering sensory receptors or nerve pathways for themselves, they need some basic anatomical and physiological information. Teachers may choose the degree of detail and the methods of presenting the sense of taste, based on grade level and time available.
Offer students the chance to create their own experiments
While students do need direction and practice to become good laboratory scientists, they also need to learn how to ask and investigate questions that they generate themselves. Science classrooms that offer only guided activities with a single "right" answer do not help students learn to formulate questions, think critically, and solve problems. Because students are naturally curious, incorporating student investigations into the classroom is a logical step after they have some experience with a system.
The "Try Your Own Experiment" section of this unit (see the accompanying Teacher and Student Guides) offers students an opportunity to direct some of their own learning after a control system has been established in the "Class Experiment." Because students are personally vested in this type of experience, they tend to remember both the science processes and concepts from these laboratories.
Use "Explore Time" before experimenting
To encourage student participation in planning and conducting experiments, first provide Explore Time or Brainstorming Time. Because of their curiosity, students usually "play" with lab materials first even in a more traditional lab, so taking advantage of this natural behavior is usually successful. Explore Time can occur before the Class Experiment, before the "Try Your Own Experiment" activity, or both, depending on the nature of the concepts under study.
Explore before the Class Experiment
To use Explore Time before the Class Experiment, set the lab supplies out on a bench before giving instructions for the experiment. Ask the students how these materials, along with the information they have from the lecture and discussion, could be used to investigate the sense of taste. Give some basic safety precautions, then offer about 10 minutes for investigating the materials. Circulate among students to answer questions and encourage questions. After students gain an interest in the materials and subject, lead the class into the Class Experiment with the Teacher Demonstration and help them to formulate the Lab Question. Wait until this point to hand out the Student Guide and worksheets, so students have a chance to think creatively. (See the accompanying Guides.)
Explore before "Try Your Own Experiment"
To use Explore Time before Try Your Own Experiment, follow the procedure above, adding the new materials for student-generated experiments. Let the students suggest a variety of ideas, then channel their energies to make the lab manageable. For example, when a number of groups come up with similar ideas, help them formulate one lab question so that the groups can compare data. The goal is to encourage students to think and plan independently while providing sufficient limits to keep the classroom focused. The Teacher and Student Guides contain detailed suggestions for conducting good student-generated experiments.
By reaching Project 2061 Benchmarks for Science Literacy, students will also fulfill many of the National Science Education Standards and individual state standards for understanding the content and applying the methods of science. Because the Benchmarks most clearly state what is expected of students, they are used here. Below is a list of Benchmarks that can be met while teaching the taste sense activities. The Benchmarks are now available on-line.
The Benchmarks are listed by chapter, grade level, and item number; for instance, 1A, 6-8, #1 indicates Chapter 1, section A, grades 6-8, benchmark 1.
The process of inquiry used in the sense of taste activities will help students reach the following summarized Benchmarks:
What do you think of when you hear the word “clock”? Do you picture a couple of arrow hands on a round face with numbers around the rim, going tick-tock? That’s the old perception of clocks. Today’s clocks are worlds away from those pocket watches and grandfather clocks with pendulums. This is the age of technology, the Internet, social media, and a vast net of connectivity. And believe it or not, today’s clocks are actually what run that world, and they do so with atomic precision.
What is an atomic clock, more accurately called a chip scale atomic clock, and what makes it atomic? Contrary to what the name invokes, there is nothing nuclear about these devices. The word atomic comes from the inner workings of the clock. Where traditional clocks work using spring-powered gears that click as they turn to count seconds, minutes, and hours, these atomic clocks work by counting the frequency of electromagnetic waves emitted by a container of cesium atoms. The atoms are held in a container about the size of a grain of rice, which is probed by a very small laser beam so that the electromagnetic waves can be measured.
These clocks are a technological wonder: they run on 100 times less power than other clocks and are no bigger than a matchbox. Their accuracy is to be marveled at; losing only one second per 50 billion years, the atomic clock is the most precise device for measuring time that mankind possesses. Another great difference between chip scale atomic clocks and their older, less advanced counterparts is that they do not actually tell the time of day. Instead, they serve as a reference against which other clocks check the accuracy of their timekeeping, which is the main way that phones, the Internet, and our wide-spanning connections stay on the same basic timeline. This is crucial to preventing mass confusion and upset worldwide.
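Taking the article's one-second-per-50-billion-years figure at face value, a little arithmetic shows the implied precision. The cesium frequency below is the SI definition of the second; everything else follows from the article's claim:

```python
# Rough arithmetic on the claimed accuracy: one second lost per
# 50 billion years. The 50-billion-year figure is the article's claim.
CESIUM_HZ = 9_192_631_770                  # SI definition of the second
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 3.156e7

fractional_error = 1.0 / (50e9 * SECONDS_PER_YEAR)
print(f"implied fractional error: {fractional_error:.1e}")   # ~6.3e-19

# Equivalently, of the ~9.19 billion cesium oscillations counted each
# second, only a vanishingly small fraction may be miscounted:
print(f"miscounted cycles per second: {CESIUM_HZ * fractional_error:.1e}")
```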
The other main purpose of these clocks is to be used in tandem with another atomic clock, to keep two groups of otherwise unconnected people in perfect synchronization. This is key to applications around the globe, since GPS signals are not always reliable or able to reach remote areas and destinations. For example, miners deep underground can plan safely with people above, and deep-sea explorers can stay on the same page as their counterparts on the surface, which is key to safety in both cases.
Communication and data relay across nations depend on these chip scale atomic clocks to ensure that there is no room for error or miscommunication, a crucial aspect of keeping the peace between the world's powerful nations. The importance of the military applications also cannot be overstated.
Chip scale atomic clocks are also being designed for portable use, and geologists, hydrologists, and scientists in other fields are finding new ways to use them, such as accurately measuring changes to the surface of our planet and tracking ice shifts for various climate studies. To sum up, chip scale atomic clocks are invaluable when it comes to keeping the world coordinated and connected.
Cerebral palsy is an umbrella term for a group of disorders that affect coordination, posture, and muscle movement. CP is caused by damage to the developing brain before or during birth, or up to a year or so after birth. This damage can be congenital or can result from accident or neglect. Symptoms begin to show in early childhood and can last a lifetime.
Cerebral palsy can vary in its symptoms. The most obvious are difficulty with body movement, coordination, posture, reflex, balance, and other muscular control. CP can also cause difficulties with learning, speech, epilepsy, hearing, and vision.
Cerebral palsy is known as a non-progressive disorder. While individual symptoms may change over the course of the patient’s life, the condition in general does not get worse over time. Therapy and specialized equipment can help with the symptoms, but cerebral palsy cannot be cured.
Types of Cerebral Palsy
There are four main types of cerebral palsy:
These categories are determined by the primary effects of the condition, which will vary according to which area or areas of the brain were damaged.
- Spastic CP is marked by stiff muscles and/or limited range of motion.
- Dyskinetic CP presents as uncontrollable muscle movements.
- Ataxic CP is characterized by impaired balance and/or coordination.
- Mixed CP will have two or more of the former.
If you think your child has or may have cerebral palsy because of something that happened at birth, please fill out the form on the Birth Injury Web official website.
Survival skills are techniques that a person may use in order to sustain life in any type of natural environment. These techniques are meant to provide the basic necessities for human life: water, food, and shelter. The skills also support proper knowledge of and interactions with animals and plants to promote the sustaining of life over a period of time. Survival skills are often basic ideas and abilities that ancient peoples developed and used for thousands of years. Outdoor activities such as hiking, backpacking, horseback riding, fishing, and hunting all require basic wilderness survival skills, especially in handling emergency situations. Bushcraft and primitive living are most often practiced by choice, but they require many of the same skills.
First aid (wilderness first aid in particular) can help a person survive and function with injuries and illnesses that would otherwise kill or incapacitate him/her. Common and dangerous injuries include:
- Bites from snakes, spiders and other wild animals
- Bone fractures
- Heart attack
- Hypothermia (too cold) and hyperthermia (too hot)
- Infection through food, animal contact, or drinking non-potable water
- Poisoning from consumption of, or contact with, poisonous plants or poisonous fungi
- Sprains, particularly of the ankle
- Wounds, which may become infected
The survivor may need to apply the contents of a first aid kit or, if possessing the required knowledge, naturally occurring medicinal plants, immobilize injured limbs, or even transport incapacitated comrades.
A shelter can range from a natural shelter, such as a cave, overhanging rock outcrop, or fallen-down tree, to an intermediate form of man-made shelter such as a debris hut, tree pit shelter, or snow cave, to completely man-made structures such as a tarp, tent, or longhouse.
Making fire is recognized in the sources as significantly increasing the ability to survive physically and mentally. Lighting a fire without a lighter or matches, e.g. by using natural flint and steel with tinder, is a frequent subject of both books on survival and in survival courses. There is an emphasis placed on practicing fire-making skills before venturing into the wilderness. Producing fire under adverse conditions has been made much easier by the introduction of tools such as the solar spark lighter and the fire piston.
One fire starting technique involves using a black powder firearm, if one is available. Proper gun safety must be observed with this technique to avoid injury or death. Ram tinder, such as charred cloth or fine wood strands, down the barrel of the firearm until the tinder sits against the powder charge. Next, fire the gun up in the air in a safe direction, retrieve the smoldering cloth that is projected out of the barrel, and blow it into flame. The technique works better if you have a supply of tinder at hand so that the cloth can be placed against it to start the fire.
Fire is presented as a tool meeting many survival needs. The heat provided by a fire warms the body, dries wet clothes, disinfects water, and cooks food. Not to be overlooked are the psychological boost and the sense of safety and protection it gives. In the wild, fire can provide a sensation of home and a focal point, in addition to being an essential energy source. Fire may deter wild animals from interfering with a survivor; however, wild animals may be attracted to the light and heat of a fire.
A human being can survive an average of three to five days without the intake of water. The issues presented by the need for water dictate that unnecessary water loss by perspiration be avoided in survival situations. The need for water increases with exercise.
A typical person will lose between two and four liters of water per day under ordinary conditions, and more in hot, dry, or cold weather. Four to six liters of water or other liquids are generally required each day in the wilderness to avoid dehydration and to keep the body functioning properly. The U.S. Army survival manual does not recommend drinking water only when thirsty, as this leads to under-hydration. Instead, water should be drunk at regular intervals. Other groups recommend rationing water through "water discipline".
A lack of water causes dehydration, which may result in lethargy, headaches, dizziness, confusion, and eventually death. Even mild dehydration reduces endurance and impairs concentration, which is dangerous in a survival situation where clear thinking is essential. Dark yellow or brown urine is a diagnostic indicator of dehydration. To avoid dehydration, a high priority is typically assigned to locating a supply of drinking water and making provision to render that water as safe as possible.
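As a simple planning aid based on the figures above (4 to 6 liters per day), a hypothetical helper might look like this; the function and its default are illustrative, not an official guideline:

```python
# Rough fluid planning from the figures above: 4 to 6 liters per day.
def water_needed(days, liters_per_day=5.0):
    """Estimated liters of fluid for a trip; 5.0 splits the 4-6 L range."""
    return days * liters_per_day

print(water_needed(3))        # 15.0 L for a three-day trip
print(water_needed(3, 6.0))   # 18.0 L, planning for hot weather
```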
Culinary root tubers, fruit, edible mushrooms, edible nuts, edible beans, edible cereals, edible leaves, edible moss, edible cacti, and algae can be gathered and, if needed, prepared (mostly by boiling). With the exception of leaves, these foods are relatively high in calories, providing some energy to the body. Plants are some of the easiest food sources to find in the jungle, forest, or desert because they are stationary and can thus be had without exerting much effort. Skills and equipment (such as bows, snares, and nets) needed to gather animal food in the wild include animal trapping, hunting, and fishing.
Because its guidance focuses on survival until rescue by presumed searchers, the Boy Scouts of America especially discourages foraging for wild foods, on the grounds that the knowledge and skills needed are unlikely to be possessed by those finding themselves in a wilderness survival situation, making the risks (including the use of energy) outweigh the benefits.
Survival situations can often be resolved by finding a way to safety, or a more suitable location to wait for rescue. Types of navigation include:
- Celestial navigation, using the sun and the night sky to locate the cardinal directions and to maintain course of travel
- Using a map, compass or GPS receiver
- Dead reckoning (a minimal position-update sketch follows this list)
- Natural navigation, using the condition of surrounding natural objects (e.g. moss on a tree, snow on a hill, direction of running water, etc.)
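Dead reckoning is simple enough to sketch in a few lines. This is a flat-ground approximation with hypothetical numbers; real navigation must also correct for terrain, pace error, and magnetic declination:

```python
import math

def dead_reckon(x, y, heading_deg, speed_kmh, hours):
    """Update an (x, y) position in km after traveling on a compass
    heading, measured clockwise from north (0 = north, 90 = east)."""
    distance = speed_kmh * hours
    rad = math.radians(heading_deg)
    return x + distance * math.sin(rad), y + distance * math.cos(rad)

# Walk 2 hours at 4 km/h on a heading of 90 degrees (due east):
print(dead_reckon(0.0, 0.0, 90, 4, 2))   # approximately (8.0, 0.0)
```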
The mind and its processes are critical to survival. The will to live in a life-and-death situation often separates those who live from those who do not. Stories of heroic feats of survival by regular people with little or no training but a strong will to live are not uncommon. Among them is Juliane Koepcke, who was the sole survivor among the 93 passengers when her plane crashed in the jungle of Peru. Situations can be stressful to the point that even trained experts may be mentally affected. One should be mentally and physically tough during a disaster.
To the extent that stress results from testing human limits, the benefits of learning to function under stress and determining those limits may outweigh the downside of stress. There are certain strategies and mental tools that can help people cope better in a survival situation, including focusing on manageable tasks, having a Plan B available and recognizing denial.
To survive, you must have the knowledge and experience to identify a threat before it becomes one. Assessing your situation puts you ahead of the general population by ensuring your own safety and security, and mastering this mindset makes you better able to recognize changes in the sights, the sounds, and the baseline of the world around you.
Important survival items
Often survival practitioners will carry with them a "survival kit". This consists of various items that seem necessary or useful for potential survival situations, depending on anticipated challenges and location. Supplies in a survival kit vary greatly by anticipated needs. For wilderness survival, they often contain items like a knife, water container, fire starting apparatus, first aid equipment, food obtaining devices (snare wire, fish hooks, firearms, or other,) a light, navigational aids, and signalling or communications devices. Often these items will have multiple possible uses as space and weight are often at a premium.
Survival kits may be purchased from various retailers or individual components may be bought and assembled into a kit.
Some survival books promote the "Universal Edibility Test". Allegedly, it is possible to distinguish edible foods from toxic ones by a series of progressive exposures to skin and mouth prior to ingestion, with waiting periods and checks for symptoms. However, many experts including Ray Mears and John Kallas reject this method, stating that even a small amount of some "potential foods" can cause physical discomfort, illness, or death.
Many mainstream survival experts have perpetuated the practice of drinking urine in times of dehydration. However, the United States Air Force Survival Manual (AF 64-4) instructs that this technique should never be applied. The reasons include the high salt content of urine, potential contaminants, and the possibility of bacterial growth, despite urine's being generally "sterile".
Many classic cowboy movies and even classic survival books suggest that sucking the venom out of a snake bite by mouth is an appropriate treatment. However, once the venom is injected into the blood stream, it cannot be sucked out and it may be dangerous to attempt to do so. If bitten by a venomous snake, the best chance of survival is to get to a hospital for treatment as quickly as possible.
- Boulder Outdoor Survival School
- Churchill, James E. The Basic Essentials of Survival. Merrillville, IN: ICS, 1989. Print.
- HowStuffWorks by Charles W. Bryant
- Water Balance; a Key to Cold Weather Survival by Bruce Zawalsky, Chief Instructor, BWI
- "Army Survival Manual; Chapter 13 – Page 2". Aircav.com. Retrieved 2011-10-21.
- "U.S. Army Survival Manual FM 21-76, also known as FM 3-05.70 May 2002 Issue; drinking water". Survivalebooks.com. Retrieved 2011-10-21.
- "Water Discipline" at Survival Topics
- "US EPA". Archived from the original on 29 December 2011. Retrieved 2011-12-27.
- "Wilderness Medical Society". Wemjournal.org. Retrieved 2011-10-21.[dead link]
- "Wisconsin Dept. of Natural Resources". Dnr.wi.gov. 11 March 2008. Archived from the original on 8 March 2012. Retrieved 2011-10-21.
- "Master The Great Outdoors". www.SurvivalGrounds.com. Retrieved 2011-10-21.
- Wilderness Survival Merit Badge pamphlet, January 2008, at 38
- Krieger, Leif. "How to Survive Any Situation". How to Survive Any Situation. Silvercrown Mountain Outdoor School.
- Leach, John (1994). Survival Psychology. NYU Press.
- "Situational Awareness: How to Develop a Situational Awareness Mindset". Off Grid Survival - Wilderness & Urban Survival Skills. 2016-08-10. Retrieved 2016-11-09.
- US Army Survival Manual FM21-76 1998 Dorset press 9th printing ISBN 1-56619-022-3
- John Kallas, Ph.D., Director, Institute for the Study of Edible Wild Plants and Other Foragables. Biography. Archived 13 February 2014 at the Wayback Machine.
- Peterson, Devin (2013). "Effects of Urine Consumption". SCS. DNM International. p. 1. Retrieved 6 August 2013.
- Lawson, Malcolm (2013). "Top 10 Survival Myths Busted". SCS. DNM International. p. 1. Archived from the original on 27 April 2015. Retrieved 18 April 2015.
- Mountaineering: The Freedom of the Hills; 8th Ed; Mountaineers Books; 596 pages; 1960 to 2010; ISBN 978-1594851384.
- The short film Aircrew Survival: Cold Land Survival is available for free download at the Internet Archive
- The short film Aircrew Survival: Hot Land Survival is available for free download at the Internet Archive
- The short film Aircrew Survival: Survival Kits, Rafts, Accessories is available for free download at the Internet Archive
- The short film Aircrew Survival: Survival Medicine is available for free download at the Internet Archive
- The short film Aircrew Survival: Surviving on Open Water is available for free download at the Internet Archive
- The short film Aircrew Survival: Survival Signalling is available for free download at the Internet Archive
- The short film Aircrew Survival: Tropical Survival is available for free download at the Internet Archive
- The short film Aircrew Survival: The Will to Survive is available for free download at the Internet Archive
Quote 1: "Shakespeare wrote at least 38 plays, two major narrative poems, a sequence of sonnets, and several short poems. His works have been translated into a remarkable number of languages, and his plays are performed throughout the world. His plays have been a vital part of the theater in the Western world since they were written about 400 years ago. Through the years, most serious actors and actresses have considered the major roles of Shakespeare to be the supreme test of their art."
Commentary 1: Shakespeare had a huge impact on the world in many ways. He wrote at least thirty-eight plays, his works have been translated into a remarkable number of languages, and his shows were performed all over the world and still are. Shakespeare wrote his plays over four hundred years ago, and they are still being produced for entertainment today. This just goes to show how much people love his shows.
Quote 2: "Shakespeare, William (1564-1616), was an English playwright, poet, and actor. Many people regard him as the world’s greatest dramatist and the finest poet England has ever produced."
Commentary 2: Shakespeare was a very skilled playwright, having gone to grammar school before starting his career. Many people still use the skills and ideas he came up with when they write or direct shows, because Shakespeare was basically the start of theater really becoming a thing. Shakespeare and his shows had a huge impact on the world, and especially on the theater world, because his plays contain valuable life lessons and many ideas about constructing, writing, and putting on a show. Many people know his shows, and people have kept learning the English of his day because of them. He was a huge icon and still is, spreading his love and knowledge of theater around the world.
Quote 3: "Little is known about the Globe's design except what can be learned from maps and evidence from the plays presented there. The Globe was round or polygonal on the outside and probably round on the inside. The theater may have held as many as 3,000 spectators. Its stage occupied the open-air space, with a pit in front for standing viewers. The stage was surrounded by several levels of seating. In 1613, the Globe burned down. It was rebuilt on the same foundation and reopened in 1614. The Globe was shut down in 1642 and torn down in 1644. A reconstruction of the theater was completed 200 yards (183 meters) from the original site in 1996, and it officially opened in 1997."
Commentary 3: Shakespeare put a lot of thought into the way his theater was built. The Globe was round so that everyone could see the stage and so that it could hold more people, and the stage and seating were laid out in an evenly proportioned way. This had a huge impact on how today's theaters are built: in many theaters you'll notice that the seating curves around the stage so everyone can see, and that the rows rise so the people in front of you don't block the view.
Quote 4: "Shakespeare's plays are still produced all over the world. During a Broadway season in the 1980's, one critic estimated that if Shakespeare were alive, he would be receiving $25,000 a week in royalties for a production of Othello alone."
Commentary 4: Shakespeare's shows are extremely popular and well known all over the world. His work is phenomenal, and his shows continue to be restaged over and over because of how much of an impact they have.
Quote 5: "The last suggestion is given some credence by the academic style of his early plays; The Comedy of Errors, for example, is an adaptation of two plays by Plautus."
Commentary 5: Shakespeare started doing shows after his education in grammar school. He learned many things from performing, fell in love with theater, and began to write plays himself. Through his shows he has had an impact on the world in both theater and literature, and his plays carry a lot of meaning through the dramatic elements and symbolism shown throughout them.
Quote 6: "The Taming of the Shrew, The Two Gentlemen of Verona, Love's Labour's Lost, and Romeo and Juliet. Some of the comedies of this early period are classical imitations with a strong element of farce. The two tragedies, Titus Andronicus and Romeo and Juliet, were both popular in Shakespeare's own lifetime. In Romeo and Juliet the main plot, in which the new love between Romeo and Juliet comes into conflict with the longstanding hatred between their families, is skillfully advanced, while the substantial development of minor characters supports and enriches it."
Commentary 6: This quote shows how his plays made a huge impact not only on theater but, because of the way he wrote them, through symbolism, morals, and life lessons that people don't always notice or understand. Shakespeare's shows contain big life lessons and ideas we don't always think about, and they helped people understand them better when they saw his plays. Romeo and Juliet, for example, shows how love can be so strong that people have to make sacrifices and compromises in relationships.
Quote 7: "Appeal and Influence Since his death Shakespeare's plays have been almost continually performed, in non-English-speaking nations as well as those where English is the native tongue; they are quoted more than the works of any other single author. The plays have been subject to ongoing examination and evaluation by critics attempting to explain their perennial appeal, which does not appear to derive from any set of profound or explicitly formulated ideas. Indeed, Shakespeare has sometimes been criticized for not consistently holding to any particular philosophy, religion, or ideology; for example, the subplot of A Midsummer Night's Dream includes a burlesque of the kind of tragic love that he idealizes in Romeo and Juliet."
Commentary 7: This quote shows the impact of Shakespeare's plays. Because his shows were so life-changing, people kept restaging them in the English of his day, and in doing so he has spread his voice around the world for hundreds of years. His shows were so fantastic, with all of their life lessons, that people have been performing them ever since, and the language of his era has continued to be taught and learned because of him.
Quote 8: "But costumes were often elaborate, and the stage might have been hung with colorful banners and trappings."
Commentary 8: This quote shows how Shakespeare brought costumes and set pieces to his shows to bring them to life and to help the audience understand them better, with set pieces and other elements making each scene make sense based on when and where it took place. In today's theater we use set pieces, costumes, lighting, and makeup to make the show more interesting and to help both the actors and the audience understand it. Shakespeare used the same techniques, but in moderation.
Quote 9: "We can see that this stage, with its few sets and many acting areas (forestage, inner stage, and upper stage) made for a theater of great fluidity. That is, scene could follow scene with almost cinematic ease."
Commentary 9: Shakespeare made his theater flow very nicely, so that no matter where you were in the theater you had a reasonably good view of the show. Back then the stage was built on a slant so that the audience could see the actors upstage; now, instead of the stage sloping, the audience seating usually rises. This is one of the things theater design has carried over to fit the audience's needs. Theaters are also often built with curved rows of seats so that people can see the full stage without other audience members in the way. The quote shows how Shakespeare set up the stage, using the forestage, inner stage, and upper stage.
Anderson, Robert. “Shakespeare and His Theater.” Holt Literature & Language Arts: Mastering the California Standards: Reading, Writing, Listening, Speaking, by G. Kylene Beers et al., Austin, Holt, Rinehart & Winston, 2003, pp. 778-80.
---. “William Shakespeare’s Life.” History Book, pp. 776-77.
"William Shakespeare." Columbia Electronic Encyclopedia, 6th ed., 2016, p. 14. History Reference Center. Web. 5 Dec. 2016. <http://search.ebscohost.com/login.aspx?direct=true&db=khh&AN=39031723&site=hrclive>.
Lander, Jesse M. "Shakespeare, William." World Book Advanced. World Book, 2016. Web. 18 Nov. 2016.
Seidel, Michael. "Globe Theatre." World Book Advanced. World Book, 2016. Web. 8 Dec. 2016.
Cell division is part of the cell cycle, and it occurs either by binary fission or as part of a multiple-phase cycle. Binary fission is the method by which prokaryotic cells divide. Eukaryotic cells use the three-phase cycle commonly referred to as mitosis.
Bacteria are one example of prokaryotic cells, and the binary fission by which they divide is a form of asexual reproduction. Binary fission is much less complex than mitosis, and it can occur in a mere 20 minutes at room temperature. Within the cell, the circular DNA molecule is copied, and it then moves to the poles of the cell. The cell then begins to lengthen until the middle of the cell separates into two new identical cells.
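The 20-minute figure implies startlingly fast growth, since each division doubles the population. A minimal sketch of the arithmetic, assuming ideal conditions (unlimited nutrients, no cell death):

```python
# Idealized growth from a 20-minute division time: numbers double
# every 20 minutes. Real cultures slow down as nutrients run out.
def population(start_cells, minutes, doubling_minutes=20):
    return start_cells * 2 ** (minutes // doubling_minutes)

print(population(1, 60))       # 8 cells after one hour
print(population(1, 6 * 60))   # 262,144 cells after six hours
```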
Mitosis, used for growth and repair of skin, hair and blood cells, begins with interphase. During this phase, the cell grows and collects nutrients to prepare for mitosis and begins to replicate DNA. The mitotic phase separates the chromosomes into two identical nuclei and moves directly into cytokinesis.
Cytokinesis divides the cytoplasm, organelles and other needed parts into the two daughter cells equally. Cytokinesis can occur directly with mitosis, and these two phases are often referred to simply as "M phase."
Muscle (from Latin musculus, diminutive of mus "mouse") is the contractile tissue of the body and is derived from the mesodermal layer of embryonic germ cells. Muscle cells contain contractile filaments that move past each other and change the size of the cell. They are classified as skeletal, cardiac, or smooth muscles. Their function is to produce force and cause motion. Muscles can cause either locomotion of the organism itself or movement of internal organs. Cardiac and smooth muscle contraction occurs without conscious thought and is necessary for survival. Examples are the contraction of the heart and peristalsis which pushes food through the digestive system. Voluntary contraction of the skeletal muscles is used to move the body and can be finely controlled. Examples are movements of the eye, or gross movements like the quadriceps muscle of the thigh. There are two broad types of voluntary muscle fibers: slow twitch and fast twitch. Slow twitch fibers contract for long periods of time but with little force while fast twitch fibers contract quickly and powerfully but fatigue very rapidly.
There are three types of muscle:
Cardiac and skeletal muscles are "striated" in that they contain sarcomeres and are packed into highly-regular arrangements of bundles; smooth muscle has neither. While skeletal muscles are arranged in regular, parallel bundles, cardiac muscle connects at branching, irregular angles (called intercalated discs). Striated muscle contracts and relaxes in short, intense bursts, whereas smooth muscle sustains longer or even near-permanent contractions.
Skeletal muscle is further divided into several subtypes:
The gross anatomy of a muscle is the most important indicator of its role in the body. The action a muscle generates is determined by the origin and insertion locations. The cross-sectional area of a muscle (rather than volume or length) determines the amount of force it can generate by defining the number of sarcomeres which can operate in parallel. The amount of force applied to the external environment is determined by lever mechanics, specifically the ratio of in-lever to out-lever. For example, moving the insertion point of the biceps more distally on the radius (farther from the joint of rotation) would increase the force generated during flexion (and, as a result, the maximum weight lifted in this movement), but decrease the maximum speed of flexion. Moving the insertion point proximally (closer to the joint of rotation) would result in decreased force but increased velocity. This can be most easily seen by comparing the limb of a mole to a horse - in the former, the insertion point is positioned to maximize force (for digging), while in the latter, the insertion point is positioned to maximize speed (for running).
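The lever trade-off described above reduces to a torque balance: muscle force times in-lever equals output force times out-lever. A small sketch with hypothetical distances (the 4 cm and 6 cm insertion points are invented for illustration, not measured anatomy):

```python
# Torque balance for a simple joint lever:
#   muscle_force * in_lever = output_force * out_lever
def lever_output(muscle_force, in_lever, out_lever):
    """Force delivered at the end of the limb, in the same units."""
    return muscle_force * in_lever / out_lever

# A muscle pulling with 1000 N, inserting 4 cm from the joint, with the
# hand 40 cm from the joint:
print(lever_output(1000, 0.04, 0.40))   # 100 N at the hand

# Moving the insertion more distally (6 cm) raises the force at the hand,
# but by the same ratio it lowers the hand's speed for a given rate of
# muscle shortening:
print(lever_output(1000, 0.06, 0.40))   # 150 N at the hand
```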
One particularly important aspect of gross anatomy of muscles is pennation or lack thereof. In most muscles, all the fibers are oriented in the same direction, running in a line from the origin to the insertion. In pennate muscles, the individual fibers are oriented at an angle relative to the line of action, attaching to the origin and insertion tendons at each end. Because the contracting fibers are pulling at an angle to the overall action of the muscle, the change in length is smaller, but this same orientation allows for more fibers (thus more force) in a muscle of a given size. Pennate muscles are usually found where their length change is less important than maximum force, such as the rectus femoris.
There are approximately 639 skeletal muscles in the human body. However, the exact number is difficult to define because different sources group muscles differently.
See main article: Table of muscles of the human body.
Muscle is mainly composed of muscle cells. Within the cells are myofibrils; myofibrils contain sarcomeres, which are composed of actin and myosin. Individual muscle fibres are surrounded by endomysium. Muscle fibers are bound together by perimysium into bundles called fascicles; the bundles are then grouped together to form muscle, which is enclosed in a sheath of epimysium. Muscle spindles are distributed throughout the muscles and provide sensory feedback information to the central nervous system.
Skeletal muscle is arranged in discrete muscles, an example of which is the biceps brachii. It is connected by tendons to processes of the skeleton. Cardiac muscle is similar to skeletal muscle in both composition and action, being composed of myofibrils of sarcomeres, but it is anatomically different in that its muscle fibers are typically branched like a tree and connect to other cardiac muscle fibers through intercalated discs, forming the appearance of a syncytium.
See main article: muscle contraction.
The three (skeletal, cardiac and smooth) types of muscle have significant differences. However, all three use the movement of actin against myosin to create contraction. In skeletal muscle, contraction is stimulated by electrical impulses transmitted by the nerves, the motor nerves and motoneurons in particular. Cardiac and smooth muscle contractions are stimulated by internal pacemaker cells which regularly contract, and propagate contractions to other muscle cells they are in contact with. All skeletal muscle and many smooth muscle contractions are facilitated by the neurotransmitter acetylcholine.
Muscular activity accounts for much of the body's energy consumption. All muscle cells produce adenosine triphosphate (ATP) molecules which are used to power the movement of the myosin heads. Muscles conserve energy in the form of creatine phosphate which is generated from ATP and can regenerate ATP when needed with creatine kinase. Muscles also keep a storage form of glucose in the form of glycogen. Glycogen can be rapidly converted to glucose when energy is required for sustained, powerful contractions. Within the voluntary skeletal muscles, the glucose molecule can be metabolized anaerobically in a process called glycolysis which produces two ATP and two lactic acid molecules in the process (note that in aerobic conditions, lactate is not formed; instead pyruvate is formed and transmitted through the citric acid cycle). Muscle cells also contain globules of fat, which are used for energy during aerobic exercise. The aerobic energy systems take longer to produce the ATP and reach peak efficiency, and requires many more biochemical steps, but produces significantly more ATP than anaerobic glycolysis. Cardiac muscle on the other hand, can readily consume any of the three macronutrients (protein, glucose and fat) aerobically without a 'warm up' period and always extracts the maximum ATP yield from any molecule involved. The heart, liver and red blood cells will also consume lactic acid produced and excreted by skeletal muscles during exercise.
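To put rough numbers on the aerobic/anaerobic comparison: the 2 ATP per glucose from glycolysis is stated above, while the aerobic total used here (about 30 ATP per glucose) is a common textbook estimate, not a figure from this article:

```python
ATP_ANAEROBIC = 2    # per glucose, from anaerobic glycolysis (text above)
ATP_AEROBIC = 30     # per glucose, assumed typical textbook estimate

print(f"aerobic yield is roughly {ATP_AEROBIC // ATP_ANAEROBIC}x "
      f"the anaerobic yield per glucose molecule")
```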
The efferent leg of the peripheral nervous system is responsible for conveying commands to the muscles and glands, and is ultimately responsible for voluntary movement. Nerves move muscles in response to voluntary and autonomic (involuntary) signals from the brain. Deep muscles, superficial muscles, muscles of the face and internal muscles all correspond with dedicated regions in the primary motor cortex of the brain, directly anterior to the central sulcus that divides the frontal and parietal lobes.
In addition, muscles react to reflexive nerve stimuli that do not always send signals all the way to the brain. In this case, the signal from the afferent fiber does not reach the brain, but produces the reflexive movement by direct connections with the efferent nerves in the spine. However, the majority of muscle activity is volitional, and the result of complex interactions between various areas of the brain.
Nerves that control skeletal muscles in mammals correspond with neuron groups along the primary motor cortex of the brain's cerebral cortex. Commands are routed through the basal ganglia and are modified by input from the cerebellum before being relayed through the pyramidal tract to the spinal cord and from there to the motor end plate at the muscles. Along the way, feedback, such as that of the extrapyramidal system, contributes signals to influence muscle tone and response.
The afferent leg of the peripheral nervous system is responsible for conveying sensory information to the brain, primarily from the sense organs like the skin. In the muscles, the muscle spindles convey information about the degree of muscle length and stretch to the central nervous system to assist in maintaining posture and joint position. The sense of where our bodies are in space is called proprioception, the perception of body awareness. More easily demonstrated than explained, proprioception is the "unconscious" awareness of where the various regions of the body are located at any one time. This can be demonstrated by anyone closing their eyes and waving their hand around. Assuming proper proprioceptive function, at no time will the person lose awareness of where the hand actually is, even though it is not being detected by any of the other senses.
Several areas in the brain coordinate movement and position with the feedback information gained from proprioception. The cerebellum and red nucleus in particular continuously sample position against movement and make minor corrections to assure smooth motion.
Exercise is often recommended as a means of improving motor skills, fitness, muscle and bone strength, and joint function. Exercise has several effects upon muscles, connective tissue, bone, and the nerves that stimulate the muscles.
Various exercises require a predominance of certain muscle fiber utilization over another. Aerobic exercise involves long, low levels of exertion in which the muscles are used at well below their maximal contraction strength for long periods of time (the most classic example being the marathon). Aerobic events, which rely primarily on the aerobic (with oxygen) system, use a higher percentage of Type I (or slow-twitch) muscle fibers, consume a mixture of fat, protein and carbohydrates for energy, consume large amounts of oxygen and produce little lactic acid. Anaerobic exercise involves short bursts of higher intensity contractions at a much greater percentage of their maximum contraction strength. Examples of anaerobic exercise include sprinting and weight lifting. The anaerobic energy delivery system uses predominantly Type II or fast-twitch muscle fibers, relies mainly on ATP or glucose for fuel, consumes relatively little oxygen, protein and fat, produces large amounts of lactic acid and can not be sustained for as long a period as aerobic exercise. The presence of lactic acid has an inhibitory effect on ATP generation within the muscle; though not producing fatigue, it can inhibit or even stop performance if the intracellular concentration becomes too high. However, long-term training causes neovascularization within the muscle, increasing the ability to move waste products out of the muscles and maintain contraction. Once moved out of muscles with high concentrations within the sarcomere, lactic acid can be used by other muscles or body tissues as a source of energy, or transported to the liver where it is converted back to pyruvate. The ability of the body to export lactic acid and use it as a source of energy depends on training level.
Humans are genetically predisposed to have a larger percentage of one muscle fiber type than another. An individual born with a greater percentage of Type I muscle fibers would theoretically be more suited to endurance events, such as triathlons, distance running, and long cycling events, whereas a human born with a greater percentage of Type II muscle fibers would be more likely to excel at anaerobic events such as a 200 meter dash or weightlifting. People with high overall muscularity and a balanced muscle-type percentage engage in sports such as rugby or boxing, and often take part in other sports to increase their performance in the former.
Delayed onset muscle soreness is pain or discomfort that may be felt one to three days after exercising and generally subsides within two to three days. Once thought to be caused by lactic acid buildup, it is now thought to be caused by tiny tears in the muscle fibers resulting from eccentric contraction or unaccustomed training levels. Since lactic acid disperses fairly rapidly, it cannot explain pain experienced days after exercise.
See main article: Neuromuscular disease.
Symptoms of muscle diseases may include weakness, spasticity, myoclonus and myalgia. Diagnostic procedures that may reveal muscular disorders include testing creatine kinase levels in the blood and electromyography (measuring electrical activity in muscles). In some cases, muscle biopsy may be done to identify a myopathy, as well as genetic testing to identify DNA abnormalities associated with specific myopathies and dystrophies.
Neuromuscular diseases are those that affect the muscles and/or their nervous control. In general, problems with nervous control can cause spasticity or paralysis, depending on the location and nature of the problem. A large proportion of neurological disorders leads to problems with movement, ranging from cerebrovascular accident (stroke) and Parkinson's disease to Creutzfeldt-Jakob disease.
A non-invasive elastography technique that measures muscle noise is undergoing experimentation to provide a way of monitoring neuromuscular disease. The sound produced by a muscle comes from the shortening of actomyosin filaments along the axis of the muscle. During contraction, the muscle shortens along its longitudinal axis and expands across the transverse axis, producing vibrations at the surface.
There are many diseases and conditions which cause a decrease in muscle mass, known as muscle atrophy. Examples include cancer and AIDS, which induce a body wasting syndrome called cachexia. Other syndromes or conditions which can induce skeletal muscle atrophy are congestive heart disease and some diseases of the liver.
During aging, there is a gradual decrease in the ability to maintain skeletal muscle function and mass, known as sarcopenia. The exact cause of sarcopenia is unknown, but it may be due to a combination of the gradual failure in the "satellite cells" which help to regenerate skeletal muscle fibers, and a decrease in sensitivity to or the availability of critical secreted growth factors which are necessary to maintain muscle mass and satellite cell survival. Sarcopenia is a normal aspect of aging, and is not actually a disease state.
Inactivity and starvation in mammals lead to atrophy of skeletal muscle, accompanied by a reduction in the number and size of the muscle cells as well as lower protein content. In humans, prolonged periods of immobilization, as in the cases of bed rest or astronauts flying in space, are known to result in muscle weakening and atrophy. Such consequences are also noted in small hibernating mammals such as the golden-mantled ground squirrel and brown bats. Bears (family Ursidae) are an interesting exception to this expected norm.
Bears are famous for their ability to survive unfavorable environmental conditions of low temperature and limited nutrition during winter by means of hibernation. During that time, bears go through a series of physiological, morphological, and behavioral changes. Their ability to maintain skeletal muscle number and size during disuse is of significant importance. During hibernation, bears spend four to seven months in inactivity and anorexia without undergoing muscle atrophy or protein loss. A few known factors contribute to this preservation of muscle tissue. During the summer period, bears take advantage of the available nutrition and accumulate muscle protein. Their protein balance during dormancy is also maintained by lower levels of protein breakdown in winter. At times of immobility, muscle wasting is further suppressed by a proteolytic inhibitor released into the circulation. Another factor that sustains muscle strength in hibernating bears is the occurrence of periodic voluntary contractions, and involuntary contractions from shivering, during torpor. These three to four daily episodes of muscle activity are responsible for maintaining muscle strength and responsiveness during hibernation.
A display of "strength" (e.g. lifting a weight) is a result of three factors that overlap: physiological strength (muscle size, cross sectional area, available crossbridging, responses to training), neurological strength (how strong or weak is the signal that tells the muscle to contract), and mechanical strength (muscle's force angle on the lever, moment arm length, joint capabilities). Contrary to popular belief, the number of muscle fibres cannot be increased through exercise; instead the muscle cells simply get bigger. Muscle fibres have a limited capacity for growth through hypertrophy and some believe they split through hyperplasia if subject to increased demand.
Since three factors affect muscular strength simultaneously and muscles never work individually, it is misleading to compare strength in individual muscles, and state that one is the "strongest". But below are several muscles whose strength is noteworthy for different reasons.
The density of mammalian skeletal muscle tissue is about 1.06 kg/liter. This can be contrasted with the density of adipose tissue (fat), which is 0.9196 kg/liter. Muscle tissue is thus approximately 15% denser than fat tissue (1.06 / 0.9196 ≈ 1.15).
Evolutionarily, specialized forms of skeletal and cardiac muscles predated the divergence of the vertebrate/arthropod evolutionary line. This indicates that these types of muscle developed in a common ancestor sometime before 700 million years ago (mya). Vertebrate smooth muscle (smooth muscle found in humans) was found to have evolved independently from the skeletal and cardiac muscles.
|
You and your cat see the world differently. True, your eyes are built around the same design, but each of you has specializations that make your vision best for your needs. You evolved as a fruit-eating diurnal animal; your cat evolved as a meat-eating nocturnal animal. You evolved to have good detail and color vision; your cat evolved to have good vision in the dark. Compare your eye to your cat’s eye, and you’ll understand how each of you attains the best vision for your needs.
Light enters through the pupil, which gets larger or smaller to let more or less light in. The cat’s pupil can get much larger than your pupil can, letting in more light, but it does so at the expense of good depth of field (the distance over which objects can be put into clear focus).
When the cat’s pupil contracts, it doesn’t stay round as a person’s does, but becomes a vertical slit. Slit pupils are seen in animals that are active in both day and night; their advantage is that they can cover a great range of sizes, getting much smaller, much faster, than can a round pupil. Their disadvantage is that when they are in their slit formation, they create optical interference that makes perfect focus difficult.
After passing through the pupil, light is collected and focused by the lens. The cat’s lens is much larger than the human lens, which enables the lens to gather more light. But again, there’s a trade-off: while the small human lens can change shape to focus light over a great range of distances, the big cat lens can hardly change its shape at all. As a result, cats have difficulty focusing on objects very close to them, very much like an older person who needs reading glasses.
The optics of the eye work to focus an image on the retina, the lining at the back of the eye made up of cells that react to light. Several factors influence how fine a detail a retina can pick up. First, how well is the image focused on the retina? If the lens is too strong for the distance between it and the retina, the image comes to a focus before it reaches the retina and is defocused by the time it arrives. If the lens is too weak for that distance, the image is still unfocused when it reaches the retina. In most cats, the lens strength is appropriate for the distance to the retina; that is, cats are neither nearsighted nor farsighted.
Second, the farther the distance to the retina, and the larger the retina, the larger the image can be on the retina. Cats have large eyes and retinas for their size.
Third, the smaller the sampling grain on the retina, the better the ability to detect details. Both cats and people have two different types of receptors in their retinas, each with a different sampling grain. Rods pool light from comparatively large areas on the retina, while cones have a very fine sampling grain. Humans have a cone-rich retina, and even have an area in the center of the visual field made up of only cones. Cats have a rod-rich retina, and no cone-only area.
Cones and Color Vision
The end result is that cats have poor detail vision compared to humans. And because cones are also responsible for color vision, cats have comparatively poor color vision. But they’re not colorblind. Instead, they have the same type of color vision as many people who are called colorblind: a form of red-green colorblindness termed deuteranopia. They can tell blue from other colors just fine, but tend to confuse colors along the red-through-brown-through-green continuum.
Cats give up the ability to see fine detail and rich colors in exchange for the ability to see in the dark. The level of retinal illumination is about five times higher in your cat’s eye than in yours. And all those rods pooling signals from minute amounts of light allow the cat to pick up the faintest light source. Nonetheless, some light still manages to pass between the rods and cones. Instead of letting it be absorbed at the back of the eye, as the human eye does, the cat has a structure called the tapetum lucidum that reflects light back to the receptors for a second chance to create a signal. The eye shine you see when you shine a light at a cat in the dark is the light that has eluded the receptors in both directions and is bouncing back to you from the tapetum. The end result is that cats can see in light eight times dimmer than you can!
In summary, the cat’s eye is specialized to see in dim and changing light. To achieve this it sacrifices the ability to focus close up, detail vision, and some color vision. It is the vision of a hunter active in both day and night, enabling it to detect movement under any lighting conditions, to use binocular vision to gauge distance, and to aim correctly to catch prey.
|
C Program To Find Smallest Digit of a Number
Learn How To Find Smallest Digit of a Number in C Programming Language. This C Program To Search the Smallest Digit in an Integer makes use of Division and Modulus operators primarily. The Modulus operator is used to extract the digit at the end of a given number.
We have written two methods to search for the smallest digit in a number: one using a While Loop and one using a For Loop. In both, we initialize the variable small to 10, a value greater than any single digit can be. Any digit we extract is at most 9, so the first digit examined is guaranteed to replace this initial value.
C Program To Find Smallest Digit of a Number using While Loop
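The original listing is not reproduced in this extract, so below is a minimal sketch consistent with the description above: small starts at 10, the modulus operator extracts the last digit, and integer division removes it. Variable names are illustrative, and the input is assumed to be a positive integer.

    #include <stdio.h>

    int main(void)
    {
        int number, digit, small = 10;

        printf("Enter a number: ");
        scanf("%d", &number);

        while (number > 0) {
            digit = number % 10;      /* extract the last digit */
            if (digit < small)
                small = digit;        /* keep the smallest digit so far */
            number = number / 10;     /* remove the last digit */
        }

        printf("Smallest digit: %d\n", small);
        return 0;
    }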
C Code To Find The Smallest Digit in an Integer using For Loop
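Again, this is a reconstruction rather than the original listing: the same logic expressed with a For Loop, under the same assumption of a positive integer input.

    #include <stdio.h>

    int main(void)
    {
        int number, small;

        printf("Enter a number: ");
        scanf("%d", &number);

        /* small starts at 10; the loop drops one digit per iteration */
        for (small = 10; number > 0; number /= 10) {
            if (number % 10 < small)
                small = number % 10;
        }

        printf("Smallest digit: %d\n", small);
        return 0;
    }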
In case you find any error in the above C Program Code To Find Smallest Digit of a Number or if you have any doubts, let us know about it in the Comment Section below.
|
OELP’s innovations for literacy and learning inside schools are aimed at Classes 1 and 2. This is based on research showing convincingly that influences in the early years of a child’s life can have a lifelong impact, and that early interventions consequently play a major role in building strong foundations for a child’s future. The OELP innovations for Early Literacy and Language Learning have evolved organically through continuous engagement inside classrooms over the past eight years, within an ongoing process of engagement, understanding, reflection, and review. We have been working with children from Grades 1 and 2 to develop their spoken and written language, thinking, and questioning skills. This has involved raising our expectations so that children are challenged to think, respond, and learn meaningfully.
Our understanding is that:
A. Young children learn with fullness when they experience:
- 1) Emotional well-being and acceptance, so that they feel safe and are not afraid to make mistakes.
- 2) Social competence and a positive self-concept, which enable them to participate actively and meaningfully in learning through natural as well as planned interactions.
- 3) A responsive environment that allows each child to think and learn in ways that are meaningful and so enables all children to unfold their emerging cognitive abilities.
B. An enabling learning environment for early literacy is based on:
- 1) Informal and meaningful reading and writing experiences, which help children realise that reading and writing are ways of expressing their ideas, thoughts, feelings, and understandings, and of sharing the ideas, thoughts, and feelings of others. In such an environment, children begin to make inner connections with reading and writing as something meaningful that is connected to them and their lives.
- 2) An understanding that while teaching children how to read, it is vital to focus on getting children to want to read.
- 3) A print environment in which children get exposure to a variety of displayed texts, so that through active engagement with the print in their surroundings they “pick up” many concepts of print and written language naturally. However, written language is more distant and rule-bound than spoken language, so it is essential that print exposure in the learning space be supported with a programme for oral language development, meaningful engagement with written texts, and a structured programme for skill building.
Our conceptual framework
The OELP conceptual framework has evolved over a period of time through sustained and intensive engagement inside classrooms and informal learning spaces. This has included engagement with various stakeholders, such as children, school managements, educators, teachers, parents, and the community. Our idea has been to align our early literacy and learning foundation programme with the mainstream programme while ensuring that it remains grounded and contextualised. This has meant a period of tentativeness in our pedagogies and classroom practices, with flexibility and constant modification and adaptation to accommodate new learning. Scalability has been an overriding concern throughout this evolutionary process, so we have tried to keep the frameworks simple and not resource intensive.
We believe we have now achieved a fair degree of stability in our innovations. These are being presented within three broad categories as follows:
- Setting up the classroom learning environment
- Implementation of OELP’s Foundation Programme through the Four Blocks Framework
- Learner tracking and programme monitoring
Essential components of the OELP Innovations
- Teaching a variety of skills and strategies simultaneously, rather than sequentially. We have grouped these into the following three skill sets:
- Foundation skills for school based learning
- Foundation skills for reading and writing
- Higher order thinking skills
All the above skill sets are addressed simultaneously, rather than sequentially, from the very earliest grades, through OELP’s adapted version of the Four Block Framework.
- Understanding that reading and writing are developmental processes, in which scribbling, drawing, invented spelling, and pretend reading are all recognised and built upon as natural, emergent stages of learning to read and write.
- Building on the children’s oral language and listening skills and using the children’s home languages as a resource inside the classroom.
- Providing opportunities for children to use reading and writing in a variety of meaningful and purposeful ways inside the classroom, as well as outside in the course of their daily life activities.
- Building on each child’s real-world experiences, preferably through thematic units that give children a clear context for concept formation and allow listening, speaking, reading, writing, and thinking to emerge through in-depth engagement. The themes can be linked to curricular content if other resources are unavailable.
- Providing a print rich environment with a variety of planned and informal opportunities to read and write in different ways.
- Opportunities to build a love of children’s literature by teaching children to appreciate and respond to its deeper aspects.
- Planned opportunities for children to think and reflect upon the texts and stories they read and to respond critically to them in a variety of ways, such as questioning, reasoning, relating them to their own experiences, imagining, predicting, expressing opinions, and listening to the opinions of others.
- Regular learner tracking.
|
As a parent or caregiver, it’s essential to understand the various stages of emotional development that children go through.
A child’s emotional development takes place on both a conscious and a subconscious level, and monitoring a child’s emotional development is an important part of raising a healthy, well-adjusted child.
What are the basic stages of emotional childhood development?
Infant or baby (birth – 2 years old)
A child goes through many changes in terms of their emotional development in the first year of their life. The infant will go from being a sleepy baby in the first few weeks to being more alert, responsive and interactive with people whom they see on a daily basis. During this period, the child will develop a very close bond with their parents or caregivers and could even start imitating people and breaking into a smile from the age of 3 months.
When the child becomes more aware of their surroundings, they will start exploring and developing their own sense of belonging in the family. Once the child is fully aware of their surroundings and family members, they could also start showing signs of jealousy when a parent holds another baby. If this happens, you should not be too worried, as this is a normal sign of emotional development.
If you would like to know more about the stage of birth to 24 months, click here.
Toddler or preschool age (2 – 5 years old)
When the child starts walking, they will take on a whole new adventurous approach to life. They will start exploring on their own and their language skills will develop significantly. They will start naming objects and people and will start developing their own personality very quickly.
During this stage of their lives, they will start exploring their emotions and might even start throwing tantrums. During these moments, it is important that the parents or caregivers learn to teach the child the value of delayed gratification. In other words, they need to teach the child that they cannot get everything that they see. Just as the child learns to say ‘no’, they need to learn to accept hearing a ‘no’ from other people too. (Source: Child Development Institute)
School-going age (6 – 12 years old)
During this stage of a child’s life, they become a lot more independent and social. It is during this stage that a parent or caregiver needs to instil a good set of morals and accepted behaviour.
Some children may struggle to adapt to schooling, and according to the Child Development Institute, it’s important at this stage that parents are able to “provide praise and encouragement for achievement but parents must also be able to allow [children] to sometimes experience the natural consequences for their behavior or provide logical consequences to help them learn from mistakes.”
Adolescent or teenager (13 – 18 years old)
The teenage years often pose the biggest challenges when it comes to parenthood. During this time, a child goes through many emotional and social changes. Most 13- or 14-year-olds are going through puberty, which means you can expect a slight change in mood, sensitivity, and self-consciousness.
At around the age of 15, most children want to do things without their parents and want to be more social with friends.
According to www.verywell.com, most teenagers at the age of 17 “are equipped to regulate their emotions. They’re less likely to lose their tempers and healthy teens know how to deal with uncomfortable feelings.” During this stage, they will develop and strengthen relationships with people they feel close to.
Please keep in mind that these developments are only some of the developments that will occur during these stages of a child’s life. Also, every child develops at his or her own pace, and the ages at which certain developments take place are not set in stone.
Why is it important to be familiar with the stages of a child’s emotional development?
A child’s environment can have a big impact on his or her behaviour and development. And as a parent or caregiver, you can help a child to develop to his or her full potential by:
- Understanding the stages of emotional development and how they influence a child’s behaviour
- Using this understanding to create an environment that fosters a child’s development
If you want to learn more about how you can help children reach their full potential, you can read the following articles:
|
/sim"euh lee/, n.1. a figure of speech in which two unlike things are explicitly compared, as in "she is like a rose." Cf. metaphor.2. an instance of such a figure of speech or a use of words exemplifying it.
* * *Figure of speech involving a comparison between two unlike entities.In a simile, unlike a metaphor, the resemblance is indicated by the words "like" or "as." Similes in everyday speech reflect simple comparisons, as in "He eats like a bird" or "She is slow as molasses." Similes in literature may be specific and direct or more lengthy and complex. The Homeric, or epic, simile, which is typically used in epic poetry, often extends to several lines.
* * *figure of speech involving a comparison between two unlike entities. In the simile, unlike the metaphor, the resemblance is explicitly indicated by the words “like” or “as.” The common heritage of similes in everyday speech usually reflects simple comparisons based on the natural world or familiar domestic objects, as in “He eats like a bird,” “He is as smart as a whip,” or “He is as slow as molasses.” In some cases the original aptness of the comparison is lost, as in the expression “dead as a doornail.”A simile in literature may be specific and direct or more lengthy and complex, as in the following lines of Othello: (Othello)Never, Iago. Like to the Pontic Sea,Whose icy current and compulsive courseNe'er feels retiring ebb, but keeps due onTo the Propontic and the Hellespont;Even so my bloody thoughts, with violent pace,Shall ne'er look back . . .The simile does more than merely assert that Othello's urge for vengeance cannot now be turned aside; it suggests huge natural forces. The proper names also suggest an exotic, remote world, with mythological and historical associations, reminiscent of Othello's foreign culture and adventurous past.The Homeric (epic simile), or epic, simile is a descriptive comparison of greater length usually containing some digressive reflections, as in the following:As one who would water his garden leads a stream from some fountain over his plants, and all his ground—spade in hand he clears away the dams to free the channels, and the little stones run rolling round and round with the water as it goes merrily down the bank faster than the man can follow—even so did the river keep catching up with Achilles albeit he was a fleet runner, for the gods are stronger than men.( Iliad, Book XII)
* * *
|
If you look through the list of contents for the first aid kits on this site, you will notice that some of them contain scissors and some contain ‘shears’. Most people think of shears as something you cut hedges with, so we did a bit of research into the history of scissors to find out more about the different types.
Scissors were in use in ancient Egypt by around 1500 BC, and the oldest known scissors were discovered in archaeological digs in what was once Mesopotamia. These are 3,000–4,000-year-old "spring scissors", comprising two blades joined by a curved, flexible bronze strip. Between then and now, scissors have come a long way, finding thousands of applications in fields such as engineering, agriculture, grooming, and medicine, apart from regular everyday use in homes and offices.
Medical scissors today are made from stainless steel, nitinol, titanium and tungsten carbide. They come in a variety of shapes and sizes, each suitable for a particular use. The type of medical scissors is determined by the shape of the cutting blades. Straight, curved, bent, pointed, and blunt blades are used for different medical applications, such as dressing and surgery. Medical scissors can be grouped under three broad categories.
1. Bandage Scissors
Removing dressings requires the scissors to be slid under the bandage close to the skin. Bandage scissors, therefore, have blunt tips and angled blades. The blunt bottom edge can easily slide under the bandage without tearing the skin, which makes cutting bandages safe, quick and simple. Bandage scissors can have different sizes, although their shape stays the same, more or less.
2. Trauma Shears
These are also known as "tuff cuts" and are used by paramedics or emergency crews to cut through injured people's clothing. Trauma shears have plastic handles and curved cutting blades with longer lever arms. The blunt bottom edge can be easily slid under clothes or seat-belts without gouging the skin or aggravating the injury. Longer moment arms make it easy to cut thick clothing such as jeans or leather.
3. Surgical Scissors
Surgical scissors have multiple types and are manufactured from very hard stainless steel in order for them to have sharper edges that don’t easily go blunt. Some of them have tungsten carbide coating along the cutting edges to provide extra toughness. Surgical scissors can have many different types formed by different combinations of blunt, sharp, curved, and bent blades. Each type is suitable for a particular type of surgical procedure. Most of these scissors provide quick and safe cutting in delicate and confined spaces. Here are some common types of surgical scissors.
Dissecting Scissors: These are general-purpose surgical scissors used for cutting through the outer layer of skin or tissue during surgical procedures.
Metzenbaum Scissors: These are used in sensitive procedures, such as heart surgery, for cutting delicate tissue. They can be curved or straight, with blunt tips, and are manufactured using tungsten carbide. Metzenbaum scissors typically have longer shanks.
Mayo Scissors: Developed by Mayo Clinic surgeons, these scissors are made from stainless steel or titanium and are used for cutting fascia—a type of tissue that surrounds muscles, veins and organs. Mayo Scissors are generally 6 inches long and can have straight or curved blades.
Tenotomy Scissors: These are used for delicate operations such as eye surgery or neurosurgery. Tenotomy Scissors can be blunt or sharp, straight or curved, depending upon their applications, which generally involve maneuvering and cutting in small and sensitive areas. They typically have very small blades and very long handles.
Iris Scissors: Iris Scissors are small scissors with long handles. These are used in ophthalmic surgery where fine, detailed cutting work is required.
Other than the above types, there are Operating Scissors, Stitch Scissors, Spring Scissors, and Plastic Surgery Scissors, each of which is used for specialized surgical applications. So, now you know how many types of medical scissors there are and what they are used for. Make sure you choose what you need.
You will find that scissors or shears are supplied with most of our office first aid kits - check out our range here
|
- Press Release
- Oct 6, 2022
Phosphine on Venus
An international team of astronomers detected phosphine (PH3) in the atmosphere of Venus. They studied its possible origins, but no known inorganic process, including supply from volcanoes and atmospheric photochemistry, can explain the detected amount of phosphine. The phosphine is believed to originate from unknown photochemistry or geochemistry, but the team does not completely reject the possibility of a biological origin. This discovery is crucial for examining the validity of phosphine as a biomarker.
“When we got the first hints of phosphine in Venus’s spectrum, it was a shock!” says team leader Jane Greaves of Cardiff University in the UK, who first spotted signs of phosphine in observations from the James Clerk Maxwell Telescope (JCMT), operated by the East Asian Observatory in Hawaiʻi. Confirming the discovery required the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, a more sensitive telescope. The reason she was so shocked is that phosphine can be produced by microbes on Earth, although the research team does not think it has found life on Venus.
How do you find life on a planet from quite far away? One way is to study its atmosphere and find a biomarker that can be evidence of the presence of living forms. If a molecule in the atmosphere is mainly produced by living organisms and the contribution from abiotic origins is negligibly small, it can be a good biomarker.
The international team led by Greaves, including Hideo Sagawa at Kyoto Sangyo University, studied the signal of phosphine in the radio spectra and found that the molecule’s abundance is about 20 parts per billion of the atmospheric molecules. This is a tiny amount, but enough to astonish the researchers, because they had supposed that most of any phosphorus present would bind with oxygen atoms: the Venusian atmosphere holds a huge number of oxygen atoms, although most of them are locked up in carbon dioxide (CO2).
The team carefully examined the possible origins of the phosphine: production by chemical reaction in the atmosphere driven by strong sunlight or lightning, supply from volcanic activity, and delivery by meteorites. The team found that all of these known processes failed to produce the observed amount of phosphine. The amount of phosphine molecules produced by those processes is 10,000 times smaller than the amount detected with the radio telescopes.
The researchers supposed that phosphine is produced by unknown photochemistry or geochemistry, but they also considered the possibility of biological origin. On Earth, some microbes produce and egest phosphine. If similar living organisms were in the Venusian atmosphere, they could produce the detected amount of phosphine.
“Although we concluded that known chemical processes cannot produce enough phosphine, there remains the possibility that some hitherto unknown abiotic process exists on Venus,” says Sagawa. “We have a lot of homework to do before reaching an exotic conclusion, including re-observation of Venus to verify the present result itself.”
Venus is Earth’s twin in terms of size. However, the atmospheres of the two planets are quite different. Venus has a very thick atmosphere, and the devastating greenhouse effect raises the surface temperature to as high as 460 degrees Celsius. Some researchers argue that the upper atmosphere is much milder and possibly habitable, but the extremely dry and deadly acidic atmosphere would make it difficult for a life form similar to those on Earth to survive on Venus.
Further observations with large telescopes on the Earth, including ALMA, and ultimately on-site observations and a sample return of the Venusian atmosphere by space probes will provide crucial information to understand the mysterious origin of the phosphine.
Phosphine has been detected in the atmospheres of Jupiter and Saturn. There the molecule forms deep inside the giant planets and is transported to the upper layers by atmospheric circulation. Venus, on the other hand, is a rocky planet, so similar chemistry cannot account for its phosphine.
|
Our world has, within the last year, undergone a major change. The pandemic has brought to the forefront just how important contamination control is with respect to hygiene, fresh air (including filtered air), and the circulation of quality air within premises.
Clean air supports a healthy environment, mental stability, and people's performance in the workplace, which at present is a priority in order to keep stress levels down. Good ventilation and a flow of fresh air at optimum air-change rates can reduce the risk of virus and germ transmission. Coupled with good hygiene practice, this increases your chances of staying healthy in the new normal we have come to know.
There has been a lot of talk about droplets and aerosols, and the differences between them. This needs some clarification:
Droplet transmission is infection spread through exposure to virus-containing respiratory droplets (i.e., larger and smaller droplets and particles) exhaled by an infectious person. Transmission is most likely to occur when someone is close to the infectious person, generally within about 6 feet.
Airborne transmission is infection spread through exposure to those virus-containing respiratory droplets comprised of smaller droplets and particles that can remain suspended in the air over long distances (usually greater than 6 feet) and time (typically hours).
Droplet transmission consists of exposure to larger droplets, smaller droplets, and particles when a person is close to an infected person. Airborne transmission consists of exposure to smaller droplets and particles at greater distances or over longer times.
These modes of transmission are not mutually exclusive. For instance, “close contact” refers to transmission that can happen by either contact or droplet transmission while a person is within about 6 feet of an infected person.
Scientific studies and guidelines have historically used a threshold of 5 μm to differentiate between large and small particles, but researchers now suggest that a threshold of 100 μm better differentiates the aerodynamic behaviour of particles; particles that fall to the ground within 2 m are likely to be 60–100 μm in size. Investigators have also measured the particle sizes of infectious aerosols and have shown that pathogens are most commonly found in smaller particles (typically <5 μm), which are airborne, can travel further, and are inhalable in the right circumstances.
SUMMARY: Airborne transmission is different from droplet transmission, as it refers to the presence of microbes within droplet nuclei, generally considered to be particles <5 μm in diameter. These particles can remain in the air for long periods of time and be transmitted to others over distances greater than 1 meter.
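Those size thresholds follow from how quickly droplets settle in still air. As a rough, hypothetical illustration (not from the original text), Stokes' law for a small sphere falling through air, v = 2ρgr²/(9μ), gives the following settling times for water droplets; all numeric values are assumed for the example:

    #include <stdio.h>

    /* Rough Stokes'-law settling of spherical water droplets in still air:
       v = 2 * rho * g * r^2 / (9 * mu), valid for small droplets. */
    int main(void)
    {
        double rho = 1000.0;   /* droplet density, kg/m^3 (water, assumed) */
        double g = 9.81;       /* gravitational acceleration, m/s^2 */
        double mu = 1.8e-5;    /* dynamic viscosity of air, Pa*s (assumed) */
        double diameters[] = { 100e-6, 5e-6 };  /* 100 um and 5 um */

        for (int i = 0; i < 2; i++) {
            double r = diameters[i] / 2.0;
            double v = 2.0 * rho * g * r * r / (9.0 * mu);
            printf("d = %3.0f um: falls 2 m in about %.0f s\n",
                   diameters[i] * 1e6, 2.0 / v);
        }
        return 0;
    }

Under these assumptions, a 100 μm droplet falls 2 m in a few seconds, while a 5 μm particle takes roughly three quarters of an hour, which is why the smaller particles remain suspended and travel with air currents.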
As new technologies advance, contagious variants pose a problem if they are not controlled. We need to focus on the impact of these biological diseases, especially the variants, and scientifically study the impact they have on human life.
Contamination control is required to limit the spread of such biological diseases. This, together with new technology, including containment/quarantine facilities, and best hygiene practices, including certificated high-grade PPE, will help ensure that we all have a future.
|
Author: Melvyn Green
More information on constructing simple, seismically safe buildings could go a long way toward reducing fatalities.
Studies of recent earthquakes have confirmed that loss of life occurs principally in single-family dwellings of unreinforced masonry, usually constructed by owners or local masons with site-found materials, such as ashlar or rubble stone, earth (adobe), or manufactured masonry block or brick. The 2005 earthquake in Pakistan resulted in more than 75,000 deaths from building collapses, primarily in rural dwellings and schools constructed mostly of stone and some manufactured masonry, some with a concrete bond beam and columns at corners, most with concrete roofs. The recent earthquake in Iran had similar results, but the structures there were constructed of earth rather than stone.
After disasters, many nations and organizations provide short-term and long-term relief. Donor organizations, such as the World Bank, which often provide or pay for replacement housing, want new structures to be earthquake-resistant both to ensure the safety of the occupants, and possibly to protect their investments. Plans for replacement buildings often call for concrete-masonry unit walls and concrete or wooden roofs. Although this kind of construction may be feasible in urban areas and towns and villages near roads, it may not be feasible in many other places. In the earthquake zone of Pakistan, for example, many villages are accessible only by trail. Thus construction materials for earthquake-resistant buildings would have to be carried long distances by hand or, at best, by mule. As a result, villagers often reconstruct or replace destroyed or damaged buildings using the same methods and materials that were used in the original construction.
One of the problems for people in remote, earthquake-prone areas is a lack of information about how to improve construction. In fact, most engineering research has been focused on the seismic rehabilitation of large, multistory structures rather than low-rise masonry and earthen buildings, and very little information is available on how to construct a simple building with built-in seismic safety.
Earthquake Response in the United States
Earthquakes in the United States have been followed by federally funded research to provide guidance to engineers on seismic strengthening. The loss of life in brick buildings in past earthquakes, particularly the 1971 Sylmar (Los Angeles area) earthquake, led to a research project funded by the National Science Foundation (NSF) carried out by a team with members from Agbabian Associates, S.B. Barnes and Associates, and Kariotis and Associates (ABK). Focused on unreinforced masonry buildings with wood-framed roofs and floors, the results of the project were reviewed by the professional community and later adopted into building codes. This earthquake-resistant construction, initially permitted by the city of Los Angeles as a “special procedure,” has since gained acceptance and is now included in the Uniform Code for Building Conservation and the International Existing Building Code. The “special procedure” has been the basis for strengthening several thousand brick buildings. Although these provisions have not brought buildings up to current building code standards, they have reduced the chances of death and injury in earthquakes.
The 1994 Northridge (Los Angeles area) earthquake caused significant damage to steel-moment-frame building connections. In the aftermath, the Federal Emergency Management Agency (FEMA) funded research by a joint team of the Applied Technology Council (ATC), California Universities for Research in Earthquake Engineering (CUREE), and the Structural Engineers Association of California (SEAOC) to test and evaluate steel-moment connections (see http://www.sacsteel.org/). This research led to a different detailing of steel joints for new buildings—several so-called FEMA connections and proprietary designs. Numerous other post-earthquake studies of concrete buildings have focused on beam-column joints and lightly reinforced buildings. In other parts of the world, however, much less has been done, especially for single-family dwellings and low-rise buildings.
Earthen buildings constructed of a mixture of sand and silt with clay as the binder are found on all continents and in all countries (Figure 1—see PDF version for figures). The most common types of earthen construction are adobe and rammed earth.
Adobe bricks are made in a mold and are usually 16 to 20 inches long and 8 inches or more wide, a size that can be lifted by one person. Adobe buildings are constructed in a running-bond pattern with a mortar of adobe mud between blocks.
In the rammed-earth construction method (Figure 2), earth is packed into forms in a manner similar to the placement of concrete. The side of a formed unit may be as much as 4 feet high by about 6 feet long, depending on the thickness of the wall. Joints between units are packed with mud.
Historically, a bond beam, usually of wood, was used in earthen buildings. In the seismic zones of California, a concrete bond beam, or collar, is constructed at the top of walls, usually at the roof line; in some buildings, a parapet may be constructed above the bond beam. In recent years, engineers in California have attached the bond beam to the wall with vertical connector rods. However, in many places around the world, the bond beam is not connected to the wall at all.
Entire villages around the world are constructed using these methods. Some research has been done on the seismic behavior of adobe construction in several countries, including Peru and the United States. The Getty Conservation Institute, through its Getty Seismic Adobe Program (GSAP), has supported testing of adobe construction and has published the results in several reports (Tolles et al., 2000).1
In mountainous areas, stone has been the traditional construction material for walls. Stone walls are erected as typical masonry lay-up with bond blocks between wythes (Figure 3). In some cases, buildings are of single-wythe or unbonded multi-wythe construction. The roof is constructed of wood trusses with a metal covering. Some later buildings were constructed with concrete bond beams and concrete corner columns (Figure 4). Many also had concrete roofs. A significant number of stone buildings collapsed in the Pakistan earthquake. Inspections after the earthquake revealed that the majority of collapsed buildings were of unbonded, single-wythe construction. These buildings also lacked direct connections between bond beams and the stone walls; such connections might have reduced the number of collapses.
Brick Masonry Buildings
Brick construction, which is widely used in many countries, also has been the cause of many deaths and injuries in earthquakes. Research in the United States has led to many improvements; India has also developed strengthening procedures.
In the United States, strengthening is based on the unreinforced wall acting as a vertical beam between the floor and the roof, or between floors in multistory structures. Connections to the roof and floors keep the walls in place.
In India, instead of a roof or floor diaphragm to brace the walls, the walls are allowed to span horizontally to perpendicular walls (Figure 5). The spacing between walls is limited and is within the traditional building wall-spacing, reflecting cultural preferences. In addition, seismic bands made of wire mesh plastered with a thin layer of concrete are placed at the roof, sill, and window lintel lines, and vertically at corners (Figure 6). This type of construction appears similar to the bond-beam approach, with additional ties at points where failures may occur. Interestingly, the placement of the seismic bands appears to be in line with research results on adobe buildings. It is not clear if this seismic-band type of construction is effective in all seismic zones, however.
Concrete Buildings with Masonry Infill
A common construction type used worldwide, especially for low-rise structures, is a concrete “frame” with unreinforced masonry infill. The “virtual diagonal strut” concept, in which the wall is regarded as a diagonal brace, is one way of evaluating such structures. Another is to consider the building a shear-wall structure. Out-of-plane loads require positive connections, usually epoxy-adhered bolts and metal connectors, between the wall and the bracing diaphragms.
It may not be possible to provide the levels of safety (life safety in the 475-year event and collapse prevention in the 2,500 year event) as envisioned in U.S. building codes for owner-built structures in other parts of the world. Nevertheless, all of the construction types outlined in this paper can be improved.
A number of studies and projects have been carried out over the years around the world, and many countries have assembled, or are assembling, building code provisions for different types of construction. However, these efforts have not been coordinated so that engineers and code authorities can make effective use of them. The Earthquake Engineering Research Institute online World Housing Encyclopedia2 may be a potential resource and repository for such information.
It has been suggested that a deterministic approach be taken in analyzing simple, low-rise buildings. This would involve reviewing how these buildings fail and determining the sequence of failure. For example, we are aware that connections between elements are critical, and research might be directed toward improving connections between bond beams and walls. Another study might determine if a corrugated metal roof could be mobilized to act as a diaphragm with simple connections.
Another study might focus on improving single-wythe masonry with connections or stiffening elements to make structures safer. We also need guidelines for improving connections and the performance of low-rise, concrete frame buildings with masonry infill.
Improving existing owner-built buildings constructed with site-found materials would improve building performance and reduce the number of deaths and amount of damage in earthquakes. Cooperative efforts among nations could provide information to building owners and builders. Activities could be conducted by regional groups working with world bodies such as the United Nations or with individual countries.
Tolles, E.L., E.E. Kimbro, F.A. Webster, and W.S. Ginell. 2000. Seismic Stabilization of Historic Adobe Structures. Final Report of the Getty Seismic Adobe Project. Los Angeles, Calif.: Getty Conservation Institute.
1 More information about the Getty Seismic Adobe Program is available online at: http://www.getty.edu/conservation/publications/newsletters/ 11_1/news1_1.html.
2 Available online at: http://www.world-housing.net/.
|
IQ tests consist of different types of questions. The questions focus on various aspects of the examinee, to get as accurate an analysis as possible.
Is familiarity with such questions likely to give the examinee an advantage? Can practicing sample questions of an IQ test affect your IQ score? The answer is debated, but what is certain is that practice cannot harm your score.
In this article, we will present IQ test questions examples. Also, we will understand how to solve them correctly.
- The questions are of varying degrees of difficulty.
- The correct answer appears after each question.
- If you want to examine yourself, we recommend that you do not scroll down the page, but try to solve the question yourself.
Question 1 explanation:
In this question, there are nine different shapes. You can notice that there is a relationship between the shapes. The relationship is both horizontally and vertically.
Vertical: In each row from top to bottom, the circle is on the right side of the line going to the middle and to the left.
Horizontal: In each column from left to right, the line starts up, goes to the middle, and goes down.
The missing shape consists of a bottom line, with the circle in its left part. Therefore, the correct answer is C.
Question 2 explanation:
Such questions are more confusing than the previous question. The challenging part is to understand that although all the shapes are similar to each other, there is no relationship between the columns.
If we look at each column individually - we will not find a pattern that will lead us to a solution. The pattern will only appear if you look at each line individually.
The rightmost shape in each row contains only the overlapping green parts that are in the shapes to its left.
Therefore, the correct answer is D.
Question 3 explanation:
These questions are also called mirror questions. The purpose of these questions is to test your ability to imagine what the shape will look like from a different angle.
The shape is not necessarily a mirror reflection; it may also be rotated through different angles: 90, 180, or 270 degrees. In this case, the rotated object is a shape, but it can also be text.
The correct answer is F.
Question 4 explanation:
Arithmetic and geometric sequences are also very common.
It is important to know that these questions do not require extensive mathematical knowledge. Only addition, subtraction, multiplication, and division.
In this question, the pattern is the subtraction of 1, the addition of 2, the subtraction of 3, the addition of 4, and so on. Therefore the correct answer is A.
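To make the pattern concrete, the following sketch generates such a series mechanically; the starting value of 10 is hypothetical, since the question's actual numbers are not reproduced here.

    #include <stdio.h>

    /* Prints a series that alternately subtracts 1, adds 2,
       subtracts 3, adds 4, and so on. */
    int main(void)
    {
        int term = 10;                /* hypothetical starting value */
        for (int step = 1; step <= 8; step++) {
            printf("%d ", term);
            if (step % 2 == 1)
                term -= step;         /* odd steps subtract */
            else
                term += step;         /* even steps add */
        }
        printf("\n");                 /* prints: 10 9 11 8 12 7 13 6 */
        return 0;
    }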
Question 5 explanation:
Some questions cannot be prepared for. These questions test your ability to think outside the box and are therefore considered more difficult. What characterizes them is that some people find the solution relatively quickly, while others will not find it at all.
In this question, you can draw all the shapes with a pen, without repeating the lines and without lifting the pen from the paper. The correct answer is F, because it is the only shape that maintains the same pattern.
If you have not found the solution to this type of question, there is something crucial to know:
There is no need to stress over a single question. The effect of one question on your score is minor, and wasting time on it will leave you less time to concentrate on the following questions, which may hurt your score.
In addition, do not carry the feeling of having failed one question through the rest of the test and let it lower your confidence. Do not let a single question affect the overall test; just leave it for the end.
It is important to note that the above questions exist in Culture Fair IQ tests.
In tests of this type, the examinee is not asked vocabulary questions, to avoid giving an advantage to speakers of a particular language.
In non-Culture fair IQ tests, there are other types of questions, such as vocabulary questions, logic, word analogies, and more.
Discover your accurate IQ, and take Brainalytics' free IQ test →
|
Menu: Solids / Sphere
Draws a solid sphere.
Point 1: Center of the sphere
Point 2: Point on surface of the sphere
Specify the number of sides or facets around both the longitude and latitude of the sphere in the window. The more facets the sphere has, the more spherical it appears.
The first point is the center of the sphere. The second point determines the radius of the sphere. This point can lie in one of three places: Pole, Vertex, or Midpoint.
Vertex: The equator of the sphere is inscribed by a circle of that radius. (The second point lies on the equator at one of the longitudinal divisions.)
Midpoint: The equator of the sphere circumscribes a circle of that radius. (The second point will be on the equator midway between two longitudinal lines.)
Pole: The second point is at the pole, and the axis lies along the line between Points 1 and 2.
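The Vertex and Midpoint options are the familiar inscribed-versus-circumscribed polygon relationship. The short sketch below (illustrative only, not part of the software) shows where the equator's vertices land for each option, given a radius r defined by the second point and n longitudinal facets:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        int n = 16;        /* facets around the equator (assumed) */
        double r = 5.0;    /* radius set by the second point (assumed) */

        /* Vertex: the equator's vertices lie on the circle of radius r. */
        double vertex_option = r;

        /* Midpoint: the facet midpoints touch the circle of radius r,
           so the vertices sit slightly outside it. */
        double midpoint_option = r / cos(PI / n);

        printf("Vertex option:   vertices at radius %.4f\n", vertex_option);
        printf("Midpoint option: vertices at radius %.4f\n", midpoint_option);
        return 0;
    }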
|
The Silurians are a species of lizard-like creatures that appeared in the cult science fiction TV show Doctor Who. They achieved industrial expertise about 450 million years ago, long before humans evolved on Earth.
The Silurians are fictional, of course. But the idea of advanced prehistoric life is an intriguing one and raises a variety of interesting questions. Not least of these is this: if an industrial civilization had existed in the past, what traces would it have left?
Today we get an answer thanks to Gavin Schmidt at the NASA Goddard Institute for Space Studies in New York city and Adam Frank at the University of Rochester. These guys have a name for the idea that an industrial civilization may have predated humanity: the Silurian hypothesis. They study the signature that our own civilization is likely to leave behind and ask whether it will be detectable millions of years from now. Their conclusion is that our probable impact on the planet will be palpable but in some ways hard to distinguish from various other events in the geological record.
Their work has some interesting implications for how we should study Earth and the impact we have on it. The research should also help astrobiologists decide what to look for elsewhere in the universe.
Schmidt and Frank begin by setting out just how little we know about ancient Earth. The oldest part of Earth’s surface is the Negev Desert in southern Israel, which is 1.8 million years old. Older surfaces exist only in exposed areas or as a result of mining and drilling operations. Given these constraints, the evidence of activity by Homo sapiens stretches back some 2.5 million years—not really that far in geological terms.
The ocean floor is relatively young too, because ocean crust is constantly recycled. As a result, all ocean sediment post-dates the Jurassic Period and is therefore less than 170 million years old.
In any case, say Schmidt and Frank, the fraction of life that gets fossilized is tiny. Dinosaurs roamed Earth for some 180 million years, and yet only a few thousand near-complete specimens exist. Modern humans have existed for just a few tens of thousands of years. “Species as short-lived as homo sapiens (so far) might not be represented in the existing fossil record at all,” say Schmidt and Frank.
What of human artifacts—roads, buildings, baked-bean tins, and silicon chips? These, too, are unlikely to survive long, or to be found even if they do. “The current area of urbanization is less than 1% of the Earth’s surface,” point out the researchers.
“We conclude that for potential civilizations older than about 4 million years, the chances of finding direct evidence of their existence via objects or fossilized examples of their population is small,” they say.
But there is another type of evidence: our civilization also leaves a chemical footprint.
Schmidt and Frank are interested in industrial societies, which they define as those capable of extracting energy from the environment. By this definition, humanity has been industrial for about 300 years. “Since the mid-18th Century, humans have released over 0.5 trillion tons of fossil carbon via the burning of coal, oil and natural gas,” say Schmidt and Frank.
That has had a significant impact on the planet. Since all this carbon was originally biological, it contains less carbon-13 than the much larger pool of inorganic carbon. Releasing it is therefore changing the ratio of C-13 and C-12, a signature that should be visible in the geological record.
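Geochemists express such shifts in delta notation: the per-mil deviation of a sample's carbon-13/carbon-12 ratio from a reference standard. The sketch below uses an approximate VPDB reference value and a made-up sample ratio:

    #include <stdio.h>

    /* Delta-13-C: per-mil deviation of a sample's 13C/12C ratio
       from a reference standard. */
    double delta13C(double r_sample, double r_standard)
    {
        return (r_sample / r_standard - 1.0) * 1000.0;
    }

    int main(void)
    {
        double r_vpdb = 0.011180;    /* approximate VPDB reference ratio */
        double r_sample = 0.011150;  /* hypothetical measured ratio */
        printf("delta13C = %.2f per mil\n", delta13C(r_sample, r_vpdb));
        return 0;
    }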
The temperature increase caused by this carbon release is about 1 °C. This, too, should have an observable signature, in the way it changes the oxygen-18 isotope ratio in carbonates. Agriculture and nitrogen cycling in fertilizers are also changing the isotopic signature of nitrogen.
Agriculture and deforestation both increase soil erosion, as does increased rainfall due to global warming. So ocean sediment should be changing too, thanks to eroded soil washing into the sea.
On top of all this, use of metals such as lead, chromium, rhenium, platinum, and gold has increased thanks to mining activities. And these will presumably be flushed into the ocean at greater rates than before industrialization.
Humans are also changing the fossil record. There has been a widespread increase in small animals such as mice and rats. That ought to be noticeable, as will be the increased extinction rate of other species. “Large mammal extinctions that occurred at the end of the last ice age will also be associated with the onset of the Anthropocene,” say Schmidt and Frank.
Then there are the chemicals we make. Humanity has released large volumes of synthetic chlorinated compounds into the environment, along with huge volumes of plastics. Just how long these chemicals or their daughter products will be detectable isn’t clear.
There is even the possibility of a nuclear signature, perhaps from a civilization-ending war. Curiously, the effects of such a war may not last long in geological terms: the half-lives of most of the relevant isotopes are just too short to matter on this time scale.
Two possible exceptions are plutonium-244, with a half-life of 80.8 million years, and curium-247 with a half-life of 15 million years. “[These] would be detectable for a large fraction of the relevant time period if they were deposited in sufficient quantities, say, as a result of a nuclear weapon exchange,” say the researchers.
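The arithmetic behind "detectable for a large fraction of the relevant time period" is ordinary exponential decay: after time t, a fraction 2^(-t/t_half) of an isotope remains. A quick sketch using the half-lives quoted above (the elapsed time is an arbitrary example):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double pu244 = 80.8;   /* half-life of plutonium-244, Myr */
        double cm247 = 15.0;   /* half-life of curium-247, Myr */
        double t = 100.0;      /* example elapsed time, Myr */

        /* Fraction surviving after time t is 2^(-t / half_life). */
        printf("Pu-244 after %.0f Myr: %.3f of the original remains\n",
               t, pow(2.0, -t / pu244));
        printf("Cm-247 after %.0f Myr: %.4f of the original remains\n",
               t, pow(2.0, -t / cm247));
        return 0;
    }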
Schmidt and Frank conclude that humanity’s existence should be visible in the geological record. “The Anthropocene layer in ocean sediment will be abrupt and multi-variate, consisting of seemingly concurrent specific peaks in multiple geochemical proxies, biomarkers, elemental composition, and mineralogy,” they say.
However, this signature may not be unique. The researchers have identified a number of events in the geological record that look similar to the impact humans are having (see diagram). For example, a sudden global change occurred in carbon and oxygen isotope levels some 56 million years ago in an event known as the Paleocene-Eocene Thermal Maximum.
This coincided with a large increase in carbon levels and a temperature rise of between 5 and 7 °C over a period of 200,000 years or so, a mere sneeze in geological terms.
Nobody knows what caused this event, but one idea is that at this time, igneous rock in the North Atlantic expanded into organic sediment, heating it and releasing carbon. This North Atlantic Igneous Province later became Iceland and related land masses.
That’s not the only unexplained change in the geological signature. Numerous other changes in temperature, carbon deposits, ocean salinity, and so on are awaiting explanation. “There are undoubted similarities between previous abrupt events in the geological record and the likely Anthropocene signature in the geological record to come,” say Schmidt and Frank.
Of course, none of these events indicate the presence of an earlier industrial civilization. “The Silurian hypothesis cannot be regarded as likely merely because no other valid idea presents itself,” say Schmidt and Frank, who are keen to head off unconstrained speculation.
Nevertheless, their work raises some intriguing questions and points to the value of further research on how long synthetic compounds will survive in the environment. “We recommend further synthesis and study on the persistence of uniquely industrial byproducts in ocean sediment environments,” say Schmidt and Frank, adding: “Are there other classes of compounds that will leave unique traces in the sediment geochemistry on multi-million year timescales?”
That’s interesting work written up in an entertaining paper. It explores an unusual idea that has the potential to change the way we think about humanity and places our impact in a broader perspective. It also provides a background for astrobiologists studying other planets.
Mars was once much wetter and warmer. If it ever hosted an industrial society, this paper maps out some of the signatures that might show up in the geological record there. Venus, too, was once more hospitable. Then there are the oceans of Europa and, ultimately, planets around other stars.
Nevertheless, our industrial civilization may be unique in the universe. But much more exciting is the possibility that it is just one of many, perhaps millions of others. Schmidt and Frank have set out some of the groundwork for finding them.
Ref: arxiv.org/abs/1804.03748 : The Silurian Hypothesis: Would It Be Possible To Detect An Industrial Civilization In The Geological Record?
|
Pollination and the Pollinators
By: Stefani Harrison
It’s that time of year again: the temperatures are rising as spring arrives. With spring, the much-needed season of pollination begins.
Pollination is defined as the process of moving pollen (genetic material) from one flowering plant to another. This transfer of DNA leads to the creation of seeds and fruit in flowering plants and provides genetic diversity. More than 80 percent of flowering plants require pollinators. Without pollinators, we could lose foods like almonds, apples, avocados, bananas, chocolate, coffee, and more!
Honey bees are vital to the survival of crops. In fact, honey bees are so essential that every year millions of dollars are spent transporting them across the country in order to ensure crops like blueberries, cantaloupes, honeydew, raspberries, and watermelons can produce fruit. But did you know that honey bees are not native to the United States?
Honey bees, Apis mellifera, belong to the insect family Apidae and are native to Europe. They were introduced when European colonists brought them to the Jamestown colony, in what is now the state of Virginia, in 1622. They were brought for their honey and wax production as well as their ability to pollinate the crops the colonists brought with them, including apples, oranges, kiwifruit, and pears.
Native pollinators are often overlooked but are even more essential to the survival of many native plants and ecosystems. Florida’s three major pollinators are insects, birds, and bats. Insects are the most common and populous pollinators, visiting more plants overall. Bees, butterflies, wasps, ants, flies, moths, and beetles serve as major insect pollinators. Birds and bats, however, pollinate differently and can be more specialized, favoring larger flowering plants that require more pollen and the transportation of seeds and fruits. Birds pollinate many of Florida’s wildflowers while also transporting seeds. Bats pollinate nighttime flowering plants including mangoes, bananas, and guava and often transport fruit.
Pollinators are in Trouble
Pollinators are in decline thanks to human impact and we need your help to protect them. In Florida, pesticides and habitat loss play critical roles in species decline. Many pesticides aimed at limiting pests are also deadly to pollinators, and large areas of lawn limit available pollinator habitats. You can help make a difference and choose to help pollinators by:
- Mowing less
- Using fewer pesticides and applying them less often
- Planting a variety of native plants in your garden
- Varying your yard space by planting more than just grass
The majority of insects are beneficial to the environment and are often overlooked. Help the Foundation save native habitat by providing pollinators a place in your yard and by donating to help us maintain native ecosystems.
|
Biochar, also known as black carbon, is a product derived from carbon-rich organic materials and found in soils in very stable solid forms. It is essentially a pyrolysis product of organic residues that has received wide attention as a means of mitigating climate change. Biochars can persist in the soil at various depths for long periods of time, typically thousands of years. Biochars are well known to improve soil physical and chemical properties, such as increasing soil fertility and productivity.
Biochars are produced by pyrolyzing biomass at temperatures above 300°C in the absence of oxygen. Degraded, dry lands, soils with poor fertility, and soils low in organic matter can benefit greatly from biochar amendments. Biochar also improves nutrient and water-holding capacities, which increases fertility and productivity and improves crop management efficiency.
Biochar as a soil amendment can help sequester stable carbon in soils and combat climate change. That said, responses to biochars may depend on the type of biochar used and on its specific characteristics, since those characteristics determine its fitness for specific agronomic or environmental purposes. Additional benefits come from biochar’s ability to adsorb contaminants, including inorganic and organic pollutants in soil and leaching waters, ultimately improving soil and water quality and, with it, soil fertility and crop productivity.
Carbon sequestration and greenhouse gas mitigation potential of biochar
Biochar technology has proven to be one of the best ways to achieve carbon sequestration and greenhouse gas mitigation. Biochar potential is determined by several basic factors, including:
- Efficiency of the crop production technology
- Available renewable biomass resource that can be sustainably harvested
- Stability of biochar in the soil over a long period of time
- Adoption and implementation of biochar investment schemes to achieve high yield
- Production and utilization of co-produced bioenergy to replace fossil energy sources
Sustainable biochar can be used now to help combat climate change by holding carbon in soil and by replacing fossil fuel use. Research shows that the stability of biochar in soil significantly exceeds that of un-charred organic matter. Moreover, because biochar retains nitrogen, emissions of nitrous oxide (a potent greenhouse gas) may be reduced. Methane (another strong greenhouse gas) generated by the natural decomposition of waste can also be reduced by turning agricultural waste into biochar.
To read the original full article published on Technology Times, click here.
|
Activities that support a happy heart!
Category: Social & Emotional
Healthy living practices are very important for young children to develop and follow. Help children develop positive practices for a healthy life through these hands-on activities:
- Create an Alphabet Exercise book
- Twirl, jump, glide in a Fitness Dance
- Play a ‘healthy grocery shopping’ game
- Fill up their plate based on Choose My Plate guidelines of good nutrition
|
Inductor Package Styles
Inductors are passive devices used in electronic circuits to store energy in the form of a magnetic field. They are the complement of capacitors, which store energy in the form of an electric field. An ideal inductor is the equivalent of a short circuit (0 ohms) for direct currents (DC), and presents an opposing force (reactance) to alternating currents (AC) that depends on the frequency of the current. The reactance (opposition to current flow) of an inductor is proportional to the frequency of the current flowing through it. Inductors are sometimes referred to as "coils" because most inductors are physically constructed of coiled sections of wire.
The property of inductance that opposes a change in current flow is exploited for the purpose of preventing signals with a higher frequency component from passing while allowing signals of lower frequency components to pass. This is why inductors are sometimes referred to as "chokes," since they effectively choke off higher frequencies. A common application of a choke is in a radio amplifier biasing circuit where the collector of a transistor needs to be supplied with a DC voltage without allowing the RF (radio frequency) signal to conduct back into the DC supply.
When used in series (left drawing) or parallel (right drawing) with its circuit complement, a capacitor, the inductor-capacitor combination forms a circuit that resonates at a particular frequency that depends on the values of each component. In the series circuit, the impedance to current flow at the resonant frequency is zero with ideal components. In the parallel circuit (right), impedance to current flow is infinite with ideal components.
Real-world inductors made of physical components exhibit more than just a pure inductance when present in an AC circuit. A common circuit simulator model is shown to the left. It includes the ideal inductor with a parallel resistive component that responds to alternating current. The DC resistive component is in series with the ideal inductor, and a capacitor connected across the entire assembly represents the capacitance present due to the proximity of the coil windings. SPICE-type simulators use this or an even more sophisticated model to facilitate more accurate calculations over a wide range of frequencies.
Related Pages on RF Cafe:
- Inductors & Inductance Conversions
- Standard Inductor Values
The linked website has a very sophisticated calculator for coil inductance that allows you to enter the conductor diameter.
Equations (formulas) for combining inductors in series and parallel are given below. Additional equations are given for inductors of various configurations.
Total inductance of series-connected inductors is equal to the sum of the individual inductances. Keep units constant.
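As a formula (a standard circuit-theory reconstruction, since the original equation image did not survive extraction):

$$L_{total} = L_1 + L_2 + L_3 + \cdots + L_N$$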
Closely Wound Toroid
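The original equation image is missing here. A commonly published expression for a closely wound toroid of rectangular cross section, assuming N turns, inner radius a, outer radius b, and core height h (these variable names are assumptions, since the original symbols did not survive), is:

$$L = \frac{\mu_0 \mu_r N^2 h}{2\pi} \ln\!\left(\frac{b}{a}\right)$$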
Coaxial Cable Inductance
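The equation image is also missing here. The standard expression for the inductance of a coaxial cable of length l, inner conductor radius a, and shield inner radius b (assumed variable names) is:

$$L = \frac{\mu_0 \mu_r\, l}{2\pi} \ln\!\left(\frac{b}{a}\right)$$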
Straight Wire Inductance
These equations apply when the length of the wire is much greater than the wire diameter. The ARRL Handbook presents the equation for units of inches and µH:
For lower frequencies - up through about VHF, use this formula:
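The formula image itself is missing; a widely published low-frequency approximation, consistent with the ¾ term referenced below (l = wire length and d = wire diameter, both in inches, as an assumption), is:

$$L_{\mu H} = 0.00508\, l \left[\ln\!\left(\frac{4l}{d}\right) - \frac{3}{4}\right]$$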
Above VHF, skin effect causes the ¾ in the top equation to approach unity (1), so use this equation:
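Reconstructing the missing image under the same assumptions (l and d in inches):

$$L_{\mu H} = 0.00508\, l \left[\ln\!\left(\frac{4l}{d}\right) - 1\right]$$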
Straight Wire Parallel to Ground Plane w/One End Grounded
The ARRL Handbook presents this equation for a straight wire suspended above a ground plane, with one end grounded to the plane:
a = wire radius,
l = wire length parallel to ground plane
h = height of wire above ground plane to bottom of wire
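The ARRL equation image did not survive extraction. As a rough stand-in only (this is the simpler transmission-line approximation for h much greater than a, not necessarily the exact ARRL form), the inductance is approximately:

$$L \approx \frac{\mu_0\, l}{2\pi} \ln\!\left(\frac{2h}{a}\right)$$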
Parallel Line Inductance
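The equation image is missing; for two parallel wires of radius a with center-to-center spacing D (assumed variable names), the standard per-unit-length result is:

$$\frac{L}{l} = \frac{\mu_0}{\pi} \cosh^{-1}\!\left(\frac{D}{2a}\right) \approx \frac{\mu_0}{\pi} \ln\!\left(\frac{D}{a}\right) \quad (D \gg a)$$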
Multi-Layer Air-Core Inductance
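The equation image is missing; Wheeler's well-known approximation for a multi-layer air-core coil, assuming a = average coil radius, b = coil length, and c = winding depth, all in inches (assumed variable names), is:

$$L_{\mu H} = \frac{0.8\, a^2 N^2}{6a + 9b + 10c}$$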
Total inductance of parallel-connected inductors is equal to the reciprocal of the sum of the reciprocals of the individual inductances. Keep units constant.
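As a formula (standard circuit theory, reconstructing the missing image):

$$L_{total} = \left(\frac{1}{L_1} + \frac{1}{L_2} + \cdots + \frac{1}{L_N}\right)^{-1}$$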
Inductance Formula Constants and Variables
The following physical constants and mechanical dimensional variables apply to the equations on this page. Units for equations are shown inside brackets at the end of equations; e.g., [inches, Henries] means lengths are in inches and inductance is in Henries. If no units are indicated, then any may be used so long as they are consistent across all entities; i.e., all meters, all µH, etc.
C = Capacitance
L = Inductance
N = Number of turns
W = Energy
εr = Relative permittivity (dimensionless)
ε0 = 8.85 x 10^-12 F/m (permittivity of free space)
µr = Relative permeability (dimensionless)
µ0 = 4π x 10^-7 H/m (permeability of free space)
1 meter = 3.2808 feet <—> 1 foot = 0.3048 meters
1 mm = 0.03937 inches <—> 1 inch = 25.4 mm
Also, dots (not to be confused with decimal points) are used to indicate multiplication in order to avoid ambiguity.
Inductive reactance (XL, in Ω) is proportional to the frequency (ω, in radians/sec, or f, in Hz) and inductance (L, in Henries). Pure inductance has a phase angle of 90° (voltage leads current with a phase angle of 90°).
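As a formula (standard, in place of the missing image):

$$X_L = \omega L = 2\pi f L$$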
Energy Stored in an Inductor
Energy (W, in Joules) stored in an inductor is half the product of the inductance (L, in Henries) and the square of the current (I, in amps) through the device.
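In symbols (reconstructing the missing image):

$$W = \frac{1}{2} L I^2$$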
Voltage Across an Inductor
The inductor's property of opposing a change in current flow causes a counter EMF (voltage) to form across its terminals, opposite in polarity to the applied voltage.
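In symbols (a standard relation, in place of the missing image):

$$v(t) = L\,\frac{di(t)}{dt}$$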
Quality Factor of Inductor
Quality factor is the dimensionless ratio of reactance to resistance in an inductor.
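In symbols (standard definition, in place of the missing image):

$$Q = \frac{X_L}{R} = \frac{2\pi f L}{R}$$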
Single-Layer Round Coil Inductance
Formula for d >> a:
In general for a = wire radius:
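Both formula images are missing here. A frequently quoted approximation (Wheeler's single-layer formula), assuming r = coil radius and b = coil length, both in inches (assumed variable names, since the original symbols did not survive), is:

$$L_{\mu H} = \frac{r^2 N^2}{9r + 10b}$$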
Note: If lead lengths are significant, use the straight wire calculation to add the inductance of the leads.
Finding the Equivalent "RQ"
Since the "Q" of an inductor is the ratio of the reactive component to the resistive component, an equivalent circuit can be defined with a resistor in parallel with the inductor. This equation is valid only at a single frequency, "f," and must be calculated for each frequency of interest.
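The equation image is missing; the equivalent parallel resistance implied by this definition (a good approximation for reasonably large Q) is:

$$R_Q = Q \cdot X_L = Q \cdot 2\pi f L$$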
|
Coding4Kids: Python/Scratch Programming
View all 12 dates
In business since January, '14
Want to become a coding genius?
Coding skills are among the most in-demand in today’s job market—get your real-world experience with this introduction to computational thinking. Once you have the basics, go further to see how your coding knowledge applies to machine learning! You’ll start with logic games and Python—the fastest-growing programming language available—then move into object-oriented concepts. As your understanding of coding deepens, explore creating simple neural networks, and prepare for more advanced machine learning courses!
We are offering Scratch and Python for young kids. We create a strong foundation and help the kids practice, making coding easy for them. We hope to spark interest in game development and data science by the end of each course. This 12-week course teaches basic or advanced material depending on the level of every student.
The classes are streamed on Zoom's webinar platform. Participants are able to join the video from their homes and interact with us in real-time, it is a fun and engaging experience.
Week 1: Lesson 1 - Students will start off right by going on a brief tour of both the language and the environment
Week 2: Lesson 2 - Students will get up to speed with Python variables, and then learn how to use these variables to get input from the user
Week 3: Lesson 3 - Students will practice with Python’s if syntax and learn how to write both simple and complex conditions to select which statements should be run (see the short sketch after this list)
Week 4: Lesson 4 - Students will learn how to write both while and for loops in Python so that your statements can be repeated over and over until some condition is met
Week 5: Lesson 5 - Students will learn how to write modular programs by creating functions
Week 6: Lesson 6 - Students will get an introduction to modular programming in Python
Week 7: Lesson 7 - Students will explore Python graphics
Week 8: Lesson 8 - Students will learn two of Python’s basic data structures: lists and tuples
Week 9: Lesson 9 - Students will learn how to use dictionaries to write useful programs in fewer lines of code that’ll execute in a shorter amount of time
Week 10: Lesson 10 - Students will learn how to read from and write to data files
Week 11: Lesson 11 - Students will learn about Python’s exceptions and learn how to handle them to keep the program up and running, even when something unexpected happens.
Week 12: Lesson 12 - Students will learn how to display text with labels and get user data with text boxes, buttons, radio buttons, and checkboxes
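To give a feel for the early lessons, here is a minimal sketch of the kind of program students write around Lessons 2 and 3 (variables, user input, and if conditions); the actual course exercises may differ:

```python
# A small example combining variables, input, and if/else conditions
name = input("What is your name? ")       # read text typed by the user
age = int(input("How old are you? "))     # convert the typed text to a whole number

if age >= 10:
    print("Hi " + name + ", you're ready for Python!")
else:
    print("Hi " + name + ", let's start with Scratch!")
```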
Duration: 12 weeks
Length of Class: 60 minutes
We look forward to helping every student become a coding genius!
|
|331260. Tue May 06, 2008 5:33 am
Smoke signals were used by Native American Indians to send messages. There was no standardized code for these signals. When used to send secret messages the signs were devised privately to suit a particular end by the transmitting person and the receiver. Since anyone could see the signals, unless there was a secret significance the information would be conveyed to friend and enemy alike.
However, despite this, there were a few more or less recognized smoke signals including the following; one puff meant ATTENTION, two puffs meant ALL'S WELL, three puffs of smoke, or three fires in a row, signified DANGER, TROUBLE OR A CALL FOR HELP.
Amongst the Apache, the sighting of one puff quickly losing its geometric shape indicated that a strange party had been spotted approaching. If those "puffs" were frequent and rapidly repeated, it transmitted the message that "the stranger approaching" was in fact many in number and armed.
To send messages over long distances tribes would make a chain of fire to send the message across the land.
There was a method to smoke signaling. First sending stations were built on top of hills, so that the signals could be seen from a long distance away. Fires were built in what are now called ‘fire bowls’; saucer shaped holes about 5 feet across lined with stones to stop the fire from escaping. Poles were laid over the ‘bowls’ with skins attached. These skins were used to fashion the smoke from the fire into signals. Smoke could be made to curl in spirals, ascend in puffs, circles, and even parallel lines. Some signals resembled the letter V or Y and some were zigzag.
Some of these "fire bowls" have been mapped and studied, in particular those that lay in close proximity to the "Warrior Path" that ran between encampments of Shawnee near the Scioto River and Ohio River near Richmondale. This ridge and "path" run at elevations of 600-900 feet.
It is possible that Magellan saw smoke signals as he was approaching Tierra del Fuego (Land of Smoke), although he may well have seen the smoke or lights of natural phenomena. The local Yámana tribe (now extinct) used fire to send messages by smoke signals, for instance if a whale drifted ashore. The large amount of meat required notification of many people, so that it would not decay.
Indian sign language
Native American Indians of the Plains communicated through sign language. It was a way to communicate between tribal groups irrespective of differences in their spoken languages. Little Raven, once head chief of the Southern Arapahoes, said ‘The summer after President Lincoln was killed we had a grand gathering of all tribes to the east and south of us. Twenty five different tribes met near old Fort Abercrombie on the Wichita River. The Caddos had a different sign for horse, and also for moving, but the rest were made the same by all the tribes’. This language was probably the first American language and may be the only American universal language.
Smoke signals used to be sent from the towers of the Great Wall of China by soldiers stationed there. They used a mixture of wolf dung, saltpeter and sulfur to create dense smoke that was easily seen from a distance. By passing the message from tower to tower, they were able to relay a communiqué as far as 300 miles in only a few hours.
The Boy Scouts of America are taught to use three puffs of smoke as a signal of distress. They can also use three gun-shots or three whistles.
The number three, whether in shots, fires, whistles or smokes, is the distress signal of all woodsmen plainsmen, and outdoor people in general.
Smoke signals are used during the process of choosing a new Pope to tell the crowds gathered in St Peter’s Square whether or not a decision has been made. 15-20 days after the death of the incumbent, the Cardinals meet in the Sistine Chapel under the Last Judgement. They are not allowed to leave until a Pope is chosen. There are a series of ballots, and after each one they burn the ballot papers. The Cardinals add chemicals (since 1958) or traditionally would add a bit of damp straw to the paper to make the smoke black if they haven't got a winner, or leave it white if they have, so that crowds gathered in St. Peter’s Square know whether or not a new Pope has been chosen. Since 2005, a bell is rung after a successful election in case the colour of the smoke is ambiguous.
This process can go on for a long time: in 1271 they went on voting for 33 months, until the populace called in builders to wall them up and remove the roof (ostensibly to make it easier for the Holy Spirit to get in). After that they introduced a system whereby the Cardinals were locked in and put on rations that dwindled gradually to bread and water. It worked: the next Pope was chosen in a day. For the duration of the Conclave each cardinal is assigned a 'cell', some of which have showers (these are obviously quite sought after).
When John Paul I was elected in 1978, the chimney leaked black smoke into the chamber causing a lot of coughing amongst the cardinals.
The conclave is the oldest ongoing method for choosing the leader of an institution.
Australian Aborigines would send up smoke to notify others of their presence, particularly when entering lands which were not their own. However, these were not complex signals; smoke simply told others where you were located.
The Noon Gun, South Africa
The Noon Gun has been used to keep time in Cape Town, South Africa since 1806. The gun is situated on Signal Hill, close to the centre of the city. Traditionally the gun would be fired so that ships at sea and in port could check their chronometers were accurate; they would look for smoke from the gun rather than the sound because light travels much faster than sound. Nowadays the gun is fired remotely at noon, from the master clock in the South African Astronomical Observatory.
According to Polybius, the Greeks lit torches to send signals. They had a code which involved splitting the letters of the alphabet into 5 groups of 5 letters and signalling with the help of ten torch bearers. The way they did it went a little like this:
You construct a diagram something like this:
A B C D E
F G H I J
K L M N O
P Q R S T
U V W X Y
Since the Greek alphabet only has 24 letters, the issue of what to do about Z didn't arise. Then, you note that E is in Row 1, Column 5, so to send an E one displayed one torch and then five torches.
To write the word ALAN you would signal 1-1 (A), 3-2 (L), 1-1 (A), 3-4 (N).
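As an illustration of the scheme, here is a minimal Python sketch of the torch code described above; the grid layout follows this post's diagram rather than any standard reference:

```python
# Polybius-style torch signaling: each letter maps to (row, column) torch counts
GRID = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXY"]  # Z omitted, as noted above

def torch_signal(word):
    """Return the (row, column) torch counts for each letter of a word."""
    signals = []
    for letter in word.upper():
        for row_number, row in enumerate(GRID, start=1):
            if letter in row:
                signals.append((row_number, row.index(letter) + 1))
                break
    return signals

print(torch_signal("ALAN"))  # [(1, 1), (3, 2), (1, 1), (3, 4)]
```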
|
Right now, COVID-19 vaccines are being rolled out at pace across the UK. People from priority groups are being vaccinated every day and the aim is to vaccinate 14 million of the most vulnerable people by the end of February.
In this blog we answer some of the most common questions about the COVID-19 vaccines.
What role does a vaccine play in ending the pandemic?
Effective vaccines are a vital part of ending the COVID-19 pandemic. Through vaccination, we can stop those most at-risk from getting the virus, meaning a reduction in hospitalisations and fewer deaths. However, a vaccine is not a ‘silver bullet’ and won’t stop the pandemic immediately.
It will take time and a continued combination of all the things we know help reduce spread, such as social distancing and washing hands, a vaccine and a deeper understanding of the virus that only comes with time. Better treatments will help reduce deaths in hospitals.
How do vaccines work?
Vaccines contain either a weakened or dead version of the virus, or a part of the virus, which cannot harm the recipient.
When we receive a vaccine, it stimulates our immune system to produce antibodies like it would if we were infected with the actual virus. These antibodies remain in our body so if we are exposed to the virus in future, we can quickly fight off the disease before we become ill.
How do we make sure vaccines are safe?
Any vaccine approved for use in the UK must go through a robust and rigorous testing process to make sure that it meets extremely high standards of safety, quality and efficacy.
Public safety always comes first and vaccines are only made available to the public when they have met strict criteria.
Vaccines, like all medicines, are highly regulated and there are checks carried out at every stage of development to ensure safety is not compromised at any point.
Once a vaccine has been developed, it is given to a small group of volunteers in a clinical trial to assess initial safety. Next, it is given to a bigger group of people, usually hundreds, to learn more about its safety and to see if an immune response is triggered. Then the trial is expanded to thousands of people and the number of people who get the disease in those vaccinated is compared to a group who did not receive the vaccine, so we can report on its safety and efficacy.
The Pfizer/BioNtech vaccine was given to 43,500 people during its clinical trial and no major side effects or safety concerns have been reported.
Who is getting vaccinated currently?
The Joint Committee on Vaccination and Immunisation (JCVI) advised that the vaccine should first be given to those living and working in care homes, followed by people over the age of 80 as well as health and social care workers. The vaccine has now been offered to all older residents of every eligible care home in England.
The next target is to offer vaccines to 15 million people – those aged 70 and over, healthcare workers and people required to shield – by mid-February and millions more people aged 50 and over and other priority groups by spring.
What about COVID-19 variants?
We are continuing efforts to understand the effect of the variants on vaccine efficacy and there is currently no evidence to suggest that vaccines will be ineffective.
We know that the vaccines currently in use are likely to have at least 50% protection against the variant first identified in South Africa, which is very encouraging. This is equivalent to flu vaccination.
We will learn more about this as the population is studied in South Africa throughout their vaccination programme.
There are a number of studies taking place at the moment including an AstraZeneca trial taking place in South Africa and we will continue to monitor the situation.
Why are we now leaving up to 12 weeks between doses of the vaccine?
Both the Oxford/AstraZeneca vaccine and Pfizer/BioNTech vaccine provide high levels of efficacy after the first dose. By giving as many people as possible the first dose of the vaccine, we are giving a greater number of people significant protection from the virus at a greater pace. This protects those who are most vulnerable and likely to suffer the worst effects of COVID-19. Simply put, every time we vaccinate someone for a second time, we are not vaccinating someone for the first time.
Why is it important to keep following the rules once you have been vaccinated?
The information we have so far on the vaccines in use is that they are highly effective; however, they are not 100% effective, so there is still a chance you could get infected with COVID-19, but it is highly likely to be much less severe.
We don’t yet know if the vaccines stop you from passing the virus on to others, so while they will offer significant protection to the individual, you could still pass on COVID-19 to someone who has not been vaccinated. It is therefore important that even if you are vaccinated, you continue to follow the national guidelines to keep others safe, and that you still self-isolate if you are asked to or if someone in your household has symptoms or tests positive.
Why is mixing vaccines not advised?
We do not currently recommend mixing the COVID-19 vaccines, i.e. between a first and second dose.
Patients taking part in a new clinical study will receive different COVID-19 vaccines for their first or second dose. Backed by £7 million of government funding, the study will be the first in the world to determine the effects of using different vaccines for the first and second dose – for example, using AstraZeneca’s vaccine for the first dose, followed by Pfizer/BioNTech’s vaccine for the second. Initial findings are expected to be released in the summer and the JCVI will await the results with interest.
In the meantime, we do not recommend mixing the COVID-19 vaccines.
There may be extremely rare occasions where the same vaccine is not available or it is not known which vaccine a patient received on their first dose. Every effort will be made to ensure the patient gets the same vaccine, but JCVI advice is that it is better to give a second dose of a different vaccine than to not give one at all.
What is the guidance on the Pfizer/BioNTech vaccine to people who have previously suffered allergic reactions?
Any person with a history of anaphylaxis (a serious allergic reaction) to a vaccine, medicine or food should not receive the Pfizer/BioNTech vaccine. A second dose should not be given to anyone who has experienced anaphylaxis following administration of the first dose of this vaccine.
Anaphylaxis is a known, although very rare, side effect with any vaccine. Most people will not get anaphylaxis and the benefits of protecting people against COVID-19 outweigh the risks.
Do the current vaccines contain pork products?
No, neither the Pfizer/BioNTech nor the Oxford/AstraZeneca vaccine contains pork products, so they should be suitable for people of various faiths.
View original article
Contributor: Blog Editor
|
Bernetta Liv Worksheets April 11, 2020 21:00:00
This is because music education is already a tough and complicated subject. So, discussing all the points all together could cause great confusions among the music learners. To make classroom music worksheets more attractive and visually appealing, various aspects of technology as well as graphical representations are also being included. Colorful backgrounds, beautiful layouts, big, bold and clear fonts can be used to make these worksheets. Such things make these worksheets more interesting and fun to the music learners. Classroom music worksheets should not leave any vital issue that has been covered in a particular lecture. Remember, these worksheets are intended to make music education a playful affair rather than making it a serious subject.
Most of us know how to play the game of bingo. Thus, the way to play bingo may already be familiar, but if not, here is a quick recap: 1. Each player has a bingo worksheet (also known as a "bingo card" or "bingo board"). 2. The bingo worksheet contains a grid of squares. Each square usually contains a different number. 3. The bingo caller calls out the items printed on the worksheets in a random order. 4. As items are called out, the players cross items off their worksheets. The winner is the first player to achieve a winning pattern of crossed out items on their worksheet (in different versions of the game, different winning patterns may be used). Although of course the standard game of bingo is well-known by many people, and played by many as a leisure activity, what is not so widely known is that modified versions of bingo can be of great use in education. In fact, bingo is becoming increasingly common in classrooms, and can be used as a teaching aid in a number of K-12 subjects including reading, vocabulary, math, foreign languages and even science and history, as well as in adult education, in for example English as a Second Language (ESL) classes.
Subtraction worksheets help a child learn the skills required for subtraction. It also gives you, as a parent or teacher, an opportunity to understand how much he has grasped and what the best way of making him learn more will be. Subtraction is integral in math and it is something children will be using all their lives. There are various levels of subtraction worksheets available which match the skill levels of different children. Creating worksheets for children involves creativity to make it look like a fun thing to do and at the same it should serve its educational purposes. There are many websites which have free subtraction worksheets available which you can download or print for free. Choose a worksheet that has problems suited to the child or children you want it for.
The common element in most educational versions of bingo is the use of modified bingo worksheets. Instead of the standard worksheets that contain numbers, the teacher creates, ahead of class, worksheets that contain items chosen for the lesson. In the math class, the items might still be numbers, but the numbers are the answers to problems called out by the teacher. In a language class, the worksheets might be printed with Spanish or French words, which the students must match to calls made in English by the teacher. Really there are almost endless possible variations, and innovative teachers are inventing new ones all the time. You might think that this is all very well, but where can the special customized bingo worksheets be obtained. Obviously, it would not be a good use of a busy teacher’s time to spent a lot of time manually preparing a worksheet for each student. Fortunately, there is an answer – a PC and some bingo worksheet creator software can make light work of printing worksheets on any theme that the teacher chooses.
You can keep the worksheet very challenging without making it boring. Be extra sensitive as to whom you are distributing these worksheets. Do not assume that Asian learners, in particular, appreciate the English alphabet. Other learners such as the Chinese, Indians, Japanese and Koreans do have high regards to their traditional letters and this should be considered at all times. Note that spelling skills in this case, is simply a challenge for them. Young Asian learners are often taught with their Mother Language first and so preparing worksheets in English require extra effort in the structure as well as in the presentation. Another key in writing worksheets is the amount of time a student will spend on it. If you prepare worksheets with several pages, they may end up bringing them at home and losing the interest to finish answering them. So be keen in preparing one at all times!
A tip for anger management: learning to breathe deeply can bring welcome relief for a lot of rage, stress and fear issues. It's not that you have to breathe deeply all of the time; short deep breathing sessions once or twice a day can be a big help. For a moment of acute distress, sit or lie down somewhere comfortable. Breathe in slowly, and allow the air to fill your lungs from the top at the chest all the way to the bottom around the navel area. Don't be afraid to allow those lungs to fill, and it's a good sign if your stomach comes out!
|
The human mouth is home to a teeming community of microbes, yet still relatively little is known about what determines the specific types of microorganisms that live there. Is it your genes that decide who lives in the microbial village, or is it your environment? In a study published online in Genome Research (www.genome.org), researchers have shown that environment plays a much larger role in determining oral microbiota than expected, a finding that sheds new light on a major factor in oral health.
Our oral microbiome begins to take shape as soon as we are born and sees a myriad of bacteria introduced to our mouth during childhood and later in life, yet little is known about whether nature (your genes) or nurture (your environment) has a stronger influence. Because of variations in the oral microbiome in both health and diseases like bacteremia and endocarditis, understanding the determinants of oral microbiota communities might lead to better prevention and treatment strategies.
In this study, a team of researchers from the University of Colorado sequenced the microbial DNA present in the saliva samples of a cohort of twins, and matched the DNA sequences in a database to determine which types of bacteria were present in each individual. In their data set, they utilized samples that were gathered over a decade of adolescence from the same individuals to observe how the salivary microbiome changes with time.
By comparing the salivary microbiomes of identical twins, who share the same genetic make-up and live in a common environment, the group found that their salivary microbiomes were not significantly more similar than the salivary microbiota of fraternal twins, who share only half of their genes, suggesting genetic relatedness is not as important as environment. "The conclusion that genetic relatedness plays at most only a small role in microbial relatedness was really a surprise," said Dr. Ken Krauter, senior author of the study.
"We were also intrigued to see that the microbiota of twin pairs becomes less similar once they moved apart from each other," added Simone Stahringer, first author of the study, explaining further evidence for the influence of environment on oral microbiota. Interestingly, in the samples obtained from the same individuals over time, they found that the salivary microbiome changed the most during early adolescence, between the ages of 12 and 17. This suggests that factors such as puberty or prominent behavioral changes at this age might be important.
Krauter explained that their work uncovered another unexpected finding, that there is a core community of bacteria that are present in nearly all humans studied. "Though there are definitely differences among different people, there is a relatively high degree of sharing similar microbial species in all human mouths."
The authors suggested that this report has established a framework for future studies of the factors that influence oral microbial communities. "With broad knowledge of the organisms to expect to find in mouths," said Krauter, "we can now better understand how oral hygiene, environmental exposure to substances like alcohol, methamphetamines and even foods we eat affect the balance of microbes."
Stahringer SS, Clemente JC, Corley RP, Hewitt J, Knights D, Walters WA, Knight R, Krauter KS. Nurture trumps nature in a longitudinal survey of salivary bacterial communities in twins from early adolescence to early adulthood. Genome Res doi: 10.1101/gr.140608.112.
Cold Spring Harbor Laboratory: http://www.cshl.org
|
Have you ever wondered how important the ocean is to life on Earth? How exactly we impact the health of the ocean and its inhabitants? What we can do to help minimize that impact? If you have never thought of these questions before, then it’s time that you do!
Our planet is called the “Blue Planet” because of our ocean and how important it is to life on earth. If you look at a globe or a world map, it is mainly blue. This is because roughly 71% of the earth is covered by water. If you take into account all of earth’s water, 97% of it is contained in the ocean. That’s a lot! Only about 1% of water on Earth is freshwater, and only about 2-3% is contained in the ice caps and glaciers. Basically, water is life. The ocean also acts as a form of insulation. The top 10 feet of the ocean holds as much heat as the entire atmosphere. The entire atmosphere! Did you know that the ocean is also responsible for oxygen production? Very tiny bacteria in the water carry out the process of photosynthesis – using sunlight (photo) to create (synthesis) oxygen for the ocean. It is thought that they are responsible for 50% of Earth’s oxygen production, with some estimates putting oxygen production at as much as 85%.
Unfortunately, humans are having a visible impact on the health of the ocean, an impact that has increased since the introduction of plastic and styrofoam. It is thought that around three times as much garbage is dumped or deposited in the ocean compared to the amount of fish caught. It is estimated that 80% of garbage in the ocean is made of plastic, a very common material that has no way of breaking down once in the ocean. There is an area twice the size of Texas called the Great Pacific Garbage Patch where there are six pounds of plastic for every pound of plankton. Even pollution nowhere near the ocean can affect its health. 33% of toxic contaminants that are found in the ocean are actually from air pollution, while 44% of the toxic contaminants originate from rivers and streams.
No matter how many dives you have or what your qualification is, whether you are a new open water student or a seasoned instructor, or even if you are just snorkeling, there is always room to reduce your own impact on the environment. Since we are visitors in the ocean, it is our job to treat it with respect, so much so that environmental awareness is included in most PADI courses, including Discover Scuba Diving.
Project AWARE, a movement founded by PADI divers around 20 years ago, was formed to educate divers about ocean issues. They host international cleanup days and have many online resources to help divers, including petitions and data collection tips. If we can all follow Project AWARE’s 10 Tips for Divers to Protect the Ocean Planet, we can at least reduce our own footprint. For those of you who are not divers, you can still help with ocean health. Say no to plastic, reduce/reuse/recycle materials, and pass on knowledge to others. If you’re ever near the beach or in the water, go ahead and pick up any garbage you see. You shouldn’t just assume someone else will pick it up – if everyone has that same thought, then no one will. A healthy ocean means a healthy Earth, so it is up to us to treat it with respect and to do our part. Hopefully you all will join underwater cleanups or even start your own!
Your marine biologist /Daniel
|
Like many corals, staghorn corals have a special symbiotic relationship with algae, called zooxanthellae. The zooxanthellae live inside the tissues of the coral and provide the coral with food, which it produces through photosynthesis and therefore requires sunlight. In return, the coral provides the algae with protection and access to sunlight.
Staghorn corals are reef-building or hermatypic corals, and are incredibly successful at this task for two reasons. Firstly, they have light skeletons which allow them to grow quickly and out-compete their neighbouring corals. Secondly, the skeleton, or corallite, of a new polyp, is built by specialised ‘axial’ corallites. These axial corallites form the tips of branches, and as a result, all the corallites of a colony are closely interconnected and can grow in a coordinated manner (3).
Staghorn corals reproduce sexually or asexually. Sexual reproduction occurs via the release of eggs and sperm into the water. Most staghorn corals on the Great Barrier Reef sexually reproduce simultaneously, an incredible event that occurs soon after the full moon, from October to December. Streams of pinkish eggs are released from corallites on the sides of branches, to be fertilized by sperm released from other polyps at the same time. The water turns milky from all the eggs and sperm released from thousands of colonies. Some of the resulting larvae settle quickly on the same reef, whilst others may drift around for months, finally settling on reefs hundreds of kilometers away (3). Asexual reproduction occurs via fragmentation, when a branch breaks off a colony, reattaches to the substrate and grows (4).
|
Like a fast-moving relay race, neurotransmitters are the vehicle by which messages travel from one nerve cell to another in the brain. They affect mood, memory and our ability to concentrate, as well as several physical processes. When these chemical messengers are disrupted, the message may go right back to the transmitter or be lost altogether. When considering mental illness, the result of interrupted neurotransmitters can be depression or even a tendency toward drug and alcohol dependency.
Though the brain has billions of nerve cells, they don’t actually touch – thus the job of neurotransmitters to bring messages back and forth. Because neurotransmitters can impact a specific area of the brain, including behavior or mood, their malfunctions can cause effects ranging from mood swings to aggression and anxiety. Many neurotransmitters exist in the brain, but those most studied in relation to mental disorders are dopamine, acetylcholine, GABA, noradrenaline (norepinephrine) and serotonin.
Understanding the way neurotransmitters function in the brain could lead to better treatments for mental disorders. Normally, nerve impulses travel through the brain along axons, long cellular structures, until they land at a presynaptic membrane. These membranes house the neurotransmitters that will be sent out into free spaces, or synaptic clefts, so that they can be collected by receptors of another neuron. The neuron that collects the neurotransmitter then internalizes it, and the nerve impulse can keep moving forward with the message.
If serotonin or norepinephrine movement is interrupted, depression or anxiety disorders can result, as these hormones (also called neurotransmitters) regulate things like mood, appetite and concentration. For patients with depression, the neurotransmitters may return to their original location (the presynaptic membrane) instead of sending the right message produced by the serotonin to a neuron. Medications for depression can help stop these hormones from returning to their original location, a process called reuptake. The result is that broken signals are repaired; there is more serotonin activity; and reduced symptoms for depression.
Dopamine is another neurotransmitter linked to mental illness, such as schizophrenia, characterized in part by emotional disturbances, but certain medications can help reduce the symptoms. Attention-deficit/hyperactivity disorder (ADHD) is also believed to be a result of interrupted passages of dopamine or norepinephrine. Tiredness, high levels of stress and poor motivation are also linked to low dopamine.
Additional mental illnesses, such as personality disorders and social disorders, are believed to be caused by the interrupted transfer of neurotransmitter messages. In patients with drug or alcohol addictions, the gamma-aminobutyric acid, or GABA, receptor may be affected. This neurotransmitter slows the speed of nerve impulses and causes muscles to relax.
Interestingly, people with vitamin deficiencies may be more likely to experience disrupted, lacking or ineffective neurotransmitters. Amino acids are the building blocks of neurotransmitter production, but amino acids can’t be generated without first taking in a broad range of vitamins and minerals. Diets that are too low in protein may also contribute to impaired neurotransmitter function. A combination of good nutrition, prescription medications or antidepressants, exercise and psychotherapy are recommended to increase neurotransmitter production and encourage a smooth flow of these critical chemicals in the brain.
|
Tapeworms: Storm in a Teacup or Major Concern?
24 March 2011
Tapeworms are so called because of their appearance. They are generally broad and flattened and contain segments. These segments are in effect egg packets and are shed from the end of the worm one-by-one, so that the stool is infested and in turn, the pasture becomes infested, allowing the continuation of the species. There are a number of species of Tapeworm that affect the horse. The most commonly observed species in horses in Ireland and the UK is known as Anoplocephala perfoliata.
Image of a typical tapeworm. Note the rounded head (top right had corner) and the flattened segmented appearance of the posterior end (bottom left hand corner)
Life Cycle of the tapeworm:
The life cycle of the tapeworm hinges on tiny mites. These mites present a perfect growing site for the immature tapeworms (cysticercoids). The cysticercoids emerge from the horse's stool, infest these mites and wait on the pasture until consumed by a horse. When the mite containing the infective cysticercoids is digested, the tapeworm larva is released inside the horse. The tapeworm uses its hook-like teeth to latch onto the intestinal lining of the large intestine. There the larva grows into the fully grown, segmented tapeworm. Each segment of the worm contains eggs which the tapeworm releases into the digestive tract of the horse. The segment coating is dissolved and the eggs are released into the stool, where they're eaten by the mites. Eggs must develop in the mite for a few months to reach the cysticercoid stage and become infective. These mites live in the grass and are eaten by grazing horses, starting the cycle over again. The life cycle therefore usually takes a number of months to complete. This fact alone makes tapeworm relatively easy to control, as it may take a considerable amount of time for very heavy worm burdens to accumulate.
Diagnosis is by demonstration of the characteristic eggs within the stool. However, as the shedding of egg sacs is not continuous, more than one faecal egg count may be required to get a definite diagnosis. There is also a blood test for detecting tapeworm in horses that detects tapeworm antibodies. However, although this blood test will tell whether or not the horse has tapeworms, it merely gives a broad indication of the levels of tapeworm present and not specific numbers.
In light infestations, no signs of disease are present. However, in heavy infections, loss of weight and intestinal disturbances such as colic and peritonitis may be seen. In stables where these tapeworms are prevalent, infections can be prevented by pyrantel embonate (the active agent within Embotape). These preparations require the administration of a double dose for most effective treatment of tapeworm infestation.
Treating horses with pyrantel embonate immediately before turn out and at the end of the grazing season is likely to be most beneficial and can be easily applied into individual horse worming strategies or applied to all horses on a given property. For more information on Embotape or any of the Bimeda horse products, contact your local agent or veterinary surgeon.
|
This Bar Graphs worksheet also includes:
- Answer Key
What are these bar graphs depicting? Novice data analyzers examine two basic bar graphs and answer three comprehension questions about each. The first graph shows tickets sold per date and the next is distances run per member of a cross country team. Questions prompt learners to determine how many tickets were sold on a certain date, which runners ran the same distance, etc. Some require pupils to compare two variables in a "how many more" context. Use this in preparation to create your own classroom bar graph, possibly using data from a class poll.
|
National Park Service
US Department of the Interior
Office of Public Health
1201 Eye Street, NW
Washington, DC 20005
Office of Public Health - Valley Fever (Coccidioidomycosis)
Archeologists are at risk for contracting Valley fever (also known as coccidioidomycosis), a fungal infection caused by inhaling Coccidioides spores. The disease is common in certain areas of the Southwest and Western United States (see map). About 30% to 60% of people who live in these areas are exposed to the fungus at some point in their life. Most of the time, the disease is mild and resolves on its own. Serious complications occur in about 5% of infected people.
Coccidioides spores travel through the air when soil is disturbed, such as by screening dirt or shoveling. Wind and dust storms can also carry the spores. People breathe the spores into their lungs, where the spores can undergo changes and cause illness. Valley fever cannot be transmitted from person to person, from animal to animal, or between animals and people.
60% of people exposed to Coccidioides spores do not develop symptoms. Those who become ill usually get flu-like symptoms such as fever, cough, headache, fatigue, and muscle aches. A rash on the chest, back, arms, or legs can also occur. More serious forms of the disease include pneumonia and complications where the fungus spreads to the brain, joints, bone, or other organs. Symptoms usually develop one to three weeks after exposure and can last longer than six months.
Risk for Complications
Anyone with Valley fever can develop complications, but pregnant women in their third trimester, people with weakened immune systems (e.g. diabetes or HIV), and people receiving steroids or chemotherapy are at greatest risk. People of African-American and Filipino descent may also be at risk for complications.
Testing and Treatment
If you think you might have Valley fever, see a healthcare provider for evaluation. Symptomatic individuals can be tested (blood antibody test) to confirm the diagnosis. Treatment is usually not necessary for mild infections, which often resolve on their own. For individuals with moderate to severe symptoms or people who are at risk for complications, anti-fungal medications are recommended and may be effective.
Safety precautions are recommended for archeologists and other high-risk occupations. These precautions are based on common sense and have not been scientifically studied. No vaccine is currently available. Commonly recommended prevention measures include:
Tri-fold color brochure available.
If you have any questions, please contact your nearest Regional Point of Contact, park sanitarian or call WASO Public Health for more information.
|
Causes of Anaemia.
Iron deficiency anaemia is a condition in which a lack of iron in the body results in a decrease in red blood cells.
Anemia is a condition in which you do not have enough healthy red blood cells to carry adequate oxygen to the tissues of the body; you may feel tired and weak if you have anemia.
Iron is used to make red blood cells that help store and carry oxygen in the blood. If you have fewer red blood cells than normal, your organs and tissues will not get as much oxygen as they would normally.
Anaemia is a shortcoming in your body’s number or quality of red blood cells, red blood cells use a particular protein called haemoglobin to carry oxygen around your body.
Anaemia means either red blood cell levels or haemoglobin levels are below normal levels. When a person has anaemia, their heart needs to work harder to get enough oxygen around their body to pump the amount of blood needed.
Anemia treatments range from supplements to medical procedures, by eating a healthy, varied diet, you may be able to prevent certain types of anemia.
Anemia is the most common blood condition in the United States; it affects about 5.6% of the U.S. population. There is an increased risk of anemia among women, young children, and people with chronic diseases.
Fast facts on Anemia.
Here are some key points about anemia:
- More than 400 types of anemia have been identified.
- Anemia is not limited to humans and may affect dogs and cats.
- Anemia affects an estimated 24.8% of the population of the world.
- Pre-school kids are at the highest risk worldwide, with an estimated 47% developing anemia.
Symptoms of Anaemia.
There are only a few symptoms in many people with iron deficiency anaemia, the severity of the symptoms depends largely on how fast anaemia develops.
You may immediately notice symptoms, or if your anaemia is caused by a long-term problem, such as a stomach ulcer, they may develop gradually.
The most common symptoms include:
- Chest pain.
- Shortness of breath.
- Strange cravings for food.
- Cold hands and feet.
- Pale skin.
- Heart racing or palpitations.
- Headaches.
- Fatigue and lack of energy (lethargy).
- A drop in blood pressure when sitting or lying down (orthostatic hypotension) – this can happen after a severe loss of blood, such as a heavy period.
Treatment of Anaemia.
A variety of anemia treatments is available. All of them aim to increase the number of red blood cells, which in turn increases the amount of oxygen the blood carries.
Depending on the type, treatment may involve folic acid supplements, removal of the spleen, and sometimes blood transfusions or bone marrow transplants.
Iron deficiency anemia:
Iron supplements (which can be purchased online) or changes in diet. If the condition is caused by blood loss, the source of the bleeding must be found and stopped.
Vitamin deficiency anemias:
Treatments include dietary supplements and B-12 injections.
Anemia of chronic disease:
This is anemia associated with a severe underlying chronic condition. Specific treatments are not available, so the focus is on the underlying condition. In severe cases, the patient may receive blood transfusions or a bone marrow transplant.
|
The term "otitis media" means that there is inflammation present in the middle ear, behind the eardrum. When inflammation is present in the middle ear, fluid may accumulate. The type of fluid present varies, and thus there is a spectrum of disease from "acute otitis media" through to "glue ear" (medically termed otitis media with effusion).
When the fluid in the middle ear is infected, the eardrum is red and bulging, frequently with pus behind the eardrum, and there is associated pain and fever. This is called "acute otitis media."
When the infection has resolved the fluid becomes "semi-sterile" and this is called "glue ear". Fluid is present behind the eardrum, but there is no fever, and the eardrum is not inflamed or bulging.
A cross-section through the middle of the head shows how these structures connect: the external ear, or pinna, leads into the ear canal, and at the end of the canal sits the eardrum. The eardrum vibrates with sound waves, and this pressure wave is then transmitted through tiny bones in the middle ear to the cochlea. Fluid waves in the cochlea stimulate tiny hair cells there, and the sound is converted to electrical signals, which are then transmitted through the cochlear nerve to the brain. In "glue ear", fluid collects in the middle ear, reducing the ability of the eardrum to vibrate.
What causes otitis media?
Otitis media occurs most commonly in young children. The exact causes are not known, but one cause is thought to be a result of temporary malfunction of the Eustachian tube, which connects the middle ear to the back of the nose. Increasingly, chronic bacterial infection is also thought to play a role.
The Eustachian tube normally allows air to circulate from the back of the nose to the middle ear, and allows mucus to drain from the middle ear to the back of the nose. In young children, the tube is smaller, more horizontal and shorter. It is easier for bugs (bacteria and viruses) to travel in the tube, which may result in swelling of the lining of the tube, and an increase in mucus production in the tube. This may cause it to block. Part of the problem also relates to a developing immune system. As a child's immune system develops, a child is less likely to get infected with bacteria and viruses, which cause an upper respiratory tract infection ("cold") and subsequent otitis media. It follows that as children grow, they are less likely to have trouble with otitis media.
Increasingly bacterial biofilms are being implicated. Bacteria are now recognised as existing in two forms - free floating (planktonic) or in sophisticated communities called biofilms, which adhere to both biological and non-biological surfaces. Many chronic infectious diseases including otitis media, tonsillitis and chronic rhinosinusitis appear to be caused by bacteria living in a biofilm state. Biofilms have been defined as a “structured community of bacterial cells enclosed in a self-produced polymeric matrix and adherent to an inert or living surface”. In a biofilm state, bacteria produce an extracellular matrix (often referred to as “slime”), which protects its inhabitants against environmental threats including “biocides, antibiotics, antibodies, surfactants, bacteriophages and foraging predators such as free living amoebae and white blood cells”. Bacteria within biofilms are difficult to culture and highly refractory to conventional antibiotic treatment.
We know some important risk factors, but not all the reasons why some children develop otitis media. There is some limited evidence linking bottle feeding to early development of acute otitis media. This may be because of the immune protective effect of antibodies passed through breast milk.
The most important risks include:
- a family history of otitis media
- exposure to tobacco smoke ("passive smoking")
- exposure to other children in child care/crèche/preschool
- an older sibling in childcare/crèche/preschool/ early primary school
There is no clear evidence supporting allergy as a causal factor in the development of otitis media, however children with allergy have an increased risk of developing "colds".
What are the symptoms of otitis media?
Acute otitis media may result in severe ear pain, fever, grumpiness/misery and night waking. The hearing is reduced. More severe complications (burst eardrum with discharge from the ear, mastoiditis, meningitis) are uncommon, but do occur. Rarely, a child may have few symptoms, even with very inflamed ears. Balance may be temporarily affected in some children.
Glue ear may have few symptoms. There is usually no fever, but ear discomfort may still occur, particularly at night when children lie down. There is usually a hearing loss: in some children this may be only mild, and in others, this may be sufficient to delay speech and language development. This may have implications for effective learning at preschool and school. The consistency of the fluid in the middle ear may change, and this may lead to fluctuating hearing. Parents may feel that their child has selective hearing. Balance may be affected and the child may seem clumsy.
How is Otitis Media diagnosed?
Examination of the eardrum using an "otoscope" is the best way to diagnose otitis media. An otoscope is a small torch with a magnifying lens and a funnel attachment. This is inserted in the outer ear canal and the eardrum and ear canal are examined.
Tympanometry is a test to assess eardrum movement. Air is puffed in and out of the ear canal and a probe in the ear canal detects sound echoing off the eardrum. Tympanometry may be useful in doubtful cases, and is also used as a screening tool for glue ear, particularly in preschools and kindergartens. Tympanometry is not a hearing test and a "pass" on this test does not necessarily mean that a child can hear - it just means that it is very unlikely glue ear is present at the time of the test.
Hearing testing is a very valuable tool in the assessment of glue ear and its impact on the hearing of an individual child. No child is too young to be tested, however testing does need extra time and special techniques are needed in children under the age of two and a half to three years. Your doctor may recommend a hearing test if otitis media has been present for three months. A qualified audiologist should perform hearing testing. This may be at the public hospital or at a private Audiology Centre.
What treatment is recommended, and is it necessary?
Acute Otitis Media:
Antibiotic treatment is recommended for acute otitis media. This has a modest effect in the reduction of pain and fever, and may reduce the risk of complications of acute otitis media. However, there remains some dispute about the benefits of antibiotics - some doctors believe there is not enough evidence to provide antibiotic treatment for acute otitis media in some older and otherwise healthy children. Paracetamol should also be given at the same time for pain relief and to reduce fever.
If a child suffers from recurring attacks of acute otitis media, prophylactic antibiotics may be prescribed. Vitamin D supplementation has been shown to be useful in this situation. Concerns are also being raised about the complications of antibiotic usage, including the development of antibiotic resistance, allergic reactions, diarrhoea and thrush. An alternative is the surgical insertion of grommets into the tympanic membrane under general anaesthetic. There is no absolute definition of the number of episodes required before grommet insertion is recommended, but a rule of thumb is 6 episodes in a year. The surgery reduces the frequency of the infections, in many cases abolishing them altogether. If a child does get an attack of acute otitis media, the drum does not bulge; instead there is a discharge of pus through the grommet into the external ear canal. This can usually be treated easily with topical ear drops such as Ciproxin ear drops.
Glue Ear:
Because most episodes of "glue ear" resolve without treatment, regular observation alone is often recommended for three months if the eardrums are otherwise of normal appearance. Once fluid has been present behind the eardrum for three months, it is considered unlikely to resolve for a considerable time (sometimes years). Continued observation alone may be an option after this time if hearing is completely normal and there has been no eardrum damage.
Treatment options include:
A prolonged course of antibiotics (most commonly amoxycillin or cotrimoxazole) for two to four weeks. Antibiotics produce a very modest improvement in the clearance of middle ear fluid, and it cannot be said for sure whether the benefit is more than temporary. Concerns are increasingly being raised about the value of antibiotic treatment and about the complications of antibiotic usage, including the development of antibiotic resistance, allergic reactions, diarrhoea and thrush.
Grommet (ventilation tube) insertion:
Grommets are tiny plastic flanged tubes, which are inserted through a small cut in the eardrum to allow air into the middle ear until the Eustachian tube begins to function normally. The most common ventilation tubes stay in place for anywhere from 6-9 months to 12-15 months, though this may vary considerably in individual children.
Grommets eliminate middle ear fluid by allowing air into the middle ear from the outside - they are not "drains". Allowing air in from the outside through the grommet enables mucus and fluid to drain in the normal way down the Eustachian tube. There is usually improvement in hearing and a reduction in the frequency of acute otitis media episodes. Parents often report improvement in balance and walking ability, and an improvement in the wellbeing and happiness of the child. Many times, there is an improvement in sleeping at night. The grommets are inserted with the child asleep (general anaesthetic). Children are often able to return home an hour or so afterwards. There is not usually any major pain in the ears after the surgery. Approximately 25% of children require further grommet insertion after the first set of grommets extrude (come out), and of this group, another 25% require a further set of grommets after that.
What are the risks of grommet insertion?
General anaesthetic: The risk of complications from a short anaesthetic for an otherwise healthy child are extremely low. They should be discussed with the anaesthetist prior to surgery.
Ear drum perforation: About 1% of the tubes leave a small hole in the ear drum after extrusion. Many such holes heal spontaneously, but some need surgical repair. This is best left until the child is at least 9 years old, for the maximal chance of success.
Discharge from the ear: This may occur from time to time in some (up to 40%) of children. It is not normally painful, but it does mean that the ear is infected and should be treated. Eardrops (e.g. "Ciproxin") for 5-7 days, rather than oral medicines, are usually required to treat this.
Eardrum scarring: There may be a small scar in the eardrum after the grommets extrude. This does not damage the hearing in any way. More significant scarring can occur in the eardrum or middle ear, but is usually a result of more severe disease than as a result of grommet insertion.
Water and swimming:
A lot is said about grommets and water getting in the ears. The large majority of children can go swimming without any protection to the ears. Care should be taken to avoid forcing water up the nose, or into the ears, by avoiding diving or swimming under water. If a child does get a discharging ear after swimming, it is usually easily fixed with eardrops, and one should then be more careful about getting water in the ears.
Showering is recommended for washing the hair. The alternative is to sit the child in the bath and have them put their fingers in their ears while their hair is washed.
In children who have discharging grommets, topical ear drops are often prescribed. The technique for applying them is to warm the ear drops in your hands, fill the ear canal with the drops, and then press on the tragus (the cartilage in front of the ear canal) to pump the drops into the middle ear and then into the back of the nose (nasopharynx) through the Eustachian tube. If the child can taste the drops afterwards, it is a good clinical sign that the drops have been given effectively.
|
The purpose of the Warm-Up is partly for students to review key skills and partly to preview new material. I explicitly tell students to skip problems that they have fully mastered, and throughout the year I coach them to do this. Some students will spend lots of time on problems that have become routine for them, and they no longer need this practice, so they need to be coached to skip these problems. Other students skip problems that they don’t know how to do, so it is important to circulate and ask them to explain problems to you. Sometimes I ask students to do part of one problem just to make sure that they know how to do it. Ultimately, the goal is for students to assess their understanding of key problems and concepts each and every day and to use the warm-up as a tool to help them do this. Constantly facilitate these conversations with students by asking them how they are choosing which problems to work on.
The idea of today’s warm-up is that students will need to understand problem (2) and (3) in order to apply these skills in a new real-world situation dealing with falling objects. The first problem is tricky, however, and may take them some time. Encourage them to really think about how to set up a function for the first problem and have a stack of organizers available for students who like to use them (MP1).
Ask students to make sure that they understand problems (2) and (3) and explain to them that they will need both of these skills to be successful in today’s lesson. Give them lots of time to talk to their partners about these problems, and have some reference posters available showing how to determine whether a data table matches a quadratic function and also showing how to find the vertex of the function.
When it seems like most students have had time to review these two key skills, ask them to find a new partner (or assign them a new partner if you want to rearrange the room a bit) and talk about problem 4. There is a lot going on in this problem, and it is great to take the time to ask students what the graph and the data table are really showing. What are x and y in this situation? What is happening to the height of the object as it is falling? Why does this happen? Discuss the context of the situation in some depth so that students can make more sense of the numbers and computations later (MP2). Students may or may not be able to find a function to fit this data. It’s fine if they don’t, because that will be the focus of the day’s lesson.
The first question on this Exit Ticket may seem trivial, but producing a precise explanation requires deeper understanding of many concepts (MP6). The goal is for students to be able to say that the function is not linear because the height does not change by the same amount over each time interval. Why doesn’t the height change linearly? This is because the object accelerates as it falls. (Students may have some prior knowledge of this from physics, but even if they don’t, this should fit with their intuition.)
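A concrete set of numbers can anchor this explanation (these values are illustrative, not taken from the lesson's actual data set). Using the standard free-fall model in feet, h(t) = 500 - 16t², the object falls 16 ft during the first second, 48 ft during the second, and 80 ft during the third. The first differences are unequal, so the function cannot be linear, while the second differences are a constant 32 ft, which is exactly the signature of a quadratic.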
The second question really gets at the idea of the vertex. The vertex (or maximum height) changes if we throw the object upwards rather than just letting it fall. How does this show up in the function rule? Ask students to explain their answers to this question using multiple representations (MP3).
The third question is asking students to make some connections between the real-world problem about falling objects and the more abstract problems about parabolas and vertices and different forms of functions (MP2). Though it may seem obvious, asking students to articulate this helps them develop better metacognition and enables them to think more broadly about the day’s lesson.
|
Any views expressed in this article are those of the author and not of Thomson Reuters Foundation.
As countries ponder incentives to slow the degradation of their tropical forests, a huge, unanswered question looms: What exactly is a degraded forest?
Programs that provide such incentives, such as REDD+ (Reducing Emissions from Deforestation and forest Degradation), a U.N.-backed initiative, face the challenge of accurate measurements of deforestation and degradation.
New criteria can help address that problem.
“The difficulty is that what some people consider a degraded forest may not look degraded to others,” said Manuel Guariguata, a principal scientist with the Center for International Forestry Research (CIFOR).
“There are hundreds of definitions of forest degradation, but they don’t clarify where the threshold lies for defining what is degraded and what is not.”
Guariguata and colleagues aim to remedy the problem with a set of five guideline criteria that forest managers and land-use planners can use to evaluate the state of a forest and determine whether use of its resources is sustainable.
Those criteria: long-term production of forest goods and services; biodiversity; unusual disturbances such as fire or invasive species; carbon storage; and the forest’s ability to protect soil. The criteria can be given a different weighting depending on the forest-management goals.
The researchers describe the criteria and how they can be measured in a paper, “An Operational Framework for Defining and Monitoring Forest Degradation”, published in the journal Ecology and Society.
“We did not create a specific definition of degradation, but our work provides guidance about how land planners and managers can apply different dimensions of degradation to their own work,” Guariguata said.
Forest managers can decide which criteria are most important in their own situations, he said. In many cases, they can then use remote-sensing technology, such as satellite images, to continue to monitor the state of the forests.
TOWARD A DEFINITION
The Collaborative Partnership on Forests provided a definition that relates degradation to the loss of ecosystem goods and services. However, that definition still required a way to make it operational for land managers to use. The five new guidelines seek to provide this.
Because forests store carbon and are a source of timber and products such as fuel, fruit and nuts, the first criterion for measuring degradation is how well they provide those products and services, the researchers said.
A forest’s ability to produce timber and fuel wood is judged by its “growing stock” — the volume of all trees of a particular height and diameter. Signs of degradation could include a decrease over time in that volume, in the number of certain types of tree, or in the harvest of non-timber forest products such as fruits or nuts, the research shows.
The second factor is biodiversity — vital because a wide range of plants, insects, animals, fungi and other living things perform crucial functions in tropical forests, such as seed dispersal, pollination, disease control and decomposition, the authors said. These functions are often directly related to the provision of ecosystems goods and services.
Land managers can measure biodiversity by monitoring changes in vegetation and certain important species, including insects and birds. They can also track forest fragmentation, a type of forest degradation that can result in the loss of habitat and of species — animals, birds, insects or other creatures — that were dependent on it.
Sometimes degradation is more obvious — a forest may be scarred by excessive fires or overrun by an invasive exotic plant or insect that threatens native species. Such “unusual disturbances,” which can be aggravated by climate change, are the third criterion.
Forests are not only a source of products, but they also protect soil and maintain moisture by regulating the flow of water in an ecosystem, releasing water into the atmosphere through their leaves, in a process known as evapotranspiration, and controlling the way in which water filters into the ground.
The researchers designated water retention as the fourth criterion and recommended monitoring this type of degradation by measuring soil erosion and the quantity of water.
The fifth criterion in defining forest degradation reflects the key role that tropical forests play in carbon storage, as forests hold about half the world’s carbon stocks in living and dead trees and the soil.
Degradation from forest fragmentation, a reduction of tree size or in the number of species in a forest can release carbon and limit its future accumulation in the forest. The researchers recommended monitoring both stored carbon and the presence of high-density tree species, which store the most above-ground carbon, in the forest.
For all five criteria, the key to monitoring lies in having a reliable baseline, or reference level, against which to measure degradation, Guariguata said. Although the “gold standard” is an intact, old-growth forest, he cautioned that trees alone do not make a functional forest.
“You can have a very beautiful, old-growth forest, but no animals, because of overhunting,” he said. “From the standpoint of forest structure, the forest is not degraded, but there are no seed dispersers, game species or herbivores, and that will eventually have an impact on the structure of the forest.”
One danger — and a reason why the researchers drew up criteria for measuring degradation — is that policymakers may be tempted to write off any forest where there has been logging or other activity, saying that because it is “degraded,” it no longer serves a purpose.
That could pave the way for clearing for road building, agriculture or other activities that further threaten the forest’s survival, Guariguata said.
“There is a perception that if you say a forest is degraded, it’s not a good forest,” said Ian Thompson, research scientist in forest ecology with the Canadian Forest Service who co-authored the report. “If you use good logging practices to harvest timber, it won’t be the same as an old-growth forest, but it will still be a productive forest. We wanted to clear up these misconceptions and show that there are many dimensions to degradation.”
In other words, a well-managed forest, considering all the criteria, could equally serve as a baseline, Thompson added.
For further information on the topics discussed in this article, please contact Manuel Guariguata at [email protected]
This work forms part of the CGIAR Research Program on Forests, Trees and Agroforestry.
|
Nuclear power has played a significant role in the exploration of the solar system, in many cases enabling missions that could not have been achieved otherwise. First flown by the United States in 1961 (Table 1-1), radioisotope power systems (RPSs) possess unique capabilities relative to other types of space power systems. RPSs generate electrical power by converting the heat released from the nuclear decay of radioactive isotopes (typically 238Pu) into electricity via one of many power conversion processes. Potential advantages of RPSs are their long life, robustness, compact size, and high reliability. They are able to operate continuously, largely independent of orientation to and distance from the Sun, and can be designed to be relatively insensitive to radiation and other environmental effects. These properties have made RPSs ideally suited for many robotic missions in the extreme environments of outer space and on planetary and satellite surfaces.
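To make the long-life property concrete, here is a minimal sketch of RPS electrical output over time, assuming a 238Pu heat source (half-life about 87.7 years) and a fixed thermal-to-electric conversion efficiency. The 2,000 W thermal inventory and 6% efficiency are illustrative assumptions, not the specification of any particular flight system, and real generators also lose output to thermocouple degradation.

```python
import math

PU238_HALF_LIFE_YEARS = 87.7  # half-life of plutonium-238

def rps_electrical_power(initial_thermal_w, efficiency, years):
    """Approximate electrical output of an RPS after `years` of fuel decay."""
    decay_fraction = math.exp(-math.log(2) * years / PU238_HALF_LIFE_YEARS)
    return initial_thermal_w * decay_fraction * efficiency

# Illustrative numbers only: 2000 W(thermal) at launch, 6% conversion.
for year in (0, 10, 20, 40):
    print(f"year {year:>2}: {rps_electrical_power(2000, 0.06, year):6.1f} W(e)")
```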
|
Geomathematics, or mathematical geophysics, is the application of mathematical methods to solve problems in geophysics. The most complicated problem in geophysics is the solution of the three-dimensional inverse problem, in which observational constraints are used to infer physical properties. The inverse procedure is much more sophisticated than the normal direct computation of what should be observed from a physical system. The estimation procedure is often dubbed the inversion strategy (also called the inverse problem), as it is intended to estimate, from a set of observations, the circumstances that produced them. The inverse process is thus the converse of the classical scientific method.
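As a minimal sketch of the direct/inverse distinction, consider a toy linear forward model d = Gm; real geophysical inversions are far larger, usually nonlinear, and need regularization, so the operator and data below are illustrative assumptions only.

```python
import numpy as np

# Direct (forward) problem: physical model m -> predicted observations d.
G = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, 0.8]])                      # toy forward operator
m_true = np.array([2.0, -1.0])                  # "true" physical properties
d_obs = G @ m_true + 0.01 * np.random.randn(3)  # noisy observations

# Inverse problem: estimate the properties that produced the observations.
m_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
print("estimated model:", m_est)                # close to m_true
```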
An important research area that utilises inverse methods is seismic tomography, a technique for imaging the subsurface of the Earth using seismic waves. Traditionally seismic waves produced by earthquakes or anthropogenic seismic sources (e.g., explosives, marine air guns) were used.
Crystallography is one of the traditional areas of geology that use mathematics. Crystallographers make use of linear algebra through the metrical matrix. The metrical matrix is built from the basis vectors of the unit cell and is used to find the volume of the unit cell, d-spacings, the angle between two planes, the angle between atoms, and bond lengths. Miller indices are also helpful in applying the metrical matrix. Bragg's equation is also useful, for example with an electron microscope, to show the relationship between diffraction angles, wavelength, and the d-spacings within a sample.
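A minimal numerical sketch of how the metrical matrix is used follows; the orthorhombic cell dimensions and the Cu K-alpha wavelength are illustrative assumptions, not data from any particular mineral.

```python
import numpy as np

# Illustrative orthorhombic cell: a, b, c in angstroms, all angles 90 degrees,
# so the basis vectors are orthogonal and the metric tensor is diagonal.
basis = np.array([[4.0, 0.0, 0.0],
                  [0.0, 5.0, 0.0],
                  [0.0, 0.0, 6.0]])

G = basis @ basis.T                    # metrical matrix, G_ij = e_i . e_j
volume = np.sqrt(np.linalg.det(G))     # unit-cell volume = sqrt(det G)

hkl = np.array([1, 1, 0])              # Miller indices of a lattice plane
G_star = np.linalg.inv(G)              # reciprocal metric tensor
d = 1.0 / np.sqrt(hkl @ G_star @ hkl)  # d-spacing of the (hkl) plane

wavelength = 1.5406                    # Cu K-alpha, angstroms (illustrative)
theta = np.degrees(np.arcsin(wavelength / (2 * d)))  # Bragg's law, n = 1
print(f"V = {volume:.1f} A^3, d(110) = {d:.3f} A, theta = {theta:.2f} deg")
```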
Geophysics is one of the most mathematics-heavy disciplines of geology. There are many applications, including gravity, magnetic, seismic, electric, electromagnetic, resistivity, radioactivity, induced polarization, and well logging. Gravity and magnetic methods share similar characteristics: gravity surveys measure small changes in the gravitational field caused by the density of the rocks in an area, while magnetic surveys measure comparable small variations in the magnetic field. Although similar, gravity fields tend to be more uniform and smooth than magnetic fields. Gravity is often used for oil exploration; seismic methods can also be used, but they are often significantly more expensive. Seismic is used more than most geophysics techniques because of its ability to penetrate, its resolution, and its accuracy.
- Darcy's law describes how fluid flows through a uniform, saturated soil. This type of work falls under hydrogeology (see the numerical sketch after this list).
- Stokes' law gives how quickly particles of different sizes settle out of a fluid. It is used in pipette analysis of soils to find the percentages of sand, silt, and clay. A potential source of error is that it assumes perfectly spherical particles, which do not exist in real soils.
- Stream power is used to find a river's ability to incise into the river bed. This is applicable when assessing where a river is likely to breach and change course, or when looking at the effects of losing stream sediments on a river system (such as downstream of a dam).
- Differential equations can be used in multiple areas of geomorphology, including the exponential growth equation, the distribution of sedimentary rocks, the diffusion of gas through rocks, and crenulation cleavages.
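Here is a hedged sketch of two of the laws above; the parameter values are illustrative assumptions, not measured data.

```python
def darcy_flux(hydraulic_conductivity, head_drop, length):
    """Darcy's law: specific discharge q = K * (dh / L) through saturated soil."""
    return hydraulic_conductivity * head_drop / length

def stokes_settling_velocity(diameter, rho_particle=2650.0, rho_fluid=1000.0,
                             viscosity=1.0e-3, g=9.81):
    """Stokes' law terminal velocity for a small sphere settling in water.

    v = g * (rho_p - rho_f) * d**2 / (18 * mu)
    Assumes perfectly spherical particles and laminar flow (SI units).
    """
    return g * (rho_particle - rho_fluid) * diameter**2 / (18.0 * viscosity)

print(darcy_flux(1e-5, head_drop=2.0, length=10.0))  # m/s, sandy-soil K assumed
print(stokes_settling_velocity(2e-6))                # 2-micron (clay-sized) particle
```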
Polycrystalline ice deforms more slowly than single-crystal ice, because the stress falls on basal planes that are already blocked by other ice crystals. Its elastic behaviour can be modelled mathematically with Hooke's law using the Lamé constants. Generally the ice's linear elasticity constants are averaged over one dimension of space to simplify the equations while still maintaining accuracy.
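For reference, the isotropic form of Hooke's law invoked here can be written with the Lamé constants (this is the standard statement of linear elasticity, not a result specific to ice):

\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}

where \sigma is the stress tensor, \varepsilon the strain tensor, \varepsilon_{kk} its trace, and \delta_{ij} the Kronecker delta.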
Viscoelastic polycrystalline ice is considered at low stresses, usually below one bar. This type of ice system is used when testing for creep or for vibrations arising from tension on the ice. One of the more important equations in this area of study is the relaxation function, a stress-strain relationship independent of time. This area is usually applied to transportation over, or building on, floating ice.
The shallow-ice approximation is useful for glaciers that have variable thickness, with a small amount of stress and variable velocity. One of the main goals of the mathematical work is to predict the stress and velocity, which can be affected by changes in the properties of the ice and its temperature. This is an area in which the basal shear-stress formula can be used (see below).
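The basal shear-stress formula mentioned above is commonly written as

\tau_b = \rho\, g\, H \sin\alpha

where \rho is the ice density, g the gravitational acceleration, H the ice thickness, and \alpha the surface slope. This is the standard driving-stress expression of glaciology; applying it to a particular glacier would require measured values of H and \alpha.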
- Gibbs, G. V. The Metrical Matrix in Teaching Mineralogy. Virginia Polytechnic Institute and State University. pp. 201–212.
- Telford, W. M.; Geldart, L. P.; Sheriff, R. E. (1990-10-26). Applied Geophysics (2 ed.). Cambridge University Press. ISBN 9780521339384.
- Hillel, Daniel (2003-11-05). Introduction to Environmental Soil Physics (1 ed.). Academic Press. ISBN 9780123486554.
- Liu, Cheng; Evett, Jack (2008-04-16). Soil Properties: Testing, Measurement, and Evaluation (6 ed.). Pearson. ISBN 9780136141235.
- Ferguson, John (2013-12-31). Mathematics in Geology (Softcover reprint of the original 1st ed. 1988 ed.). Springer. ISBN 9789401540117.
- Hutter, K. (1983-08-31). Theoretical Glaciology: Material Science of Ice and the Mechanics of Glaciers and Ice Sheets (Softcover reprint of the original 1st ed. 1983 ed.). Springer. ISBN 9789401511698.
- Development, significance, and influence of geomathematics: Observations of one geologist, Daniel F. Merriam, Mathematical Geology, Volume 14, Number 1 / February, 1982
- Freeden, Willi; Nashed, M. Zuhair; Sonar, Thomas (Eds.). Handbook of Geomathematics. Springer, 2010. ISBN 978-3-642-01547-2.
- Progress in Geomathematics, Editors Graeme Bonham-Carter, Qiuming Cheng, Springer, 2008, ISBN 978-3-540-69495-3
|
Ribonucleic acid (RNA) is a nucleic acid consisting of a strand of nucleotides connected to each other by covalent bonds. It differs from deoxyribonucleic acid (DNA) by the presence of a hydroxyl group on each pentose (sugar) unit, and the nucleobase uracil is used instead of thymine. It is usually single-stranded, sometimes double-stranded. RNA has many functions in the body, and many different subtypes are distinguished.
Basic information
- Its molecule consists of only one polynucleotide strand (however, there are also double-stranded types of RNA, e.g. in some viruses);
- the carbohydrate component consists of the five-carbon sugar D-ribose;
- the nitrogenous bases (N-bases) are adenine and guanine (purine bases), and cytosine and uracil (pyrimidine bases; uracil replaces thymine);
- all types of RNA are produced by the process of transcription;
- the secondary structure of individual types of RNA is different, they are generally single-stranded molecules (with the exception of some viral RNAs);
- on this single strand, the formation of a double helix can occur in certain sections, if these sections contain bases complementary to each other.
Types of RNA
m-RNA
- Messenger RNA, information RNA, mediator RNA;
- transmits hereditary information, which is stored in a gene and encodes the exact order of amino acids in a protein;
- it is created by transcription from DNA and subsequent splicing;
- it is transported from the nucleus to the cytoplasm, where, in conjunction with ribosomes, it participates in protein synthesis (translation);
- its reverse transcription into DNA creates cDNA (catalysed by the enzyme reverse transcriptase).
t-RNA
- Transfer RNA;
- brings amino acids to the right place on the nascent polypeptide – to the proteosynthetic apparatus of the cell;
- consists of about 75 nucleotides;
- it arises from the transcription by polymerase III of genes scattered in different places of the genome;
- signal sequences for transcription are located within the transcribed regions;
- the primary transcript is edited by splicing, where introns are removed;
- tRNA is characterized by a high content of minor bases;
- the classical scheme of the tRNA molecule is the "cloverleaf";
- the "stems" of this formation are formed by the bonding of hydrogen bridges based on the principle of base complementarity;
- the carried amino acid is bound by an ester bond to the CCA end at the 3' terminus;
- 4 loops can be distinguished on the t-RNA molecule:
- D-loop;
- named for its dihydrouracil content.
- Anticodon loop;
- contains a triplet of bases complementary to the codon of the given amino acid;
- enables the inclusion of the amino acid–tRNA complex in the correct place during protein synthesis.
- V-loop;
- variable: it differs both in size and in the included bases between tRNA molecules for different amino acids.
- Pi-loop (ψ);
- named for its pseudouridine content.
r-RNA
- Ribosomal RNA;
- forms the building block of ribosomal subunits;
- we recognize four different types of r-RNA:
- 5S rRNA
- composed of 120 nucleotides, it is created by transcription (polymerase III) of genes that are distributed in larger quantities at different places of the genome in the form of tandem repeats separated by untranscribed sequences;
- signal sequences are located within the transcribed regions.
- Genes for 18S rRNA, 5.8S rRNA and 28S rRNA
- they create multiple repeating blocks on chromosomes carrying so-called nucleolar organizers;
- transcription takes place with the help of polymerase I, when a section of approximately 13 kb is transcribed;
- subsequently, splicing occurs, and this very long molecule yields 18S r-RNA (2300 nt), then 5.8S r-RNA (156 nt) and 28S r-RNA (4200 nt);
- the remaining 6800 nt that were transcribed are not used to build these functional r-RNA molecules;
- 18S rRNA associates with approximately 30 proteins to form a smaller ribosome unit (40S ribosomal unit);
- the large unit of the ribosome (60S) is made up of 5.8S r-RNA, 28S r-RNA and 5S RNA + approximately 50 proteins brought here from another place in an as yet unknown way;
- all types of rRNA based on base complementarity can form quite complicated secondary structures.
ncRNA
As "non-coding" (meaning "non-protein-coding") RNA ( ncRNA ) we refer to all functional RNA molecules that are not translated into protein in the process of translation . They generally fall into two categories, distinguishable by size:
ncRNAs shorter than 200 nucleotides
This group includes, for example:
- transfer RNA (tRNA) – RNA involved in the process of translation. We distinguish 49 types/families of tRNA. There are 497 tRNA genes in the nuclear genome (a significant part of them are on chromosomes 1 and 6), the transcription of which is ensured by RNA polymerase III (another 22 tRNAs are encoded by the mitochondrial genome).
- ribosomal RNA (rRNA) – forms part of ribosomes; there are 4 distinct types – 5S rRNA, 18S rRNA, 5.8S rRNA and 28S rRNA
- small nuclear RNA (snRNA) – participates in the so-called splicing process – the splicing of hnRNA and cleavage of introns
- small nucleolar RNA (snoRNA) – plays an important role in the synthesis and maturation (post-transcriptional chemical modification) of rRNA, snRNA and tRNA. Deletion of a cluster of snoRNAs in the region of chromosome 15q leads to the manifestation of Prader-Willi syndrome
- a number of regulatory RNA types such as:
- microRNAs – participate in the regulation of gene expression – are complementary to certain sections of mRNA, bind to them and thus regulate their translation
- small interfering RNA (siRNA)
- piRNA (piwi-interacting RNA) – RNA interacting with proteins from the PIWI family.
ncRNA longer than 200 nucleotides
This group bears the collective designation long non-coding RNA (lncRNA). Probably the best-known representative of lncRNA is the XIST (X Inactivation Specific Transcript; Xq13.2; OMIM: 314670) gene, which is involved in the process of inactivation of the X chromosome.
Source
- ŠTEFÁNEK, Jiří. Medicine, diseases, studies at the 1st Faculty of Medicine, UK [online]. [cited 2009]. < https://www.stefajir.cz/ >.
- ŠIPEK, Antonín. Genetics [online]. ©2008. [cited 2010-02-11]. < http://www.genetika-biologie.cz/ribonucleova-kyselina >.
|
Headphones (or "head-phones" in the early days of telephony and radio) are a pair of small loudspeakers that are designed to be held in place close to a user's ears. They are also known as earspeakers, earphones or, colloquially, cans. The alternate in-ear versions are known as earbuds or earphones. In the context of telecommunication, a headset is a combination of headphone and microphone. Headphones either have wires for connection to a signal source such as an audio amplifier, radio, CD player, portable media player, mobile phone, video game consoles, electronic musical instrument, or have a wireless device, which is used to pick up signal without using a cable.
The different types of headphones have different sound-reproduction characteristics. Closed-back headphones, for example, are good at reproducing bass frequencies. Headphones that use cables typically have either a 6.35 mm (1/4 inch) or a 3.5 mm (1/8 inch) jack for plugging the headphones into the sound source.
Headphones originated from the earpiece, and were the only way to listen to electrical audio signals before amplifiers were developed. The first truly successful set was developed in 1910 by Nathaniel Baldwin, who made them by hand in his kitchen and sold them to the United States Navy.
Some very sensitive headphones, such as those manufactured by Brandes around 1919, were commonly used for early radio work. These early headphones used moving iron drivers, either single ended or balanced armature. The requirement for high sensitivity meant no damping was used, thus the sound quality was crude. They also had very poor comfort compared to modern types, usually having no padding and too often having excessive clamping force to the head. Their impedance varied; headphones used in telegraph and telephone work had an impedance of 75 ohms. Those used with early wireless radio had to be more sensitive and were made with more turns of finer wire; impedance of 1000 to 2000 ohms was common, which suited both crystal sets and triode receivers.
In early powered radios, the headphone was part of the vacuum tube's plate circuit and had dangerous voltages on it. It was normally connected directly to the positive high voltage battery terminal, and the other battery terminal was securely grounded. The use of bare electrical connections meant that users could be shocked if they touched the bare headphone connections while adjusting an uncomfortable headset.
In 1958, John C. Koss, an audiophile and jazz musician from Milwaukee, produced the first stereo headphones. Before that, headphones were used only in industry, by telephone operators and the like.
Headphones may be used both with fixed equipment such as CD or DVD players, home theater, and personal computers, and with portable devices (e.g. digital audio player/mp3 player, mobile phone, etc.). Cordless headphones are not connected by a wire; instead they receive a signal over a radio or infrared transmission link, such as FM, Bluetooth or Wi-Fi. These are powered receiver systems of which the headphone is only a component. Cordless headphones are used at events such as a silent disco or silent gig.
In the professional audio sector headphones are used in live situations by disc jockeys with a DJ mixer and sound engineers for monitoring signal sources. In radio studios, DJs use a pair of headphones when talking to the microphone while the speakers are turned off, to eliminate acoustic feedback and monitor their own voice. In studio recordings, musicians and singers use headphones to play along to a backing track. In the military, audio signals of many varieties are monitored using headphones.
Wired headphones are attached to an audio source. The most common connectors are 6.35 mm (¼″) and 3.5 mm phone connectors. The larger 6.35 mm connector tends to be found on fixed-location home or professional equipment. Sony introduced the smaller, and now widely used, 3.5 mm "minijack" stereo connector in 1979, adapting the older monophonic 3.5 mm connector for use with its Walkman portable stereo tape player. The 3.5 mm connector remains the common connector for portable applications today. Adapters are available for converting between 6.35 mm and 3.5 mm devices.
Electrical characteristics of dynamic loudspeakers may be readily applied to headphones since most headphones are small dynamic loudspeakers.
Headphones are available with low or high impedance (typically measured at 1 kHz). Low-impedance headphones are in the range 16 to 32 ohms and high-impedance headphones are about 100-600 ohms. As the impedance of a pair of headphones increases, more voltage (at a given current) is required to drive it, and the loudness of the headphones for a given voltage decreases. In recent years, impedance of newer headphones has generally decreased to accommodate lower voltages available on battery powered CMOS-based portable electronics. This results in headphones that can be more efficiently driven by battery powered electronics. Consequently, newer amplifiers are based on designs with relatively low output impedance.
The impedance of headphones is of concern because of the output limitations of amplifiers. A modern pair of headphones is driven by an amplifier, with lower impedance headphones presenting a larger load. Amplifiers are not ideal; they also have some output impedance that limits the amount of power they can provide. In order to ensure an even frequency response, adequate damping factor, and undistorted sound, an amplifier should have an output impedance less than 1/8 that of the headphones it is driving (and ideally as low as possible). If output impedance is large compared to the impedance of the headphones, significantly higher distortion will be present. Therefore, lower impedance headphones will tend to be louder and more efficient, but will also demand a more capable amplifier. Higher impedance headphones will be more tolerant of amplifier limitations, but will produce less volume for a given output level.
Historically, many headphones had relatively high impedance, often over 500 ohms in order to operate well with high impedance tube amplifiers. In contrast, modern transistor amplifiers can have very low output impedance, enabling lower impedance headphones. Unfortunately, this means that older audio amplifiers or stereos often produce poor quality output on some modern, low impedance headphones. In this case, an external headphone amplifier may be beneficial.
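A small sketch of the voltage-divider interaction behind the 1/8 rule above; the impedance values are illustrative assumptions, and real headphone impedance also varies with frequency.

```python
import math

def level_drop_db(z_headphone, z_output):
    """Level lost across the amplifier's output impedance (simple voltage divider)."""
    return 20 * math.log10(z_headphone / (z_headphone + z_output))

# 32-ohm headphones on a 4-ohm output (meets the 1/8 rule) vs a 120-ohm output.
print(level_drop_db(32, 4))    # about -1.0 dB: a small, essentially flat loss
print(level_drop_db(32, 120))  # about -13.5 dB, and frequency-dependent in practice
```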
Sensitivity is a measure of how effectively an earpiece converts an incoming electrical signal into an audible sound. It thus indicates how loud the headphones will be for a given electrical drive level. It can be measured in decibels of sound pressure level per milliwatt, or dB SPL/mW, which may be abbreviated to dB/mW. The sensitivity of headphones is usually between about 80 and 125 dB/mW.
Headphone sensitivity may be measured in dB/mW or dB/V. These are dB SPL (sound pressure level) measured in a standard ear for a 1 kHz sinusoidal headphone input of either one milliwatt or one volt. One can convert between these two units if the impedance of the earpiece is known.
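A sketch of that conversion: at 1 V the power delivered into an impedance Z is 1000/Z milliwatts, so the two ratings differ by 10·log10(1000/Z) dB. The example figures below are illustrative.

```python
import math

def db_mw_to_db_v(sensitivity_db_mw, impedance_ohms):
    """Convert headphone sensitivity from dB SPL/mW to dB SPL/V."""
    return sensitivity_db_mw + 10 * math.log10(1000.0 / impedance_ohms)

# Illustrative: a 100 dB/mW earpiece rated at 32 ohms vs at 300 ohms.
print(db_mw_to_db_v(100, 32))   # ~114.9 dB/V
print(db_mw_to_db_v(100, 300))  # ~105.2 dB/V
```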
Headphone size can affect the balance between fidelity and portability. Generally, headphone form factors can be divided into four separate categories: circumaural, supra-aural, earbud, and in-ear.
Circumaural headphones (sometimes called full size headphones) have circular or ellipsoid earpads that encompass the ears. Because these headphones completely surround the ear, circumaural headphones can be designed to fully seal against the head to attenuate external noise. Because of their size, circumaural headphones can be heavy and there are some sets that weigh over 500 grams (1 lb). Ergonomic headband and earpad design is required to reduce discomfort resulting from weight.
Supra-aural headphones have pads that press against the ears, rather than around them. They were commonly bundled with personal stereos during the 1980s. This type of headphone generally tends to be smaller and lighter than circumaural headphones, resulting in less attenuation of outside noise. Supra-aural headphones can also lead to discomfort due to the pressure on the ear as compared to circumaural headphones that sit around the ear. Comfort may vary due to the earcup material.
Open or closed back
Both circumaural and supra-aural headphones can be further differentiated by the type of earcups:
Open-back headphones have the back of the earcups open. This leaks more sound out of the headphone and also lets more ambient sounds into the headphone, but gives a more natural or speaker-like sound and more spacious "soundstage" - the perception of distance from the source.
Closed-back (or sealed) styles have the back of the earcups closed. They usually block some of the ambient noise, but have a smaller soundstage, giving the wearer a perception that the sound is coming from within their head. Closed-back headphones tend to be able to produce stronger low frequencies than open-back headphones.
Semi-open headphones have a design that can be considered a compromise between open-back and closed-back headphones. Some believe this implies that the result combines the positive properties of both designs, while others consider the term "semi-open" to be purely a marketing device. There is no exact definition of the term, but there are designs that can reasonably be described this way. Where the open-back approach has hardly any measures to block sound at the outer side of the diaphragm, and the closed-back approach has a genuinely closed chamber there, a semi-open headphone has a chamber that blocks sound partially while leaving some through, via openings or vents.
Earphones (popularly called "earbuds" in recent years) are very small headphones that are fitted directly in the outer ear, facing but not inserted in the ear canal. Earphones are portable and convenient, but many people consider them to be uncomfortable and prone to falling out. They provide hardly any acoustic isolation and leave room for ambient noise to seep in; users may turn up the volume dangerously high to compensate, at the risk of causing hearing loss. On the other hand, they let the user be better aware of their surroundings. Since the early days of the transistor radio, earphones have commonly been bundled with personal music devices. They are sold at times with foam pads for comfort.
In-ear headphones, also known as in-ear monitors (IEMs) or canalphones, are small headphones with similar portability to earbuds which are inserted in the ear canal itself. IEMs are higher quality in-ear headphones and are used by audio engineers and musicians as well as audiophiles.
Because in-ear headphones engage the ear canal, they can be less prone to falling out and they block out much environmental noise. Lack of sound from the environment can be a problem when sound is a necessary cue for safety or other reasons, as when walking, driving, or riding near or in vehicular traffic.
Generic or custom fitting ear canal plugs are made from silicone rubber, elastomer, or foam. Custom in-ear headphones use castings of the ear canal to create custom-molded plugs that provide added comfort and noise isolation.
A headset is a headphone combined with a microphone. Headsets provide the equivalent functionality of a telephone handset with hands-free operation. Among applications for headsets, besides telephone use, are aviation, theatre or television studio intercom systems, and console or PC gaming. Headsets are made with either a single-earpiece (mono) or a double-earpiece (mono to both ears or stereo). The microphone arm of headsets is either an external microphone type where the microphone is held in front of the user's mouth, or a voicetube type where the microphone is housed in the earpiece and speech reaches it by means of a hollow tube. Some headsets come in a choice of either behind-the-neck or no-headband design instead of the traditional over-the-head band.
Telephone headsets connect to a fixed-line telephone system. A telephone headset functions by replacing the handset of a telephone. Headsets for standard corded telephones are fitted with a standard 4P4C commonly called an RJ-9 connector. Headsets are also available with 2.5 mm jack sockets for many DECT phones and other applications. Cordless bluetooth headsets are available, and often used with mobile telephones. Headsets are widely used for telephone-intensive jobs, in particular by call centre workers. They are also used by anyone wishing to hold telephone conversations with both hands free.
For older models of telephones, the headset microphone impedance is different from that of the original handset, requiring a telephone amplifier for the telephone headset. A telephone amplifier provides basic pin-alignment similar to a telephone headset adaptor, but it also offers sound amplification for the microphone as well as the loudspeakers. Most models of telephone amplifiers offer volume control for loudspeaker as well as microphone, mute function and switching between headset and handset. Telephone amplifiers are powered by batteries or AC adaptors.
Ambient noise reduction
Unwanted sound from the environment can be reduced by excluding sound from the ear by passive noise isolation, or, often in conjunction with isolation, by active noise cancellation.
Passive noise isolation is essentially using the body of the earphone, either over or in the ear, as a passive earplug that simply blocks out sound. The headphone types that provide most attenuation are in-ear canal headphones and closed-back headphones, both circumaural and supra-aural. Open-back and earbud headphones provide some passive noise isolation, but much less than the others. Typical closed-back headphones block 8 to 12 dB, and in-ears anywhere from 10 to 15 dB. Some models have been specifically designed for drummers, with the aim of enabling them to monitor the recorded sound while shutting out as much as possible of the sound coming directly from the drums. Such headphones claim to reduce ambient noise by around 25 dB.
Active noise-cancelling headphones use a microphone, amplifier, and speaker to pick up, amplify, and play ambient noise in phase-reversed form; this to some extent cancels out unwanted noise from the environment without affecting the desired sound source, which is not picked up and reversed by the microphone. They require a power source, usually a battery, to drive their circuitry. Active noise cancelling headphones can attenuate ambient noise by 20 dB or more, but the active circuitry is mainly effective on constant sounds and at lower frequencies, rather than sharp sounds and voices. Some noise cancelling headphones are designed mainly to reduce low-frequency engine and travel noise in aircraft, trains, and automobiles, and are less effective in environments with other types of noise.
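A toy numerical sketch of why phase-reversed cancellation works best at low frequencies: any fixed processing delay costs more phase at higher frequencies. The 50-microsecond delay and the test tones are illustrative assumptions, not measurements of any real product.

```python
import numpy as np

fs = 48_000                    # sample rate, Hz
t = np.arange(fs) / fs         # one second of samples
delay = 50e-6                  # assumed processing delay of the ANC circuit, seconds

for freq in (100, 1000, 5000):
    noise = np.sin(2 * np.pi * freq * t)
    anti_noise = -np.sin(2 * np.pi * freq * (t - delay))  # inverted, slightly late
    residual = noise + anti_noise
    attenuation_db = 20 * np.log10(np.std(residual) / np.std(noise))
    print(f"{freq:>5} Hz: {attenuation_db:6.1f} dB")  # low tones cancel far better
```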
Various types of transducer are used to convert electrical signals to sound in headphones.
The moving coil driver, more commonly referred to as a "dynamic" driver is the most common type used in headphones. The operating principle consists of a stationary magnetic element affixed to the frame of the headphone which sets up a static magnetic field. The magnetic element in headphones is typically composed of ferrite or neodymium. The diaphragm, typically fabricated from lightweight, high stiffness to mass ratio cellulose, polymer, carbon material, or the like, is attached to a coil of wire (voice coil) which is immersed in the static magnetic field of the stationary magnet. The diaphragm is actuated by the attached voice coil, when the varying current of an audio signal is passed through the coil. The alternating magnetic field produced by the current through the coil reacts against the static magnetic field in turn, causing the coil and attached diaphragm to move the air, thus producing sound. Modern moving-coil headphone drivers are derived from microphone capsule technology.
Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated. Electrostatic headphones are usually more expensive than moving-coil ones, and are comparatively uncommon. In addition, a special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1000 volts.
Due to the extremely thin and light diaphragm membrane, often only a few micrometers thick, and the complete absence of moving metalwork, the frequency response of electrostatic headphones usually extends well above the audible limit of approximately 20 kHz. The high frequency response means that the low midband distortion level is maintained to the top of the audible frequency band, which is generally not the case with moving coil drivers. Also, the frequency response peakiness regularly seen in the high frequency region with moving coil drivers is absent. Well-designed electrostatic headphones can produce significantly better sound quality than other types.
Electrostatic headphones require a voltage source generating 100 V to over 1 kV, and are on the user's head. They do not need to deliver significant electric current, which limits the electrical hazard to the wearer in case of fault.
An electret driver functions along the same electromechanical principles as an electrostatic driver. However, the electret driver has a permanent charge built into it, whereas electrostatics have the charge applied to the driver by an external generator. Electret and electrostatic headphones are relatively uncommon. Electrets are also typically cheaper and lower in technical capability and fidelity than electrostatics.
Orthodynamic (also known as Planar Magnetic) headphones use similar technology to electrostatic headphones, with some fundamental differences. They operate similarly to Planar Magnetic Loudspeakers.
An orthodynamic driver consists of a relatively large membrane suspended between two sets of permanent, oppositely aligned magnets.
A flat electrical conductor is embedded in the membrane; passing the audio current through it induces movement in the membrane and produces sound.
Orthodynamic headphones have a number of advantages and disadvantages. Since the conductor is spread evenly across the membrane, which is suspended within a uniform magnetic field, changes in the electrical signal produce uniform movement across the entire membrane surface. This reduces the possibility of distortion at different frequencies across different parts of the membrane, as may occur in dynamic drivers. This configuration also usually benefits from good bass response, due to the large surface area of the membrane producing the pressure waves required for low-frequency sound.
Overall, orthodynamic headphones are considered to have very high-fidelity sound quality.
However, the large permanent magnet configurations cause orthodynamic headphones to often be considerably heavier than their dynamic or electrostatic counterparts. Whilst they do not require as much power as electrostatic headphones in order to give acceptable levels of volume and audio quality, they do require significantly more than most dynamic driver headphones. As such, most orthodynamic headphones require substantial amplification.
A balanced armature is a sound transducer design primarily intended to increase the electrical efficiency of the element by eliminating the stress on the diaphragm characteristic of many other magnetic transducer systems. As shown schematically in the first diagram, it consists of a moving magnetic armature that is pivoted so it can move in the field of the permanent magnet. When precisely centered in the magnetic field there is no net force on the armature, hence the term 'balanced.' As illustrated in the second diagram, when there is electric current through the coil, it magnetizes the armature one way or the other, causing it to rotate slightly one way or the other about the pivot thus moving the diaphragm to make sound.
The design is not mechanically stable; a slight imbalance makes the armature stick to one pole of the magnet. A fairly stiff restoring force is required to hold the armature in the 'balance' position. Although this reduces its efficiency, this design can still produce more sound from less power than any other. Popularized in the 1920s as Baldwin Mica Diaphragm radio headphones, balanced armature transducers were refined during World War II for use in military sound powered telephones. Some of these achieved astonishing electro-acoustic conversion efficiencies, in the range of 20% to 40%, for narrow bandwidth voice signals.
Today they are typically used only in canalphones and hearing aids, where their diminutive size is a major advantage. They generally are limited at the extremes of the hearing spectrum (e.g. below 20 Hz and above 16 kHz) and require a better seal than other types of drivers to deliver their full potential. Higher-end models may employ multiple armature drivers, dividing the frequency ranges between them using a passive crossover network. A few combine an armature driver with a small moving-coil driver for increased bass output.
The earliest loudspeakers for radio receivers used balanced armature drivers for their cones.
The thermoacoustic effect generates sound from audio-frequency Joule heating of a conductor; the effect is not magnetic and involves no vibrating driver. In 2013 a carbon nanotube thin-yarn earphone based on the thermoacoustic mechanism was demonstrated by a research group at Tsinghua University. The resulting CNT thin-yarn earphone has a working element called a CNT thin-yarn thermoacoustic chip. Such a chip is composed of a layer of CNT thin-yarn array supported by a silicon wafer, with periodic grooves of a set depth made in the wafer by micro-fabrication to suppress heat leakage from the CNT yarn to the substrate.
Other transducer technologies
Transducer technologies employed much less commonly for headphones include the Heil Air Motion Transformer (AMT), piezoelectric film, ribbon planar magnetic, magnetostriction, and plasma ionisation. The first Heil AMT headphone was marketed by ESS Laboratories and was essentially an ESS AMT tweeter from one of the company's speakers driven at full range. Since the turn of the century, only Precide of Switzerland has manufactured an AMT headphone. Piezoelectric film headphones were first developed by Pioneer; their two models used a flat sheet of film, which limited the maximum volume of air that could be moved. Currently TakeT produces a piezoelectric film headphone shaped not unlike an AMT transducer, but which, like the driver Precide uses for its headphones, varies the size of the transducer folds over the diaphragm. It additionally incorporates a two-way design by including a dedicated tweeter/supertweeter panel. The folded shape of a diaphragm allows a transducer with a larger surface area to fit within smaller space constraints; this increases the total volume of air that can be moved on each excursion of the transducer for a given radiating area.
Magnetostriction headphones, sometimes sold under the label "Bonephones", work by transmitting vibrations against the side of the head, delivering the sound via bone conduction. This is particularly helpful in situations where the ears must be left unobstructed, or for users who are deaf for reasons that do not affect the nervous apparatus of hearing. Magnetostriction headphones, though, have greater limitations on their fidelity than conventional headphones that work through the normal mechanisms of the ear. Additionally, there was one attempt to market a plasma-ionisation headphone in the early 1990s by a French company called Plasmasonics. It is believed that no functioning examples remain.
Benefits and limitations
Headphones may be used to prevent other people from hearing the sound either for privacy or to prevent disturbance, as in listening in a public library. They can also provide a level of sound fidelity greater than loudspeakers of similar cost. Part of their ability to do so comes from the lack of any need to perform room correction treatments with headphones. High quality headphones can have an extremely flat low-frequency response down to 20 Hz within 3 dB. Marketed claims such as 'frequency response 4 Hz to 20 kHz' are usually overstatements; the product's response at frequencies lower than 20 Hz is typically very small.
Headphones are also useful for video games that use 3D positional audio processing algorithms, as they allow players to better judge the position of an off-screen sound source (such as the footsteps of an opponent or their gun fire).
Although modern headphones have been particularly widely sold and used for listening to stereo recordings since the release of the Walkman, there is subjective debate regarding the nature of their reproduction of stereo sound. Stereo recordings represent the position of horizontal depth cues (stereo separation) via volume and phase differences of the sound in question between the two channels. When the sounds from two speakers mix, they create the phase difference the brain uses to locate direction. Through most headphones, because the right and left channels do not combine in this manner, the illusion of the phantom center can be perceived as lost. Hard-panned sounds will also be heard in only one ear rather than from one side.
Binaural recordings use a different microphone technique to encode direction directly as phase, with very little amplitude difference below 2 kHz, often using a dummy head, and can produce a surprisingly lifelike spatial impression through headphones. Commercial recordings almost always use stereo, rather than binaural, recording, because loudspeaker listening has been more popular than headphone listening.
It is possible to change the spatial effects of stereo sound on headphones to better approximate the presentation of speaker reproduction by using frequency-dependent cross-feed between the channels, or—better still—a Blumlein shuffler (a custom EQ employed to augment the low-frequency content of the difference information in a stereo signal).
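To make the idea concrete, here is a minimal sketch of frequency-dependent cross-feed in Python (using NumPy and SciPy): the low-frequency content of each channel is bled into the opposite channel, roughly mimicking the inter-speaker mixing described above. The cutoff frequency and mix level are illustrative assumptions, not values from any particular product.

```python
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs, cutoff_hz=700.0, mix=0.3):
    """Blend low-passed content of each channel into the other channel."""
    # 2nd-order low-pass; cutoff normalized to the Nyquist frequency
    b, a = butter(2, cutoff_hz / (fs / 2))
    left_lp = lfilter(b, a, left)
    right_lp = lfilter(b, a, right)
    out_l = left + mix * right_lp
    out_r = right + mix * left_lp
    # Rescale so the summed signals cannot clip
    peak = max(np.max(np.abs(out_l)), np.max(np.abs(out_r)), 1.0)
    return out_l / peak, out_r / peak
```

Only frequencies below the cutoff are cross-fed because low frequencies carry most of the inter-aural level cues that speaker listening blends; a Blumlein shuffler manipulates the stereo difference signal more directly.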
Headsets can have ergonomic benefits over traditional telephone handsets. They allow call center agents to maintain better posture without needing to hand-hold a handset or tilt their head sideways to cradle it.
Due to the unique physiology of the ear, PPG sensors may be integrated into earbuds, providing a mechanism for real-time fitness monitoring during exercise while listening to music. For example, headsets using Valencell's PerformTek technology can measure health and fitness metrics including heart rate, distance, speed, cadence, VO2 max (aerobic fitness level), and calories burned.
Dangers and volume solutions
Using headphones at a sufficiently high volume level may cause temporary or permanent hearing impairment or deafness. The headphone volume often has to compete with the background noise, especially in loud places such as subway stations, aircraft, and large crowds. Extended periods of exposure to high sound pressure levels created by headphones at high volume settings may be damaging; however, one hearing expert found that "fewer than 5% of users select volume levels and listen frequently enough to risk hearing loss." Some manufacturers of portable music devices have attempted to introduce safety circuitry that limited output volume or warned the user when dangerous volume was being used, but the concept has been rejected by most of the buying public, which favors the personal choice of high volume. Koss introduced the "Safelite" line of cassette players in 1983 with such a warning light. The line was discontinued two years later for lack of interest.
The government of France has imposed a limit on all music players sold in the country: they must not be capable of producing more than 100 dBA (the threshold of hearing damage during extended listening is 80 dB, and the threshold of pain, or theoretically of immediate hearing loss, is 130 dB). Motorcycle and other power-sport riders benefit from wearing foam earplugs, where legal, to block excessive road, engine, and wind noise; doing so actually improves their ability to hear music and intercom speech. The ear can normally detect sound pressures as small as roughly a billionth of an atmosphere, making it incredibly sensitive. At very high sound pressure levels, muscles in the ear tighten the tympanic membrane; this leads to a small change in the geometry of the ossicles and stirrup that reduces the transfer of force to the oval window of the inner ear (the acoustic reflex).
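The arithmetic behind these figures can be checked directly. The sketch below uses the standard 20 micropascal reference for dB SPL to express the "billionth of an atmosphere" claim in decibels.

```python
import math

P_REF = 20e-6          # reference pressure for dB SPL: 20 micropascals
ATMOSPHERE = 101325.0  # standard atmospheric pressure, pascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# Threshold of hearing (20 uPa) as a fraction of one atmosphere:
print(P_REF / ATMOSPHERE)         # ~2e-10, about a fifth of a billionth
print(spl_db(20e-6))              # 0 dB SPL, the threshold of hearing
print(spl_db(ATMOSPHERE * 1e-9))  # ~14 dB SPL: one billionth of an atmosphere
```

A pressure of one billionth of an atmosphere works out to about 14 dB SPL, consistent with the claim that the ear's detection threshold is on that order.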
Listening to music through headphones while exercising can be dangerous. Blood may be diverted from the ears to the limbs leaving the inner ear more vulnerable to damage from loud sound. A Finnish study recommended that exercisers should set their headphone volumes to half of their normal loudness and only use them for half an hour.
Passive noise canceling headphones can be considered dangerous because of the reduced awareness the listener has of their environment. Noise cancelling headphones are so effective that a person may not be able to hear oncoming traffic or pay attention to people around them. This issue is compounded as the number of headphone-enabled devices increases.
The usual way of limiting sound volume on devices driving headphones is to limit output power. This has the additional undesirable effect of being dependent on the efficiency of the headphones; a device producing the maximum allowed power may not produce adequate volume in low-efficiency high-quality headphones, while possibly reaching dangerous levels in very efficient ones.
- Bone conduction
- Digital audio player
- Headphone amplifier
- In-ear monitor
- List of headphone manufacturers
- Noise-cancelling headphone
- "earphone". Retrieved 4 January 2014.
- Stanley R. Alten, Audio Basics, Cengage, 2011, ISBN 0-495-91356-1, p. 63.
- The Early Radio Industry and the United States Navy
- Utah History To Go. Ruin Followed Riches for a Utah Genius (Will Bagley for the Salt Lake Tribune, July 8, 2001)
- "Headphone & Amp Impedance". Retrieved 31 May 2012.
- Siau, John. "The "0-Ohm" Headphone Amplifier". Retrieved 26 June 2012.
- "Understanding Earphone/Headphone Specifications". Shure Customer Help. Shure. Retrieved 30 December 2012.
- "Headphones | AudioBoost - Audio Review, Headphone Review, Amplifier Review, Music Album Review". AudioBoost. Retrieved 2013-09-06.
- Time magazine: custom-made headphones
- "Output Levels of Commercially Available Portable Compact Disc Players and the Potential Risk to Hearing" (abstract). Ear and Hearing, 25(6), December 2004, pp. 513-527.
- In-ear monitors are referred to as "earbuds" by some manufacturers. The box for the "J2" from jbuds describes the product as noise isolating soft silicone earbuds with the latest cutting edge In-Ear design. Also, Koss' The Plug is "a stereo earbud" that "features ... a tubular port structure that is inserted on a soft expandable cushion into the ear canal." Skullcandy also sells "in-ear earbuds" - http://www.skullcandy.com/shop/skin/frontend/default/skullcandy_main-v3/images/ninjistics/in-ear.png
- Yang Wei; Xiaoyang Lin; Kaili Jiang; Peng Liu; Qunqing Li; Shoushan Fan (2013). "Thermoacoustic Chips with Carbon Nanotube Thin Yarn Arrays.". Nano Letters. doi:10.1021/nl402408j.
- cnet reviews: Headphone Buying Guide "Even the flimsiest, cheap headphones routinely boast extremely low bass-response performance—15 or 20Hz—but almost always sound lightweight and bright."
- United States Department of Labor. Occupational Safety & Health Administration. Computer Workstation. Checklist. Accessed February 2, 2009.
- Europa.eu. Consumers: EU acts to limit health risks from exposure to noise from personal music players
- "Medical Information Search (Cochlea • FAQ)". Lookformedical.com. Retrieved 2013-09-06.
- A standard threshold of hearing at 1000 Hz for the human ear is 2 x 10-5 Pa (Marsh, Andrew (1999). "Human Ear and Hearing". Online Course on Acoustics. The School of Architecture and Fine Arts, The University of Western Australia. Retrieved 23 August 2010.); Standard atmospheric pressure is 101,325 Pa. 2 x 10-5 / 100,000 = 0.2 x 10-9, a ratio of less than a billionth.
- Greenfield, Paige (25 June 2011). "Deaf to Danger: The Perils of Earbuds". ABC News. Retrieved 20 June 2013.
- Headwize.com. Preventing Hearing Damage When Listening With Headphones
- Airo, Erkko; J. Pekkarinen; P. Olkinuora. "Listening to music with earphones: an assessment of noise exposure," Acustica–Acta Acustica, pp. 82, 885–894. (1996)
|
An APA format essay follows the American Psychological Association’s style guidelines for citing and documenting sources. According to Purdue University’s Online Writing Lab, APA format is most commonly used to document sources in papers on topics in the field of social sciences such as psychology, sociology, and anthropology. Major papers are usually composed of four sections: Title Page, Abstract, Main Body, and References.
APA format is applied to all textual citations, which are references to work or ideas that don’t originate with the essayist, as well as to the “References” page located at the end of the essay.
Set the style. In your Word processing software, set your document preferences to 10- to 12-point type and a standard letter-sized page, which is 8.5 inches by 11 inches, with one-inch margins on all sides. Select Times New Roman or a similar font and double-space your text.
Create a page header. Each page of your document should include a header with the title of your paper and the page number. The title appears on the left, while the page numbers appear on the right.
Use citations. Include in-text citations for any summary, idea, or direct quotation that is the work of another author. APA format citations must include the author’s last name, the year of publication for the specific source of the material, and the page number where the cited information appears. State the name and date as part of the sentence and include the page number parenthetically at the end. For example: In 2003, Smith found that ... (p. 206). Conversely, you may include all or part of the citation parenthetically at the end of the sentence. This would appear as (Smith, 2003, p. 206) within your text.
Use correct placement. If you have included a direct quote, place parenthetical citations after quotation marks and before the ending punctuation. For example: “This is an example of a direct quotation” (Smith, 2003, p. 206).
List references. Create a reference list at the end of your paper that includes complete bibliographic information for each source that is referenced in your document. List each author's last name followed by initials, the complete name of the publication, the article title if it is part of a larger work, the volume and issue numbers, the name of the publisher, and the city and date of publication.
Title your reference list. Type “References” at the top of the page and center the word.
Format your reference list with hanging indentations. This means that the first line of each reference is flush left, but each subsequent line within the reference is indented by half an inch.
Italicize certain titles. Italicize the titles of longer works, such as books, journals and newspapers. Article titles are not italicized.
Use specific capitalization rules. Capitalize only the first letter of the first word in the title and subtitle of any work that does not appear in a journal. In the case of journals, capitalize the first letter of all major words.
Create a bibliography. Write the bibliographic entry for a single author book as follows: Author, A.B. (Year of publication). Title of book: Subtitle of book. Location: Publisher.
Write the bibliographic entry for a single author periodical article as follows: Author, A.B. (Date of publication). Title of article. Title of Periodical, volume number(issue number), pages.
Write the bibliographic entry for a non-periodical web document, web page or report as follows: Author, A.B. (Date of publication). Title of document. Retrieved from http://Web address
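As a rough illustration, the three patterns above can be mechanized. The following Python sketch uses made-up field values and ignores the many edge cases (multiple authors, missing dates, italics) that real APA formatting involves.

```python
def apa_book(author, year, title, location, publisher):
    """Pattern: Author, A.B. (Year). Title of book. Location: Publisher."""
    return f"{author} ({year}). {title}. {location}: {publisher}."

def apa_article(author, year, article_title, journal, volume, issue, pages):
    """Pattern for a periodical article with volume, issue, and pages."""
    return (f"{author} ({year}). {article_title}. "
            f"{journal}, {volume}({issue}), {pages}.")

def apa_web(author, year, title, url):
    """Pattern for a non-periodical web document or report."""
    return f"{author} ({year}). {title}. Retrieved from {url}"

# Illustrative values only:
print(apa_book("Smith, J. B.", 2003, "Title of book: Subtitle of book",
               "New York", "Example Press"))
print(apa_article("Smith, J. B.", 2003, "Title of article",
                  "Title of Periodical", 12, 3, "201-210"))
```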
|
The Network Basics of Bridging, Routing, and Switching
A network node, which is just a device that forwards packets toward a destination, can be a router, bridge, or switch. They operate on different layers of a networking protocol (layered protocols make it easier to modify and implement the networking task).
Routers operate at Layer 3, the packet layer. Routes on a network, whether the global Internet or the network within your company, are the path that messages take to reach their destination.
But Layer 3 packets are placed inside Layer 2 frames, and a network node that only looks at frames is called a bridge. A switch is a bridge that uses frames with special tags called virtual LANs (VLANs), to forward traffic.
Layer 2: Bridging
Bits at Layer 1 are organized into frames at Layer 2. Ethernet frames have a source and destination address and a type field in the header, followed by the “data” (as you might imagine, by definition, all data units at any level carry data). At the end of the Ethernet frame comes a trailer that contains some error-detecting information.
Now, here’s the key: Bridges are the network devices that look at the frame (Layer 2) header to figure out which adjacent system should get the frame next. Bridges adjust the frame source and destination addresses (called Media Access Control addresses, or MAC addresses) so that the frame addresses show each network device that a frame came from and where it is going on each hop from source to destination.
Layer 3: Routing
Wait a minute! The frame's source and destination MAC addresses change on each hop along the way, which makes it hard for the end systems to figure out where the traffic originally came from and whom to reply to.
That’s where the layers come in. Although a different frame (at least as far as MAC addresses are concerned) is sent hop-by-hop through the network, the data content of the frame, called the Layer 3 packet, remains intact from source host to destination host. The Layer 3 packet can’t use Layer 2 MAC addresses, so the IP address scheme was created for Layer 3.
Network devices that look at the packet (Layer 3) header to figure out which adjacent system should get the frame next are called routers. Routers do not change the packet's source and destination addresses (the IP addresses), so the receiver always knows whom the packet is from and where to reply. However, routers do rewrite the MAC addresses in the Layer 2 frame hop-by-hop.
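The corresponding Layer 3 decision can be sketched as a longest-prefix-match lookup. The routes and next-hop names below are invented for illustration; Python's standard ipaddress module performs the prefix matching.

```python
import ipaddress

# Illustrative routing table: (destination prefix, next hop)
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "next-hop-A"),
    (ipaddress.ip_network("10.1.0.0/16"), "next-hop-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def lookup(dst):
    """Return the next hop for a destination IP: longest matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("10.1.2.3"))   # next-hop-B (the /16 is more specific than the /8)
print(lookup("192.0.2.1"))  # default-gateway
```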
(Layer 2) Switching
However, if you define a bridge as a MAC-frame-address-examining-device and a router as an IP-packet-address-examining-device, then there does not seem to be anything left for a switch to do.
Today, when people say “switch,” they usually mean a LAN switch. When applied to LANs, a switch is a device with a number of characteristics that can be compared to bridges and routers.
The LAN switch is really a complex bridge with many interfaces. LAN switching is a form of multiport bridging, where a bridge device connects not just two, but many LANs on different ports. Essentially, though, a LAN switch has every device on its own LAN segment (piece of a LAN), giving each system the entire LAN bandwidth.
Much more can be said about switching, of course, enough to fill a book or two. For now, just remember that switching normally involves virtual LANs, or VLANs.
|
Gangrene is a condition that arises when a considerable mass of body tissue dies (necrosis). This may occur after an injury or infection, or in people suffering from any chronic health problem affecting blood circulation. Gangrene is primarily caused by reduced blood supply to the affected tissues, which leads to cell death. Diseases such as diabetes, as well as long-term smoking, increase the risk of gangrene. It can affect any part of the body but typically starts in the toes, feet, fingers and hands (the extremities).
Types of gangrene:
Dry gangrene: Dry gangrene begins at the distal part of the limb due to ischemia (restricted supply of blood), and often appears in the toes and feet of elderly patients as a result of arteriosclerosis; it is thus also known as senile gangrene. Dry gangrene is generally seen with arterial occlusion. As there is limited putrefaction and bacteria fail to survive, dry gangrene spreads slowly until it reaches the point where the blood supply is adequate to keep tissue viable. The affected part is dry, shrunken and dark reddish-black, like mummified flesh.
Wet gangrene: Wet gangrene occurs in moist tissues and organs such as the mouth, bowel, lungs, cervix, and vulva. Bed sores occurring on body parts such as the sacrum, buttocks, and heels are also classified as wet gangrene infections. Wet gangrene is characterized by numerous bacteria and generally has a poor prognosis (compared to dry gangrene) due to septicaemia. In wet gangrene, the tissue is infected by microorganisms such as Clostridium perfringens or Bacillus fusiformis, which cause the tissue to swell and emit a foetid smell. Wet gangrene usually develops rapidly due to blockage of venous (mainly) and/or arterial blood flow. The affected part is saturated with stagnant blood, which promotes rapid growth of bacteria.
Gas gangrene: Gas gangrene is a bacterial infection that produces gas within tissues. It is the most severe form of gangrene and is usually caused by Clostridium perfringens bacteria. Infection spreads rapidly as the gases produced by the bacteria expand and infiltrate healthy tissue in the vicinity. Gas gangrene is treated as a medical emergency because it spreads quickly to the surrounding tissues.
Dry gangrene, the most common type of gangrene, usually manifests as follows.
• Firstly, the affected part turns red
• Then it becomes cold, pale and numb (though some people might experience pain).
• Furthermore, the body part may begin to change colour from red to brown to black without treatment. The dead tissue may shrivel up and fall away from the surrounding healthy tissue.
Wet gangrene progresses much faster than dry gangrene. Its symptoms include:
• Swelling and redness of the affected body part
• Pain which can often be severe
• Foul-smelling discharge of pus from a sore in skin
• Affected area will change colour from red to brown to black
The symptoms of Gas gangrene are as follows:
• Feeling of heaviness followed by severe pain
• In most cases of gas gangrene, pressing the skin near the affected area produces a crackling sound caused by a build-up of gas, with a sensation like crushing fine tinfoil.
Gangrene is caused when a body part loses its blood supply due to an injury or an underlying disease. The conditions most commonly responsible for causing gangrene are as follows:
- Blood vessel disease, such as arteriosclerosis (hardening of the arteries) in the arms or legs
- Suppressed immune system (for example, from HIV or chemotherapy)
- Infection or ischemia, such as that caused by the bacteria Clostridium perfringens or by thrombosis (a blocked blood vessel)
Clinical tests can be carried out to confirm the diagnosis of gangrene. These include:
- Blood tests – An increase or decrease in the number of white blood cells can indicate infection.
- Tissue culture – A small sample of fluid (or tissue) from the affected area can be tested for bacteria. This test is called a Gram stain. Bacteria are stained with a dye and examined under a microscope. The test is also useful for determining the most effective type of antibiotic to treat the infection.
- Blood culture – A sample of infected blood is removed and placed in a warm environment to encourage the growth of bacteria.
- Imaging tests – A range of imaging tests, such as X-rays, MRI scans (where radio waves are used to produce an image of the inside of your body) or a computerized tomography (CT) scan can be used to confirm the presence and spread of gangrene. These tests can also be used to study blood vessels in order to identify the blockages.
- Surgery – Surgical examination may be necessary to confirm the diagnosis of gas gangrene.
Although gangrene can be managed with symptomatic treatment, it is also necessary to diagnose and treat the underlying cause. The symptomatic treatments are as follows:
Infection: Serious infections are usually treated with antibiotics.
Debridement: Debridement is the surgical removal of the dead tissue that results from gangrene.
Vascular surgery: Vascular surgeries can be used to restore the blood flow either by Angioplasty or Bypass surgery.
- Angioplasty – Where a tiny balloon is placed into a narrow, or blocked, artery and is inflated to open up the vessel. A small metal tube, known as a stent, may also be inserted into the artery to prevent it from getting closed.
- Bypass surgery – Where the surgeon redirects the flow of blood and bypasses the blockage by connecting (grafting) one of the veins to a healthy part of an artery.
Hyperbaric oxygen therapy: An alternative treatment for some forms of gangrene is hyperbaric oxygen therapy. In this therapy, the patient is placed in a specially designed chamber filled with pressurized air, and a plastic hood filled with pure oxygen is placed over the damaged body part.
There are a number of self-care techniques for patients at risk of developing gangrene. The most common are as follows:
- Check feet daily for problems such as numbness, discoloration, breaks in the skin, pain, or swelling.
- Avoid walking barefoot outside and wearing shoes without socks.
- Wash your feet daily.
- Avoid using hot water bottles, electric blankets, foot spas, and sitting too close to the fire. These may burn the feet. Burnt tissue is vulnerable to gangrene.
- Avoid wearing sandals, flip-flops, slip-ons, shoes with a pointed toe, or heels higher than an inch. Shoes with round or square toes, and laces or fasteners, provide the best support and protection for feet. Always break in new shoes gradually.
|
The Cassini spacecraft looks down, almost directly at the north pole of Dione. The feature just left of the terminator at bottom is Janiculum Dorsa, a long, roughly north-south trending ridge. Image taken March 22, 2008.
Credit: NASA/JPL/Space Science Institute
The plain-looking Saturn moon Dione may have once had a geologically active subsurface ocean, new images from NASA's Cassini spacecraft reveal.
Images of Dione's 500-mile-long (800 kilometers) mountain Janiculum Dorsa suggest that the moon could have been a weaker copycat of Enceladus, Saturn's icy geyser moon.
"There may turn out to be many more active worlds with water out there than we previously thought," Bonnie Buratti, who leads the Cassini science team at NASA's Jet Propulsion Laboratory in Pasadena, Calif., said in a statement.
Subsurface oceans are thought to exist on several bodies in the solar system, including Saturn's moons Enceladus and Titan and Jupiter's moon Europa. These geologic hotspots have garnered the interest of scientists searching for the building blocks of life beyond Earth. If Dione turned out to have a liquid layer under its crust, that would increase the moon's chances of supporting life.
Cassini, which has been exploring Saturn since 2004, detected a weak particle stream coming from Dione with its magnetometer. Images taken by the spacecraft suggest a slushy liquid layer might exist beneath its icy crust, along with ancient, now-inactive fractures resembling the ones on Enceladus that spew water ice and carbon-containing particles.
Dione's Janiculum Dorsa ranges from about 0.6 to 1.2 miles (1 to 2 kilometers) in height. The mountain seems to have deformed the icy crust underneath by as much as 0.3 mile (0.5 kilometer). The deformation implies the crust was warm, most likely from a subsurface ocean when the mountain formed, the researchers said.
As Dione swings around Saturn, it gets squished and stretched, causing it to heat up. When a subsurface ocean lets the icy crust float around on top, Saturn's gravitational pull is amplified and generates roughly 10 times more heat, the researchers said. The heating could also be caused by a local hotspot or an eccentric orbit, but these explanations are considered less likely.
Scientists don't know why Dione hasn't been as active as Enceladus. The latter may have experienced stronger gravitational forces or more radioactive heating in its core, they suggest. Subsurface oceans appear to be common on icy satellites, and could exist on dwarf planets like Ceres and Pluto.
Cassini's recent findings were reported in March in the journal Icarus.
|
Paleontologists would love to use DNA to study the evolution of long-extinct animals, but genetic material is much too fragile to last very long. Now, scientists have tapped a related, sturdier source of molecular information by studying proteins in fossilized bone. Two bison bones more than 55,000 years old have yielded the first complete amino acid sequences from a fossil, and researchers think the same technique could work for much older specimens.
Biochemist Christina Nielsen-Marsh of the University of Newcastle, U.K., and her colleagues borrowed a technique used to analyze modern genetic material and applied it to bison bones recovered from the permafrost of Siberia and Alaska. In the December issue of Geology, the team members report that they isolated a protein called osteocalcin from the bones and recovered the first intact amino acid sequences from an ancient specimen. Because the protein sequence is directly related to the DNA code, the team says that comparing protein sequences--like comparing DNA sequences--is an approach that can be used to determine degrees of relatedness between species and to decipher how ancient animals evolved.
In addition, proteins have distinct advantages over DNA, says Nielsen-Marsh. Osteocalcin is bound tightly to minerals in the bone, which makes it very stable. Although DNA can survive only up to 100,000 years and is rarely found in useful quantities in specimens even half that old, the team has found measurable amounts of osteocalcin in 120,000-year-old bones. Nielsen-Marsh estimates that the proteins might survive as long as 10 million years. Their stability makes osteocalcin more likely to endure warmer, harsher environments as well. Contamination--one of the biggest concerns with ancient DNA analyses--is also less likely to be a problem with osteocalcin, because it is found only in vertebrate bones.
"It's a very promising preliminary result," says evolutionary biologist Robert Wayne of the University of California, Los Angeles. By providing an alternative to DNA, osteocalcin could expand the range of specimens that can be used to test evolutionary hypotheses, he says.
|
Draft of new version of Chapter 3 of Pickard and Emery. (Revised 11/2/00 Lynne Talley). This will be copyrighted material. There are some formatting errors for the tables and it is likely that some equations have not converted to html. The text is placed here for your convenience and for your comments. A printed version of the word document will be placed in the library reserve reading.
Units. Officially we should be using mks units for everything. In reality, we often use cgs since our velocities are on the order of cm/sec rather than m/sec. We usually refer to depths in meters, and distances in kilometers. Most publications use decibars for pressure rather than Pascals. We usually use degrees Celsius rather than Kelvin, but care should be taken when doing heat calculations. Salinity officially has no units (see discussion below). In general be careful about units when doing calculations.
Units. The units of force are (mass length / time^2), which you can remember from Newton's law F = ma. The units of pressure are (force / length^2), or (mass / [length time^2]).
Description. The force due to pressure comes from the difference in pressure from one point to another - i.e. the "pressure gradient force" since the gradient is the change over distance. The force is in the direction from high to low pressure, hence we say the force is oriented "down the pressure gradient".
In the ocean, the downward force of gravity is balanced mostly by an upward pressure gradient force. That is, the water is not accelerating downwards - instead it is kept from collapsing by the upward pressure gradient. Therefore pressure increases with increasing depth.
The pressure at a given depth depends on the mass of water lying above that depth. (Hydrostatic equation, given in class: dp = rho g dz, with depth z measured downward.) If the pressure change is 100 decibars (100 dbar), gravity g = 9.8 m/sec^2, and density is 1025 kg/m^3, then the depth change is 99.55 meters.
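The worked example can be verified with a few lines of Python (a sketch; the conversion 1 dbar = 10^4 Pa is standard).

```python
RHO = 1025.0  # seawater density, kg/m^3
G = 9.8       # gravitational acceleration, m/s^2

def depth_change_m(dp_dbar, rho=RHO, g=G):
    """Depth change implied by a pressure change, from dp = rho * g * dz."""
    dp_pa = dp_dbar * 1.0e4  # decibars to pascals
    return dp_pa / (rho * g)

print(depth_change_m(100.0))  # ~99.55 m for a 100 dbar pressure change
```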
The total vertical variation in pressure in the ocean is thus from near zero (surface) to 10,000 dbar (deepest).
Horizontal pressure gradients drive the horizontal flows in the ocean (which are much much stronger than the vertical flows). The horizontal variation in pressure in the ocean is due entirely to variations in the mass distribution. Where the water column above a given depth (or rather geopotential surface, parallel to the geoid) is heavier, because the water is denser or the column is thicker or both, the pressure will be greater. Note that the horizontal pressure differences which drive the ocean currents are on the order of a decibar over hundreds or thousands of kilometers, that is, much smaller than the change in pressure with depth.
How is pressure measured?
(1) Until recently, and possibly still in some circumstances, pressure was measured using a pair of reversing thermometers - one protected from seawater pressure by a vacuum and the other open to the seawater pressure. They were sent in a pair down to whatever depth, then flipped over, which cuts off the mercury in an ingenious small glass loop in the thermometer. They were brought back aboard and the difference between the mercury column length in the protected and unprotected thermometers was used to calculate the pressure.
(2) Quartz transducer now used with electronic instruments. The accuracy is 3 dbar and the precision is 0.5 dbar.
Figure. Depth versus pressure calculated from a CTD profile near Japan.
Definition. Temperature is a thermodynamic property of a fluid, and is due to the activity of molecules and atoms in the fluid. The more the activity (energy), the higher the temperature. Temperature is a measure of the heat content. Heat and temperature are related through the specific heat (equation in class; per unit volume, Q = density x specific heat x T, as used below). When the heat content is zero (no activity), the temperature is absolute zero (on the Kelvin scale).
Units. Temperature units used in oceanography are degrees Celsius. For heat content and heat transport calculations, the Kelvin scale for temperature should be used. In the special case when mass transport is zero across the area chosen for the heat transport calculation, degrees Celsius can of course be used. Most oceanographic applications of heat transport rely on making such a mass balance. (See discussion in topic 3.) 0 C = 273.15 K. A change of 1 deg C is the same as a change of 1 deg K.
How is temperature measured? (1) Reversing mercury thermometers (see pressure discussion above). These were invented by Negretti and Zamba in 1874. Accuracy is 0.004C and precision is 0.002C. (2) Thermistors for electronic instruments, including replacement for reversing thermometer pairs. Quality varies significantly. The best thermistors commonly used in oceanographic instruments have an accuracy of 0.002C and precision of 0.0005-0.001C.
Heat per unit volume is computed from temperature using Q = density*specific heat*T where Q is heat/volume and T is temperature in degrees Kelvin. (When making a heat calculation within the ocean, where pressure is non-zero, use potential temperature, as defined below.) mks units of heat are joules (i.e. an energy unit). Heat change is expressed in Watts (i.e. joules/sec). Heat flux is in Watts/meter^2 (energy per second per unit area).
To change the temperature by 1C in a column of water which is 100 m thick and 1 m^2 on the top and bottom, over a period of 30 days, requires what heat flux? The density of seawater is about 1025 kg/m^3 and the specific heat is about 3850 J/(kg C). The heat flux into the volume must then be density*specific heat*(delta T)*volume/(delta t), where T is temperature and t is time. This gives a heat change of about 152 W. The heat flux through the surface area of 1 m^2 is thus about 152 W/m^2.
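Checking the arithmetic in Python, using the numbers given above:

```python
rho = 1025.0        # seawater density, kg/m^3
c_p = 3850.0        # specific heat, J/(kg C)
dT = 1.0            # temperature change, C
volume = 100.0      # m^3 (100 m thick column, 1 m^2 top and bottom)
area = 1.0          # m^2
dt = 30 * 86400.0   # 30 days in seconds

heat_rate = rho * c_p * dT * volume / dt  # watts
print(heat_rate)          # ~152 W
print(heat_rate / area)   # ~152 W/m^2 through the 1 m^2 surface
```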
Maps of surface heat flux are shown in topic 3.
3.3. Potential temperature
Pressure in the ocean increases greatly downward. A parcel of water moving from one pressure to another will be compressed or expanded. When a parcel of water is compressed adiabatically, that is, without exchange of heat, its temperature increases. (This is true of any fluid or gas.) When a parcel is expanded adiabatically, its temperature decreases.
The change in temperature which occurs solely due to compression or expansion is not of interest to us - it does not represent a change in heat content of the fluid. Therefore if we wish to compare the temperature of water at one pressure with water at another pressure, we should remove this effect of adiabatic compression/expansion.
Definition. "Potential temperature" is the temperature which a water parcel has when moved adiabatically to another pressure. In the ocean, we commonly use the sea surface as our "reference" pressure for potential temperature - we compare the temperatures of parcels as if they have been moved, without mixing or diffusion, to the sea surface. Since pressure is lowest at the sea surface, potential temperature (computed at surface pressure) is ALWAYS lower than the actual temperature unless the water is lying at the sea surface.
3.4. Distribution of temperature
Temperature maps at the sea surface (Levitus annual mean).
Surface temperature is dominated by net heating in the tropics and cooling at higher latitudes. The total range of temperature is from the seawater freezing point up to about 30C. Land temperatures have a much larger range. A number of factors contribute to limiting the maximum ocean temperature - a convincing argument has been made for the role of clouds in blocking incoming solar radiation. Atmospheric convection becomes very vigorous when ocean temperatures exceed about 27C.
Vertical profiles of temperature and potential temperature.
Meridional section of temperature (to contrast with potential temperature section)
Meridional sections of potential temperature
The potential temperature sections show warm water at the surface to cold down below. Warm waters reach a bit deeper in the bowls of the subtropical gyres. Coldest deep water extending northward from Antarctica. Large rise in isotherms south of 40S is associated with the Antarctic Circumpolar Current. The potential temperature inversion layer in the South Atlantic must be balanced by salinity since on these large scales true density inversions are never seen.
Definition. Salinity is roughly the number of grams of dissolved matter per kilogram of seawater. This was the original definition, and at one time salinity was determined by evaporating the water and weighing the residual. The dissolved matter in seawater affects its density (see section 5 below), hence the importance of measuring salinity.
The "law" of constant proportions (Dittmar, 1884), formalized the observation that the composition of the dissolved matter in seawater does not vary much from place to place. Why constant proportions? Salts come from weathering of continents and deep-sea vents, etc - the input is very very slow (order 100000 years) compared with the mixing rate of the whole ocean (which is order 1000 years). Thus it is possible to measure just one component of the dissolved material and then estimate the total amount of dissolved material (salinity). This approach was used until the 1950's.
The main constituent of sea salt is Cl, the second largest is Na, followed by many other constituents (see Pickard and Emery for table). In actuality, there is a slight variation in the proportions, and recommendations are underway to formulate new definitions of salinity which depend on the actual constituents - this may likely take the form of geographically-dependent tables of corrections to the quantity which is measured (usually conductivity).
Units. In the original definition, salinity units were o/oo (parts per thousand). This was replaced by the "practical salinity unit" or psu. Most recently, the recommendation of the SCOR working group on salinity is that salinity be unitless, as the measurement is now based on conductivity and is not precisely related to the mass of dissolved material.
The total amount of salt in the world oceans does not change except on the longest geological time scales. However, the salinity does change, in response to freshwater inputs from rain and runoff, and freshwater removal through evaporation.
How is salinity measured? (1) Evaporate and weigh the residual (oldest method). (2) Determine the amount of chlorine, bromine and iodine to give "chlorinity", through titration with silver nitrate. Then relate salinity to chlorinity: S = 1.80655 Cl. Accuracy is 0.025 (less than 2 decimal places). This method was used until the International Geophysical Year in 1957. (3) Measure conductivity (see next).
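The chlorinity relation is simple enough to compute directly (a sketch; 19.374 is the chlorinity corresponding to a salinity of 35):

```python
def salinity_from_chlorinity(cl):
    """Historical chlorinity-to-salinity relation, S = 1.80655 Cl."""
    return 1.80655 * cl

print(salinity_from_chlorinity(19.374))  # ~35.0, a typical open-ocean value
```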
Definition. Conductivity of sea water depends strongly on temperature, somewhat less strongly on salinity, and very weakly on pressure. If the temperature is measured, then conductivity can be used to determine the salinity. Salinity as computed through conductivity appears to be more closely related to the actual dissolved constituents than is chlorinity, and more independent of salt composition. Therefore temperature must be measured at the same time as conductivity, to remove the temperature effect and obtain salinity. Accuracy of salinity determined from conductivity: 0.001 to 0.004. Precision: 0.001. The accuracy depends on the accuracy of the seawater standard used to calibrate the conductivity based measurement.
How is conductivity for calculating salinity measured? (1) For a seawater sample in the laboratory, an "autosalinometer" is used, which gives the ratio of conductivity of the seawater sample to a standard solution. The standard seawater solutions are either seawater from a particular place, or a standard KCl solution made in the laboratory. The latter provides greater accuracy and has recently become the standard. Because of the strong dependence of conductivity on temperature, the measurements must be carried out in carefully temperature-controlled conditions. (2) From an electronic instrument in the water, either inductive or capacitance cells are used, depending on the instrument manufacturer. Temperature must also be measured, from a thermistor mounted close to the conductivity sensor. Calibration procedures include matching the temperature and conductivity sensor responses.
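For illustration, here is a sketch of the conductivity-to-salinity conversion using the gsw (TEOS-10) Python package, assuming it is installed; the input values are invented, and gsw postdates the salinity scale discussions in these notes.

```python
import gsw  # TEOS-10 Gibbs SeaWater toolbox (assumed installed)

C = 42.9   # conductivity, mS/cm (illustrative)
t = 15.0   # in-situ temperature, C
p = 0.0    # pressure, dbar

SP = gsw.SP_from_C(C, t, p)  # practical salinity from conductivity
print(SP)                    # approximately 35 for these values
```

Note the signature: gsw.SP_from_C expects conductivity in mS/cm, in-situ temperature in degrees Celsius, and pressure in dbar, mirroring the statement above that temperature (and, weakly, pressure) must be known to convert conductivity to salinity.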
4.3. Distribution of salinity and conductivity
Salinity maps at the sea surface (Levitus annual mean).
Surface salinity is dominated by net evaporation in the subtropical regions, and net precipitation/runoff at higher latitudes and in the tropics. The range of salinity in the open ocean is about 31 to 38. Higher values are found in the Mediterranean and Red Seas, and Persian Gulf. Lower values are found at river outflows and near melting ice edges. (Salinity in coastal areas can be much lower.)
Vertical profiles of salinity, conductivity and temperature
These show that salinity in the subpolar regions is lowest at the sea surface and increases monotonically downward. In the subtropical regions, the low salinity from the higher latitudes is found at depth (around 500 to 1000 meters), and the surface waters are more saline due to evaporation in the subtropics. Salinity profiles for the other ocean basins are similar, but differ much more from one another than do the temperature profiles. Please look carefully at the vertical sections presented next for each ocean.
Meridional sections of salinity
For mapping general circulation, it is more useful to use density as our vertical coordinate than pressure since we assume that water parcels much more nearly conserve density than pressure. Thus we often map properties on isopycnal surfaces. However, the isopycnals which we choose must have the effect of changing pressure removed since most of the density variation in the ocean is due to pressure, which has no bearing on sources of heat/salt for water parcels. Thus we introduce the concept of potential density or neutral surfaces, which attempt to remove the effect of pressure changes on density.
Definition. Seawater density depends on temperature, salinity and pressure. Colder water is denser. Saltier water is denser. High pressure increases density. The dependence is nonlinear. An empirical equation of state is used, based on very careful laboratory measurements. (See Gill, Appendix 3, and the fortran/matlab/c subroutines linked to the study notes.)
Discussion. Freshwater density is 999 kg/m^3. Typical densities for seawater are only slightly higher: 1020 to 1050 kg/m^3, with most of this range being due to pressure. The range of densities at the sea surface is about 1020 to 1029 kg/m^3.
Other expressions for density: sigma = density - 1000. alpha (specific volume) = 1/density.
Density depends nonlinearly on temperature and salinity.
Fresh water (S=0) at atmospheric pressure (p=0) has maximum density at temperature 4C. (Thus colder fresh water is less dense, which has implications for lake overturn and ice floating.) As salinity is increased, the density maximum moves to lower temperature. At a salinity of about 24.7, the maximum density is at the freezing point.
The nonlinearity of the equation of state is apparent in contours of constant density in the plane of temperature and salinity (at constant pressure) - they are curved, concave towards higher salinity and lower temperature.
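For intuition, a linearized sketch of the equation of state follows; the expansion coefficients are typical illustrative magnitudes, not universal constants, and a linear form deliberately cannot reproduce the curvature just described (the real equation is a long empirical polynomial; see Gill, Appendix 3, or the subroutines linked to these notes).

```python
RHO0 = 1027.0    # reference density, kg/m^3 (illustrative)
ALPHA = 2.0e-4   # thermal expansion coefficient, 1/C (illustrative)
BETA = 7.6e-4    # haline contraction coefficient, per salinity unit (illustrative)
T0, S0 = 10.0, 35.0  # reference temperature and salinity

def rho_linear(T, S):
    """Linearized density: warmer is lighter, saltier is denser."""
    return RHO0 * (1.0 - ALPHA * (T - T0) + BETA * (S - S0))

print(rho_linear(10.0, 35.0))  # 1027.0 at the reference state
print(rho_linear(20.0, 35.0))  # warmer -> less dense
print(rho_linear(10.0, 36.0))  # saltier -> denser
```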
5.2. Potential density
Seawater is compressible, although not as compressible as a gas. As a water parcel compresses, the molecules are crushed together and the density increases. (At the same time, and for a completely different reason, compression causes the temperature to increase, which very slightly offsets the density increase due to compression.)
Most of the density variation in seawater is caused by pressure variation. This has little to do with the source of the water, and if we wish to trace a water parcel from one place to another, one depth to another, we prefer to remove the pressure dependence. (This is in analogy with temperature; we also remove the pressure dependence in the temperature.)
We define potential density as the density a parcel has when moved adiabatically to a reference pressure. If the reference pressure is the sea surface, then we compute the potential temperature of the parcel, and evaluate the density at pressure 0 dbar. The measured salinity is used as it has very little pressure dependence.
We can also use any other pressure as a reference. We refer to potential density at the sea surface as "sigma sub theta" (Greek in class - sorry about the notes), if potential temperature is used, and "sigma sub t" if measured temperature is used. The latter is an outdated method. We refer to potential density referenced to 1000 dbar as "sigma sub 1", to 2000 dbar as "sigma sub 2", to 3000 dbar as "sigma sub 3" and so on, following the nomenclature introduced by Lynn and Reid (1973); in these cases, potential temperature relative to the reference pressure is used in evaluating the potential density.
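A sketch of the analogous computation in the gsw (TEOS-10) Python package follows, assuming the package is available. gsw works with Absolute Salinity and Conservative Temperature, a newer framework than the sigma-theta notation used in these notes, so this is an analogue rather than an exact reproduction of the quantities defined above.

```python
import gsw  # TEOS-10 Gibbs SeaWater toolbox (assumed installed)

SA = 35.0  # Absolute Salinity, g/kg (illustrative)
CT = 2.0   # Conservative Temperature, C (illustrative)

sigma0 = gsw.sigma0(SA, CT)  # potential density anomaly referenced to 0 dbar
sigma2 = gsw.sigma2(SA, CT)  # potential density anomaly referenced to 2000 dbar
print(sigma0, sigma2)        # sigma2 > sigma0, reflecting compression
```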
Cold water is more compressible than warm water. That is, it is easier to deform a cold parcel than a warm parcel. Therefore cold water becomes denser than warm water when they are both submerged to the same pressure, and various reference pressures are necessary. We use a pressure which is relatively close to the depth we are interested in studying. The compressibility effect is apparent when we look at contours of density at, say, 4000 dbar compared with those at 0 dbar.
The dependence of compressibility on temperature can be important. For instance, water spilling out of the Mediterranean through the Strait of Gibraltar is extremely salty and rather warm, compared with water spilling into the Atlantic from the Greenland Sea over the Greenland-Iceland ridge. They both have about the same density at their sills. However, the warm, saline Mediterranean water does not compress as well as the Greenland Sea water, and does not reach the ocean bottom. (There is also a difference in how the two types of water entrain other waters as they plunge downwards, so this is not a straightforward explanation.)
Neutral density. When analyzing properties in the ocean to determine where water parcels originated, it is assumed that most motion occurs with very little change in the density of the parcel, with the exception of changes due to pressure. This concept is essentially a statement that water follows an isentropic surface if it moves with no exchange of heat and salt. Defining an isentropic surface in the presence of mixing presents some difficulties. The isopycnal surfaces which we use in practice to map and trace water parcels should approximate isentropic surfaces. We typically use a reference pressure for the density which is within about 500 meters of the pressure of interest. (This pressure interval has just been found through experience to be adequate.) Therefore when working in the top 500 meters, we use a surface reference pressure. When working at 500 to 1500 meters, we use a reference pressure of 1000 dbar, etc. This discretization takes care of most of the problems associated with the effect of pressure on density. When isopycnals cover a greater range of pressure, then they must be patched into the shallower or deeper range - this is the practice followed by Reid in his various monographs on Pacific and Atlantic circulation.
Ivers (1976), working with Reid, introduced a more continuously varying reference pressure for isopycnal surfaces, which he then referred to as a "neutral surface". If a parcel is followed along its path, assuming the path is known, then it is possible to track its pressure continuously and continuously adjust its reference pressure and density.
McDougall (1987) refined the neutral surface concept and Jackett and McDougall (1997) have released a computer program and lookup table for computing neutral density. Neutral density depends on location in latitude/longitude/depth and is based on marching outwards around the world from a single point in the middle of the Pacific, using a climatological temperature/salinity data set, and tracking imaginary parcels along radiating lines.
Initial usage indicates that neutral density as determined from this program can successfully replace the approximate neutral surfaces produced by adjusting reference pressures every 1000 dbar.
Density maps at the sea surface (Levitus annual mean [kg/m^3-1000]).
Meridional sections of potential density
In the Atlantic sections, note the deep inversion of sigma theta with depth in the South Atlantic due to use of surface pressure for referencing the density. There is no such deep inversion in sigma 4 since a more appropriate, local, reference pressure is used.
Where there is a sound speed minimum, it functions as a wave guide.
The faster that sea ice is frozen, the less likely that the salt can escape. Thus the saltiest sea ice is formed at the lowest temperatures. Sverdrup et al. (1942 text) tabulate the salinity of ice formed from water which starts at salinity 30. When frozen at an air temperature of -16C, the salinity of the ice is 5.6. When frozen at an air temperature of -40C, the salinity of the ice is 10.2.
Seawater properties are valuable tools for tracing water parcels as differing water mass formation processes imprint different amounts of various properties on the water parcels. They are of most use when the sources and sinks of one property compared with another differ. Some tracers are biogenic and hence non-conservative. These include oxygen and the various nutrients, all discussed very briefly here. Some useful tracers are inert but with time-dependent inputs, such as chlorofluorocarbons. Some useful tracers have decay times and decay products, which can serve as a useful measure of age. The latter are referred to as transient tracers, and are not discussed here.
8.1. Oxygen. Non-conservative tracer. The source is primarily air-sea interaction, with some subsurface source from photosynthesis by phytoplankton. Oxygen is consumed in situ. Oxygen content decreases with age, so it can be used in a rough way to date the water. It is not a good age tracer because the consumption rate is not constant. Since waters of different oxygen content mix, the age is not simply related to content.
Per cent saturation of oxygen depends strongly on temperature (show figure). Cold water holds more oxygen. Thus the per cent saturation (or the related quantity "apparent oxygen utilization") is a better tracer than oxygen content itself.
8.2. Nitrate and phosphate: Also non-conservative. Nitrate and phosphate are completely depleted in surface waters in the subtropical regions where there is net downwelling from the surface and hence no subsurface source of nutrients. In upwelling regions there is measurable nitrate/phosphate in the surface waters due to the subsurface source (figure from Hayward and McGowan; other figures based on WOCE data). Nitrogen is present in sea water in dissolved N2 gas, nitrite, ammonia, and nitrate, as well as in organic forms. As water leaves the sea surface, particularly the euphotic zone, productivity is limited by sunlight and nutrients are "regenerated". That is, the marine snow is decomposed by bacteria and produces nitrate and phosphate. Nitrate and phosphate thus increase with the age of the water. Vertical sections and maps of nitrate and phosphate appear nearly as mirror images of oxygen, but there are important differences in their patterns, particularly in the upper 1000 meters; vertical extrema are not always co-located, and sometimes large multiple extrema appear in one parameter and not in the others (e.g. in oxygen but not in nitrate/phosphate).
Nitrate/oxygen and phosphate/oxygen combinations - nearly conservative tracers. Nitrate/oxygen and phosphate/oxygen are present in seawater in nearly constant proportions, given by the Redfield ratio. There are small variations in this ratio, with particularly large deviations near the sea surface. Because of the near constancy of this ratio, a combination of nitrate and oxygen and of phosphate and oxygen is a nearly conservative tracer (Broecker).
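A sketch of such a quasi-conservative combination is below. The coefficient of about 9 moles of oxygen consumed per mole of nitrate regenerated is a commonly quoted round number for Broecker's "NO"-style tracer; exact Redfield-type ratios vary between studies, and the concentrations here are invented for illustration.

```python
R_O2_PER_NO3 = 9.0  # approximate moles of O2 consumed per mole of NO3 regenerated

def tracer_NO(oxygen_umol_kg, nitrate_umol_kg):
    """Quasi-conservative combination: respiration lowers O2 while raising
    NO3 in roughly compensating proportions, so the sum is nearly conserved."""
    return oxygen_umol_kg + R_O2_PER_NO3 * nitrate_umol_kg

print(tracer_NO(200.0, 15.0))  # 335 umol/kg for these illustrative values
```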
8.3 Dissolved silica - non-conservative. In seawater it is present as H4SiO4 (silicic acid) rather than silicate (SiO3), but many people use the term silicate. This nutrient is also depleted in surface waters similarly to nitrate and phosphate - completely depleted in downwelling areas and present in small but measurable quantities in upwelling areas. Subsurface distributions of silica look something like nitrate and phosphate and mirror oxygen since silica is also regenerated in situ below the euphotic zone. However, silica in marine organisms is associated with skeletons rather than fleshy parts and so dissolves more slowly in the water. Much of the silica thus falls to the bottom of the ocean and accumulates in the sediments (map of types of sediments). Dissolution from the bottom sediments constitutes a source of silica for the water column which is not available for nitrate, phosphate or oxygen. Another independent source of silica is the hydrothermal vents, which spew water of extremely high temperature, silica content, and helium content, as well as many other minerals, into the ocean. The three named quantities are used commonly to trace hydrothermal water.
8.4. Transient tracers. Other tracers commonly used for studying ventilation and deep water circulation include the chlorofluorocarbons (CFCs), tritium, helium-3, and carbon-14. The CFCs are strictly anthropogenic, and the ocean's tritium is dominated by input from atmospheric bomb tests. Their source functions have been well described, so they are used to trace recently ventilated waters into the ocean interior, and various combinations of CFCs and tritium/helium-3 are used to attach ages to water parcels, although not without approximation.
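As one example of how an age is attached: tritium decays to helium-3 with a half-life of about 12.3 years, so a tritium/helium-3 age follows from the ratio of daughter to parent. The sketch below assumes all tritiugenic helium-3 has remained in the parcel (no mixing or gas exchange), which is exactly the kind of approximation mentioned above.

```python
import math

T_HALF = 12.32                  # tritium half-life, years
LAMBDA = math.log(2) / T_HALF   # decay constant, 1/years

def tritium_helium_age(he3, h3):
    """Age in years from tritiugenic helium-3 and tritium (same units)."""
    return math.log(1.0 + he3 / h3) / LAMBDA

print(tritium_helium_age(he3=1.0, h3=1.0))  # ~12.3 yr: half the tritium has decayed
```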
Maps at the sea surface
As has already been described in the first half of the course (Hendershott) and as will be discussed in topic 3, most of our knowledge of the circulation is somewhat indirect, using the geostrophic method to determine velocity referenced to a known velocity pattern at some depth. If the reference velocity pattern is not known well, then we must deduce it.
Deduction of the absolute velocity field is based on all of the information that we can bring to bear. This includes identifying sources of waters, by their contrasting properties, and determining which direction they appear to spread on average.
Water properties are used to trace parcels over great distances. Over these distances, parcels mix with waters of other properties. It is usually assumed that mixing is easier along isentropic (isopycnal) surfaces than across them, although distributions of some properties make it clear that there is mixing both along and across isopycnals (isopycnal and diapycnal mixing). This tracing of waters is useful in conjunction with the relative geostrophic flow calculations that can be made from the observed density field, in order to narrow down the actual general circulation.
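As a sketch of how the density field yields relative geostrophic flow: given dynamic height anomaly profiles at two stations (computed from density, for example with a routine such as gsw.geo_strf_dyn_height), the velocity relative to the reference level follows from the dynamic height difference. The station spacing, latitude, and profile values below are made up for illustration.

```python
import numpy as np

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def relative_geostrophic_velocity(dyn_ht_1, dyn_ht_2, distance_m, lat_deg):
    """Velocity (m/s, normal to the station pair) relative to the reference
    level, from dynamic height anomaly profiles (m^2/s^2) at two stations."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))  # Coriolis parameter
    return (dyn_ht_2 - dyn_ht_1) / (f * distance_m)

# Made-up profiles at three depths, relative to a deep reference level:
phi1 = np.array([12.0, 5.0, 0.0])
phi2 = np.array([13.0, 5.4, 0.0])
print(relative_geostrophic_velocity(phi1, phi2, distance_m=50e3, lat_deg=30.0))
```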
We use the concept of water masses as a convenient way to tag the basic source waters. The definition of a "water mass" is somewhat vague, but it is used in the sense of "cores" of high or low properties, such as salinity or oxygen, in the vertical and along isopycnal surfaces. A range of densities (depths) is usually considered for a given water mass, and water mass definitions may change as a layer is followed from one basin or ocean to another. Many examples of water masses will be given in topics 4 and following.
Traditionally, water mass analysis was based on plotting various properties against each other, and attempting to explain the observed distributions of properties as a result of mixing between the identified "sources". However, point sources of waters occur only in relatively few regions, and in general "source" waters have a range of properties. The sources are generally surface waters (see properties given above in the surface maps), or near-surface waters that are created by, say, brine rejection, or flow over and through a narrow passage/sill.
More recently, water mass analysis has been quantified through the application of least squares methods (sketched below), but it is still based on assumptions about the source waters.
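A minimal sketch of that least squares idea, with made-up source properties: each column is a source water, a row of ones enforces mass conservation, and an unconstrained solve returns the mixing fractions. Real analyses (e.g., optimum multiparameter analysis) also weight the tracers and force the fractions to be non-negative.

```python
import numpy as np

# Columns: three hypothetical source waters. Rows: potential temperature (C),
# salinity, oxygen (umol/kg), and a final row of ones for mass conservation.
A = np.array([
    [2.0,   4.0,   10.0],
    [34.9,  34.6,  35.2],
    [330.0, 280.0, 150.0],
    [1.0,   1.0,   1.0],
])
b = np.array([3.5, 34.8, 290.0, 1.0])  # observed sample plus mass constraint

fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
print(fractions)  # estimated mixing fractions of the three sources
```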
The most commonly used property-property displays are (a) potential temperature vs. salinity, and (b) properties along isopycnal surfaces. Worthington (1981) (figure) shows the volume of water as a function of potential temperature and salinity for the world oceans. The diagram shows how the three major ocean basins meet in properties at the cold and fresh extreme, and are differentiated at higher temperatures. The Atlantic is the most saline and the Pacific the freshest ocean. The Pacific contains a large amount of water in a very narrow range of potential temperature/salinity, reflecting the very long distance the deep Pacific waters have traveled from surface or overflow sources, in comparison with the Atlantic which contains many sources, and the Indian, which is intermediate between the Atlantic and Pacific.
Many examples of isopycnal property distributions will be shown in the lectures on specific oceans, and will be used to identify sources and pathways for the waters.
Figure. Potential temperature versus salinity along 20W and 25W in the Atlantic Ocean, from Iceland across the equator to South Georgia Island. Blue: equator to Iceland. Red: equator to about 30S. Green: 30S to South Georgia Island. The Atlantic 25W meridional potential temperature and salinity sections were already shown.
CALCULATING RELATIVE HUMIDITY
METEOROLOGIST JEFF HABY
Calculating RH requires the correct equation(s). The RH is the amount of moisture in the air (via moisture mass or vapor pressure) divided by the maximum amount of moisture that could exist in the air at a specific temperature (via maximum moisture mass or saturation vapor pressure). RH is expressed as a percentage and has no units, since the units in the numerator and denominator are the same; the percentage is found by multiplying the ratio by 100%.
The RH is NOT the dewpoint divided by the temperature. For example, if the temperature were 60 F and the dewpoint 30 F, you would not simply take (30/60)*100% = 50% RH.
When given temperature and dewpoint, the vapor pressure (plugging Td in place of T into the Clausius-Clapeyron equation) and the saturation vapor pressure (plugging T into the Clausius-Clapeyron equation) can be determined. Then RH = E/Es*100%.
LN(Es/6.11) = (L/Rv)(1/273 - 1/T)
Es = saturation vapor pressure in millibars
L = latent heat of vaporization = 2.453 × 10^6 J/kg
Rv = gas constant for water vapor = 461 J/(kg K)
T = temperature in Kelvins
The mixing ratio is defined as the mass of water vapor divided by the mass of dry air. In a lab setting, a technician could measure both the mass of water vapor and the mass of dry air in an air sample; this ratio is W. The technician could then saturate the air (making sure the temperature remains the same) and recalculate the mass of water vapor divided by the mass of dry air; this would be Ws. The RH = W/Ws*100%.
To get W and Ws, use the equations:
W = (0.622*E) / (P - E) and Ws = (0.622*Es) / (P - Es)
This requires that E and Es are known. Therefore, without using the Clausius-Clapeyron equation, calculating RH outside of a lab setting is difficult.
--operational methods of calculating RH--
1. Mixing ratio can be determined using the Skew-T log-P diagram. For any pressure level, the mixing ratio is read through the dewpoint and the saturation mixing ratio is read through the temperature. By reading these values off the Skew-T you can determine W and Ws for any temperature and dewpoint. RH = W/Ws*100%.
2. Take the temperature and dewpoint and plug them into the Clausius-Clapeyron equation. There are computer programs that will do this; the computer evaluates the Clausius-Clapeyron relation for any temperature and dewpoint to find RH.
3. Many textbooks have a graph or table of saturation mixing ratio and/or saturation vapor pressure for various temperatures. Using the dewpoint gives either the actual vapor pressure or the actual mixing ratio, while using the temperature gives either the saturation vapor pressure or the saturation mixing ratio (depending on whether the graph shows vapor pressure or mixing ratio). RH is then E/Es*100% or W/Ws*100%.
Sam measures a plant's height.
Measurements of plants are done in different ways. One of the easiest is to stand a ruler next to the plant and measure the height of a particular stem or of the whole plant. Measurements can be made daily, weekly, or monthly; choose a frequency that matches the age of the plant and the question you want to answer about it. Here are some points to consider.
- A newly planted or very young plant might be expected to grow more rapidly.
- A plant just pruned or trimmed might have a stem that would grow more rapidly, or not.
- An established plant might grow more rapidly during the spring than in the summer.
Choose a question to answer and decide on the frequency of measurements that would best answer the question.
Choose one specific social problem and explain how Progressive women reformers proposed to solve that problem.
During the Progressive Era, women played a very important role in trying to bring about social change. They were concerned about a variety of issues. One issue that they were concerned with was the state of family life among the poor and particularly among immigrants. They had two main strategies to help deal with this problem.
Progressive women felt that these families were in bad situations in part because they were ignorant. The poorer women, it was felt, did not know how to do things like give proper care to their children, and this caused health and other problems for the children. The Progressive women proposed to remedy this problem through the settlement house movement. These houses were essentially community centers where poorer women could go to be educated in how best to care for their children and keep them healthy.
Progressive women also felt these families' bad conditions arose in part from alcohol. They believed that men would waste money on alcohol instead of spending it on improving the lives of their wives and children, and that men would beat their wives and children while drunk. They proposed to deal with this partly through education and the settlement houses, but this was mainly where temperance and prohibition came in: it was felt that a ban on alcohol would help to alleviate the problem.
In this Earth science animation, middle and high school students observe the retreat of ice sheets in North America for the past 18,000 years. Students are instructed to observe the animation carefully to see how the sea level changes as the ice sheets retreat. The animation presents images in 1,000-year increments from 18,000 years ago to the present. Movie controls allow students to repeat, pause, or step through the animation, which can give students more time to analyze how the shape of the North American coastlines changed as the ice retreated. Copyright 2005 Eisenhower National Clearinghouse
Many centuries ago, when the Ptolemaic model of the universe was considered correct, people believed that everything revolved around the Earth. At least they got it right with respect to the Moon! The Moon is the Earth's satellite. By definition a satellite is a "secondary body orbiting a primary body". The Earth is a satellite of the Sun and the Moon is a satellite of the Earth. Much of astronomy has to do with the motion of one "body" around another, so the word "satellite" is very important to understand, and our satellite, the Moon, is a great way to learn about satellites and their motion.
The word "satellite" can also apply to an object placed into orbit around another object. In this century we have placed artificial satellites into orbit around the Earth, the Moon and several other planets.
The word "moon" has come to mean any natural satellite around a planet. Mars has two moons, Jupiter has at least 16 moons. Notice the small "m" in "moons". Those satellites are moons with a small "m", but our moon is called the "Moon". Some (although not all) astronomers understand that the Earth's moon is written as "Moon" when used as a proper noun. It's an easy mistake to make.
As you might imagine, the name or word "moon" has been around a long time and has been applied to satellites of other worlds. That is acceptable. Some folks, in order to highlight the idea that the "Moon" deserves a better name than simply "moon", will call our satellite "Luna". This is particularly true in conversations when it isn't always clear what M/moon you are talking about.
OK, the Moon revolves around the Earth. Right?
The Moon revolves around the Earth but that is a bit of an oversimplification. Our Moon is a very large object. Indeed, the Earth-Moon "system" could properly be thought of as a "double planet system". No other pair in the Solar System is so close in size (except for Pluto and its main satellite, Charon, but they are an unusual pair for a lot of other reasons that I will not discuss at this time). The Earth's diameter is almost four times the diameter of the Moon. [The Earth's diameter is 12,700 kilometers and the Moon's is 3,400 kilometers. That's a difference of about four fold.]
The Earth is bigger so it is the "primary" body and the Moon is its satellite, but the Moon is a very big satellite! However, size can be deceiving because it is gravity that affects orbits.
All matter has mass and all mass attracts other masses by a force called gravity. The force of gravity depends upon the mass of the two objects and the distance between them.
The Earth has a mass over 80 times that of the Moon, so the Earth is clearly the dominant partner. However, the Moon is still pretty massive, and pretty close too, so it is an oversimplification to say that the Moon goes around the Earth. In fact, the two bodies revolve around each other!
The center of the Moon's orbit is not in the center of the Earth. The Earth and Moon revolve around a common point called the barycenter. The barycenter is the center of mass of any system.
Imagine the barycenter as the point about which two people spin when they are dancing together. During some dances (square dancing, Scottish Highland dancing, etc.) it is common for both partners to grasp each other's hands and revolve in a circle. If you watch this spin carefully you will see that it is rarely centered - the larger partner is closest to the "spin barycenter" and determines where the smaller partner goes.
That is what happens with most satellites.
Where's the Earth-Moon barycenter?
It's in the Earth but not at its center.
Imagine a line drawn from the Earth's center to the Moon's center. The barycenter is along that line. The balancing of the two masses places it at all times within the Earth (below the Earth's surface), but never at its center.
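A quick check of this claim with round numbers (standard values for the masses and the mean distance; the result is the barycenter's distance from the Earth's center):

```python
M_EARTH = 5.97e24   # kg
M_MOON  = 7.35e22   # kg
A       = 384_400.0 # mean Earth-Moon distance, km
R_EARTH = 6_371.0   # Earth's mean radius, km

# Center of mass, measured from the Earth's center:
d = A * M_MOON / (M_EARTH + M_MOON)
print(f"{d:.0f} km from Earth's center")   # ~4700 km
print("inside the Earth:", d < R_EARTH)    # True -- but well off center
```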
The barycenter moves as the Moon revolves around the Earth. Oops. That's not true. The Earth and the Moon revolve around the Earth-Moon barycenter!
Most folks are not familiar with the concept of a barycenter, so it is "safe" (fair) to say that the "Moon revolves around the Earth" when in fact they both revolve around their barycenter, which happens to be within the Earth. Indeed, it is such a well-accepted expression that I will use it ("the Moon revolves around the Earth") throughout the rest of our lessons. It isn't completely true, but it is pretty close to true, and it is easier to say that phrase than to explain it!
Why doesn't the Moon fall into the Earth?
Because the Moon is in orbit around the Earth. (Actually, around the Earth-Moon barycenter.)
What's an orbit?
Gee I wish you hadn't asked that!
There are two ways to understand orbits. One way (the best way - the correct way) is to learn how the effects of gravity, acceleration and velocity act together through a series of mathematical equations to produce a net effect that describes the position and motion of an object in orbit around another object. The other way is to simply imagine the final effect. I like that way because it's much easier.
[But, hey, don't think you're going to get away from this too easily. We'll return to orbits in June when I teach you Kepler's Laws. Don't worry. It's low on math and, unfortunately, will not give you the math you need to understand orbits. If you feel cheated, do a search using keywords like "gravitational formula". But finish this lesson first!]
As the Moon falls towards the Earth, under the influence of gravity, the Moon's orbital velocity (actually its "tangential velocity") causes the Moon to fall off center to the Earth. While the Moon falls (say, 1000 kilometers) towards the Earth, the sideways motion of the Moon (several thousand kilometers) during that time it is falling, changes the Moon's position with respect to the Earth's surface such that the surface of the Earth is now (1000 kilometers) further away.
[Wow! What a mouth full. Read that again - with and without the stuff in parentheses.]
The combined effect of the Moon's sideways motion along with its falling means the Moon is ALWAYS falling towards the Earth but NEVER HITS it! It's in "free fall", because it is falling freely, but never hits its "target". For every meter that the Moon falls towards the Earth, the Earth's surface curves a meter away due to the motion of the Moon.
Notice that the Earth isn't moving out of the way of the falling Moon. (In fact, the Earth's orbit causes the Earth to move slightly forward, in the same direction of the Moon but that motion isn't as much as that of the Moon so it has little effect and can be ignored.) Instead, the Moon is missing its target because the Moon's orbital velocity pushes it off center from the Earth. The overall effect is that the Moon falls towards the Earth at the same rate that the Earth's surface curves away from it.
This diagram (which illustrates this idea using "vectors", a more scientifically rigorous way to think about it) might help you to understand what an orbit really is.
All orbits, not just the Moon's orbit, behave this way so it's a good idea to understand this important idea.
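Here is a small numerical sketch of that idea: step the Moon forward under the Earth's gravity, with the Earth held fixed at the origin and a circular-speed start (both simplifications), and watch the distance stay essentially constant. The Moon keeps falling and keeps missing.

```python
import math

GM = 3.986e14            # Earth's gravitational parameter, m^3/s^2
r0 = 3.844e8             # Earth-Moon distance, m
x, y   = r0, 0.0
vx, vy = 0.0, math.sqrt(GM / r0)   # sideways (tangential) speed, ~1.02 km/s
dt = 3600.0              # one-hour time step

for _ in range(27 * 24):                      # roughly one orbital period
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # gravity: always toward Earth
    vx += ax * dt                             # the "fall"...
    vy += ay * dt
    x += vx * dt                              # ...plus the sideways motion
    y += vy * dt

print(f"distance after ~27 days: {math.hypot(x, y) / 1e8:.3f} x 10^8 m (started at 3.844)")
```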
It takes 27.3 days for the Moon to circle the Earth. (Actually it takes 27.3 days for the Moon to circle the barycenter!) That means the Moon will move eastward through the starfield at a rate of 13.2 degrees each night. [That's a full 360 degree circle divided by 27.3 days to give 13.2 degrees of motion each night.] Don't confuse that with the nightly motion of the stars caused by the Earth's rotation. What I mean is that the Moon will move through the starfield 13.2 degrees eastward each night. [That's a "fist and several fingers" if you are using the measuring method I taught you last month.] No other (natural) object moves through the starfield so quickly.
The Moon's orbital period is exactly equal to its rotational period. That means the Moon turns to face the Earth as it revolves around us so we see only one side of the Moon from Earth. Again, this behaviour is similar to that found in dancing.
Yeah, but dance partners hold each other that way. What causes the Moon to face the Earth?
Tidal friction. To understand tidal friction, it helps to first understand tides. Tides are caused by the Moon's gravitational forces working on the Earth and its oceans.
This diagram shows a cross-section of the Earth (in light grey) and its ocean (in dark blue). Understand that this diagram is a simplification of the Earth because it doesn't show the continents, but I'm sure you get the picture. The water in the oceans is attracted by the Moon's gravity and bulges (upwards) towards the Moon. That causes a high tide to occur on the side facing the Moon.
What causes the other high tide on the other side?
As the Moon pulls on the water it is also pulling on the Earth. The Moon's gravity pulls the Earth away from the water on the Earth's far side! The net effect is that the water is higher on the side opposite the Moon. (This effect is actually caused by inertia - it just depends on how you want to explain it.)
The highest of the two tides is the one facing the Moon because the water is closer on that side and therefore the effect of the Moon's gravity is stronger.
So, the highest tide occurs when the Moon is directly overhead.
Ah, well it would be if not for two complications.
First, the tidal "bulge" cannot keep up with the Earth's rotation. Our oceans are relatively shallow and water has a lot of momentum (when moving) and a lot of inertia (when not moving). Together, this causes the high tide to be delayed by about a quarter of the Earth's rotation. That means the tide is highest when the Moon is on the horizon. That's when you would expect the lowest tides!
Second, local coastline geography can affect the local tides. If the water has to travel around a large island, peninsula or other bit of land, it will be delayed.
By the way, the low tides fit between the high tides. They occur at the positions of the ocean least affected by the Moon's gravity, at right angles to the direction of the Moon's pull. Of course, the two complications noted above for high tides also affect low tides.
Once you understand the local tidal effects along your coastline, assuming you have a coastline, you can keep time to the tides by knowing that they will progress at the same rate as the Moon. If the highest tide today was at 10AM, there will be another, slightly lower, high tide 12 hours later - at 10PM. (Actually, that's not true. Read the rest of this paragraph to understand why the high tides are slightly more than 12 hours apart.) The two low tides will occur half way between the high tides (roughly). Tomorrow, the highest tide will occur later because the Moon has moved a small amount ahead of the Earth's rotation. Remember, each day the Moon moves 13.2 degrees ahead of us. That means the highest tide will be advanced by about 0.88 hours. [I got that by dividing the Moon's relative motion of 13.2 degrees by 360 degrees to get 0.03666 of a day. That means the Moon is ahead of the Earth's rotation by 0.03666 of a day and multiplying that by 24 hours (in a day) I get 0.88 hours.]. Each tide, high or low, will be advanced by 0.88 hours from the previous day because of the Moon's motion.
Some students get confused here so let me explain it again.
When I say the Moon (and tide) has advanced by 0.88 hours I mean that the Moon has moved that much farther ahead in its orbit. That means it will take an additional 0.88 hours for the Earth to rotate to the new alignment, so the tide will be 0.88 hours later. So each day the tide is about an hour late.
There are two problems whenever I teach this subject.
Today's 9:00 AM high will return tomorrow at 9:53 AM (0.88 x 60 minutes = 52.8 minutes later than 9:00 AM). But before that high tide there will be another high tide 12.44 hours after 9:00 AM, at about 9:26 PM. And there will be two low tides - one 6.22 hours after 9:00 AM (at about 3:13 PM) and another 12.44 hours after that (at about 3:40 AM).
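Here is a little sketch that turns this rule of thumb into clock times. It assumes the idealized 12.44-hour spacing and the 0.88-hour daily delay described above; real tide tables also account for local geography.

```python
def tide_times(first_high_hour, days=2):
    """Print idealized high/low tide clock times starting from one known high."""
    interval = 12.44   # hours between successive high (or low) tides
    events = []
    for i in range(int(days * 24 / interval) + 1):
        events.append(("high", first_high_hour + i * interval))
        events.append(("low",  first_high_hour + i * interval + interval / 2))
    for kind, hour in events:
        day, h = divmod(hour, 24.0)
        print(f"day {int(day)}: {kind:4s} tide at {int(h):02d}:{int((h % 1) * 60):02d}")

tide_times(9.0)   # today's 9:00 AM high -> ~3:13 PM low, ~9:26 PM high, etc.
```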
OK, fine. But I want to learn astronomy not oceanography! What's this have to do with the Moon always facing us?
Oh, yeah right.
All astronomers should understand tides because they cause tidal friction and that's what causes the Moon to face us. The ocean tides are an obvious effect of the Moon's gravity but the Moon causes "land tides" to occur too. As the Moon passes overhead the Earth rises towards it by several centimeters and then drops down again as the Moon moves on. (Actually, as the Earth rotates.) Don't confuse this with the tug of the Moon that causes the Earth to move towards it, producing the lower high tide opposite the Moon. What I am talking about here is the actual distortion of the Earth's solid "rock" due to the Moon's gravity! These land tides are not noticeable because the shifting they cause is very slight, very slow, and the rock returns to the same position as the Earth rotates. It has no overall effect on the Earth's position or shape.
Land tides occur on every object in the Solar System (if it has "land"). They cause friction and affect the orientation of many satellites. Here's how.
Both the primary and secondary bodies in a pair can be affected by land tides caused by their partner. Long ago, when the Moon had a more rapid rotation, it experienced land tides caused by the gravitational pull of the Earth. Because the Earth is so big, the Moon experienced a great deal of land tide, and the regular "land bulge" produced by the land tide caused tremendous friction within the Moon's own rocks. It was like applying brakes to the Moon's rotation! This caused the rotation of the Moon to slow down, and after a long period of time (millions of years) the rotation became so slow that one side always faced the Earth.
That way there is no longer a "moving land tide" on the Moon. Now that the Moon is always positioned with one side to us, it is no longer tugged around by the Earth as the Moon orbits us. The Moon has reached a "comfortable" position (tidally speaking).
This is true of our Moon and of many other moons. The secondary partner is "despun" (as we like to say) by the tidal friction caused by its primary partner. When a body has despun its partner it is said to have "captured" its partner's rotation and we describe the partner's rotation as synchronous with that of the primary body. Many moons in the Solar System have been despun, their rotation captured by their larger partner, so they have a synchronous rotation. A despun moon shows only one side, one face, to its primary partner.
Tidal friction is an important "shaper" of the interaction between a pair of worlds. Not only does it cause a body to be despun, but it also changes the orbital period and its orbital distance. That's because of a complexity involving the "conservation of angular momentum". This is an important part of the physics of astronomy but I think we've gone deep enough.
According to calculations, we know that long ago the Moon was closer to the Earth and the Earth rotated more quickly than it does today. Tidal friction has despun the Moon, moved it further from the Earth and has also slowed the Earth's spin!
Tidal friction is still at work today and it's moving the Moon away from us at a rate of a few centimeters each year. Meanwhile, the land tides on the Earth caused by the Moon are slowly braking the Earth's rotation, slowing us down. Many billions of years from now the Moon will have captured the Earth's rotation! So someday in the far, far future the Earth and Moon will be much further apart and face each other. At that time they will be like two proper dancers face-to-face. The Earth will have a "moon side" from which the Moon will always be seen and a "star side" from which you could never see the Moon, only the stars. (Cool, huh?)
But let's concentrate on the current situation.
Because the Moon's rotation has been captured by the Earth, one side always faces towards the Earth and (of course) that means one side always faces away from the Earth. The side facing us is called either the "near side" or more rarely the "Earth side" because it is the side nearest us and the side from which you could see the Earth if you were on the Moon. The opposite side is called either the "far side" or more rarely the "star side" of the Moon because that side is furthest from us and if you were there (on the star side of the Moon) you could never see Earth, only stars.
It is unfortunate that the phrase "the dark side" is often used to describe the far side of the Moon, because it isn't any darker than the Earth side. The Moon undergoes one rotation as it completes each orbit around the Earth, and as it does so it undergoes one complete "Moon day". Therefore the side facing away from us, the star side, experiences just as much sunlight as the Earth side. Perhaps it was named "the dark side" because it cannot be seen from here and was thus a mysterious place. Indeed, until 1959, when the Russians sent Lunik 3 around the Moon, we had no idea what lay on the "dark side" of the Moon.
OK, the dark side isn't really dark so I'll call it the star side or far side of the Moon, but how does the Sun fit into all of this?
That's a very important question, because it's "how the Sun fits into all of this" that causes the phases of the Moon.
In our next lesson I will teach you how the geometry (positions) of the Sun, Earth and Moon cause the phases of the Moon.