Learning how to calculate cardiac output is complex. Cardiac output is the volume of blood pumped by the left ventricle each minute. It is measured by calculating what is called "stroke volume" and multiplying it by your heart rate. Cardiac output is one way to measure how efficient the heart is. A high resting cardiac output is associated with an increased risk of heart disease, heart failure and stroke: the harder the heart has to work when it is resting (i.e. not exercising), the less efficient it is. You can't measure cardiac output by yourself; you need specialized equipment and the assistance of a health care professional. Learn about the different ways to calculate cardiac output below.
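In code, the calculation itself is a one-liner. The following is a minimal sketch of the arithmetic only; the function name and the typical resting values of roughly 70 mL per beat at 70 beats per minute are illustrative assumptions, not clinical guidance:

```python
def cardiac_output_l_per_min(stroke_volume_ml: float, heart_rate_bpm: float) -> float:
    """Cardiac output = stroke volume x heart rate, converted from mL/min to L/min."""
    return stroke_volume_ml * heart_rate_bpm / 1000.0

# Typical resting values: ~70 mL per beat at ~70 beats per minute.
print(cardiac_output_l_per_min(70, 70))  # 4.9 L/min
```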
- Electrical Cardiometry. This method of calculating cardiac output uses four electrodes (don't worry, they are applied outside the body). The electrodes measure how the red blood cells moving through the aorta change the body's electrical conductivity, a property called "bioimpedance." The calculation is based on the ratio of the applied electrical current to the corresponding output from the heart. This method is non-invasive, and it can be performed by a cardiologist or a cardiac technician.
- Echocardiography. Echocardiography is a common method for calculating cardiac output. An echocardiogram is essentially an ultrasound of your heart. A technician, called a sonographer, applies a microphone-like probe to your chest, which uses sound waves to produce a picture of your heart. The sonographer will record your cardiac output at different intervals and in different positions. The process can take anywhere from 30 to 45 minutes. Your doctor will then interpret the results of the echocardiogram. People typically receive echocardiograms if they have a family history of heart problems or are having symptoms of a possible heart problem.
- Magnetic Resonance Imaging. Magnetic Resonance Imaging (MRI) is frequently used to obtain images of tissue or bone. An MRI is like an X-ray, but with significantly more detail. An MRI can be used to look at cardiac output by examining the structure and function of the heart. A cardiac MRI allows doctors to see detailed pictures of the heart's vessels, valves and any blockage that could be occurring. Cardiac output is measured by looking at the blood flow shown in the images the scanner produces. This technology is non-invasive and painless; however, you cannot wear any metal while having an MRI performed. Because you are essentially inside a huge magnet, the strong magnetic field will pull on any metal.
|
The Economist explains
ALTHOUGH undeniably graceful, gliding has until now been suitable only for pleasure flights. But this is changing, as researchers exploit wind power to enhance the capabilities of unmanned aircraft, especially small drones. Soon, these gliders will be able to stay aloft for weeks. They will thus be able to act as communication relays, keep a persistent eye on the ground below and even track marine animals thousands of kilometres across the ocean.
One such glider, the hand-launched Tactical Long Endurance Unmanned Aerial System (TALEUAS), is being developed at the United States’ Naval Postgraduate School in Monterey, California. It needs an electric propeller to get airborne, but give it a few minutes to reach a reasonable altitude and TALEUAS can fly all day just by riding rising currents of warm air called thermals.
When TALEUAS encounters a thermal it senses the lift and spirals around to take advantage of it. Vultures and eagles use the same technique, and Kevin Jones, who is in charge of the project, says he has often found TALEUAS sharing the air with these raptors. On some occasions, indeed, the birds found that the thermals in which they were attempting to join it were too weak to support their weight, for the drone is a more efficient glider than they are.
TALEUAS’s endurance is limited only by the power requirements of its electronics and payload, for at the moment these are battery powered. Dr Jones and his team are, however, covering the craft’s wings with solar cells that will generate power during the day, and are replacing its lithium-polymer battery with a lithium-ion one capable of storing enough energy to last the night. That done, TALEUAS will be able to stay aloft indefinitely.
TALEUAS does, however, depend on chance to locate useful thermals in the first place. Roke Manor Research, a British firm, hopes to eliminate that element of chance by allowing drones actively to seek out rising air in places where the hunt is most likely to be propitious. As well as thermals, Mike Hook, the project’s leader, and his team are looking at orographic lift, produced by wind blowing over a ridge, and lee waves caused by wind striking mountains. Their software combines several approaches to the search for rising air. It analyses the local landscape for large flat areas that are likely to produce thermals, and for ridges that might generate orographic lift. It also employs cameras to spot cumulus clouds formed by rapidly rising hot air. Such software replicates the behaviour of a skilled sailplane pilot—or a vulture—in knowing where to find rising air and where to avoid downdraughts.
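As a purely illustrative sketch of such cue combination (this is not Roke Manor's software; every function name, weight and score below is hypothetical), a planner might normalize each cue and rank candidate locations by a weighted score:

```python
def lift_score(flat_area, ridge, cumulus, weights=(0.4, 0.3, 0.3)):
    """Combine three cues for rising air, each pre-normalized to [0, 1], into one score.

    flat_area: large flat terrain likely to produce thermals
    ridge:     ridges likely to generate orographic lift
    cumulus:   camera-based detection of cumulus clouds
    """
    return sum(w * s for w, s in zip(weights, (flat_area, ridge, cumulus)))

# Rank candidate waypoints by expected lift and steer toward the best one.
candidates = {"waypoint_a": (0.9, 0.1, 0.6), "waypoint_b": (0.2, 0.8, 0.1)}
print(max(candidates, key=lambda k: lift_score(*candidates[k])))  # waypoint_a
```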
Perhaps the most ambitious scheme for a robot glider, however, is the artificial albatross proposed by Philip Richardson of the Woods Hole Oceanographic Institution, in Massachusetts. Like its natural counterpart, this artificial bird harnesses wind shear—the difference in wind speed at different heights—in a technique called dynamic soaring.
The air is quite still near the surface of the sea even when it is blowing powerfully just a few metres above, so an albatross can rise up and face into the wind, gaining height like a kite in a breeze, then turn to glide down in any direction. By repeating this manoeuvre it can fly thousands of kilometres without flapping its wings, and by tacking it can travel anywhere, regardless of the wind direction, with an average speed six times that of the wind. Dr Richardson thinks he can replicate this with his robot bird. If he does, he will surely break all records for the time a heavier-than-air artefact has stayed airborne.
Correction: The original version of the article mistakenly talked of a British firm called Roke Manor Systems. Its name is, in fact, Roke Manor Research. Sorry. |
Looking for a phonics resource to teach or review silent e long vowel words? This resource is perfect for teaching the long U sound spelled u_e! This phonics unit was created for students in 1st and 2nd grade and is aligned to the Common Core State Standards.
Long Vowels - A Phonics Unit: u_e has everything a teacher needs to introduce, practice, and assess phonics and word recognition using the long u sound. This 50 page download includes:
- Word/Picture strips that fit perfectly in a pocket chart (provided in full color & b/w for more printing options)
- More word cards (without pictures) that can be used as flashcards, to display words for reference, and much, much more
- Reading Practice Page - use to practice fluency with not only words, but with phrases and sentences also
- Word Sort Flip Flap Foldable Activity - students read and sort words according to spelling patterns
- I Have, Who Has Card Game - enough cards for 19 students to play at one time (full color & b/w versions with and without pictures)
- 11 interactive printables provided in various formats and levels to allow for differentiation
- teacher directions/suggestions for all activities
- answer keys for all printables
Download the preview above to get a snapshot of ALL 50 pages of this resource.
Thank you for visiting my store! Feel free to email me with any comments, questions, or concerns at
Visit my blog, www.morethanmathbymo.blogspot.com, for news about my latest freebies, giveaways, and classroom ideas!
|
The best tips for learning music theory are to begin with basic concepts, translate those concepts into more easily recognizable common names, and understand those concepts in practical terms by learning to play a musical instrument. Basic theory lessons can be taught from a textbook for beginners or through online programs. Comparing technical music terms to common words and sounds helps students learn them quickly while providing fun ways to remember each one. Music theory also acts as the foundation for playing and enjoying music, and serves to enhance mastering an instrument.
Individuals interested in learning music theory should begin with basic concepts on which to build future knowledge. Understanding this discipline is similar to learning mathematics. Every note in music is assigned a numerical value that indicates to the musician how long to play a note or hold a rest. The notes are placed within individual segments known as measures, and each measure can only hold a predetermined total of note values. The note and rest values in a measure must add up to exactly one whole measure.
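The arithmetic of filling a measure can be made concrete with exact fractions. Below is a minimal sketch, assuming note values are expressed as fractions of a whole note; the function and names are invented for illustration:

```python
from fractions import Fraction

# Note values as fractions of a whole note.
NOTE_VALUES = {"quarter": Fraction(1, 4), "eighth": Fraction(1, 8), "sixteenth": Fraction(1, 16)}

def fills_measure(notes, beats=4, beat_unit=4):
    """Check that the listed notes exactly fill one measure of the given time signature."""
    return sum(NOTE_VALUES[n] for n in notes) == Fraction(beats, beat_unit)

print(fills_measure(["quarter"] * 4))                                        # True
print(fills_measure(["eighth", "eighth", "quarter", "quarter", "quarter"]))  # True: 1/8+1/8+1/4+1/4+1/4 = 1
```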
Students can find lesson materials online that teach music theory for beginners, and can also order books on the subject from music websites and local book stores. The best resources available for learning music theory will provide areas for the student to duplicate what she has just learned on blank musical staff paper following the introduction of new concepts. For example, if one chapter of the material focuses on understanding quarter notes, eighth notes, and sixteenth notes, then the end of the chapter should guide the student in drawing several measures that include each type of note in a basic 4/4 time signature. This methodology mirrors math textbooks which provide practice problems that challenge a student to use newly learned problem solving techniques at the end of each new section.
Some students may find it beneficial to assign common names to musical technicalities to help them remember each concept. Music shares a similarity to everyday speech in that they are both rhythmic. Just as music notes have assigned values, so can words be broken down into individual syllables. Most children learn to speak long before they begin learning music theory and can pick up new musical concepts quickly when they are structured within a framework they already understand.
Using syllables and common words to teach note values is one example of translating music into more easily understood terms. One quarter note is equivalent to two eighth notes. A new musician may have difficulty understanding what two eighth notes followed by one quarter note will sound like when rhythmically clapped correctly with the hands. They are more likely to be familiar, however, with the word "butterscotch," which, when clapped according to each syllable, produces a rhythm identical to two eighth notes followed by a quarter note. This type of instruction can be found in some music theory textbooks for beginners.
Learning music theory should be combined with learning to play a musical instrument. The purpose of music theory is to better understand the concepts which guide music, the way it is written, and the way it is played. Students can take the facts they learn in a theory book and discover their practical application by playing them on a piano, flute, trumpet, or any instrument they prefer. This technique trains the ear to identify musical values based on their roles in different melodies and harmonies. Soon the eye learns to translate two measures of straight sixteenth notes for the mind's ear as a section of music which will be played lively and fast.
|
Hydrocarbons are the simplest type of organic compound, containing only carbon and hydrogen. Other families of organic compounds, such as halocarbons, alcohols, ethers, amines, aldehydes, ketones and carboxylic acids, are derived from hydrocarbons by replacing hydrogen atoms with other elements or functional groups.
An introduction to the simplest type of organic compound.
Alkanes are hydrocarbons that consist only of single bonds. The ratio of carbon atoms to hydrogen atoms in an alkane always follows the formula CnH2n+2: for n carbon atoms there are 2n + 2 hydrogen atoms.
Naming alkanes can be difficult because each alkane consists of a parent chain and one or more branches. First, it is necessary to count the number of atoms in the longest chain. Then we name the branches based upon how many atoms they contain and number the branches by the atom in the parent chain they connect to, keeping the numbers as low as possible. For multiple identical branches, prefixes are necessary to indicate quantity.
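For straight-chain alkanes, the parent name and molecular formula follow mechanically from the carbon count, as the hypothetical helper below sketches; real IUPAC naming of branched alkanes needs considerably more logic:

```python
# Parent-chain names for the first ten alkanes; the formula follows CnH2n+2.
PARENT_NAMES = {1: "methane", 2: "ethane", 3: "propane", 4: "butane", 5: "pentane",
                6: "hexane", 7: "heptane", 8: "octane", 9: "nonane", 10: "decane"}

def alkane(n_carbons: int):
    """Return the name and molecular formula of the straight-chain alkane with n carbons."""
    return PARENT_NAMES[n_carbons], f"C{n_carbons}H{2 * n_carbons + 2}"

print(alkane(5))  # ('pentane', 'C5H12')
```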
How to name alkanes.
Naming Simple Alkanes
Alkenes - Alkynes
Alkenes and alkynes are unsaturated hydrocarbons: alkenes contain one or more double bonds (e.g., ethene, propene, butene) and alkynes contain one or more triple bonds, giving the two families distinct chemistry. Their physical properties are similar to those of alkanes, meaning that they are non-polar, have low solubility in water and have low melting and boiling points. Alkenes and alkynes are, however, much more reactive than alkanes.
An introduction to alkenes and alkynes.
Naming Alkenes - Naming Alkynes
The rules for naming alkenes and alkynes are generally similar to those for naming alkanes, but also include denoting multiple bonds. First, it is necessary to count the number of atoms in the longest chain. Then we number the branches, always using the lowest numbers possible. Finally, chains or branches that include multiple bonds take different suffixes.
How to name alkenes and alkynes.
|
A relational database management system must manage its stored data using only its relational capabilities.
1. Information Rule
All information in the database should be represented in one and only one way - as values in a table.
2. Guaranteed Access Rule
Each and every datum (atomic value) is guaranteed to be logically accessible by resorting to a combination of table name, primary key value and column name.
3. Systematic Treatment of Null Values
Null values (distinct from empty character string or a string of blank characters and distinct from zero or any other number) are supported in the fully relational DBMS for representing missing information in a systematic way, independent of data type.
Null and N/A should be handled differently.
4. Dynamic On-line Catalog Based on the Relational Model
The database description is represented at the logical level in the same way as ordinary data, so authorized users can apply the same relational language to its interrogation as they apply to regular data.
Every database should have a catalog and description of the fields, indices and mappings.
5. Comprehensive Data Sublanguage Rule
A relational system may support several languages and various modes of terminal use. However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings, and that is comprehensive in supporting all of the following:
data definition
view definition
data manipulation (interactive and by program)
integrity constraints
authorization
transaction boundaries (begin, commit, and rollback).
This refers to a structured query language (SQL).
6. View Updating Rule
All views that are theoretically updateable are also updateable by the system.
This refers to virtual tables created dynamically by joining other tables using SQL. There is a problem here: if a view does not contain the linking field, subsequent updates of that view would violate the relational integrity of the system.
7. High-level Insert, Update, and Delete
The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to the insertion, update, and deletion of data.
This refers to the ability to insert, update, or delete multiple records with a single statement, as sketched below.
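A minimal illustration of set-based, multiple-records-at-a-time operations, assuming Python's built-in sqlite3 module; the table, column names and figures are invented for the example:

```python
import sqlite3

# In-memory database; schema and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees (dept, salary) VALUES (?, ?)",
                 [("sales", 50000), ("sales", 52000), ("eng", 70000)])

# One set-based UPDATE touches every matching row at once -- no row-at-a-time loop.
conn.execute("UPDATE employees SET salary = salary * 1.05 WHERE dept = 'sales'")

print(conn.execute("SELECT dept, salary FROM employees").fetchall())
conn.close()
```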
8. Physical Data Independence
Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representations or access methods.
User interaction is independent of the physical location and access of the database.
9. Logical Data Independence
Application programs and terminal activities remain logically unimpaired when information preserving changes of any kind that theoretically permit unimpairment are made to the base tables.
This means that programs using the data continue to function even when the logical structure of the data is changed.
10. Integrity Independence
Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.
It is desirable to have the database itself enforce the data rules, rather than the interfacing programs.
11. Distribution Independence
The data manipulation sublanguage of a relational DBMS must enable application programs and terminal activities to remain logically unimpaired whether and whenever data are physically centralized or distributed.
The system should work regardless of the location or the degree of aggregation of the data.
12. Nonsubversion Rule
If a relational system has or supports a low-level (single-record-at-a-time) language, that low-level language cannot be used to subvert or bypass the integrity rules or constraints expressed in the higher-level (multiple-records-at-a-time) relational language.
There needs to be a water-tight enforcement of the data rules without any exceptions.
Codd, E. (1985). "Is Your DBMS Really Relational?" and "Does Your DBMS Run By the Rules?" ComputerWorld, October 14 and October 21.
Elmasri, R., & Navathe, S. (1994). Fundamentals of Database Systems. 2nd ed. Redwood City, CA: The Benjamin/Cummings Publishing Co. pp. 283 – 285.
|
The Industrial Revolution changed life for most humans living throughout the world. From 1750 to 1850, incredible developments in agriculture, technology, mining, manufacturing and transportation profoundly affected social and economic conditions, initially in Great Britain, Western Europe, North America and Japan, and ultimately around the globe. One of the most amazing inventions in the area of transportation was the steam engine.
The steam engine, considered by many historians to be the single most important invention of the Industrial Revolution, had its origins about 40 years before the advent of the revolution. In 1712, Thomas Newcomen invented the atmospheric engine, which was improved upon by the Scottish inventor and mechanical engineer James Watt. The steam engine was first used to pump water from mining sites, but it was quickly adopted by mills that had generated power with water. These mills had to be located near water, but once they started to utilize steam engines, they could build their factories anywhere.
From an economic standpoint, steam engines improved productivity, and over time smaller and more reliable engines were developed. Richard Trevithick developed a high-pressure engine, which led to steam engines being used in vehicles, farm equipment, boats and trains. Building on Trevithick's contributions, others continued to perfect the engine, which allowed manufacturers to transport their products great distances within a relatively short period of time.
The Locomotive: one of hundreds of industrial uses for one of the greatest inventions of all time, the steam engine. |
Activity created by Lindsey Macdonald and Annette Benson, CILMAR, based on the following:
Pronin, E., Lin, D.Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28(3), 369-381. https://doi.org/10.1177/0146167202286008
As a result of this activity, participants will be able to:
1. Define the concept of the bias blind spot.
2. Identify how the bias blind spot affects people's perceptions and judgments.
3. Consider how to check their own blind spots in the future. |
What is ringworm?
Ringworm (tinea), also known as dermatophytosis, is a common infection that causes itchy red patches on a person’s skin. It is very contagious and, despite its name, is caused by a fungus, not a worm.
What are the symptoms?
Symptoms vary based on the location and extent of the infection. The main symptom of ringworm is a raised, circular rash that is red and mildly inflamed, with scaling edges and clear, normal skin in the middle. It is typically very itchy and may blister and ooze. This ring-shaped rash can be caused by a number of different fungi and can occur anywhere on a person’s skin (tinea corporis), scalp (tinea capitis), or nails (tinea unguium). When ringworm affects the feet, it is called athlete’s foot (tinea pedis); when it occurs in the groin area, it is called jock itch (tinea cruris).
What are the causes?
Despite its name, ringworm is caused by a fungus, not a worm. It occurs when tiny fungi attach themselves to a person’s body, much like parasites. Known as dermatophytes, these fungi live on the cells in the outer layer of the skin.
Ringworm is extremely contagious and can spread from person to person through contact with the skin. It can also spread to a person from contact with animals, including cats, dogs, ferrets, rabbits, goats, pigs, and horses. Touching objects that have come in contact with an infected person, such as towels, bedding, shower or pool surfaces, and combs or brushes, can also spread ringworm. In rare cases, people can catch ringworm from soil infected with the fungus.
Who is likely to develop ringworm?
The fungi that cause infections tend to grow and spread in areas that are moist and warm. Ringworm is more prevalent in the summer and can affect people of all ages. The following factors can raise the risk of developing ringworm:
- Having close contact with an infected person or their belongings
- Having had a previous fungal infection
- Having a lowered immune system because of AIDS, cancer, or diabetes
- Living in a warm, humid climate
- Wearing tight, moist clothing, such as a bathing suit, for long periods of time
- Playing contact sports, such as wrestling or football
- Sweating excessively.
How is ringworm diagnosed?
Doctors typically diagnose ringworm based on the appearance of a person’s skin. In some cases, a sample of skin scrapings may be taken and examined under a microscope to confirm the diagnosis.
What is the conventional treatment?
Ringworm often responds to self-care measures and clears up on its own within about four weeks. These measures include:
- Keeping the area clean and dry
- Washing sheets and bedding every day while infected to prevent spreading
- Applying over-the-counter antifungal lotions, powders, creams, or ointments that contain clotrimazole (Lotrimin AF), miconazole (Micatin), terbinafine (Lamisil), or tolnaftate (Tinactin).
If ringworm is severe, covers large areas of the body, or doesn’t respond to self-care measures, conventional doctors may prescribe stronger topical antifungal medication or oral drugs such as Sporanox and Diflucan. These drugs can have unwanted side effects, including rash, stomach upset, and abnormal liver function, and their use may interfere with the anticoagulant drug warfarin (Coumadin).
What therapies does Dr. Weil recommend for ringworm?
The best therapy is prevention. Reducing the amount of moisture the feet are exposed to and wearing footwear that can breathe helps prevent both infection and recurrences. Changing socks frequently (especially in warm weather and after sports activity), and thoroughly drying the feet and spaces between the toes after swimming or bathing are also helpful strategies.
In addition to the self-care measures described above, Dr. Weil recommends eating one or two cloves of raw garlic, a potent antifungal agent, a day. He also notes that tea tree oil works as well as pharmaceutical antifungal products without their side effects. Apply a light coating to affected areas three or four times a day, and continue to apply it for two weeks after the infection disappears to make sure the fungus is eradicated. |
Money and Finance in the Roman Empire
Famous rulers such as Julius Caesar, Marcus Aurelius, and Nero governed Rome in ancient times. When we talk about “Kaiser” or “Czar,” we’re referring to Caesar, and the month of July is named after him. Such was his grandeur. At its height, the Roman Empire included the whole Mediterranean region and a fourth of the world’s population. Nearly a million people lived in the city of Rome alone. The Colosseum was used to amuse the populace with gladiator and animal battles and is currently the most visited tourist destination in the Italian capital. The political history of the Empire is frequently taught, but less is known about Roman economics and finance. Today’s essay is a brief history lesson on money and banking under one of the greatest empires the world has ever known: Rome.
The Roman economy was a rustic but relatively urbanized market economy in which money was used as a means of exchange. Trade was indirect rather than conducted by barter. Prices of goods and services such as wine, donkeys, and wheat were set by supply and demand. There was, of course, some government intervention now and again, for example when famine developed after a drought. But for the most part, the market economy was left on its own. Yet Rome was not nearly as affluent as we are now. Most people lived barely above the subsistence threshold. The political elite was mainly preoccupied with their political positions, which is to say they were engaged in wars and power politics, not in economic progress.
Markets were, compared to now, inefficient owing to the slowness of information, but commerce continued across the empire. The Roman infrastructure is recognized for its excellence. The name “aqueduct,” for example, is still used. Grain was brought from regions surrounding the Mediterranean.
The cornerstone of the Roman monetary system was the denarius: a coin containing around 97 percent pure silver. Copper and gold were also commonly used, but silver was the predominant monetary unit for daily transactions. As the quantity of silver in the denarius initially remained generally consistent, the Romans had a relatively stable financial system. It was the monetary standard for more than 450 years. Over the years, however, the worth of the currency fell, which caused prices to rise. After a power transition, new emperors would often order that all coinage be returned to government control and re-stamped with their image. In this procedure, cheaper copper was added, and the proportional quantity of silver in the currency fell. Inflation is not new. Unlike today’s government and commercial banks, the Roman emperors simply melted copper into the silver to spend more than they could collect in taxes.
In the absence of general growth, capital markets in Rome cannot remotely be compared to modern ones. However, there was a demand for capital, and money was loaned and borrowed.
Loans were often provided at interest rates between 4 and 12 percent. The money came from the elite, such as senators who financed the Empire and profited from their positions. This wealth was used not only to support military campaigns that drove imperial expansion but also to fund manufacturing and commerce. To institutionalize these practices, something like a banking system arose to link the supply of money with demand. Again, it was incomparable to contemporary financial organizations, but it comprised entities that borrowed and lent, pushing money flows around to support the economy.
The ancient Roman Empire was affluent compared to its contemporaries. It had a solid monetary system based on silver, which was debased over time, producing inflation. Free market institutions were in existence and mostly left to their own devices. Capital flowed from the savings of the aristocracy to borrowers, funding industry and military development, but remember that this was an elite game. Ordinary inhabitants at the margins of the Empire were not prosperous, but they were glad not to die of famine. Still, the amount of growth the Romans accomplished is impressive. The market produced a massive empire. |
The hominid family tree has 7 widely acknowledged genera: Sahelanthropus, Orrorin, Ardipithecus, Kenyanthropus, Paranthropus, Australopithecus, and Homo. There were doubtlessly more.
Hominins were born when grasses were on the rise. ~ American paleoanthropologist Ann Gibbons
Climate was a likely driver of the radiation of early hominids, at a time of warming, when forests thinned. Social factors may have also been instrumental, but there is no evidence of this, nor would there be even if it were so.
The fossils of the earliest hominids were found in Europe. Living 9.6–7.2 MYA, Ouranopithecus appears to be a mixture of ape and hominid.
The fossils of the other early hominids were dug in central east Africa. Later finds range more widely, to the west and south.
Teeth are the most common fossils, because they withstand the punishments of time better than any other body part. Dental remains, by the shape of various teeth and their wear patterns, yield indications of diet.
To estimate evolution, early hominids have been compared to our closest relatives: chimpanzees and gorillas. This apples-and-oranges comparison is complicated by the lack of any significant fossil record of apes.
Once asked if hominid fossils spoke to him, Kenyan fossil hunter Kamoya Kimeu replied: “yes, but you can’t understand them!”
Sahelanthropus tchadensis (“the Sahel man from Chad”) (7–5.6 MYA). Few fossil specimens are known: a partial cranium, 5 pieces of jaw, and several teeth. No other body parts have been found.
Sahelanthropus probably walked upright. The braincase, at 320–380 cm³, is similar to that of extant chimpanzees: about a quarter the size of a modern human's.
Sahelanthropus's huge brow ridges and face resemble the face of Homo, as do the small canine teeth. But Sahelanthropus's tooth enamel was very thick: suited for chewing vegetation, nuts, and tubers. This combination has not been found in fossil apes, nor seen in later hominids.
Sahelanthropus may represent a common ancestor of humans and chimps, or an ancestor to neither. Some cranial characteristics resemble those of Miocene apes. Other features are more like later hominids. Sahelanthropus's fossils are separated from both these groups by geography and time, so Sahelanthropus as its own genus is justified.
Precious few ancestors of chimps or gorillas have been found. The upshot: the inclusion of Sahelanthropus in the hominid family tree is problematic, and its place uncertain.
Orrorin tugenensis (“original man of the Tugen hills”) (6.2–5.6 MYA) presaged later hominids. The thickness of a found femur (thigh bone), as well as its hip socket, identifies Orrorin as bipedal, though the orientation and shape of the toe suggest a grasping foot. The shapes of the upper arm bone and slightly curved finger bone intimates that the upper arms were weight-bearing. Orrorin looks to have been a tree climber as well as walking upright.
Found teeth suggest a diet of fruit and seeds: large upper front teeth, reduced apelike pointed canines, microdont (smaller) post-canines like modern humans, and low cusped molars.
The locations of the fossils found indicate that Orrorin lived in dry evergreen forests, lakeside woodlands, and wet grasslands – not the savanna long assumed to be the progenitor environment of hominins.
Orrorin arose 3 million years earlier than Australopithecus afarensis, but Orrorin's anatomy is closer to modern humans. For one, Orrorin's hand indicates a precision grip, more capable of fine manipulation than that of many later hominids.
If Orrorin is the ancestor to modern humans, australopiths may be a side branch of the hominin tree. The lineage to humans remains unclear and contentious.
Ardipithecus (the Afar word for “basal family ancestor”) (5.8–4.3 MYA) is a genus with 2 known species: Ar. kadabba and Ar. ramidus.
Little is known of the earlier Ar. kadabba, as only teeth and pieces of skeletal bones have been recovered, dated 5.6 MYA. Ar. kadabba lived in a habitat mix of woodland and grassland, with small lakes, swamps, and springs.
By contrast, a variety of Ar. ramidus fossils have been found, altogether comprising almost every part of the skeleton. The single most complete fossilized skeletal remains were of a female that lived 4.4 MYA, named Ardi.
Ar. ramidus had a modest stature: around 1.2 meters, and a modest brain size (300–350 cm³). There was little sexual dimorphism: males and females were similarly sized, unlike chimpanzees. This suggests that competition between males was minimal. Sexual dimorphism commonly occurs when males compete for sex.
Males and females may have both taken part in food gathering and taking care of offspring. Sociality may not have been as ritualized or segregated as it is with some other primates, notably baboons.
Many physical features suggest Ar. ramidus both walked upright and swung from limb to limb. Chimp feet are good for grasping trees. Ar. ramidus feet were better for bipedal locomotion but the arms and hands are consistent with clambering in trees.
Ar. ramidus lived in the woods, preferring the trees to the open plains of the savanna. They lived mostly during a time when drought was rare.
The Ar. ramidus teeth found are not sufficiently differentiated to indicate limited dietary preferences. This suggests an omnivore or broad vegetarian diet, as the molar enamel is thicker than that of chimps, but thinner than later hominins.
It is unlikely that Ar. ramidus ate hard foods, such as tubers and nuts, which require thick enamel for heavy chewing. Ar. ramidus probably enjoyed fruits, leaves, and softer vegetables, and perhaps insects, eggs, and small animals.
Plant-eating baboons lived in close association with the australopithecines and probably helped form an effective early warning system against predators such as big cats, especially leopards. ~ English educator Douglas Palmer
Australopithecus (“Southern ape from the lake”) (4.2–1.8 MYA) is a relatively long-lived genus of hominids, with considerable species diversity. There were at least 8 distinct australopiths. All have been found in Africa.
Most australopiths dieted on fruit, grasses, sedges, vegetables, and tubers. That an australopith evolved into the first of the Homo genus is generally accepted.
In most mammals, females stay in their natal community, while males disperse after adolescence to avoid inbreeding. Among the apes, gorillas follow this pattern. In contrast, chimpanzees and early hominids practiced the opposite. For chimps and their close evolutionary descendants, including australopiths, cohesive male bonding required growing up together. For the sake of communally defending territory, post-adolescent females were forced to disperse to find new lives. Doubtlessly males did not mind taking them into their tribe.
Four million years ago, if you broke your jaw, it was probably a fatal injury. You wouldn’t be able to chew food. You’d just starve to death. ~ American evolutionary biologist David Carrier
Early hominids losing the sharp canine teeth of their primate ancestors took much of the bite out of biting an opponent. There was compensation at hand.
Australopiths evolved numerous traits that endowed greater fighting ability, including a hand that afforded formation of a fist. This turned a delicate musculoskeletal system into a club.
Contemporaneously, the faces of males diverged from those of females. The facial bones that differ most are those that strengthened to protect the face from injury during fist fights.
The function of the fist was reinforced by bipedality. Unlike other primates, apes and hominids walk on their heels. This body posture lends extra punching power, and suggests physical conflict was instrumental in hominin descent.
Later Homo descent traded weaker muscles for comity. Human arm and upper body strength is much less than australopiths. This reduction in damaging ability afforded more gracile bodies, even as facial bone differences still characterize human sexual dimorphism.
Au. anamensis first made the scene 4.2 MYA, lasting to 3.9 MYA. Its thick enamel teeth suggest a diet of nuts, seeds, fruits, and leaves. Au. anamensis was a tree climber. How bipedal it was remains unsettled.
Au. afarensis (3.9–2.9 MYA) likely descended from Au. anamensis. Au. afarensis fossils have been found only in east Africa.
Cranial capacity to body mass was 1.2%, compared to 2.75% in humans today – roughly 1/3rd the brain size. Essentially, the anatomy of Au. afarensis appears apelike above the neck and humanlike below. Au. afarensis illustrates mosaic evolution: a variety of adaptive changes in separate places at disparate rates during different times.
There is no doubt that Au. afarensis walked upright: ranging the savannas and open woodlands, eating foraged fruits, seeds, sedges, and roots. Still, Au. afarensis retained a fondness for trees, where it spent much of its time.
Lucy is a famous Au. afarensis fossil find: a nearly complete female skeleton, 25 years old when she died 3.18 MYA.
Another find – a female juvenile dubbed Selam by its discoverers – suggests that Au. afarensis mixed walking with climbing. Selam’s shoulder blades and socket indicate adaptation for frequent tree-climbing. Selam presents an interpretive challenge. A terrestrial lifestyle while retaining some shoulder characteristics of earlier tree-climbing hominins is possible. Despite their similarities, Lucy and Selam may not have been the same species.
Au. afarensis had stone tools and practiced butchery. Ungulate bones found near Selam appear to have been cut by stone tools. But Au. afarensis lacked the manual dexterity to craft stone tools. Its hand proportions were more like gorillas than later hominins. While Au. afarensis may have been able to bring the tips of its fingers and thumb together, its thumbs were too short for the precision grip that later hominins had, enabling them as tool makers.
Like Ardipithecus ramidus, Au. afarensis was only moderately sexually dimorphic. Adult males stood ~1.6 meters and weighed ~45 kg. Females were a bit smaller and lighter. Males also typically displayed large crests at the top of their skulls; females did not.
Closely related hominin species shared the planet many times in the past few million years. ~ American paleontologist Bernard Wood
Au. afarensis had contemporaneous cousins. Besides closely related species in east Africa, one known relative was in North Africa; the other in the far south.
Lucy’s Northern Cousin
A 3.5-million-year-old foot fossil found in Ethiopia reveals a species like Ar. ramidus, at least from the ankle down. While Lucy walked about much of the time, this northern species had feet for climbing trees and grasping limbs. It would have walked awkwardly.
Lucy’s Southern Cousin
A nearly complete fossil skeleton dating to 3.67 MYA was found in South Africa’s Sterkfontein Caves in the mid-1990s. Similar partial skeletons had already been discovered at a nearby site in 1948. This small-footed hominoid with a flat face is known as Australopithecus prometheus.
Lucy’s Next-Door Neighbor
Current fossil evidence clearly shows that there were at least 2, if not 3, early human species living at the same time and in close geographic proximity. ~ Ethiopian paleoanthropologist Yohannes Haile-Selassie
Jaw fossils dating to 3.4 MYA were found in Ethiopia in 2011, showing that Lucy had a nearby relative with a quite different diet. The species was dubbed Australopithecus deyiremeda.
Au. bahrelghazali is a controversial 1993 discovery by French paleontologist Michel Brunet. His find was a single specimen, dated 3.6 MYA. It has been the only australopithecine fossil found in central Africa. All others are from east Africa, some 2,500 km distant.
Au. bahrelghazali has similar dentistry to Au. afarensis, but is more like Sahelanthropus in the shape of its skull.
Stingy Brunet classified his find as a separate species, but refuses to let others examine the remains, contrary to the International Code of Zoological Nomenclature.
Au. africanus, which lived in southern Africa, dates to 3.3 MYA. Au. africanus was a significant step from Au. afarensis. Several anatomical features were significantly more like those of the Homo genus, including a larger cranium (400–625 cm³), a gracile (slender) build, and more humanlike hands.
Au. africanus had a pelvis that was better built for walking than Au. afarensis. Yet Au. africanus retained the apelike curved fingers of tree climbers.
The diet of Au. africanus seems to have been seasonal, favoring fruit, though able to chew seeds and other food requiring mastication.
Sexual dimorphism showed in spinal adaptation of females to bear lumbar loads upright while pregnant.
Like chimpanzees, Au. africanus had patrilocal communities: in reaching sexual maturity, females emigrated to find mates.
Au. garhi was a gracile species that lived 2.5 MYA. Large molars and pre-molars suggest a diet of tough, fibrous foods: perhaps tubers and other chewy vegetables. Au. garhi shaped stone tools. Only a single find of cranial fragments has been made of Au. garhi, in Ethiopia. Much mystery remains.
Au. sediba represents an amalgam. Fossils date to 2 MYA. Evident adaptations include better walking, even running, and a surprisingly modern hand capable of a precision grip. Vestiges of the past remained, albeit with a twist. Au. sediba had long arms, climbed trees, and walked upright but had a different gait from that of either chimps or humans. This suggests that several forms of bipedalism evolved among hominids.
While Au. sediba had a humanlike lower rib cage, its lower back was longer and more flexible than that of people today. Au. sediba brain size was on par with that of other australopiths (420–450 cm³), but the cranial shape presaged modern humans.
Au. sediba lived in woodlands, eating fruit, tree leaves, and bark. Au. sediba teeth were like Au. africanus; perhaps an instance of parallel evolution, as Au. sediba had no links to earlier hominins often regarded as Homo ancestors, notably Au. afarensis. Au. sediba may have simply been one of several different australopiths living at the time.
One of the main conclusions from discovering Kenyanthropus was that human evolution is a mosaic process, with different species showing unexpected combinations of anatomy. ~ English anthropologist Charles Lockwood
Kenyanthropus platyops (“flat-faced man of Kenya”) (3.5–3.2 MYA). The fossils of K. platyops were discovered in Lake Turkana, Kenya in 1999.
Kenyanthropus had a broad, flat face; striking with its high cheek bone and flat plane behind its nose bone. Its toe bone suggests that Kenyanthropus walked upright. Teeth were intermediate between ape and human characteristics. Its small ear hole is like that of a chimp. Its small brain fits with other early hominids.
Whether Kenyanthropus represents a new genus, a species of Australopithecus, or merely an Au. afarensis individual remains unsettled. It has been suggested that Kenyanthropus was one of numerous hominid species around that time, each adapted to a specific environment: that is, a product of adaptive radiation.
Paranthropus (“ape that lived alongside humans”) (2.7–1.2 MYA). The bipedal paranthrops probably descended from gracile australopiths but took a decided turn to the muscular.
Paranthropus had a robust anatomy, with massive jaws, sturdy bones, and strong muscles. Paranthrops lived off vegetation and possibly grubs, resorting to chewy fare only when starving.
Paranthrops were sexually dimorphic. While there was some size variation in the different paranthropic species, males were 1.2–1.4 meters, weighing ~68 kg, while females were 0.9–1.0 meters, 40–45 kilos. Males had a sagittal (cranial) crest (shades of Au. afarensis), but the smaller females did not. Like Au. africanus, Paranthropus were patrilocal.
P. aethiopicus lived 2.7–2.3 MYA; P. boisei: 2.6–1.3 MYA. P. robustus: 2.3–1.2 MYA.
Paranthrops’ brain size was that of a chimp today, ~40% that of a modern human; but paranthrops had significantly larger braincases than australopiths.
P. robustus had a hand adapted for a precision grip and tool use. The extent to which paranthrops crafted or used tools is unclear. Paranthrops are thought to have preferred living in wooded areas, not grasslands.
Paranthropus were unrelated to the lineage that led to humans: hominids, not hominins. Paranthrops were on an evolutionary path that led to extinction.
The tough build of Paranthropus may have disguised a hominid poorly adapted to the demands of changing climate, at least compared to the upstart Homo genus that prevailed. |
Lasers play roles in many manufacturing processes, from welding car parts to crafting engine components with 3D printers. To control these tasks, manufacturers must ensure that their lasers fire at the correct power. To date, there has been no way to precisely measure laser power during the manufacturing process in real time; for example, while lasers are cutting or melting objects. Without this information, some manufacturers may have to spend more time and money assessing whether their parts meet manufacturing specifications after production.
A new laser power sensor — a chip-sized “smart mirror” — was developed for lasers of hundreds of watts — the range typically used for manufacturing processes. The radiation-pressure power meter could be integrated into machines employed in additive manufacturing, a type of 3D printing that builds an object layer by layer, often using a laser to melt the materials that form the object.
Conventional techniques for gauging laser power require an apparatus that absorbs all the energy from the beam as heat. Measuring temperature change allows researchers to calculate the laser's power. But if the measurement requires absorbing all the energy from the laser beam, then manufacturers can't measure the beam while it's actually being used. Radiation pressure solves this problem. Light has no mass, but it does have momentum, which allows it to produce a force when it strikes an object. A 1-kilowatt (kW) laser beam has a small but noticeable force — about the weight of a grain of sand.
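That figure is easy to sanity-check: light reflecting off an ideal mirror exerts a force F = 2P/c. The sketch below is illustrative only; the function name and the gravity conversion are written out here, not taken from the article:

```python
C = 299_792_458  # speed of light, m/s

def radiation_force_newtons(power_watts, reflectivity=1.0):
    """Force of a beam on a mirror: F = (1 + R) * P / c; R = 1 for perfect reflection."""
    return (1 + reflectivity) * power_watts / C

force = radiation_force_newtons(1000)  # 1 kW beam on an ideal mirror
print(force)                # ~6.7e-6 N
print(force / 9.81 * 1e6)   # equivalent mass, ~0.68 milligrams: about a fine grain of sand
```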
By shining a laser beam on a reflective surface and then measuring how much the surface moves in response to light's pressure, researchers can both measure the laser's force (and therefore, its power) and also use the light that bounces off the surface directly for manufacturing work.
The new device works essentially as a capacitor, measuring changes in capacitance between two charged plates, each about the size of a half dollar. The top plate is coated with a highly reflective mirror called a distributed Bragg reflector, which uses alternating layers of silicon and silicon dioxide. Laser light hitting the top plate imparts a force that causes that plate to move closer to the bottom plate, which changes the capacitance, or its ability to store electric charge. The higher the laser power, the greater the force on the top plate.
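For intuition on why a tiny gap change is measurable, consider the ideal parallel-plate relationship C = epsilon_0 * A / d. The numbers in the sketch below (plate area, gap, and deflection) are invented for illustration and are not the device's actual dimensions:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m):
    """Idealized parallel-plate capacitance: C = epsilon_0 * A / d."""
    return EPSILON_0 * area_m2 / gap_m

area = 7e-4                                    # ~half-dollar-sized plate, m^2
c0 = parallel_plate_capacitance(area, 100e-6)  # assumed 100-micron gap at rest
c1 = parallel_plate_capacitance(area, 99e-6)   # laser force narrows the gap by 1 micron
print(c0, c1 - c0)                             # ~62 pF baseline, ~0.6 pF shift to detect
```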
Laser light in the range used for manufacturing — in the hundreds of watts range — is not powerful enough to move the plate very far. That means that any physical vibrations in the room could cause that top plate to move in a way that wipes out the tiny signal it's designed to measure. The new sensor is insensitive to vibration. Both the top and bottom plates are attached to the device by springs. Ambient influences, such as vibrations if someone closes a door in the room or walks past the table, cause both plates to move in tandem. But a force that affects only the top plate causes it to move independently.
With this technique in place, the sensor can make precise, real-time power measurements for lasers of hundreds of watts, with a background noise level of just 2.5 watts. The prototype sensor has been tested at a laser power of 250 watts. With further work, that range will likely extend to about 1 kW on the high end and below 1 watt on the low end. |
Helping Child Vision Development
Did you know that babies have to learn how to see?
It might seem strange, since using our eyes is something we do automatically all day, but babies need to develop a number of visual skills in order to effectively use their eyes and process what they’re seeing, just like they have to learn how to walk and talk. Parents can be a big help to this process, particularly by choosing age-appropriate toys.
What a Baby Sees in the First Six Months
An infant’s world is made up of light, shadow, and blurry shapes. They can only effectively focus on things 8-15 inches away — coincidentally the perfect distance to see the face of the person holding them! Over time, they begin to see things more clearly and sharply, and parents can help in several ways:
- Fill their surroundings with color. It takes a few weeks before a baby’s color vision starts to develop, and once it does, they won’t be able to get enough of bright, pretty colors. That’s why they enjoy mobiles.
- Help them get used to tracking movement with their eyes by moving objects in front of them.
- Play peek-a-boo. This isn’t only to make them laugh (even though that already makes it worth doing); it’s a great way of giving them practice focusing their eyes.
The Dramatic Progress in Months 6-12
Hand-eye coordination begins to develop around month six, and parents can help by keeping Baby well supplied with colorful objects to grab and play with. Crawling also helps them learn coordination (which does sometimes come at the price of some bumps on the noggin, since they haven’t learned that their heads don’t stop at their eyes yet).
Months 6-12 are when your baby will get bored of peek-a-boo. The reason they love peek-a-boo so much in the early months is that they don’t understand object permanence yet, so it looks like magic to them, but eventually they figure out the trick: Mom and Dad aren’t blinking out of existence when they’re out of sight, they’re just hiding behind their hands! At this point, you can change the game and start hiding toys under a blanket and challenging them to find them.
Toddlerhood and Advanced Visual Skills
Toddlers gain a lot of coordination when they learn to walk, and playing with balls helps too. Comprehension and balance are big factors in a toddler’s visual skills. When they begin talking, they start putting names to the objects they see, and around age two, they might discover burgeoning artistic talent. Make sure they have access to paper and crayons! Big, interlocking blocks or wooden blocks are also great for toddlers.
Early Childhood Eye Exams
As important as it is to provide the right types of toys and play the right games with your baby, eye exams are critical too. Babies and toddlers lack the words and understanding to communicate to us if something is wrong with their eyesight, so more than anyone else, they need an eye doctor to check for them. This is why we recommend scheduling the first eye exam at six months and another around their third birthday. |
Meningitis is an infection of the meninges, the membranes that surround the brain and spinal cord. Encephalitis is inflammation of the brain itself. Anyone can get encephalitis or meningitis.
Causes of encephalitis and meningitis include viruses, bacteria, fungus, and parasites. Anyone experiencing symptoms of meningitis or encephalitis should see a doctor immediately.
Symptoms of encephalitis that might require emergency treatment include loss of consciousness, seizures, muscle weakness, or sudden severe dementia.
Other symptoms include: sudden fever, headache, vomiting, heightened sensitivity to light, stiff neck and back, confusion and impaired judgment, drowsiness, weak muscles, a clumsy and unsteady gait, irritability.
In more severe cases, people may have problems with speech or hearing, vision problems, and hallucinations.
Symptoms of meningitis, which may appear suddenly, often include: high fever, severe and persistent headache, stiff neck, nausea, sensitivity to bright light, vomiting, and changes in behavior such as confusion, sleepiness, and difficulty waking up.
In infants, symptoms of meningitis or encephalitis may include fever, vomiting, lethargy, body stiffness, unexplained irritability, and a full or bulging fontanel (the soft spot on the top of the head).
Anyone experiencing symptoms of meningitis or encephalitis should see a doctor immediately. Antibiotics for most types of meningitis can greatly reduce the risk of dying from the disease.
Antiviral medications may be prescribed for viral encephalitis or other severe viral infections. Anticonvulsants are used to prevent or treat seizures. Corticosteroid drugs can reduce brain swelling and inflammation. Over-the-counter medications may be used for fever and headache.
Individuals with encephalitis or bacterial meningitis are usually hospitalized for treatment. Affected individuals with breathing difficulties may require artificial respiration.
The prognosis for people with encephalitis or meningitis varies. In most cases, people with very mild encephalitis or meningitis can make a full recovery, although the process may be slow. Individuals who experience mild symptoms may recover in 2-4 weeks.
Other cases are severe, and permanent impairment or death is possible. The acute phase of encephalitis may last for 1 to 2 weeks, with gradual or sudden resolution of fever and neurological symptoms.
Individuals treated for bacterial meningitis typically show some relief within 48-72 hours. Neurological symptoms may require many months before full recovery.
With early diagnosis and prompt treatment, most individuals recover from meningitis. However, in some cases, the disease progresses so rapidly that death occurs during the first 48 hours, despite early treatment.
Current research efforts include gaining a better understanding of how the central nervous system responds to inflammation in the brain.
A better understanding of the molecular mechanisms involved in the protection and disruption of the blood-brain barrier could lead to the development of new treatments for several neuroinflammatory diseases such as meningitis and encephalitis.
Additional research focuses on autoimmune causes of encephalitis and potential treatments for them. |
What Is Experiential Learning:
Experiential learning is the process of learning through experience, more specifically defined as "learning through action." Hands-on learning is a good example of experiential learning. Experiential learning is designed to engage participants in developing skills as well as enhancing their knowledge. Students take an active role in the learning, which leads them to experience greater gratification in learning.
Why Experiential Learning Is Important in Boosting Skills:
- Enhances ability in applying knowledge
- Improves access to real-time coaching and feedback.
- Promotes teamwork as well as communication skills.
- Enhances interpersonal skills
- Helps in understanding the emotions of self and others
- Development of productive practice habits.
- Experiential learning enables students to engage in creative tasks. With this exposure, their brains tend to seek their own unique, most fulfilling solutions to a hands-on task.
- Accomplishments demonstrate gains in knowledge in an obvious, visible way.
- Experiential learning makes data and concepts "real" by applying them to hands-on tasks.
- Helps in better planning with a bigger goal in mind.
- It is an opportunity to create and to use the available resources in a better manner.
- Participants start analysing how their actions affected the outcome and how the outcome might have varied. This analysis helps them understand the concepts better, and also how they can be applied to other, varied circumstances.
- As it involves trial and error methods, mistakes become valuable.
- Students tend to find better approaches as they engage in hands-on tasks. They find that some approaches work better than others. Therefore, students learn not to fear mistakes, but to value them.
- As this process involves developing plan of action, students will obviously learn problem solving skills. This in turn prepares students for Real Life
- Its about shift focus from I, Me , Mine and Myself to We, Us, Our, and Everyone.
Lighthouse Training Academy provides the best experiential learning in town, with activities that are communal in nature. These enhance skill sets and boost the ability to learn and adapt to circumstances. |
CORVALLIS, Ore. – Restoration efforts for kelp forests may be most effective in areas where the bedrock seafloor is highly contoured, research by Oregon State University suggests.
The findings, published today in the Proceedings of the National Academy of Sciences, are important because kelp, large algae with massive ecological and economic importance around the world, are under siege from environmental change and overgrazing by urchins.
The study, led by recent graduate Zachary Randell when he was an Oregon State doctoral student, shows that kelp forests in areas with ocean floor ruggedness – scientifically termed substrate complexity or surface rugosity – tend toward stability.
There were urchins in kelp forests growing in areas of high substrate complexity, but those urchins did not cause widespread destruction of kelp forests. One hypothesis is that substrate complexity retains “drift algae” – detached pieces of kelp and other algae – which urchins prefer to eat over live kelp, Randell said.
“Kelp forests are declining worldwide due to different combinations of environmental change and lower numbers of the predators that control urchins, like sea stars, cod and sea otters,” Randell said. “More and more kelp forests are transitioning into what are known as ‘urchin barrens,’ an alternative arrangement of the ecosystem with dramatically fewer species, including species of recreational, commercial and conservation interest.”
Kelp are a foundation species that occupy nearly 50% of the world’s marine ecoregions. They especially thrive in cold water, where they form large aquatic forests that provide essential habitat, food and refuge for many species. Their sensitivity to certain growing conditions means climate change and a warming ocean are particularly problematic for them.
Kelp are often harvested for use in products ranging from toothpaste and shampoos to puddings and cakes, and they also help support nutrient cycling, shoreline protection and commercial fisheries such as rockfish. Economists place kelp’s value in the range of billions of dollars annually.
Looking for a connection between seafloor structure and kelp forest health, Randell and collaborators at Oregon State and the U.S. Geological Survey Western Ecological Research Center looked at decades of data from subtidal monitoring around San Nicolas Island. The island is the farthest offshore of the Channel Islands off the Southern California coast, and the data came from nearly 40 years of biannual surveys by scientific SCUBA divers.
The research group that also included OSU’s Mark Novak and the USGS’s Michael Kenner, Joseph Tomoleoni and Julie Yee is the first to demonstrate a link between substrate complexity and kelp forest dynamics, Randell said.
The scientists found that relatively flat seafloor locations showed abrupt transitions between kelp forests and urchin barrens, and from urchin barrens to kelp forests.
“But sites with a lot of physical structure along the seafloor showed resilience – the forested state was able to persist, even when similarly disturbed compared to the flat locations,” Randell said. “The seafloor connection needs further study given recent kelp declines and a push for restoration efforts like outplanting or urchin removal. Understanding the factors behind tipping points and switches between states, which many types of ecosystems are subject to, is key to effective management.”
The research suggests restoration efforts could target areas with abundant substrate complexity to create “hotspots of resilience,” Randell said. Where complexity doesn’t exist naturally, it could be manufactured – for example, via artificial reefs – he added.
“Those artificial reefs could also be the focus of outplanting recovery efforts of sea urchin predators, such as the captive rearing and release of sea stars,” he said. “The reefs could be constructed as optimal environments for urchin predators and also for urchins and the drift algae that urchins eat.”
Another interesting component of the findings, Randell noted, is that urchin density in a healthy kelp forest was often similar to that of an urchin barren; thus the animal that often gets a bad rap as a destroyer of kelp forests may actually be just one piece in a much larger puzzle, he said.
“The common practice of culling urchins will decrease grazing rates only in the short term and won’t by itself bring about kelp forest stability,” Randell said. “Urchin removal is likely to be most effective for jump-starting kelp recovery when efforts are focused on high-complexity substrate and paired with other tactics like kelp outplanting.”
The U.S. Geological Survey, the U.S. Navy, the National Science Foundation, the David and Lucile Packard Foundation and the Partnership for Interdisciplinary Studies of Coastal Oceans supported this research.
About the OSU College of Science: As one of the largest academic units at OSU, the College of Science has seven departments and 12 pre-professional programs. It provides the basic science courses essential to the education of every OSU student, builds future leaders in science, and its faculty are international leaders in scientific research. |
In the Middle Ages, while Europeans were busy warring, plundering, and burning heretics at the stake, Muslim scholars were inventing the most advanced devices of the day. They refined the scientific method, developed effective cardiac drugs, and built celestial observatories—yet over time their contributions were largely forgotten.
Historian Fuat Sezgin spent 60 years tracking down ancient manuscripts and commissioning craftspeople to reproduce hundreds of instruments, from clocks to syringes. His replicas on display at the Islamic Science and Technology History Museum in Istanbul remind us that the culture now often associated with an antiscience ideology was once a catalyst for innovation. "Modern Muslims do not know this great history," Sezgin says, "so they sometimes have a complex toward modern science." His work exposes a geeky heritage to be proud of. Here are a few of those bright ideas from the so-called Dark Ages.
Universal Astrolabe (11th century)
What it is: An instrument for reading the stars
Why it matters: Starting around AD 622, Muhammad's followers spread throughout the Middle East and into Central Asia and North Africa. Astrolabes, which may date back to the Greeks, enabled travelers to determine time and direction from the constellations. But early users had to tote around a set of customized plates for each latitude. This all-in-one model, created by an astronomer known as Azarchel, lightened adventurers' loads. With it, globe-trotting Muslims could pray daily at the correct hours, facing Mecca, whether they were in Ibiza or Kazakhstan. Astrolabes ultimately led to the development of astronomical clocks.
Alembic (8th century)
What it is: An apparatus for distilling liquids
Why it matters: Islamic culture forbids the drinking of alcohol, but early Muslim scientists made great advances in distillation, a process they refined to create medicines, perfumes, and essential oils. The alembic was the first device that could fully separate substances with different volatilities. A liquid mixture was heated until the component with the lowest boiling point vaporized and rose to meet cool air at the neck. There it condensed back into liquid form, and the purified fluid dripped into a collection container. This was the precursor of the pot still, without which—perish the thought—whiskey would not exist.
Torpedo (13th century)
What it is: The first self-propelled projectile for sea warfare.
Why it matters: The Chinese invented gunpowder, but Hassan al-Rammah got the idea of stuffing it into a metal case and shooting it across water to shock and awe an enemy ship. In The Book of Fighting on Horseback and With War Engines (1280), al-Rammah dubbed it a "self-moving and combustible egg." This spiny missile would be filled with saltpeter, flammable liquid, and metal filings. Once ignited, combustion would propel the torpedo to its target, where it might explode.
Phoenix dactylifera, Date Palm
Arecaceae, palm family.
Phoenix is the Latin term for the Greek word that means "date palm."
The species name dactylifera means "finger-bearing" and refers to the fruit clusters produced by this palm. Dactylifera is a combination of the Greek word dactylus, or "finger," and the Latin word ferous, or "bearing."
The common name comes from the fact that the palm bears fruit called dates, which are not only edible, but also tasty.
While the native range of this palm is uncertain, it is thought to be indigenous to either North Africa or the Middle East. It is also present in Turkey, Pakistan, and Northwest India, but is thought to have been introduced to these areas long ago through human transport. Although date palm prefers dry climates, it occurs along rivers and streams and in areas of the desert that have underground water sources. In America, this tree grows well in regions where there is low humidity, although it is found in humid areas like Florida, and where the temperatures do not fall below 15°F.
Date palm is slow growing and requires full sun for optimal growth; it can reach heights up to 80 feet. The pinnately compound blue-green to gray-green leaves or fronds can grow to 20 feet in length; leaflets are 1 to 2 feet long and form a "V" shape down the rachis. The petiole (stem that attaches the leaf to the trunk) is considered "false" because it contains 3- to 4-inch thorns that are actually modified leaflets. When young, the trunk bears boots (remnant petioles that were attached to the trunk); when mature, the boots wear and become knobby but still show a characteristic spiraling leaf arrangement. Orange inflorescences can reach lengths of 4 feet, are heavily branched, bear small white blossoms, and grow among the leaves. The oblong edible fruits are 1 to 3 inches long and occur in orange or red masses when mature.
Each individual tree is either a male or a female (as is true for all species within this genus). Male trees are extremely allergenic because their pollen is air-borne, whereas the female palms cause minimal to no allergies.
People in the Middle East have used the date palm as a main food source for at least 1000 years. It is only in the last several hundred years that it became a global commodity. Today, cultivation of date palm has spread into many other parts of the world including the United States. Currently, there are hundreds of varieties of the date palm, with noticeable differences in fruit characteristics. Only two varieties produce fruit in areas with humidity similar to that of the Gulf Coast region of the United States.
Date palm makes an attractive landscape specimen with its blue-green leaves, textured trunk, and bright orange inflorescences. It requires neutral to acidic soils that are well drained, and it should be located where it can receive direct sunlight. The date palm is commonly used as a street tree because it is able to thrive even when there is limited space for root growth; however, some find the fallen fruits to be messy and undesirable along sidewalks. Although its crown is wide, it is not very dense, and therefore the date palm does not function well as a shade tree. Fruit production requires that both female and male trees be present in the same area. Some minor upkeep may also be required, and many homeowners trim the lower leaves to discourage a fungus that commonly develops in warmer climates. This species is susceptible to lethal yellowing disease (https://edis.ifas.ufl.edu/pp146), so it is best to avoid planting the date palm where the disease is present.
Borror, D. J. 1988. Dictionary of root words and combining forms (2nd ed.). Mountain View, CA: Mayfield Publishing Company.
Coombes, A. 1994. Dictionary of plant names: Botanical names and their common name equivalents. Portland, OR: Timber Press.
Floridata.com. 2004. Phoenix dactylifera. Retrieved from http://www.floridata.com/ref/p/phoe_dac.cfm
Gilman, E. F. 1997. Trees for urban and suburban landscapes. Albany, NY: Delmar Publishers.
Harrison, N. A. and M. L. Elliot. 2009. Lethal yellowing (LY) of palm (PP146). Gainesville, FL: UF-IFAS Florida Cooperative Extension Service. Retrieved from https://edis.ifas.ufl.edu/pp146
Meerow, A. W. 2004. Betrock's guide to landscape palms (9th ed.). Hollywood, FL: Betrock Information Systems.
Ogren, T. L. 2000. Allergy-free gardening: The revolutionary guide to healthy landscaping. Berkeley, CA: Ten Speed Press.
Riffle, R. L. and P. Craft. 2003. An encyclopedia of cultivated palms. Portland, OR: Timber Press, Inc. |
Periodontal (gum) disease is an infection caused by bacterial plaque, a thin, sticky layer of microorganisms (called a biofilm) that collects at the gum line in the absence of effective daily oral hygiene. Left for long periods of time, plaque will cause inflammation that can gradually separate the gums from the teeth — forming little spaces that are referred to as “periodontal pockets.” The pockets offer a sheltered environment for the disease-causing (pathogenic) bacteria to reproduce. If the infection remains untreated, it can spread from the gum tissues into the bone that supports the teeth. Should this happen, your teeth may loosen and eventually be lost.
When treating gum disease, it is often best to begin with a non-surgical approach consisting of one or more of the following:
- Scaling and Root Planing. An important goal in the treatment of gum disease is to rid the teeth and gums of pathogenic bacteria and the toxins they produce, which may become incorporated into the root surface of the teeth. This is done with a deep-cleaning procedure called scaling and root planing (or root debridement). Scaling involves removing plaque and hard deposits (calculus or tartar) from the surface of the teeth, both above and below the gum line. Root planing is the smoothing of the tooth-root surfaces, making them more difficult for bacteria to adhere to.
- Antibiotics/Antimicrobials. As gum disease progresses, periodontal pockets and bone loss can result in the formation of tiny, hard-to-reach areas that are difficult to clean with handheld instruments. Sometimes it's best to try to disinfect these relatively inaccessible places with a prescription antimicrobial rinse (usually containing chlorhexidine), or even a topical antibiotic (such as tetracycline or doxycycline) applied directly to the affected areas. These are used only on a short-term basis, because it isn't desirable to suppress beneficial types of oral bacteria.
- Bite Adjustment. If some of your teeth are loose, they may need to be protected from the stresses of biting and chewing — particularly if you have teeth-grinding or clenching habits. For example, it is possible to carefully reshape minute amounts of tooth surface enamel to change the way upper and lower teeth contact each other, thus lessening the force and reducing their mobility. It's also possible to join your teeth together with a small metal or plastic brace so that they can support each other, and/or to provide you with a bite guard to wear when you are most likely to grind or clench your teeth.
- Oral Hygiene. Since dental plaque is the main cause of periodontal disease, it's essential to remove it on a daily basis. That means you will play a large role in keeping your mouth disease-free. You will be instructed in the most effective brushing and flossing techniques, and given recommendations for products that you should use at home. Then you'll be encouraged to keep up the routine daily. Becoming an active participant in your own care is the best way to ensure your periodontal treatment succeeds. And while you're focusing on your oral health, remember that giving up smoking helps not just your mouth, but your whole body.
Often, nonsurgical treatment is enough to control a periodontal infection, restore oral tissues to good health, and tighten loose teeth. At that point, keeping up your oral hygiene routine at home and having regular checkups and cleanings at the dental office will give you the best chance to remain disease-free.
- Types of Utility
- Types of Production
- Levels of Production and Related Activities
- Factors of Production and their Rewards
- Division of Labour and Specialisation
- Classification of Goods and Services Produced in an Economy
- Negative Effects of Production Activities on the Community
- Production refers to the creation of goods and services in order to satisfy human wants.
- It may also refer to increasing the usefulness of goods and services.
- Production involves the activities that enable one to provide goods and services.
- Such activities may include:
- The transformation of raw materials into finished goods
- The goods and services created through production must have usefulness (utility) to the consumer
- Utility refers to the ability of a good or service to satisfy human wants.
- There are four types of utility;
- Form utility
- Place utility
- Time utility
- Possessive utility
Form Utility
- Refers to changing of the form of a commodity e.g. converting raw materials into finished goods. A good example of form utility is the conversion of sugarcane into sugar through processing
Place Utility
- Refers to changing of the location of a commodity in order to bridge the geographical gap between the producer and the consumer.
- Place utility is facilitated by transportation. E.g. transporting bread to school.
Time Utility
- Refers to making commodities available to consumers at the right time. It is created when a good is stored until it is ready for use.
- Time utility is facilitated by storage. E.g. food in the school store during the holiday to be used when the school re-opens
Possessive Utility
- Refers to the transfer of ownership of commodities from one person to another.
- Possessive utility is mostly facilitated through trade. E.g. transferring ownership of bread from the shopkeeper to the student.
- For ownership of goods and services to be transferred, a price must be paid.
Production can be classified into two;
- Direct production
- Indirect production
Direct Production
- Refers to the production of goods and services for one’s own consumption.
- It is also known as subsistence production.
Examples of direct production may include cases where one grows food products for his/her own use or where one makes clothes for his/her own use
Characteristics of Direct Production
- Goods produced are of low quality
- Encourages individualism
- Leads to low standard of living
- Can be very tiring
- It does not encourage invention and innovation
- A lot of time is wasted as one moves from one job to another
- Cheap tools are used in production
- It is mostly done on small scale
- Goods and services are produced for one’s own use
- The rate of production is low
Advantages of Direct Production
- Requires less finances
- Required goods can be produced directly
- Specialisation is not necessary hence the producer can engage in several other activities
Disadvantages of Direct Production
- It cannot satisfy all the needs of the producer
- Poor quality goods are produced
- Improvement of the quality of goods may not be possible
- It discourages creativity and innovation
- Variety of goods cannot be produced
- Results in poor living standards
- The rate of production is quite low
Reasons for the Popularity of Direct Production
- Most producers rely on poor technology
- Most producers have low incomes
- Negative attitude towards commercialization of production activities
- Poor resource endowment
- Lack of international trade
- Lack of market for goods and services produced
- Lack of skills to produce in large scale
Indirect Production
- Refers to the production of goods and services with the aim of selling the excess to acquire what one does not have.
- Indirect production is geared towards satisfying the wants of an individual and those of others.
Characteristics of Indirect Production
- Production is with a view of exchange
- Encourages specialisation
- It results in surplus production of goods and services
- It encourages invention and innovation
- Improves the standard of living of the producer
Advantages of Indirect Production
- It encourages specialization in production
- It improves the skills of the producer since tasks are done repeatedly
- Better quality goods and services can be produced
- The rate/speed of production is high
- It leads to creativity, invention and innovation in production
- Variety of goods can be produced
- It leads to high living standards
- It promotes trade
- It promotes peace and understanding among people through trade
- It promotes division of labour
Indirect production encourages interdependence among countries as they engage in trade.
There are three levels of production;
- Primary level
- Secondary level
- Tertiary level
Primary Level
- This level is also known as the extraction level. It involves the extraction of goods from their natural setting.
- The products of primary level are either used in their original form or are processed further to make them more useful. E.g. water can be consumed in its natural state while wood must be processed into furniture in order to be useful.
- At this level, human beings do not create goods.
- Examples of activities involved in primary production are lumbering, fishing, mining, quarrying and farming.
Secondary Level
- This level involves the transformation of raw materials into finished goods or into a more useful form.
- This level includes manufacturing and construction activities.
- Examples of manufacturing activities include;
- Food processing
- Textile and furniture making
- Examples of construction activities include;
- Building houses
- Construction of roads
- Construction of railway lines
Tertiary Level
- This level of production deals with provision of services. These services may be classified into two;
- Commercial services: these are activities which either constitute trade or assist trade to take place. Commercial services include the following occupations: wholesaling, retailing, banking and insurance
- Direct personal services: these are services which are rendered directly to the consumer(s). Examples of occupations categorised under direct personal services include the following: nursing, teaching, legal practice and pastoral duties
| Level | Nature | Example activities |
| Primary | Extraction | Lumbering |
| Secondary | Manufacturing and construction | Maize milling |
| Tertiary | Provision of services | Commercial services; direct personal services |
- Factors of production are the necessary resources required in the production process.
- Factors of production are categorised into four: land, labour, capital and entrepreneurship.
Land
- Land refers to all the natural resources below, on or above the surface of the earth.
- Land is important in production because it provides space on which production takes place besides providing raw materials to be used in the production process.
- Land earns a reward/remuneration/income in the form of royalties, rent or rates.
- It is a basic factor of production, i.e. without land production cannot take place
- Its supply is fixed .i.e. its size cannot be increased
- It is a natural resource
- It is subject to the law of diminishing returns i.e. its productivity reduces with continuous usage.
- It lacks geographical mobility i.e. it cannot be moved. It is however occupationally mobile i.e. it can be put to several uses
- Its quality is not homogeneous i.e. productivity varies from one piece of land to another
- Its productivity can be increased by increasing the quality and quantity of capital.
Labour
- Refers to the human effort applied in production. This effort can either be physical or mental or a combination of both e.g. a driver uses his hands, legs and brain at once while driving.
- For any human effort to be regarded as labour, it must be aimed at production and it must be paid for.
- The reward for labour is wages, commission or salaries.
Forms of Labour
- Labour may take three forms;
- Skilled labour
- Semi-skilled labour
- Unskilled labour
- Skilled labour refers to people who have acquired the relevant skills for the job.
- Semi-skilled labour refers to people who have acquired a certain level of skills for the job. Unskilled labour refers to people who have no specialised skills for the job
Characteristics of Labour
- It is a basic factor of production
- It cannot be stored
- It cannot be separated from the labourer
- It is saleable
- It is human, with ability to think and capacity to get annoyed
- Labour is mobile i.e. a labourer can move from one place to another or from one profession to another.
Ways of Improving the Efficiency of Labour
- Giving workers relevant tools and equipment
- Paying workers well
- Appropriately training workers
- Improving the working conditions
- Giving workers incentives
- Providing job security
- Giving proper job descriptions to workers
- Giving workers fringe benefits such as housing, free meals etc.
Capital
- Capital refers to all man-made resources used in production of goods and services. However in production, capital refers to those goods that are produced in order to be used in producing other goods and services.
- Capital includes machines, tools and equipment.
- Goods that are used in the production of other goods and services are known as capital goods or producer goods.
- The reward for capital is interest.
Characteristics of Capital
- It is man-made
- It is a basic factor of production
- It is subject to depreciation
- It can be improved by the use of technology
Entrepreneurship
- Refers to the ability to organise other factors of production in appropriate proportions for effective production.
- Entrepreneurship is conducted by an entrepreneur
- The entrepreneur incurs all the costs of production
- The reward for entrepreneurship is profit
Functions of an Entrepreneur
- He controls the business
- He starts the business
- He makes all decisions
- He acquires and pays for all the factors of production
- Bears all the risks and enjoys all the profit
- Incurs the cost of production i.e. he pays for expenses such as water, electricity, stationery and postage
- He owns the whole project
- Division of labour: this is where the production process is divided into stages and each stage is assigned to an individual or a group of persons. E.g. the process of producing bread may be divided into weighing, mixing ingredients, baking, packaging and selling, with each of these areas assigned to certain individuals or groups of individuals
- Specialisation: this is where a person concentrates in the production of what he/she can produce best.
- Specialisation increases efficiency in the production of goods and services.
NOTE: division of labour leads to specialisation.
- Increases output per worker
- Enables workers to engage in areas where they are most talented hence producing high quality goods and services
- Specialisation encourages invention and innovation as workers try to come up with improved methods of production
- Division of labour encourages the use of machines hence improving efficiency
- Specialisation and division of labour enable the worker to enrich his/her skills in a particular area
- Division of labour ensures that tasks are accomplished with speed hence saving time
- High quality goods and services are produced
- Reduces the amount of mental and physical effort used by a worker as he/she gets used to one routine
- It increases the rate/speed of production
- Specialisation leads to monotony of work resulting in boredom
- Specialisation of labour may hinder creativity since it makes people work like machines
- Specialisation makes a worker depend on only one line of trade; therefore, if his/her skills or the goods he/she produces lose demand, the worker becomes unemployed
- Specialisation and division of labour encourages the use of machines. These machines have replaced human labour resulting in unemployment
- Specialisation makes a country dependent on other countries for what it doesn’t produce
- Specialisation and division of labour bring people together, which can result in social problems such as crime
- The worker does not have pride in the final product because it results from the efforts of several people
- Failure of production in one stage affects the entire process of production
- Free goods and economic goods
- Free (non-economic) goods are those goods that are provided by nature e.g. air.
- They are free and available in abundance. Free goods have utility (usefulness) but no money value.
- Economic goods on the other hand are those that are scarce in supply and have monetary value e.g. a car. People must work to obtain them. A price is paid to acquire them.
- Producer goods and consumer goods
- Producer (capital) goods are those goods which are produced to be used in the production of other goods and services e.g. a jembe.
- Consumer goods are those goods that are ready for final usage (consumption) e.g. food, clothes, medicine etc.
NOTE: goods will be classified as consumer or producer goods depending on their intended purpose. For example vehicles used in factories are classified as producer goods whereas vehicles used for domestic purposes are classified as consumer goods.
Features of producer goods
- They are produced to produce other goods
- Some may be durable in nature i.e. used again and again e.g. machines
- Some can only be used once in production e.g. raw materials
Features of consumer goods
- They are produced for final consumption
- They are produced using producer goods
- Some may be durable i.e. they may be used again and again e.g. furniture, personal cars and radios.
- Some can be used once e.g. bread, sugar etc.
- Perishable goods and durable goods
- Perishable goods are those goods that go bad quickly unless stored using special facilities. They include; tomatoes, meat, flowers etc.
- Durable goods on the other hand are those goods that can stay for so long without spoiling e.g. tools, furniture etc.
- Public goods and private goods
- Public goods are those that belong to no one in particular. They are either owned by the government or collectively by the public. Examples of public goods include; airports, public schools, public parks, roads etc.
- On the other hand, private goods are owned by individuals or a group of individuals. The owners have exclusive rights to the usage of these goods.
- Examples of private goods include; personal cars, mobile phones, private schools etc.
- Intermediate goods and finished goods
- Intermediate goods are those goods which are still undergoing the production process e.g. sugar cane, wool, cotton, wheat etc.
- Finished (final) goods on the other hand are those goods that have come out of the production process (outputs). Examples of final goods include; ugali, wheat flour, sugar, clothes etc.
- Material goods and non-material goods
- Material goods are tangible commodities such as food, desks, chairs etc.
- Non-material goods on the other hand are intangible items or services such as teaching, nursing, banking etc.
- Results in air pollution that causes airborne diseases
- Results in water pollution that causes water borne diseases
- May cause congestion in places where production activities take place
- Results in noise pollution that can cause hearing problems
- Leads to pressure on available health facilities
- Results in solid waste pollution
- Results in environmental degradation that may cause health problems
- Results in social evils in regions where production activities take place
|
Certain bacteria have learned to manipulate the proportion of females and males in insect populations. Now Uppsala University researchers have mapped the entire genome of a bacterium that infects a close relative of the fruit fly. The findings, published in PNAS, reveal extremely high frequencies of gene exchange within this group of bacteria. In the future it is hoped that it will be possible to use sex-manipulating bacteria as environmentally friendly pesticides against harmful insects.
Bacteria belonging to the Wolbachia group are adapted to invertebrate animals such as insects, spiders, scorpions, and worms. These bacteria spread via the female's eggs from one generation to the next and manipulate the sex ratios among the infected animals so that more females are produced in the population. Mechanistically speaking, the bacteria convert genetic males into females, kill male embryos that are then eaten by their sisters, or make females lay unfertilized eggs that all become females. However, what happens most commonly is that the males cannot reproduce with non-infected females. This gives the infected females a great advantage, and the infection spreads rapidly among the population.
The studies of the whole genome have shown that these bacteria carry genes that are common among higher organisms, but rare among other bacteria. The scientists believe that the bacteria have stolen these genes from the genome in the host cell and that they now use them to manipulate the sex ratios among the insects.
"With the help of viruses, these bacteria exchange genes with each other, which leads to a rapid dissemination of genes that are thought to be important for sex manipulation," says Lisa Klasson, one of the researchers behind the study.
The researchers have shown that the genomes of these bacteria are evolutionary mosaics, with DNA pieces from many closely related bacteria. The effect is that each gene has its own evolutionary history and that the potential for variation is infinite.
"It's fascinating that bacteria, with only 1,000 genes, can control complicated developmental processes and behaviors in insects," says Siv Andersson.
By mapping how the genes in these bacteria change over time and figuring out the mechanisms behind sex manipulation, scientists will be able to lay a foundation for finding new pesticides for insects, based on nature's own principles. |
A hard disk (or hard drive) is internal hardware which stores and provides access to large amounts of information. Most new computers include an internal hard disk that contains several gigabytes of storage capacity.
Magnetic platters, electronics and mechanics make up a hard disk. The platters are fixed to a spindle. On each side of a platter there is a read/write head. Each platter is divided into tracks, which in turn are divided into sectors. A characteristic of hard disks is that the platters and the mechanics are in an airtight enclosure, and that the read/write heads do not touch the platters as long as the platters are rotating.
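As a rough illustration of how this geometry translates into capacity, the classic cylinder/head/sector (CHS) scheme simply multiplies the counts together with the sector size. The geometry values below are hypothetical, chosen only for the example:

```python
# Hypothetical CHS geometry -- real drives report their own values.
cylinders = 16_383         # concentric track positions
heads = 16                 # one read/write head per platter side
sectors_per_track = 63     # sectors in each track
bytes_per_sector = 512     # classic sector size

capacity = cylinders * heads * sectors_per_track * bytes_per_sector
print(f"Capacity: {capacity / 10**9:.2f} GB")  # ~8.46 GB for these numbers
``` |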
Measles and Immune Amnesia
Measles Transmission
Perhaps the best-known characteristic of measles is its extreme contagiousness. The measles virus (MV), a single-stranded (-)RNA virus that belongs to the genus morbillivirus, has only one natural host: humans. MV is extremely transmissible by aerosol droplets; in a room full of exposed people, 90% who are unvaccinated will develop the disease. To complicate matters, this microbe can linger in the air for up to 2 hours.
Measles Symptoms
Infected individuals show symptoms 10-12 days after exposure, including fever, cough, runny nose, watery eyes, Koplik spots, and rash. Typically, measles infection is self-limiting and requires nothing more than palliative care to treat these symptoms. Immune-mediated clearance of the virus results in recovery and life-long immunity to the disease.
More serious complications such as pneumonia, encephalitis, and even death can occur during acute infection. Statistics show that 2-3 out of every 1000 cases will result in brain damage or death. In 2017 alone, the WHO estimated that measles and its related secondary infections were responsible for approximately 110,000 deaths worldwide. Most of these were in children less than 5 years of age.
Immune Amnesia: How Your Immune System Forgets to Fight
One of the most unique—and most dangerous—features of measles pathogenesis is its ability to reset the immune systems of infected patients. During the acute phase of infection, measles induces immune suppression through a process called immune amnesia. Studies in non-human primates revealed that MV actually replaces the old memory cells of its host with new, MV-specific lymphocytes. As a result, the patient emerges with both a strong MV-specific immunity and an increased vulnerability to all other pathogens.
Many pathogens suppress immune function; the influenza virus damages airway epithelial cells and increases patient susceptibility to pneumonia-causing bacterial species. However, the ability to destroy immunological memory and replace memory lymphocytes is unique to MV. This MV-specific phenomenon raises a number of questions:
- How is immune amnesia accomplished?
- How long does the amnesia last?
- What can be done to correct or prevent the problem?
How Does Measles Virus Cause Immune Amnesia?
MV causes infection by fusing with the plasma membranes of host cells in a receptor-dependent manner. When MV enters the respiratory tract, it infects alveolar macrophages in the lungs first. The primary role of these specialized immune cells is to engulf and destroy foreign substances like dust, bacteria, and viral particles. Alveolar macrophages also possess a membrane glycoprotein called Signaling Lymphocytic Activation Molecule (SLAM) that has been identified as the high affinity cellular receptor for MV. MV uses SLAM to fuse directly with the plasma membrane, bypass destructive phagocytosis, and release its genome and replication machinery directly into the cell cytoplasm. Instead of destroying the measles virus upon contact, hijacked macrophages transport viral copies straight to the closest lymph nodes for dissemination.
Infected macrophages travel to lymph tissue, where the virus comes in contact with the memory cells of the immune system (memory T-cells and B-cells). These lymphocytes are recon strategists. They identify foreign invaders through antigen detection and process these molecular patterns to generate long-lived memory cells for future protection. If a second encounter occurs, memory cells will mount a faster and stronger immune response to that pathogen than during the first encounter.
Memory T-cells and B-cells contain SLAM surface receptors as well. Research has shown that MV binds and infects memory T-cells, memory B-cells, and naive B-cells of the immune system. Once infection is established, the virus spreads through the body by budding from infected cells. Clearance of MV requires the elimination of virally-infected lymphocytes. Immune-mediated destruction of memory T cells and B cells is initiated, and memories of past infections are destroyed along with them.
The number of T cells and B cells significantly decreases during the acute stage of measles infection, but there is a rapid return to normal WBC levels after the virus is cleared from the system. This observation masked what was really going on until researchers were able to evaluate the qualitative composition of recovered lymphocyte populations. We now know that the memory T-cells and B-cells that are produced immediately following infection are dramatically different from those that existed before the measles infection. Not only have pre-existing immune memory cells been erased, but there has been a massive production of new lymphocytes. And these have only one memory. Measles. Thus, the host is left totally immune to MV and significantly vulnerable to all other secondary infections. But for how long?
How Long Does the Amnesia Last?
Michael Mina and colleagues at Emory University in Atlanta, Georgia, developed a statistical model to analyze the duration of measles-induced immune suppression in children. Examination of child mortality rates in the US, UK, and Denmark in the decades before and after the introduction of the measles vaccine revealed that nearly half of all childhood deaths from infectious disease could be related to MV infection when the disease was prevalent. That means infections other than measles resulted in death, due to the MV effect on the immune system.
Furthermore, it was determined that it takes approximately 2-3 years post-measles infection for protective immune memory to be restored. The average duration of measles-induced immune amnesia was 27 months in all 3 countries. Corresponding evidence indicates that it may take up to 5 years for children to develop healthy immune systems even in the absence of the immune suppressing effects of MV infection. If MV infection essentially resets a child’s developing immunity to that of a newborn, re-vaccination or exposure to all previously encountered microbes will be required in order to rebuild proper immune function.
What Can Be Done to Correct or Prevent the Problem?
Fortunately, the measles vaccine is highly effective at protecting against not only MV but also many of the opportunistic pathogens that are eager to take advantage of measles-induced immune amnesia. According to the Centers for Disease Control and Prevention (CDC), the measles, mumps, and rubella (MMR) vaccine is 97% effective at preventing measles after 2 doses, and widespread vaccination has led to a greater than 99% reduction in disease in the United States.
Why Measles Vaccination Matters
Measles continues to be one of the most highly contagious diseases in the world. The Centers for Disease Control and Prevention (CDC) reported 839 individual cases in the U.S. from January 1 to May 10, 2019. Outbreaks of this magnitude have not been seen since the Vaccines for Children (VFC) program was initiated 25 years ago; the subsequent absence of continuous disease transmission for 12+ months led to measles being declared eliminated in the United States in 2000. Lack of vaccination and acquired immunity, as well as international travel, are all contributing factors to the current state of affairs.
The World Health Organization (WHO) calculated a 300% increase in measles cases in the first quarter of 2019 compared to the same period last year. Madagascar, Brazil, India, the Philippines, Ukraine, and Venezuela have all been hit hard, while France, Greece, Israel, and Georgia have each endured smaller outbreaks of the disease.
Altogether, the situation has become a global crisis that is generating a lot of discussion about our collective and individual susceptibilities to measles infection. Fortunately, vaccination not only prevents the spread of measles, but also reduces the impact of immune amnesia and the subsequent secondary infections that are associated with this MV-specific phenomenon.
- ASM Article: ASM Applauds Subcommittee Hearing on Measles Outbreaks in the U.S.
- Stat News: The Tricks that Make Measles so Infectious
- ASM Resource Pages: What You Need to Know About the Measles Outbreak |
Website building concepts – Basic HTML
Basics of HTML
HTML is very easy. Don't think that it is complex or that you can't learn it. You can learn basic HTML within a few hours, and you will be very comfortable with it.
Learning basic HTML is very important for a digital marketing career.
What is HTML?
HTML stands for HyperText Markup Language. It is the language used to describe web pages.
What is a web browser?
A web browser (browser for short) is a tool used to retrieve and display content on the World Wide Web. It reads HTML documents and displays them as web pages.
A Basic HTML Example
Below is a simple example of an HTML document. We will explain each step and how it will display in a web browser.
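The original page showed this example as an image that has not survived; here is a minimal reconstruction matching the steps explained next (the title, heading, and paragraph text are placeholders of our choosing):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Page Title</title>
</head>
<body>
  <h1>My First Heading</h1>
  <p>My first paragraph.</p>
</body>
</html>
```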
- The first line contains the DOCTYPE declaration, which defines the document type.
- The text between <html> and </html> describes an HTML document.
- The head section contains information about the HTML document.
- The title tag defines a title for the document.
- The body tag describes the content of the page, which is visible.
- h1 describes a heading.
- p tag describes a paragraph.
How the page looks in a web browser?
To view this page in a web browser, save the file with a .html extension, then open that file in a browser. The browser displays only the heading and the paragraph; none of the tags are shown.
There are other tags that can be added to this code, like bold, underline, text colors and images.
Heading Tags : There are 6 heading tags, from h1 to h6. h1 defines the most important heading and h6 the least important heading.
Line break : The 'br' element defines a line break. This is an empty tag which doesn't have a closing tag.
Image : Images are defined using the 'img' tag, which is an empty tag. The alt attribute of the img tag defines alternative text for the image in case the image cannot be displayed.
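For instance, a small fragment using these tags might look like the following (the image file name and alt text are placeholders):

```html
<h1>Most important heading</h1>
<h6>Least important heading</h6>
<p>First line.<br>Second line, after a line break.</p>
<img src="logo.png" alt="Company logo">
```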
You can learn about other basic HTML tags/elements on the w3schools site. Click on the link below to learn more about HTML. |
It is a way to automatically detect periodicities.
To give you an idea of what the Periodicity Transform is good at, let's look at a simple example. Here is a data record that I constructed by adding together two periodic sequences, one with period 13 and the other with period 19. For good measure, I added about 25% noise. The result looks like this:
It's not so easy to "see" the two periodic sequences that are buried inside, is it? If we take the Fourier transform (DFT), then we get:
Again, it's pretty difficult to see any pattern here. The 13-periodic and the 19-periodic sequences are still hidden. Now let's apply the periodicity transform called the "M-Best" algorithm, which searches for the M largest periodicities. With M=10, we get:
Now that's more like it! The two periods (at 13 and 19) are clearly visible. The other eight small periods reflect minor accidental patterns that happen to occur in the noise. The Periodicity Transforms are good at finding periodicities in data.
Most standard transforms can be interpreted as projections onto suitable subspaces, and in most cases (such as the Fourier and Wavelet transforms), the subspaces are orthogonal. Such orthogonality implies that the projection onto one subspace is independent of the projection onto others. Thus a projection onto one sinusoidal basis function (in the Fourier Transform) is independent of the projections onto others, and the Fourier decomposition can proceed by projecting onto one subspace, subtracting out the projection, and repeating. Orthogonality guarantees that the order of projection is irrelevant. This is not true for projection onto nonorthogonal subspaces such as the periodic subspaces Sp. Thus the order in which the projections occur affects the decomposition, and the Periodicity Transform does not in general provide a unique representation. Once the succession of the projections is specified, however, then the answer is unique.
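To make this concrete: the least-squares projection onto the p-periodic subspace Sp amounts to a phase-wise average, where every sample is replaced by the mean of all samples sharing its index mod p. Here is a minimal NumPy sketch of that operation (the naming is ours; the authors' MATLAB and Mathematica routines are linked below):

```python
import numpy as np

def project_periodic(x, p):
    """Project x onto S_p, the subspace of p-periodic sequences.

    The least-squares projection replaces every sample with the mean
    of all samples sharing its phase (index mod p).
    """
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for phase in range(p):
        out[phase::p] = x[phase::p].mean()
    return out
```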
The Periodicity Transform searches for the best periodic characterization of the length N signal x. The underlying technique is to project x onto some periodic subspace Sp. This periodicity is then removed from x leaving the residual r stripped of its p-periodicities. Both the projection x and the residual r may contain other periodicities, and so may be decomposed into other q-periodic components by further projection onto Sq. The trick in designing a useful algorithm is to provide a sensible criterion for choosing the order in which the successive p's and q's are chosen. The intended goal of the decomposition, the amount of computational resources available, and the measure of "goodness-of-fit" all influence the algorithm. Our paper discusses four ways to mind our p's and q's.
(1) The "small to large" algorithm assumes a threshold T and calculates the projections onto Sp beginning with p=1 and progressing through p=N/2. Whenever the projection contains at least T percent of the energy in x, then the corresponding projection is chosen as a basis element.
(2) The "M-best" algorithm maintains a list of the M best periodicities and the corresponding basis elements. When a new (sub)periodicity is detected that removes more power from the signal than one currently on the list, the new one replaces the old, and the algorithm iterates.
(3) The "best-correlation" algorithm projects x onto all the periodic basis elements, essentially measuring the correlation between x and the individual periodic basis elements. The p with the largest (in absolute value) correlation is then used for the projection.
(4) The "best-frequency" algorithm determines p by Fourier methods and then projects onto Sp.
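As an illustrative sketch of the first of these, here is a "small to large" loop built on the projection function above. Reading the threshold T as a fraction of the original signal's energy is our assumption; see the paper or the linked routines for the exact definition:

```python
import numpy as np  # uses project_periodic from the sketch above

def small_to_large(x, threshold=0.05):
    """Greedy 'small to large' decomposition (illustrative sketch).

    Walks p = 1 .. N/2; whenever the projection onto S_p holds at
    least `threshold` of the original energy, it is kept as a basis
    element and subtracted from the residual.
    """
    r = np.asarray(x, dtype=float).copy()
    energy_x = float(np.dot(r, r))
    kept = []
    for p in range(1, len(r) // 2 + 1):
        xp = project_periodic(r, p)
        if float(np.dot(xp, xp)) >= threshold * energy_x:
            kept.append((p, xp))
            r -= xp  # strip this periodicity before trying larger p
    return kept, r
```

On a record like the period-13-plus-period-19 example above, a loop of this kind strips out both embedded periodicities and leaves mostly noise in the residual.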
MATLAB and Mathematica routines to calculate all of these variations are available by clicking here.
For more details, see our paper:
W. A. Sethares and T. W. Staley, "Periodicity Transforms", IEEE Transactions on Signal Processing, Nov 1999. You can download a slightly raw pdf version here.
A companion paper explores the application of Periodicity Transforms to the automatic detection of rhythm in musical performance.
|
History and Significance of Diwali, the Festival of Lights
A Significant Celebration of Light, Love, and Joy
By Subhamoy Das

Deepawali, Deepavali, or Diwali is the biggest and the brightest of all Hindu festivals. It is the festival of lights: deep means "light" and avali "a row" to become "a row of lights." Diwali is marked by four days of celebration, which literally illuminates the country with its brilliance and dazzles people with its joy.

The Diwali festival occurs in late October or early November. It falls on the 15th day of the Hindu month of Kartik, so it varies every year. Each of the four days in the festival of Diwali is marked with a different tradition. What remains constant is the celebration of life, its enjoyment, and a sense of goodness.

The Origins of Diwali

Historically, Diwali can be traced back to ancient India. It most likely began as an important harvest festival. However, there are various legends pointing to the origin of Diwali. Some believe it to be the celebration of the marriage of Lakshmi, the goddess of wealth, with Lord Vishnu. Others use it as a celebration of her birthday, as Lakshmi is said to have been born on the new-moon day of Kartik. In Bengal, the festival is dedicated to the worship of Mother Kali, the dark goddess of strength. Lord Ganesha—the elephant-headed god and symbol of auspiciousness and wisdom—is also worshiped in most Hindu homes on this day. In Jainism, Deepawali has the added significance of marking the great event of Lord Mahavira attaining the eternal bliss of nirvana.

Diwali also commemorates the return of Lord Rama (along with Ma Sita and Lakshman) from his 14-year-long exile and vanquishing the demon-king Ravana. In joyous celebration of the return of their king, the people of Ayodhya, the capital of Rama, illuminated the kingdom with earthen diyas (oil lamps) and set off firecrackers.

The Four Days of Diwali

Each day of Diwali has its own tale to tell. The first day of the festival, Naraka Chaturdasi, marks the vanquishing of the demon Naraka by Lord Krishna and his wife Satyabhama. Amavasya, the second day of Deepawali, marks the worship of Lakshmi when she is in her most benevolent mood, fulfilling the wishes of her devotees. Amavasya also tells the story of Lord Vishnu, who, in his dwarf incarnation, vanquished the tyrant Bali and banished him to hell. Bali is allowed to return to earth once a year to light millions of lamps and dispel darkness and ignorance while spreading the radiance of love and wisdom. It is on the third day of Deepawali, Kartika Shudda Padyami, that Bali steps out of hell and rules the earth according to the boon given by Lord Vishnu. The fourth day is referred to as Yama Dvitiya (also called Bhai Dooj), and on this day sisters invite their brothers to their homes.

Dhanteras: The Tradition of Gambling

Some people refer to Diwali as a five-day festival because they include the festival of Dhanteras (dhan meaning "wealth" and teras meaning "13th"). This celebration of wealth and prosperity occurs two days before the festival of lights. The tradition of gambling on Diwali also has a legend behind it. It is believed that on this day, Goddess Parvati played dice with her husband Lord Shiva. She decreed that whosoever gambled on Diwali night would prosper throughout the ensuing year.

The Significance of Lights and Firecrackers

All of the simple rituals of Diwali have a significance and a story behind them. Homes are illuminated with lights, and firecrackers fill the skies as an expression of respect to the heavens for the attainment of health, wealth, knowledge, peace, and prosperity. According to one belief, the sound of firecrackers indicates the joy of the people living on earth, making the gods aware of their plentiful state. Still another possible reason has a more scientific basis: the fumes produced by the firecrackers kill or repel many insects, including mosquitoes, which are plentiful after the rains.

The Spiritual Significance of Diwali

Beyond the lights, gambling, and fun, Diwali is also a time to reflect on life and make changes for the upcoming year. With that, there are a number of customs that revelers hold dear each year.

Give and forgive. It is common practice that people forget and forgive the wrongs done by others during Diwali. There is an air of freedom, festivity, and friendliness everywhere.

Rise and shine. Waking up during the Brahmamuhurta (at 4 a.m., or 1 1/2 hours before sunrise) is a great blessing from the standpoint of health, ethical discipline, efficiency in work, and spiritual advancement. The sages who instituted this Deepawali custom may have hoped that their descendants would realize its benefits and make it a regular habit in their lives.

Unite and unify. Diwali is a unifying event, and it can soften even the hardest of hearts. It is a time when people mingle about in joy and embrace one another. Those with keen inner spiritual ears will clearly hear the voice of the sages, "O children of God unite, and love all." The vibrations produced by the greetings of love, which fill the atmosphere, are powerful. When the heart has considerably hardened, only a continuous celebration of Deepavali can rekindle the urgent need of turning away from the ruinous path of hatred.

Prosper and progress. On this day, Hindu merchants in North India open their new account books and pray for success and prosperity during the coming year. People buy new clothes for the family. Employers, too, purchase new clothes for their employees. Homes are cleaned and decorated by day and illuminated by night with earthen oil lamps. The best and finest illuminations can be seen in Bombay and Amritsar. The famous Golden Temple at Amritsar is lit in the evening with thousands of lamps. This festival instills charity in the hearts of people, who perform good deeds. This includes Govardhan Puja, a celebration by Vaishnavites on the fourth day of Diwali. On this day, they feed the poor on an incredible scale.

Illuminate your inner self. The lights of Diwali also signify a time of inner illumination. Hindus believe that the light of lights is the one that steadily shines in the chamber of the heart. Sitting quietly and fixing the mind on this supreme light illuminates the soul. It is an opportunity to cultivate and enjoy eternal bliss.

From Darkness Unto Light...

In each legend, myth, and story of Deepawali lies the significance of the victory of good over evil. It is with each Deepawali and the lights that illuminate our homes and hearts that this simple truth finds new reason and hope. From darkness unto light—the light empowers us to commit ourselves to good deeds and brings us closer to divinity. During Diwali, lights illuminate every corner of India, and the scent of incense sticks hangs in the air, mingled with the sounds of firecrackers, joy, togetherness, and hope. Diwali is celebrated around the globe. Outside of India, it is more than a Hindu festival; it's a celebration of South-Asian identities. If you are away from the sights and sounds of Diwali, light a diya, sit quietly, shut your eyes, withdraw the senses, concentrate on this supreme light, and illuminate the soul. |
- Overall, homework does appear to result in higher levels of achievement for older students (at the secondary level).
- For these students, more time spent on homework is associated with higher levels of achievement, although there is probably a level beyond which more is counterproductive (perhaps at three hours a day).
- For students aged 11-13, homework appears to be of benefit, but not to the same degree as for older students.
- For these students, spending more than an hour or two on homework does not result in greater benefit.
- There is little evidence of benefit for students younger than 11, although it can be plausibly argued that small amounts of homework can have an indirect benefit for promoting good study habits and attitudes to learning.
The Suggested Benefits of Homework
The most obvious presumed benefit of homework is, of course, that it will improve students' understanding and retention of the material covered. However, partly because this (most measurable) benefit has not been consistently demonstrated, it has also been assumed that homework has less direct benefits:
- improving study skills, especially time management
- teaching students that learning can take place outside the classroom
- involving parents
- promoting responsibility and self-discipline
Probably the most obvious negative effect is the stress homework can produce in both student and parent. Homework can be a major battleground between parent and child, and in such cases, it's hard to argue that it's worth it. There are other potential problems with homework:
- homework demands can limit the time available to spend on other beneficial activities, such as sport and community involvement
- too much homework can lead to students losing interest in the subject, or even in learning
- parents can confuse students by using teaching methods different from those of their teachers
- homework can widen social inequalities
- homework may encourage cheating
Because homework has been a difficult variable to study directly, uncontaminated by other variables, research has produced mixed and inconclusive results. However, it does seem that the weight of the evidence is in favor of homework. According to Cooper's much-cited review of homework studies, there have been 20 studies since 1962 that compared the achievement of students who receive homework with students given no homework. Of these, 14 showed a benefit from doing homework, and six didn't.
The clearest point is the striking influence of age. There seems, from these studies, to be a clear and significant benefit to doing homework for high school students. Students 11 to 13 years of age also showed a clear benefit, but it was much smaller. Students below this age showed no benefit.
In 50 studies, the time students reported spending on homework was correlated with their achievement. Forty-three of the 50 studies showed that students who did more homework achieved more; only seven showed the opposite. The effect was greatest for high school students and, again, essentially absent for elementary school students. For students in the middle age range (11-13 years), more time spent on homework was associated with higher achievement only up to one or two hours; beyond that, more time brought no further improvement.
TIMSS, however, found little correlation between amount of homework and levels of achievement in mathematics. While they did find that, on average, students who reported spending less than an hour a day on homework had lower average science achievement than classmates who reported more out-of-school study time, spending a lot of time studying was not necessarily associated with higher achievement. Students who reported spending between one and three hours a day on out-of-school study had average achievement that was as high as or higher than that of students who reported doing more than three hours a day.
Two British studies found that while homework in secondary schools produced better exam results, the influence was relatively small. Students who spent seven hours a week or more on a subject achieved about a third of an A level grade better than students of the same gender and ability who spent less than two hours a week.
A survey conducted by the United States Bureau of the Census (1984) found that public elementary school students reported spending an average of 4.9 hours and private school elementary students 5.5 hours a week on homework. Public high school students reported doing 6.5 hours and private school students 14.2 hours. Recent research studies by the Brown Center on Education Policy concluded that the majority of U.S. students (83% of nine-year-olds; 66% of thirteen-year-olds; 65% of seventeen-year-olds) spend less than an hour a day on homework, and this has held true for most of the past 50 years. In the last 20 years, homework has increased only in the lower grade levels, where it least matters (and indeed, may be counterproductive).
In America, NEA and the National PTA recommendations are in line with those suggested by Harris Cooper: 10 to 20 minutes per night in the first grade, and an additional 10 minutes per grade level thereafter (giving 2 hours for 12th grade).
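As a rough illustration, the arithmetic of that guideline can be tabulated with a few lines of code. This is a minimal sketch; the class and method names are mine, not from any cited study.

```java
// Tabulates the recommendation quoted above: 10-20 minutes per night in
// first grade, plus an additional 10 minutes per grade level thereafter.
public class HomeworkGuideline {

    // Low end: 10 minutes times the grade level (the familiar "10-minute rule").
    static int lowMinutes(int grade) {
        return 10 * grade;
    }

    // High end: 10 minutes more than the low end.
    static int highMinutes(int grade) {
        return 10 * grade + 10;
    }

    public static void main(String[] args) {
        for (int grade = 1; grade <= 12; grade++) {
            System.out.printf("Grade %2d: %d-%d minutes per night%n",
                    grade, lowMinutes(grade), highMinutes(grade));
        }
        // Grade 12 works out to 120-130 minutes, i.e. the "2 hours" cited above.
    }
}
```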
In Britain, the Government has laid down guidelines, recommending that children as young as five should do up to an hour a week of homework on reading, spelling and numbers, rising to 1.5 hours per week for 8-9 year olds, and 30 minutes a day for 10-11 year olds. The primary motivation for the Government policy on this seems to be a hope that this will reduce the time children spend watching TV, and, presumably, instill good study habits.
TIMSS found that students on average across all the TIMSS 1999 countries spent one hour per day doing science homework, and 2.8 hours a day on all homework (the United States was below this level). On average across all countries, 36% of students reported spending one hour or more per day doing science homework.
There is some evidence that the relationship between time on homework and academic achievement may be curvilinear: pupils doing either very little or a great deal of homework tend to perform less well at school than those doing 'moderate' amounts. Presumably the association between lots of homework and poorer performance occurs because hard work is not the only factor to consider in performance -- ability and strategic skills count for a great deal, and it is likely that many very hard-working students work so long because they lack the skills to work more effectively.
The $64,000 question, then, is what factors distinguish "good", i.e. useful, homework from less productive (and even counterproductive) homework. Unfortunately, research can tell us very little about it.
Cooper did conclude that there is considerable evidence that homework results in better achievement if material is distributed across several assignments rather than concentrated only on material covered in class that day.
There is no clear evidence that parental involvement helps, although it may well be that appropriate involvement can. Unfortunately, parental involvement is often inappropriate.
Can students work effectively with the TV or radio on? A burning question for many parents!
A British study found that watching TV while doing homework was associated with poorer quality of work and more time spent. However, simply listening to the soundtrack did not affect the quality of the work or time spent. It's assumed that it's the constant task-switching caused by looking back and forth between the screen and the work that causes the negative effect. From this, it would also seem that listening to the radio should not be a problem. It's worth noting that we become less able to multi-task as we age, and that parents' objections to their children's study environment probably reflect their awareness that they themselves would find it difficult to concentrate in such circumstances.
You can read the TIMSS report at:
You can read an article on the motivational benefits of homework at:
And there are more articles about homework, with more details of Cooper's review at:
And a British review of homework research is available at:
April 2012: my update to this article. |
Each year, over 700,000 people experience either their first or a recurrent stroke. Stroke is the fifth leading cause of death in the United States. A stroke is better described as an attack of the brain that can happen at any time. A stroke occurs when the blood flow to an area of the brain is obstructed. When this occurs, brain cells begin to die as a result of being deprived of oxygen.
When brain cells are lost during a stroke, muscle control and memory capabilities are affected. Strokes are much more common among older people, because the disorders that lead to strokes progress over time. Over sixty percent of all strokes occur in the senior citizen population.
A stroke may be caused by the bursting of a blood vessel or by the blockage of an artery. Some individuals may experience a temporary disruption of blood flow to the brain, while others may suffer from an extended blood flow disruption. Different areas of the brain are responsible for different functions; therefore, the results of the stroke may vary, depending on the area of the brain that is affected.
A stroke can alter the functions of movement, speech, sensation, sight, balance and coordination. If circulation to the brain is restored promptly, symptoms are more likely to improve within a few days; however, if the blood supply was obstructed for an extended period of time, the damage done to the brain may be more severe. Symptoms may be present for several months and physical rehabilitation may be required.
The degree of impairment and type of disability one may incur following a stroke depend upon which area of the brain is affected by the attack. Paralysis is a frequently occurring disability following a stroke. Paralysis, in many cases, may affect only an arm, leg or the face. In other instances, a complete side of the body may be affected. Oftentimes, individuals may lose the ability to feel pain, touch, position or temperature. Sensory deficits may also inhibit an individual’s ability to recognize common objects.
Twenty-five percent of all stroke victims experience a language impairment that involves the ability to write, speak or understand language. Stroke can also have a negative effect on alertness, memory and learning. Those affected by a stroke may exhibit a significantly shorter attention span and may experience bouts of short-term memory deficit. After a stroke, individuals may feel anger, anxiety or sadness. These feelings are a natural response to the psychological trauma suffered from the stroke.
Generally, stroke can cause the following impairments:
- paralysis or other problems controlling movement
- sensory disturbances, including pain
- problems using or understanding language
- deficits in thinking, attention and memory
- emotional disturbances
A stroke is always a medical emergency and occurs when the brain’s blood supply is interrupted. The deprivation of the blood supply to the brain causes a deficiency of oxygen and other nutrients, resulting in lost brain cells. During a stroke, prompt treatment is crucial. Fortunately, medical alerts provide peace of mind that medical help can be called immediately.
Early actions can minimize damage to the brain, as well as lessen the probability of other complications. If a stroke goes untreated for an extended period of time, the risks of severe brain damage and disability increase drastically.
Common symptoms experienced by those having a stroke are:
- sudden numbness or weakness of the face, arm or leg, especially on one side of the body
- sudden confusion or trouble speaking or understanding speech
- sudden trouble seeing in one or both eyes
- sudden dizziness, loss of balance or trouble walking
- sudden severe headache with no known cause
Many seniors opt not to use medical alerts; however, individuals should seek medical attention immediately at the first signs and symptoms of a stroke. Telephones should be within reach, or bells may be used to alert others in the home if attention is needed. Stroke is often described as a “brain attack,” due to the disruption of the blood supply to the brain.
Stroke survivors often suffer from cognitive and physical disabilities. The severity of the disability is dependent on the damage done to the brain. For this reason, it is essential to seek emergency care without delay when stroke symptoms develop. The sooner a patient reaches a medical facility, the better their chances of survival and recovery.
A stroke can be a devastating experience that affects one’s abilities and independence. Stroke recovery is usually a slow process, and it may take months to years for the brain to heal. During a stroke, individuals are affected differently. Seniors may be unable to get to the phone due to paralysis, become unable to communicate or experience loss of vision. For this reason, medical alert buttons can be the lifesaver needed to summon emergency help when time is of the essence. |
Q: What is vaccine-derived polio?
A: Oral polio vaccine (OPV) contains an attenuated (weakened) vaccine-virus, activating an immune response in the body. When a child is immunized with OPV, the weakened vaccine-virus replicates in the intestine for a limited period, thereby developing immunity by building up antibodies. During this time, the vaccine-virus is also excreted. In areas of inadequate sanitation, this excreted vaccine-virus can spread in the immediate community (and this can offer protection to other children through ‘passive’ immunization), before eventually dying out.
On rare occasions, if a population is seriously under-immunized, an excreted vaccine-virus can continue to circulate for an extended period of time. The longer it is allowed to survive, the more genetic changes it undergoes. In very rare instances, the vaccine-virus can genetically change into a form that can paralyse – this is what is known as a circulating vaccine-derived poliovirus (cVDPV).
It takes a long time for a cVDPV to occur. Generally, the strain will have been allowed to circulate in an un- or under-immunized population for a period of at least 12 months. Circulating VDPVs occur when routine or supplementary immunization activities (SIAs) are poorly conducted and a population is left susceptible to poliovirus, whether from vaccine-derived or wild poliovirus. Hence, the problem is not with the vaccine itself, but low vaccination coverage. If a population is fully immunized, they will be protected against both vaccine-derived and wild polioviruses.
Since 2000, more than 10 billion doses of OPV have been administered to nearly 3 billion children worldwide. As a result, more than 13 million cases of polio have been prevented, and the disease has been reduced by more than 99%. During that time, 24 cVDPV outbreaks occurred in 21 countries, resulting in fewer than 760 VDPV cases.
Until 2015, over 90% of cVDPV cases were due to the type 2 component in OPV. With the transmission of wild poliovirus type 2 already successfully interrupted since 1999, in April 2016 a switch was implemented from trivalent OPV to bivalent OPV in routine immunization programmes. The removal of the type 2 component of OPV is associated with significant public health benefits, including a reduction of the risk of cases of cVDPV2.
The small risk of cVDPVs pales in comparison with the tremendous public health benefits associated with OPV. Every year, hundreds of thousands of cases due to wild poliovirus are prevented. Well over 10 million cases have been averted since large-scale administration of OPV began 20 years ago.
Circulating VDPVs in the past have been rapidly stopped with 2–3 rounds of high-quality immunization campaigns. The solution is the same for all polio outbreaks: immunize every child several times with the oral vaccine to stop polio transmission, regardless of the origin of the virus. |
Levels of Measurement
The level of measurement refers to the relationship among the values that are assigned to the attributes for a variable. What does that mean? Begin with the idea of the variable, in this example “party affiliation.”
That variable has a number of attributes. Let’s assume that in this particular election context the only relevant attributes are “republican”, “democrat”, and “independent”. For purposes of analyzing the results of this variable, we arbitrarily assign the values 1, 2 and 3 to the three attributes. The level of measurement describes the relationship among these three values. In this case, we simply are using the numbers as shorter placeholders for the lengthier text terms. We don’t assume that higher values mean “more” of something and lower numbers signify “less”. We don’t assume that the value of 2 means that democrats are twice something that republicans are. We don’t assume that republicans are in first place or have the highest priority just because they have the value of 1. In this case, we only use the values as a shorter name for the attribute. Here, we would describe the level of measurement as “nominal”.
Why is Level of Measurement Important?
First, knowing the level of measurement helps you decide how to interpret the data from that variable. When you know that a measure is nominal (like the one just described), then you know that the numerical values are just short codes for the longer names. Second, knowing the level of measurement helps you decide what statistical analysis is appropriate on the values that were assigned. If a measure is nominal, then you know that you would never average the data values or do a t-test on the data.
There are typically four levels of measurement that are defined: nominal, ordinal, interval, and ratio.
In nominal measurement the numerical values just “name” the attribute uniquely. No ordering of the cases is implied. For example, jersey numbers in basketball are measures at the nominal level. A player with number 30 is not more of anything than a player with number 15, and is certainly not twice whatever number 15 is.

In ordinal measurement the attributes can be rank-ordered. Here, distances between attributes do not have any meaning. For example, on a survey you might code Educational Attainment as 0=less than high school; 1=some high school; 2=high school degree; 3=some college; 4=college degree; 5=post college. In this measure, higher numbers mean more education. But is the distance from 0 to 1 the same as the distance from 3 to 4? Of course not. The interval between values is not interpretable in an ordinal measure.
In interval measurement the distance between attributes does have meaning. For example, when we measure temperature (in Fahrenheit), the distance from 30 to 40 is the same as the distance from 70 to 80. The interval between values is interpretable. Because of this, it makes sense to compute an average of an interval variable, whereas it doesn’t make sense to do so for ordinal scales. But note that in interval measurement ratios don’t make any sense: 80 degrees is not twice as hot as 40 degrees (although the attribute value is twice as large).
Finally, in ratio measurement there is always an absolute zero that is meaningful. This means that you can construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio variable. In applied social research most “count” variables are ratio, for example, the number of clients in past six months. Why? Because you can have zero clients and because it is meaningful to say that “…we had twice as many clients in the past six months as we did in the previous six months.”
It’s important to recognize that there is a hierarchy implied in the level of measurement idea. At lower levels of measurement, assumptions tend to be less restrictive and data analyses tend to be less sensitive. At each level up the hierarchy, the current level includes all of the qualities of the one below it and adds something new. In general, it is desirable to have a higher level of measurement (e.g., interval or ratio) rather than a lower one (nominal or ordinal). |
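To make the hierarchy concrete, here is a minimal sketch (the class, enum, and method names are mine, not from the text) that encodes the four levels and the statistics each one supports. Declaring the enum constants from lowest to highest lets the ordering of the levels themselves stand in for the hierarchy just described.

```java
// Encodes the four levels of measurement and which operations each permits.
public class MeasurementLevels {

    // Declared lowest to highest: each level supports everything below it.
    enum Level { NOMINAL, ORDINAL, INTERVAL, RATIO }

    // Rank-ordering values is meaningful from the ordinal level up.
    static boolean supportsRanking(Level level) {
        return level.ordinal() >= Level.ORDINAL.ordinal();
    }

    // Averaging assumes interpretable intervals: interval level or higher.
    static boolean supportsMean(Level level) {
        return level.ordinal() >= Level.INTERVAL.ordinal();
    }

    // Ratios ("twice as much") require a meaningful absolute zero.
    static boolean supportsRatios(Level level) {
        return level == Level.RATIO;
    }

    public static void main(String[] args) {
        for (Level level : Level.values()) {
            System.out.printf("%-8s rank=%b mean=%b ratio=%b%n",
                    level, supportsRanking(level), supportsMean(level),
                    supportsRatios(level));
        }
        // Party affiliation coded 1, 2, 3 is nominal: no ranking, mean, or ratios.
        // Fahrenheit temperature is interval: a mean is fine, "twice as hot" is not.
    }
}
```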
What is Cancer?
Cancer is a disease that results from abnormal growth and division of cells that make up the body’s tissues and organs. Under normal circumstances, cells reproduce in an orderly fashion to replace old cells, maintain tissue health and repair injuries.
However, when growth control is lost and cells divide too much and too fast, a cellular mass, or “tumor,” is formed.
If the tumor is confined to a few cell layers and it does not invade surrounding tissues or organs, it is considered benign. By contrast, if the tumor spreads to surrounding tissues or organs, it is considered malignant, or cancerous. In order to grow further, a cancer develops its own blood vessels and this process is called angiogenesis. When it first develops, a malignant tumor may be confined to its original site.
If cancerous cells are not treated, they may break away from the original tumor, travel, and grow within other body parts; this process is known as metastasis.
Cancer Screening is the performance of tests on apparently well people in order to detect a medical condition at an earlier stage.
Click on the below links to find more about the individual cancers.
Esophageal cancer (also called cancer of the esophagus) is a malignant tumor that grows in the lining of the esophagus. The esophagus (the gullet) is the tube that carries food from the mouth down into the stomach using a series of muscular movements.
Types of esophageal cancer
Two types of cancer, squamous cell carcinoma and adenocarcinoma, make up 90 per cent of all esophageal cancers. Esophageal cancer can occur in any section of the esophagus. Most cancers in the top part of the esophagus are squamous cell cancers. They are called this because the cells lining the top part of the esophagus are squamous cells. Squamous means scaly.
Most cancers at the end of the esophagus that joins the stomach are adenocarcinomas. Adenocarcinomas are often found in people who have a condition called Barrett’s esophagus.
IBM announced its researchers have built a device capable of delaying the flow of light on a silicon chip, a requirement to one day allow computers to use optical communications to achieve better performance.
Researchers have long known that using optical rather than electrical signals to transfer data within a computer chip could yield significant performance gains, since light signals can carry more information, faster. Yet "buffering," or temporarily holding data on the chip, is critical to controlling the flow of information, so a means of doing so with light signals is necessary. The work announced today outlines just such a means for buffering optical signals on a chip.
"Today's more powerful microprocessors are capable of performing much more work if we can only find a way to increase the flow of information within a computer," said Dr. T.C. Chen, vice president of Science and Technology for IBM Research. "As more and more data is capable of being processed on a chip, we believe optical communications is the way to eliminate these bottlenecks. As a result, the focus in high-performance computing is shifting from improvements in computation to those in communication within the system."
Long delays can be achieved by passing light through optical fibers. However, the current "delay line" devices for doing so are too large for use on a microchip, where space is precious and expensive. For practical on-chip integration, the area of a delay line should be well below one square millimeter and its construction should be compatible with current chip manufacturing techniques.
IBM scientists were able to meet this size restriction and achieve the necessary level of control of the light signal by passing it through a new form of silicon-based optical delay line built of up to 100 cascaded "micro-ring resonators," built using current silicon complementary metal-oxide-semiconductor (CMOS) fabrication tools. When the optical waveguide is curved to form a ring, light is forced to circle multiple times, delaying its travel. The optical buffer device based on this simple concept can briefly store 10 bits of optical information within an area of 0.03 square millimeters. That's 10 percent of the storage density of a floppy disk, and a great improvement compared to previous results. This advancement could potentially lead to integrating hundreds of these devices on one computer chip, an important step towards on-chip optical communications.
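For intuition, a back-of-the-envelope calculation shows how a delay line "stores" bits in flight: capacity is simply delay times bit rate, and delay grows with the optical path length folded into the rings. All numerical values below are illustrative assumptions, not figures from the announcement.

```java
// Rough estimate of an optical delay-line buffer's capacity.
public class OpticalBufferEstimate {
    static final double C = 3.0e8;   // speed of light in vacuum, m/s

    public static void main(String[] args) {
        double groupIndex = 4.0;     // assumed effective group index in the silicon rings
        double pathLength = 0.075;   // assumed total folded optical path, meters
        double bitRate = 10e9;       // assumed 10 Gb/s signal

        double delaySeconds = groupIndex * pathLength / C;  // time light spends buffered
        double bitsInFlight = delaySeconds * bitRate;       // bits held at any instant

        System.out.printf("delay = %.2f ns, bits in flight = %.1f%n",
                delaySeconds * 1e9, bitsInFlight);
        // With these assumed numbers: delay = 1.00 ns and 10.0 bits in flight,
        // on the order of the 10-bit buffer described in the text.
    }
}
```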
The report on this work, "Ultra-compact optical buffers on a silicon chip," by Fengnian Xia, Lidija Sekaric and Yurii Vlasov of IBM's T.J. Watson Research Center in Yorktown Heights, N.Y., is published December 22 in the premiere issue of the journal Nature Photonics. This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) through the Defense Sciences Office program "Slowing, Storing and Processing Light."
In this chapter we have explored how we can write objects to a file and read them back. Making your class serializable makes it very easy to save your application data in a file. While what we have discussed is by no means exhaustive, you now know enough to deal with straightforward object serialization. The important points in this chapter are:
To make objects of a class serializable the class must implement the Serializable interface.
Objects are written to a file using an ObjectOutputStream object and read from a file using an ObjectInputStream object.
Objects are written to a file by calling the writeObject() method for the ObjectOutputStream object corresponding to the file.
Objects are read from a file by calling the readObject() method for the ObjectInputStream object corresponding to the file.
When necessary, for instance if a superclass is not serializable, you can implement the readObject() and writeObject() methods for your classes.
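As a concrete illustration of these points, here is a minimal, self-contained sketch of writing an object to a file and reading it back. The Person class and the file name person.ser are mine, not from the chapter.

```java
import java.io.*;

public class SerializationDemo {

    // Implementing Serializable is all that a simple class requires.
    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException {
        Person out = new Person("Ada", 36);

        // Write the object by calling writeObject() on an ObjectOutputStream.
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new FileOutputStream("person.ser"))) {
            oos.writeObject(out);
        }

        // Read it back by calling readObject() on an ObjectInputStream;
        // readObject() returns Object, so a cast is needed.
        try (ObjectInputStream ois =
                 new ObjectInputStream(new FileInputStream("person.ser"))) {
            Person in = (Person) ois.readObject();
            System.out.println(in.name + ", " + in.age);
        }
    }
}
```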
A good horse cannot be of a bad color. |
Buckle Up Children!
Our UMA students are encouraged to create their own original ideas inspired by traditional Montessori activities. Heather Sharma from Winchester, VA submitted this simple, home-made sequencing activity for the classroom, with the willing assistance of her helpful son. The subject of fastening a seat belt not only reinforces personal safety, but also independence (fastening your own seatbelt) and responsibility (always buckle up!). And more…
Indirect preparation for language
Sequencing picture cards are designed to provide an indirect preparation for language (writing and reading). Sequencing cards always tell a “story” through pictures, placed in random order on a tray/basket. The story cards first are placed in order of logical progression (“first, next, then, last”) on a table or floor mat. There is always a beginning, middle, and end. Once this is completed, we then “read” the cards orally from left to right in story form. This activity helps broaden the child’s vocabulary, encourages the spoken language through story telling and elaboration, as well as inspires great follow-up conversation!
Indirect preparation for math and more…
Sequencing picture cards also provide an indirect preparation for math. Math concepts require order and sequence; for example, increasing numerical quantities (1-10) and equations (1+2=3). Sequencing activities also help the child develop a sense of time or history, even a simple concept such as yesterday, today, or tomorrow. On a broader basis, this sense of time could be in the form of personal history (from newborn to now) or inventions (from dial-up phone to cell phone), and more! Sequencing cards also aid in understanding science concepts such as life cycles of plants or animals, or geography concepts such as islands being formed by volcanic eruptions, all requiring specific sequencing of events.
There are so many ways to introduce sequencing activities in all areas of learning! Thank you, Heather Sharma, for sharing your sequencing cards with us!
For more sequencing (and patterning): |
When developing learning materials, most instructional designers and trainers rarely give much thought to how they use visuals and graphics. Typically, they just add them as a way to liven up dull looking text.
In contrast, as most graphic designers and artists know well, there is an entire vocabulary and language connected with the use of visuals. This is something rarely included as part of conventional instructional design training. A pity, because it is a language which instructional designers and trainers would get a great deal of benefit from knowing.
If you are interested in learning more about the language of visuals, as good a starting point as any is an understanding of the five instructional functions for graphics. These functional categories are as follows:
Decorative visuals: used to make instruction more appealing and motivating. They typically do not have a strong association with the instructional content. Interestingly, in a study of sixth grade science textbooks in the US, Richard Mayer found that over 85% of graphics fell into the decorative category.
This statistic seems to support the view expressed in the opening of this article - that many instructional designers pay little attention to the significance of visuals and graphics. In the light of this finding, it's probably fair to say that decorative graphics should be used with caution.
Representative visuals: used to make information more concrete. They convey information quickly and easily, reducing the need for lengthy textual explanation.
Organisational visuals: help learners understand the structure, sequence and hierarchy of information and help people integrate that into their existing knowledge. Examples include charts, graphs and displays that help people see relationships between elements.
Interpretive visuals: used to help learners understand difficult and ambiguous or abstract content. In general, they help make information more comprehensible. Examples include models of systems and diagrams of processes.
Transformative visuals: used to make information more memorable. They are intended to aid learners' thought processes. They focus more on helping the learner understand than on presenting content. Transformative visuals can be a little unconventional and because of this are not widely found in learning materials.
In conclusion, we've all heard the phrase "a picture is worth a thousand words". And many people accept this wisdom without question.
In fact, just because something is visually composed doesn't necessarily make it more valid or easier to understand. A poorly designed visual or graphic could just as easily impede learning as facilitate it.
Indeed, a poorly designed graphic where the purpose and instructional function are mismatched might need a thousand words to help explain it clearly to learners. |
In 1915, German mathematician Amalie Emmy Noether deduced that the principles of the conservation of physical quantities such as energy and momentum can be traced to the behavior of the laws describing them under certain continuous symmetry transformations. We tend to think of symmetry in terms of mirror reflections: left-right, top-bottom, front-back. We say something is symmetrical if it looks the same on either side of some center or axis of symmetry. In this case, a symmetry transformation is the act of reflecting an object as though in a mirror. If the object is unchanged, or invariant, following such an act, we say it is symmetrical. Noether's theorem connects each conservation law with a continuous symmetry transformation. She found that the laws governing energy are invariant to continuous changes, or translations, in time. For linear momentum, Noether found the laws to be invariant to continuous translations in space. The laws governing conservation of linear momentum do not depend on any specific location in space. They are the same here, there, and everywhere. For angular momentum, the laws are invariant to rotational symmetry transformations.
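In compact form, the correspondences just described are standard textbook results, stated here for reference (the notation is mine, not this site's):

```latex
\begin{align*}
\text{invariance under time translation } t \to t + \epsilon
  &\;\Longrightarrow\; \text{conservation of energy } E \\
\text{invariance under spatial translation } \vec{x} \to \vec{x} + \vec{\epsilon}
  &\;\Longrightarrow\; \text{conservation of linear momentum } \vec{p} \\
\text{invariance under rotation about an axis}
  &\;\Longrightarrow\; \text{conservation of angular momentum } \vec{L}
\end{align*}
```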
The entire scientific community is committed to Noether symmetry, and of course if a system operates symmetrically about a center axis of rotation, that system will not move from rotational energy.

This is what all our university physics departments teach their students: the conservation of angular momentum states that you cannot move a system from rotational energy.

This is true: if you have symmetry about the center axis of rotation of a system, then the system cannot move.
Let's get to the point of this web site, supersymmetry.com, which explains in layman's terms systems with an eccentric mass load about the center axis of rotation, systems that move from rotational energy.

This also provides experimental physical proof of the existence of the gravitino and a beginning toward explaining supersymmetry.
Supersymmetry is part of the quantum [QUANT MECH] structure of space and time. The discovery of quantum mechanics changed our understanding of almost everything in physics, but our basic way of thinking about space and time will now be changed as well.

Showing the nature of supersymmetry would change that by revealing a quantum dimension of space and time. This quantum dimension would be manifested in the existence of new elementary particles, which would be produced by eccentric mass load rotation about a center axis of rotation.

In the supersymmetric theory the graviton, the quantum of the gravitational field, has a superpartner, the gravitino. The gravitino becomes massive when the supersymmetry is broken. The gravitino's mass characterizes the amount of supersymmetry breaking, together with the addition of a second eccentric load mass system as defined in this web site.
By Albert Einstein's equivalence principle, the effects of gravity are exactly equivalent to the effects of acceleration.

Systems in motion from internal forces, for example to move satellites in space: force equals mass times acceleration. Wheat Ridge, Colorado.

Experimental proof of systems moving from internal rotational forces, powered electrically.
Perhaps you are familiar with the idea that moving objects have what we call kinetic energy; the faster an object moves, the greater its kinetic energy. This means that in pre-relativity physics, a non-moving object has no energy. In relativity, however, we cannot ignore time, and all objects are essentially always moving through time. Moreover, because time is not "the" fourth dimension but just one of the four space-time dimensions, there's no reason to think time should be ignored when we consider an object's energy.

Einstein worked through this idea in a somewhat different way, with his equations of special relativity, and discovered that there is indeed an extra component to energy, beyond the normal kinetic energy, that had not previously been recognized. He found that for a moving object, this extra energy manifests itself as a mass increase, which can be expressed in a simple formula.
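Presumably the "simple formula" alluded to is the standard special-relativity result, quoted here for reference (standard textbook physics, not specific to this site):

```latex
E = \gamma\, m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

For an object at rest (v = 0, so gamma = 1) this reduces to the famous E = mc^2; the growth of gamma with speed is the "mass increase" described above.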
The eccentric load mass, through the process defined in the Sequence of Operation, generates an acceleration force on the said axis of rotation of the system, in a direction that is towards 180 degrees. Eccentric orbit [ASTRON]: an orbit of a celestial body that deviates markedly from a circle. That being stated, let us now apply the eccentric orbital definition, on a macroscopic scale, to the first video in this web site. The prototype in the video demonstrates a main frame that contains the first eccentric orbital mass system, mounted on the main frame and rotating clockwise, and a second eccentric orbital mass system, mounted on the main frame and rotating counter-clockwise directly in front of the first. This sequence of operation is defined in the eccentric load mass drawing, designed to illustrate the mechanical sequence of operation. Also illustrated below is a space-time diagram, with space defined in feet on the horizontal axis and time defined in seconds on the vertical axis, showing the experimental results of the first video: the system's movement on the surface tension of water, in a direction that is constant.

In the mind's eye, first envision a system that contains kinetic energy; for example, let us define this sequence of operation, and yes, the axis of rotation requires power to turn it. Next, envision the system's eccentric load of mass revolving about an axis of rotation. At 180 degrees the eccentric load of mass has accelerated out to the furthest distance from the said axis of rotation. At 0 degrees the eccentric load of mass is falling back towards the axis of rotation. At 90 and 270 degrees the eccentric load mass is almost equal and opposite about the said axis of rotation. The said axis of rotation is forced to its new position in space-time; as a result, the said axis of rotation is moving in space and time. The eccentric load mass accelerating about the said axis of rotation forces it to move, translating in a linear direction that is macroscopically visible to the observer looking at this system.

This web site, supersymmetry.com, has one main function: to let whoever visits observe macroscopic experimental proof of different prototypes moving in a direction that is constant. The videos are experimental proof with which to observe and analyze the translational behavior of the systems in a zero-gravity environment, or at least on a frictionless horizontal plane where gravity has no effect. These systems are moving from the rotational energy of internal force, which is outside the accepted limits of knowledge; we have the physical prototype proof right here! All you have to do is look at the videos of systems moving from rotational internal forces.
Inertia. 1. Physics: The tendency of a body at rest to remain at rest, or of a body in straight-line motion to stay in motion in a straight line, unless acted on by an outside force; the resistance of a body to changes in momentum. 2. Resistance or disinclination to motion, action, or change.
Newton's first law states that every object will remain at rest or in uniform motion in a straight line unless compelled to change its state by the action of an external force. This is normally taken as the definition of inertia.
Thomas L. Navarro's systems are experimental proof of systems moving from internal forces: the motion of two eccentric systems on the surface tension of water; the same system inside the water tank, weighted down to sink almost fully into the water; the 65-pound table-top model; and the electromagnetic system demonstrating the dynamics of eccentric load mass.

For any rotational motion that is also accompanied by a linear progression, we say the system has "chirality". System one, with clockwise rotation of the "eccentric load" of mass, has clockwise chirality. System two, with counter-clockwise rotation of the "eccentric load" of mass, has counter-clockwise chirality. Systems one and two are attached to a main frame; see United States Patent 5,473,957 for details of operation.

The last video on this web site shows the table-top system moving from rotational energy, resulting from the "eccentric load" of the mass, on the level surface of a desk top. At the end of the video the system moves in slow motion so you, the observer, can see the "eccentric load" of mass distribution about the center axis come in close over the center axis (distance "d") and go out from the center axis (distance "B"). The drawings below give the sequence of operation: simple math functions of the "eccentric load" of mass distribution about the center axis of rotation of one counter-clockwise system and one clockwise system.
The drawings above and below are a set of dynamic drawings describing the motion of the "eccentric load" of mass systems. The system's design incorporates an "eccentric load" of mass which is constantly biased in one direction, such that it does not rotate symmetrically around the system's center of mass. The set of drawings given above and below has been derived in an attempt to analyze the translational behavior of the system in a zero-gravity environment, or at least on a frictionless horizontal plane where gravity has no effect. The analysis assumes a configuration which is described in this patent, comprising two parallel systems with their central axes rotating in opposite directions so as to cancel out any lateral translation. Therefore this analysis will only consider translation in the direction of the "eccentric load" of mass, hereafter referred to as the 180 degree area that each "eccentric load" of mass occupies.
The drawing above illustrates the unitary group of transformations of the "eccentric load" of mass for one counter-clockwise complex variable. In a complex plane formed by one real axis and one imaginary axis, we can pinpoint any complex number by the line from the origin to the point and the continuous angle that this line makes with the real axis. There is a deep connection with this phase motion at 180 degrees, at which the angle is "positive" in phase angle.
The top circle, θ = 90 degrees, is the initial starting point, where the "eccentric load" of mass is at rest at 90 degrees, with distance "A" from the center axis of rotation.

The second, θ = 180 degrees, shows the "eccentric load" of mass having accelerated counter-clockwise to the 180 degree area, with the "eccentric load" of mass furthest out from the center axis, as illustrated by distance "B", "positive" in phase angle.

The third, θ = 270 degrees, shows the "eccentric load" of mass having accelerated counter-clockwise to the 270 degree area, with the "eccentric load" of mass a distance "C" from the center axis of rotation.

The fourth, θ = 360 degrees, shows the closest (minimum) approach of the "eccentric load" of mass, accelerating counter-clockwise, to the center axis of rotation, illustrated as distance "d". Please take notice that the "eccentric load" of mass is distance "d" from the center axis, and the phase distance is minimal at this point.

The fifth, θ = 0 degrees, shows the "eccentric load" of mass having accelerated counter-clockwise to be back at 90 degrees, the initial starting point of acceleration, again at distance "A" from the center axis, ready to start the counter-clockwise acceleration of the "eccentric load" of mass all over again. Please take notice that the total phase wave is not symmetrical.
The drawing above illustrates the unitary group of transformations of the "eccentric load" of mass for one clockwise complex variable. In a complex plane formed by one real axis, we can pinpoint any complex number by the line from the origin to the point and the continuous angle that this line makes with the real axis. There is a deep connection with this "eccentric load" of mass motion at 180 degrees, at which the phase angle is very "positive."
The top circle, θ = 270 degrees, is the initial starting point, where the "eccentric load" of mass is at rest at 270 degrees, with distance "A" from the center axis of rotation.

The second, θ = 180 degrees, shows the "eccentric load" of mass having accelerated clockwise to the 180 degree area, with the "eccentric load" of mass furthest from the center axis, as illustrated by distance "B".

The third, θ = 90 degrees, shows the "eccentric load" of mass having accelerated clockwise toward the 90 degree area, with the "eccentric load" of mass a distance "C" from the center axis of rotation.

The fourth, θ = 360 degrees, shows the closest (minimum) approach of the "eccentric load" of mass, accelerating clockwise, to the center axis of rotation, illustrated as distance "d". Please take notice that the "eccentric load" of mass is distance "d" from the center axis, and the phase wave is not symmetrical.

The fifth, θ = 0 degrees, shows the "eccentric load" of mass having accelerated clockwise to be back at 270 degrees, the initial starting point of acceleration, again at distance "A" from the center axis, ready to start the clockwise acceleration of the "eccentric load" of mass all over again. Please take notice that the phase wave is not symmetrical.
In 1915, for linear momentum, Amalie Emmy Noether found the laws to be invariant to continuous translations in space. For angular momentum, the laws are invariant to rotational symmetry transformations.

Amalie Emmy Noether's results are only "valid" and "only work" for symmetrical operations of systems about a center axis of rotation.

This web site, supersymmetry.com, introduces two "eccentric load" mass distribution systems of "eccentric load" of mass (fermions) about a center axis of rotation, one system rotating clockwise and the other rotating counter-clockwise, with both systems contributing to a common center, or point, of acceleration. Each system's "eccentric load" of mass (fermions) peaks with the greatest radius from its respective center axis at one hundred eighty degrees, and in unison each system's "eccentric load" of mass (fermions) revolves closest to its respective center axis at zero degrees. This "eccentric load" mass distribution moves (translates) the entire system in a linear direction. This is a "violation of the conservation of angular momentum." Amalie Emmy Noether's theory is for "symmetrical operations" only.
Macroscopic experimental proofs of control of orbital and spin dynamics in the Higgs field's dynamic functions are represented in the following videos. The first video shows the system moving on the surface tension of water; it is also displayed on YouTube, but enjoy it here first. The second video is the lower view of the armatures revolving inside the stator wall, where you can observe cylinders of steel (eccentric load mass) compress the spring at 180 degrees of the electromagnetic system, due to the increase in "eccentric load" mass from kinetic energy: macroscopically observable effects of the kinetic energy in motion. The next nine videos are the test flight with Zero Gravity Corporation in November of 2011. The next video is the system in a water tank, the buoyancy test. The last video shows the 65-pound system moving on the level surface of a table top.

Supersymmetry (SUSY): an alternative to the Standard Model of particle physics in which the asymmetry between matter particles (fermions) and force particles (bosons) is explained in terms of a broken supersymmetry. Supersymmetry predicts that at least five types of Higgs bosons exist. Supersymmetry resolves many of the problems with the Standard Model, and the evidence for superpartners is a product of "eccentric load" mass acceleration about the center axis of the systems described in this web site, systems that move from the "eccentric load" of mass, the rotational acceleration of the fermions that generates the systems' motion (kinetic energy equals one half the mass times the velocity squared).

Gravitino: the superpartner of the graviton. When supersymmetry is broken the gravitino becomes massive, and the splitting of the gravitino and graviton masses sets the scale of all the superpartner masses. Since the graviton remains massless, the gravitino mass is the basic mass scale of the broken supersymmetric theory.
The Higgs is important not for what it is but for what it does. The Higgs particle arises from a field pervading space, known as the "Higgs field." Everything in the known universe, as it travels through space, moves through the Higgs field; it's always there, lurking invisibly in the background.

Michael Faraday convinced physicists to think of fields as real physical things rather than as calculational devices.

Supersymmetry is the surprising idea, or hypothesis, that at the deepest level, for the ultimate or final theory, the laws of nature don't change if fermions are transformed into bosons and vice versa.

The fermionic or bosonic nature of particles comes from their spin, and spin is related to quantum theory and special relativity, both of which in turn involve space and time in their formulation. The formulation of supersymmetry must also involve space and time, as well as the interchange of bosons and fermions.
M/string theory is as testable as F=ma
If we allow for new "fermionic" dimensions, then it turns out that one more symmetry can exist, and it is supersymmetry.

This possibility makes it easier to think about having a theory with a symmetry under interchange of bosons and fermions, that is, supersymmetry.
Possibilities for the dark matter have been suggested. More dark galaxies reveal themselves.
At times it is helpful to think of supersymmetry as a space-time symmetry, but in an extended space-time called "superspace."
Once superspace was formulated, we immediately thought of using it as the basis of a generalized geometrical theory of gravity, "supergravity." Supergravity incorporates general relativity and extends it. The graviton that mediates gravity is predicted to have a superpartner, the gravitino. Let us now recall the equivalence principle [RELAT]: in general relativity, the principle that the observed local effects of a gravitational field are indistinguishable from those arising from acceleration of the frame of reference. Let me add: straight-line acceleration or rotational acceleration.

In the supersymmetric theory the graviton, the quantum of the gravitational field, has a superpartner, the gravitino. Both are massless in the unbroken theory, but the gravitino becomes massive when the supersymmetry is broken. The gravitino mass then characterizes the size of supersymmetry breaking; all the other superpartner masses are proportional to it.
Particles come in two types: the particles that make up matter, known as "fermions," and the particles that carry forces, known as "bosons." The difference between the two is that fermions take up space, while bosons can pile on top of one another. You can't just take a pile of identical fermions and put them all at the same place; the laws of quantum mechanics won't allow it. That's why collections of fermions make up solid objects like weight benches and planets: the fermions can't be squeezed on top of one another.
Bosons don't take up any space at all. Two bosons, or two trillion bosons, can easily sit at exactly the same location, right on top of one another. That's why bosons are force-carrying particles; they can combine to make a macroscopic force field, like the gravitational field that holds us to the earth or the magnetic field of the earth that deflects deadly cosmic rays from our sun.
Physicists tend to use the words "force," "interaction," and "coupling" in practically interchangeable ways. This reflects one of the deep truths uncovered by modern physics: forces can be thought of as resulting from the exchange of particles. When the moon feels the gravitational pull of the earth, we can think of gravitons passing back and forth between the earth and the moon.

Aside from the Higgs, we know four kinds of forces, each with its own associated boson particles. There's gravity, associated with a particle called the "graviton." We haven't actually observed individual gravitons. However, we can observe the effects of kinetic energy, one half the mass times the velocity squared, with respect to the systems moving from rotational energy in this web site's videos.

The particles associated with electromagnetism are called "photons," which we see and use directly in our everyday life: visible light, radio waves, the signal bars on your cell phone, etc.
There is the strong nuclear force, which holds quarks together inside protons and neutrons; its particles are charmingly named "gluons." The strong nuclear force is very strong, and interacts with quarks but not with electrons. Gluons are massless, just like photons and gravitons.
The weak force comes with three different bosons, the neutral Z and the two charged W's.
The Higgs is fundamentally different from all the other bosons. The Higgs boson is a form of matter named after one of the physicists who first considered the possibility. It forms a field, something like a magnetic field (the Higgs mechanism), composed of photons, or gravitons, or gravitinos, that fills all of space. The idea of the Higgs boson, and the way in which it gives mass to other particles in nature, derives from many sources in other fields of physics as well.

Fields have a value at every point in space, and when space is completely empty those values are typically zero. Fields like the gravitational field sit quietly at zero when space is truly empty. If the gravitational field or the electromagnetic field sits quietly at zero, then space is truly empty. If the gravitational field or electromagnetic field is at some other value, it carries energy, and therefore space is not empty; so it is with the Higgs field occupying the entire universe.

The Higgs field is different: it can be zero or some other value, but it doesn't want to be zero; it sits at some constant number everywhere in the universe.

Empty space is full of the Higgs field, just a constant field sitting quietly in the background. It's that ever-present field at every point in the universe that makes the weak interactions what they are and gives masses to elementary fermions.
What gives particles mass is the Higgs field sitting quietly in the background, providing a medium through which other particles move, affecting their properties along the way.
The scientific community firmly states that gravitons are only produced by gravitational interactions; however, Einstein takes control back again, pointing out the equivalence principle.

Equivalence principle [RELAT]: in general relativity, the principle that the observable local effects of a gravitational field are indistinguishable from those arising from acceleration of the frame of reference. Also known as Einstein's equivalency principle, or the principle of equivalence. First of all, it must be realized that we can actually dispose of the idea of gravity as a "force." Imagine standing in a small room with no windows. You notice that your feet are pressed firmly against the floor. Holding an apple out in front of you, you let go, and the apple falls directly toward the floor with a constant acceleration. Suppose that, unknown to you, this room were actually millions of miles out in space, extremely far from any source of gravitation. Also suppose that, unknown to you, underneath the floor there were a powerful set of rockets with a very large supply of fuel. If the rockets had been turned on ever since you had been mysteriously placed in the room, and if there were no noise or vibration from the rocket engines, they would be producing an acceleration of the entire windowless spaceship that would delude you into thinking that you were at rest in a gravitational field.

The four forces of nature (gravity, electromagnetism, and the strong and weak nuclear forces) are all based on symmetries. The Higgs boson also carries a force, but it's not what gives particles masses (that's the Higgs field in the background), and it's not based on any symmetry. As mass moves relative to us, it acquires additional energy of motion, known in physicists' jargon as kinetic energy. It acquires just a little kinetic energy if it moves slowly, but the kinetic energy becomes greater and greater as the particle moves faster and faster.

For kinetic energy, the accelerations of fermions responsible for the forces are translations (changes of position) and rotations (changes of orientation), but in four-dimensional space-time, not just three-dimensional space.

Gravitons do interact with gravity themselves, because everything interacts with gravity, but for the most part gravity is so weak you wouldn't notice. Things change when you collect a large amount of mass to create a strong gravitational field, or when mass is accelerated: the acceleration of mass increases the mass. Kinetic energy [MECH]: the energy which a body possesses because of its motion; in classical mechanics, equal to one half of the body's mass times the square of the speed.
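In symbols, the textbook definition just quoted reads:

```latex
E_k = \tfrac{1}{2}\, m v^2
```

where m is the body's mass and v its speed.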
The systems of United States Patent Number 5,473,957, displayed on this web site and on youtu.be, are systems rotating and moving in the Higgs field (through space and time) that set up vibrations in the "acceleration fields." The Higgs boson, a vibrating wave in the Higgs field, comes along and sets up vibrations in yet another field, in this case the acceleration field. That is how a Higgs can turn into gravitinos: first it turns into virtual charged massive particles, and these quickly convert into gravitinos that collect on the mass of the rotating system's various parts. Some fermion parts of the system accelerate faster than their opposite counterparts that have revolved in closer to the center axis of acceleration; the parts at the greater radius and velocity generate more kinetic energy due to their increase in velocity. The result is that the center of gravity of the total system moves in the direction of the greatest radius and velocity about the center axis. Furthermore, if parts take one step in the macroscopic fermionic dimensions and then step back again, as various parts in uniform motion move closer to the center axis, you will find that the total system has moved in ordinary space or time by some minimum amount. Thus the motion in the macroscopic fermionic dimensions is tied up, in a complicated way, with the ordinary motion of the total system, through kinetic energy accumulating on the fermion parts due to acceleration; and this kinetic energy can be measured, as one half the mass times the velocity squared.
Please observe on this web site the total systems that contain two subsystems, one rotating clockwise and the other counterclockwise. The total system moves because the "eccentric load" mass distribution of each subsystem is greater in radius from the center axis when the subsystems are aligned at one hundred eighty degrees, and less in radius when they are aligned at zero degrees (see United States Patent Number 5,473,957 for details of the systems' operation). This is a macroscopic space-time picture of systems with chirality "R" (clockwise) and chirality "L" (counterclockwise).
The 1st video lasts less than a minute; the system increases in velocity as it travels toward you, the observer. Click the start button on the next video to observe it now.
The system is moving from rotational energy.
Please be advised that the validity of a weak-force symmetry (in the spirit of Emmy Noether's theorem) would give rise to a conservation law, the law of conservation of parity. Parity is a measure of the "handedness" of a system, and the conservation law of parity is not valid: the laws of physics contain forces and interactions that are not symmetric under parity. This happens for the class of interactions called the "weak interactions," which produce the decay of the pion and, subsequently, the decay of the muon. This is an example of a "broken symmetry" that occurs throughout the weak interactions, which also produce numerous other effects. Parity is violated; parity is not a symmetry.
Supergravity: 'A' Supersymmetry. United States Patent, Patent Number 5,473,957 (Navarro): SYSTEM FOR GENERATING CONTROLLABLE REFERENCE ENVIRONMENT AND STEERABLE TRANSLATIONAL FORCE FROM INTERACTION THEREWITH. Inventor: Thomas L. Navarro.
ABSTRACT: A controlled translational force generating system includes a main frame, a first set of parallel "eccentric mass" subsystems mounted on the main frame and counter-rotatable to generate a set of initial translational forces, and a second set of parallel balance subsystems mounted on the main frame and counter-rotatable to produce a controlled reference environment. The translational forces generated by the parallel "eccentric mass" subsystems, through interaction with the controlled reference environment produced by the parallel balance subsystems, produce a controllable, steerable, straight-line resultant translational force which causes the system to move along a desired directional path. A System of Force Equals Mass Times Acceleration Inc. (F=MA Inc.)
Supergravity [PHYS]: A supersymmetry which is used to unify general relativity and quantum theory; it is formed by adding to the Poincaré group, as a symmetry of space-time, four new generators that behave as spinors and vary as the square root of the translations.
Symmetry [MATH] A geometric object G has this property relative to some configuration S of its points if S determines two pieces of G which can be reflected onto each other through S.
Amalie Emmy Noether's theorem applies to symmetrical operations only: if the laws of motion of a system are invariant under a particular transformation, then there exists a specific physical quantity whose value remains constant in time. In other words, the presence of symmetry implies the existence of constants of motion. The empirical laws of Noether's theorem are correct only for symmetrical operations of mass, space, and time. For example, if a bump exists on the road, a car's speed does not remain constant; it changes when the car hits the bump. The existence of a bump at a specific place along the highway implies that the road is not the same everywhere and that the symmetry of space translation is lost. Therefore, a breakdown of the symmetry implies a failure of conservation of momentum.
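The road-bump example can be checked numerically. Below is a minimal sketch in which the bump shape, particle mass, and starting momentum are all invented illustration values: it integrates Newton's second law for a particle crossing a localized Gaussian "bump" potential, and momentum changes only where the bump breaks the symmetry of space translation.

```python
import math

def bump_force(x, height=1.0, width=0.5, center=5.0):
    """Force from a Gaussian bump potential V(x) = height*exp(-((x-center)/width)**2),
    i.e. F = -dV/dx. Away from the bump the potential is flat and the force is zero."""
    u = (x - center) / width
    return height * (2.0 * u / width) * math.exp(-u * u)

x, p, m, dt = 0.0, 2.0, 1.0, 0.001   # position, momentum, mass, time step
p_min = p
for _ in range(8000):                 # semi-implicit Euler integration
    p += bump_force(x) * dt
    x += (p / m) * dt
    p_min = min(p_min, p)

# Momentum is constant on the flat road, dips while crossing the bump
# (to about sqrt(2) for these values), and recovers once the road is flat again.
print(f"final p = {p:.3f}, minimum p while crossing = {p_min:.3f}")
```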
Please take note of the 1st video: this system could go around and collect space junk using electrical energy. The system in the video operates off a 12-volt, 7.2-amp-hour battery; in space it could operate off solar cells with battery backup. It moves without rocket-fuel propulsion, using rotational energy instead for travel in space.
These first-generation systems, mechanical and electromagnetic, were fabricated to demonstrate the dynamical linear propagation motion of these systems.
The video above shows the electromagnetic system with a strobe on, the armatures demonstrating that the eccentric-load masses always face in the same direction as they revolve within the 360-degree electromagnetic stator wall of the system. Please observe the steel cylinder inside the clear Plexiglas tube and watch it compress the spring in the 180-degree area, furthest from the center axis, as a result of the gain of mass from the kinetic energy.
The Surface Test, the last video, demonstrates the dynamics on a tabletop. These systems move (translate) in a linear direction that is constant. The buoyancy test in water demonstrates the same dynamical results as the Surface Test.
Test Flight, "Zero Gravity Corporation," November 2011: in the experimental results of the Zero Gravity Corporation test flight, the prototype could just barely climb out of plumb in the earth's gravitational field and was no match for the microgravity of this zero-gravity test flight. The system shown in the 1st video on this web site, "system on the surface tension of water," will be the choice for the next zero-gravity test flight. That is my next step in this Higgs field dynamic investigation.
Supersymmetry, Supergravity, and Superunification: this is a new system of a unified field theory using acceleration in place of gravitation. Next are the 9 videos of the Zero Gravity test flight.
Let us now explain that fermions have mass, weight, and charge. This is the counterclockwise half of the system, demonstrating the right-hand thumbs-down rotation spin; the other half of the system, not illustrated, would be the right-hand thumbs-up, clockwise rotation spin. A mathematical proof without variables: A equals the top center axis of this system; B equals the bottom center axis of gravity of this system; C equals the electromagnetic quadrupole stator wall; E, F, G, and H each equal a quadrupole armature reacting to the stator wall; L equals negative energy with respect to the A or B center axis at the long radius from that axis; M demonstrates positive kinetic energy with respect to the A and B axes; N equals loss of positive energy with respect to the A and B center axes; O equals gain of positive energy with respect to the A and B center axes. See United States Patent Number 5,473,957 for details of operation: System For Generating Controllable Reference Environment and Steerable Translational Force From Interaction Therewith. Force Equals Mass Times Acceleration Inc. (F=MA Inc.), Thomas L. Navarro, President. FORCE EQUALS MASS TIMES ACCELERATION, License/Registration No. 12628 (City & County of Denver), Spacecraft Attitude Control and Translation Systems, Inventor and Patent Officer (Patent Number 5,473,957). E-mail address: [email protected]
Personnel as follows: James D. Isaacson, Engineering Consultant, BSME, MSME; Sage Windstar Marolf, Consultant; Kevin Larson, Master in Refrigeration, Master in Lathe and Bridgeport Operations; Robin W. Navarro, Administrative; Zachary J. Navarro, Electrical Consultant; Tiffany J. Navarro, Administrative; John Jason Navarro and Camille Navarro, Flight Consultants. Relevant application theory: In 1907 Einstein published this principle in a lengthy paper in the Jahrbuch der Radioaktivität, and thus was born the famous principle of equivalence, according to which a gravitational field of force is precisely equivalent to acceleration. With the hard work of F=MA Inc. we have validated, with no uncertainty, that this principle best explains the dynamical translational propagation of linear motion of these systems. Picture: four new generators that behave as spinors and vary as the square root of the translations; armatures that can be electric generators and are spinors in a superconducting magnetic field (currently this is a normal electromagnetic field run on direct current). Translate forward an alternative to the unified field theory using "acceleration" instead of "gravitation": Patent Number 5,473,957 describes implementations of superconducting magnetic fields in the quadrupole electromagnetic configuration, and the equivalence principle illustrates that the effects of an acceleration field are equal to the effects of a gravitational field. The electromagnetic system functions very well with normal direct current, and the system is an example of normal conductors; however, improvements can be implemented with superconducting magnetic fields.
Buoyancy Test: The tabletop system was enclosed in a watertight clear plastic box. The system was completely self-contained, and there were no links between the internal and external environments. The weight of the system was 65 pounds; however, 270 pounds of weight was added to submerge the system approximately 80% in the water tank. The system moves very slowly due to the total-system-to-eccentric-load mass ratio of an incredible 335 to 1. This system translated in a linear direction that was constant: it would hit the side of the tank, bounce off the inner wall, move with that bounce momentum toward the center of the tank, and then the "eccentric load" mass distribution would build up momentum and again travel linearly until it again hit the side of the inner tank.
Related physics: Gravity is due to a change in the curvature of space-time produced by the presence of matter. Matter accelerating creates 'pseudo gravity,' or a change in the curvature of space-time. In the vacuum of space, if the means of mass acceleration stops, the mass will then be riding on a soliton wave of gravitons and will maintain a constant velocity unless disturbed by a gravitational field. To observe quantum activity, you must disturb it.
The electromagnetic configuration in the patent is a macroscopic conjecture of supergravity. If operating in a superconducting environment, the four armatures can be four new generators that "behave" as spinors and vary as the square root of the translations. The concept of an electric motor that generates translational force provides us with the means to propagate systems over vast distances in space: electric power can be generated with no need to refuel rocket systems, explaining the benefits of "Supergravity, a Supersymmetry." Linear translation of this system can be created by the "eccentric load" mass. Power can be turned on to an electromagnet, creating a flow of photons that generates the magnetic field; this is the system's built-in protection from cosmic radiation, with no need for other cosmic-radiation shielding devices. Shut the electric power off and the magnetic force field dissipates; however, the electric power will always be left on.
The Higgs field is always present in the vacuum of space; again, you must disturb it with the acceleration of fermions, whether in the vacuum of space or in any gravitational field. The Higgs field occupies all space and time, and it is the same everywhere.
Einstein devoted the last 30 years of his life to the search for a "unified field theory," which would unite space-time and gravitation. Replace "gravitation" with "acceleration" in the unified field theory. [Thomas L. Navarro] This is a theory which attempts to express, within a single unified framework, superconducting magnetic fields and "eccentric load" mass systems, one clockwise and the other counterclockwise, accelerating about an axis of rotation, together with electromagnetism. The attempt differs from Einstein's general theory of relativity by placing a theory of acceleration in place of gravitation. These new theories from F=MA Inc. implement the acceleration field from "eccentric load" mass systems rotating about a center axis, unified with new technologies in electromagnetism and superconducting magnetic fields.
There is no way to make particles behave the way they do, and simultaneously to have mass, without something like a "Higgs mechanism."
Number 1 drawing: Our observer is standing in the windowless room; rockets accelerate him upward, and the motor below his center rotates his room in the direction of the arrows. An apple hangs on a string, pulled by centrifugal force toward the outside wall of the windowless room. This experiment is occurring in deep space, far removed from any gravitational fields.
Number 2 drawing: Our observer brings one of his arms down, changing his center of gravity; the result of this action appears in the next drawing.
Number 3 drawing: Our observer is stuck on the wall by the centrifugal force created by the motor rotating his windowless room. Note that the system's motion is a spiral in the vertical direction as his body's mass rotates along the outside wall of the room.
Number 4 drawing: The motor below our observer has shut down; now the apple on the string hangs down perfectly plumb, as if he were in a gravitational field. Our observer is no longer stuck to the wall and can walk around the room as if he were in a gravitational field, and the round objects he was holding now rest on the floor as they would in a gravitational field.
Number 5 drawing: Our observer's rockets have shut off. He floats in space; there is no acceleration to create a pseudo-gravitational field for him. His apple floats, still attached to the string, and the round objects float in space with our observer. Acceleration effects mimic gravitational effects.
The F=MA Inc. Impulse Force Generation Translation System
In deep space, far removed from any source of a gravitational field, a spacecraft that has been accelerated by rocket continues on at a constant velocity. After the rocket engine is shut off, it is simply riding on a soliton wave of gravitinos in the Higgs field that has joined the mass of the spacecraft. Propagated by the gravitinos in the Higgs field, the spacecraft is on its merry way through the cosmos of the Higgs field.
The law of conservation of angular momentum holds strictly for symmetrical operations only.
This video shows the tabletop system; please push the start button, and at the end of the video you can see in slow motion the "eccentric load" mass distribution of the clockwise and counterclockwise systems.
BY DEFINITION, THE PRINCIPLE OF SCIENCE IS THE TEST OF ALL KNOWLEDGE, BY EXPERIMENT. THE ONLY JUDGE OF SCIENTIFIC TRUTH IS EXPERIMENTAL PROOF. This web site is experimental proof of systems translating, moving in a linear direction, due to the rotational energy of eccentric load masses in the Higgs field. EXPERIMENTAL RESULTS MAY PRODUCE NEW ACCOMPLISHMENTS, NEW THEORIES, AND NEW PHYSICAL PROOF THAT CAN IMPLEMENT NEW SCIENTIFIC LINEAR-MOTION DYNAMICS in the scalar fields, now known as the Higgs field. F=MA Inc. acronym definition: also known as Force Equals Mass Times Acceleration Inc.
What is Earwax?
According to the American Hearing Research Foundation, earwax is a "product of the ear … made by wax glands in the external ear canal." It is a waxy substance that can vary in color (from yellowish to brown or even grey), amount, and consistency (soft, viscous, dry, wet, etc.). Earwax, also known as cerumen, functions as a protector and lubricator of the ear canal. It is also believed to have antifungal and antibacterial properties. On the downside, excess earwax can retain bacteria and lead to infection, which may cause pain and/or itching. That is when earwax removal becomes necessary.
Where does Earwax Come From?
Earwax production has a genetic basis. In fact, there are two inherited types of earwax: wet and dry. People with Asian and Native American ancestors tend to have the dry type (grey and flaky), while Caucasians and African Americans tend to have the wet type (moist, yellowish to brown). The lipid (fat) content differs between the two types: dry wax is about 20% lipid, while wet wax is nearly 50% lipid (Burkhart et al., 2000).
Wet Earwax vs. Dry Earwax
The difference between dry and wet earwax has a genetic basis. Earwax type is a perfect example of how a trait can be determined by a difference in a single base (nucleotide). Dr. Koh-ichiro Yoshiura, from the Department of Human Genetics, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan, reports, in a study published in the prestigious journal Nature Genetics, that a single nucleotide polymorphism (SNP) is responsible for the difference between dry and wet wax.
According to this study, the ABCC11 gene determines earwax type (wet or dry). Nucleotide (base) 538 can be either adenine (A) or guanine (G). The genotype AA leads to the dry-earwax phenotype, while the combinations GA and GG lead to the wet type. The dry-earwax allele is recessive, meaning both parents must pass a copy to their children for the phenotype to manifest, while the wet-earwax allele is dominant: a single copy is enough for a child to inherit the wet type of earwax.
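Because the trait hinges on a single position, the genotype-to-phenotype rule can be written as a tiny lookup. A minimal sketch, using the position-538 genotypes described above (the function name is our own, invented for illustration):

```python
def earwax_phenotype(genotype):
    """Map an ABCC11 position-538 genotype to earwax type.
    A is recessive (dry), so only AA gives dry wax; any G gives the dominant wet type."""
    return "dry" if genotype.upper() == "AA" else "wet"

for g in ("AA", "GA", "GG"):
    print(g, "->", earwax_phenotype(g))  # AA -> dry, GA -> wet, GG -> wet
```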
Burkhart, C. N., et al. (2000). In pursuit of ceruminolytic agents: a study of earwax composition. American Journal of Otology, 21(2), 157-160.
The dance forms which have their origin in Latin American countries (Central and South America) are known as Latin dances. These dances are categorized into social and ballroom dances. The adjectives which best describe the different Latin dance types are expressive, passionate, suggestive and romantic. The different types of Latin dances are Salsa, Samba, Rumba, Cha-cha-cha, Merengue, Danza, Mambo, Bolero, Cumbia, Bachata and Tumba. The list presented below should help in understanding more about them.
Latin Dance Names
The following dance forms represent most of the Latin dance steps. The origin of these dance forms and their evolution can be understood by means of the information presented below.
One of the popular Latin dance types, the samba is classified into two forms: traditional samba and modern ballroom samba. The samba originated in Brazil at the beginning of the 20th century and has its roots in traditional dances of Afro-Brazilian origin. In this partner dance, the moves do not change with the music, which is in 4/4 or 2/4 time. The basic steps are counted in two ways, '1-a-2' or '1-2', and are performed with a slight dropping action.
Invented by Pérez Prado, the mambo was a popular dance form in Cuba, New York, and Mexico City. The original mambo of Cuba was based on the idea that body movements and the sound of the music should converge; in this form of dance, it is important to feel the music. This concept didn't sit well with dance teachers based in the USA, so the mambo was 'standardized' to make it suitable for ballrooms.
The salsa developed from the confluence of European and African cultures. It is a partner dance which first reached Puerto Rico and then spread to the Caribbean islands. The word salsa means 'sauce' in Spanish; its connotation in American Spanish is a 'mixture of ingredients.' One has to follow a four-beat measure in salsa, with three weight changes in those four beats.
This Latin American dance has its origin in Cuba. Enrique Jorrín, a Cuban violinist and composer, introduced cha-cha-cha music, and the dance is named after this form of music; the shuffling of dancers' feet and the rhythm of the güiro led to the name cha-cha-cha. Today's style of cha-cha-cha dancing was developed by Monsieur Pierre, a dance teacher from London, who traveled to Cuba and studied the dance. He noted that the cha-cha-cha starts on the second beat and has a split fourth beat. When he came back home, a new dance called ballroom cha-cha-cha was created.
Merengue is a Latin American partner dance performed to a two-beat rhythm. The leader holds the waist of the follower with his right hand and uses his left hand to hold the follower's right hand. Knees are bent alternately to the left and right, which facilitates the sideways movement of the hips. The partners circle each other and also walk sideways in small steps. Merengue is the official dance of the Dominican Republic.
The Latin dance styles and steps described in this article should be useful in learning them. The different Latin dance types take us close to the cultures of their countries of origin.
The Scientific Method
Scientists are curious people who ask a lot of questions about the world around them and then find the answers. Ecologists are scientists who ask questions about how plants and animals interact with their environment, and they find meaningful patterns to help answer these questions. The scientific method is one way scientists go from being curious and asking a question to finding an answer.
There are 5 steps to the scientific method:
- Make an educated guess or prediction: Hypothesis
- Take a look: Observations
- Write it down: Data
- Make it a picture: Graphs
- Decide what it means: Conclusions
Let's look at an example. Start with the question,
"What falls faster, a bowling ball or a feather?" |
This project works with young people aged 14+.
The Salaam Shalom’s Resistance project’s approach is to focus on the actions of bystanders and victims, encouraging them to become ‘Resistors’, rather than focusing on bullies. Research has found that bullying will stop in less than 10 seconds, 60% of the time, when peers intervene. Yet bystanders only intervene in 10-25% of bullying incidents.
‘This workshop actually made me want to stand up and do something.’ ‘Really enjoyed the session, feeling genuinely challenged after that! Thanks’
Aims of the Resistance Education Project:
- To draw parallels between historical and contemporary forms of discrimination and their impact.
- To help young people assess situations of conflict and progress from being bystanders to Resistors.
- To increase young people's confidence in preventing and challenging discrimination.

We want to help equip young people with the skills and confidence to become active Resistors of prejudice and discrimination. Our workshops provide students with interactive media, hard-hitting resources, and the use of creative arts to analyse processes from bullying to genocide. The project is aimed at young people aged 16 to 18, to help them understand their role and responsibility in preventing prejudice and discrimination in their lives and the lives of the people around them.
- Young people will learn to identify key behaviours of ‘victims’, ‘perpetrators’, ‘bystanders’ and ‘resistors’ in any conflict situation.
- Young people will become more aware of the wider impact that their attitudes and actions towards prejudice and discrimination can have on society; through analysing historical and contemporary examples.
- Young people will have identified the most common forms of bullying and discrimination amongst their peers and they will be more motivated to take action to prevent them.
We want our students to feel they can sustain their learning beyond the classroom. We want teachers to be fully equipped with the necessary resources to support this journey and to help address further issues around equality and diversity. This is why Salaam Shalom has produced a Resistance learning resource pack in consultation with a number of education professionals. The pack is provided to teaching staff following delivery of the Resistance session.
Email to enquire: [email protected] |
Antibiotics have been used since the 1940s and have greatly reduced illness and death from infectious diseases. However, the use of antibiotics for infection control requires careful consideration, education, and appropriate administration.
Here’s what you need to know about antibiotic stewardship and the appropriate use of antibiotics to manage infections.
What is antibiotic stewardship?
According to the Centers for Disease Control and Prevention (CDC), antibiotic stewardship is the effort to measure and improve how antibiotics are prescribed by clinicians and used by patients. It's among the current hot topics we're hearing about from experts, colleagues, and even politicians.
What is antibiotic resistance?
Antibiotic resistance is a serious public health concern that affects patient care, safety, and healthcare costs; it's driven by the inappropriate use of antibiotics in humans, animals, and agriculture. Antibiotic resistance needs to be addressed in all medical settings, including urgent care clinics.
Why does this matter to you?
Drug resistance occurs when microbes survive and grow in the presence of a drug that normally kills or inhibits their growth, which means that the current antibiotics are not as effective and will not work as well, not just for the individual patient, but for all patients. Antimicrobial resistance is a growing health issue because more resistant microbes are being detected. This means that previously simple-to-treat infections may become untreatable.
What do healthcare professionals, patients, and their families need to know about antibiotic prescribing and use?
Antibiotics have transformed our ability to treat infections; however, they do not work against all infections, and they do not work as well as they once did against some infections. The CDC urges healthcare professionals, patients, and families to learn more about the prescribing of antibiotics and their use.
The CDC provides these seven facts you should know to Be Antibiotics Aware:
- Antibiotics save lives. When a patient needs antibiotics, the benefits outweigh the risks of side effects or antibiotic resistance.
- Antibiotics aren’t always the answer. Everyone can help improve antibiotic prescribing or use.
- Antibiotics do not work on viruses, such as those that cause colds and flu, or on runny noses, even if the mucus is thick, yellow, or green.
- Antibiotics are needed only for treating certain infections caused by bacteria. Even then, antibiotics won't help some common bacterial infections, including most cases of bronchitis, many sinus infections, and some ear infections.
- An antibiotic will not make you feel better if you have a virus. Respiratory viruses usually go away in a week or two without treatment.
- Ask your healthcare professional about the best way to feel better while your body fights off the virus.
- Taking antibiotics creates resistant bacteria. Antibiotic resistance occurs when bacteria develop the ability to defeat the drugs designed to kill them.
- If you need antibiotics, take them exactly as prescribed. Talk with your doctor if you have any questions about your antibiotics, or if you develop any side effects, especially diarrhea, since that could be a C. difficile (C. diff) infection, which needs to be treated right away.
How do patients know when antibiotics are and aren’t needed for common infections?
Common infections, whether caused by bacteria or viruses, are often painful and can get in the way of our well-being and everyday lives. Your healthcare professional is the best resource to advise whether or not a specific condition needs an antibiotic.
Bronchitis: Antibiotics are not indicated to treat acute bronchitis (chest colds), which is rarely caused by bacteria. They may only be indicated by your healthcare professional when appropriate for chronic bronchial conditions.
Common cold and runny nose: Antibiotics cannot cure a cold, but your healthcare professional may prescribe other appropriate medicine to treat your condition. More than 200 viruses can cause the common cold, and antibiotics do not work against these viruses.
Ear infection: Antibiotics can help some ear infections, but only your healthcare provider can tell you when it’s appropriate to treat your condition with an antibiotic.
Influenza (flu): Antiviral drugs, not antibiotics, are used to fight the flu viruses in your body.
Sinus infection (sinusitis): Sometimes antibiotics may be needed if your sinus infection is bacterial. Once your healthcare professional evaluates you, they can determine the best course of action.
Sore throat: A sore throat almost always gets better on its own without antibiotics.
Urinary tract infection (UTI): Bacteria are often the cause of bladder, kidney, and other UTIs, and antibiotic treatment is usually helpful in treating a bacterial infection. Your healthcare professional will be able to determine if you have a UTI and whether antibiotic treatment is appropriate.
While antibiotics cannot treat infections caused by viruses, there are things you can do to get symptom relief; ask your healthcare professional for recommendations.
What does this mean for patients visiting urgent care clinics?
Urgent care clinicians and facilities see an estimated 160 million patient visits each year. Compared to other specialties, urgent care providers see a significant percentage of patients with acute, infectious disease-related symptoms. This creates both a need for appropriate antibiotic prescribing and a greater opportunity for antibiotic stewardship.
What can I expect when I visit FastMed Urgent Care?
FastMed Urgent Care is committed to providing high-quality healthcare to its patients and has been awarded The Joint Commission's Gold Seal of Approval® for accreditation by demonstrating compliance with the Joint Commission's standards for healthcare quality and safety in ambulatory healthcare.
Gerjuoy, Edward Department of Physics, University of Pittsburgh, Pittsburgh, Pennsylvania.
The modern theory of matter holding that elementary particles (such as electrons, protons, and neutrons) have wavelike properties. By 1915, experiments on the diffraction (bending) of x-rays into special directions by crystals had established that x-rays were electromagnetic waves, akin to visible and infrared light but of much shorter wavelength. However, in 1923 A. H. Compton showed that observations on x-rays scattered by, for example, a graphite target could be quantitatively predicted via the hypothesis that the scattering from each individual target atom resulted from elastic (billiard-ball-like) collisions between the comparatively slowly moving atomic electrons and what were in effect particles (now commonly called photons) in the incident x-radiation. Each incident photon has an energy equal to the product of Planck's constant (6.6 × 10^-34 joule second) and the speed of light divided by the wavelength, and a momentum equal to Planck's constant divided by the wavelength. Thus, some experiments with electromagnetic radiation seemingly can be understood only by visualizing the radiation as waves, while other experiments on the same radiation seemingly require that the radiation be visualized as a stream of particles. This quite unintuitive wave-particle duality manifested by electromagnetic radiation is all the more remarkable in that the particlelike properties associated with the radiation, namely the energy and momentum of its photons, are given in terms of its wavelength, a concept that seemingly has no meaning except in a wave context. See also: Compton effect; Photon; X-ray diffraction
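The particlelike quantities just described follow directly from the wavelength. A quick sketch of the two formulas, energy E = hc/λ and momentum p = h/λ (the 0.071-nanometer wavelength below is an illustrative x-ray value, not one taken from Compton's experiments):

```python
H = 6.626e-34   # Planck's constant, joule seconds
C = 2.998e8     # speed of light, meters per second

wavelength = 7.1e-11            # a typical x-ray wavelength, meters
energy = H * C / wavelength     # photon energy, joules
momentum = H / wavelength       # photon momentum, kg m/s

print(f"energy   = {energy:.3e} J (~{energy / 1.602e-19 / 1000:.1f} keV)")
print(f"momentum = {momentum:.3e} kg m/s")
```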
WHAT TO DO BEFORE A NUCLEAR BLAST
To prepare for a nuclear blast, you should do the following:
- Find out from officials if any public buildings in your community have been designated as fallout shelters. If none have been designated, make your own list of potential shelters near your home, workplace, and school. These places would include basements or the windowless center area of middle floors in high-rise buildings, as well as subways and tunnels.
- If you live in an apartment building or high-rise, talk to the manager about the safest place in the building for sheltering and about providing for building occupants until it is safe to go out.
- During periods of increased threat, increase your disaster supplies to be adequate for up to two weeks.
Taking shelter during a nuclear blast is absolutely necessary. There are two kinds of shelters – blast and fallout. The following describes the two kinds of shelters:
- Blast shelters are specifically constructed to offer some protection against blast pressure, initial radiation, heat, and fire. But even a blast shelter cannot withstand a direct hit from a nuclear blast.
- Fallout shelters do not need to be specially constructed for protecting against fallout. They can be any protected space, provided that the walls and roof are thick and dense enough to absorb the radiation given off by fallout particles.
WHAT TO DO DURING A NUCLEAR BLAST
If an attack warning is issued:
- Take cover as quickly as you can, below ground if possible, and stay there until instructed to do otherwise.
- Listen for official information and follow instructions.
If you are caught outside and unable to get inside immediately:
- Do not look at the flash or fireball – it can blind you.
- Take cover behind anything that might offer protection.
- Lie flat on the ground and cover your head. If the nuclear blast is some distance away, it could take 30 seconds or more for the blast wave to hit.
- Take shelter as soon as you can, even if you are many miles from ground zero where the attack occurred – radioactive fallout can be carried by the winds for hundreds of miles. Remember the three protective factors: Distance, shielding, and time.
The three factors for protecting oneself from radiation and fallout are distance, shielding, and time:
- Distance – the more distance between you and the fallout particles, the better. An underground area, such as a home or office building basement, offers more protection than the first floor of a building. A floor near the middle of a high-rise may be better, depending on whether significant fallout particles would collect on nearby surfaces at that level. Flat roofs collect fallout particles, so the top floor is not a good choice, nor is a floor adjacent to a neighboring flat roof.
- Shielding – the heavier and denser the materials – thick walls, concrete, bricks, books and earth – between you and the fallout particles, the better.
- Time – fallout radiation loses its intensity fairly rapidly. In time, you will be able to leave the fallout shelter. Radioactive fallout poses the greatest threat to people during the first two weeks, by which time it has declined to about 1 percent of its initial radiation level.
Remember that any protection, however temporary, is better than none at all, and the more shielding, distance, and time you can take advantage of, the better.
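The "time" factor above can be made concrete with the decay rule of thumb often quoted in civil-defense material, the t^-1.2 approximation. A minimal sketch assuming that rule; the outputs are rough estimates and will not match the 1-percent figure above exactly:

```python
def relative_dose_rate(hours_after_blast, reference_hour=1.0):
    """Approximate fallout decay: dose rate falls as t**-1.2 (a common
    civil-defense rule of thumb), relative to the rate at the reference time."""
    return (hours_after_blast / reference_hour) ** -1.2

for t in (7, 49, 24 * 14):  # 7 hours, about 2 days, 2 weeks
    print(f"{t:4d} h: {relative_dose_rate(t) * 100:.2f}% of the 1-hour rate")
```

The takeaway matches the text: most of the danger passes quickly, and every additional day spent sheltered means a much lower dose rate outside.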
WHAT TO DO AFTER A NUCLEAR BLAST
Decay rates of the radioactive fallout are the same for any size nuclear device. However, the amount of fallout will vary based on the size of the device and its proximity to the ground. Therefore, it might be necessary for those in the areas with highest radiation levels to shelter for up to a month.
The heaviest fallout would be limited to the area at or downwind from the nuclear blast, and 80 percent of the fallout would occur during the first 24 hours.
People in most of the areas that would be affected could be allowed to come out of shelter within a few days and, if necessary, evacuate to unaffected areas.
Remember the following when returning home:
- Keep listening to the radio and television for news about what to do, where to go, and places to avoid.
- Stay away from damaged areas. Stay away from areas marked “radiation hazard” or “HAZMAT.” Remember that radiation cannot be seen, smelled, or otherwise detected by human senses.
Victimization can be defined as the act or process of someone being injured or damaged by another person. The resulting damage may be physical (e.g., bruises, broken bones) or psychological (e.g., posttraumatic stress disorder [PTSD], depression). Victimization is a frequent event that occurs within an interpersonal context, often involving an abuse of power, such as a parent who abuses a child; an adult child who abuses a frail, elderly parent; or a teacher who sexually abuses a student. Although past research on victimization has tended to be compartmentalized, a more integrative approach is needed not only because of the frequent comorbidity among the different types of victimization, but also because of the shared psychological issues. The shared core psychological issues extending across types of victimization include damage to interpersonal relationships and self. Although victimization may often involve traumatic experiences, trauma may not involve victimization. For example, stepping off a curb and falling and breaking an ankle might be a traumatic event; however, such an event does not define an experience of victimization because it is not an interpersonal event.
To understand victimization, several core themes need to be acknowledged. Contrary to a layperson’s perspective, victimization is not a rare event that occurs only in a stranger-on-stranger context. On the contrary, victimization is an extraordinarily frequent event that most often occurs in, and adheres to, the ordinary roles of human life. Although stereotyped conceptions of victimization do occur (e.g., a woman raped by a stranger walking down a street at night) and are damaging and need to be addressed, these types of victimization are not the norm outside the context of a war. Rather, the most significant sources of victimization are those that arise out of our ordinary day-to-day roles, such as those of spouse, parent, child, and friend. Thus, victimization must be understood as an inherent part of human relationships.
Unfortunately, research and writing about victimization is often compartmentalized or balkanized. For example, researchers who study child sexual abuse frequently do not consider the co-occurrence of other forms of victimization, such as physical abuse. Similarly, researchers who study physical abuse may fail to acknowledge the effects of witnessing domestic violence. This has led to a failure to appreciate the total context of the victimization. Furthermore, such balkanization has led researchers to fail to create conceptual models organized around general concepts of victimization; instead, most research and most models of victimization are limited to a particular context. As the field has matured, there is growing recognition that such balkanization can lead to failures to recognize the similarities in these experiences. In particular, it has prevented researchers from recognizing the common core of the victimization experience: the need to focus on the interpersonal nature and consequence of victimization.
This entry does not discuss victimization that is related to social and political processes such as war. Although war and genocide are grim fields from which victimization springs, such events are beyond the scope of this entry and require their own level of analysis and consideration. Likewise, victimization that is the result of living in a socially disintegrated or impoverished state (e.g., dangerous neighborhoods or extreme poverty), while profoundly damaging to human beings, is not discussed here.
This entry focuses on phenomena that occur in the context of human relationships, particularly those relationships that are defined as the ordinary relationships in which people are involved. The experiences of victimization are defined not simply by who did it and what was done but, instead, by what core psychological process is involved. Such an integrative approach is a useful developmental stage in understanding the phenomena of victimization for a number of reasons. First, more and more researchers are finding that unique, isolated victimization may be rare and that, instead, multiple victimizations of the same person, occurring across time and context, are more typical. In short, there is an enormous amount of overlap among victimized populations in their exposure to what had been seen as distinct and unique victimization situations. As researchers have identified this process, what has come to be understood as a variation of the Matthew Principle is true—”He who has, receiveth; he who has not, receiveth not.” That is, victimization has a far higher likelihood of occurring among certain groups and certain people, particularly those previously victimized.
An abused child may be bullied at school and, as an adult, be a victim of domestic violence. Furthermore, the effect of these different victimizations may be more than simply the sum of the individual types.
Finally, the need for an integrative approach is particularly demonstrated by the shared interpersonal nature of the victimization phenomena. If the key facet of the victimization experience that defines it is the interpersonal nature of the victimization, then there is quite likely to be a shared psychological expression of exposure to victimization across types of victimization. An integrative approach allows for the examination of this common core of psychological features attendant to this definition of victimization.
Effects of Victimization
The early research on the consequences of victimization detailed the many psychological consequences of exposure to victimization. Typically, researchers would identify populations previously victimized and compare this population with a non-victimized population on standardized measures, primarily of psychological disturbance. This research has demonstrated that victimization exposure is a pathogen. In addition to the possible physical effects associated with victimization, there may be psychological symptoms across a range of domains, such as dissociation, depression, anxiety, and interpersonal difficulties. Additionally, specific forms may have more specific outcomes. For example, child sexual abuse may be linked to sexual difficulties. Not only is there a wide range of possible symptoms associated with victimization, but there also is a wide range of severity of response to victimization. With the maturation of the field, particularly with the leadership provided by researchers such as David Finkelhor, emphasis has shifted from specific psychological symptoms and the recognition of PTSD to core psychological issues or processes that are affected by victimization. These core psychological issues include damage to interpersonal relationships and self.
One of the accomplishments of the several decades of research into the consequences of exposure to violence and victimization is the recognition that PTSD is often a specific consequence of victimization. This recognition has brought considerable attention to the role of trauma in the lives of human beings and an awareness that exposure to trauma, particularly chronic, repetitive trauma, creates a unique kind of psychological response that does not fit the typical understanding of PTSD and, instead, requires an understanding of not only trauma and its response but also trauma and the task of adjusting to chronic exposure to trauma. This has led researchers to identify different types of PTSD, described as complex PTSD, to distinguish it from the diagnosis of PTSD as given in the Diagnostic and Statistical Manual (fourth edition; DSM-IV).
Likewise, in the lives of children, there is a greater recognition that the responses of children to chronic, repetitive stressful events cannot be subsumed under the diagnosis of PTSD, which was developed primarily in the crucible of wartime experiences of soldiers. Thus, in the current scientific community, there is an appreciation that the unique adjustment capacities and responses of children and adolescents require some new types of diagnostic nomenclature. In particular, the notion of a developmental trauma disorder has been brought into the scientific community by several people and is being considered for inclusion in subsequent editions of the DSM. The finding that should be emphasized, however, is that trauma exposure is a unique and particular pathogen that occasions a range of responses in humans. In part, these outcomes can be captured by the diagnosis of PTSD; however, the range of responses needs a more articulated and specific set of diagnostic categories to be able to delineate the variety of responses and syndromes observed in children, adolescents, and adults.
The fact that victimization typically occurs within the context of an interpersonal relationship has profound consequences for understanding the consequences of victimization. Such victimization elicits unique interpersonal, emotional, and developmental issues. Humans form their working models of the world in the context of relationships. It is how we come to understand what we may expect from other people and how we learn to interact with others. Thus, the consequences of victimization, particularly victimization that occurs in the context of central human relationships, are far reaching and may affect later relationships.
As originally proposed by John Bowlby, our core attachment figures are the lens through which we develop our understanding of the world. The theory of the world we form in these relationships, thus, becomes the template against which we judge subsequent experiences and by which we shape our own actions in the world. When these models are damaged or distorted by victimization, the primary consequence is that all subsequent interactions are affected by the accommodations that the victim has to make to the experience of victimization. For example, as a result of abuse by a parent, a child believes that all relationships are potentially hurtful. The child then enters into all subsequent relationships with a sense of mistrust and an expectation that rejection and harm will soon follow. The microenvironment that the child has created, in turn, may lead to these expectations being fulfilled.
Thus, at the heart of the victimization experience is the damage done to the victim’s sense of trust and his or her ability to create a safe, attached relationship. The betrayal of victimization is considered to be one of the most difficult processes for humans to incorporate into their expectation of the world as being a benign or benevolent place. Particularly, when victimization is repetitive and ongoing, there is no opportunity for the development of a secure base in any attached relationship.
This damage to the attachment’s schema occurs along with changes in other cognitive schemas. The way in which the world is experienced and interpreted is transformed by victimization exposure. Cognitive schemas, particularly with the perception of relationships, are transformed in negative ways. Roland Summit was among the first to explain these changes in cognitive schemas through his description of the accommodation syndrome, wherein the experience of victimization fixes and makes rigid subsequent interpretations of reality.
The core cognitive schemas of relationship are all profoundly influenced by the experience of victimization. Taking a developmental approach, Finkelhor has summarized how this damage is mediated through four core conditions: (1) the victimization is repetitive and ongoing, (2) the victim's core attachment relationships are altered, (3) the victimization is added to other stressors, and (4) the victimization occurs during a critical developmental stage. When these conditions hold, they serve as moderators that amplify the power of the victimization experience through the degradation of developmental processes.
In terms of critical developmental tasks that can be affected by victimization, perhaps the core cognitive schema most affected is that of the self. Early child development requires the development of a sense of self. One of the core functions of this self is the ability to manage one's emotions, physiological arousal, and basic daily living tasks; affect regulation in particular is perhaps the most critical task for all humans, and the experience of victimization may have an especially strong influence on children's ability to regulate their emotional responses to the world. Victimization occurring during adulthood has the effect of undermining acquired competencies and forcing a kind of psychological regression: a very typical experience in adult victimization is for the victim to lose significant developmental accomplishments and regress to previous levels of dependence, with a corresponding failure to be emotionally autonomous and self-regulating. There is considerable research demonstrating that these experiences have the power to foreclose the future accomplishment of developmental tasks, as victims are burdened by psychological symptoms and/or accommodate to the victimization through disengagement from the social world and a loss of confidence in their own self-efficacy.
As described by Finkelhor and Angela Browne, the damage to the self also may include feelings of stigmatization and powerlessness. The person may feel responsible and to blame for what happened. For example, the physically abused child and battered wife may feel deserving of the abuse. Furthermore, given the nature of the interpersonal relationship, the victim may feel too ashamed to report the experience. For example, an elderly person abused by an adult child may feel too ashamed to report the experience. Victimization also may be accompanied by a feeling of powerlessness. The stalking victim, for example, may feel a loss of control over his or her life.
As was previously noted, victimization is not usually an isolated event, and this is important in understanding the consequences of victimization. Finkelhor suggests that there is an additive effect when victimization occurs in the context of other stressors. He also notes that if victimization occurs during a critical period of development, it can interrupt successful task resolution of a developmental stage. Finkelhor’s model, defining the moderating effects of damaging context, is a useful attempt at bringing understanding of the psychological processes to the specific understanding of the victimization effects. There is now an increasing body of literature that does confirm most of Finkelhor’s suggestions, particularly those having to do with multiple victimizations and the cumulative effect of victimization co-occurring with other stressors.
In summary, victimization is a frequent event with profound consequences on human adjustment. To have a more nuanced psychological understanding of victimization, the interpersonal context of the experience must be included in our theoretical and practical models of those who have been victimized.
- Finkelhor, D., & Browne, A. (1985). The traumatic impact of child sexual abuse: A conceptualization. American Journal of Orthopsychiatry, 55, 530-541.
- Finkelhor, D., Ormrod, R., Turner, H., & Hamby, S. L. (2005). The victimization of children and youth: A comprehensive, national survey. Child Maltreatment, 10, 5-25.
- Herman, J. L. (1992). Trauma and recovery. New York: Basic Books.
- Myers, J. E. B., Berliner, L., Briere, J., Hendrix, C. T., Jenny, C., & Reid, T. A. (Eds.). (2002). The APSAC handbook on child maltreatment (2nd ed.). Thousand Oaks, CA: Sage.
- Summit, R. C. (1983). The child sexual abuse accommodation syndrome. Child Abuse & Neglect, 7, 177-193.
A team of Hungarian researchers has created a swarm of ten drones that can “self-organize” in the air. The project was modeled on birds such as pigeons, “which fly in tight bunches while making adjustments and decisions.”
Tamas Vicsek, a physicist at Budapest’s Eötvös Loránd University, said, “We came to the conclusion that one of the best ways to understand how animals move together is to build robots — flying robots.”
The drones can negotiate tricky paths, such as when their route becomes tightly confined. When that happens, some of them hover in place to wait their turn. And it’s all done without a central computer or controlling device, the researchers say. Instead, they use “flocking algorithms,” says Gabor Vasarhelyi, who led the robotics phase of the project.
“Drones are most commonly associated with war, terrorism, and cyberattacks, but drones can be used in more peaceful civil applications as well,” Vasarhelyi says. “With a flock of drones, you can create a self-organized monitoring system from the air, or you can even deliver food or mail.”
The team will debut its flying robot swarm at the International Conference on Intelligent Robots and Systems in Chicago this year, where they will present their paper, “Outdoor flocking and formation flight with autonomous aerial robots.”
As we learn about the different parts of a science board, we’re going to refer to the following experiment:
Project summary: A set of plants, all exactly the same height, is divided into five groups. Each of four groups is given a different type of fertilizer. The fifth group is given only water. At the end of one month, the plants are measured.
Purpose (Problem) – The purpose is what your project hopes to find out or prove. It’s the ‘big question’. What is your goal? What are you trying to test? That’s your purpose, sometimes stated as a problem. The purpose of our science project is to find out, “What type of fertilizer produces the most plant growth?”
Hypothesis – A hypothesis is simply an educated guess about what will happen in your experiment. To form your hypothesis, take all the information you know about your science project question, and use it to predict what you think will happen. It doesn’t matter if you’re right or wrong; that’s what the experiment will tell you! In our experiment, the hypothesis will be, “I think that …. will make plants grow the highest.” Use what you know about fertilizer, advertisements, comments from a gardener you know, or personal experience to formulate your hypothesis.
Materials – This is a detailed list of exactly what you used (or plan to use) in your experiment:
- Four types of liquid houseplant fertilizer -Peters Professional® All Purpose Plant Food, Spectrum® Colorburst Plant Food, Osmocote® Indoor Outdoor Plant Food, and Miracle-Gro® All Purpose Plant Food
- 20 identical terra cotta pots filled with potting soil
- 20 bush bean plants of identical height
Procedure – A step-by-step description of how to do your experiment. Another person should be able to do your experiment again, just by following your procedure.
Graph – The words chart and graph are often used interchangeably. Here, we use the word “graph” for numbers placed on a grid (or spreadsheet), like the one at the right. And a chart…
Chart – A chart arranges the information (data) from your experiment visually, so you can see it. Look at the charts to the right. The first gives all the heights of the plants on the last day. The second gives the average height.
Abstract – Some science fairs require an abstract, which is a brief but complete summary of your project. It probably should not be more than 250 words. This doesn’t go on your board, but is in a folder as part of the total display.
Data – Data means information. It’s plural, so the absolute correct usage would be “The data show us that…” (Actually, one piece of data is datum, which you really don’t need to know unless you’re taking Latin or have an extremely pedantic teacher.) Your data will most often be in numbers, although if you were a zoologist, your data might be observations about the feeding habits of anteaters. The measurements of the plant height (the numbers in the graph) give the data for our experiment.
Analysis – When you explain your data and observations, you are giving an analysis. What have you learned? Why did you get the results you did? What did the experiment prove? And, most important, was your hypothesis correct? The analysis for the fertilizer experiment would begin, “We discovered that the Miracle-Gro produced the most plant growth. While water produced the least growth overall, it is worth noting that two of the plants died after being given the Peters fertilizer. Our hypothesis was disproved, as we thought the Peters fertilizer would produce the tallest plants.” (A short sketch of how such group averages are computed appears after the Application entry below.)
Conclusion – Answer your problem/purpose statement. What does it all add up to? What did you learn from your project?
Application – What questions come up as a result of your experiment? What else would you like to know? If you did this project again, what would you change? How can this project help in real life? While we discovered which plants grew tallest, we didn’t test which plants had the most flowers or would give the most fruit. That is what we would like to see answered in our next experiment. We have learned, however, that it is important to use a fertilizer, and we have learned which brands work well.
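As promised in the Analysis entry, here is a small sketch of how the group averages behind such a chart might be computed. All heights are invented for illustration; they are not results from a real experiment.

```java
public class PlantAverages {
    public static void main(String[] args) {
        String[] groups = {"Peters", "Spectrum", "Osmocote", "Miracle-Gro", "Water only"};
        // Hypothetical heights in cm after one month (4 plants per group; 0.0 = died).
        double[][] heights = {
            {21.0, 0.0, 19.5, 0.0},
            {24.5, 23.0, 25.1, 24.0},
            {26.2, 25.5, 27.0, 26.8},
            {30.1, 29.5, 31.2, 30.6},
            {18.0, 17.5, 18.4, 17.9},
        };
        for (int g = 0; g < groups.length; g++) {
            double sum = 0;
            for (double h : heights[g]) sum += h;
            System.out.printf("%-11s average: %.1f cm%n", groups[g], sum / heights[g].length);
        }
    }
}
```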
OUR GOAL: To move an open C chord up the neck so that it becomes an F chord.
How many frets do we need to move it?
The answer is in the CHROMATIC SCALE.
First let’s go through the chromatic scale from C to F:
C, C#, D, D#, E, F
This tells me I need to move the shape 5 frets up the neck.
Here’s another way of looking at it:
If we move the C chord shape up the neck one fret at a time we eventually get to an F chord.
Here is a C chord:
If we move it up one fret we get a C# chord:
Move it up one more fret and you get a D chord:
If we continue this way and move the shape up the neck one fret at a time, we eventually get to an F chord:
D#, E, and finally F!
Move a C chord up the neck 5 frets and it becomes an F chord!
(Note: THE NUMBER 5 in this diagram is telling us to barre at the 5th fret)
Here is the whole process in tab. Hope this makes sense!
TRY THIS ONE: How many frets up the neck do you need to move an open A chord in order for it to become a D chord? Answer at the bottom of the page.
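If you enjoy checking this sort of question with a computer, here is a tiny illustrative sketch (mine, not part of the newsletter) that counts frets between two chord roots using the chromatic scale, exactly the way we counted from C to F above:

```java
public class FretDistance {
    static final String[] CHROMATIC =
        {"C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"};

    static int index(String note) {
        for (int i = 0; i < CHROMATIC.length; i++)
            if (CHROMATIC[i].equals(note)) return i;
        throw new IllegalArgumentException("unknown note: " + note);
    }

    // How many frets up the neck to move a shape so its root goes from -> to.
    static int fretsUp(String from, String to) {
        return ((index(to) - index(from)) + 12) % 12;
    }

    public static void main(String[] args) {
        System.out.println("C to F: " + fretsUp("C", "F") + " frets");
        System.out.println("A to D: " + fretsUp("A", "D") + " frets");
    }
}
```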
Question: What are the notes in an E major scale?
We began our walking exploration of the CHROMATIC SCALE last newsletter by walking the C major scale.
You may have noticed the notes laid out in a certain pattern:
WHOLE STEP, WHOLE STEP, HALF STEP, WHOLE STEP, WHOLE STEP, WHOLE STEP, HALF STEP.
This pattern can also be seen when we play a major scale up one string.
Take the C major scale which we learned to play up one string in UD #37:
C D E F G A B C
The pattern goes like this:
Two frets C to D
Two frets D to E
One fret E to F
Two frets F to G
Two frets G to A
Two frets A to B
One fret B to C
In terms of the chromatic scale, 2 FRETS equals a WHOLE STEP and 1 FRET equals a HALF STEP. So you can see the pattern is the same!
All major scales follow this same pattern in the spacing of the notes!
All you need to do to create a major scale is to start on a note and follow the pattern.
Let me show you how that works:
Here is the chromatic scale C to C.
C, C#, D, D#, E, F, F#, G, G#, A, A#, B, C
Our goal is to create an E major scale, so we start on an E note. Uh oh: if we start on the E note we’ll run out of space.
So, here is the chromatic scale in a circle so you can never run out of space:
(Note that the diagram above also shows you your options in terms of naming a note with a sharp or flat. The note between C and D can be called C# or Db.)
If we start on an E note and go up a whole step we get an F# note.
A whole step up from F# is G#. Up a half step to A. Up a whole step to B. Whole step to C#. Whole step to D#. A half step brings us back to E.
And so we have the notes in an E major scale:
E, F#, G#, A, B, C#, D#, E
How do I know whether to use # or b to name the notes in a scale?
In scales we don’t want to repeat the same letter and we also don’t want to skip a letter. So for E major we don’t want to go E, Gb, G#. We say E, F#, G#… so that the letters follow in logical sequence.
Let’s do another scale: F major!
We start on F. Whole step to G. Whole step to A. Half step to Bb. Whole step to C. Whole step to D. Whole step to E. Half step to F.
The notes in an F major scale are F, G, A, Bb, C, D, E, F.
The idea is that if you develop a strong sense for the chromatic scale that you can build scales in your head as you need them. Try it next time you are on a long bus or plane ride!
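If you like seeing the recipe spelled out as an algorithm, here is a tiny sketch (mine, not from the newsletter) that builds a major scale by walking the chromatic scale with the whole/half-step pattern. One simplification: it always spells notes with sharps, so F major comes out with A# where a musician would write Bb, for the letter-sequence reasons explained above.

```java
public class MajorScale {
    static final String[] CHROMATIC =
        {"C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"};
    static final int[] PATTERN = {2, 2, 1, 2, 2, 2, 1}; // frets: W W H W W W H

    static String[] majorScale(String root) {
        int i = java.util.Arrays.asList(CHROMATIC).indexOf(root);
        String[] scale = new String[8];
        scale[0] = root;
        for (int s = 0; s < PATTERN.length; s++) {
            i = (i + PATTERN[s]) % 12;  // wrap around, like the circle diagram
            scale[s + 1] = CHROMATIC[i];
        }
        return scale;
    }

    public static void main(String[] args) {
        // Prints: E F# G# A B C# D# E
        System.out.println(String.join(" ", majorScale("E")));
    }
}
```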
Question: Why is the IV Chord in the key of F a Bb chord? Why isn’t it a B chord?
As we learned in UD #17, the chords in a certain key are built off the notes of the scale. The fourth note in an F major scale is a Bb. So the IV chord is Bb, too. Cool eh?
4. Learning the fretboard of your ukulele.
Question: What note is on the 4th fret of the E string?
This is very similar to question 2. We just go up the E string and use our knowledge of the chromatic scale to guide us. E is open, first fret is F, second fret is F#, 3rd fret is G, 4th fret is G#. Answer: The 4th fret of the E string is G# (or Ab). So if you know the chromatic scale you can always extrapolate from the notes that you do know on the fretboard. Once you know one of the notes on a string you can figure out what the other ones are. Nifty eh?
Hope that makes sense. I will see you next newsletter!
Tags: Chromatic Scale |
Objective: Chapter 1, The Founding Fathers: An Age of Realism The Founders of the Constitution were a group of men who desired to establish a form of government that allowed for personal freedom while protecting the interests of an entire nation. The objective of this lesson is to discuss the philosophies and events that affected the thinking of the Founders of the United States Constitution.
1. As a class, discuss the history behind the formation of the United States. Why were the new Americans attempting to break off their association from Great Britain? What did they stand to gain and lose by doing so? What general ideals did the Founders describe in the making of the Constitution?
2. Divide students into groups of three or four to discuss how the contrast between the social class of the Founders and the average American affected the development of the Constitution. Why did...
Slave religion comprises the beliefs, religious faith, and practices that Africans brought to the New World beginning in 1619 and that African Americans kept until they were emancipated. West Africans believed in a high god, who created all things, and in lesser gods who served the high god. Having these lesser gods meant that people prayed to different gods when dealing with rain, fertility, and crops. They also believed that a place among the lesser gods was occupied by the spirits of their ancestors. Africans thought of their ancestors as the living dead, because they were close both to the living and to the ultimate beings. The purpose of the living was to honor the ancestors, recognize the lesser gods, and give all power and admiration to the high god.
Christianity came alive as slaves began to combine their African religious beliefs with Christian beliefs, making up what is called slave religion. At the beginning, between 1619 and the early 1700s, slave owners made little effort to convert their slaves, and they disagreed among themselves about whether to try. Some believed that slaves, however inferior they were held to be, should still acquire Christian redemption. Others believed that converting slaves would cause problems, because slaves might begin to think themselves equal to whites once they shared the same beliefs. To them, a converted slave would become lazy or even resistant to his white master.
Then, in 1701, this began to change, as white missionaries and some slave masters decided that slaves should be converted. The change began with the formation in London of the Society for the Propagation of the Gospel in Foreign Parts (SPG). The number of slaves the society converted was limited by the small number of ministers sent to North America and by slave owners who objected to their slaves being taught Christian beliefs. The SPG also demanded bodily restraint of its converts, in contrast to African traditions that emphasized physical movement brought on by spirit possession.
The society was fairly effective, but it was not until the Great Awakenings (1740 and the early 1800s) that black slaves began to turn toward Christianity in large numbers. Preachers associated with the Great Awakening emphasized conversion of the heart, encouraged ecstatic bodily expression, and required only a simple confession of Jesus Christ's lordship. These ideas were readily accepted by slaves, who converted throughout the South, though some still resisted parts of the theology and religious practice of the Great Awakenings. White preachers taught the slaves that they had to obey their masters as a sign of faithfulness to God. White churches, for their part, still treated slaves as unequal: they held segregated religious services and controlled free worship by slaves. Plantation owners went a step further, establishing segregated seating that placed slaves in the rear, in the balcony, or even outside the church windows.
Slaves prayed secretly to God as their only master and asked to be liberated from their owners. They reinterpreted Christianity by adding elements of their African religion. Slaves identified with the Old Testament Hebrew slaves, who were liberated by God: if God had freed the Hebrew slaves, then, if they prayed hard enough, the same could happen for them. To them, faith was now belief in and commitment to a God who helped the poor and judged the arrogant and the strong, their owners. Now God, instead of the plantation owner, was the actual master of the slaves. Slaves believed that if God had sided against religious and political powers in the Bible, he could also help them become free. They believed that Jesus was powerful enough to do anything.
Through their understanding of God and Jesus, slaves found new meaning in everyday life. They created practices like the "discourse of solidarity," in which one slave would never give information about another, and some went as far as religious resistance. Rebellion was now taking place. The "Invisible Institution" emerged as slaves conducted secret worship and prayer far from the eyes of their masters. They would meet in the woods, where they prepared to receive a visit from the spirit that made them sing, pray, preach, shout, and enjoy their own free religious space with great enthusiasm. In the Invisible Institution slaves learned skills such as oratory and began to become leaders. Some received food and clothing, and also counseling to keep them in the right state of mind.
In the 1830s, during the religious awakening in the South, slave owners began bringing the Gospel to the quarters, which served both as social control and as a way to convert the slaves. By 1860, about 15 percent of slaves were members of either the Baptist or the Methodist church, where they heard the same sermons, followed the same discipline, and shared the communion table with whites. Yet slaves did not simply follow these formal proceedings. They still listened to their own black preachers and interpreted the Bible as showing that they were God's chosen people and that Judgment Day would punish their masters. Slaves turned Christianity to their own terms: if their masters did not follow common Christian behavior, the slaves felt a great moral superiority over them.
In the lower Gulf area, around Louisiana, some slaves practiced Voodoo. In places where slaves were imported illegally from Africa, some practiced Islam. Others had no religion at all.
African-American slave religion was very varied and lay beyond the masters' observation and knowledge, which is why rebellion began to take place. Slave religion proved dangerous in Nat Turner's Rebellion of 1831, the most important revolt of the 19th century. Nat Turner was a slave in Southampton County, Va., who believed God had called him in a religious vision to deliver his people from enslavement. He used his literacy, articulation, and impulsiveness to preach and to gather others who would join him as he planned to strike one night after an eclipse of the sun. He started with six followers but ended with eighty, who marched to Jerusalem, in Southampton County, and killed fifty-seven men, women, and children before white authorities ended the revolt. Turner avoided capture for two months before he was caught and finally executed in November 1831. Some white Southerners saw rebellion starting everywhere and killed as many as two hundred slaves out of fear. They became stricter, imposing closer supervision and religious instruction. The Turner revolt and its aftermath only proved that whites still did not know the slaves.
Encyclopedia of African-American Culture and History. New York: Simon and Schuster, 1996. 2452-2454 and 2465.
Genovese, Eugene D. Roll, Jordan, Roll. New York: Random House, Inc., 1972. 232-255. |
1924: Astronomer Edwin Hubble announces that the spiral nebula Andromeda is actually a galaxy and that the Milky Way is just one of many galaxies in the universe.
Before Copernicus and Galileo, humans thought our world was the center of creation. Then (except for a few notable stragglers) we learned that the sun and planets did not revolve around the Earth, and we discovered that our sun — though the center of our solar system and vitally important to us — was not the center of the universe or even a major star in our galaxy.
But we still grandiosely thought our own dear Milky Way contained all or most of the stars in existence. We were about to be knocked off our egotistical little pedestal once again.
Edwin Hubble was born in Missouri in 1889 and moved to Chicago in 1898. In high school, he broke the state record in the high jump, and went on to play basketball for the University of Chicago. He won a Rhodes scholarship and studied law at Oxford. He earned a Ph.D. in astronomy, but practiced law in Kentucky. After serving in World War I and rising to the rank of major, he got bored with law and returned to astronomy.
He trained the powerful new 100-inch telescope at Mount Wilson in Southern California on spiral nebulae. These fuzzy patches of light in the sky were generally thought to be clouds of gas or dust within our galaxy, which was presumed to include everything in the universe except the Magellanic Clouds. Some nebulae seemed to contain a few stars, but nothing like the multitudes of the Milky Way.
Hubble not only found a number of stars in Andromeda, he found Cepheid variable stars. These stars vary from bright to dim, and a very smart Harvard “computer” (as the observatory's human calculators were then called) named Henrietta Leavitt had discovered in 1912 that you could measure distance with them. Given the brightness of the star and its period — the length of time it takes to go from bright to dim and back again — you could determine how far away it is.
Hubble used Leavitt’s formula to calculate that Andromeda was approximately 860,000 light years away. That’s more than eight times the distance to the farthest stars in the Milky Way. This conclusively proved that the nebulae are separate star systems and that our galaxy is not the universe.
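The arithmetic behind such an estimate is the standard distance-modulus relation, m - M = 5*log10(d) - 5, where m is the apparent magnitude, M the absolute magnitude, and d the distance in parsecs; Leavitt's period-luminosity relation is what supplies M from the observed period. The sketch below inverts the distance modulus with made-up magnitudes; the numbers are illustrative, not Hubble's actual measurements.

```java
public class CepheidDistance {
    // Invert the distance modulus m - M = 5*log10(d) - 5 to get d in parsecs.
    static double distanceParsecs(double apparent, double absolute) {
        return Math.pow(10, (apparent - absolute + 5) / 5.0);
    }

    public static void main(String[] args) {
        double m = 18.5;   // hypothetical apparent magnitude of a Cepheid
        double M = -4.0;   // hypothetical absolute magnitude from its period
        double dPc = distanceParsecs(m, M);
        double dLy = dPc * 3.26;   // 1 parsec is about 3.26 light-years
        System.out.printf("distance: %.0f parsecs, about %.0f light-years%n", dPc, dLy);
    }
}
```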
Cosmic though it was, the news did not make the front page of The New York Times. The paper did notice the following Feb. 25 that Hubble and a public health researcher split a $1,000 prize ($12,500 in today’s money) from the American Association for the Advancement of Science.
Hubble went on to discover another couple of dozen galaxies. Before the 1920s were over, he added another astronomical achievement to his reputation. By analyzing the Doppler effect on the spectroscopic signals of receding stars, he established that their red shift was proportional to their distance.
Photo: Edwin Hubble’s 1920s observations of Andromeda (whose ultraviolet spectrum is rendered here) expanded our notions of the size and nature of a universe that is itself expanding.
Galaxy Evolution Explorer image courtesy NASA.
This article first appeared on Wired.com Dec. 30, 2008. |
Modes of disposal of the corpse and attendant rites
The form of the disposal of the dead most generally used throughout the world in both the past and present has been burial in the ground. The practice of inhumation (burial) started in the Paleolithic era, doubtless as the most natural and simplest way of disposal. Whether it was then prompted by any esoteric motive, such as the return to the womb of Mother Earth, as has been suggested, cannot be proved. Among some later peoples, who have believed that primordial man was formed out of earth, it may have been deemed appropriate that the dead should be buried—the idea found classical expression in the divine pronouncement to Adam, recorded in Genesis 3:19: “You are dust, and to dust you shall return.” There is evidence that in ancient Crete the dead were believed to serve a great goddess, who was the source of fertility and life in the world above and who nourished and protected the dead in the earth beneath.
The mode of burial has varied greatly. Sometimes the body has been laid directly in the earth, with or without clothes and funerary equipment. It may be placed in either an extended or crouched position: the latter posture seems to have been more usual in prehistoric burials. Sometimes evidence of a traditional orientation of the corpse in the grave can be distinguished, which may relate to the direction in which the land of the dead was thought to lie. The use of coffins of various substances dates from the early 3rd millennium bc in Sumer and Egypt. Intended probably at first to protect and add dignity to the corpse, coffins became important adjuncts in the mortuary rituals of many religions. Their ritual use is most notable in ancient Egypt, where the mummies of important persons were often enclosed in several human-shaped coffins and then deposited in large, rectangular wooden coffins or stone sarcophagi. The interiors and exteriors of these coffins were used for the inscription of magical texts and symbols. Sarcophagi, elaborately carved with mythological scenes of mortuary significance, became fashionable among the wealthier classes of Greco-Roman society. Similar sarcophagi, carved with Christian scenes, came into use among Christians in the 4th and 5th centuries and afford rich iconographic evidence of the contemporary Christian attitude to death.
In the ancient Near East, the construction of stone tombs began in the 3rd millennium bc and inaugurated a tradition of funerary architecture that has produced such diverse monuments as the pyramids of Egypt, the Tāj Mahal, and the mausoleum of Lenin in Red Square, Moscow. The tomb was originally intended to house and protect the dead. In Egypt it was furnished to meet the needs of its magically resuscitated inmate, sometimes even to the provision of toilet facilities. Among many peoples, the belief that the dead actually dwelt in their tombs has caused the tombs of certain holy persons to become shrines, which thousands visit to seek for miracles of healing or to earn religious merit; notable examples of such centres of pilgrimage are the tombs of St. Peter in Rome, of Muḥammad at Medinah, and, in ancient times, the tomb of Imhotep at Ṣaqqārah, in Egypt.
The disposal of the corpse has been, universally, a ritual occasion of varying degrees of complexity and religious concern. Basically, the funeral consists of conveying the deceased from his home to the place of burial or cremation. This act of transportation has generally been made into a procession of mourners who lament the deceased, and it has often afforded an opportunity of advertising his wealth, status, or achievements. Many depictions of ancient Egyptian funerary processions graphically portray the basic pattern: the embalmed body of the deceased is borne on an ornate sledge, on which sit two mourning women. A priest precedes the bier, pouring libations and burning incense. In the cortege are groups of male mourners and lamenting women, and servants carry the funerary furniture, which indicates the wealth of the dead man. Ancient Roman funerary processions were notable for the parade of ancestors’ death masks. In Islāmic countries, friends carry the corpse on an open bier, generally followed by women relatives, lamenting with disheveled hair, and hired mourners. After a service in the mosque, the body is interred with its right side toward Mecca. In Hinduism the funeral procession is made to the place of cremation. It is preceded by a man carrying a firebrand kindled at the domestic hearth; a goat is sometimes sacrificed en route, and the mourners circumambulate the corpse, which is carried on a bier. Cremation is a ritual act, governed by careful prescriptions. The widow crouches by the pyre, on which in ancient times she sometimes died. After cremation, the remains are gathered and often deposited in sacred rivers.
Christian funerary ritual reached its fullest development in medieval Catholicism and was closely related to doctrinal belief, especially that concerning purgatory. Hence, the funerary ceremonies were invested with a sombre character that found visible expression in the use of black vestments and candles of unbleached wax and the solemn tolling of the church bell. The rites consisted of five distinctive episodes. The corpse was carried (in a coffin if one could be afforded) to the church in a doleful cortege of clergy and mourners, with the intoning of psalms and the purificatory use of incense. The coffin was deposited in the church and covered with a black pall, and the Office of the Dead was recited or sung, with the constant repetition of the petition: “Eternal rest grant unto him, O Lord, and let perpetual light shine upon him.” Next, requiem mass was said or sung, with the sacrifice offered for the repose of the soul of the deceased. After the mass followed the “Absolution” of the dead person, in which the coffin was solemnly perfumed with incense and sprinkled with holy water. The corpse was then carried to consecrated ground and buried, while appropriate prayers were recited by the officiating priest. Changes in these rites, including the use of white vestments and the recitation of prayers emphasizing the notions of hope and joy, were introduced into the Catholic liturgy only following the second Vatican Council (1962–65).
In some societies the burial of the dead has been accompanied by human sacrifice, with the intention either to propitiate the spirit of the deceased or to provide him with companions or servants in the next world. A classic instance of such propitiatory sacrifice occurs in Homer’s Iliad (xxiii:175–177): 12 young Trojans were slaughtered and burned on the funeral pyre of the Greek hero Patroclus. The royal graves excavated at the Sumerian city of Ur, dating c. 2700 bc, revealed that retinues of servants and soldiers had been buried with their royal masters. Evidence of a similar Chinese practice has been found in Shang-dynasty graves (12th to 11th centuries bc) at An-yang. In ancient Egypt models of servants, placed in tombs, were designed to be magically animated to serve their masters in the afterlife. A particular type of these models, known as an ushabti (“answerer”), was inscribed with chapter VI of the Book of the Dead, commanding it to answer for the deceased owner if he were required to do service in the next world.
The custom has also existed among some peoples of dismembering the body for burial or subsequently disinterring the bones for storage in some form. There is Paleolithic evidence of a cult of skulls, which suggests that the rest of the body was not ritually buried. The Egyptians removed the viscera, which were preserved separately in four canopic jars. The Romans observed the curious rite of the os resectum: after cremation a severed finger joint was buried, probably as a symbol of an earlier custom of inhumation. In medieval Europe the heart and sometimes the intestines of important persons were buried in separate places: e.g., the body of William the Conqueror was buried in St. Étienne at Caen, but his heart was left to Rouen Cathedral and his entrails for interment in the church of Chalus. To be noted also is the Zoroastrian and Parsi custom of exposing corpses on dakhmas (“towers of silence”) to be devoured by birds of prey, thus to avoid polluting earth or air by burial or cremation.
The alternative use of inhumation or cremation for the disposal of the corpse cannot be interpreted as generally denoting a difference of view about the fate of the dead. In India, cremation was indeed connected with the fire god Agni, but cremation does not necessarily indicate that the soul was thus freed to ascend to the sky. Burial has been the more general practice, whether the abode of the dead be located under the earth or in the heavens.
Post-funerary rites and customs
Funerary rites do not usually terminate with the disposal of the corpse either by burial or cremation. Post-funerary ceremonies and customs may continue for varying periods; they have generally had two not necessarily mutually exclusive motives: to mourn the dead and to purify the mourners. The mourning of the dead, especially by near relatives, has taken many forms. The wearing of old or colourless dress, either black or white, the shaving of the hair or letting it grow long and unkempt, and abstention from amusements have all been common practice. The meaning of such action seems evident: grief felt for the loss of a dear relative or friend naturally expresses itself in forms of self-denial. But the purpose may sometimes have been intended to divert the ill humour of the dead from those who still enjoyed life in this world.
The purification of mourners has been the other powerful motive in much post-funerary action. Death being regarded as baleful, all who came in contact with it were contaminated thereby. Consequently, among many peoples, various forms of purification have been prescribed, chiefly bathing and fumigation. Parsis are especially intent also on cleansing the room in which the death occurred and all articles that had contact with the dead body.
In some post-funerary rituals, dancing and athletic contests have had a place. The dancing seems to have been inspired by various but generally obscure motives. There is some evidence that Egyptian mortuary dances were intended to generate a vitalizing potency that would benefit the dead. Dances among other peoples suggest the purpose of warding off the (evil) spirits of the dead. Funeral games would seem to have been, in essence, prophylactic assertions of vitalizing energy in the presence of death. It has been suggested that the funeral games of the Etruscans, which involved the shedding of blood, had also a sacrificial significance.
Another widespread funerary custom has been the funeral banquet, which might be held in the presence of the corpse before burial or in the tomb-chapel (in ancient Rome) or on the return of the mourners to the home of the deceased. The purpose behind these meals is not clear, but they seem originally to have been of a ritual character. Two curious instances of mortuary eating may be mentioned in this connection. There was an old Welsh custom of “sin eating”: food and drink were handed across the corpse to a man who undertook thereby to ingest the sins of the deceased. In Bavaria, Leichennudeln, or “corpse cakes,” were placed upon the dead body before baking. By consuming these cakes, the kinsmen were supposed to absorb the virtues and abilities of their deceased relatives.
A remarkable post-funerary custom has been observed in Islām; it is known as the Chastisement of the Tomb. It is believed that, on the night following the burial, two angels, Munkar and Nakīr, enter the tomb. They question the deceased about his faith. If his answers are correct, the angels open a door in the side of the tomb for him to pass to repose in paradise. If the deceased fails his grisly interrogation, he is terribly beaten by the angels, and his torment continues until the end of the world and the final judgment. In preparation for this awful examination the roof of the tomb is constructed to enable the deceased to sit up; and, immediately after burial, a man known as a fiqī (or faqih) is employed to instruct the dead in the right answers. |
Mesopotamia, the area between the Euphrates and the Tigris, is mainly silted ground, which proved to be very fertile. This is where people first started agriculture: the rise of civilisation. There was no longer a need for people to gather food; they could grow it. People started to live in villages. The areas around the Mediterranean and the Persian Gulf also turned out to be very fertile. These were conquered by the Persians, who had a stable government and were very good at building infrastructure; they could cover 2,500 kilometres in a week. They started irrigating their fields, and products were transported to different places, which enabled people to start living in cities. As people started trading, the necessity of growing their own food disappeared.

Around 500 BC, the Persian Empire was the land of plenty, stretching all the way from Asia to the Mediterranean. Trade was flourishing, and most of the money earned was invested in a strong army, to conquer even more territory. The Persians were getting silver from Egypt and ivory from India, and they were building enormous structures in several cities. They extended their territory from the Black Sea to Central Asia and Mongolia. In the north they encountered aggressive nomads, who were, however, useful partners for trading horses.

The Greeks were afraid of the Persians until Alexander of Macedonia became king, the man who became known as Alexander the Great. Alexander was not looking to Europe, which had no cities and no culture; he was aiming for the East. He conquered Egypt and also took Babylon from the Persians, along with the royal roads that connected the Persian cities. Alexander built forts around the cities to protect them against the nomads who lived on the plains. This whole network of fortifications eventually culminated in the Great Wall of China.

Alexander died in Babylon at the age of 32, but he achieved a great deal in his lifetime. Seleucus, his successor, became governor of the areas Alexander had conquered: a true empire stretching from the Tigris to the Indus and from the Mediterranean to the Himalayas. Seleucus founded the Seleucid Dynasty, which ruled the area for three centuries. In those days Greek language and culture were taught as far away as India.

During the Han Dynasty, in the second century BC, the trade network extended into China. There was a lot of trade in horses for the army; the best horses came from the Fergana Valley and the Pamir Mountains in today's Kyrgyzstan and Tajikistan. The nomads were good hunters and formed a threat; they would receive a lot of products in exchange for peace. The main product was silk, a strong but light material that could easily be transported. When the price of peace became too high, China mounted military campaigns to drive the nomads further north.
Slowly trade between China and the rest of the world started. The journey through the Taklamakan Desert was a difficult and harsh one. A northern and a southern route through the desert arose. After the desert, the route continued through the Pamir Mountains. Traders on the Silk Road had to deal with extreme differences in altitude and temperature. The camel became a favourite way of transportation; this animal could cope with all the difficulties. In the beginning only expensive and scarce products were transported. Silk was the absolute number one. In China there were often not enough coins to pay the soldiers, so they were paid with silk.
In the meantime, in Persia, the Arsacids took over from the Seleucids. Arsacid rule combined Greek and Persian influences.

Rome was the first city in Europe to grow from a village into a city, and the Romans were ready to conquer the world. They were very combative; gladiator fights in big arenas were their entertainment. With a well-trained army at hand, they conquered Gaul in 52 BC, the area we know today as France, Belgium, the Netherlands, Luxembourg and the western part of Germany. The Romans were only interested in areas with big cities, where many people could pay taxes. They were not interested in Great Britain, but they were interested in the Egyptian port town of Alexandria and the agricultural areas in the delta of the Nile. The Romans seized their chance when Queen Cleopatra was left weakened after the assassination of Julius Caesar. When the Roman army of Octavian defeated the Egyptians at Actium, Cleopatra committed suicide.

Rome was flourishing; Emperor Augustus proclaimed that Rome was made of bricks when he came to power and made of marble when he left. In all the areas conquered by the Romans, people were obliged to register. The Romans sent a delegation to Judea to count the people and see how much tax could be gained; Joseph, Mary and their child Jesus were also registered.

In Rome, Asia was thought of as the land of comfort and luxury. Emperor Augustus sent his soldiers to Asia, but they got drunk and lazy, realising there was more to life than the regime they were used to in Rome. Augustus made several attempts to learn about the land beyond his new borders. He wanted details about the trade routes in Persia and Central Asia, and he was also interested in waterways via the Red Sea to the east. Roman traders made it all the way to India; Roman coins from the time of Augustus have been found there. These traders brought back the most extravagant products from everywhere: expensive fish, living birds, silver toothpicks, jade, ivory and so on. Not all Romans were happy with the unnecessary luxury. Another thing they were not happy with was Chinese silk: the thin, almost transparent material revealed far too much of the female form, and its price had gone completely out of control; keeping their women satisfied had become expensive. At that time silk cost about 100 times its normal price. Money was flowing from the Roman economy to Asia: about 10% of the yearly budget and about half of Roman coin production.
Villages along the trade routes flourished because of the trade in silk. Roads were improved and villages became cities. Impressive buildings were constructed in Tashkent, Bukhara and Samarkand, in today's Uzbekistan. Palmyra in Syria became the Venice of the sands, and Petra in Jordan lay on the route between the Arab world and the Mediterranean. Trade fairs were organised in cities at the crossroads of different routes.

The Romans had little contact with the Chinese, because the Persians lay in between; sometimes diplomats joined a trade caravan. In the second century, the Romans again had great ambitions and conquered many Persian cities, among them Babylon. They gained a great deal of money by taxing all products on the trade routes in their newly acquired territories, and they invested it in port towns, because overseas trade was on the rise.
The Persians were feeling the pressure from their Roman neighbours. In the year 220 the Sassanids came to power. They decided to centralise regional power, and strict new rules for traders and markets were introduced. It worked: Persia was flourishing again, and this set Rome tottering.

Around 300 AD the Roman Empire stretched from the North Sea to the Black Sea and from the Limes to the Caucasus and Yemen. Expanding the territory became more and more difficult. Rome became a victim of its own success: it became the target of the neighbouring nations.

The Romans needed a new strong leader. Emperor Constantine, the son of a high officer, stepped forward. He built a new city intended to match Rome, choosing the location of the old city of Byzantium, where Europe and Asia meet. He built enormous palaces and a horse-racing track and called the city Nova Roma; soon it became known as Constantinople. It was a strategic location where many trade routes met, allowing him to keep an eye on the trade and the taxes.

It was very busy on the different Silk Roads; about 2,000 years ago there was already a strong connection between Europe and the East. This is how pottery from France ended up in India and silk from China came to Rome. Locally minted coins travelled all over the world. Christianity, a religion from the East, also spread via the Silk Road. Rome may be the forefather of Europe, but it was greatly influenced by the East. The rise of the Silk Road was an intriguing time.

(Summary by Marica van der Meer of a book by Peter Frankopan)
WHAT IS FELINE HERPESVIRUS?
Feline herpesvirus (FHV-1 or rhinotracheitis virus) is a primary cause of feline upper respiratory disease in both household and wild cats. The virus is also a common cause of conjunctivitis, keratitis (corneal inflammation), and corneal ulcers. FHV-1 has been known to infect both kittens and adult cats, although the condition is more prevalent in younger cats. Most infected cats will become carriers. The virus can remain dormant in carriers for years and reappear during times of high stress or when the feline is dealing with other health problems such as FeLV (feline leukemia) or FIV (feline immunodeficiency virus).
SYMPTOMS AND TREATMENT
Indicators that your cat may be suffering from feline herpesvirus include clear or cloudy discharge from the eyes, sneezing, nasal discharge, drooling, fever, severe depression, and, rarely, oral ulcers. Cats affected by feline herpes may not eat, due to a reduced sense of smell and taste. If your cat is experiencing similar symptoms, visit your veterinarian, who will examine the cat and may perform lab tests on the affected eye and nasal secretions.
Cats diagnosed with feline herpesvirus are given antibiotics to prevent secondary bacterial infections. Those with corneal ulcers also receive an antiviral medication to prevent permanent damage to the eyes. The antiviral medication is often paired with an oral form of the amino acid L-lysine to manage chronic herpesvirus infections. Pain relievers are sometimes needed to ease the pain of the corneal ulcers.
PREVENT THE SPREAD
Feline herpesvirus is highly contagious and can be spread through contact with the discharge from the eyes and nose of an infected cat. Contaminated items such as food and water bowls, hands, and bedding can harbor the virus and may transfer it from one cat to another. To help keep carrier cats from developing active infections, protect them from stress, and talk to your veterinarian about the use of L-lysine.
Precautions such as regular vaccinations, proper sanitation, and limited contact with sick, strange or wild cats should limit the chances of your cat becoming infected with feline herpes. |
Technology used: NetLogo
Course: BIOL0140 Ecology and Evolution
Learning objective: Allow students to experimentally investigate evolution through a computer simulation
Reason for using the technology: After using EcoBeaker in their labs for several years, Professor Matt Landis and his colleagues wanted to try a different simulation model. Because EcoBeaker is proprietary software, the instructors weren’t able to answer students’ questions about how the model worked. They also weren’t able to fix software bugs. Using NetLogo allowed Matt to build and modify the model to directly address pedagogical needs.
Description: Matt used NetLogo, a free programmable modeling environment, to build a model of a finch population on Daphne Major, an island in the Galapagos. Try it yourself here (works best with recent versions of Firefox). Matt and the other BIOL0140 instructors have used this model for two years in a lab entitled “Computer Simulations and Evolution of Darwin’s Finches.”
The “Computer Simulations and Evolution of Darwin’s Finches” lab lasts for 3 weeks. In the first week, students form groups, familiarize themselves with the model and choose a topic. For example, they might decide to test how well genetically diverse populations withstand environmental variation. The students will develop a hypothesis, load the model on a computer, adjust the weather with a slider bar, and watch for changes in the population over hundreds of years. In the second week, the students meet with their instructors to refine their topics and review their results. In the third week, they present their findings to the rest of the class.
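For readers who have never seen this kind of model, the toy sketch below gives the flavor of a weather-driven population simulation: rainfall varies from year to year, the seed crop tracks rainfall, and the seed supply caps the finch population. It is a hypothetical illustration written in Java rather than NetLogo, and none of it comes from Matt's actual model; every parameter in it is invented.

```java
import java.util.Random;

public class FinchToyModel {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double rainfallVariability = 0.5; // the "slider": 0 = steady climate, 1 = wild swings
        int population = 200;
        for (int year = 1; year <= 100; year++) {
            double rain = 1.0 + rainfallVariability * rng.nextGaussian();
            int seeds = (int) Math.max(0, 1000 * rain); // seed crop tracks rainfall
            int supported = seeds / 4;                  // each finch needs ~4 seeds
            // The population grows 10% in good years but cannot exceed the food supply.
            population = Math.min((int) (population * 1.1), supported);
            if (population <= 0) {
                System.out.println("Extinct in year " + year);
                return;
            }
        }
        System.out.println("Population after 100 years: " + population);
    }
}
```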
Matt learned NetLogo on his own. He worked from a model that he had created using other software, and he adapted the lab assignments from an EcoBeaker lab. The NetLogo site has a collection of samples that demonstrate other potential applications of the technology, including chemistry (polymer dynamics), political science (voting patterns), and public health (epidemiology). |
Laying out the JFrame
In the class, two buttons are created, eastButton and westButton. The assertion for the two declarations, the class invariant, lets the reader know that exactly one of the buttons is enabled at any point. As might be expected, the constructor adds the two buttons to the content pane. And two statements disable the west button and enable the east button, thus truthifying the class invariant. At the end of the constructor, the frame is packed, as usual.
Making the buttons listen
Making a button listen is a three-step process:

1. Write a procedure actionPerformed to process a button click. It must have one argument of type ActionEvent. Our procedure is given at the bottom of the class in Fig. 17.14. It stores in local variable b a boolean that indicates whether button eastButton is enabled and sets the enabledness of the two buttons accordingly. Here, you see calls to two methods of class JButton: isEnabled and setEnabled. This particular procedure does not access its parameter e. We talk about that later.

2. Have the class implement interface ActionListener. This ensures that actionPerformed appears in the class. Do this by putting an implements clause in the class header, as shown in Fig. 17.14. Do not worry if you do not know about interfaces and implements clauses. Just do this.

3. Add an instance of this class as an action listener for the button. For example, the following call adds this instance as a listener of button westButton. Remember that keyword this, used in a method, refers to the instance in which the method appears.
westButton.addActionListener( this );
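Since Fig. 17.14 is not reproduced in this excerpt, here is a minimal, self-contained reconstruction of the three steps. The button names and the invariant follow the text, but the layout details are our own assumptions rather than the book's exact code.

```java
import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

public class ToggleFrame extends JFrame implements ActionListener { // step 2
    private JButton eastButton = new JButton("east");
    private JButton westButton = new JButton("west");
    // Class invariant: exactly one of the two buttons is enabled.

    public ToggleFrame() {
        super("Toggle demo");
        add(eastButton, BorderLayout.EAST);   // constructor adds buttons to content pane
        add(westButton, BorderLayout.WEST);
        westButton.setEnabled(false);
        eastButton.setEnabled(true);          // truthify the class invariant
        eastButton.addActionListener(this);   // step 3
        westButton.addActionListener(this);
        pack();
        setDefaultCloseOperation(EXIT_ON_CLOSE);
    }

    // Step 1: on each click, flip which button is enabled.
    public void actionPerformed(ActionEvent e) {
        boolean b = eastButton.isEnabled();
        eastButton.setEnabled(!b);
        westButton.setEnabled(b);
    }

    public static void main(String[] args) {
        new ToggleFrame().setVisible(true);
    }
}
```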
Listening to mouse events |
This unit incorporates many hands-on activities and tries to give students opportunities to create models, practice inquiry skills, work with fellow students in teams, and reinforce concepts discussed in class, all using the theme of BIOSPHERES. The target audience is mainly biology, although it could easily be integrated into a life science or environmental science class.

I have done this unit for 7 years, from second graders in our Future Flight Hawaii space program to gifted and talented biology students in high school. I've adapted it to a space theme and an environmental theme. Here I present the Biosphere unit using the space theme from our Future Flight Hawaii program.

Preparation Time: A couple of days
Class Time Needed:
- 1 class day to make mobiles, to discuss major concepts, and to start planning their biospheres.
- 1 class day to set up and completely seal their biosphere(s) and to start taking data on it.
- 5 weeks - to take consistent daily observations every class day. This can be done at the beginning of each class day, at the end, or on the students' own time (at the teacher's discretion).
Scenario (for students):
You are an alien life form on a distant planet. Your home world has received distant radio transmissions from a particular solar system, and you have been sent as the head biologist on this mission to find the source of these transmissions and to see if life exists there. Your expedition team has traveled through the outer parts of this solar system and discovered no signs of life. But now you are approaching the unique and awesome Blue Planet.
Your Mission (for students):
Unlike your home planet in your faraway galaxy, you have found the Blue Planet to have liquid H2O, fascinating types of environments & Life!! You need to take a part of this unique biosphere you have never seen before home to your fellow aliens, so that they might have a "taste" of all you've experienced and seen. You want to collect some of these living things in their natural environment, but you want to be sure they'll survive the long journey home and also be able to reproduce.
The biosphere, or "Living Ball" is all the living and nonliving parts of the Earth that sustain life. Organisms can be found in almost every place on Earth. At the start of this unit, I introduce the concept of a biosphere. I use a mobile with a leaf, the sun, animals, a zip lock bag sealed with air inside of it, and a cloud hanging and balancing on it, to emphasize the importance of balance in a biosphere. If there is time, I have the students create their own mobiles (for the lower grades, mobiles are kept simple; for the high school students I have them make multi-tiered mobiles showing interrelationships in the biosphere). I then relate this concept to space travel and share the scenario and mission statement with them above. (The students also love to come up with their own alien design, name, etc.) Students begin to realize their alien space craft is a mini-biosphere and that all systems must be balanced and self-sustaining in order for them to survive.
After this, I have my aliens start designing their biospheres, researching the living things they are finding, and deciding what to seal up in their 1 gallon mayo jars. As alien exobiologists, they need to determine what type of ecosystem will be the most successful on their long 5 week journey home, and what type and number of organisms will be able to survive in such limited living space and resources. In class, we seal the jars and my alien exobiologists observe them for a minimum of 5 weeks. They regularly take observations, and create colorful sketches which they eventually put together into a full 20-40 page report. (Happily, I've found many of my students running into my class before the day starts or during breaks-even on days when I don't have them - just to get a glimpse of their biosphere and to see if it is still balanced and surviving.) As the weeks progress, students also begin to catch on to the importance of biogeochemical cycles in a biosphere.
The next phase of this unit again relates the idea of space travel to the importance of a self-sustaining biosphere. "How can we, as humans, survive in outer space?" "How can we grow plants in space- in a space station, on the moon, or on Mars?" "Is there fertile soil on the moon or Mars?" "Is it economically feasible for us to bring soil into space?" After we discuss these types of lead questions, we then go on to several extension activities.
The scenario now changes. They are humans again, but this time colonists on the moon. Their mission is to design and build a lunar biosphere, using materials they think they can find on the lunar surface/sub-surface. Students will add terrestrial animals and plants and then seal the system. Students who want to do more can design and build a Martian biosphere.
I then discuss once more with my students, the economic unfeasibility of growing plants on the moon or mars using soil from Earth. (It's too expensive!) "What can we do to grow plants?" Some believe it's impossible to grow plants without soil. With that, we do a hydroponic unit using a nutrient solution along with polycrystals and rockwool, creating hydroponic biospheres. (One teaspoon of polycrystals can absorb over 200 times its weight in water and will expand to about 3/4 cup when hydrated. Rockwool is spun basalt and serves as an excellent medium to grow plants in. It is also very lightweight.) We then do a three-way comparison of growing seeds--in soil, polycrystals, and rockwool--and we determine the most efficient and successful medium.
FOR THE FIRST SCENARIO AND MISSION:
Have the students bring in their own 1-gallon containers/jars. You may want to bring in a few, just in case. Also, assuming the students can afford it, have them decide on and bring in all their own biotic components. (I usually have them work in teams.) You will want to have measuring balances, measuring cups, and rulers available for students. Fluorescent lamps do well. Have one or two fish nets ready for students who may need help putting their fish in, and also have buckets, sponges and soap ready for cleanup at the end of the period. Finally, have rolls of tape ready to cover the lids after they are sealed, and have permanent markers to write names, period, date, and the time the biosphere was sealed.
Procedure/Description of Unit:
- Introduce the concept of biosphere, using the mobile.
- Have students design and build their own mobile showing their own concept of a biosphere and the complex interrelationships between abiotic and biotic factors.
- Introduce the scenario and mission.
- Break students into alien teams of 2 (for middle/elementary, you may want to make larger teams).
- Have them design their biosphere, research their abiotic and biotic parts, and decide what types and numbers of living/nonliving things they are going to seal into their jar.
- On the BIG MISSION DAY, have them bring in all their things and actually create and seal their biospheres. Be sure students write their names, date, period, and time in permanent ink, and double check that all biospheres are completely sealed.
- For the next 5 weeks, have students regularly take observations, either at the beginning or end of each class period. Have them keep accurate daily records of their observations and sketches.
- During the 5 weeks, you may want to go on and do the extension activities.
- After the 5 weeks are over, have students go over the successes and failures of their ecosystems and analyze the rest of their data.
- Have the students turn in a final report 1 week later.
FOR EXTENSION ACTIVITIES:
- Clear plastic bins (like the ones to put hamburgers in)
- Fine dirt
- Basalt rocks, etc.
- Sealing tape
- Biotic materials like plant seedlings and insects
Procedure/description of activity:
- Have students break into teams and brainstorm what they feel a lunar biosphere should look like and how it should be built.
- They should then try to create one using materials they think they would find on the moon (cinder rocks, very fine dust, etc.) -- have them put the abiotic materials into the clear plastic bins.
- Then they should add the biotic parts, seal it, and watch it (hopefully) grow -- they could also compare the lunar biosphere with their terrestrial one.
- Peanut and mung seedlings
- Plastic bins
- Nutrient solution
- Viewing tank
(*see references to learn where to order the rockwool, nutrient solution, and viewing tank)
- Potting soil
- 2 clear plastic bins
Procedure/description of activity:
- Assemble viewing tank, following the directions given with the unassembled kit*.
- Open up 6 packets of polycrystals -- equivalent to about 6 tablespoons.
- Make 1 quart of nutrient solution, using 1/4 tsp. of nutrients from bottle "a" and 1/4 tsp. of nutrients from bottle "b"*.
- Cut the rockwool into 1/2-inch cubes (*rockwool can be ordered in small sheet sizes -- see references below).
- Germinate seeds (I've used peanut seeds and mung beans).
- Get potting soil.
- Pour 1 pint of the nutrient solution into a 14 oz. plastic container.
- Add the 6 packets of polycrystals to the pint of nutrient solution in the container.
- Wait 1-2 hours for the crystals to hydrate completely.
- If necessary, add more nutrient solution until the crystals are almost saturated.
- Add the hydrated crystals directly into the viewing tank.
- Add 3 peanut and 3 mung seedlings to the tank. Be sure roots are properly immersed in the crystals.
- Next, get six 1/2-inch cubes of rockwool. Pour the nutrient solution over the cubes until they are saturated.
- Use a pencil to create a "well" in each cube about 1/4-1/2 inch deep.
- Plant 3 peanut and 3 mung seedlings, one in each cube. Then put 3 cubes each into a plastic bin and close the lid (do not seal).
- Finally, put potting soil into 2 small pots and plant the 6 seedlings about 1 inch apart in the soil (3 in each pot). Use the same nutrient solution to water them.
- Observe daily and compare your findings.
METHOD OF EVALUATION:
- For the first scenario and mission, the main mode of evaluation will be their report.
The report should include their daily written observations and sketches and also their
analysis of their findings, including a look at the successes and failures of their sealed
mini-biospheres. Team members can also evaluate and assess their teammates progress
and contributions in a peer/team evaluation form during or at the end of their project.
Assessment will also include the success rate of their biosphere and how well planned it was.
- For the extension activities:
- Lunar biosphere:
mainly looking at the creativity and accuracy of their design. Also will be assessing
the survival rate of their lunar biosphere and looking at how well they understood
what a biosphere is and how well they transferred the concept to another location (the moon).
- Plant growth comparison:
mainly will be assessing them through their report and verbal discussion of their
findings. Will be evaluating them on the accuracy and analysis of their findings.
*The polycrystals and viewing tank can be bought from the following company:
- Captivation, Inc.
- 9 Cannongate Drive
- Nashua, NH 03063
- (603) 889-1156
*The rockwool and nutrient solution can be bought from the following company:
- Great Bulbs of Fire
- RR2 Box 815
- New Freedom, PA 17349
- (717) 235-3882
- (717) 235-7144 (fax)
Along the banks of the Euphrates, gardens and orchards flourished even in the blazing heat of summer, and fish species unique to the river inhabited its waters. The Euphrates, which rises on the high plateaus to the north of Zeugma, has given birth to a succession of civilisations, nourished them with its flora and fauna, and brought them prosperity since prehistoric times.

The Euphrates flows to the east of the Turkish city of Gaziantep, which lies at the centre of the area of Upper Mesopotamia known as the Fertile Crescent. In the Middle Paleolithic Age, when Neanderthal man was spreading around the world, the conducive conditions here made Gaziantep an important centre of human settlement. In the area stretching westwards from the banks of the Euphrates to the Islahiye Plain are the traces of many civilisations, including the renowned archaeological sites of Kargamis, Zincirli, Tilmen Höyük and Sakçagözü.
Antifreeze proteins (AFP) are naturally occurring proteins that inhibit the formation of ice crystals when water temperature drops to freezing levels.
These proteins are usually found in organisms that live in subzero environments such as Antarctica. Certain vertebrates, plants, fungi and bacteria can inhibit the growth and recrystallization of ice in their bodies allowing them to survive in these temperatures.
Antifreeze proteins are not the same as automobile antifreeze, ethylene glycol. Ethylene glycol lowers the freezing temperature of water based on the concentration of the chemical in it.
New Family of Anti-Freeze Molecules Discovered
Chemists at New York University have discovered a family of anti-freeze molecules that prevent ice formation when water temperatures drop below 32 degrees Fahrenheit. They have reported their findings in the latest issue of the Proceedings of the National Academy of Sciences (PNAS).
"The growth and presence of ice can be damaging to everything from our vehicles to food to human tissue, so learning how to control this process would be remarkably beneficial," says co-author Kent Kirshenbaum, an associate professor in NYU's Department of Chemistry. "Our findings reveal how molecules ward off the freezing process and give new insights into how we might apply these principles elsewhere."
Applications for these types of molecules are numerous. They can be used in:
- Protecting crops from the cold and extending the harvest season in cooler climates
- Increasing the efficiency of fish farms during cold weather
- Increasing shelf life of frozen food
- Using extreme cold to destroy abnormal or diseased tissue without adverse effects to surrounding tissue
- Preservation of tissues and organs used in medical transplants and transfusions
- Hypothermia therapy
Water Doesn't Have to Freeze
A common misconception is that water necessarily freezes when temperatures reach 32 degrees Fahrenheit or zero degrees Celsius. Not so, scientists point out.
"Nature has its own anti-freeze molecules," explains co-author Michael Ward, chair of NYU's Department of Chemistry. "We simply don't have the details on how they work."
To explore this topic, the researchers created artificial, simplified versions of protein molecules that, in nature, inhibit or delay freezing. These molecules were placed in microscopic droplets of water, and ice formation was monitored by video microscopy and X-ray analysis. The experiments allowed the researchers to determine which critical chemical features were required to stymie ice crystallization.
The results show that there are two strategies the molecules adopt to inhibit freezing:
- Freeze avoidance: The molecules prevent the formation of ice by reducing the temperature at which ice begins to form. This behavior slows down freezing but may be overcome when the temperature gets too cold.
- Freeze tolerance: The molecules interact with the forming ice to slow down its accumulation. This can prevent the damage caused by freezing, but not freezing altogether.
Sources: New York University; Proceedings of the National Academy of Sciences (PNAS)
With industrial growth along the canal, Griffintown became a working-class ghetto of mainly Irish immigrants. The booming industrialization of Montreal provided both skilled and unskilled work for these immigrants, most of whom settled in areas around the basins: Point St. Charles, Goose Village and Griffintown. As the community grew, a church was built to accommodate the needs of the largely Irish Catholic community. St. Ann’s Church and the monastics who administered the school would become integral to the ebb and flow of the community until the church’s destruction in 1970.
While Irish immigration had been occurring during the early nineteenth century, famine and economic depression in Ireland in 1845-52 propelled emigration from the island, often in squalid coffin ships. (Thorton and Olson: Tidal Wave, 2) Large construction projects such as the Lachine Canal (finished in 1825), the Grand Trunk Railroad depot by Point St. Charles in 1853, and the Victoria Bridge in 1859 provided work in the region. In addition to these projects, heavy industry such as machinery and metalwork, furniture manufacturing, and textiles employed over 4,000 men, women and children within the area, though, as with most heavy industry, the workforce was primarily men and children. (Solonysznyj: Residential Persistence, 5)
Montreal was experiencing an industrial expansion. The railroad boom in the United States provided a market for metallurgy, and the canal allowed Montreal to grow as a port city, replacing Halifax and Port Hawkesbury as the gateway to Canada from the Atlantic. The Grand Trunk Railroad depot employed 750 workers (Ibid., 6), many of whom were better paid than other ostensibly unskilled workers within Montreal. The bulk of employment in the area remained in unskilled factory positions.
The economic contraction of 1873 resonated throughout working-class Montreal. As protectionist tactics closed American and British borders to Canadian goods, the unresolved Canadian tariff system left Canadian markets flooded with foreign goods. Prices fell, and many firms that had opened during the economic boom were forced to declare bankruptcy. John A. Macdonald’s National Policy remedied some of the industrial decline, though the fate of industrial capitalism remained precarious. Job insecurity and loss of employer confidence plagued the area after the depression: “the permanent sense of insecurity was a part of everyday life for the people of St Anne’s.” (Sobolewsky: Residential Persistence, 36)
Tragedy hit the region twice. The first blow was a city-wide smallpox epidemic in 1885 that ravaged the working-class neighborhoods, whose geographic circumstances and lack of public works facilitated the spread of the disease. The epidemic claimed 3,000 lives. (Ibid., 8) One year later, the protective barriers along the St Lawrence river gave way to the rising waters, which inundated low-lying regions of Montreal, including Griffintown and the Grand Trunk Railroad depot in Point St Charles. Homes and businesses were flooded, but the most devastating result of the flood was the forced closure of several Grand Trunk Railroad shops. 1,500 people were laid off and 100 found themselves homeless. (Ibid.)
The area sustained itself through foundries and factories that took advantage of the cheap labour and the proximity to both the port and the railway. Despite the deindustrialization around Griffintown, the community flourished and began to see a more diverse group of immigrants (Jewish, Greek, Italian, Ukrainian), though the Irish remained the dominant group. (Solonysznyj, Residential Persistence: 38)
What is Early Literacy?
Early Literacy is what children know about reading and writing before they can actually read and write. Research shows that children get ready to read in a variety of ways years before they start school (right from birth even!). Our Early Literacy section gives parents and caregivers the information and ideas they need to help children prepare for school, as well as encourage literacy habits that build a confident and lifelong reader. For more information, select an Early Literacy link below.
- Our list of 100 favorite picture books for pre-readers (printable).
- An introduction to our popular pre-reader program that builds a love of reading at home. Printable book lists are available.
- Find library events and programs for kids.
- Find preschools and daycares, and learn about local resources for those in need.
- Get the latest on which apps and websites are right for your child, and learn more about early literacy development for your family.
- Can’t get to the library? You can use our e-resources to build a love of reading right from home, or use some of our in-house e-resources to extend your child’s learning.
- A quarterly newsletter to help parents of children from birth to age five put early literacy principles into practice.
- Help your child get ready to read with these five early learning practices.
- Learn how these six early learning skills can be incorporated into your child’s daily life.
- Help enhance your child’s reading skills with these printable activities and worksheets.
FIRE AND FOREST MANAGEMENT
The vast majority of western dry forests are at risk of large, high-intensity fire because of the effects of poor forest management over the past century. The primary factors that lead to current forest conditions include logging large trees, fire suppression, and livestock grazing. Since the beginning of the 20th century, all three of these factors have been present in western forests, and they continue to play a role today.
Logging operations have historically removed the largest and most fire-resistant trees. The young trees that replace cut trees are highly susceptible to fire and serve as fire ladders, allowing the fire to reach up into the canopy of the forest. Because fire-suppression efforts have been intensive and have effectively removed fire as a thinning agent from most forests, many small trees that would have been killed by fire have been allowed to survive. Besides being prone to fire, these small trees are present at such high densities that their growth is slowed by intense competition.
The relatively frequent, low-intensity surface fires that historically burned in many forests were carried primarily by ground vegetation such as grasses. But livestock grazing on our public lands has severely reduced the amount of grasses, and fires are now able to burn only when there is significant buildup of woody debris, often leading to severe fires. By shading the ground, grasses would suppress the growth of tree seedlings at the youngest stages. With grasses reduced or cropped short by livestock, tree seedlings are much more likely to survive, growing at high density and encroaching on meadows and grasslands.
The Center has three objectives in terms of policy on fire in western forests, and we’re promoting these through our work on forest management in New Mexico and Arizona, our programs to curb urban sprawl in fast-growing regions like southern California, and elsewhere.
First, fire policy must provide wildland-urban interface communities with protection from the threat of forest fire. Second, it must be geared at reducing the severity of unnatural forest fires and reintroducing fire as a natural component of the ecosystem. Third, forests should be put on a trajectory toward recovery through the reintroduction and enhancement of a range of natural forest ecosystem processes. The Center’s highest priorities include protecting lives and houses in the communities that are currently at risk from forest fires. At the same time, it is critical to protect areas of special concern, such as municipal watersheds and reservoirs and habitat for sensitive species.
Numeric Data Types
Visual Basic supplies several numeric data types for handling numbers in various representations. Integral types represent only whole numbers (positive, negative, and zero), and nonintegral types represent numbers with both integer and fractional parts.
For a side-by-side comparison of the Visual Basic data types, see the data type summary in the language documentation.
Integral Numeric Types
Integral data types are those that represent only numbers without fractional parts.
The signed integral data types are SByte (8-bit), Short (16-bit), Integer (32-bit), and Long (64-bit). If a variable always stores integers rather than fractional numbers, declare it as one of these types.
The unsigned integral types are Byte (8-bit), UShort (16-bit), UInteger (32-bit), and ULong (64-bit). If a variable contains binary data, or data of unknown nature, declare it as one of these types.
Arithmetic operations are faster with integral types than with other data types. They are fastest with the Integer and UInteger types in Visual Basic.
If you need to hold an integer larger than the Integer data type can hold, you can use the Long data type instead. Long variables can hold numbers from -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807. Operations with Long are slightly slower than with Integer.
If you need even larger values, you can use the Decimal data type. You can hold numbers from -79,228,162,514,264,337,593,543,950,335 through 79,228,162,514,264,337,593,543,950,335 in a Decimal variable if you do not use any decimal places. However, operations with Decimal numbers are considerably slower than with any other numeric data type.
If you do not need the full range of the Integer data type, you can use the Short data type, which can hold integers from -32,768 through 32,767. For the smallest integer range, the SByte data type holds integers from -128 through 127. If you have a very large number of variables that hold small integers, the common language runtime can sometimes store your Short and SByte variables more efficiently and save memory consumption. However, operations with Short and SByte are somewhat slower than with Integer.
If you know that your variable never needs to hold a negative number, you can use the unsigned types Byte, UShort, UInteger, and ULong. Each of these data types can hold a positive integer twice as large as its corresponding signed type (SByte, Short, Integer, and Long). In terms of performance, each unsigned type is exactly as efficient as its corresponding signed type. In particular, UInteger shares with Integer the distinction of being the most efficient of all the elementary numeric data types.
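The declarations below sketch how these types are used; the variable names and values are illustrative, and the MinValue and MaxValue constants are the standard .NET way to query each type's range.

    ' Minimal sketch: declaring integral variables of different sizes.
    Module IntegralTypesDemo
        Sub Main()
            Dim flags As Byte = 255                ' 8-bit unsigned: 0 through 255
            Dim small As Short = 32767             ' 16-bit signed: -32,768 through 32,767
            Dim counter As Integer = 2000000000    ' 32-bit signed: fastest for arithmetic
            Dim big As Long = 9223372036854775807  ' 64-bit signed maximum

            ' Each numeric type exposes its range as constants.
            Console.WriteLine(Integer.MaxValue)    ' 2147483647
            Console.WriteLine(Long.MinValue)       ' -9223372036854775808
        End Sub
    End Module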
Nonintegral Numeric Types
Nonintegral data types are those that represent numbers with both integer and fractional parts.
The nonintegral numeric data types are Decimal (128-bit fixed point), Single (32-bit floating point), and Double (64-bit floating point). They are all signed types. If a variable can contain a fraction, declare it as one of these types.
Decimal is not a floating-point data type. Decimal numbers have a binary integer value and an integer scaling factor that specifies what portion of the value is a decimal fraction.
Floating-point (Single and Double) numbers have larger ranges than Decimal numbers but can be subject to rounding errors. Floating-point types support fewer significant digits than Decimal but can represent values of greater magnitude.
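A short sketch of that trade-off (the loop and values are illustrative): one tenth has no exact binary representation, so accumulating it in a Double drifts slightly, while the Decimal sum stays exact.

    ' Minimal sketch: Decimal keeps decimal fractions exact; Double rounds.
    Module NonintegralDemo
        Sub Main()
            Dim sumDouble As Double = 0.0
            Dim sumDecimal As Decimal = 0D
            For i As Integer = 1 To 10
                sumDouble += 0.1    ' binary floating point: tiny rounding error each step
                sumDecimal += 0.1D  ' fixed-point decimal: exact
            Next
            Console.WriteLine(sumDouble = 1.0)  ' False
            Console.WriteLine(sumDecimal = 1D)  ' True
        End Sub
    End Module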
Nonintegral number values can be expressed as mmmEeee, in which mmm is the mantissa (the significant digits) and eee is the exponent (a power of 10). The highest positive values of the nonintegral types are 7.9228162514264337593543950335E+28 for Decimal, 3.4028235E+38 for Single, and 1.79769313486231570E+308 for Double.
Double is the most efficient of the fractional data types, because the processors on current platforms perform floating-point operations in double precision. However, operations with Double are not as fast as with the integral types such as Integer.
For numbers with the smallest possible magnitude (closest to 0), Double variables can hold numbers as small as -4.94065645841246544E-324 for negative values and 4.94065645841246544E-324 for positive values.
Small Fractional Numbers
If you do not need the full range of the Double data type, you can use the Single data type, which can hold floating-point numbers from -3.4028235E+38 through 3.4028235E+38. The smallest magnitudes for Single variables are -1.401298E-45 for negative values and 1.401298E-45 for positive values. If you have a very large number of variables that hold small floating-point numbers, the common language runtime can sometimes store your Single variables more efficiently and save memory consumption.
Sign into Remind.com
Things you need to do the FIRST WEEK of class:
1. Get a 1 inch, 3 ring binder. THIS IS REQUIRED. You will have difficulty passing this class without it.
2. Log into the US History 2 section of REMIND.COM (see left).
3. Sign into our Online Textbook (Brinkley). Directions given in class.
4. Read The American Vision (TAV) chapter 14 (tab at left).
5. Answer the reading checks that you find as you read the chapters.
6. In the section assessment, complete #1 (Define) and #2 (Identify).
7. Sign into TURNITIN
US 1 REVIEW- WESTERN EXPANSION AND ITS IMPACT ON THE AMERICAN CHARACTER (1860-1895)
U.S. History Term Paper Assignment
- The Term Paper is worth 15% of the 1st Semester Grade. It must be submitted through TURNITIN.COM
- Print out 1 copy of both drafts to turn in to the teacher.
- All work is to be typed and done in the MLA style as described in The Little, Brown ESSENTIAL HANDBOOK.
- Begin with your topic question and your text book. Locate your topic in the index and read the relevant pages.
- Select a topic that interests you:
· How did the Muckrakers awaken the public to the growing social, economic, & political inequities in the nation?
· How did progressive reforms benefit society?
· Why did the Grange develop in the late 1800s?
· How did the Supreme Court decision in Plessy v. Ferguson affect the Jim Crow laws?
· How did attitudes toward women & minorities hurt labor unions in the late 1800s?
· How did the Industrial Revolution change America?
· How did the Industrial Revolution change the role of women in America?
· How did women’s suffrage affect America?
· Why did the population of the U.S. double from 1880-1920?
· How did rapid urbanization impact American life in the late 1800s?
· How did progressives attempt to address problems linked to urbanization & industrialization?
· Why did the immigration law of 1917 develop and how was it different from the laws passed in 1882 & 1907?
THE RISE OF AMERICAN IMPERIALISM (1890-1913) Things to consider
Life Science: Session 2
CO2 and O2
Which organisms require CO2?
Life on this planet is characterized by the need to acquire carbon to make the organic molecules that compose and are used by an organism’s cells. Some organisms, like plants, some protists, and many bacteria, are able to extract carbon dioxide gas (CO2) from the environment and convert it into organic carbon. In photosynthesis, for example, the carbon in CO2 becomes part of a sugar molecule, which becomes a source of energy as well as building materials. The gas CO2 is thus a key molecule in organisms that make their own food.
Which organisms require O2?
Another critical gas is oxygen gas (O2). Much of life on this planet is aerobic, meaning oxygen is required for survival. Oxygen serves as a key constituent in the process that releases the energy stored in food. The oxygen is used in cell respiration, which is a process that is much like burning a candle. When a candle is burned, O2 combines with the chemicals that store energy in wax, producing light and heat energy. When food is burned, O2 combines with sugar, making its energy available to fuel cell processes.
Something that surprises many people is that plants and other photosynthesizers require O2 as well as CO2. Photosynthesis is indeed the process by which plants make food, but once this food is made, cellular respiration is required to release its energy. Plants thus require both gases for survival.
Is O2 always required to “burn” food for energy?
Many organisms, even humans, are capable of anaerobic energy production in a process known as fermentation. Fermentation processes do not produce as much energy as aerobic reactions and often generate harmful byproducts. An example in humans is the production of lactic acid by lactic acid fermentation in muscle cells during exercise. This occurs when O2 supplies in muscles are insufficient and the body adjusts by using fermentation as an energy reaction. This leads to the familiar burning sensation in overworked muscles. Fermentation is the process that creates many food products: yogurt, wine, and cheese, for example. In addition, there are certain bacteria, archaea, and protists that are strictly anaerobic, meaning oxygen is poisonous to them. Their only energy reaction is fermentation. These organisms live in habitats such as sediments or lakes that totally lack oxygen.
There is a story, often told, that on the last day of the Constitutional Convention of 1787, citizens of Philadelphia gathered outside Independence Hall to learn what form of national government the convention had produced during the closed-door meetings.
A woman approached Benjamin Franklin and asked, “Well Doctor, what have we got, a republic or a monarchy?” Franklin replied, “A republic… if you can keep it.”
From our historical perspective – 229 years later – it may seem like an absurd question. After all, the colonists had just fought the Revolution to overthrow a monarch, so why would the Constitutional Convention produce another monarchy? But Americans had been raised as the subjects of a monarchy. Monarchies were the most dominant form of government throughout history, and they were still the prevalent form of government around much of the world at that time.
The notion that people could live in a free country with individual liberty was almost unheard of. Plus, in the 1780s, many Americans expected that George Washington might be president for life – a kind of elected monarch.
So the question wasn’t absurd. And Franklin wasn’t being flippant with his answer. Those five words – “if you can keep it” – held a deep meaning to him. Franklin, like most of the Founders, had been greatly influenced by the republics of antiquity, especially the ancient republic of Rome. And he knew – as did all the Founders – that it was difficult to keep a republic alive and well. (For the record, the dictionary definition of a “republic” is “a state in which the supreme power rests in the body of citizens entitled to vote and is exercised by representatives chosen directly or indirectly by them.” And – importantly – the head of state is not a monarch.)
Gordon S. Wood – an award-winning author and professor of history – has written extensively on the American founding and the differences between monarchies and republics. Monarchs, he says, possessed a number of means for holding their diverse and corrupt societies together. Republics, on the other hand, possessed few of the “adhesive attributes of monarchies. Therefore, order…would have to come from below, from the virtue or selflessness of the people themselves. Yet precisely because republics were so utterly dependent on the people, they were also the states most sensitive to changes in the moral character of their societies.
“In short, republics were the most delicate and fragile kinds of states. There was nothing but the moral quality of the people themselves to keep republics from being torn apart by factionalism and division. Republics were thus the states most likely to experience political death. Without virtue and self-sacrifice, republics would fall apart.”
Here, in 2016, we take it as a matter of faith that the colonists united, threw off an oppressive monarchy in a long, hard-fought war, then created a republic that grew to span the continent and became the greatest nation on earth. That outcome, however, was far from certain.
“After all,” Wood says, “those thirteen colonies made up an insignificant proportion of the Western world, numbering perhaps two million people, huddled along a narrow strip of the Atlantic coast, three thousand miles from the centers of civilization.”
But the Americans “began their Revolution in a spirit of high adventure. They knew they were embarking on a grand experiment in self-government. That experiment remained very much in doubt during the first half of the nineteenth century, especially during the Civil War, when monarchy still dominated all of Europe. Hence we can understand the importance of Lincoln’s Gettysburg Address, in which he described the Civil War as a test of whether a nation conceived in liberty could long endure. This idea that republican government was a perilous experiment was part of America’s consciousness from the beginning.”
So Franklin well knew what he was saying that day outside Independence Hall. Many nations had lived and died before we began our grand experiment. The death of Rome was of particular interest. Reading about the fall of Rome from the great Latin writers of antiquity, people of the eighteenth century came to realize, Wood says, “that the Roman republic became great not simply by the force of its arms; nor was it destroyed by military might. Both Rome’s greatness and its eventual fall were caused by the character of its people.
“As long as the Roman people maintained their love of virtue, their simplicity and equality,” their scorn of great social distinctions, “and their willingness to fight for the state, they attained great heights of glory. But when they became too luxury-loving, too obsessed with refinements and social distinctions, too preoccupied with money,” and too self-indulgent to fight for the state, “their politics became corrupted, selfishness predominated, and the dissolution of the state had to follow. Rome fell not because of the invasions of the barbarians from without, but because of decay from within.”
In 1852, Hungarian patriot Louis Kossuth, in a speech here in the United States, said that it was “America’s destiny to become the cornerstone of Liberty on earth. Should the Republic of America ever lose this consciousness of this destiny,” that moment would be the beginning of America’s decline.
It’s been 229 years since Franklin said, “a republic…if you can keep it.” America may not still have that “new-republic smell” like when it was first driven off the showroom floor. There may be some dents and dings and rips in the upholstery, but the republic is still here. Through good times and lean, we have somehow managed to survive. Perhaps against all odds, America – this grand experiment in self-government and freedom – is still the beacon of hope and the “cornerstone of Liberty on earth.”
Can we keep it alive? That depends on us, the citizens of this great nation. As with Rome, the republic relies on “our moral character, our virtue and self-sacrifice.” Let us be equal to the task so that in 200 years, it can be said, “a republic…and you have kept it.”
Happy Fourth of July everyone.
Paul E. Pfeifer is the senior associate justice on the Ohio Supreme Court, serving since 1993. He resides in Bucyrus.
INTRODUCTION: A printed circuit board (PCB) is a sheet of insulating material, usually Bakelite, with metallic circuitry photochemically formed upon that material or substrate. Small holes are provided in this sheet to accommodate the various components of the circuit to be assembled. Interconnections between components are achieved by means of conducting paths (the metallic conductor pattern) running along or through the substrate, called tracks. A track meets the component to which it is to be linked at a larger conductor area known as a land or pad. The electrical connection between a land and a component’s terminal is achieved by means of a solder joint. Every circuit has its own PCB, but the process of manufacturing is more or less the same. First of all, a Bakelite sheet of the proper size is taken and covered entirely with a layer of copper. The circuit lines are then drawn on the copper and covered with an enamel layer. After the paint has dried, the board is dipped into a solution of ferric chloride with a few drops of hydrochloric acid (HCl). The copper (except the painted portion) is dissolved in the solution; later on, the paint is cleaned off with petrol or kerosene. Holes are then drilled in the board to accommodate the components. To protect against dampness, the copper tracks are covered with a layer of varnish.
ADVANTAGES OF PCB: The copper tracks on the PCB serve the purpose of wires, so a great deal of wiring is saved. Circuit characteristics can be maintained without introducing variation in inter-circuit capacitance. Mass production can be attained at lower cost. Inspection time is decreased as the probability of error is reduced. And because the components are tightly fixed, the risk of short circuiting is minimised.
Like other islands vulnerable to exotic species, the Dry Tortugas support a population of black rats (Rattus rattus). Rats have inhabited the Tortugas probably since the arrival of humans. Though the size of the rat population is controlled through abatement efforts, rats may be encountered as they forage through campsites looking for food. Campers are most affected by this nuisance rat population and are made aware, through site bulletins and ranger education, of the need to safeguard food. The only reliable way to protect food and prevent damage to gear is to store your food and food trash in hard-sided containers. Rats will chew through tents or backpacks if they smell food and can also climb the provided hanging posts.
Rats also can potentially affect nesting sea turtle and bird activities through predation of eggs. The Dry Tortugas provide critically valuable nesting habitat for vulnerable wildlife such as sea turtles and sooty terns because of the lack of native mammalian predators like raccoons and fox. Because one of the mission goals of the National Park Service is to ensure that native plants and animals are not impaired by invasive exotic plants or animals, abatement efforts against the rat population take place regularly.
In Nevada, Fires Follow Rain
When you think of a desert state like Nevada, wildfires might not immediately come to mind. The low brush and shrubs in Nevada can fuel fires, however, and summers following above-average winter rains often see more fires than usual. This is because the rain promotes plant growth, so there is more fuel available to spread a big fire.
Images from satellites can help scientists identify where wildfires are burning. They can also measure the "greenness" of the land, which represents how abundant plant growth is on the ground. Images from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) capture this greenness, and every week the Wildland Fire Assessment System analyzes the conditions. This constant measurement provides a useful indicator of what the wildfire risk is from region to region. When the image is brown or red, it means the ecosystem is very dry. On the other hand, green on the land means there is more lush vegetation and plant growth.
More fires are expected in Nevada when MODIS images are greener than average during the fire season. In the above graphic, the greenness images were taken during the weeks of June 24, 2003 and June 28, 2005; these weeks occurred early in the fire season, when many wildfires were just beginning.
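Greenness measures of this kind are commonly derived from a normalized difference of two reflectance bands; the sketch below assumes the familiar NDVI formulation, with made-up reflectance values rather than actual MODIS data.

    ' Minimal sketch: a normalized difference vegetation index (NDVI), a
    ' typical way satellite "greenness" is computed from red and
    ' near-infrared reflectance. Values near 1 indicate lush vegetation.
    Module GreennessDemo
        Function Ndvi(nearInfrared As Double, red As Double) As Double
            Return (nearInfrared - red) / (nearInfrared + red)
        End Function

        Sub Main()
            Console.WriteLine(Ndvi(0.5, 0.08)) ' ~0.72: dense green growth
            Console.WriteLine(Ndvi(0.25, 0.2)) ' ~0.11: dry, sparse cover
        End Sub
    End Module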
Help children develop spelling and word formation with Roll A Word dice. Simply roll the dice and let the letters guide you. Firstly roll the dice and view the upper face, then group dice together to form words. Choose the dice you want to use and the best dice to start your word. Contains 10 dice and 1 handy carry sack. Ages 4-9 years. WARNING: CHOKING HAZARD - SMALL PARTS. Not for children under 3 yrs.
Blood groups are determined by the presence or absence of certain antigens (proteins and sugars) on the red blood cell membrane. Normally dogs do not have antibodies against any of the antigens present on their own red blood cells or against other canine blood group antigens unless they have been previously exposed to them by transfusion. In some species (such as humans and cats), however, antibodies from one individual that react with antigens of another individual of the same species may be present without any prior exposure.
Dogs have many blood groups, and their red blood cells may contain any combination of these since each blood group is inherited independently. The most important of these is called Dog Erythrocyte Antigen (DEA) 1.1. Typing of blood donors and recipients is done before transfusion. Approximately 40% of dogs are positive for DEA 1.1, meaning that they have that antigen on their red blood cells. By selecting donor animals that lack DEA 1.1 or that match the recipient, the risk of sensitizing the recipient can be minimized. If a dog is DEA 1.1-negative and is given DEA 1.1-positive blood, it may develop antibodies that rapidly destroy the red blood cells if a second DEA 1.1-positive transfusion is given. A DEA 1.1-positive dog may receive either positive or negative blood.
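The matching rule above reduces to a one-line compatibility check. This sketch is only an illustration of that logic (the function name and boolean encoding are mine), not veterinary software.

    ' Minimal sketch of the DEA 1.1 rule: positive recipients tolerate either
    ' type; negative recipients should receive only DEA 1.1-negative blood.
    Module TransfusionDemo
        Function IsDea11Compatible(donorIsPositive As Boolean,
                                   recipientIsPositive As Boolean) As Boolean
            Return recipientIsPositive OrElse Not donorIsPositive
        End Function

        Sub Main()
            Console.WriteLine(IsDea11Compatible(True, False))  ' False: risk of sensitization
            Console.WriteLine(IsDea11Compatible(False, True))  ' True
        End Sub
    End Module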
An animal's blood group is determined by measuring the reaction of a small sample of blood to certain antibodies. Dogs are routinely typed only for the most potent antigen, DEA 1.1. In addition to DEA 1.1 at least 12 other blood group systems are present. Although the risk is less, any antigen might cause a reaction if those cells are given to a previously sensitized dog. Any dog that has had a previous transfusion may have antibodies to any of the blood group antigens not present on their own red blood cells. These antibodies can be detected by testing the red blood cells from a potential donor with plasma (the clear, yellowish liquid part of blood) taken from the recipient. This procedure is called a major crossmatch. If agglutination occurs, the recipient has antibodies that could destroy the donated red blood cells. That donor is incompatible and should not be used.
Often, the need for a blood transfusion is an emergency, such as severe bleeding or sudden destruction of red blood cells due to other disease. Transfusions may also be needed to treat anemia. Animals with blood clotting disorders often require repeated transfusions of whole blood, red blood cells, plasma, or platelets. The most serious risk of transfusion is acute destruction of red blood cells, usually caused by a previously formed antibody to DEA 1.1, or to another antigen. Fortunately, this is rare. A more common problem in dogs that have received multiple transfusions is delayed destruction of the red blood cells, caused by antibodies to some of the minor blood group antigens.
Other complications of transfusions include infection from contaminated blood, a decrease in blood calcium levels, and accumulation of fluid in the lungs as a result of giving too large a volume of blood. Skin hives, fever, or vomiting are seen occasionally. Fortunately most transfusions are safe and effective.
Last full review/revision July 2011 by Peter H. Holmes, BVMS, PhD, Dr HC, FRCVS, FRSE, OBE; Nemi C. Jain, MVSc, PhD; David J. Waltisbuhl, BASc, MSc; Michael Bernstein, DVM, DACVIM; Karen L. Campbell, MS, DVM, DACVIM, DACVD; Timothy M. Fan, DVM, PhD, DACVIM; Wayne K. Jorgensen, BSc, PhD; Susan L. Payne, PhD
When I saw this image appearing in my RSS I couldn't tell what it was. It looked like a close up of the skin of some animal. Perhaps a detail of a bird or a reptile, I thought. Maybe a colorized microscopic view into some human body part. The answer couldn't possibly be more different than what I expected.
You are looking at Mars! A photograph taken by the HiRISE camera aboard NASA's Mars Reconnaissance Orbiter at the beginning of 2014. It's a "giant landform on Mars," according to the space agency, one with "steep faces or slip faces several hundreds of meters tall" that was "formed over thousands of Mars years, probably longer."
Sandy landforms formed by the wind, or aeolian bedforms, are classified by the wavelength—or length—between crests. On Mars, we can observe four classes of bedforms (in order of increasing wavelengths): ripples, transverse aeolian ridges (known as TARs), dunes, and what are called "draa." All of these are visible in this Juventae Chasma image.
Ripples are the smallest bedforms (less than 20 meters) and can be observed only in high-resolution images, commonly superposed on many surfaces. TARs are slightly larger bedforms (wavelengths approximately 20 to 70 meters), which are often light in tone relative to their surroundings. Dark-toned dunes (wavelengths 100 meters to 1 kilometer) are a common landform and many are active today. What geologists call "draa" is the highest-order bedform, with the largest wavelengths (greater than 1 kilometer), and is relatively uncommon on Mars.
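Since the classes are defined by wavelength alone, they can be expressed as a simple threshold rule. In this sketch, how to label the 70-100 meter range that the text leaves unassigned is my own assumption.

    ' Minimal sketch: classifying aeolian bedforms by crest-to-crest wavelength.
    Module BedformDemo
        Function ClassifyBedform(wavelengthMeters As Double) As String
            If wavelengthMeters < 20 Then Return "ripple"
            If wavelengthMeters <= 70 Then Return "transverse aeolian ridge (TAR)"
            If wavelengthMeters <= 1000 Then Return "dune"
            Return "draa"
        End Function

        Sub Main()
            Console.WriteLine(ClassifyBedform(350))  ' dune
            Console.WriteLine(ClassifyBedform(2500)) ' draa
        End Sub
    End Module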
eXtensible Markup Language
A standard that forms the basis for most modern markup languages. XML is an extremely flexible standard that defines only the "ground rules" for other languages, each of which specifies a format for structured data designed to be interpreted by software on devices. XML by itself is not a data format.
Examples of XML-based standards include xHTML, for creating web pages; RSS, for feeds of new information (such as news headlines); and SyncML, for managing personal data such as contacts, email, files, and events.
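A tiny example makes the "ground rules" point concrete: XML fixes the syntax of tags and nesting, while a standard such as RSS decides what the elements mean. The fragment and element names below are illustrative, parsed with .NET's XDocument.

    Imports System.Linq
    Imports System.Xml.Linq

    Module XmlDemo
        Sub Main()
            Dim feed As XDocument = XDocument.Parse(
                "<rss><channel><item><title>Example headline</title></item></channel></rss>")
            ' XML's ground rules let any conforming parser walk the tree;
            ' knowing that <title> is a headline comes from the RSS standard.
            Console.WriteLine(feed.Descendants("title").First().Value)
        End Sub
    End Module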
The scarcity of young Sequoias strikes every visitor, the fact being that they are only to be found in certain favored spots. These are, either where the loose debris of leaves and branches which covers the ground has been cleared away by fire, or on the spots where trees have been uprooted. Here the young trees grow in abundance, and serve to replace those that fall. The explanation of this is, that during the long summer drought the loose surface debris is so dried up that the roots of the seedling Sequoias perish before they can penetrate the earth beneath. They require to germinate on the soil itself, and this they are enabled to do when the earth is turned up by the fall of a tree, or where a fire has cleared off the debris. They also flourish under the shade of the huge fallen trunks in hollow places, where moisture is preserved throughout the summer. Most of the other conifers of these forests, especially the pines, have much larger seeds than the Sequoias, and the store of nourishment in these more bulky seeds enables the young plants to tide over the first summer’s drought. It is clear, therefore, that there are no indications of natural decay in these forest giants. In every stage of their growth they are vigorous and healthy, and they have nothing to fear except from the destroying hand of man.
[Illustration: REDWOOD TREE WITH TRIPLE TRUNK.]
Destruction from this cause is, however, rapidly diminishing both the giant Sequoia and its near ally the noble redwood (Sequoia sempervirens), a tree which is more beautiful in foliage and in some other respects more remarkable than its brother species, while there is reason to believe that under favorable conditions it reaches an equally phenomenal size. It once covered almost all the coast ranges of central and northern California, but has been long since cleared away in the vicinity of San Francisco, and greatly diminished elsewhere. A grove is preserved for the benefit of tourists near Santa Cruz, the largest tree being two hundred and ninety-six feet high, twenty-nine feet diameter at the ground and fifteen feet at six feet above it. One of these trees having a triple trunk is here figured from a photograph. Much larger trees, however, exist in the great forests of this tree in the northern part of the State; but these are rapidly being destroyed for the timber, which is so good and durable as to be in great demand. Hence Californians have a saying that the redwood is too good a tree to live. On the mountains a few miles east of the Bay of San Francisco, there are a number of patches of young redwoods, indicating where large trees have been felled, it being a peculiarity of this tree that it sends up vigorous young plants from the roots of old ones immediately around the base. Hence in the forests these trees often stand in groups arranged nearly in a circle, thus marking out the size of the huge trunks of their parents. It is from this quality that the tree has been named
Inquiry Based Learning
The deep-seated impulse to question is the fuel that propels the success of any lifelong learner. It seems a little nonsensical to have to design a curriculum placing querying and curiosity at the centre of the school day; questions have, after all, formed the foundation of critical thinking--and by extension of civilization--since before Socrates. But for decades, education researchers have exhaustively documented how entrenched twentieth-century education models squeezed the urge to ask out of the majority of the world's primary school population.
Inquiry based learning was developed from data suggesting that when students arrive at knowledge--be it a mathematical formula, scientific principle, or historical explanation--in the attempt to answer a broad, provocative question, they are much more likely to retain that knowledge. It has also been repeatedly shown that an inquiry-based approach increases student engagement in a manner that narrows stubborn achievement gaps between genders and races. The role of the teacher in an IBL classroom is to plan out a productive questioning sequence ahead of time, to ensure that students feel as if they have stumbled upon the desired content themselves. This process embeds the questioning reflex fundamental to a student’s future success and makes students well-practiced at drawing on a diverse array of resources to find satisfactory answers to complex questions.
We have long been aware that twenty-first-century adults are expected to change careers six times over the course of their lifetimes, but education systems have been dangerously slow to adjust to the reality that the value of science literacy and competence is the one constant in an economy in a constant state of flux. The ability to answer complex questions in a creative, concise manner will outlive the usefulness of any academic content a teacher might impart.
A new study shows that nickel oxide superconductors, which conduct electricity with no loss at higher temperatures than conventional superconductors do, contain a type of quantum matter called charge density waves, or CDWs, that can accompany superconductivity.
The presence of CDWs shows that these recently discovered materials, also known as nickelates, are capable of forming correlated states – “electron soups” that can host a variety of quantum phases, including superconductivity, researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University reported in Nature Physics today.
“Unlike in any other superconductor we know about, CDWs appear even before we dope the material by replacing some atoms with others to change the number of electrons that are free to move around,” said Wei-Sheng Lee, a SLAC lead scientist and investigator with the Stanford Institute for Materials and Energy Science (SIMES) who led the study.
“This makes the nickelates a very interesting new system – a new playground for studying unconventional superconductors.”
Nickelates and cuprates
In the 35 years since the first unconventional “high-temperature” superconductors were discovered, researchers have been racing to find one that could carry electricity with no loss at close to room temperature. This would be a revolutionary development, allowing things like perfectly efficient power lines, maglev trains and a host of other futuristic, energy-saving technologies.
But while a vigorous global research effort has pinned down many aspects of their nature and behavior, people still don’t know exactly how these materials become superconducting.
So the discovery of the nickelates’ superconducting powers by SIMES investigators three years ago was exciting because it gave scientists a fresh perspective on the problem.
Since then, SIMES researchers have explored the nickelates’ electronic structure – basically the way their electrons behave – and magnetic behavior. These studies turned up important similarities and subtle differences between nickelates and the copper oxides or cuprates – the first high-temperature superconductors ever discovered and still the world record holders for high-temperature operation at everyday pressures.
Since nickel and copper sit right next to each other on the periodic table of the elements, scientists were not surprised to see a kinship there, and in fact had suspected that nickelates might make good superconductors. But it turned out to be extraordinarily difficult to construct materials with just the right characteristics.
“This is still very new,” Lee said. “People are still struggling to synthesize thin films of these materials and understand how different conditions can affect the underlying microscopic mechanisms related to superconductivity.”
Frozen electron ripples
CDWs are just one of the weird states of matter that jostle for prominence in superconducting materials. You can think of them as a pattern of frozen electron ripples superimposed on the material’s atomic structure, with a higher density of electrons in the peaks of the ripples and a lower density of electrons in the troughs.
As researchers adjust the material’s temperature and level of doping, various states emerge and fade away. When conditions are just right, the material’s electrons lose their individual identities and form an electron soup, and quantum states such as superconductivity and CDWs can emerge.
An earlier study by the SIMES group did not find CDWs in nickelates that contain the rare-earth element neodymium. But in this latest study, the SIMES team created and examined a different nickelate material where neodymium was replaced with another rare-earth element, lanthanum.
“The emergence of CDWs can be very sensitive to things like strain or disorder in their surroundings, which can be tuned by using different rare-earth elements,” explained Matteo Rossi, who led the experiments while a postdoctoral researcher at SLAC.
The team carried out experiments at three X-ray light sources – the Diamond Light Source in the UK, the Stanford Synchrotron Radiation Lightsource at SLAC and the Advanced Light Source at DOE’s Lawrence Berkeley National Laboratory. Each of these facilities offered specialized tools for probing and understanding the material at a fundamental level. All the experiments had to be carried out remotely because of pandemic restrictions.
The experiments showed that this nickelate could host both CDWs and superconducting states of matter – and that these states were present even before the material was doped. This was surprising, because doping is usually an essential part of getting materials to superconduct.
Lee said the fact that this nickelate is essentially self-doping makes it significantly different from the cuprates.
“This makes nickelates a very interesting new system for studying how these quantum phases compete or intertwine with each other,” he said. “And it means a lot of tools that are used to study other unconventional superconductors may be relevant to this one, too.”
Fourteenth-century English Society In The Canterbury Tales
In reading Geoffrey Chaucer’s vivid gallery of portraits in the General Prologue of his most celebrated work, The Canterbury Tales, one understands why he is regarded as the Father of the English Literary Canon. Chaucer, like no one else of his time, set out to tell new and engaging stories, essentially to entertain fourteenth-century England. The Canterbury Tales tells the story of twenty-nine travelers who meet by chance at the Tabard Inn in Southwark, directly outside of London. These diverse yet colorful pilgrims are headed to visit the shrine of St. Thomas à Becket at Canterbury Cathedral. At the encouragement of the innkeeper, who then becomes their host, they each agree to tell two stories, one going to Canterbury and one returning. The Canterbury Tales is carefully structured, and is intended to reveal the life of fourteenth-century England through its embellished yet exemplary characters, as well as through Chaucer’s own history.
Chaucer was born in 1340, the son of a wealthy London merchant. Like most boys of means, he became a page in a noble household. In Chaucer’s case, he became page to the Countess of Ulster, a daughter-in-law of King Edward III. There he would have been educated in the values of the aristocratic culture of the time, including its literary tastes, which most likely followed French models. While taking part in the king’s military campaign against the French, he was captured and ransomed by the king. He then became a squire in the king’s household, a position that required him to make diplomatic voyages abroad. These voyages carried him to Italy, which would soon shape his later literary work, as he was unmistakably influenced by Dante, Petrarch, and Boccaccio. He later became Controller of the Customs of Hides, Skins, and Wools in the port of London, a government post in which he worked with cloth merchants. Chaucer’s experience overseeing imported fabrics may be the reason he could depict his characters so exactly and strikingly. After his return to London, he held a number of positions in government, including membership in Parliament. Thus we can see, by looking at Chaucer’s history, that he drew inspiration for his characters in The Canterbury Tales from both his life and his work experiences.
The striking realism of Chaucer’s characters was practically unknown to readers in the fourteenth century. In the General Prologue of The Canterbury Tales he was able to bring together individuals from numerous walks of life. The pilgrims represent a broad cross-section of fourteenth-century English society, a portrait of the nation as a whole. Medieval social theory partitioned society into three broad classes, called ‘estates’: the military estate, who fought; the clergy, who prayed; and the commoners, who worked. Chaucer’s The Canterbury Tales is an estates satire, which means that it is a critical commentary on the members of each estate. The Knight and Squire represent the military estate. The clergy is represented by the Prioress, her secretary nun and priest, the Monk, the Friar, and the Parson. Other characters, such as the Merchant and the Shipman, are members of the commons. Chaucer’s portrayals of the various characters and their social roles reveal the influence of the medieval genre of estates satire.
Starting with the Knight, the principal characters can be examined in turn, so that we can see how their ways of life depict fourteenth-century English society. The Knight represents the ideal noble medieval Christian warrior, who ‘loved chivalry, truth, generosity, and courtesy.’ His son is the lusty young Squire, who will follow his father’s example in becoming a knight. Then the modest Prioress is introduced, with her neat table manners and the rosary about her arm. The Monk is a manly man and a hunter, one who did not care for the exacting rule of St. Benedict. The Friar is described as the ‘champion beggar of his fraternity,’ who was more concerned with profits than with absolving people of their sins. Friars were intensely disliked in fourteenth-century England. The wealthy Merchant, the poor Oxford student, and the crafty Lawyer are introduced in quick succession. A Franklin travels in the Lawyer’s company. In Chaucer’s time, a franklin was a ‘free man,’ and this one in particular was known for his hospitality, extraordinary food, and wine cellar. The five guildsmen of one great fraternity come next, followed by the brown-skinned Shipman and the Physician, who ‘could tell the cause of every human malady.’ The boisterous Wife of Bath is then introduced, looking for her sixth husband; the gap in her front teeth was considered attractive in Chaucer’s time. The Parson is the only true churchman in the company: he practices what he preaches to his parish, and is poor in goods yet rich in spirit. A Manciple, a Miller, a Reeve, a Summoner, and a Pardoner complete the company of pilgrims. The Manciple is a purchasing agent at a college of lawyers who could match wits with the law students. The fiery-tempered Reeve is a farm manager, while the pimple-afflicted Summoner brings those accused of violating Church law to court. The greasy, long-haired Pardoner sells indulgences to release sinful souls in exchange for donations to the Church. Like many pardoners of the time, he tricks people into believing he has relics, such as the Veil of Mary, and he keeps the donations for himself. The Miller, strong and sturdy, tells the second story, the Miller’s Tale.
The drunken Miller’s story is a tale about a young, poor student of astrology named Nicholas who begins an affair with his landlord’s wife, the fiery Alison. His landlord, John the carpenter, is a cuckold in every sense of the word. Alison and Nicholas want to spend a night together, so Nicholas comes up with a plan to make it happen. He tells John, his landlord, that there will be a flood, and persuades him to spend the night in a tub hanging from the roof of his barn. All the while, Absolon, a young parish clerk, is completely enamored with Alison too. He appears outside the bedroom window where Alison and Nicholas lie, and asks Alison for a kiss. She sticks her bare bottom out of the window and lets him kiss that. Absolon is furious. He returns with a hot iron and asks for another kiss. This time, Nicholas sticks his bare bottom out of the window and gets branded. Nicholas cries, ‘Help! Water! Water! Help, for God’s own heart!’ At this, John believes that the flood has started. He cuts the rope and comes crashing down. The Miller’s Tale belongs to the fabliau genre of writing, meaning it is a short, humorous, cynical story in which the characters are often stereotypes. It likewise has an absurd climax that is the result of an elaborate practical joke.
Chaucer had crossed paths with people like each of the twenty-nine pilgrims, and like the characters in their stories, which is how he was able to portray them so fully in his masterpiece, and why The Canterbury Tales is a classic. Chaucer undertook to create literature and poetic language for all classes of society, and today he still stands as one of the great shapers of literary narrative and character.
Ringing in the ear, or tinnitus, is a widespread condition that affects an estimated 50 million Americans. Some people describe it as a hissing, roaring, whooshing or buzzing sound instead of ringing. It may be sporadic or constant, and is a symptom of an underlying condition rather than a disease itself. There are many factors that can cause tinnitus.
There are treatments and therapies available for tinnitus. Many people have been told by physicians that there is nothing that can be done and have lost hope in getting some relief from their tinnitus. However, while there are no medications or common medical procedures (by physicians) to “cure” tinnitus, there are many therapies and treatments available to provide relief from this condition.
What Are the Causes of Tinnitus?
Tinnitus is categorized as being either pulsatile or nonpulsatile.
People who suffer from pulsatile tinnitus report hearing the sound of their own pulse. It is caused by abnormal blood flow within the arteries of the neck or inside the ear, and is fairly rare. Possible causes include:
- Fluid in the middle ear.
- Ear infections.
- High blood pressure.
- Head and neck tumors.
- Blocked arteries.
Nonpulsatile tinnitus – ringing in the ears not accompanied by any type of rhythm – is considerably more common. It can be caused by a variety of conditions including:
- Presbycusis (age-related hearing loss).
- Noise exposure.
- Impacted earwax.
- Otosclerosis (stiffening of the bones in the middle ear).
- Meniere’s disease.
- TMJ disorders.
- Ototoxic medications.
- Thyroid conditions.
- Head or neck trauma.
- Acoustic neuromas.
Tinnitus is also classified as being either subjective (heard only by the patient) or objective (ringing can be heard by an impartial observer, such as a doctor). Most cases of tinnitus are subjective in nature.
How Is Tinnitus Treated?
Tinnitus can’t be cured, but there are treatments that make it less of a distraction. The approach taken depends on the underlying condition responsible for the ringing in your ears. Sometimes, simple steps like removing built-up earwax or switching to a new medication can markedly decrease symptoms.
Others benefit from noise suppression therapy or masking techniques designed to cover up the ringing noise. White noise machines, fans, air conditioners and humidifiers are all popular, easy to use options.
Tinnitus retraining devices, which rely on patterned tones, are a newer technique that has proven beneficial to many patients. Call the University of the Pacific in San Francisco at (415) 780-2001 or Stockton at (209) 946-7378 for more information or to schedule an appointment. |
Testing for COVID-19 has been an understandably popular topic in traditional and social media alike, complete with misinformation and plenty of questions. Since this is a topic we get many questions about, we wanted to address the different types of testing available, how the tests work, and at what stage of infection they should be used. The information listed underneath the test name is specific to our Neighbors Emergency Centers.
Here is a breakdown of the three types of testing available at Neighbors Emergency Center:
Rapid Antigen Test
Performed for the purpose of diagnosing an individual with an active infection.
Rapid antigen testing, or ‘real-time’ testing, can be quickly performed at the ER with a nasal swab. The test detects a viral protein in an individual with an active infection. Manufacturer-published information states that the test is 100% specific (meaning a positive result is essentially always accurate) and 80% sensitive (meaning it detects about 80% of true infections, so a negative result can be a false negative). If an antigen test comes back negative, but symptoms strongly indicate a potential COVID infection, a PCR or molecular test can be performed for confirmation.
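As a rough, hypothetical illustration of what those figures mean in practice (the patient counts below are invented for the example, not data from Neighbors):

```python
# Sketch: what 80% sensitivity and 100% specificity imply for a batch of tests.

def expected_results(n_infected: int, n_healthy: int,
                     sensitivity: float = 0.80, specificity: float = 1.00):
    true_positives = n_infected * sensitivity
    false_negatives = n_infected * (1 - sensitivity)   # infections the test misses
    false_positives = n_healthy * (1 - specificity)    # zero at 100% specificity
    return true_positives, false_negatives, false_positives

tp, fn, fp = expected_results(n_infected=100, n_healthy=100)
print(f"Of 100 infected patients, about {tp:.0f} test positive and {fn:.0f} are missed.")
print(f"Of 100 healthy patients, about {fp:.0f} falsely test positive.")
```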
Molecular (PCR) Test
This is the most commonly used test in the US. Molecular testing detects viral genetic material (RNA) and is considered very accurate. These tests are performed with a nasal or (less commonly) oral swab. Because the genetic material has to be analyzed in a diagnostic lab setting, these tests take more time. If an antigen test is coming back negative, but you are experiencing COVID symptoms, then we may need to perform and send out a PCR test.
Antibody Test
Antibody testing is performed to determine if your body is producing antibodies from a previous infection. It is not used to diagnose a current infection.
Antibody tests determine if your blood contains the disease-specific proteins (antibodies) created to help fight off infections and provide protection against getting that disease again (immunity). Depending on when someone was infected and the timing of the test, the test may not find antibodies in someone with a current COVID-19 infection. Antibody tests should not be used to diagnose active COVID-19 infection.
All three types of testing are available curbside at Neighbors. Please visit our testing page to schedule an appointment. During periods of increased testing, test results may be delayed. As always, if you are experiencing an emergency, our Neighbors Emergency Centers are open 24/7 – even on holidays and weekends. |
In podcasting, because there is no visual component, sound is all you have to convey the facts and emotion of your story to your listeners. While there are traditionally four types of sounds that podcasts incorporate, learning how they play off of one another can help create interesting audio effects that will keep your listeners actively tuned in.
In This Lesson
Telling Stories With Sound
- Natural Sounds
- Ambient Sounds
- Identifying different sounds in action
- Ethical considerations in audio storytelling
By paying attention to the sound design that goes into a podcast, a lot can be learned about different audio elements and how they interact. In this activity, students will listen to three student-produced podcasts from Best of SNO, identify the types of sound that are included in them, and examine how the host chose to use each one.
After completing this activity, students should have a better understanding of how narration, natural sounds, and ambient sounds complement each other, and how they each play a different role in helping listeners “see with their ears.” |
Why do batteries fail?
In order to understand why batteries fail, unfortunately, a little bit of chemistry is needed. There are two main battery chemistries used today: lead-acid and nickel-cadmium. Other chemistries are coming, like lithium, which is already prevalent in portable battery systems but not yet in stationary ones.
Volta invented the primary (non-rechargeable) battery in 1800. Planté invented the lead-acid battery in 1859 and in 1881 Faure first pasted lead-acid plates. With refinements over the decades, it has become a critically important back-up power source.
The refinements include improved alloys, grid designs, jar and cover materials and improved jar-to-cover and post seals.
Arguably, the most revolutionary development was the valve-regulated design. Many similar improvements in nickel-cadmium chemistry have been developed over the years.
Battery construction and nomenclature
A battery must have several components to work properly:
- a jar to hold everything and a cover,
- electrolyte (sulphuric acid or potassium hydroxide solution),
- negative and positive plates,
- top connections welding all like-polarity plates together and then
- posts that are also connected to the top connections of the like-polarity plates.
Hence, there is always an odd number of plates in a battery. For example, a 100A33 battery comprises 33 plates, with 16 positive plates and 17 negative plates. In this example, each positive plate is rated at 100 Ah; multiply 16 by 100 and the capacity at the 8-hour rate is found, namely 1600 Ah. Europe uses a slightly different calculation than the US standards.
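As a rough illustration of this arithmetic, here is a small Python sketch. It assumes the naming convention from the 100A33 example above (per-positive-plate rating in Ah followed by the total plate count); actual manufacturer codes may differ.

```python
# Sketch of the plate arithmetic: capacity = number of positive plates x plate rating.

def cell_capacity_ah(plate_rating_ah: int, total_plates: int) -> int:
    assert total_plates % 2 == 1, "cells have an odd number of plates"
    positive_plates = (total_plates - 1) // 2   # one fewer positive than negative
    return positive_plates * plate_rating_ah

print(cell_capacity_ah(100, 33))   # 16 positives x 100 Ah = 1600 Ah at the 8-hour rate
```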
In batteries that have higher capacities, there are frequently four or six posts. This is to avoid overheating of the current-carrying components of the battery during high current draws or lengthy discharges.
A lead-acid battery is a series of plates connected to top lead, which is in turn connected to posts. If the top lead, posts and intercell connectors are not sufficiently large to safely carry the electrons, then overheating may occur (I²R heating) and damage the battery or, in the worst cases, damage installed electronics through smoke or fire.
To prevent plates from touching each other and shorting the battery, there is a separator between each of the plates. Figure 1 is a diagram of a four-post battery from the top looking through the cover. It does not show the separators.
Source: Battery Testing Guide by MEGGER |
If you’re thinking of learning Portuguese, you’ve probably already discovered one of the biggest differences between Portuguese and English: all nouns have a gender in Portuguese. Getting the gender of the nouns in any sentence right is quite important, because it affects a lot of the other words around it, particularly the adjectives, prepositions, and pronouns. The easiest way to become proficient in using gender in Portuguese is to practice with a tutor.
In this post, we’ll take a look at the various types of masculine and feminine nouns, and the ways of identifying a noun's gender.
Typically Masculine Nouns
With certain nouns, you can guess whether they are masculine or feminine just by looking at the letters they end with. Here are five common types of noun endings that are usually masculine.
Nouns That End with “o”
Examples: garfo (fork), prato (plate), queijo (cheese), tempo (time)
The overwhelming majority of nouns ending with “o” are masculine; this is the most common type of masculine noun in Portuguese. However, there are a few exceptions that can trip you up, such as tribo (tribe), which is feminine.
Nouns That End with Consonants
Examples: lugar (place), valor (value), professor (teacher), final (end)
Vowel endings are much more common than consonant endings. When a noun does end with a consonant, it is usually masculine. There are, however, exceptions, such as some nouns ending in “z,” for example, voz (voice), and most nouns that end with “em.”
Nouns That End with “i”
Examples: pai (father), rei (king), abacaxi (pineapple). Note that lei (law) is an exception: despite ending with “i,” it is feminine.
Nouns That End with “u”
Examples: céu (heaven), museu (museum), grau (degree), chapéu (hat)
Nouns That End with “ema”
Examples: problema (problem), sistema (system), tema (theme), poema (poem)
Nouns ending with “a” are generally feminine (see below). However, with gender in Portuguese, there are always complications and exceptions! Nouns ending with “ema” generally have Greek, rather than Latin, roots, and are usually masculine. An exception is gema (egg yolk), which is feminine.
Typically Feminine Nouns
Here are five common types of noun endings that are usually feminine.
Nouns That End with “a”
Examples: coisa (thing), casa (house), vida (life), pessoa (person)
This is the most common type of Portuguese feminine noun. Not all nouns ending with “a” are feminine, though: as discussed above, words ending with “ema” are generally masculine. There is another very important noun ending with “a” that is masculine: dia, meaning “day.”
Nouns That End with “ã”
Examples: manhã (morning), maçã (apple), hortelã (mint), irmã (sister)
There are exceptions to this, such as talismã (talisman), which is masculine, but they are not words you are likely to encounter too frequently.
Nouns That End with “ação”
Examples: relação (relation), situação (situation), informação (information), população (population)
With gender in Portuguese, you always have to keep an eye out for exceptions! Coração (heart) is a masculine noun.
Nouns That End with “dade”
Examples: cidade (city/town), verdade (truth), actividade (activity), sociedade (society)
At least one rule for gender in Portuguese doesn’t have any exceptions! If you see a noun ending in “dade,” you can be 100% confident that it’s feminine.
Nouns That End with “agem”
Examples: imagem (image), viagem (journey), vantagem (advantage), mensagem (message)
Personagem (character), however, is a masculine noun.
Nouns that refer to people often have both masculine and feminine versions to reflect the fact that the person they are referring to can be either male or female. These are often, but not always, the names of professions. For example:
M: irmão (brother)
F: irmã (sister)
Single Form Inflecting Nouns
Some nouns, again generally those that refer to people, have a single form that can take either gender depending on the person to whom they refer. These come in three main types:
Nouns That End with “ante”
Examples: habitante (inhabitant), estudante (student), comandante (commander), fumante (smoker)
Nouns That End with “ente”
Examples: presidente (president), agente (agent), cliente (customer), dirigente (leader)
Nouns That End with “ista”
Examples: artista (artist), jornalista (journalist), cientista (scientist), especialista (specialist)
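As a toy exercise, the ending rules above can be encoded as a small Python heuristic. This is our own sketch, not an authority: the exceptions discussed throughout this article (dia, tribo, coração, personagem and so on) are exactly why a real application needs a dictionary rather than endings alone.

```python
# Toy heuristic for guessing Portuguese noun gender from its ending.
# Only the "dade" rule is exception-free; everything else is "usually".

def guess_gender(noun: str) -> str:
    n = noun.lower()
    if n.endswith(("dade", "ação", "agem", "ã")):
        return "feminine"
    if n.endswith("ema"):                         # Greek-rooted: usually masculine
        return "masculine"
    if n.endswith(("ante", "ente", "ista")):
        return "either (depends on the person)"
    if n.endswith("a"):
        return "feminine"
    return "masculine"                            # -o, -i, -u, consonant endings

for word in ("cidade", "problema", "garfo", "artista"):
    print(word, "->", guess_gender(word))
```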
Practice Using Gender in Portuguese
As you can see, gender in Portuguese is a complex subject, especially for an English-speaking learner who quite possibly hasn’t encountered the concept of objects having a gender before. The best way of learning it is to practice with a tutor who can correct your mistakes and help you to remember all the rules and exceptions. At PortugueseTutoring.com, all our tutors are qualified Portuguese language instructors, so you can be confident you’re in the best possible hands. |
Lesson 29 – الدَّرْسُ التَّاسِعُ وَالْعِشْرُونَ
Tenses of the Verb (Past, Present and Future)- زَمَن الفِعْل (الماضي ،وَالْمُضارِع ،والمُسْتَقْبَل)
Characteristics of the Present Verb - خصائص الفعل المضارع
- We are still in lesson twenty nine of our free Arabic language course. This Arabic course with images and audios will help you learn Arabic.
- We have learned the forms taken by the past and present tense verbs as well as the way the future tense is written – i.e. expressed by a present verb preceded by particles (sa- سـ or sawfa سوف).
- In this part we will discuss the present verb in more detail because the present verb in Arabic has many various characteristics, including:
- It indicates an action being performed at the time of speaking or a future event, that is the present verb can express the present time (as we have already explained in Part 2 of this lesson).
- It can also express future events. If it is preceded by the particle أن, the meaning of the verb indicates the future. For example:
I want to go to Egypt next year
- Also, if the present verb is preceded by إن (conditional if) - through which we understand that there is a hope that the action may happen in the future – the verb also indicates the future - as in the example below.
If Muhammad studies, he (will) succeed.
I will not go to school tomorrow. |
Teenagers and saving
Talking with your child about money can go smoother if you keep the conversation age appropriate. The conversation starters and activities here can help you find the words.
Conversations about saving
“A good rule of thumb is to save 10 percent of what you earn, and have at least three months’ worth of living expenses saved up in case of an emergency.”
- Once your teen has a steady job, help him set up a savings program so that at least 10 percent of earnings goes directly into his savings account.
- Help your teen track what he actually spends in a month. Talk about how to estimate three months’ worth of expenses, and how much to save from each paycheck to build up his savings.
- Talk about how to keep money in a safe place, like a federally insured bank or credit union.
- Explain that, if possible, it’s better to have more savings—like six to nine months’ worth of living expenses, instead of only three.
- Discuss how much your child can save. What will she gain? What will she have to give up? Is it worth it?
- Explain to your child that once she starts a job, she may be offered an account at work called a 401(k). Some employers provide matching contributions as an incentive to save, so it’s smart to save at least enough for the maximum matching contribution.
Activities about saving
“The sooner you start saving, the faster your money can grow from compound interest.”
- Compound interest is when your child earns interest on both the money she saves and the interest she has already earned. Show your child the following: if she sets aside $100 every year starting at age 14, she’d have about $23,000 at age 65. However, if she begins saving at age 35, she’d have about $7,000 at age 65. The example assumes the account earns 5 percent every year (a short calculation reproducing these figures appears after this list).
- Experiment with your child to show the effect of saving different amounts at different interest rates. Try out the SEC’s compound interest calculator.
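Here is a minimal Python sketch reproducing the compound-interest figures above. It assumes deposits are made at the start of each year and a constant 5 percent annual return; the function name is our own, and real returns will of course vary.

```python
# Sketch: $100 saved at the start of every year at 5%, from a given age to 65.

def balance_at_65(start_age: int, deposit: float = 100.0, rate: float = 0.05) -> float:
    balance = 0.0
    for _ in range(65 - start_age):
        balance = (balance + deposit) * (1 + rate)   # deposit, then a year of interest
    return balance

print(f"Start at 14: ${balance_at_65(14):,.0f}")   # roughly $23,000
print(f"Start at 35: ${balance_at_65(35):,.0f}")   # roughly $7,000
``` |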
This old phone (found in Nelson Ghost Town, Nevada) gives teachers and students a chance to experience how examining old things (with some guiding questions) prompts the thinking, analyzing, synthesizing, and sharing of information that leads to learning.
Enlarge this photo and project it for all to see. Allow time for a silent, uninterrupted viewing by everyone. After a few minutes of viewing,
- Ask, “Using only the information in this picture, do you think this is a real phone?” Allow time for participants to share all evidence from the picture that proves whether or not this is a real phone.
- Ask participants if anyone has additional background knowledge to add to these observations.
- Examine the decorative images surrounding the phone to determine an approximate date of this phone’s manufacture.
- Ask, “What information about that time in history do these pictures suggest?”
- Ask, “What different purposes do the words above the phone and the words on the phone have?”
- Look at each component of the phone and decide the suggested and actual purpose of each component.
- Provide time for participants to develop additional questions about this phone.
- Provide time to search for more information about “fun phones” and to share the information with the class.
As you share this experience with students, make note of the learning that conversations about “old stuff” provide. Listen for and record evidence of:
- oral language development
- research skills
- topic or genre specific vocabulary development
- focused, critical reading and viewing
- genre specific writing
- developing research questions
- connecting background knowledge to new knowledge
- linking research information to other areas of study
For more on instilling a sense of wonder and curiosity, check out our post Wondering Leads to Learning and What is This? Experiencing Curiosity, Questioning, and Searching for Information.
For more conversations about education, please visit Beyond the Apple . . . Reframing Conversations in Education or contact us at [email protected] |
What is hypotension?
Low blood pressure characteristics
Hypotension is abnormally low blood pressure, which means a reduced blood supply to some parts of the body. It is the opposite of hypertension.
Hypotension can be a sign that something is going wrong in the body. However, in most cases it causes no symptoms and needs no treatment.
But hypotension also accounts for many cases of fainting, which can cause people to fall; many fractures, especially among older people, occur as a result.
Severe hypotension can be very dangerous, since it may lead to hypotensive shock. Vital organs such as the heart or brain can be severely damaged by the lack of blood supply.
Types of hypotension?
- Orthostatic hypotension (“postural hypotension”): the most common type of hypotension. It is what is colloquially called a “head rush” or a “dizzy spell.” It occurs when a person changes body position, either from sitting to standing or from lying to standing.
It is caused because gravity directs some blood (about 10–15% of the total) to the lower part of the body, so there is a certain loss in the upper part, which produces a momentary blood insufficiency in some vital organs, especially the brain, giving rise to disagreeable feelings such as light-headedness or a fainting sensation.
In normal conditions the body reacts by contracting the blood vessels and increasing the heart rate to pump more blood again, but under certain circumstances (certain medications or diseases) blood pressure is not stabilized as quickly as necessary, causing this type of hypotension. It usually lasts a few seconds or minutes.
- Postprandial orthostatic hypotension: about 30–75 minutes after eating, blood is directed to the intestines in order to facilitate digestion. When the body is not able to compensate for the blood diverted from vital organs such as the brain, symptoms of this type of hypotension appear.
- Neurally mediated hypotension (NMH) / neurocardiogenic syncope: hypotension produced by an abnormal reflex interaction between the heart and the brain. Although both organs are normal, the body is unable to regulate blood pressure when upright.
The brain sends a defective signal to the heart: instead of making it beat faster, it signals it to slow the heart rate and dilate the vessels further.
Symptoms of hypotension?
The main possible symptoms or hypotension are:
- Light-headedness
- General weakness
- Fatigue and sleepiness.
- Heart rate is usually faster than normal after doing some exercise.
Other possible symptoms are:
- Blurred or dimmed vision
- Excessive perspiration (sweating)
- Tingling sensation
- Loss of sexual appetite.
Causes of hypotension?
- Genetic causes: In most cases hypotension is produced by genetic causes.
- Body diseases: some diseases can produce hypotension, such as diabetes, anemia, hyperthyroidism, flu, poor peripheral circulation, arrhythmias, heart attack, heart failure, dehydration, anaphylactic shock, infections, etc.
- Medicine or drug ingestion: some medicines or drugs, especially those with vasodilating properties, may sometimes lower blood pressure; for example, painkillers, antidepressants, diuretics, high blood pressure medicines or other heart medicines. Alcohol produces vein dilation and loss of fluids.
- Physical or emotional shock: a sudden disturbance of the emotions can produce hypotension (seeing accidents, seeing blood, being too anxious or nervous, watching someone in pain, undergoing dental treatment, etc.). Physical trauma can produce similar effects (suffering an accident, falling, sudden strong blows, etc.).
- Staying in a still, upright posture for too long: standing up, or even sitting upright without moving, can produce the same symptoms.
- Hot weather or overly warm environments: being exposed to too much heat, as happens in summer weather or in closed, warm places, can trigger hypotension. This often happens to people traveling on crowded tubes or buses.
- Inadequate diet: a low salt intake can be the factor that produces hypotension. Sodium is necessary for the body to retain liquids and raise blood pressure. The importance of salt has been downplayed in recent years because many studies have pointed out its negative effects on health, especially as a factor triggering hypertension.
Some people should indeed restrict their salt intake because they are prone to develop high blood pressure, but sodium is necessary to maintain the fluid balance in the body, particularly for those who tend to have low blood pressure.
Diagnosis of hypotension
If symptoms of hypotension appear, we recommend a visit to a specialist, who can diagnose whether hypotension is present.
The diagnosis is done mainly by means of blood pressure test, temperature measurements or heart rate screening.
Some other tests could be necessary, such as, for example, electrocardiograms, examination of urine, x-rays or blood analysis.
When to go to the doctor urgently?
This visit is particularly urgent if you feel:
- Chest pain
- Breathing problems
- Painful micturition
- Black or dark feces
- Prolonged diarrhea
- Vomiting or loss of appetite.
Treatment of hypotension
Once the diagnosis is made, the doctor will decide the best course of treatment. Treatment can be based on non-pharmacologic measures, such as:
- Lying on the back and raising your feet.
- Changing your diet or adopting certain body postures or physical therapies.
For those with more severe hypotension that does not respond to the previous steps, some medications could be needed.
The natural treatment of hypotension involves using a series of natural remedies that can help treat the problem.
More information about hypotension natural treatments.
25 August, 2020 |
IF you're a student, you rely on one brain function above all others: memory.
These days, we understand more about the structure of memory than we ever have before, so we can find the best techniques for training your brain to hang on to as much information as possible. The process depends on the brain's neuroplasticity, its ability to reorganise itself throughout your life by breaking and forming new connections between its billions of cells.
How does it work? Information is transmitted by brain cells called neurons. When you learn something new, a group of neurons activate in a part of the brain called the hippocampus. It's like a pattern of light bulbs turning on.
Your hippocampus is forced to store many new patterns every day. This increases hugely when you are revising. Provided with the right trigger, the hippocampus should be able to retrieve any pattern. But if it keeps getting new information, the overworked brain might go wrong. That's what happens when you think you've committed a new fact to memory, only to find 15 minutes later that it has disappeared again.
So what's the best way to revise? Here are some top tips to get information into your brain and keep it there.
FORGET ABOUT INITIAL LETTERS
Teachers often urge students to make up mnemonics - sentences based on the initial letters of items you're trying to remember. Trouble is, they help you remember the order, but not the names. The mnemonic Kings Prefer Cheese Over Fried Green Spinach can help you recall the order of taxonomy in biology (kingdom, phylum, class, order, family, genus, species) but that's only helpful if you're given the names of the ranks. The mnemonic is providing you with a cue but, if you haven't memorised the names, the information you want to recall is not there. You're just giving your overflowing hippocampus yet another pattern of activity to store and retrieve.
Pathways between neurons can be strengthened over time. Simple repetition - practising retrieving a memory over and over again - is the best form of consolidating the pattern.
USE SCIENCE TO HELP YOU RETRIEVE INFO
Science tells us the ideal time to revise what you've learned is just before you're about to forget it. And because memories get stronger the more you retrieve them, you should wait exponentially longer each time - after a few minutes, then a few hours, then a day, then a few days. This technique is known as spaced repetition.
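As a rough sketch of how such a schedule could be generated, here is a short Python example; the starting interval and growth factor are illustrative assumptions, not values from the article.

```python
# Sketch: spaced-repetition review dates that wait exponentially longer each time.

from datetime import date, timedelta

def review_dates(start: date, reviews: int = 5,
                 first_interval_days: float = 1.0, factor: float = 2.5):
    schedule, when, interval = [], start, first_interval_days
    for _ in range(reviews):
        when = when + timedelta(days=round(interval))   # whole days for calendar dates
        schedule.append(when)
        interval *= factor                              # wait exponentially longer
    return schedule

for d in review_dates(date(2024, 1, 1)):
    print(d)   # gaps of roughly 1, 2, 6, 16, 39 days
```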
This also explains why you forget things so quickly after a week of cramming for an exam. Once the exam is over you stop retrieving the material, so the exponential curve of retrieval is broken; the process reverses and, within a few weeks, you have forgotten everything.
TAKE REGULAR BREAKS
Breaks are important to minimise interference. When your hippocampus is forced to store many new (and often similar) patterns in a short time, it can get them jumbled up.
The best example of this is when you get a new telephone number. Your old number is still so well entrenched in your memory that remembering the new one is a nightmare. It's even worse if the new one has a few similarities to the old.
Plan your revision so you can take breaks and revise what you've just learned before moving on to anything new.
GUARDIAN NEWS & MEDIA |
The elusive harp sponge dwells nearly two miles below the surface of the ocean, far deeper than humans are able to explore. No one even knew they existed before scientists off California’s Monterey Bay used a remote control vehicle to spy on the meat-eating sponge from afar. Their findings, published Oct. 18 in Invertebrate Biology, reveal the secrets of this slow-motion hunter.
The harp sponge, Chondrocladia lyra, has thin vertical fingers covered in fibers similar to Velcro. These barbs snare small crustaceans swept up in deep ocean currents. The sponge then wraps these tasty morsels in a thin membrane, dismembers them into bite-size pieces, and voila! Let the slow digestion process begin!
The sponge’s sticky fibers also come in handy for reproduction. Spheres at the end of the harp sponge’s branches produce compact balls of sperm, called spermatophores, which they release into the water. Neighboring sponges snag these sperm-bearing packages in order to fertilize the eggs contained farther down their branches.
Cameras on the researchers’ remote control vehicles also revealed that the harp sponge has many neighbors in its deep sea home, including sea anemones, sea pens, and sea cucumbers.
Image courtesy of Monterey Bay Aquarium Research Institute (MBARI) |
Problem: Idioms, that tricky literary device that seems to elude our students. Solution: an engaging hands-on activity. This set of "I have..., Who has..." cards will have your class laughing and on-point with learning and reviewing idioms. This set can be used as an introduction to idioms, allowing you to stop and discuss what idioms mean, OR it makes a perfect review for the end-of-unit assessment of students' understanding.
I love Dr. Seuss! So I found a way to use it to teach Upper Elementary, 4th thru 6th grade (Big Kids)! Come read how I turned The Lorax into a Common Core Language Arts Unit using the book and movie. (The movie comprehension questions are my fav) |
Presentation on theme: "Game Theory. Learning Objectives Define game theory, and explain how it helps to better understand mutually interdependent management decisions Explain."— Presentation transcript:
Learning Objectives Define game theory, and explain how it helps to better understand mutually interdependent management decisions Explain the essential dilemma faced by participants in the game called Prisoners’ Dilemma Explain the concept of a dominant strategy and its role in understanding how auctions can help improve the price for sellers, while still benefiting buyers
Overview I. Introduction to Game Theory II. Simultaneous-Move, One-Shot Games III. Infinitely Repeated Games IV. Finitely Repeated Games V. Multistage Games
Game Theory Optimization has two shortcomings when applied to actual business situations – Assumes factors such as reaction of competitors or tastes and preferences of consumers remain constant. – Managers sometimes make decisions when other parties have more information about market conditions. Game theory is concerned with “how individuals make decisions when they are aware that their actions affect each other and when each individual takes this into account.” Game Theory is a useful tool for managers
In the analysis of games, the order in which players make decisions is important. Simultaneous-move game: each player makes a decision without knowledge of the other players’ decisions. Sequential-move game: a player makes a move after observing the other player’s move.
One shot game – underlying game is played only once Repeated game – underlying game is played more than once
How managers use game theory: Bertrand duopoly game: two gas stations with no location advantage. Consumers view the products as perfect substitutes and will purchase from the station that sells at the lower price. The first thing each manager must do in the morning is tell the attendant to put up a price without knowledge of the rival’s price. This is a simultaneous-move game. If the manager of station A calls in a higher price than B’s, station A will lose sales that day.
Normal Form Game A Normal Form Game consists of: – Players. – Strategies or feasible actions. – Payoffs.
A Normal Form Game (payoffs listed as player 1, player 2):

                 Player 2
                 A        B        C
Player 1   a     12,11    11,12    14,13
           b     11,10    10,11    12,12
           c     10,15    10,13    13,14
Simultaneous-move, One shot game Important to managers making decisions in an environment of interdependence. E.g. profits of firm A depends not only on firm’s A actions but on the actions of rival firm B as well.
Normal Form Game: Scenario Analysis (payoffs listed as player 1, player 2):

                  Player 2
                  Left      Right
Player 1   Up     10,20     15,8
           Down   -10,7     10,10
What’s the optimal strategy? That is a complex question; it depends on the nature of the game being played. In the game above, the optimal decision is easy to characterize: the situation involves a dominant strategy. A strategy is dominant if it results in the highest payoff regardless of the action of the opponent.
For player 1, the dominant strategy is UP: regardless of what player 2 chooses, player 1 earns more by choosing UP. Principle: check to see if you have a dominant strategy. If you have one, play it.
What should a player do in the absence of a dominant strategy (e.g. Player 2)? Play a SECURE STRATEGY -- A strategy that guarantees the highest payoff given the worst possible scenario. Find the worse payoff that could arise for each action and choose the action that has the highest of the worse payoffs.
The secure strategy for player 2 is RIGHT, which guarantees a payoff of 8 rather than the 7 that LEFT could yield. This approach has two shortcomings: 1. It is a very conservative strategy. 2. It does not take into account the optimal decision of your rival and thus may prevent you from earning a significantly higher payoff. Player 2 should actually choose LEFT, knowing that player 1 will play UP.
Principle: Put yourself in your rival’s shoes If you do not have a dominant strategy, look at the game from your rival’s perspective. If your rival has a dominant strategy, anticipate that she will play it.
Putting Yourself in your Rival’s Shoes What should player 2 do? – Player 2 has no dominant strategy! – But player 2 should reason that player 1 will play “a”. – Therefore player 2 should choose “C”. (Payoff matrix as above.)
The Outcome This outcome is called a Nash equilibrium: – “a” is player 1’s best response to “C”. – “C” is player 2’s best response to “a”. (Payoff matrix as above.)
Nash Equilibrium Given the strategies of other players, no player can improve her payoff by unilaterally changing her own strategy. Every player is doing the best she can given what other players are doing. In original example, Nash equilibrium is when A chooses UP and B chooses LEFT.
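The deviation check in the definition above is easy to automate. Here is a minimal Python sketch that brute-forces the pure-strategy Nash equilibria of the Up/Down vs. Left/Right game from earlier; the data layout and names are our own.

```python
# Sketch: find cells where neither player gains by unilaterally deviating.

payoffs = {
    ("Up", "Left"): (10, 20),   ("Up", "Right"): (15, 8),
    ("Down", "Left"): (-10, 7), ("Down", "Right"): (10, 10),
}
rows, cols = ["Up", "Down"], ["Left", "Right"]

def is_nash(r, c):
    p1, p2 = payoffs[(r, c)]
    best_for_1 = all(payoffs[(alt, c)][0] <= p1 for alt in rows)   # no better row
    best_for_2 = all(payoffs[(r, alt)][1] <= p2 for alt in cols)   # no better column
    return best_for_1 and best_for_2

print([cell for cell in payoffs if is_nash(*cell)])   # [('Up', 'Left')]
```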
Application of One shot games Two managers want to maximize market share. Strategies are pricing decisions. (charge high or low prices) Simultaneous moves. One-shot game. (firms meet once and only once in the market)
The Market-Share Game in Normal Form (payoff matrix for Manager 1 vs. Manager 2 not reproduced in the transcript)
Market-Share Game Equilibrium Each manager’s best decision is to charge a low price regardless of the other’s decision. The outcome of the game is that both firms charge a low price and earn zero profits. Low prices for both managers is the Nash equilibrium.
If the firms collude to charge high prices, profits will be higher for both. This is the classic case in economics called the prisoners’ dilemma, because the Nash equilibrium outcome is inferior (from the firms’ viewpoint) to the situation where they both “agree” to charge high prices. Even if the firms meet secretly to collude, is there an incentive to “cheat” on the agreement?
To advertise or Not? Your firm competes against another firm for customers. You and your rival know your product will be obsolete at the end of the year (a one-shot game) and must simultaneously determine whether or not to advertise. In your industry, advertising does not increase industry demand but induces consumers to switch among the products of the different firms.
To advertise or Not? The dominant strategy of each firm is to advertise, giving a unique Nash equilibrium. Collusion will not work because this is a one-shot game: if there were an agreement not to advertise, each firm would have an incentive to cheat.
Key Insight: Game theory can be used to analyze situations where “payoffs” are non-monetary! We will, without loss of generality, focus on environments where businesses want to maximize profits. – Hence, payoffs are measured in monetary units.
Examples of Coordination Games Industry standards – size of floppy disks. – size of CDs. National standards – electric current. – traffic laws.
Coordination Decisions: Firms don’t have competing objectives but coordinating their decisions will lead to higher profits e.g. Producing appliances that require either 90- volt or 120-volt outlets
A Coordination Game in Normal Form (payoff matrix for Firm A vs. Firm B not reproduced in the transcript)
Coordination Game: 2 Nash Equilibria What would you do if you managed Firm A? If you do not know what firm B is going to do, you’ll have to guess what B will do. Effectively, both you and firm B will do better by coordinating your actions. There are 2 Nash equilibria. If the firms can ‘talk’ to each other, they can agree on what to produce. Notice that there’s no incentive to cheat here. This is a game of coordination rather than a game of conflicting interests.
Simultaneous-Move Bargaining Management and a union are negotiating a wage increase. Strategies are wage offers & wage demands. Players have one chance to reach an agreement and offer is made simultaneously. Parties are bargaining over how much of $100 in surplus must go to the union
Assume the surplus can be split only into $50 increments One shot to reach agreement Parties simultaneously write the amount they desire on a piece of paper. If the sum of the amounts does not exceed $100, players get the specified amount If sum exceeds $100, stalemate, costing each player $1
The Bargaining Game in Normal Form (payoff matrix for Management vs. the Union not reproduced in the transcript)
Simultaneous-Move Bargaining There are 3 Nash equilibrium outcomes. The multiplicity of equilibria leads to inefficiency if the parties fail to “coordinate” on an equilibrium. 6 of the 9 outcomes are inefficient because they don’t sum to 100. Clearly, in this game management must ask for 50 if it expects the union to demand 50.
Key Insights: Not all games are games of conflict. Communication can help solve coordination problems. Sequential moves can help solve coordination problems.
Infinitely Repeated Games A game played over and over again, in which players receive a payoff during each repetition. Firms compete week after week, year after year, so the game is repeated over time. To evaluate the profits earned during this game, consider the PV of all payoffs. If payoffs are the same in each period, then for an infinitely played game PV = (1+i)/i × (constant profit).
An Advertising Game Two firms (Kellogg’s & General Mills) managers want to maximize profits. Strategies consist of pricing actions. Simultaneous moves. – Repeated interaction.
Equilibrium to the One-Shot Pricing Game (payoff matrix for Kellogg’s vs. General Mills not reproduced in the transcript)
When firms repeatedly face this type of payoff matrix, they can use a “trigger strategy.” A trigger strategy is a strategy that is contingent on the past plays of the players in a game: a player who adopts a trigger strategy continues to choose the same action until some other player takes an action that “triggers” a different action by the first player.
Can collusion work if firms play the game each year, forever? Consider the following “trigger strategy” by each firm: – “We will each charge the high price, provided neither of us has ever “cheated” in the past. If one of us cheats and charges a low price, the other player will “punish” the deviator by charging low price in ever period thereafter” In effect, each firm agrees to “cooperate” so long as the rival hasn’t “cheated” in the past. “Cheating” triggers punishment in all future periods.
Kellogg’s profits? Cooperate = 10 + 10/(1+i) + 10/(1+i)² + 10/(1+i)³ + … = 10 + 10/i ($10 today plus the value of a perpetuity of $10 paid at the end of every year). Cheat = 50 + 0 + 0 + 0 + 0 = 50. There’s no incentive to cheat if the PV from cheating is less than the PV from not cheating.
Kellogg’s Gain to Cheating: Cheat - Cooperate = 50 - (10 + 10/i) = 40 - 10/i. – Suppose i = 0.05: Cheat - Cooperate = 40 - 10/0.05 = 40 - 200 = -160. It doesn’t pay to deviate. As long as i is less than 25%, it pays not to cheat. – Collusion is a Nash equilibrium in the infinitely repeated game!
Benefits & Costs of Cheating Cheat - Cooperate = 40 - 10/i. – 40 = immediate benefit (50 - 10 today). – 10/i = PV of future cost (10 - 0 forever after). If Immediate Benefit - PV of Future Cost > 0, it pays to “cheat”. If Immediate Benefit - PV of Future Cost ≤ 0, it doesn’t pay to “cheat”.
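The same arithmetic in a small Python sketch, using the numbers from this example (50 from cheating once, 10 per period from colluding, 0 after punishment starts); the function name is our own.

```python
# Sketch: gain to cheating = immediate benefit - present value of lost future profits.

def gain_to_cheating(i: float, cheat_now: float = 50.0,
                     collude: float = 10.0, punished: float = 0.0) -> float:
    immediate_benefit = cheat_now - collude        # 40 in this example
    future_cost = (collude - punished) / i         # PV of earning 10 instead of 0 forever
    return immediate_benefit - future_cost

for i in (0.05, 0.25, 0.50):
    print(f"i = {i:.2f}: gain to cheating = {gain_to_cheating(i):+.0f}")
# Negative at i = 0.05, exactly zero at i = 0.25, positive above that.
```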
Application of Infinitely Repeated Games (product quality) (payoff matrix for the firm vs. consumers not reproduced in the transcript)
If this were a one-shot game, the Nash equilibrium would be a low-quality product and no purchase. If the game is infinitely repeated and consumers tell the firm: “I’ll buy your product and will continue to buy if it is of good quality. But if it turns out to be shoddy, I’ll tell my friends not to buy anything from you again,” what should the firm do? If the interest rate is not too high, the best alternative is to sell a high-quality product.
If the firm cheats and sells a shoddy product, it will earn 10 now but 0 forever thereafter. It will not pay for the firm to cheat if the interest rate is low.
FINITE REPEATED GAMES Games that eventually end 1.Games in which players do not know when the game will end 2.Games in which players know when it will end.
Suppose two duopolists repeatedly play the pricing game until their product becomes obsolete, and that the firms don’t know when the game will end: there is a probability p that the game will end after any given play. The probability the game will be played tomorrow, given that it is played today, is (1-p). If the game is played tomorrow, the probability it will be played the next day is (1-p)², and so on.
Pricing Game that is infinitely repeated (payoff matrix for Kellogg’s vs. General Mills not reproduced in the transcript)
Suppose firms adopt trigger strategies, whereby each agrees to charge a high price but if a firm deviates and charges a low price, the other firm will punish it by charging low price until the game ends. Assume interest rate is zero Does Kellogg’s have an incentive to cheat?
Kellogg’s profits? Cooperate = 10 + 10(1-p) + 10(1-p)² + 10(1-p)³ + … = 10/p. Cheat = 50 + 0 + 0 + 0 + 0 = 50. There’s no incentive to cheat if the profit from cheating is less than the profit from not cheating. If there is a 10% chance that the government will ban the sale of the item (p = 0.1), then the profit from not cheating is 100, so it pays not to cheat.
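A quick check of this series, using the $10 per-period profit and the 10% ending probability from the example:

```python
# Sketch: expected value of colluding when the game ends with probability p each period.
# 10 + 10*(1-p) + 10*(1-p)**2 + ... sums to 10/p.

def expected_collusion_value(per_period_profit: float, p_end: float) -> float:
    return per_period_profit / p_end   # geometric series with ratio (1 - p_end)

print(expected_collusion_value(10, 0.10))   # 100.0, beating the one-shot cheat payoff of 50
```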
Key Insight Collusion can be sustained as a Nash equilibrium when there is no certain “end” to a game. Doing so requires: – Ability to monitor actions of rivals. – Ability (and reputation for) punishing defectors. – Low interest rate. – High probability of future interaction.
End of Period Problem When players know precisely when a repeated game will end, the end-of-period problem arises. In the final period there is no tomorrow, and no way to punish a player for doing something wrong in the last period. Consequently, players will behave as if it were a one-shot game.
Resignations, Quits & Snake Oil Salesmen Workers work hard when threatened with being fired, provided the benefits of shirking are less than the cost of being fired. When a worker announces that she wants to quit, say tomorrow, the cost of shirking falls, so the threat of firing loses its effect. What can managers do to overcome the problem? 1. Fire the worker as soon as she announces her plan to quit? Problem: snake oil salesmen move from town to town, so there is no opportunity to punish them.
Factors affecting collusion in pricing games Number of firms: collusion is easier when there are few firms rather than many. Firm size: economies of scale exist in monitoring; it is easier for large firms to monitor small ones than the other way round. History of the market: explicit meetings to collude or tacit collusion? Punishment mechanisms: how do we punish our rivals when they cheat?
Real World Examples of Collusion Garbage Collection Industry OPEC NASDAQ Airlines |
Annabel and Vin from Don’t Inflate to Celebrate https://balloonsblow.org/ came to talk to the whole of year 3 about the dangers of balloons and how they can cause harm to the wildlife. They showed the children what happens when a balloon bursts and turns into what looks like a jellyfish. Animals then think this is food and eat it which can then harm them.
They also read the children a story called Marli’s Tangled Tale about the dangers of balloons. Year 3 will be using this book in their Literacy lessons next week. We also found out other ways we can celebrate special occasions without using balloons that are eco-friendly, like blowing bubbles, lighting a candle, putting up fabric bunting, or floating flowers. |
Solve by quadratic formula
In this blog post, we will take a look at how to solve by the quadratic formula. We will also look at some example problems and how to approach them.
Solving by quadratic formula
When you try to solve by the quadratic formula, there are often multiple ways to approach the problem, which is why it is an essential subject for students to learn. The good news is that there are various ways to solve algebra problems, though some strategies may be more effective than others, so it is important to find the one that works best for you. For example, you can use a step-by-step method or a system that incorporates visualization techniques. Hard work and dedication also matter: if you are willing to put in the time and effort needed to master algebra, it will not be long before you start seeing results.
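Since the post never actually states the formula, here is a minimal Python sketch of it: for ax^2 + bx + c = 0, the roots are x = (-b ± sqrt(b^2 - 4ac)) / (2a). This is a generic illustration, not code from the post.

```python
# Sketch: solving ax^2 + bx + c = 0 with the quadratic formula.
# cmath.sqrt also handles a negative discriminant (complex roots).

import cmath

def solve_quadratic(a: float, b: float, c: float):
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    root = cmath.sqrt(b * b - 4 * a * c)      # square root of the discriminant
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, -3, 2))   # ((2+0j), (1+0j)): the roots are 2 and 1
```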
Linear equations are equations in which the variable appears only to the first power. They may be written in the form y = mx + b, where x and y are variables and m and b are constants. A simple example: if y = 2x + 2, then when x = 1, y = 4. A solution to a linear equation is the set of values that makes the equation true. One common way to solve linear equations is substitution, which involves replacing a variable with a known value or an equivalent expression. For example, if x = 3, then substituting this value for x in y = 2x + 2 gives y = 8.
Square roots are one of the most useful tools in math, and you can use them to solve a wide range of equations and expressions. For example, an equation of the form x² = k has the two solutions x = ±√k whenever k is non-negative. There are two main ways to handle an equation that contains a square root. The first is to isolate the root on one side and then square both sides, which removes the root; you should then check the resulting answers in the original equation, since squaring can introduce extraneous solutions. The second is to use a table of square roots or a calculator that lets you enter the expression directly, which can be more efficient if you routinely work with similar expressions.
Inequality equations describe situations where two values are unequal: one side is greater or less than the other. These can be solved in various ways, depending on the situation, and mostly with the same moves used for ordinary equations. You can add or subtract the same quantity on both sides without changing the inequality, and you can multiply or divide both sides by a positive number. The one crucial difference is that multiplying or dividing both sides by a negative number flips the direction of the inequality sign: for example, if -2x < 6, dividing both sides by -2 gives x > -3. As with equations, the goal is to isolate the variable so you can read off the range of values that makes the inequality true. |
The computer has been becoming smaller and smaller since its invention. It has shrunk from enormous room-sized giants to pocket-sized mobile phones, each new model smaller and faster than the one before. Now computing is about to enter a new era: quantum computing. Quantum computing is not as simple as its name; to a regular person it may not make any sense at all. However, some basics of this strange technique can be explained.
What are Quantum Computers?
Quantum computers are computers that use quantum mechanics, a branch of physics. Quantum computing employs the phenomena of superposition and entanglement to operate. Classical computers process data in the form of ‘zeros’ and ‘ones’, otherwise known as ‘bits’, whereas quantum computers use ‘qubits’ to process data.
New data processor: Qubits
Bits used in classical computers can have a value of either 1 or 0. Qubits, however, can hold both of these values (0 and 1) at the same time, in a weighted mixture; this is called superposition. The interesting part is the observation of a qubit: when unobserved, a qubit is in a blend of all the possible values assigned to it, and when it is observed, it settles on one of these values.
Hence, qubits can carry a much richer range of values, essentially increasing the amount of data held by the same number of qubits compared with the bits of classical computing. This is the fundamental property and advantage of quantum computing.
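As a toy illustration of the superposition idea (a classical simulation, not a real quantum computation), here is a short NumPy sketch; the state and random seed are arbitrary choices.

```python
# Sketch: a qubit as two complex amplitudes; measuring yields 0 or 1
# with probability |amplitude|^2, "collapsing" the superposition.

import numpy as np

state = np.array([1, 1]) / np.sqrt(2)     # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                # measurement probabilities: [0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10, p=probs)   # ten simulated observations
print(probs, samples)
```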
Entanglement is another strange and important part of quantum computers. Entanglement happens when two particles separated by any distance influence one another: a change in the state of one particle affects the state of the other. In quantum computing, the same phenomenon is applied to qubits. Due to entanglement, qubits form a communication network within a quantum system, which helps make a quantum computer far faster than a classical one.
Hurdles and Difficulties
Quantum computers are still a difficult concept because of many technical hurdles. It is very difficult to increase the number of qubits physically, assigning arbitrary values to qubits is a difficult task, and reading qubit information is not easy. Moreover, due to superposition, a large amount of data can be lost when qubits are observed.
Quantum computers are yet to be realized. However, if manufactured and perfected over time, they will provide a revolutionary technological advancement in data processing. |
Scientists have shown that global temperatures have risen significantly above pre-industrial levels and are approaching the 1.5°C threshold. Human activities, especially those that produce greenhouse gases, are largely to blame for this.
Climate change impacts
Across the world the impacts of climate change are being felt in various ways. In some countries, we are seeing shifting weather patterns that threaten food production. Elsewhere, rapid glacier melt threatens sea level rise that risks catastrophic flooding. Currently the impacts of climate change are worse in poorer countries. Nonetheless, it is likely that climate change will increasingly affect the UK in years to come. Impacts it is likely we will see include:
- Warmer weather.
- Very cold winters will be rare.
- Winters will be wetter and summers will be hotter and more prolonged.
- Increased local flooding and flash floods.
- Increased pressure on water resources.
- Severe weather events occurring more often.
- Threat from flooding, droughts, heat waves, severe gales and snow.
Rising sea level
- Sea level could rise by 40cm, leading to coastal erosion and flood risks.
- Coastal ecosystems will be drastically altered.
Health
- Temperature increase will change the crops we grow, impacting our diets.
- Changes in temperature will cause different illnesses, creating more problems for children and the elderly.
- Foreign diseases associated with hot weather migrating north, for example, Malaria.
Homes and lifestyle
- Cost of living to increase due to food, fuel and water shortages.
- Homes potentially damaged and insurance will increase due to severe weather.
- Extreme weather affecting homes, work, infrastructure and travel links.
Farming
- Domestic crops will begin to struggle due to weather change.
- Soil will be less fertile as we struggle to grow crops that no longer thrive.
- Hot weather will kill livestock.
Wildlife
- Temperature change threatens birds, fish and land animals.
- Some species cannot adapt to changes.
- New competition/disease brought by migrating animals.
- The plants and trees that can grow will change.
Ocean acidification
- Dubbed the “evil twin of climate change".
- The ongoing decrease in ocean pH; acidity has increased by 30% so far.
- A 150% increase is predicted by 2100, something not experienced for 400,000 years.
- Wide implications for ocean life, particularly animals with shells or skeletons.
Flooding in Eden
While climate change is global in scale, Eden is no stranger to the impacts. The 2005, 2009 and 2016 floods had damaging impacts across the district. We have suffered social, economic and environmental losses. It is predicted similar flooding could become more common as temperatures increase.
In December 2015, Storm Desmond caused disruptions across Eden and Cumbria. It impacted homes and businesses for months after the event.
Our response to climate change
The Climate Change Act of 2008 was passed by our Government to ensure that we keep greenhouse gas emissions at least 80% lower than the 1990 baseline by 2050, to avoid dangerous climate change. The Act made the UK the first country with a legally binding framework to cut carbon emissions. Alongside this Act, the Committee on Climate Change was set up to advise the Government on legislation and progress on reducing emissions. The UK has struggled to meet carbon targets set in this act, even though the need is getting stronger.
Climate change effects are being experienced in our country, which we now must adapt to.
Our experience of climate change has already created costs to our economy, society and environment. We must work to mitigate against climate change, as well as adapt to the impacts that we already experience.
Mitigation: reducing climate change
Reducing greenhouse gas emissions: energy efficiency, renewable energy, lower consumption rates, electric cars.
Enhancing our carbon sinks: planting trees and carbon sequestering plants, invest in carbon capture technology.
Adaptation: adapting to life with climate change
Reduce our vulnerability to harmful effects.
Innovation (infrastructure like flood defences/devices like water desalination/resilient food/efficient healthcare).
Emergency planning for severe weather. |
Throughout the season we will examine Indigenous animals and their significance in telling stories, traditions and values.
What will we learn?
In this class, we will learn:
- Totem poles and the stories
- Animals in Indigenous Art
- Animal Indigenous symbols
- Indigenous stories and Traditions
- Raven's Tale
Significance of Indigenous Studies
Animals are well represented in the Indigenous culture through the stories told and passed down through many generations.
Learning about these stories and traditions will increase understanding for the first people of Canada.
Cultural awareness creates a safer world for human differences.
Our class is happening every Friday
From 1 - 4 pm.
Don't miss out! |
Prevention Principles - National Institutes of Health
This is a link to a 49-page online booklet that outlines prevention principles for parents, teachers and community leaders. These principles are intended to help parents, educators, and community leaders think about, plan for, and deliver research-based drug abuse prevention programs at the community level. The references following each principle are representative of current research. Below is an excerpt from the booklet:
Risk Factors and Protective Factors
Principle 1 - Prevention programs should enhance protective factors and reverse or reduce risk factors.
- The risk of becoming a drug abuser involves the relationship among the number and type of risk factors (e.g., deviant attitudes and behaviors) and protective factors (e.g., parental support).
- The potential impact of specific risk and protective factors changes with age. For example, risk factors within the family have greater impact on a younger child, while association with drug-abusing peers may be a more significant risk factor for an adolescent.
- Early intervention with risk factors (e.g., aggressive behavior and poor self-control) often has a greater impact than later intervention by changing a child’s life path (trajectory) away from problems and toward positive behaviors.
- While risk and protective factors can affect people of all groups, these factors can have a different effect depending on a person’s age, gender, ethnicity, culture, and environment. |
The field-effect transistor (FET) is an active "voltage" device. Unlike bipolar transistors, FETs are not current amplifiers; rather, they act much like vacuum tubes in basic operation. FETs are three-lead devices similar in appearance to bipolar transistors. The three leads are referred to as the gate, source, and drain, and are somewhat analogous to the bipolar transistor's base, emitter, and collector leads, respectively. There are two general types of FETs: junction field-effect transistors (JFETs) and insulated-gate metal-oxide-semiconductor field-effect transistors (MOSFETs, or IGFETs).
FETs are manufactured as either N-channel or P-channel devices. N-channel FETs are used in applications requiring the drain to be positive relative to the source; the opposite is true of P-channel FETs. The schematic symbols for N-channel and P-channel JFETs and MOSFETs are shown in Fig. ---1. Note that the arrow always points toward the channel (the interconnection between the source and drain) in N-channel symbols, and away from it in P-channel symbols.
All types of FETs have very high input impedances (1 M-ohm to over 1,000,000 M-ohm). This is the primary advantage of using FETs in the majority of applications. The complete independence of FET operation from input current is the reason for their classification as voltage devices. Because FETs don't need gate current to function, they don't have an appreciable loading effect on preceding stages or transducers.
Also, because their operation does not depend on "junction recombination" of majority carriers (as do bipolar transistors), they are inherently low-noise devices.
FET Operational Principles
The basic operational principles of FETs are actually much simpler than those of bipolar transistors. FETs control current flow through a semiconductor "channel" by means of an electrostatic field.
Referring to Fig. ---2, examine the construction of a JFET. Notice that there are two junctions, with the P material connected to the gate, and the N material making up the channel. Assume that the source lead is connected to a circuit common, and a positive potential is applied to the drain lead. Current will begin to flow from source to drain with little restriction. The N-channel semiconductor material, although not a good conductor, will conduct a substantial current.
Under these conditions, if a negative voltage is applied to the gate, the PN junctions between the gate and channel material will be reverse biased (negative on the P material). The reverse-biased condition creates a depletion region extending outward from the gate/channel junctions. As you might recall, a depletion region becomes an insulator because of the lack of majority charge carriers. As the depletion region spreads out from the gate/channel junction deeper into the channel region, it begins to restrict some of the current flow between source and drain. In effect, it reduces the conductive area of the channel, acting like a water valve closing on a stream of flowing water. This depletion region increases outward in proportion to the amplitude of the negative voltage applied to the gate.
If the negative gate voltage is increased to a high enough potential, a point will be reached when the depletion region entirely pinches off the current flow through the channel. At this point, the FET is said to be "pinched off" (this pinch-off region is analogous to cutoff in bipolar transistors), and all current flow through the channel stops. The basic principle involved is controlling the channel current with an electrostatic field. This field effect is the reason for the name field-effect transistor.
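To put some numbers on this pinch-off behavior, here is a minimal sketch in Python using the standard square-law (Shockley) approximation for a JFET in its saturation region. The IDSS and pinch-off voltage figures below are illustrative assumptions, not specifications for any device discussed here:

    # Standard square-law (Shockley) approximation for JFET drain current.
    # IDSS and VP are illustrative values only; real devices vary widely.
    IDSS = 0.012   # drain current at Vgs = 0, in amps (assumed 12 mA)
    VP = -4.0      # pinch-off voltage, in volts (assumed)

    def drain_current(vgs):
        """Approximate saturation-region drain current, in amps."""
        if vgs <= VP:  # channel fully depleted: the FET is pinched off
            return 0.0
        return IDSS * (1 - vgs / VP) ** 2

    for vgs in (0.0, -1.0, -2.0, -3.0, -4.0):
        print(f"Vgs = {vgs:5.1f} V  ->  Id = {drain_current(vgs) * 1000:5.2f} mA")

Note how the drain current falls smoothly toward zero as the gate voltage approaches pinch-off; this is exactly the valve-closing action described above.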
Continuing to refer to Fig. ---2, notice the difference in construction between a MOSFET and JFET. Although a JFET's input impedance is very high (because of the reverse-biased gate junction), there can still be a small gate current (because of leakage current through the junction), which translates to a reduced input impedance. However, gate current through a MOSFET is totally restricted by an insulating layer between the gate and channel.
A MOSFET functions in the same basic way as a JFET. If a negative voltage is applied to the gate of an N-channel MOSFET, the negative electrostatic charge around the gate area repels the negative-charge carriers in the N-channel material, forming a resultant depletion region. As the negative gate voltage varies, the depletion region varies proportionally. The variance in this depletion region controls the current flow through the channel and, once again, current flow is controlled by an electrostatic field.
A third type of FET, called an enhancement-mode MOSFET, utilizes an electrostatic field to "create a channel," rather than deplete a channel.
Referring again to Fig. ---2, notice the construction of an enhancement mode MOSFET. The normal N channel is separated by a section of a P material block, called the substrate. N-channel enhancement-mode MOSFETs, such as the one illustrated, require a positive voltage applied to the gate. The positive potential at the gate attracts "minority" carriers out of the P-material substrate, forming a layer of "N material" around the gate area.
This has the effect of connecting the two sections of N material (attached to the source and drain) together to form a continuous channel, and thus allows current to flow. As the positive gate potential increases, the size of the channel increases proportionally, which results in a proportional increase in conductivity. Once more, current flow is controlled by an electrostatic field.
All of the operating principles discussed in this section have been applied to N-channel FETs. P-channel FET devices will operate identically; the only difference is in the reversal of voltage polarities.
As discussed previously, the primary gain parameter of a standard bipolar transistor is beta. Beta defines the ratio of the current flow through the collector relative to the current flow through the base. In reference to FETs, the primary gain parameter is called transconductance (Gm).
The transconductance is a ratio defining the effect that a gate-to-source voltage (VGS) change will have on the drain current (I_D). Transconductance is typically defined in terms of micromhos (the mho is the basic unit for expressing conductance). Typical transconductance values for common FETs range from 2000 to 15,000 micromhos. The equation for calculating transconductance is:
Gm = (change in drain current) / (change in gate-to-source voltage)
For example, assume that you were testing an unknown FET. A 1-volt change in the gate-to-source voltage caused a 10-milliamp change in the drain current. The calculation for its transconductance value would be:
Gm = 0.010 amp / 1 volt = 0.01 mho = 10,000 micromhos
Referring to Fig. ---3, assume that this illustration has the same transconductance value as calculated in the previous example. A 1-volt change in the gate-to-source voltage (input) will cause a 10-milliamp change in the drain current. According to Ohm's law, a 10-milliamp current change through the 1-Kohm drain resistor (RD) will cause a 10-volt change across the drain resistor (10 milliamps × 1,000 ohms = 10 volts).
This 10-volt change will appear at the output. Therefore, because a 1-volt change at the gate resulted in a 10-volt change at the output, this circuit has a voltage gain (Ae) of 10.
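The same arithmetic is easy to verify numerically. This short Python sketch simply restates the example above; nothing in it is device-specific:

    # Transconductance and common-source voltage gain from the example above.
    delta_vgs = 1.0    # 1-volt change in gate-to-source voltage
    delta_id = 0.010   # 10-milliamp change in drain current
    rd = 1000.0        # 1-Kohm drain resistor (RD)

    gm = delta_id / delta_vgs   # 0.01 mho = 10,000 micromhos
    av = gm * rd                # voltage gain of the common-source stage

    print(f"Gm = {gm * 1e6:,.0f} micromhos")   # 10,000
    print(f"Voltage gain = {av:.0f}")          # 10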
In numerous ways, FET circuits can be compared with standard bipolar transistor circuits. The circuit shown in Fig. ---3 is analogous to the common-emitter configuration, and it’s appropriately called a common-source configuration. The output is inverted from the input, and it’s capable of voltage gain. If the output were taken from the source, instead of the drain, it would then be a common-drain configuration.
The output would not be inverted, and the voltage gain would be approximately 1. Of course, the common-drain FET amplifier is analogous to the common-collector amplifier in bipolar design.
FET Biasing Considerations
Referring again to Fig. ---3, note that the gate is effectively placed near the same potential as circuit common through resistor RG. With no input applied, the gate voltage (relative to circuit common) is zero.
However, this does not mean the gate-to-source voltage is zero.
Assume the source resistor (RS) is 100 ohms, and that the drain current, which is the same as the source current, is 15 milliamps. This 15-milliamp current flow through RS would cause it to drop 1.5 volts, placing the source lead of the FET at a positive 1.5-volt potential "above circuit common." If the source is 1.5 volts more positive than the gate, it could also be said that the gate is 1.5 volts more negative than the source. (Is an 8-ounce glass, with 4 ounces of water in it, half-full or half-empty?) Therefore, the gate-to-source voltage in this case is negative 1.5 volts. This also means that the gate has a 1.5-volt negative bias. If a signal voltage is applied to the input, causing the gate to become more negative, the FET will become less conductive (more resistive), and vice versa. A JFET exhibits maximum conductivity (minimum resistance), from the source to the drain, with no bias voltage applied to the gate.
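If you want to double-check that bias arithmetic, here is the calculation in Python, using the same values as the text:

    # Self-bias calculation: the gate sits at circuit common through RG,
    # so the source current through RS sets the gate-to-source voltage.
    rs = 100.0     # source resistor RS, in ohms
    i_d = 0.015    # drain (and source) current, in amps

    v_source = i_d * rs       # source lead: 1.5 V above circuit common
    v_gs = 0.0 - v_source     # gate at common, so Vgs = -1.5 V

    print(f"Source voltage: +{v_source:.1f} V above common")
    print(f"Gate-to-source bias: {v_gs:.1f} V")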
MOSFETs are biased in similar ways to JFETs, except in the case of enhancement-mode MOSFETs. As explained previously, enhancement-mode MOSFETs are biased with a gate voltage of the opposite polarity to their other FET counterparts. Some enhancement-mode MOSFETs are designed to operate in either mode.
In general, FETs provide a circuit designer with a higher degree of simplicity and flexibility, because of their lack of interstage loading considerations (a transistor stage with a high input impedance won’t load down the output of a previous stage). This can also result in the need for fewer stages, and less complexity in many circuit designs.
Static Electricity: An Unseen Danger
The introduction of MOS (metal oxide semiconductor) devices brought on a whole new era in the electronic world. Today, MOS technology has been incorporated into discrete and integrated components, allowing lower power consumption, improved circuit design and operation, higher component densities, and more sophisticated operation. Unfortunately, a major problem exists with all MOS devices. They are very susceptible to destruction by static electricity.
Inadvertent static electricity is usually caused by friction. Under the proper conditions, friction can force electrons to build up on nonconductive surfaces, creating a charge. When a charged substance is brought in contact with a conductive substance of lesser charge, the charged substance will discharge to the other conductor until the potentials are equal.
Everyone is "jolted" by static electricity from time to time. Static electrical charges can be built up on the human body by changing clothes, walking over certain types of carpeting, sliding across a car seat, or even friction from moving air. The actual potential of typical static charges is surprising.
A static charge of sufficient potential to provide a small "zap" on touching a conductive object is probably in the range of 2000 to 4000 volts! Most MOS devices can be destroyed by static discharges as low as 50 volts. The static discharge punctures the oxide insulator (which is almost indescribably thin) and forms a tiny carbon arc path through it. This renders the MOS device useless.
The point is that whenever you work with any type of MOS device, your body and tools must be free of static charges. There are many good methods available to do this. The most common is a "grounding strap," made from conductive plastic, that might be worn around the wrist or ankle and attached to a grounded object. Soldering irons should have a grounded tip, and special "antistatic" desoldering tools are available. Conductive work mats are also advisable. MOS devices must be stored in specially manufactured small parts cabinets, antistatic bags, and conductive foam.
NOTE Don’t try to make your own grounding straps out of common wire or conductive cable of any type!
This is very dangerous. It’s like working on electrical equipment while standing in water. Specially designed grounding straps, for the removal of static charges, are made from conductive plastic exhibiting very high resistance. Consequently, static charges can be drained safely, without increasing an electrocution risk in the event of an accident.
The susceptibility to static charges has led many people to believe that MOS devices are somehow "fragile." There is some evidence to support this notion; but in actuality, the problem is usually the result of an inexperienced design engineer incorporating a MOS device into an application where it doesn't belong. If properly implemented, MOS devices are as reliable as any other type of semiconductor device. However, care should be exercised in handling PC boards containing MOS devices, because some designers might extend an open, unprotected MOS device lead to an edge connector where it’s susceptible to static voltages once unplugged.
Building a High-Quality MOSFET Audio Amplifier
Many audiophiles today are adamant supporters of the virtues of MOSFETs used as output drivers in audio amplifiers. They claim that MOSFETs provide a softer, richer sound, one more reminiscent of vacuum-tube amplifiers. Although I won't get involved in that dispute, I will say that MOSFETs are more rugged than bipolar transistors and consequently provide a higher degree of reliability.
There are good economic and functional reasons to use MOSFETs as output drivers, however. At lower power levels, power MOSFETs display the same negative temperature coefficient as bipolar transistors, but at higher power levels they begin to take on the characteristics of devices with a positive temperature coefficient. Because of this highly desirable attribute, temperature compensation circuits are not required and there is no danger of thermal runaway. Also, power MOSFETs, being voltage devices, don't require the high current drive that must be provided for their bipolar counterparts. The result is a simpler, more temperature-stable amplifier circuit. The only disadvantage in using power MOSFETs (that I have discovered) is the lack of availability of high-power complementary pairs.
FIG. 4 is a schematic diagram of a professional-quality 120-watt rms MOSFET audio power amplifier. If you compare this schematic with the Fig. 8-11 amplifier design, you will discover that they are very similar. Most of the same operational physics and principles apply to both designs. There are a few differences, however, which will be detailed in this section.
To begin, the Fig. ---4 design incorporates four power MOSFETs in the output stage: two pairs of complementary MOSFETs that are connected in parallel with each other. This was done to increase the power output capability of the amplifier. If only about 50 or 60 watts of output power capability were desired, only a single pair of MOSFETs would have been needed. The conventional method of increasing the output power capability of any audio power amplifier, bipolar or MOSFET, is to add additional output devices in parallel.
A few of the changes incorporated into the Fig. ---4 amplifier have nothing to do with the use of MOSFET outputs; they are due to the higher rail voltages and subsequent higher output power capability. F1 and F2 have been increased to 5-amp fuses to accommodate the higher rail currents, and Q8 and Q11 have been replaced with higher-dissipation transistor devices because of the higher rail voltages. Also, although not shown in the schematics, the rail decoupling capacitors (i.e., C4, C5) must have their voltage ratings increased to at least 63 WVDC (preferably 100 WVDC). C10, C11, and C12 were already specified at 100 WVDC for the Fig. 8-11 project, so they are suitable for this design also. The voltage gain of this design also had to be increased in order to keep the same approximate input sensitivity. Remember, the peak-to-peak output voltage of this amplifier must be significantly greater than the Fig. 8-11 design in order to deliver about double the output power to the speaker system. Therefore, if the input signal is maintained at about 0.9 volt rms, the voltage gain must be increased. R8 was lowered to 270 ohms, which sets the gain at about 38 [(R10 + R8) divided by R8 = 38.03].
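As a sanity check on that gain figure, the feedback equation can be run numerically. The schematic isn't reproduced here, so the R10 value below is inferred from the quoted result of 38.03 rather than read off the diagram; treat it as an assumption:

    # Closed-loop gain of the feedback network: Av = (R10 + R8) / R8.
    r8 = 270.0      # gain-setting resistor, ohms (from the text)
    r10 = 10_000.0  # feedback resistor, ohms (inferred, not from the schematic)

    av = (r10 + r8) / r8
    print(f"Voltage gain ~ {av:.2f}")  # close to the quoted 38.03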
Now, getting into the modifications involving the incorporation of MOSFETs, note that transistor Q9 in the Fig. 8-11 diagram has been removed and replaced with a 500-ohm potentiometer (connected as a rheostat) in the Fig. ---4 diagram. Since the source-to-drain impedance values of MOSFETs increase with rising temperature, temperature compensation circuitry and dangers of thermal runaway are nonexistent.
Therefore, the V_bias temperature-tracking transistor of Fig. 8-11 (Q9) can be replaced with a simple potentiometer. Typically, P1 will be adjusted to drop about 0.8 volt to provide the small forward bias on the output devices for the minimization of crossover distortion.
Since MOSFETs are voltage devices, drawing insignificant gate currents for operational purposes, the pre-driver transistors (Q14 and Q15 in Fig. 8-11) have been removed. Unfortunately, MOSFETs do suffer the disadvantage of fairly high gate capacitance (up to about 1000 pF in many devices), which can create parasitic RF oscillations in the gate circuits (i.e., destructive high-frequency oscillations localized in the gate circuitry and not a function of the overall amplifier stability characteristics). This is especially problematic if you are using paralleled output pairs, such as in the Fig. 4 design. The cure for this idiosyncrasy is to install gate resistors, commonly called gate-stopper resistors, to provide resistive isolation from one gate to another. This is the function of resistors R19, R20, R25, and R26.
Note that C9 in Fig. 8-11 has been deleted in the Fig. ---4 design. As you may recall, C9 was implemented to improve the turn-off speed of the bipolar output transistors (eliminating the possibility of switching distortion). Unlike bipolar transistors, MOSFETs don’t have a junction capacitance mechanism that can store charge carriers and inhibit their turn-off speed. This is the reason why power MOSFETs are superior to power bipolar transistors for high-frequency applications. Therefore, C9 is not needed.
The MOSFET devices specified for the Fig. ---4 amplifier (i.e., 2SK1058/2SJ162 pairs) are a special type commonly called lateral MOSFETs. These devices are specifically designed for audio power amplifier applications, and they will provide better performance with greater reliability than the more common HEXFET or D-MOSFET families. However, those other device types are incorporated into quite a few commercial MOSFET amplifiers because of their lower cost; lateral MOSFETs are comparatively expensive.
The performance of the Fig. ---4 audio power amplifier is quite impressive. Most of the performance specifications are virtually identical to those of the Fig. 8-11 amplifier, but the percent THD (total harmonic distortion) is a little higher, measuring out to about 0.02% at 120 watts rms. One reason for this slightly higher distortion figure is the inherent lower transconductance of MOSFETs in comparison to bipolar transistors. (Bipolar transistors can be evaluated on the basis of transconductance just like MOSFETs, but their gain factor is usually looked at from the perspective of current gain rather than transconductance.) The point is that bipolar transistors have a gain capability much higher than do MOSFETs, so their higher gain can be converted to higher levels of "linearizing" negative feedback, which results in a little better distortion performance.
I included the amplifier design of Fig. ---4 in this section primarily for discussion purposes, but it can be considered an advanced project if you want to invest the time and money into building it. However, I certainly don't recommend it for a first project. In case you believe that your construction experience and safety practices are satisfactory for such an endeavor, I have provided the following construction details.
If you design a PC board for this project, don't worry about trying to make it overly compact; the heatsinking for the MOSFETs will take up most of the enclosure space. Make sure that all of the high-current PC board tracks are extra wide. The heatsinking for the lateral MOSFETs will need to be approximately doubled over that needed for the Fig. 8-11 amplifier. (MOSFETs are a little less efficient than bipolar transistors, but they can also tolerate higher temperatures.) If you have to run fairly long connection wiring to the MOSFET leads (i.e., over about 6 inches), it's best to solder the gate resistors directly to the gate leads of the MOSFETs and insulate them with a small piece of heat-shrink tubing. Two small heatsinks should be mounted to transistors Q8 and Q11. The remaining construction details are essentially the same as for the Fig. 8-11 amplifier design.
The raw dual-polarity power supply that you will need to power this amplifier will be quite hefty. The power transformer must be an 80-volt center-tapped model with a secondary current rating of at least 4 amps (i.e., a 320-VA transformer). I recommend a 25-amp bridge rectifier (overrated because of the high surge currents involved) and about 10,000 to 15,000 µF of filtering for each power supply rail. These capacitors will need to have a voltage rating of 75 WVDC. The AC line fuse (on the primary of the power transformer) should be a 3-amp, 250-volt slow-blow type, and don't forget to incorporate bleeder resistors.
During the initial testing of the Fig. ---4 power amplifier, or almost any power amplifier design that you want to test, the "lab quality power supply" project (the first project in this guide) can be used to detect major faults or wiring errors in the amplifier circuitry.
The power amplifier under test can be connected up to the dual polarity 38-volt outputs of the lab supply and functionally tested without a speaker load. Most modern audio power amplifier designs will function at much lower rail voltages than what they might be designed for. Obviously, under these conditions, the amplifier won’t operate at maximum performance levels, but it’s a less risky means of testing a newly constructed amplifier (or an amplifier that has just undergone major repairs). If you have made any catastrophic mistakes, the current-limited outputs of the lab supply will reduce the risk of destroying a large quantity of expensive components through collateral damage.
Time for more practical fun.
Sounds Like Fun
FIG. 5 is my favorite project in this guide. It can actually be "played" similar to a musical instrument to produce a variety of pleasing and unusual sounds. It’s also capable of running in "automatic mode" for unattended fascination.
The heart of this light-controlled sound generator is the basic UJT oscillator illustrated in Section 9, Fig. 9-9. Referring to this illustration, the 4.7-Kohm resistor in series with P1 is replaced with a photoresistor.
Three such oscillators are needed for the Fig. ---5 circuit. Each oscillator should have a different C1 value; the lowest C1 value chosen should be placed in oscillator 1 (to produce the highest audio frequency), the intermediate value in oscillator 2, and the highest capacitance value (producing the lowest frequency) in oscillator 3.
The outputs of the three oscillators are capacitor-coupled to the input of a JFET audio mixer. The output of the mixer is connected to a line level input on any audio amplification system.
The P1 potentiometer in each oscillator is adjusted for a good reference frequency under ambient lighting conditions. By waving your hand over the photoresistor (causing a shadow), the frequency will decrease accordingly. With all three oscillators running, the waving of both hands over the three photoresistors can produce a wide variety of sounds. By experimenting with different P1 settings in each oscillator, the effects can be extraordinary.
Another feature, added for automatic operation, is two "high-brightness"-type LEDs on the output of oscillator 3. Referring to Fig. ---5, when SW1 is in the position to connect the oscillator 3 output to the LEDs, the LEDs will flash on and off at the oscillator's frequency. If these LEDs are placed in close proximity to the photoresistors used to control the frequency of oscillators 1 and 2, their frequency shifts will occur at the oscillator 3 frequency. In addition, even subtle changes in ambient light will cause variances. The possibilities are infinite.
Although not shown in Fig. ---5, you will need to add some series resistance between the output of oscillator 3 and the LEDs to limit the current to an appropriate level (depending on the type of LEDs used).
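A reasonable starting point for that resistor can be worked out from Ohm's law. All of the values below are assumptions for illustration; substitute your oscillator's actual peak output and your LEDs' rated forward voltage and current:

    # Sizing the LED series resistor: R = (V_peak - n * V_led) / I_led.
    v_peak = 12.0   # peak output voltage of oscillator 3 (assumed)
    v_led = 2.0     # LED forward-voltage drop, volts (assumed)
    i_led = 0.020   # desired LED current, 20 mA (assumed)
    n_leds = 2      # both LEDs assumed wired in series

    r_series = (v_peak - n_leds * v_led) / i_led
    print(f"Series resistance ~ {r_series:.0f} ohms")  # 400; use 390 or 430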
This circuit works very well with the 12-watt amplifier discussed in Section 8. Placed in an attractive enclosure, it’s truly an impressive project.
The JFET mixer (Fig. ---5) is a high-quality audio-frequency mixer for any audio application. If additional inputs are needed, additional 1-Mohm potentiometer and 100-Kohm resistor combinations can be added.
Emergency Automobile Flasher
FIG. 6 illustrates how one HEXFET (a type of MOSFET) can be used to control a high-current automobile headlight for an emergency flasher.
A UJT oscillator (Section 9, Fig. 9-9) is modified for extremely low-frequency (ELF) operation (about 1 hertz), and its output is applied to the gate of the HEXFET as a switching voltage. The 1-Kohm resistor and 1000-µF capacitor are used to decouple the oscillator from the power circuit.
Any high-current automotive accessory (up to 10 amps) can be controlled with this circuit, even inductive loads such as winch motors.
The circuit illustrated in Fig. ---7 is used to convert a low-voltage DC power source (usually 12 volts from an automobile battery) to a higher voltage AC source. Circuits of this type are called inverters. The most common application for this type of circuit is the operation of line powered (120-volt AC) equipment from a car battery. There are, of course, many other applications.
For example, if you wanted to use this circuit for the previously mentioned application, the 12-volt DC source from the car battery would be applied to the DC terminals shown in Fig. ---7 (observing the correct polarity, and fuse-protecting the 12-volt line from the battery). A standard 12.6-volt ct secondary/120-volt primary power transformer is used. The VA rating of the transformer will depend on the load of the line-powered equipment intended for use with this circuit.
If the line-powered device required 120 volts AC at 1 amp (for example), a minimum size of 120 VA is needed (I recommend using at least a 10 to 20% higher VA rating to compensate for certain losses). With the components specified, a 200-VA transformer is the largest transformer that can be used.
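The sizing arithmetic is straightforward. This sketch just restates the example above with the recommended margin applied:

    # Transformer VA sizing with a 10-20% loss margin.
    v_load = 120.0   # volts AC required by the equipment
    i_load = 1.0     # amps required by the equipment
    margin = 0.15    # 15% headroom, mid-range of the 10-20% suggestion

    va_min = v_load * i_load                 # 120 VA
    va_recommended = va_min * (1 + margin)   # 138 VA

    print(f"Minimum: {va_min:.0f} VA; recommended: {va_recommended:.0f} VA")
    # Keep the final choice at or below 200 VA for the parts specified.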
The combination of C1, C2, and the transformer secondary makes up a resonant circuit (resonance will be discussed in a later section). Used in conjunction with the active components (Q1 and Q2), this circuit becomes a free-running oscillator whose frequency is determined primarily by the values of C1 and C2. The transformer operates most efficiently at about 60 hertz, so the values of C1 and C2 should be chosen to "tune" the oscillator as closely to that frequency as possible.
Capacitors C1 and C2 should be equal in capacitance value. Start with 0.01 µF for C1 and C2, and use a resistance value of 100 K-ohms for R1 and R2. These values should bring you close to 60 hertz. If the frequency is too high, increase the values of the capacitors slightly; if it's too low, decrease them.
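Because the resonant frequency of an LC circuit varies as one over the square root of C (with the transformer's inductance fixed), a measured frequency can be pulled onto 60 hertz in one step rather than by repeated trial and error. The 75-hertz starting point below is purely hypothetical:

    # For a fixed inductance, resonant frequency varies as 1/sqrt(C), so:
    #   C_new = C_old * (f_measured / f_target)**2
    def retuned_capacitance(c_old_uf, f_measured, f_target=60.0):
        """Capacitance (in uF) expected to shift f_measured to f_target."""
        return c_old_uf * (f_measured / f_target) ** 2

    # Hypothetical example: 0.01-uF capacitors oscillating at 75 Hz.
    print(f"New C1 = C2 ~ {retuned_capacitance(0.01, 75.0):.4f} uF")  # ~0.0156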
These 10 worksheets will help with your math rotations or with day-to-day math problem solving. The pack focuses on the part-part-whole math concept, and students can draw, write, or cut and paste their answers on these sheets.
Spring Math Problem Solving Part Part Whole Strategy
PDF (3 MB|13 pages)
Year 1 Maths:
Represent and solve simple addition and subtraction problems using a range of strategies including counting on, partitioning and rearranging parts (ACMNA015)
Year 2 Maths:
Solve simple addition and subtraction problems using a range of efficient mental and written strategies (ACMNA030) |
Sputtering is a thin-film deposition process used across modern technology, including the CD, semiconductor, disk drive and optical device industries. Sputtering works at the atomic level: atoms are ejected from the target material and deposited on a substrate, such as a solar panel, semiconductor wafer or optical device. It is the result of intense bombardment of the target by high-energy particles.
In general, sputtering occurs only when the kinetic energy of the bombarding particles is very high, far above normal thermal energies. Because it operates at the atomic level, it makes thin-film deposition more precise and accurate than melting the source material with conventional thermal energy.
Copper sulfide is an excellent material for sputtering targets. It can be molded into plates, discs, step targets, column targets and custom shapes. Copper sulfide is a compound of two elements, copper and sulfur; its chemical formula is CuS, and it is supplied at better than 99 percent purity.
The element copper takes its name from Cyprus, an early source of the metal, and its chemical symbol is "Cu." People in the Middle East first worked copper as early as 9000 BC.
Sulfur (also spelled sulphur) has been in use since around 2000 BC, when it was known to the Chinese and Indians. Its name derives from the Sanskrit "sulvere" and the Latin "sulfurium."
Copper sulfide metal discs and plates are highly adhesive and resistant to oxidation and corrosion. Using copper sulfide sputtering targets to deposit thin films will not only produce highly reflective and extremely conductive films, but can also greatly increase the efficiency of the source energy.
To achieve the desired result in a sputtering deposition, the process used to fabricate the sputtering target is critical. A copper sulfide target material will give the best result, although a single element, an alloy, a mixture of elements, or a compound can also be used.
For more information about sputtering targets, please visit http://www.sputtertargets.net/. |
What is the basic unit of any element?
The most basic unit of any element is the atom.
Every element has a typical charge as well (except transition metals, which can take more than one). For example:
H has a +1 charge.
Cl has a -1 charge.
If we want an element or a compound/solution to be stable, the charges must add up to zero.
It's important to understand that once you change the chemical formula, you change the entire number of atoms in that substance.
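A charge balance can even be checked mechanically. This is a minimal sketch in Python using standard textbook ion charges:

    # Verify that a formula's ion charges sum to zero (a stable compound).
    CHARGES = {"H": +1, "Cl": -1, "Ca": +2}

    def total_charge(formula):
        """Sum ion charges over (element, count) pairs."""
        return sum(CHARGES[el] * n for el, n in formula)

    print(total_charge([("H", 1), ("Cl", 1)]))   # HCl   -> 0, stable
    print(total_charge([("Ca", 1), ("Cl", 2)]))  # CaCl2 -> 0, stable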
Raccoons are omnivores and eat foods from both plant and animal sources; their diet is highly dependent on the food available where they live. According to PBS.org, a raccoon's typical diet includes fruits, nuts, plants, insects, berries, eggs, frogs and crayfish. The largest volume of their diet comes from plants and invertebrates. In urban areas and campgrounds, humans often see them scavenging through garbage cans looking for discarded food.
The original habitat of the raccoon was in the tropics, where riverbanks provided plenty of opportunities for foraging for frogs and crustaceans. They lived in burrows or cavities in trees of the forests of North America. Predators for raccoons included coyotes and foxes.
Over time, raccoons moved north. Barns and other human outbuildings provide raccoons shelter from the cold weather, allowing them to survive in areas far from their origin. This migration has led to sightings as far north as Alaska.
This migration took them from the forest to urban areas, where they do very well. Raccoons find shelter under homes and in storm sewers. The garbage and pet foods of humans provide a constant supply of food, even when their natural sources are scarce. Additionally, the city is relatively free of predators and laws restrict human hunting or trapping of these animals. |