Epstein Barr Virus (EBV) infection, commonly known as mononucleosis or “mono”, can occur at any age, but is most common in adolescence and early adulthood. Generally, the younger a person is when they get mono the better – children are less ill with it and recover faster than teenagers do, and teenagers in turn have an easier time than adults!
Mono proceeds in three phases. First, a prodrome lasting 1-2 weeks with few if any symptoms. Second, an acute phase lasting 2 to 6 weeks during which the individual may be very sick with fevers, swollen glands, severe sore throat, and exhaustion. And third, a convalescent phase lasting 2 to 6 months during which the acute symptoms have resolved but the patient suffers from lowered physical & mental energy, endurance, and easy fatigability. During the acute and convalescent phases, individuals are at increased risk of rupturing the spleen in the event of blunt abdominal trauma. There is no treatment other than supportive care (fluids, pain control, fever control) during any of these phases.
In most cases, by the time EBV is diagnosed it is a week or more into the acute phase. During this phase, the following measures are helpful:
- Take acetaminophen (Tylenol) or ibuprofen (Advil or Motrin) to bring down a fever and lessen the pain from a sore throat.
- Gargle four times a day with warm water mixed with a teaspoon of antacid or salt.
- If it hurts to swallow, try eating softer foods. Milkshakes and cold drinks are especially good. Avoid orange or grapefruit juice.
- Take a multivitamin every day.
- Do not share drinks or silverware with others.
- Drink plenty of fluids, at least 8 glasses each day.
- Rest when you feel tired. You do not need to stay in bed if you feel well enough to get up.
EBV is contagious, but only through close contact such as kissing, sharing utensils, or prolonged household contact. It is NOT generally transmitted through casual social contact such as might happen in the classroom. EBV is a member of the herpes virus family, and like other herpes viruses, after a primary infection it lives in the patient’s body (inside a subset of the white blood cells) for the rest of his life. In this latent state it is harmless, but it periodically goes through phases of replication during which the host sheds virus and is contagious for EBV despite having no symptoms at all. Thus, the question of being “no longer contagious” lacks much meaning and is not relevant to when a person should return to school.
Generally we recommend that children or teens with EBV return to school during the convalescent phase, basically as soon as they “feel up to it”; but with certain modifications in place to compensate for their reduced stamina. The most important of these is a ban from all contact sports (football, wrestling, hockey, etc.) for 6 weeks from the onset of illness (acute phase) due to the risk of splenic rupture. Being excused from gym and other physical activities altogether for the first 4 to 6 weeks after return to school might also be reasonable just for lack of energy. Reduced homework load and/or a shortened school day for some time after return should also be considered in severe cases. Kids in the convalescent phase of mono should have the option to go to the nurse’s office for a rest or nap should they feel the need during the school day as well.
20 October 2021
Delving into the stars
Published online 30 November 2016
A new technique allows scientists to measure the shape and structure of moving stars with unprecedented precision.
An international team of scientists, including from New York University Abu Dhabi (NYUAD), managed to directly observe structural components of one slowly rotating star, thanks to asteroseismology [1].
This new technique, 10,000 times more precise than its predecessor, reveals a star’s flatter, rounder contours and different rotational speeds. It allows scientists to ‘see’ the nature of the stellar interior with very high precision, according to the scientists.
Traditional techniques can only be used to image some of the largest close-by stars.
Stars are not perfectly spherical. All stars rotate and are therefore flattened by the centrifugal force. The faster the rotation, the more oblate the star becomes. The shape of stars can also be distorted by magnetic fields.
“Stellar magnetic fields, especially weak magnetic fields, are notoriously difficult to directly observe on distant stars,” says lead author Laurent Gizon, researcher at the Max Planck Institute for Solar System Research, Germany.
Gizon’s research focused on the evolution of stars, which is typically controlled by the nuclear reactions at their core.
“Measuring the shape of stars can inform us about their rotation and their magnetic field, two fundamental properties of stars,” he says. Stellar magnetic fields are responsible for active phenomena such as sunspots and flares.
Gizon and his colleagues studied Kepler 11145123, a hybrid pulsating star that is hot, luminous, more than twice the size of the Sun, and rotates three times more slowly. They found that the star, whose oscillations were observed by NASA’s Kepler mission for four years, is less oblate than its rotation rate implies, meaning that its structural distortion is caused by more than rotation alone.
“This is an indication of the presence of a magnetic field,” says Gizon. “We propose that the presence of a magnetic field at low latitudes could make the star look more spherical to the stellar oscillations.”
Through this new technique, the researchers managed to separate frequencies of the sound waves oscillating from the star’s interior, discovering that the star rotated faster at the surface than at the core.
Of the significance of the discovery, Othman Benomar, researcher at the Center of Space Science at New York University Abu Dhabi, and co-author of the study, says, “stars are elementary components of our universe and it is important to precisely understand the mechanisms of their birth, evolution and death if one wants to understand the evolution of larger structures in the universe such as stellar clusters, and galaxies.”
According to Benomar, until this research, little was known about the physical conditions of the interior of stars, such as pressure, temperature, nuclear reaction rate, magnetic field or rotation.
“This is due to two reasons. Firstly, stars are very distant, dense and opaque objects, for which in situ measurements are not possible. Secondly, conditions in the deep interior are so extreme that they cannot be reproduced in laboratories,” he says.
In the future, the scientists plan to map out deformities of more rapidly spinning stars. “It will be particularly interesting to see how faster rotations and a stronger magnetic field can change a star’s shape. An important theoretical field in astrophysics has now become observational,” says Gizon.
Joseph Gelfand, assistant professor of physics at NYUAD who was not involved in the study, describes the research and findings as “exciting” with some caveats.
Indeed, it’s the first time that asteroseismology has been used to measure the magnetic field strength of a star, says Gelfand. But while magnetic fields might be the cause of the discrepancy between the shape implied by the star’s rotation rate and the shape measured from its oscillations, “there are other possibilities, and this technique isn't really able to give the strength of the magnetic field.”
“That being said, the origin and strength of stellar magnetic fields is an important question relevant to a lot of fields of astrophysics and plasma physics — where magnetic fields are often acknowledged to be very important but too complicated to be studied.”
- Gizon, L. et al. Shape of a slowly rotating star measured by asteroseismology. Sci. Adv. 2, e1601777 (2016).
What is edge detection?
Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness.
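The idea of "discontinuities in brightness" can be sketched in a few lines of plain Python. This is a toy illustration under assumed inputs, not part of OpenCV; `brightness_jumps` is a hypothetical helper invented for this example.

```python
# Toy illustration of brightness discontinuity: along a row of pixel
# intensities, an "edge" shows up wherever adjacent values jump sharply.
def brightness_jumps(row, threshold):
    """Return indices i where |row[i+1] - row[i]| exceeds threshold."""
    return [
        i for i in range(len(row) - 1)
        if abs(row[i + 1] - row[i]) > threshold
    ]

row = [10, 12, 11, 200, 201, 199]  # dark region, then a bright region
print(brightness_jumps(row, threshold=50))  # → [2]
```

Real detectors such as Canny work on the same principle in two dimensions, using image gradients rather than simple neighbour differences.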
Where is edge detection used?
Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision, so knowing how to do it will eventually pay off.
There are several edge detection algorithms and different libraries that support them, but in this tutorial I’m going to show you how to do it in OpenCV using the Canny algorithm.
#In Windows
pip install opencv-python
pip install matplotlib

#In Linux
pip3 install opencv-python
pip3 install matplotlib
Let’s get started
Once we have everything installed, we are ready to detect edges in Python using the Canny algorithm.
We are going to use the OpenCV method imread() to load an image from file and Canny() to detect the edges, and then finally visualize the images before and after detection using Matplotlib.
Reading images with OpenCV
To read an image from file using the imread() method you need to provide two parameters: the path to the image, and the reading mode, which can be grayscale (cv2.IMREAD_GRAYSCALE), color (cv2.IMREAD_COLOR), or unchanged (cv2.IMREAD_UNCHANGED).
OpenCV syntax to read image
import cv2
image = cv2.imread(path_to_image, mode_of_reading)
The Canny algorithm usually works best when the image is in grayscale; when you pass 0 as the reading mode, OpenCV interprets it as grayscale reading.
Using Canny algorithms to detect the edges
To detect edges with Canny you have to specify your raw image, a lower pixel threshold, and a higher pixel threshold, in the order shown below:
image_with_edges = cv2.Canny(raw_image, l_threshold, h_threshold)
How do thresholds affect edge detection?
If the intensity gradient of a pixel is greater than the higher threshold, it will be added as an edge pixel in the output image; if it is below the lower threshold, it will be rejected completely. Pixels whose gradient falls between the two thresholds are kept only if they connect to a pixel above the higher threshold.
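The two-threshold rule (hysteresis thresholding) can be sketched in plain Python over a 1-D row of gradient magnitudes. This is a simplified illustration under assumed inputs, not OpenCV's actual implementation: real Canny works in 2-D and also propagates edge status transitively along chains of weak pixels, whereas this sketch only checks direct neighbours.

```python
# Simplified 1-D sketch of Canny's double-threshold (hysteresis) step.
def hysteresis_1d(gradients, low, high):
    """True where a pixel is kept as an edge, False where rejected."""
    strong = [g >= high for g in gradients]      # definite edges
    weak = [low <= g < high for g in gradients]  # maybe-edges
    edges = list(strong)
    # A weak pixel survives only if it touches a strong edge pixel.
    for i in range(len(gradients)):
        if weak[i] and (
            (i > 0 and strong[i - 1])
            or (i + 1 < len(gradients) and strong[i + 1])
        ):
            edges[i] = True
    return edges

grads = [10, 120, 250, 130, 40]
print(hysteresis_1d(grads, low=100, high=200))
# → [False, True, True, True, False]
```

Raising the high threshold keeps only the strongest edges; raising the low threshold discards more of the faint ones outright.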
Finalizing our code and Visualizing it with Matplotlib
Now that we have learned the basics of OpenCV and the Canny detection algorithm, let’s put them together into a sample real-world application.
Let’s detect the edges in the sample image road.jpg below.
Using the knowledge we just learned, I have bundled everything together into the final code below; when you run it, it will load an image, perform edge detection, and display the result using Matplotlib.
import cv2
import matplotlib.pyplot as plt

def detect_edge(image):
    '''Detect edges and plot the original image next to the result.'''
    image_with_edges = cv2.Canny(image, 100, 200)
    images = [image, image_with_edges]
    location = [121, 122]  # subplot positions: 1 row, 2 columns
    for loc, img in zip(location, images):
        plt.subplot(loc)
        plt.imshow(img, cmap='gray')
    plt.savefig('edge.png')
    plt.show()

image = cv2.imread('road.jpg', 0)  # 0 = read as grayscale
detect_edge(image)
Once executed, it will produce the results below.
If you find this post interesting, don’t forget to subscribe to get more posts like this.
I recommend you to also check this;
- Build a real-time bar-code reader in Python
- How to convert picture to sound in Python
- Realtime vehicle detection in Python in 5 minutes
- Getting started with image processing using Pillow
Also in case of anything, drop it in the comment box below and I will get back to you ASAP
To get the full code for this article, check out my GitHub.
English Language Arts Kindergarten
Reading: Foundational Skills Standard 1 b.
English Language Arts Kindergarten
Reading: Foundational Skills Standard 1 d.
This lesson will help expose students to other cultures and traditions.
The Vowel Family Skit
Scoop the Short Vowels
Forming Relationships Using Music and Rhythm--Our culture
Check with your school librarian to see what resources are available
25 Fun Learning Songs-lyrics on basic skills, alphabet, counting, safety rules, animal sounds, etc.
Kids Sing For Kids, 150 Songs, Lyrics Included (Direct Source Special Productions Inc.)
Alphabet Sing-Along Set, 26 lg. Alph. color cards & Sing Along Alph. Tape(Scholastic)
35 Rubrics & Checklists to Assess Reading and Writing (Scholastic)
Sound Matching Sheets & Lessons That Build Phonemic Awareness (Scholastic)
Phonemic Awareness Activities (Scholastic)
Phonemic Awareness Songs & Rhymes
Fun Phonics Mini-Books (Scholastic)
Easy & Adorable Alphabet Recipes for Snacktime (Scholastic)
When Will I Read by Miriam Cohen
The Alphabet Tree by Leo Lionni
Animal Alphabet by Bert Kitchen
ABC by Bob Reese
ABC I Like Me by Nancy Carlson
Alphabet Adventure by Audrey Wood
Letters and Sounds by Rosemary Wells
Word Family Wheels (Scholastic)
Books Are Better Than TV
Read To Me by Jacalyn Leavitt
PLEASE READ TO ME! By Elizabeth Rodgers
Teachers-Curtains! Familiar Plays for Little Actors by Diane Head
Learn-the-Alphabet PUPPET PALS-26 Reproducible stick puppets with stories
Puppets by Susan Canizares
25 Just Right Plays for Emergent Readers (Scholastic)
25 Emergent Reader Plays Around the Year (Scholastic)
25 Science Plays for Emergent Readers
A RAINBOW All Around Me by Sandra Pinkney
Let's Read About Squanto by Sonia W. Black
Ten Little Rabbits by Virginia Brossman & Sylvia Long
Tales From Around the World:
Teacher Book-Fun with Fairy Tales by Jo Ellen Moore & Joy Evens
Teaching With Cinderella Stories From Around the World (Scholastic)
Why Mosquitoes Buzz in People's Ears by Verna Aandema
Anansi the Spider
Mufaro's Beautiful Daughters
Abuela's Weave by Omar S. Castaneda
Babushka Baba Yaga by Patricia Polacco
The Tale of Rabbit and Coyote by Tony Johnston
Market Days by Madhur Jaffrey
Teaching Math with Rhyme & Rhythm:
Collaboration Math Books-12 adorable Rhyming math book (Scholastic)
Mother Goose Math (Scholastic)
Move & Learn Math Activities (Scholastic)
Children need to be exposed to other cultures and traditions, making connections through music, dance, art, poetry, stories, and acting.
Intended Learning Outcomes
1. Demonstrate a positive learning attitude.
3. Demonstrate responsible emotional and cognitive behaviors.
4. Develop physical skills and personal hygiene.
6. Communicate clearly in oral, artistic, written and nonverbal form.
Symbolization, classification, segmentation and blending, form conclusions
Invitation to Learn
Possible Extensions and Adaptations
Pair them with another child for the Invitation to Learn Activities-- Share one card or let them pick their card first--"Find the letter that your name starts with."
Extended Activities: Have students from your classroom put on the Short Vowel Skit for other classes.
Use Small Square White boards--write specific alphabet letters, write alphabet pairs, write vowels, write consonants, and sound out and spell three letter words.
Magnetic Letters--Find specific alphabet letters, find alphabet pairs, find vowels, find consonants and spell three letter words.
Phonetic Readers-Sound out letters to form words
Send a copy of the Short Vowel Family Skit home for the families to act out and reinforce.
Family bags—Alphabet flash card activities: Parent/child match, little/big letters separate and match, separate vowels from consonants, and sounding out and spelling three letter words.
Basic Skills Assessment--each quarter
Microphthalmia is a congenital (present at birth) defect identified by the unusual smallness of one (unilateral) or both (bilateral) eyeballs. While in the uterus, the eyeballs of the baby fail to grow or form correctly, resulting in limited or severe loss of vision, or even blindness.
With simple cases, small eyeballs are anatomically intact, meaning all parts of the eye are present and functioning properly. Normal vision is possible if the eyes are slightly smaller than normal. In complex cases, small eyeballs are associated with other eye abnormalities, such as cataracts, coloboma and ptosis. Vision may be impaired due to this malformation and its accompanying conditions.
According to MedlinePlus, microphthalmia occurs in 1 in 10,000 births. Also called "small eye syndrome," it may be caused by genetics, although risk factors may also contribute to this birth defect.
Microphthalmia can be part of a syndrome (syndromic microphthalmia) or be present by itself (non-syndromic microphthalmia).
It's important to note that between a third and a half of all those with microphthalmia have it as part of a syndrome and that the treatments for syndromic and non-syndromic are handled in different ways.
Here are the main characteristics of each:
Syndromic microphthalmia refers to a birth defect occurring in conjunction with other conditions that affect organs and tissues in other parts of the body. Lenz microphthalmia syndrome, for example, is a type of syndromic microphthalmia. This rare birth defect is passed down through families (inherited) and almost exclusively affects males. It's characterized by microphthalmia plus malformations of other parts of the body, such as eyelids (blepharoptosis), skull (microcephaly), spine (scoliosis) or fingers (clinodactyly).
Non-syndromic microphthalmia means this birth defect occurs in isolation and not in concert with other conditions.
SEE RELATED: Rare eye diseases
Microphthalmia vs anophthalmia
Microphthalmia is often confused with anophthalmia. Both are birth defects that affect the eyes, but they’re not the same.
With microphthalmia, one or both eyeballs don't grow to full size. It may seem as if the eyeball is completely missing from the socket, but on closer inspection, it still has some eye tissue remaining, thus the "small eye" appearance. How the eyeball will function in terms of vision depends on the severity of the microphthalmia.
Anophthalmia, on the other hand, occurs when one or both eyeballs don’t form at all and are absent. Upon inspection, there may be some residual tissue, but formation of the eyeball during fetal development either completely stopped, degenerated or did not occur at all. Babies born with anophthalmia are blind.
Causes and risk factors of microphthalmia
The exact cause of microphthalmia is difficult, if not impossible, to identify. Microphthalmia may be attributed to genetics. However, research on birth defects as it relates to gene or chromosomal anomalies is so inconclusive that pinpointing an obvious genetic cause is not possible.
Instead of specific causes, it's useful to look at general risk factors that may shine a light on reasons eye-related birth defects occur. Some risk factors that focus on the mother during pregnancy include:
Harmful environmental factors – Exposure to certain things acquired through the environment may pose issues for fetal development. These include toxins and chemicals (mercury, solvents, pesticides, radiation from X-rays, etc.) as well as infections such as rubella and toxoplasmosis, or viruses such as herpes simplex and Zika.
Adverse behavioral factors – Engaging in certain activities may affect babies in utero. These include drinking alcohol or caffeine, using tobacco products or illegal drugs, taking certain medicines or engaging in poor dietary habits.
Underlying medical conditions – The existence of health-related conditions prior to and during pregnancy may interfere in fetal growth. Some pre-existing or chronic conditions include diabetes, cancer and obesity.
Is there a cure for microphthalmia?
There's no cure for microphthalmia since new eyes cannot be created, even with all the remarkable advances in medical science.
Just like other organs in the body (with the exception of the liver and skin), eyes can't be regenerated after birth. They do have the capability of repairing themselves after trauma, injury or disease, but they cannot regrow.
Treatment of microphthalmia
While there's no way to prevent or fully correct this birth defect, there are surgeries available for treating eye abnormalities that may be present alongside microphthalmia. These eye abnormalities include but are not limited to:
Cloudy eye (cataract)
Missing eye tissue (coloboma)
Small cornea (microcornea)
Droopy eye (ptosis)
Lazy eye (amblyopia)
Partial or complete absence of the iris (aniridia)
In addition to surgeries related to associated eye abnormalities, another option is available for those with microphthalmia — the artificial (prosthetic) eye.
SEE RELATED: Congenital cataracts
Prosthetic eyes as a treatment for microphthalmia
A prosthetic eye (commonly referred to as "fake eye" or "glass eye") takes the place of a missing eyeball and is designed to fit over an orbital (eye socket) implant like a shell.
Cosmetically, it improves appearance by giving the face symmetry and a natural look, especially when painted with an iris and pupil. While a prosthetic eye doesn't return vision, it can function like a real eye in terms of blinking, watering and movement.
Newborns or very young children have their own type of prosthetic eye called a conformer. It's inserted to promote and encourage orbital growth, since the bones around the eyes are still forming. With it, the health of underlying tissue can be tracked and monitored. Conformers allow for expansion from growth by being changed out frequently as the child gets older.
After a certain age, a conformer can be switched for a prosthesis. Just like no two pairs of natural eyes are the same, no two sets of prosthetic eyes are the same.
Some things to discuss with an ocularist (a professional who makes prosthetic eyes) and an eye surgeon include:
Material – Options include acrylic plastic polymer, silicone polymer or acrylic glass.
Customization – Offers a personalized look, such as eye color and pupil size, and correct measurements by taking an impression of the orbit.
Cost – Ready-made shells provide a less expensive alternative to custom-made ones, though they may not fit or look as good.
Microphthalmia can be emotionally traumatic for the baby's parents, families and others — especially given that there's no cure. In addition to treatment, many support, counselling and advocacy groups are available to provide education, resources and answers to questions related to vision impairment and blindness.
As with other birth defects, early diagnosis and treatment are imperative. After birth, it's important to consult with a pediatrician or book an eye exam with an ophthalmologist or eye doctor.
Page updated February 2021
The powerful resolution and sensitivity of NASA’s Hubble Space Telescope reveal wonders of the universe in this image. The ability of gravity to warp the fabric of space itself is displayed, as the massive galaxy cluster Abell S1063 at center is surrounded by the distorted and magnified light of galaxies much farther away. The combined mass of the galaxies in the cluster act as a natural magnifying glass or funhouse mirror, showing amazing detail, but with a warped effect.
Natural magnifiers like these allow scientists to study details of distant galaxies they could not see otherwise. The distant, warped galaxies also provide information about the cluster that is revealing them. Extreme distortion stretches distant galaxies into a smeared arc, indicating the mass distribution of the galaxy cluster. Likewise, some distant galaxies appear multiple times through the “lens,” and any changes within them, like a supernova, will show up in one reflection of the galaxy and then another, indicating how light is travelling through the distorted space.
Hubble also captures the faint intracluster glow between the galaxies that make up Abell S1063, produced by free-floating “orphan” stars that were thrown from their galaxies during mergers. These stars align themselves with the overall gravity map of the cluster, and have been used as an indicator of where dark matter is distributed. In this way, the intracluster light is used to trace the location of dark matter, which is in itself undetectable.
Definition of Fallacy
A fallacy is an erroneous argument dependent upon an unsound or illogical contention. There are many fallacy examples that we can find in everyday conversations.
Types of Fallacies
Here are a few well-known kinds of fallacies you might experience when making an argument:
1. Appeal to Ignorance
Appeal to ignorance happens when one individual utilizes another individual’s lack of information on a specific subject as proof that his or her own particular argument is right.
2. Appeal to Authority
This sort of error is also known as “Argumentum Verecundia” (argument from modesty). Instead of concentrating on the benefits of an argument, the arguer will attempt to append their argument to an individual of power or authority in an effort to give trustworthiness to their argument.
3. Appeal to Popular Opinion
This sort of appeal is when somebody asserts that a thought or conviction is correct since it is the thing that the general population accept.
4. Association Fallacy
Sometimes called “guilt by affiliation,” this happens when somebody connects a particular thought or drill to something or somebody negative so as to infer blame on another individual.
5. Attacking the Person
Also regarded as “Argumentum ad Hominem” (argument against the man), this is a common fallacy used during debates where an individual substitutes a rebuttal with a personal insult.
6. Begging the Question
The conclusion of a contention is accepted in the statement of the inquiry itself.
7. Circular Argument
This fallacy is also known as “Circulus in Probando”. This error is committed when an argument takes its evidence from an element inside the argument itself instead of from an outside one.
8. Relationship Implies Causation Fallacy
Also called “Cum Hoc Ergo Propter Hoc”, this fallacy is a deception in which the individual making the contention joins two occasions that happen consecutively and accepts that one made the other.
9. False Dilemma/Dichotomy
Sometimes called “Bifurcation”, this sort of error happens when somebody presents their argument in such a way that there are just two conceivable alternatives left.
10. Illogical conclusion
This is a fallacy wherein somebody attests a conclusion that does not follow from the suggestions.
11. Slippery Slope
The error happens when one contends that an exceptionally minor movement will unavoidably prompt great and frequently ludicrous conclusions.
12. Syllogism Fallacy
This fallacy may also be used to form incorrect conclusions that are odd. Syllogism fallacy is a false argument as it implies an incorrect conclusion.
Examples of Fallacy
To understand the different types of fallacies better, check out the following examples of fallacy:
Appeal to Ignorance
“You can’t demonstrate that there aren’t Martians living in caves on the surface of Mars, so it is sensible for me to accept there are.”
Appeal to Authority
“Well, Isaac Newton trusted in Alchemy, do you suppose you know more than Isaac Newton?”
Appeal to Popular Opinion
“Lots of individuals purchased this collection, so it must be great.”
Association Fallacy
“Hitler was a veggie lover, in this way, I don’t trust vegans.”
Attacking the Person
“Don’t listen to Eddie’s contentions on instruction, he’s a simpleton.”
Begging the Question
“If outsiders didn’t take my daily paper, who did?” (accept that the daily paper was really stolen).
Circular Argument
“I accept that Frosted Flakes are incredible since it says as much on the Frosted Flakes bundling.”
Relationship Implies Causation Fallacy
“I saw a jaybird and ten minutes after the fact, I crashed my auto, in this manner, jaybirds are terrible fortunes.”
False Dilemma/Dichotomy
“If you don’t vote for this applicant, you must be a Communist.”
Illogical Conclusion
“All Dubliners are from Ireland. Ronan is not a Dubliner, in this manner, he is not Irish.”
Slippery Slope
“If we permit gay individuals to get hitched, what’s afterward? Permitting individuals to wed their pooches?”
Syllogism Fallacy
“All crows are black and the bird in my cage is black. So, the bird in my cage is a crow.”
Functions of Fallacy
Literary critics find the weaknesses of literary pieces by searching for fallacies in the pieces being critiqued. Because of this, there is a tendency for critics to distort the intentions of the writer.
In my studies regarding mapping I came across Terminus, the Roman god who protected boundary markers; his name is also the Latin word for such a marker. Introduced to Rome in 700 BC, his cult is believed to reflect an early animistic reverence, wherein power was inherent within objects, in this case a boundary marker treated as a god: “a god concerned with the division of property.”
Stone markers were used to show divisions between properties, and the god Terminus would watch over the boundary to maintain its position. An annual celebration, called Terminalia, was held on February 23 and brought the adjoining property owners together to celebrate, acknowledge, and renew the boundary marker.
“On the festival the two owners of adjacent property crowned the statue with garlands and raised a rude altar, on which they offered up some corn, honeycombs, and wine, and sacrificed a lamb. It is the traditional end of the Roman year. The rites of the Terminalia included ceremonial renewal and mutual recognition of the boundary stone, the marker between properties. A garland would be laid on this marker by all parties to the land so divided. After kindling a fire, honey-cakes, fruits and wine would be offered and shared, and songs of praise to the god called Terminus would be sung.”
I found it intriguing that there was actually a god of boundaries, a deity that oversaw the division of land. That such an act would be carried out as a religious rite only makes sense. The apportioning of land has in many cases been framed as a religious act; perhaps this is just the beginning of that perspective.
One can also see the challenge and conflict it takes to divide up and take ownership of the land. It makes sense that a deity was needed to oversee this, to send prayers and make offerings to. It was a task too complicated and contentious for mere mortals.
And here is more of what that new year party looked like, from Ovid, Fasti II:
"When night has passed, let the god be celebrated With customary honour, who separates the fields with his sign. Terminus, whether a stone or a stump buried in the earth, You have been a god since ancient times. You are crowned from either side by two landowners, Who bring two garlands and two cakes in offering. An altar's made: here the farmer's wife herself Brings coals from the warm hearth on a broken pot. The old man cuts wood and piles the logs with skill, And works at setting branches in the solid earth. Then he nurses the first flames with dry bark, While a boy stands by and holds the wide basket. When he's thrown grain three times into the fire The little daughter offers the sliced honeycombs. Others carry wine: part of each is offered to the flames: The crowd, dressed in white, watch silently. Terminus, at the boundary, is sprinkled with lamb's blood, And doesn't grumble when a sucking pig is granted him. Neighbours gather sincerely, and hold a feast, And sing your praises, sacred Terminus: `You set bounds to peoples, cities, great kingdoms: Without you every field would be disputed. You curry no favour: you aren't bribed with gold, Guarding the land entrusted to you in good faith. If you'd once marked the bounds of Thyrean lands, Three hundred men would not have died, Nor Othryades' name be seen on the pile of weapons. O how he made his fatherland bleed! What happened when the new Capitol was built? The whole throng of gods yielded to Jupiter and made room: But as the ancients tell, Terminus remained in the shrine Where he was found, and shares the temple with great Jupiter. Even now there's a small hole in the temple roof, So he can see nothing above him but stars.
Since then, Terminus, you've not been free to wander: Stay there, in the place where you've been put, And yield not an inch to your neighbour's prayers, Lest you seem to set men above Jupiter: And whether they beat you with rakes, or ploughshares, Call out: "This is your field, and that is his!"' There's a track that takes people to the Laurentine fields, The kingdom once sought by Aeneas, the Trojan leader: The sixth milestone from the City, there, bears witness To the sacrifice of a sheep's entrails to you, Terminus. The lands of other races have fixed boundaries: The extent of the City of Rome and the world is one."
Miosis means excessive constriction (shrinking) of your pupil. In miosis, the diameter of the pupil is less than 2 millimeters (mm), or just over 1/16th of an inch.
The pupil is the circular black spot at the center of your eye that allows light to enter. Your iris (the colored part of your eye) opens and closes to change the size of the pupil.
Miosis can occur in one or both eyes. When it affects only one eye, the resulting difference in pupil size is called anisocoria. Another name for miosis is pinpoint pupil. When your pupils are excessively dilated, it’s called mydriasis.
There are many causes of miosis. It can be a symptom of certain brain and nervous system conditions. It can also be induced by many types of drugs and chemical agents. Opioids (including fentanyl, morphine, heroin, and methadone) can produce miosis.
Constricted or dilated pupils can be an important clue to help your doctor diagnose your condition.
The size of your pupil is controlled by two counteracting muscles — the iris dilator and the iris sphincter. Usually, miosis (pupil constriction) is caused by a problem with your iris sphincter muscles or the nerves that control them.
The iris sphincter muscles are controlled by nerves that originate near the center of your brain. They’re part of the parasympathetic or involuntary nervous system. To reach your eye, these nerves pass along your third cranial nerve, also called the oculomotor nerve.
Any disease, drug, or chemical agent that affects these nerves, or the parts of the brain and head that they pass through, can cause miosis.
Diseases or conditions that can cause miosis
Diseases or conditions that can cause miosis include:
- cluster headaches
- Horner’s syndrome
- intracranial hemorrhage and brain stem stroke
- iris inflammation (iridocyclitis, uveitis)
- Lyme disease
- multiple sclerosis (MS)
- loss of the lens of the eye (aphakia) due to surgery or accident
Drugs and chemicals that can cause miosis
Some of the commonly used drugs and chemicals that can cause miosis are opioids, including:
- oxycodone (Oxycontin)
Other drugs and chemicals that can cause miosis include:
- PCP (angel dust or phencyclidine)
- tobacco products and other nicotine-containing substances
- pilocarpine eye drops used to treat glaucoma
- clonidine, which is used to treat high blood pressure, ADHD, drug withdrawal, and menopausal hot flashes
- cholinergic drugs used to stimulate the parasympathetic nervous system, including acetylcholine, carbachol, and methacholine
- second-generation (atypical) antipsychotics, such as risperidone and olanzapine, as well as haloperidol, a first-generation antipsychotic
- phenothiazine-type antipsychotics used to treat schizophrenia, including prochlorperazine (Compazine, Compro), chlorpromazine (Promapar, Thorazine), and fluphenazine (Permitil, Prolixin)
- organophosphates, found in many insecticides, herbicides, and nerve agents
Both newborns and older adults may have small pupils. It’s normal for a newborn to have small pupils for up to two weeks.
As you get older, your pupils tend to grow smaller. This is usually due to weakness of the iris dilator muscles, not to a problem with the iris constrictors.
Because miosis can be triggered by a variety of diseases and conditions, there are many possible accompanying symptoms. Here we’ll break down some of the common causes of miosis and their accompanying symptoms:
Cluster headaches. A cluster headache produces very severe pain around or above the eye, in your temple or forehead. It occurs only on one side of your head, and recurs at different intervals, depending on the type of cluster headache you have (chronic or episodic).
Miosis is one of the common accompanying symptoms. Other cluster headache symptoms can include:
- drooping eyelid
- eye redness
- runny nose
- sensitivity to light and sound
- mood change
Intracranial hemorrhage and brain stem stroke. Miosis in both pupils is a common symptom of an intracranial hemorrhage or a brain stem (pontine) stroke. A hemorrhage or stroke happens when the blood supply to your upper brain stem (pons) is cut off by a burst artery or a blockage.
A brain stem stroke does not produce the same symptoms as a typical stroke. The most common symptoms are dizziness, vertigo, and weakness on both sides of the body. It can occasionally produce jerking or shaking that looks like a seizure, slurred speech, or sudden loss of consciousness.
Horner’s syndrome. Horner’s syndrome is a collection of symptoms resulting from damage to the nerves connecting the brain to the face or eye. Decreased pupil size (miosis) and drooping eyelid on one side of the face are typical symptoms.
Horner’s is sometimes the result of a stroke, brain tumor, spinal cord injury, or shingles (herpes zoster) infection.
Iris inflammation (iridocyclitis). Decreased pupil size (miosis) can be a symptom of inflammation of your iris, the colored portion of your eye. Iris inflammation can have many causes. These include:
- rheumatoid arthritis
- shingles (herpes zoster)
Iris inflammation is also known as iridocyclitis, iritis, or uveitis.
Syphilis. The infection can affect the midbrain and cause a specific type of miosis called Argyll Robertson pupil. In Argyll Robertson pupil, the pupils are small but don’t contract further when exposed to light. However, they do contract when focusing on a near object.
Lyme disease. Lyme disease is caused by infection with a corkscrew-shaped bacterium similar to the syphilis spirochete. Except for the genital rash, untreated Lyme disease can produce many of the same symptoms as syphilis, including miosis.
Your doctor will examine your pupils, usually with the aid of a flashlight or other light source. They’ll look at your pupils in a dimly lit place, because it’s natural for pupils to be constricted in a brightly lit location, especially outdoors.
Miosis is defined as a pupil size of 2 mm (a little over 1/16th inch) or smaller.
Once the miosis is identified, your doctor will look for specific signs:
- Does it affect one eye (ipsilateral) or both (bilateral)?
- Does the pupil size change in response to light?
- Does the pupil size change in response to a near object?
- How long does it take for the pupil to respond?
The answer to each of these questions can help identify the possible cause of the miosis.
Miosis is a symptom of something else and not a disease in itself. It can provide an important clue to your doctor in finding the underlying cause.
If your miosis is the result of prescription drugs, such as for glaucoma or high blood pressure, your doctor may be able to find a substitute drug that will reduce or eliminate the symptom.
Miosis can be a result of use of opioid drugs, including fentanyl, oxycodone (Oxycontin), heroin, and methadone. Severe miosis could be a sign of an overdose. In that case, emergency treatment with the drug naloxone could save your life.
If drug use is ruled out, miosis could be a sign of organophosphate poisoning. Organophosphates are chemicals found in many insecticides, herbicides, and nerve agents.
Organophosphate poisoning produces serious symptoms including:
- stomach disorder
- violent muscle contractions
- accelerated or reduced heart rate
Miosis is a relatively minor symptom of organophosphate poisoning, but may help in diagnosis. Acute organophosphate poisoning is treated in a hospital or emergency setting. The drug pralidoxime (2-PAM) can be used to treat organophosphate poisoning.
As a symptom of disease
When miosis is a symptom of an underlying disease, the treatment addresses the underlying disease. Some of the common disease causes and their treatments include:
Cluster headaches. Acute cluster headaches are treated with oxygen inhalation, triptans, ergotamine, and topical lidocaine nose drops.
Preventive treatments include:
- corticosteroids (prednisone)
- lithium carbonate
- the blood pressure medication verapamil
- melatonin in doses of 9 milligrams per day
Injection of a mixture of methylprednisolone and lidocaine into the greater occipital nerve (back of your neck) can serve as a preventive.
Intracranial hemorrhage and brain stem stroke. Miosis can be a sign of a brain stem (pontine) stroke. Because the symptoms are different from those of a classic stroke, it may be misdiagnosed. Doctors use an MRI to confirm it. Treatment involves either dissolving the blockage with drugs or inserting a stent, or surgery to stop the bleeding and restore blood flow to the brain.
Horner’s syndrome. There’s no specific treatment for Horner’s syndrome. If your doctor can find the underlying condition, they’ll treat that. It could be due to stroke, brain tumor, spinal cord injury, or shingles — or there may be no discoverable cause.
Neurosyphilis and ocular syphilis. If the ocular symptoms occur in earlier stages (primary, secondary, or latent) of the infection, a single intramuscular injection of penicillin is usually sufficient treatment.
The tertiary stage of syphilis requires multiple doses of penicillin, and existing damage to the nervous system won’t be repaired.
Lyme disease. Early detection of Lyme disease is crucial for a good outcome. If caught in the first few weeks, antibiotic treatment for up to 30 days will usually cure the infection. In later stages of Lyme, long-term antibiotic therapy is needed. The causes and treatment of late stage or chronic Lyme is controversial.
Miosis or pinpoint pupil can be a symptom of many underlying disease conditions or a reaction to drugs.
The condition isn’t normally painful or dangerous in itself. But it can be a marker for some serious conditions including stroke, drug overdose, or organophosphate poisoning.
Be sure to consult a doctor if you notice the signs of miosis.
The nematode (roundworm) Trichuris trichiura, also called the human whipworm.
The unembryonated eggs are passed with the stool. In the soil, the eggs develop into a 2-cell stage, an advanced cleavage stage, and then they embryonate; eggs become infective in 15 to 30 days. After ingestion (soil-contaminated hands or food), the eggs hatch in the small intestine, and release larvae that mature and establish themselves as adults in the colon. The adult worms (approximately 4 cm in length) live in the cecum and ascending colon. The adult worms are fixed in that location, with the anterior portions threaded into the mucosa. The females begin to oviposit 60 to 70 days after infection. Female worms in the cecum shed between 3,000 and 20,000 eggs per day. The life span of the adults is about 1 year.
Life cycle image and information courtesy of DPDx.
Washington – More than three decades after biologists discovered and identified a then-new species of nautilus in Papua New Guinea, Peter Ward and his colleagues were able to see it once again. The professor from the University of Washington returned from the South Pacific claiming he had reunited with what he considers “one of the world’s rarest animals”.
The creature, named Allonautilus scrobiculatus, is a rare species of nautilus that Ward and a colleague first discovered off Ndrova Island in Papua New Guinea in 1984.
Nautiluses are small, distant cousins of squid and cuttlefish. They are an ancient animal lineage, often called “living fossils” because their characteristic shells appear in the fossil record across a 500-million-year period.
However, Allonautilus went unseen until July 2015, when Ward returned to Papua New Guinea to survey nautilus populations once again. Since these creatures are expert scavengers, Ward and his colleagues set up “bait on a stick” systems each evening, placing fish and chicken meat on a pole between 500 and 1,300 feet below the surface. To spot a nautilus, they filmed activity around the bait for 12 hours at a time.
The surprise came while watching footage from one night: an Allonautilus approached the bait. This was the first time Ward had seen the animal in 31 years. The video also showed the creature being joined by another nautilus, and the two battled over the bait until a sunfish appeared on the scene.
Additionally, biologists used baited traps to capture nautiluses at a depth of 600 feet in order to obtain small tissue, shell, mucus, and other samples to study the rare creature. The team later released the animals back to their habitats. The samples were used to determine the age and sex of each animal, as well as the diversity of the nautilus population in the South Pacific. Scientists found that most nautilus populations live isolated from one another because they can only survive within a very specific range of ocean depths.
“They swim just above the bottom of wherever they are. Just like submarines, they have ‘fail depths’ where they’ll die if they go too deep, and surface waters are so warm that they usually can’t go up there. Water about 2,600 feet deep is going to isolate them,” Ward explained.
According to the research, illegal fishing and “mining” operations for nautilus shells have already significantly reduced several populations. “This unchecked practice could threaten a lineage that has been around longer than the dinosaurs and survived the two largest mass extinctions in Earth’s history.”
In September, the US Fish and Wildlife Service is expected to determine whether to advocate for nautiluses to become a protected species under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
Nonetheless, the reunion has led Ward to state that many more studies must be done in order to better understand the animal. “It’s only near this tiny island,” said Ward. “This could be the rarest animal in the world. We need to know if Allonautilus is anywhere else, and we won’t know until we go out there and look.”
Source: University of Washington
Ocean Of Opportunity
Earth’s oceans contain tens of millions of tons of plastic pollution. But a new technique that creates biodegradable plastics out of seaweed could finally give the oceans relief.
Bioplastics are plastics manufactured from biomass sources instead of fossil fuels. Many degrade far more quickly than traditional plastics, but creating them typically requires fertile soil and fresh water, which aren’t available everywhere.
Now, researchers have found a way to create a bioplastic using seaweed, a far more accessible resource — a promising new approach that could both reduce strain on the plastic-clogged oceans and reduce the Earth’s dependence on fossil fuels.
Researchers from Tel Aviv University describe their new bioplastic production process in a study published recently in the journal Bioresource Technology.
Certain microorganisms naturally produce a polymer called polyhydroxyalkanoate (PHA). Some factories already create plastics from PHA, but they do so using microorganisms that feed on plants that grow on land using fresh water.
Through their experiments, the team found it was possible to derive PHA from Haloferax mediterranei, a microorganism that feeds on seaweed.
“We have proved it is possible to produce bioplastic completely based on marine resources in a process that is friendly both to the environment and to its residents,” researcher Alexander Golberg said in a press release.
Every year, 8 million metric tons of plastic finds its way into the Earth’s oceans, and researchers estimate that plastic will outweigh fish by 2050. That plastic is killing marine life, destroying coral reefs, and even affecting human health.
Efforts are already underway to remove plastic from the ocean, and several governments are banning certain plastics altogether. But plastic pollution is a huge problem that will require a multi-pronged solution — and a biodegradable plastic could be one of those prongs.
Correlations in Science
In conducting their research, scientists often want to know if two sets of data (variables) are related to each other. For instance, you might wonder if the amount of time a student spends reading the Windows to the Universe website is related to the grade that student gets in his or her science classes. How would you test this, and how would you express it in a way that would clearly tell other people what sort of relationship there is between these two variables?
One of the most common ways a scientist does this is by using a concept called correlation. Correlation is basically a measurement of how strongly two different variables are related, and is usually calculated using a formula that results in a coefficient of correlation ranging from -1 to 1.
A correlation of -1 indicates that the two variables are inversely related, and that as one variable increases the other always decreases. For example, the total sales in a given day for an ice cream truck and the total snowfall for that same day might have a correlation close to -1. On days with lots of snow, not many people are buying ice cream from the truck, and on days when the ice cream truck's sales are really high, it's probably not snowing. A correlation of 1, on the other hand, indicates that the two variables are directly related, and that as one variable goes up the other does also. For example, the amount of time a basketball player spends practicing is usually closely related to the number of points he or she scores in games, and this relationship would probably have a correlation coefficient close to 1.
Many times a calculated correlation will be close to 0, and this indicates that there is no obvious relationship between the two variables (there may still be a relationship; in some rare cases two variables can be closely related but have a correlation coefficient of 0). It's important to remember that even when two variables are correlated, this does not mean that a change in one variable causes the other one to change; it just means that they're related. For instance, when it's raining you can see people using umbrellas a lot more often, and you can see cars using their wipers a lot more often. So umbrella use and windshield wiper use are correlated, but neither causes the other; we don't use umbrellas because other people are using wipers, or vice versa. We use both because it's raining.
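The coefficient described above (Pearson's correlation) can be computed directly from its definition: the covariance of the two variables divided by the product of their spreads. The sketch below uses made-up numbers for the practice-and-points and snowfall-and-sales examples; the data values are illustrative assumptions, not real measurements.

```python
import math

def pearson_correlation(xs, ys):
    """Compute the Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term: how the two variables move together.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Spread of each variable on its own.
    spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

# Hypothetical data: hours of practice vs. points scored (directly related)
practice = [1, 2, 3, 4, 5]
points = [4, 9, 10, 14, 17]
print(round(pearson_correlation(practice, points), 2))   # close to 1

# Hypothetical data: snowfall (cm) vs. ice cream sales (inversely related)
snowfall = [0, 2, 5, 8, 10]
sales = [100, 80, 50, 30, 10]
print(round(pearson_correlation(snowfall, sales), 2))    # close to -1
```

Running this on the made-up data gives a coefficient near 1 for the practice example and near -1 for the snowfall example, matching the interpretations given above.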
Temporal range: Early Paleocene - Present
The spectacled caiman (Caiman crocodilus), also known as the white caiman or common caiman, is a crocodilian found in much of Central and South America. It lives in a range of lowland wetland and riverine habitats, and can tolerate saltwater as well as freshwater due to its high adaptability.
It is a small to medium-sized crocodilian. Males reach around 1.8 to 2 meters as adults; exceptionally large males can stretch to 2.5 meters, and there are reports of individuals up to 2.8 meters. As in all other crocodilians, females are smaller, averaging about 1.3 meters; the largest recorded female, which lived in captivity and died of heart disease, measured about 1.61 meters. Body mass is about 58 kg on average for an adult male, while females usually do not exceed 32 kg. Individuals are usually bigger in the South American parts of the range than in the northern (Central American) parts. Its name comes from its bony eye ridges, which looked like spectacles to the amateurs who first observed it.
As I observe students of Chinese, I notice that they use different learning strategies, some which are more effective than others. What are some of their good strategies which will help you acquire Chinese faster than the average student? Let’s look at some memory strategies. These enhance the storage and retrieval of information.
The linguist Earl Stevick in his book Memory, Meaning and Method says that the greater the personal investment of mental energy that we spend on vocabulary, the easier it is to get it into our long-term memory. If this is true, what types of activities will be of more use to you in achieving your goal?
Flash cards: the one most commonly used is flash cards, i.e. blank name cards (or pieces of paper) with the Chinese written on one side and a picture/drawing or your mother tongue on the reverse side. Even better, include a sentence illustrating the use of the word. Writing them out yourself is better than buying ready-made ones as the actual process of writing them aids memory. But, having written them out, what should you do with them? Here are a couple of ideas:
Place them on a table with the picture or English facing upwards and try to guess the Chinese; then try it the other way round. This game is more fun if two or three people play it together. Another idea is to take a small oblong-shaped box and divide it into sections. After learning the new vocabulary of the first lesson, place the flash cards in the front section of the box. Then move on to the next lesson. When you have completed that lesson, take out the cards in the front section and test yourself. Those that you remember, place them in the next section back (thus freeing up space for the new vocabulary cards); those that you failed to recall, leave them in the front section, and so on through the textbook. When you fail to recall the Chinese of any flash card no matter where it is in the box, place that card in the front section. For those vocabulary words which stubbornly refuse to stick in your long-term memory, place the flash cards in your top pocket (or purse/wallet) and go through them when you have a few spare moments.
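The sectioned-box routine just described is, in effect, a spaced-repetition scheme (often called the Leitner system): remembered cards migrate toward the back of the box and are reviewed less often, while forgotten cards return to the front. A minimal sketch in Python; the class name, section count, and example card here are illustrative assumptions, not part of the original description.

```python
class FlashCardBox:
    """A minimal sketch of the sectioned flash-card box described above.

    New cards enter section 0 (the front of the box). A correctly
    recalled card moves one section further back; a forgotten card
    returns to the front section, wherever it was in the box.
    """

    def __init__(self, num_sections=5):
        self.sections = [[] for _ in range(num_sections)]

    def add_card(self, chinese, english):
        # New vocabulary always enters the front section.
        self.sections[0].append((chinese, english))

    def review(self, section_index, recalled):
        """Review every card in one section.

        `recalled` is a function mapping a card to True (remembered)
        or False (forgotten).
        """
        cards = self.sections[section_index]
        self.sections[section_index] = []
        for card in cards:
            if recalled(card):
                # Remembered: move one section back (capped at the last).
                dest = min(section_index + 1, len(self.sections) - 1)
            else:
                # Forgotten: back to the front section.
                dest = 0
            self.sections[dest].append(card)
```

For example, a card answered correctly in section 0 moves to section 1; if it is later forgotten during a review of section 1, it drops back to section 0, just as the box routine above prescribes.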
Record New Vocabulary: record the new vocabulary – the Chinese plus the English definition; then listen to it several times. Use the ‘pause’ button to test yourself.
Counting: in order to memorize the numbers, carry some loose change with you (9 @ RMB1, 9 @ RMB10, etc.) and when you have a few minutes, get the money out and count it.
Saying the Action: naming actions as you do them will help you memorize faster. For example, the daily routine of getting washed and dressed: “I shave my face, I brush my teeth, I take a shower, I dry myself, I comb my hair, I put on my clothes” (naming each item of clothing). Another example would be cooking: “I turn on the gas, I fill the pan with water, I slice the meat and vegetables, etc.”
Labeling Items: label household items and the rooms in your apartment or dormitory, e.g. curtains, window, chair, light switch, kitchen, bedroom, etc., and then say the word aloud when you use that item or enter that room.
Grouping under Topics: group or classify vocabulary under different subjects. Use your computer or notebook and divide it into topic areas, e.g. weather, place names, food items, transport, etc. Then, as you learn new vocabulary from either your textbook or Chinese friends, write the vocabulary items under the relevant topic area. When you have sufficient words under a particular topic, write a story using those words and get your teacher to correct it. For example, when you have acquired several ‘animal’ words, write a story about ‘A Day on the Farm’; or ‘weather’ words, write ‘Tomorrow’s Weather Forecast for China’. Then, when you exchange English for Chinese conversation or hire someone for talking practice, you can choose one topic area each time you meet, using the vocabulary you have acquired as a basis for conversation.
Word Mapping: write a key word in the middle of a piece of paper (e.g. breakfast), then link words related to the topic ‘breakfast’ by means of arrows, e.g. bread – butter – jam / coffee – milk – sugar / knife – fork – spoon. Drawing or downloading pictures of the words is more effective than just writing the English with the Chinese.
Visual Association: linking the visual with the verbal (i.e. picturing the vocabulary item along with the Chinese pronunciation) is a more effective way of storing vocabulary in your memory as it utilizes both the left and right side of your brain.
Looking for Similarities: look for similarities between Chinese and English (or any other language you know). This isn’t easy with Chinese, but have a go anyway. One good example I heard was the Mandarin word for ‘head’ sounds just like the English word for ‘toe’ – right at the opposite end of the body!!
Drawing Diagrams and Pictures: drawing diagrams and pictures utilizes your right brain and will also help you memorize groups of words. For example, draw pictures to represent the place prepositions ‘behind’, ‘in front of’, ‘opposite’, ‘next to’, etc. Drawing faces to express different emotions (happy, sad, angry, bored, etc.) will also aid memory.
Remember that applying the same strategy to all tasks does not work, so try to discover those strategies which best help compensate for your weaknesses. And once you have found them, continue using them, discarding those strategies that are ineffective.
Language Learning Strategies – Memory Strategies: pdf file
This article is also in Chinese: 好的语言学习者的学习策略 1
Whether you are renewing a prescription, communicating with others, or working across nearly any key employment discipline you care to name, it has never been more important for people to have the knowledge, skills and dispositions to navigate online spaces confidently and successfully. We call this ability ‘digital fluency’, and it encompasses an array of competencies and understandings that are needed for us to have access to opportunities in our networked, digital societies today and in the future.
This outcome of being digitally fluent relates to issues of responsibility, equity and access. We all need to be able to fully participate in a digitally-enabled education system and in an increasingly digitised society. If we work with fluency in the way we use technologies, we are able to keep ourselves safe online and take full advantage of life chance opportunities such as being able to apply for work, manage our finances, or be part of our local community.
Digital fluency can also be considered as part of a broader set of competencies related to ‘21st Century’ learning. Being able to manipulate technologies so we can create and navigate information successfully is supported by our ability to work collaboratively, solve real-world problems creatively, pursue our own learning goals and so on.
What might it mean to be digitally fluent?
Broadly speaking, digital fluency is a combination of these three concepts:
- digital, or technical, proficiency:
- able to understand, make judgements about, select and use appropriate technologies and technological systems for different purposes; this might include knowing how to use technologies to protect one’s data, digital identity, and device security.
- digital literacy:
- in digital spaces, being able to read, create, critique and make judgements about the accuracy and worth of information being accessed;
- being fluent in critical thinking and problem-solving online;
- using digital tools to collaborate and construct information across all relevant and significant contexts
- social competence, or dispositional knowledge:
- the ability to be able to relate to others and communicate with them effectively;
- able to manage one’s identity, information, relationships in ways that are appropriate, responsible, safe and sustainable.
What might this look like in practice? A useful example of a curriculum context in which we might deliberately foster the competencies of digital fluency can be seen in this example from Makauri School in which Year 6 created digital mementos to “preserve the stories of Makauri’s past students". This kind of story encompasses deliberate teaching of technical skills (making websites and QR codes), literacies (story-creation through multiple media; critical creation of new information) and social competence (the value of others’ stories, heritage and culture in the local community).
Developing digital fluency
The aim, then, of becoming digitally fluent is for people to be able to act as successful citizens in whatever contexts they choose for themselves. Our role as educators is to deliberately design pathways from early childhood through to tertiary and beyond that support these developing fluencies in ways that make sense to the learners. The recent report - Students, Computers and Learning: Making the Connection (OECD, 2015) - highlights the importance of bridging the digital divide, not leaving the development of digital fluency to chance.
The skills, understandings and competencies that comprise digital fluency are best considered as underpinning supports that weave throughout curricula. In many ways, here in New Zealand, the Key Competencies in the New Zealand Curriculum offer a helpful way to think about a framework for fostering digital fluency as part of learning. Similarly, the four principles of Te Whāriki can be considered through a digital fluency lens while the Te Marautanga o Aotearoa encourages communities to frame a graduate profile that might include digital fluency as an over-arching goal for ākonga.
We know that ‘adding on’ modules or skill-based ticklists to work through do not effectively foster digital fluency. We also know learning how to effectively and safely manage technologies cannot be achieved solely through technical means (e.g. filtering) or prohibition (e.g. denying people access to technologies). Instead, a proactive approach to designing learning pathways, one that balances preventative approaches with the application of skills and understandings in meaningful contexts, is preferable.
NetSafe and the Ministry of Education remind us that we need to offer “opportunities for students to be involved in decisions about the management of digital technology at the school [and develop] a pro-social culture of digital technology use” in school, alongside our communities [Source, Netsafe]
Where to begin?
One helpful framework for thinking about planning approaches to digital fluency development through learning can be found in the description of how key competencies integrate into effective curriculum design (NZC website).
Broadly, this reminds us that digital fluency approaches should:
- align to the principles of the New Zealand Curriculum, TMOA and Te Whāriki
- draw on a range of values that are inclusive and enable young people to become confident, connected, actively involved, lifelong learners
- be embedded in learning in each of the learning areas
- be supported by effective pedagogy.
Pollen records show that destruction of Easter’s forests was well under way by the year 800 [there is no evidence humans were here on the island at this time; the only evidence for humans on Rapa Nui comes at 1200 AD], just a few centuries after the start of human settlement [also wrong]. Then charcoal from wood fires came to fill the sediment cores, while pollen of palms and other trees and woody shrubs decreased or disappeared, and pollen of the grasses that replaced the forest became more abundant. Not long after 1400 the palm finally became extinct [not according to radiocarbon evidence that shows that palm burning continued until the 1700s (Mann et al 2008)], not only as a result of being chopped down but also because the now ubiquitous rats prevented its regeneration: of the dozens of preserved palm nuts discovered in caves on Easter, all had been chewed by rats and could no longer germinate [Yes: rats were likely a major factor in the loss of the trees, as we have argued]. While the hauhau tree did not become extinct in Polynesian times, its numbers declined drastically until there weren’t enough left to make ropes from [Absolutely no evidence of this. Triumfetta grows in disturbed habitat and likely increased over time]. By the time Heyerdahl visited Easter, only a single, nearly dead toromiro tree remained on the island, and even that lone survivor has now disappeared. (Fortunately, the toromiro still grows in botanical gardens elsewhere.) [The loss of the toromiro tree is likely due to the massive sheep ranch that occupied the island from the 1880s through the 1940s — this sheep-devastated landscape is what Heyerdahl observed, not the result of prehistory. An overlooked and inconvenient fact for Diamond.]
The fifteenth century marked the end not only for Easter’s palm but for the forest itself [and it had no impact on humans whatsoever, as the loss of trees meant more room for growing food]. Its doom [What doom? Where is the evidence?] had been approaching as people cleared land to plant gardens [yes, they grew more food to support themselves]; as they felled trees to build canoes [no: the palm trees were not used for canoes], to transport and erect statues [no: palm trees are bad rollers and there is no evidence to suggest they were used in this fashion], and to burn [yes: as slash-and-burn cultivators this would be the best way to release nutrients locked up in the trees]; as rats devoured seeds [yes, the rats would eat the palm nuts before humans could use them as food sources]; and probably as the native birds died out that had pollinated the trees’ flowers and dispersed their fruit [yet this had no effect on humans or the rest of the environment]. The overall picture is among the most extreme examples of forest destruction anywhere in the world [uh, what about England? Or Iceland? Or the many areas around the world that were deforested at much bigger scales]: the whole forest gone, and most of its tree species extinct.
The destruction of the island’s animals was as extreme as that of the forest: without exception, every species of native land bird became extinct [True: but these were not a major source of food for people even early on, according to the faunal remains]. Even shellfish were overexploited, until people had to settle for small sea snails instead of larger cowries [Again, no: these were never a major source of food, as these people were slash-and-burn cultivators, growing sweet potato and taro]. Porpoise bones disappeared abruptly [not based on data] from garbage heaps around 1500 [in Steadman’s original 1994 data, there are dolphin bones in all of the levels up to the surface]; no one could harpoon porpoises anymore [says who? There are dolphin bones throughout the Steadman sequence], since the trees used for constructing the big seagoing canoes no longer existed [Wrong. The trees used for sea-going canoes never existed on Rapa Nui. The forest that was cleared for farming was a palm forest. Palms are not useful for canoes. The lack of sea-going canoes is almost certainly due to the fact that the original canoes rotted away or left for other locations. But note that the loss of the canoes had no impact on the population’s ability to subsist: they were slash-and-burn cultivators.] The colonies of more than half of the seabird species breeding on Easter or on its offshore islets were wiped out. [Yet this had no impact on the ability of people to feed themselves. The loss of seabirds is likely due to the loss of nesting habitat as these slash-and-burn cultivators cleared the forest for farmland. In this way, the loss of the forest increased the ability of people to support themselves.]
In place of these meat supplies, the Easter Islanders intensified their production of chickens, which had been only an occasional food item [There is no specific evidence of this: chickens were always present on the island and provided a fraction of the diet. There is no evidence to support Diamond’s assertion about the early “occasional” chicken]. They also turned to the largest remaining meat source available: humans, whose bones became common in late Easter Island garbage heaps [This claim is simply made up: human remains are not found in “garbage heaps.” Human remains are found as part of cremations and burials. There is no evidence of cannibalism in all known skeletal examples.] Oral traditions of the islanders are rife with cannibalism; the most inflammatory taunt that could be snarled at an enemy was The flesh of your mother sticks between my teeth. [The first mentions of cannibalism appear with the arrival of Catholic missionaries from the Gambier Islands. These missionaries described all native populations as “cannibals” regardless of evidence.] With no wood available to cook these new goodies [what wood?], the islanders resorted to sugarcane scraps, grass, and sedges to fuel their fires [This was always the case: palm trunks are soft and fibrous, not hardwood, so they were never useful for firewood, as Diamond appears to claim they were.].
All these strands of evidence can be wound into a coherent narrative of a society’s decline and fall. The first Polynesian colonists found themselves on an island with fertile soil, abundant food, bountiful building materials, ample lebensraum, and all the prerequisites for comfortable living. They prospered and multiplied.
After a few centuries [Existing evidence points to the fact that people arrived on the island in the 13th century and that statue construction began upon arrival. Monument construction is a trait carried by Polynesians as they spread across East Polynesia and appears elsewhere (e.g., Hawaii, Tahiti, Marquesas, New Zealand, Australs)], they began erecting stone statues on platforms, like the ones their Polynesian forebears had carved. With passing years, the statues and platforms became larger and larger, and the statues began sporting ten-ton red crowns–probably in an escalating spiral of one-upmanship, as rival clans tried to surpass each other with shows of wealth and power. (In the same way, successive Egyptian pharaohs built ever-larger pyramids. Today Hollywood movie moguls near my home in Los Angeles are displaying their wealth and power by building ever more ostentatious mansions. Tycoon Marvin Davis topped previous moguls with plans for a 50,000-square-foot house, so now Aaron Spelling has topped Davis with a 56,000-square-foot house [Diamond’s home is in Bel Air amid a sea of mansions and is currently worth $8.2MM]. All that those buildings lack to make the message explicit are ten-ton red crowns.) On Easter, as in modern America, society was held together by a complex political system to redistribute locally available resources and to integrate the economies of different areas [Yet the difference on Rapa Nui is the fact that the isolation meant that there were direct feedback mechanisms that would provide cues to shape behavior, something ignored by Diamond and a fact that makes the world-spanning economy of the present distinct from prehistory.]
Eventually Easter’s growing population was cutting the forest more rapidly than the forest was regenerating. The people used the land for gardens and the wood for fuel, canoes, and houses–and, of course, for lugging statues. As forest disappeared, the islanders ran out of timber and rope to transport and erect their statues. Life became more uncomfortable– springs and streams dried up [What springs and streams? Water sources continued to be used in the same fashion as they were in prehistory through contact.], and wood was no longer available for fires [It never was in the first place].
People also found it harder to fill their stomachs, as land birds, large sea snails, and many seabirds disappeared [they never relied on these foods, as Rapa Nui people were sweet-potato cultivators. The loss of these things had little impact on food availability.]. Because timber for building seagoing canoes vanished [they never had the “timber” for canoes], fish catches declined and porpoises disappeared from the table. Crop yields also declined [crop yields were always poor, even from the beginning], since deforestation allowed the soil to be eroded by rain and wind [radiocarbon dates of big erosional features show that this occurred during historic times as a consequence of sheep ranching], dried by the sun, and its nutrients to be leached from it [this was always the case: the volcanic soils were never productive]. Intensified chicken production [no evidence for this] and cannibalism [no evidence for this at all] replaced only part of all those lost foods. Preserved statuettes with sunken cheeks and visible ribs suggest that people were starving [these are all historic wooden figures and reflect oral traditions about ghosts].
With the disappearance of food surpluses, Easter Island could no longer feed the chiefs [there were never chiefs], bureaucrats [there were never bureaucrats], and priests who had kept a complex [it was never “complex” in the way Diamond implies] society running. Surviving islanders described to early European visitors how local chaos replaced centralized government and a warrior class took over from the hereditary chiefs [all kinds of changes occurred after contact]. The stone points of spears and daggers, made by the warriors during their heyday in the 1600s and 1700s, still litter the ground of Easter today [There is no evidence these items were used for warfare, and indeed they would have been ineffective as such. All evidence points to their use as cultivation implements]. By around 1700, the population began to crash toward between one-quarter and one-tenth of its former number [There is no evidence for this. Diamond assumes larger numbers and then assumes they must be smaller since Europeans saw only 3000 people or so. There is no necessary reason this is the case, nor evidence to support it]. People took to living in caves for protection against their enemies [Radiocarbon and obsidian hydration dates for cave use of this kind are historic and reflect people hiding from slave raiders and whalers looking to steal people from the island]. Around 1770 [note: 1770 is the arrival of the Spanish; this is not a prehistoric event] rival clans started to topple each other’s statues, breaking the heads off. By 1864 the last statue had been thrown down and desecrated [Yes: after contact the island was plagued by disease and Europeans looking to take slaves. Thus the entire social system fell apart after contact.]
As we try to imagine [cue the dark music and shadowy foreboding] the decline of Easter’s civilization, we ask ourselves, Why didn’t they look around, realize what they were doing, and stop before it was too late? [Obvious answer: because what they were doing increased their carrying capacity and improved their lives] What were they thinking when they cut down the last palm tree? [Most likely: good riddance. We now have more land to cultivate the sweet potato we rely upon for our food and have fewer places for the introduced rat to live.]
I suspect, though, that the disaster happened not with a bang but with a whimper. After all, there are those hundreds of abandoned statues to consider. The forest the islanders depended on for rollers and rope didn’t simply disappear one day–it vanished slowly, over decades. Perhaps war interrupted the moving teams; perhaps by the time the carvers had finished their work, the last rope snapped. In the meantime, any islander who tried to warn about the dangers of progressive deforestation would have been overridden by vested interests of carvers, bureaucrats, and chiefs, whose jobs depended on continued deforestation. Our Pacific Northwest loggers are only the latest in a long line of loggers to cry, Jobs over trees! The changes in forest cover from year to year would have been hard to detect: yes, this year we cleared those woods over there, but trees are starting to grow back again on this abandoned garden site here. Only older people, recollecting their childhoods decades earlier, could have recognized a difference. Their children could no more have comprehended their parents’ tales than my eight-year-old sons today can comprehend my wife’s and my tales of what Los Angeles was like 30 years ago.
Gradually trees became fewer, smaller, and less important. By the time the last fruit-bearing adult palm tree was cut, palms had long since ceased to be of economic significance [there is no evidence to support that the palm trees were ever of major economic importance]. That left only smaller and smaller palm saplings to clear each year, along with other bushes and treelets. No one would have noticed the felling of the last small palm. [True: because they were irrelevant for human survival.]
By now the meaning of Easter Island for us should be chillingly obvious [Chilling? That people transformed their landscape to support humans?]. Easter Island is Earth writ small. Today, again, a rising population confronts shrinking resources. We too have no emigration valve, because all human societies are linked by international transport, and we can no more escape into space than the Easter Islanders could flee into the ocean. If we continue to follow our present course, we shall have exhausted the world’s major fisheries, tropical rain forests, fossil fuels, and much of our soil by the time my sons reach my current age.
Every day newspapers report details of famished countries– Afghanistan, Liberia, Rwanda, Sierra Leone, Somalia, the former Yugoslavia, Zaire–where soldiers have appropriated the wealth or where central government is yielding to local gangs of thugs. With the risk of nuclear war receding, the threat of our ending with a bang no longer has a chance of galvanizing us to halt our course. Our risk now is of winding down, slowly, in a whimper. Corrective action is blocked by vested interests, by well-intentioned political and business leaders, and by their electorates, all of whom are perfectly correct in not noticing big changes from year to year. Instead, each year there are just somewhat more people, and somewhat fewer resources, on Earth.
It would be easy to close our eyes or to give up in despair. If mere thousands of Easter Islanders with only stone tools and their own muscle power sufficed to destroy their society, how can billions of people with metal tools and machine power fail to do worse? But there is one crucial difference. The Easter Islanders had no books and no histories of other doomed societies. Unlike the Easter Islanders, we have histories of the past–information that can save us. My main hope for my sons’ generation is that we may now choose to learn from the fates of societies like Easter’s. [Yes: but what we chose to learn needs to be based on evidence. The evidence points to the fact that these people lived sustainably until contact – and we should learn from their success.] |
What is the definition for Cognitive?
of or relating to cognition; concerned with the act or process of knowing, perceiving, etc.
: cognitive development; cognitive functioning.
of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes.
What is cognitive engagement in education?
Cognitive engagement is defined as the extent to which students are willing and able to take on the learning task at hand. … Listening to a lecture is arguably the least cognitively engaging since under such circumstances there is little to no student autonomy.
What are the three types of cognitive learning?
There are three main types of learning: classical conditioning, operant conditioning, and observational learning.
How do you teach cognitive skills?
How can Teachers Teach Cognitive Skills?
- Strong Foundation. A healthy brain naturally seeks to operate as efficiently as possible.
- Repetition. With repetition, a cognitive skill can eventually become a stored routine.
- New Activities.
- Progressive Drills.
- Feedback.
Why is teacher modeling such a powerful method of teaching?
Research has showed that modeling is an effective instructional strategy in that it allows students to observe the teacher’s thought processes. Using this type of instruction, teachers engage students in imitation of particular behaviors that encourage learning.
What are the cognitive mental activities?
Let’s take a deeper dive into 13 evidence-based exercises that offer the best brain-boosting benefits.
- Have fun with a jigsaw puzzle.
- Try your hand at cards.
- Build your vocabulary.
- Dance your heart out.
- Use all your senses.
- Learn a new skill.
- Teach a new skill to someone else.
- Listen to or play music.
Is cognitive ability the same as IQ?
The term IQ, or Intelligence Quotient, generally describes a score on a test that rates your cognitive ability as compared to the general population. IQ tests are designed to measure your general ability to solve problems and understand concepts.
How is cognitive theory used in the classroom?
Examples of cognitive learning strategies include:
- Asking students to reflect on their experience.
- Helping students find new solutions to problems.
- Encouraging discussions about what is being taught.
- Helping students explore and understand how ideas are connected.
- Asking students to justify and explain their thinking.
What does Modelling mean in teaching?
Modelling is an instructional strategy in which the teacher demonstrates a new concept or approach to learning and students learn by observing. Haston (2007) Whenever a teacher demonstrates a concept for a student, that teacher is modelling.
How do you apply cognitive development in the classroom?
Applying Jean Piaget in the Classroom
- Use concrete props and visual aids whenever possible.
- Make instructions relatively short, using actions as well as words.
- Do not expect the students to consistently see the world from someone else’s point of view.
What are your cognitive skills?
Cognitive skills are the core skills your brain uses to think, read, learn, remember, reason, and pay attention. Working together, they take incoming information and move it into the bank of knowledge you use every day at school, at work, and in life.
Why is teaching Modelling important?
Effective modelling makes you a better teacher. Models are enablers – they are there to help students see what outcomes could/should look like. It allows your students to engage and succeed and it reduces your workload because common misconceptions are addressed as or before they arise.
What are examples of cognitive activities?
Examples of cognitive skills
- Sustained attention
- Selective attention
- Divided attention
- Long-term memory
- Working memory
- Logic and reasoning
- Auditory processing
- Visual processing
How do I improve my cognitive skills?
Discover five simple, yet powerful, ways to enhance cognitive function, keep your memory sharp and improve mental clarity at any age.
- Adopt a growth mindset.
- Stay physically active.
- Manage emotional well-being.
- Eat for brain health.
- Restorative sleep.
What is emotional engagement?
What is emotional engagement? Simply put, emotional engagement is a student’s involvement in and enthusiasm for school. When students are emotionally engaged, they want to participate in school, and they enjoy that participation more.
What is cognitive method of teaching?
Cognition refers to mental activity including thinking, remembering, learning and using language. When we apply a cognitive approach to learning and teaching, we focus on the understanding of information and concepts.
What are the 8 cognitive skills?
Cognitive Skills: Why The 8 Core Cognitive Capacities
- Sustained Attention. Sustained Attention is the basic ability to look at, listen to and think about classroom tasks over a period of time.
- Response Inhibition.
- Speed of Information Processing.
- Cognitive Flexibility and Control.
- Multiple Simultaneous Attention.
- Working Memory.
- Category Formation.
- Pattern Recognition.
What are the 3 cognitive learning styles?
There are three main cognitive learning styles: visual, auditory, and kinesthetic.
What is the modeling method?
It emphasizes the use of models to describe and explain physical phenomena rather than solve problems. The modeling method is unique in requiring the students to present and defend an explicit model as justification for their conclusions in every case.
What is cognitive activity?
1. High-level activities such as problem solving, decision making, and sense making that involve using, working with, and thinking with information.
What is cognitive engagement at work?
Cognitive engagement refers to employees’ beliefs about the company, its leaders and the workplace culture. The emotional aspect is how employees feel about the company, the leaders and their colleagues.
Learn Python Programming using a Step By Step Approach with 200+ code examples.
About This Video
- The course is amazing, the instructor has good knowledge in the field
- This course can be viewed with zero Python Programming Experience
Python is one of the most popular programming languages. Python offers both object-oriented and structural programming features. We love Programming. Our aim with this course is to create a love for Programming.
In more than 150 Steps, we explore the most important Python Programming Language Features
- Basics of Python Programming - Expressions, Variables, and Printing Output
- Python Operators - Python Assignment Operator, Relational and Logical Operators, Short Circuit Operators
- Python Conditionals and If Statement
- Methods - Parameters, Arguments and Return Values
- An Overview Of Python Platform
- Object-Oriented Programming - Class, Object, State and Behavior
- Basics of OOP - Encapsulation, Inheritance and Abstract Classes.
- Basics about Python Data Types
- Basics about Python Built-in Modules
- Conditionals with Python - If Else Statement, Nested If Else
- Loops - For Loop, While Loop in Python, Break and Continue
- Immutability of Python Basic Types
- Python Data Structures - List, Set, Dictionary and Tuples
- Introduction to Variable Arguments
- Basics of Designing a Class - Class, Object, State and Behavior. Deciding State and Constructors.
- Introduction to Exception Handling - Your Thought Process during Exception Handling. try, except, else and finally. Exception Hierarchy. Throwing an Exception. Creating and Throwing a Custom Exception.
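As a taste of the topics listed above, here is a small illustrative sketch (written for this summary, not taken from the course materials) that combines variables, a conditional, a loop, and a simple class with state and behavior:

```python
class Counter:
    """A minimal class showing state (count) and behavior (increment)."""

    def __init__(self, start=0):
        self.count = start  # state chosen in the constructor

    def increment(self, step=1):
        self.count += step
        return self.count


counter = Counter()
for value in range(5):      # for loop over 0..4
    if value % 2 == 0:      # conditional: only even values
        counter.increment()

print(counter.count)        # prints 3 (values 0, 2, 4 were even)
```

The class name and structure here are arbitrary; the course presumably builds up to examples like this step by step.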
All the code and supporting files for this course are available at: https://github.com/PacktPublishing/Python-Programming-for-Beginners---Learn-in-100-Easy-Steps
Downloading the example code for this course: You can download the example code files for all Packt video courses you have purchased from your account at http://www.PacktPub.com. If you purchased this course elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you. |
Dutch and Mauritian scientists made an exciting discovery - beautifully preserved, 2,000-year-old Dodo bones from about twenty of the extinct birds. The scientists are using evidence from the bones, as well as DNA analysis techniques, to discover more about the life and death of the Dodo. They now believe that Dodos were not as fat, or stupid, as was once thought and were probably well adapted to their environment. Only the arrival of Dutch settlers to Mauritius, and the animals they introduced, led to the Dodo's eventual extinction.
Children consider evidence from bones and contemporaneous drawings to make their own decisions: were Dodos well adapted to their habitat? Why did they become extinct?
Children will learn:
- That bones can provide evidence about the lives and adaptations of an extinct species
- That many extinctions happen because of human activity
These resources were initially developed in partnership with the Centre for Science Education, Sheffield Hallam University. |
v. de·scend·ed, de·scend·ing, de·scends
v.intr.
1. To move from a higher to a lower place; come or go down.
2. To slope, extend, or incline downward: "A rough path descended like a steep stair into the plain" (J.R.R. Tolkien).
3. a. To be related by genetic descent from an individual or individuals in a previous generation: He descends from Norwegian immigrants.
b. To come down from a source; derive: a tradition descending from colonial days.
c. To pass by inheritance: The house has descended through four generations.
4. To lower oneself; stoop: "She, the conqueror, had descended to the level of the conquered" (James Bryce).
5. To proceed or progress downward, as in rank, pitch, or scale: titles listed in descending order of importance; notes that descended to the lower register.
6. To arrive or attack in a sudden or overwhelming manner: summer tourists descending on the seashore village.
v.tr.
1. To move from a higher to lower part of; go down: I descended the staircase into the basement.
2. To extend or proceed downward along: a road that descended the mountain in sharp curves.
be descended from
To be related to (an ancestor) by genetic descent from an individual or individuals in a previous generation: She claims to be descended from European royalty.
[Middle English descenden, from Old French descendre, from Latin dēscendere : dē-, de- + scandere, to climb; see skand- in the Appendix of Indo-European roots.]
de·scendi·ble, de·scenda·ble adj.
The American Heritage® Dictionary of the English Language, Fifth Edition copyright ©2017 by Houghton Mifflin Harcourt Publishing Company. All rights reserved.
Early Childhood Health and Nutrition
Health and nutrition is essential to support school readiness
A child’s health can greatly impact their ability to learn. In order to ensure that each child is ready to learn it is important to meet all of their health and nutrition needs. The Early Childhood school environment is prepared to recognize any signs or symptoms the child may be experiencing and work with families to provide for individual needs. A safe environment allows children to be more independent in their play, to explore their surroundings, and learn about rules and routines to know what is safe and appropriate.
Students learn independence and self-help skills through the daily routine of handwashing, toileting and mealtimes. When children learn and practice all of these healthy habits in school, it makes it easier for them to continue them at home and for the rest of their lives.
Children learn how to properly wash their hands, which helps prevent the spread of germs and illnesses.
Children develop toileting skills at different rates. Adults support the students learning the routine of toileting and handwashing and provide assistance as needed.
Mealtime in the classroom is considered a part of the classroom day and is an integral part of the total education program. Adults model appropriate mealtime behavior and food practices. Adults facilitate the development of social skills and language skills by participating in mealtime conversations. Mealtimes provide children with opportunities for decision making, responsibility, sharing, communication, fine motor skills, eye-hand coordination, self-help skills, math skills and independence.
Meals in the PreK classroom:
Healthy foods are provided in the PreK classroom. Each day, students receive breakfast, lunch and one snack. The goal of the menu is to help introduce healthy foods to each child. Exploring new foods with their peers and teachers in the classroom can help children learn how to make healthy choices for the rest of their lives. Another key component of meal time is the practice of family style meals. Conducting family style meals appropriately in the classroom provides an interactive way for teachers to model and support healthy behaviors, and provides multiple opportunities for nutrition education.
A typical menu for a day in the classroom might look like this:
Breakfast:
- 1 oz Breakfast Bread
- ½ c Peaches
- ½ pt 1% Unflavored Milk

Lunch:
- ¼ c Teriyaki Chicken
- ¼ c Brown Rice
- ¼ c Snap Peas and Green Beans
- ¼ c Pineapple
- ½ pt 1% Unflavored Milk

Snack:
- ½ c Zucchini Coins with Veggie Dip
- ¾ oz WG Goldfish
Dental, Hearing & Vision
In the PreK classroom children also learn the importance of dental hygiene by participating in classroom activities focused on dental health and by brushing their teeth every day under the supervision of a teacher. Also, all students enrolled in the PreK program receive yearly vision and hearing screening. The program’s health and family service teams work closely with classrooms and families to provide resources and student support. |
One of the most important techniques relied upon in the life sciences is studying the sequence of a gene of interest. The technique used to determine the sequence of bases in DNA is called DNA sequencing, and it is central to current genetic technologies. The classical methods available for sequencing DNA include
- Sanger’s sequencing method (dideoxynucleotide method)
- Direct PCR Pyrosequencing
- Maxam and Gilbert sequencing
Fig 1: Sanger sequencing.
The DNA is denatured by heat or, more traditionally, cloned into the vector M13 (which is naturally single stranded). The DNA is extracted and the reaction mixture is then divided into four aliquots. Tube A contains all 4 nucleotides plus 2’,3’-dideoxyadenosine triphosphate (ddATP); similarly, tube T contains all nucleotides plus ddTTP, and so on. A dideoxynucleotide lacks a 3’-hydroxyl group and so terminates synthesis, since the polymerase can add nucleotides only to a free 3’ end.
The incorporation of a ddNTP is a random event, so the reaction produces molecules of various lengths, each terminating in the same ddNTP. The reaction products are then separated by electrophoresis (commonly on a polyacrylamide gel). The positions of the bands in each ddNTP lane indicate the sequence (See Fig 5). Under ideal conditions, sequences up to about 300 bases in length can be read from a single gel run.
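The chain-termination logic can be sketched in a few lines of Python. This is an idealized model written purely for illustration (it assumes every possible termination product forms exactly once, which real reactions only approximate statistically):

```python
# Idealized Sanger chain-termination model: one 'tube' per ddNTP,
# each tube holding the lengths of fragments that terminate in that ddNTP.

def sanger_fragments(template):
    """Return {ddNTP: [fragment lengths]}.

    The synthesized strand is complementary to the template, so a fragment
    of length n ends in the ddNTP complementary to template position n.
    """
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    tubes = {"A": [], "T": [], "C": [], "G": []}
    for position, base in enumerate(template, start=1):
        tubes[complement[base]].append(position)  # termination at this length
    return tubes

def read_gel(tubes):
    """Read the four lanes from shortest band to longest, as on a gel."""
    bands = [(length, ddntp) for ddntp, lengths in tubes.items()
             for length in lengths]
    return "".join(ddntp for _, ddntp in sorted(bands))

template = "TACGGT"
print(read_gel(sanger_fragments(template)))  # prints ATGCCA (the complement)
```

Reading the bands shortest-to-longest recovers the synthesized (complementary) strand, which is exactly how a four-lane gel is interpreted.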
Direct PCR Pyrosequencing
Fig 2: Pyrosequencing.
In this sequencing method a PCR template is hybridized to an oligonucleotide and incubated with a DNA polymerase, ATP sulphurylase, luciferase and apyrase. The first of the 4 dNTPs is added and, if incorporated, releases pyrophosphate (PPi). ATP sulphurylase converts the PPi to ATP, which luciferase then uses to convert luciferin to oxyluciferin, generating light. The overall reaction is
dNTP incorporation → PPi → ATP (via ATP sulphurylase) → luciferin → oxyluciferin + light (via luciferase)
This is followed by another round of dNTP addition. The resulting pyrogram can be used for sequence analysis. The method gives very fast sequencing results with good potential for automation, provides highly precise and accurate analysis, and avoids the problems of gel electrophoresis.
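A pyrogram can be simulated with a short sketch (an idealized model of my own, assuming perfect incorporation and a fixed A-C-G-T dispensation cycle): each dispensed dNTP produces a light signal proportional to the number of bases incorporated, so homopolymer runs give proportionally taller peaks.

```python
# Idealized pyrosequencing model: dNTPs are dispensed in a fixed cycle and
# the signal at each dispensation counts the nucleotides incorporated
# (each incorporation releases one PPi, hence one unit of light).

def pyrogram(synthesized, dispensation_order="ACGT", cycles=8):
    """Return a list of (dispensed base, signal) for the strand being built."""
    signals = []
    pos = 0
    for _ in range(cycles):
        for base in dispensation_order:
            count = 0
            while pos < len(synthesized) and synthesized[pos] == base:
                count += 1   # one PPi released -> one unit of light
                pos += 1
            signals.append((base, count))
    return signals

peaks = pyrogram("GGATC", cycles=3)
nonzero = [(b, s) for b, s in peaks if s]
print(nonzero)   # [('G', 2), ('A', 1), ('T', 1), ('C', 1)]
```

Note how the GG homopolymer appears as a single peak of height 2 rather than two separate peaks; resolving long homopolymers is a known weak point of pyrosequencing that this toy model also exhibits.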
Maxam and Gilbert sequencing:
The DNA is radiolabelled with 32P at the 5’ end of each strand, and the strands are denatured, separated and purified to give a population of labelled strands for the sequencing reactions. The next step is a chemical modification of the bases in the DNA strand. The modified bases are then removed from their sugar groups and the strands cleaved at these positions using the chemical piperidine. This creates a set of fragments known as nested fragments. These are then analysed as in Sanger’s method.
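The readout logic of Maxam–Gilbert sequencing can also be sketched in Python. This is a deliberately simplified model of my own (it pretends the four chemical reactions are perfectly base-specific and that each cleavage yields one 5'-labelled fragment, whereas the real chemistries overlap, e.g. a G reaction and a G+A reaction):

```python
# Simplified Maxam-Gilbert model: each base-specific reaction cleaves the
# 5'-labelled strand at that base, so the labelled fragment length equals
# the number of bases 5' of the cleaved position.

def cleavage_fragments(strand):
    """Return {base: labelled-fragment lengths} for each reaction lane."""
    lanes = {"A": [], "T": [], "C": [], "G": []}
    for position, base in enumerate(strand):   # cleavage removes this base
        lanes[base].append(position)           # fragment = bases before it
    return lanes

def read_lanes(lanes):
    """Reconstruct the sequence by reading bands shortest to longest."""
    bands = sorted((length, base) for base, lengths in lanes.items()
                   for length in lengths)
    return "".join(base for _, base in bands)

strand = "GATTACA"
print(read_lanes(cleavage_fragments(strand)))   # prints GATTACA
```

Unlike the Sanger sketch, the sequence read here is the labelled strand itself, because cleavage reports the position of each base directly rather than via a complementary synthesis product.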
Automated fluorescent DNA sequencing:
The method is similar to Sanger’s. Here a distinct fluorescent dye terminator for each base (instead of unlabelled ddNTPs) is used in a single reaction cuvette, which allows a single gel column to be run. The sequence is read using a laser to excite and detect the fluorescence, which reveals the base at each position. The method can be connected to various bioinformatics software tools that allow high-throughput data collection.
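Because all four terminators carry distinct dyes, base calling reduces to reading bands in size order and translating each dye color back to a base. A minimal sketch (the dye-to-base assignments below are arbitrary, not those of any real instrument):

```python
# Toy dye-terminator base caller: one lane, four dye colors, bands read
# from shortest fragment to longest.

DYE_TO_BASE = {"green": "A", "red": "T", "blue": "C", "yellow": "G"}  # assumed mapping

def call_bases(detected_peaks):
    """detected_peaks: list of (fragment_length, dye_color) from the detector."""
    ordered = sorted(detected_peaks)            # shortest fragment first
    return "".join(DYE_TO_BASE[color] for _, color in ordered)

peaks = [(3, "blue"), (1, "green"), (4, "yellow"), (2, "red")]
print(call_bases(peaks))   # prints ATCG
```

This single-lane readout is what makes the method easy to automate: one detector channel per dye replaces the four separate gel lanes of classical Sanger sequencing.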
Needless to say, the first-generation "Sanger sequencing" method is still considered the gold standard for resolving issues such as SNPs (single nucleotide polymorphisms). With sequencing methods automated and access to better technology, the human genome project, once a mammoth undertaking, is now feasible technology. But the method remains cost-prohibitive for the common laboratory, so science seeks newer methods that can bring the cost below $1,000 per human genome.
Many new techniques that rely on the basic DNA replication machinery have now been introduced by companies. Below I will discuss a couple of the ones that have become common.
SMRT (pronounced as smart), stands for Single Molecule Real Time sequencers. This technology is a product of Pacific Biosciences corporation. The technology uses the same biological process that a natural system uses.
Fig 3: Phi29 polymerase.
The first requirement is a DNA polymerase. The polymerase used here is φ29 polymerase, derived from the bacteriophage Φ29, a natural attacker of B. subtilis. This polymerase has exceptional strand displacement and an inherent 3'-5' proofreading exonuclease activity, qualities that have made it useful in genetic studies. The enzyme is obtained in industrial quantities by cloning the gene into E. coli.
The second component of this technology is the set of special fluorescent nucleotides. Nucleotides are the basic structural units of DNA and RNA. In this case the four nucleotides (A, T, C, G) are labelled with different fluorescent colors. The speciality is that they are γ-labelled dNTPs; labelling the terminal phosphate is deliberate. In normal replication, DNA polymerase cleaves the α-β phosphoryl bond upon incorporating a nucleotide into DNA, releasing the pyrophosphate leaving group and the attached fluorescent label. So upon cleavage the γ-labelled fluorescent molecule is free to diffuse away without affecting the growing strand.
Of note, the fluorophore is attached to the nucleotide by linkers. The attachment is chemically cleavable, so the dye can be detached from the DNA after it has been detected; this removes noise and enhances the assay. One point worth noting is that extending the triphosphate moiety to four or five phosphates can increase incorporation efficiency (Reference). However, to the best of my understanding this idea has not been used in the technology.
Fig 4: ZMW cells. Source
The third component of this system is the reaction chamber embedded in a chip. The chip is a glass cover slip with an approximately 100 nm-thick layer of aluminum deposited on top. In this plate is an array of cylindrical wells, each 70-100 nm in diameter. The aluminum is chemically treated so that polymerase molecules stick to the glass at the bottom of each well rather than to the sides. Each well is designed to hold a single polymerase molecule, and the glass bottom is designed for ease of imaging.
These special reaction cells are otherwise referred to as ZMWs (zero-mode waveguides), a product of nanotechnology. Each chamber holds no more than a few atto- or zeptoliters, i.e., volumes in the range of 10^-18 to 10^-21 liters. They call it microfluidics, but I think I had better call it zeptofluidics. This permits the use of extremely low sample volumes.
A problem often encountered in whole-genome sequencing is the difficulty of sequencing very long molecules. It is simply not practical to sequence the whole set of chromosomes, or even a single chromosome, in one read, owing to technical constraints. The fastest way around the problem is the "shotgun" approach: break the whole genome into fragments, sequence each bit, and then realign the sequences with a computer program using unique overlapping matches.
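As a toy illustration of the shotgun idea (not the algorithm any particular vendor or assembler actually uses), a greedy overlap merge can reassemble short fragments whose ends uniquely match:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b,
    requiring at least min_len bases to count as a real overlap."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(frags):
    """Repeatedly merge the pair of fragments with the largest overlap
    until no pair overlaps, mimicking shotgun reassembly in miniature."""
    frags = list(frags)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # remaining fragments share no overlap
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags

# Three overlapping reads of the made-up genome ATGCGTACGTTAGC:
print(greedy_assemble(["ATGCGTAC", "GTACGTTA", "GTTAGC"]))
```

Real assemblers must additionally handle sequencing errors, repeats longer than the read length, and both DNA strands, which is why long reads (as in SMRT) make assembly so much easier.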
So, this is how the technology works. The first step is sample preparation: genomic DNA is obtained and broken into fragments, and each fragment is loaded into a reaction chamber. The reaction starts with the DNA polymerase, which unwinds the DNA and incorporates the correct nucleotide; the γ-phosphate is released along with its fluorescent dye. Since the detection volume holds only one dNTP molecule at a time, each of the four colors yields an essentially binary signal of 1 or 0, so the plot shows square peaks rather than the usual triangular peaks we are used to.
By simultaneously running a very large set of reactions across a complete chip, the full genome is sequenced at very high speed. The method also avoids running a gel, and the sequence is obtained in real time. Since the dye is cleaved and released after each incorporation step, background noise is reduced. Remember, the detection volume holds only one molecule of dNTP at a time.
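Conceptually, each ZMW well yields a time-ordered stream of fluorescent pulses that is decoded into bases. The sketch below is illustrative only: the color-to-base mapping and the duration threshold are made-up values, not PacBio's actual chemistry, though discriminating true incorporations from brief diffusion events by pulse duration is the real principle.

```python
# Hypothetical dye assignment (illustrative, not the vendor's actual scheme).
DYE_TO_BASE = {"red": "A", "green": "C", "blue": "G", "yellow": "T"}

def decode_pulses(pulses, min_ms=10.0):
    """Translate time-ordered (color, duration-in-ms) pulses from one ZMW
    well into base calls. Very short pulses are treated as background from
    dye molecules diffusing through the detection volume, not as
    incorporations, and are discarded."""
    return "".join(color_base
                   for color, ms in ((c, m) for c, m in pulses if m >= min_ms)
                   for color_base in DYE_TO_BASE[color])

# A long red pulse, a 2 ms background blip, then a long blue pulse:
print(decode_pulses([("red", 50.0), ("green", 2.0), ("blue", 40.0)]))
```

Because only an incorporated nucleotide is held by the polymerase long enough to emit a sustained pulse, this duration filter is what lets a single molecule be read in real time against a background of free dye-labelled nucleotides.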
As a proof of concept, the company sequenced the E. coli O104:H4 strain with an accuracy of 99.9%, with some sources claiming 99.9999%. This level of accuracy is unheard of in any other first- or second-generation sequencer.
Dr. Schadt comments "The ability to sequence the outbreak strain with reads averaging 2,900 base pairs and our longest reads at over 7,800 bases, combined with our circular consensus sequencing to achieve high single molecule accuracy with a mode accuracy distribution of 99.9%, enabled us to complete a PacBio-only assembly without having to construct specialized fosmid libraries, perform PCR off the ends of contigs, or other such techniques that are required to get to similar assemblies with second generation DNA sequencing technologies." And it took them less than 8 hrs on average to complete sequencing.
Ion Proton Sequencer
Next, let's discuss another sequencer: the Ion Proton Sequencer. The technology comes to the laboratory bench from Life Technologies and is loosely referred to as semiconductor sequencing. The system was unveiled to the world of science in January 2012 with the promise of a $1,000 full-genome sequence in a single day. And by the way, Jonathan Rothberg is considered the innovator of this technology.
Photo 1: Ion Proton™ Sequencer
The technology is based on a semiconductor chip filled with millions of sensors. I don't have exact official figures, but I gather from reliable sources that the Proton I chip carries 165 million sensors and the Proton II 660 million. The sensors are built on complementary metal-oxide semiconductor (CMOS) technology.
CMOS is actually a technology used in manufacturing computer microchips, usually engineered from two semiconductors, silicon and germanium. The same technology is used in digital cameras, but here, instead of sensing light, the sensor detects a change in pH. What's a pH meter got to do here? The technology makes use of a simple principle of natural DNA replication: each time a nucleotide is successfully incorporated into a growing DNA strand, a hydrogen ion is released. For those wondering where this proton comes from: nucleic acids are acidic because of their phosphate groups, which can act as hydrogen ion donors, and a proton is released during nucleotide incorporation.
Photo 2: Semi conductor chip
So, take a human genome, prepare a DNA library, load it onto the sequencing chip, and each sensor acts as a well for the reaction. When the chip is flooded with nucleotides, the correct match is incorporated and the resulting change in pH is recorded and converted to digital data. A powerful computer program integrates all the data and, bingo, you have the sequence.
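The flood-and-detect cycle described above can be sketched as a toy simulation. The flow order below and the assumption of a perfectly linear homopolymer signal (k incorporations release exactly k protons' worth of signal) are illustrative simplifications; real instruments must correct for signal droop and inexact homopolymer responses.

```python
FLOW_ORDER = "TACG"  # nucleotides are flooded over the chip in a fixed cycle

def ion_signals(template, n_flows):
    """Per-flow H+ signal for one well: each incorporation releases one
    proton, so a homopolymer of length k gives a k-fold pH change, and a
    flow that matches nothing gives zero signal."""
    pos, signals = 0, []
    for f in range(n_flows):
        base = FLOW_ORDER[f % len(FLOW_ORDER)]
        run = 0
        while pos < len(template) and template[pos] == base:
            run += 1
            pos += 1
        signals.append(run)
    return signals

def call_bases(signals):
    """Invert the per-flow signals back into a base sequence."""
    out = []
    for f, k in enumerate(signals):
        out.append(FLOW_ORDER[f % len(FLOW_ORDER)] * k)
    return "".join(out)

# TTACGG: the TT homopolymer shows up as a double-height signal in flow 1.
sig = ion_signals("TTACGG", 8)
print(sig, call_bases(sig))
```

Note the family resemblance to pyrosequencing: both count incorporations per nucleotide flow, and only the transducer differs, a pH sensor here versus a luciferase light signal there.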
If I had to say anything more about this technology, I would say it works much the same as SMRT does; the difference lies in the method of detection, which here is based on pH change.
The battle to make it to the top between the Ion sequencer and its rival Illumina is evident. Just a few days before the Proton was announced, the HiSeq 2500 was released. There isn't enough conclusive evidence on which is actually better, but the scientific community is more inclined toward the Ion Proton as it costs less ($740,000 vs $150,000). At a smaller scale, a modified version known as the Ion Personal Genome Machine (PGM) Sequencer competes with Illumina's mini version (MiSeq); again cost is an important factor ($50,000 vs $100,000 per machine). (Source)
"DNA sequencing is going to affect everything," says Rothberg, predicting it will become a $100 billion industry. "This is biology's century, just as physics was the foundation of the last century." (Source). They also argue it is better than other technologies such as nanopore sequencing (Link).
Fig 5: dNTP used in SBS
Next, let's talk about the Illumina/Solexa sequencer. The sequencing approach is called "sequencing by synthesis" (SBS). This technology is the brainchild of two Cambridge scientists, Shankar Balasubramanian and David Klenerman, who studied the movement of the polymerase enzyme using fluorescent dye-labelled nucleotides. Based on their experience in sequencing projects and their studies of the polymerase, they proposed massively parallel sequencing of short reads using solid-phase sequencing with reversible terminators as the basis of a new DNA sequencing approach, which came to be known as SBS. Over the years the technology was developed, and a successful Solexa prototype was launched as a commercial sequencing instrument. A detailed history can be found here.
So how does the technology work? It is similar in spirit to the Sanger method; the difference lies in the use of modified dNTPs with terminators. 3'-O-fluorophore-labelled nucleotides were synthesized and used as reversible terminators of DNA polymerization. The reversible terminator ensures that only one nucleotide can be incorporated per step. After the template is flooded with nucleotides and the binding step is complete, the unincorporated reagents are washed away. The terminator carries a fluorescent tag that allows it to be detected with specific cameras; since only one fluorescent color is used, detecting the four nucleotides requires four separate tubes. In the second step the terminators are removed by a chemical reaction, which also removes the fluorescent tag, and the cycle is repeated.
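The incorporate-image-cleave cycle can be sketched as a toy per-cluster simulation. Idealized, error-free chemistry is assumed: every cluster incorporates exactly one blocked nucleotide per cycle, is imaged, and is then deblocked for the next cycle.

```python
def sbs_reads(clusters, n_cycles):
    """Simulate sequencing-by-synthesis with reversible terminators:
    each cycle extends every cluster by exactly one base (the 3'-O block
    prevents a second incorporation), the fluorophore is imaged, and the
    terminator plus dye are then chemically cleaved before the next cycle."""
    reads = ["" for _ in clusters]
    for cycle in range(n_cycles):
        for i, template in enumerate(clusters):
            if cycle < len(template):
                reads[i] += template[cycle]  # one incorporation, then imaging
            # terminator and fluorescent tag cleaved here; cycle repeats
    return reads

# Two clusters read in parallel for three cycles:
print(sbs_reads(["ACGT", "TTAG"], 3))
```

The key contrast with pyrosequencing and ion semiconductor sequencing is visible in the loop: here homopolymers pose no problem, because the terminator enforces one base per cycle regardless of the template, and read length is instead limited by how many wash-image-cleave cycles the chemistry survives.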
The technology claims read lengths of nearly 50 bases for fragment libraries and 36 bases for mate-paired libraries, with a raw base-calling accuracy of 98.5% (the source is probably outdated; I couldn't find newer figures).
Of course, more varieties are coming up in research, and some new techniques are in test mode. But I believe this post has given you a basic idea of some of the sequencers that have become the talk of DNA. One technology now gaining popularity is "pore-fection", commonly known as nanopore sequencing; the technique is still under development. Please refer to my previous post on pore-fection for more information (Link). It is emerging as a potent, cost-effective technology. Probably in a few years we will have machines that sequence a human genome in less than an hour for less than $100. And that will be a true genetic age of lab science.
- Chun-Xiao Song et al. Sensitive and specific single-molecule sequencing of 5-hydroxymethylcytosine. Nature Methods 9, 75–77. doi:10.1038/nmeth.1779. Link
- Paul Zhu and Harold G. Craighead. Zero-Mode Waveguides for Single-Molecule Analysis. Annual Review of Biophysics. June 2012; Vol. 41: 269–293. doi:10.1146/annurev-biophys-050511-102338. Link
- Christian Castro et al. Two proton transfers in the transition state for nucleotidyl transfer catalyzed by RNA- and DNA-dependent RNA and DNA polymerases. PNAS March 13, 2007; vol. 104, no. 11, 4267–4272. Link
- Democratizing DNA sequencing by reducing time, cost and informatics bottleneck. Link
- aek-Soo Kim et al. Novel 3′-O-Fluorescently Modified Nucleotides for Reversible Termination of DNA Synthesis. ChemBioChem; January 4, 2010. Volume 11, Issue 1, pages 75–78. Link
- Luo C, Tsementzi D, Kyrpides N, Read T, Konstantinidis KT (2012) Direct Comparisons of Illumina vs. Roche 454 Sequencing Technologies on the Same Microbial Community DNA Sample. PLoS ONE 7(2): e30087. doi:10.1371/journal.pone.0030087.
History of the Middle East Midterm
A. Short Answers: For each, provide identifying clues (Who/What/Where/When); Why the term is important to the country(countries) it is related to; and Why the term is important to our course. The best responses will be a paragraph in length.
1. British Mandate
2. King Hussein
3. Lebanon’s National Pact of 1943
4. Hezbollah and Hamas
5. Persian Gulf War
6. Osama Bin Laden
7. Ayatollah Khomeini
8. Bashar Al Assad
9. Al Qaeda
10. September 11th attacks
11. Iranian Hostage Crisis
14. Saddam Hussein
15. The Taliban
B. Essays: The best essays will be at least five pages in length and will have an introduction, supporting paragraphs, and conclusion. You should also make reference to the textbook, the book Islam, and discussions (including your articles) where appropriate.
1. In the second half, we discussed Syria, Lebanon, Iran, Iraq, Jordan, the Maghreb (Algeria, Tunisia, and Morocco), along with terrorist groups ISIS, Al Qaeda, and the Taliban. How do you characterize the stability of each nation? For each, provide a brief synopsis of the major issues facing the country, and how their history throughout the twentieth century factors into the present challenges. Which country is the most stable? Why? Which is the most unstable? Why? How does Islam impact the countries? How has terrorism impacted each?
2. What is the future of the Middle East? What are the most important developments to keep an eye on for the future? What are the three most important things you’ve learned this semester? If you were teaching History of the Middle East, what would you emphasize in your course? |
Earlier, we saw that the area of a shape is calculated, in general, by multiplying its length and breadth. Shapes such as triangles, squares, etc., are two-dimensional: they lie in a plane. Objects that are three-dimensional, i.e., that cannot be represented in a plane, are called solids.
Some examples of solids are a cube, rectangular prism, triangular prism, cylinder and cone, as shown below.
The surface area of a solid with plane faces is the sum of the areas of its faces.
The calculation of a surface area can be made easier by knowing whether several of the faces have the same area. It is also necessary to consider whether the figure is open or closed, such as a swimming pool.
Calculating surface area is important in many trades like painting, decorating, tiling, layering, etc. Tradespeople need to calculate (or estimate) the total surface area involved in a job so that they can decide on the fee to be charged, and the quantity of materials needed.
For example, a painter is required to quote for a job to paint a room. He will first find the dimensions of the room, then work out the area of each wall to get the total ‘surface area’ of the walls to be painted. He may then be required to work out the amount of paint to purchase, the time required to complete the job, and hence his cost.
We now look at calculating the surface areas of different solids: |
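As a minimal sketch of the idea, the standard formulas for a rectangular prism and a cylinder can be written in code (Python, for illustration; the `open_top` option handles open figures such as the swimming pool mentioned earlier by leaving out the top face):

```python
import math

def prism_area(l, w, h, open_top=False):
    """Surface area of a rectangular prism: 2(lw + lh + wh).
    For an open figure like a swimming pool, subtract the top face."""
    area = 2 * (l * w + l * h + w * h)
    return area - l * w if open_top else area

def cylinder_area(r, h):
    """Closed cylinder: two circular ends plus the curved surface,
    2*pi*r^2 + 2*pi*r*h."""
    return 2 * math.pi * r * r + 2 * math.pi * r * h

# A 2 x 3 x 4 box: closed 52 square units, open-topped 46.
print(prism_area(2, 3, 4), prism_area(2, 3, 4, open_top=True))
print(round(cylinder_area(1, 1), 2))
```

A tradesperson quoting a job would plug measured dimensions into formulas like these, then multiply the total area by a cost or coverage rate per square unit.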
Introduced insect fauna of an oceanic archipelago: The Galapagos Islands, Ecuador
Oceanic islands are susceptible to invasion by exotic species of plants and animals that are introduced either intentionally or unintentionally by human action. Most tropical oceanic islands now have insect faunas that have changed markedly since their discovery by humans. The changes occurred with the introduction of foreign species by aboriginal peoples and later by colonization activities of Europeans (Carlquist 1965, 1974). For instance, the Hawaiian Islands now have more than 3,200 alien species of arthropods (Howarth 1990) and 2,621 species of introduced insects. Approximately 500 of these insects can be classed as pests (Beardsley 1991). More than 416 insect species were introduced intentionally (Nishida 1994), and it now is difficult to find indigenous insect species in most lowland areas of the Hawaiian islands. The faunal change in almost all tropical island insect faunas occurred before scientific inventories could document the processes and stages of change.
Peck, S, Heraty, J. (John), Landry, B. (Bernard), & Sinclair, B.J. (Bradley J.). (1998). Introduced insect fauna of an oceanic archipelago: The Galapagos Islands, Ecuador. American Entomologist, 44(4), 218–237. doi:10.1093/ae/44.4.218 |
One of the most fundamental shapes in geometry is the triangle, and it has various properties attached to it. Triangles come in different forms, and the angles vary with the shape, but the interior angles (the angles at the vertices) always sum to exactly 180 degrees. For triangles you also need to study the triangle inequality theorem: pick any two sides, and the sum of their lengths must be greater than the length of the third side. Beyond that, you need to understand the relationship between the sides and the angles: the largest interior angle is always opposite the longest side, and likewise the smallest angle is opposite the shortest side and the middle-sized angle is opposite the remaining side. College students are expected to know how to write a coursework, since they might be asked to write on the angles of a triangle and the theorems that apply to them.
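The triangle inequality described above is easy to check mechanically; here is a small sketch (Python, for illustration):

```python
def is_valid_triangle(a, b, c):
    """Triangle inequality theorem: every pair of sides must sum to
    strictly more than the remaining side, otherwise the three lengths
    cannot close up into a triangle."""
    return a + b > c and b + c > a and a + c > b

# 3-4-5 closes into a triangle; 1-2-3 collapses onto a line (1 + 2 = 3).
print(is_valid_triangle(3, 4, 5), is_valid_triangle(1, 2, 3))
```

Note that the degenerate case, where two sides sum to exactly the third, fails the strict inequality: the "triangle" would be a flat line segment.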
The following section of the article will talk about the differences between interior and exterior angles with the help of holistic explanation of each of the concepts. Once we are done with that, we will then proceed towards the type of triangles and explain each one to you on the basis of the length of the sides and the angles that each side makes with its adjacent sides.
First of all, you need to know the difference between interior and exterior angles. Although the terms are self-explanatory, we will explain them in detail. Consider a triangle of any size and shape. By definition it has exactly three sides, and each vertex forms an angle, so there are three angles. The angles inside the triangle are called interior angles, and the ones outside it are called exterior angles. But how exactly is an exterior angle formed? The answer is quite simple: extend one side of the triangle a little, and an angle forms between the new line and the side adjacent to the one you extended. This is the exterior angle, and it varies according to which side is extended.
So what do we get when we sum all three angles of a triangle? The result is 180 degrees, and this holds for every form of triangle. If one interior angle of a triangle measures 90 degrees, the other two must sum to the remaining 90 degrees so that the total stays 180. College students are often asked to write a thesis definition explaining the relationship between the sides and the angles of a triangle, which happens to be one of the most crucial topics for a college thesis. There are many agencies that provide paper writing services to customers, specializing in everything from theses to Turabian papers, at prices that vary with the agency's popularity and experience.
A triangle can take any shape depending upon the lengths of its sides. But if we classify triangles by their sides, there are basically three types.
First, the isosceles triangle: two of its sides are always the same length and form equal angles with the third side, so two of its angles have the same value. Next, the equilateral triangle: all three sides are equal to each other, which means the angles are equal as well, so each interior angle measures 60 degrees. Finally, the scalene triangle: all three sides differ from each other, which means all three interior angles differ too.
These are the three types of triangles that exist, and problems about triangles build on them. Many theorems are based on triangles, and as you go deeper into the subject you will come across theorems that improve your understanding of the topic altogether. Triangles are considered one of the most fascinating topics in geometry owing to these theorems, and many thesis topics can be generated around them. Your approach to problems involving triangles should be holistic and well researched, since there are many factors to keep in mind. You might be asked to prove theorems, and the reasoning you give has to be justifiable enough that the reviewer grasps the crux of the concept. There is still a great deal of research being done on the angles of a triangle, as there is plenty of scope. You need to understand the relationship between the different types of triangles and their sides and angles, especially if you are in college with mathematics as your major, in which case you may have to do a lot of research while drafting an essay on a triangle-related topic.
When it comes to writing a perfect essay, there are lots of things that you need to keep in mind in order to ensure that the essay is perfect according to the format as well as the vocabulary. Especially when you are drafting a thesis paper on a complex geometry-related topic, then you will have to follow a proper structured-approach towards the topic in order to make it easier for the readers to comprehend what you are trying to explain to them. As college students, we are bound to come across essay-based assignments which would require lots and lots of research in order to get the perfect content for the essay.
But some students rely on creative writing agencies that specialize in quality research on any topic. All you have to do is give them a topic, and they take care of the rest. These agencies usually hire well-educated people, such as college professors, who can help students with their essays. Ideally, a reputable agency will not write the essay for you outright; it will stay with you throughout the assignment and guide you toward writing your own essay. This is the more recommended approach, since you develop your own writing and analytical skills in the process of drafting your own essay, which is highly encouraged.
But you need to be very cautious about the agencies you deal with. Before hiring someone to help with your essay, do your research and go through the feedback the agency has received from previous customers. The last thing you want is an agency wasting your time, which will show in the marks you score on the essay. Hence, be prepared to pay more if you want a better essay. An average agency might finish the task for you and even offer a free essay as a delightful starter, but do not get carried away, since the one thing you should never compromise on is the quality of the essay, whatever the topic. Choose your agency wisely and ensure that its people are capable enough to help you. Value for money is something we always look for, so start by searching for such agencies in your area, reach out to them, ask for a demo, and chart out a plan of approach for your essay assignment. If all goes well, your work will outshine the rest.
We know that interior angles are the ones inside the triangle and that the sum of all interior angles is 180 degrees. Thus, at any given time only one of the angles can exceed 90 degrees, because the other two have to add up to the remaining value. There is also the concept of the incenter, which we have not discussed yet. If you bisect each of the three interior angles, the three bisectors meet at a single point called the incenter. Now, if you draw the circle that touches all three sides of the triangle, the center of that circle is the same point as the incenter.
One more point to note is that the interior angles only add up to 180 degrees if the triangle is planar. For those who do not know what planar means, it is any shape that lies in a flat plane. If a triangle lies on a curved surface, its angles might not add up to 180 degrees. As for exterior angles, extending one side of a triangle produces an exterior angle whose value equals the sum of the two interior angles not adjacent to it.
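The angle-sum rule and the exterior-angle relationship just described can be expressed in a few lines (Python, for illustration; all angles in degrees, planar triangles assumed):

```python
def third_angle(a, b):
    """Interior angles of a planar triangle sum to 180 degrees,
    so two of them determine the third."""
    return 180 - a - b

def exterior_angle(a, b):
    """The exterior angle at a vertex equals the sum of the two interior
    angles not adjacent to it (equivalently, 180 minus the adjacent one)."""
    return a + b

# A right triangle with a 45-degree angle: the third angle is 45,
# and the exterior angle at that third vertex is 90 + 45 = 135.
print(third_angle(90, 45), exterior_angle(90, 45))
```

The two functions are two views of the same fact: `exterior_angle(a, b)` is always `180 - third_angle(a, b)`, because an interior angle and its exterior angle lie on a straight line.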
Slope and Rate of Change

In this video, you will learn about slope and rate of change. There are four types of slope: positive (rising), negative (falling), zero slope, and no slope. To find the slope, take any two coordinates on a line and substitute their x and y values into the formula. Rate of change is the relationship between the x and y values of the given coordinates: you are looking for the change in y over the change in x. Thanks for watching this video, and subscribe for more!
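The formula described in the video, change in y over change in x, can be sketched as follows (Python, for illustration); the four slope types come out as a positive number, a negative number, zero, or undefined for a vertical line:

```python
def slope(p1, p2):
    """Rate of change between two points (x1, y1) and (x2, y2):
    (y2 - y1) / (x2 - x1). Returns None for a vertical line, where
    the slope is undefined ('no slope')."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        return None  # vertical line: change in x is zero
    return (y2 - y1) / (x2 - x1)

print(slope((0, 0), (2, 4)))   # positive (rising)
print(slope((0, 3), (3, 0)))   # negative (falling)
print(slope((0, 5), (4, 5)))   # zero slope (horizontal)
print(slope((2, 0), (2, 7)))   # no slope (vertical)
```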
The following stages are used for thymoma:
In stage I, cancer is found only within the thymus. All cancer cells are inside the capsule (sac) that surrounds the thymus.
In stage II, cancer has spread through the capsule and into the fat around the thymus or into the lining of the chest cavity.
In stage III, cancer has spread to nearby organs in the chest, including the lung, the sac around the heart, or large blood vessels that carry blood to the heart.
Stage IV is divided into stage IVA and stage IVB, depending on where the cancer has spread.
In stage IVA, cancer has spread widely around the lungs and heart.
In stage IVB, cancer has spread to the blood or lymph system.
Thymic carcinomas have usually spread to other parts of the body when diagnosed.
The staging system used for thymomas is sometimes used for thymic carcinomas.
Various tests and procedures are used to diagnose thymoma and thymic carcinoma.
Four types of standard treatment are used for thymoma and thymic carcinoma: surgery, chemotherapy, hormone therapy, and radiation therapy.
By J. W. WHITE, JR. AND LANDIS W. DONER(1)
BEEKEEPING IN THE UNITED STATES
AGRICULTURE HANDBOOK NUMBER 335
Revised October 1980
Pages 82 – 91
Honey is essentially a highly concentrated water solution of two sugars, dextrose and levulose, with small amounts of at least 22 other more complex sugars. Many other substances also occur in honey, but the sugars are by far the major components. The principal physical characteristics and behavior of honey are due to its sugars, but the minor constituents – such as flavoring materials, pigments, acids, and minerals – are largely responsible for the differences among individual honey types.
Honey, as it is found in the hive, is a truly remarkable material, elaborated by bees with floral nectar, and less often with honeydew. Nectar is a thin, easily spoiled sweet liquid that is changed (“ripened”) by the honey bee to a stable, high-density, high-energy food. The earlier U.S. Food and Drug Act defined honey as “the nectar and saccharine exudation of plants, gathered, modified, and stored in the comb by honey bees (Apis mellifera and A. dorsata); is levorotatory; contains not more than 25% water, not more than 0.25% ash, and not more than 8% sucrose.” The limits established in this definition were largely based on a survey published in 1908. Today, this definition has an advisory status only, but is not totally correct, as it allows too high a content of water and sucrose, is too low in ash, and makes no mention of honeydew.
Colors of honey form a continuous range from very pale yellow through ambers to a darkish red amber to nearly black. The variations are almost entirely due to the plant source of the honey, although climate may modify the color somewhat through the darkening action of heat.
The flavor and aroma of honey vary even more than the color. Although there seems to be a characteristic “honey flavor,” almost an infinite number of aroma and flavor variations can exist. As with color, the variations appear to be governed by the floral source. In general, light-colored honey is mild in flavor and a darker honey has a more pronounced flavor. Exceptions to the rule sometimes endow a light honey with very definite specific flavors. Since flavor and aroma judgments are personal, individual preference will vary, but with the tremendous variety available, everyone should be able to find a favorite honey.
(1) Research leader and research chemist, respectively, Science and Education Administration, Eastern Regional Research Center, Philadelphia, Pa. 19118.
Composition of Honey
By far, the largest portion of the dry matter in honey consists of the sugars. This very concentrated solution of several sugars results in the characteristic physical properties of honey – high viscosity, “stickiness,” high density, granulation tendencies, tendency to absorb moisture from the air, and immunity from some types of spoilage. Because of its unique character and its considerable difference from other sweeteners, chemists have long been interested in its composition and food technologists sometimes have been frustrated in attempts to include honey in prepared food formulas or products. Limitations of methods available to earlier researchers made their results only approximate in regard to the true sugar composition of honey. Although recent research has greatly improved analytical procedures for sugars, even now some compromises are required to make possible accurate analysis of large numbers of honey samples for sugars.
An analytical survey of U.S. honey is reported in Composition of American Honeys, Technical Bulletin 1261, published by the U.S. Department of Agriculture in 1962. In this survey, considerable effort was made to obtain honey samples from all over the United States and to include enough samples of the commercially significant floral types that the results, averaged by floral type, would be useful to the beekeeper and packer and also to the food technologist. In addition to providing tables of composition of U.S. honeys, some general conclusions were reached in the bulletin on various factors affected by honey composition.
Where comparisons were made of the composition of the same types of honey from 2 crop years, relatively small or no differences were found. The same was true for the same type of honey from various locations. As previously known, dark honey is higher than light honey in ash (mineral) and nitrogen content. Averaging results by regions showed that eastern and southern honeys were darker than average, whereas north-central and intermountain honeys were lighter. The north-central honey was higher than average in moisture, and the intermountain honey was more heavy bodied. Honey from the South Atlantic States showed the least tendency to granulate, whereas the intermountain honey had the greatest tendency.
The technical bulletin includes complete analyses of 490 samples of U.S. floral honey and 14 samples of honeydew honey gathered from 47 of the 50 States and representing 82 “single” floral types and 93 blends of “known” composition. For the more common honey types, many samples were available and averages were calculated by computer for many floral types and plant families. Also given in this bulletin are the average honey composition for each State and region and detailed discussions of the effects of crop year, storage, area of production, granulation, and color on composition. Some of the tabular data are included in this handbook.
Table 1 gives the average value for all of the constituents analyzed in the survey and also lists the range of values for each constituent. The range shows the great variability of all honey constituents. Most of the constituents listed are familiar. Levulose and dextrose are the simple sugars making up most of the honey; fructose and glucose are other commonly used names for these sugars. Sucrose (table sugar) is also present in honey and is one of the main sugars in nectar, along with levulose and dextrose. “Maltose” is actually a mixture of several complex sugars, which are analyzed collectively and reported as maltose. “Higher sugars” is a more descriptive term for the material formerly called honey dextrin.
The undetermined value is found by adding all the sugar percentages to the moisture value and subtracting the total from 100. The active acidity of a material is expressed as pH; the larger the number, the lower the active acidity. The lactone is a newly found component of honey. Lactones may be considered a reserve acidity, since chemically adding water to them (hydrolysis) forms an acid. The ash is, of course, the material remaining after the honey is burned and represents mineral matter. The nitrogen is a measure of the protein material, including the enzymes, and diastase is a specific starch-digesting enzyme.
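The undetermined value described above is simple arithmetic; a minimal sketch, using illustrative stand-in percentages rather than values from table 1:

```python
def undetermined(moisture_pct, sugar_pcts):
    """Undetermined fraction = 100 - (moisture + sum of sugar percentages)."""
    return round(100.0 - (moisture_pct + sum(sugar_pcts)), 2)

# Illustrative values (percent): levulose, dextrose, sucrose, "maltose", higher sugars
sugars = [38.2, 31.3, 1.3, 7.3, 1.5]
print(undetermined(17.2, sugars))  # → 3.2
```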
Most of these constituents are expressed in percent, that is, parts per hundred of honey. The acidity is reported differently. In earlier times, acidity was reported as percent formic acid. We now know that there are many acids in honey, with formic acid being one of the least important. Since a sugar acid, gluconic acid, has been found to be the principal one in honey, these results could be expressed as “percent gluconic acid” by multiplying the numbers in the table by 0.0196. Since actually there are many acids in honey, the term “milliequivalents per kilogram” is used to avoid implying that only one acid is found in honey. This figure is such that it properly expresses the acidity of a honey sample independently of the kind or kinds of acids present.
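The 0.0196 factor mentioned above follows from the molecular weight of gluconic acid (about 196 g/mol, with one acid group per molecule); a small sketch, where the 29 meq/kg input is an assumed value chosen only for illustration:

```python
GLUCONIC_MG_PER_MEQ = 196.16  # gluconic acid is monoprotic: 1 meq = 196.16 mg

def percent_gluconic(meq_per_kg):
    """Express total acidity (meq/kg) as 'percent gluconic acid'."""
    mg_per_kg = meq_per_kg * GLUCONIC_MG_PER_MEQ
    return mg_per_kg / 10_000  # 1 percent of a kilogram is 10,000 mg

print(round(percent_gluconic(29), 3))  # → 0.569, i.e. 29 × 0.0196
```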
In table 1, the differences between floral honey and honeydew honey(2) can be seen. Floral honey is higher in simple sugars (levulose and dextrose), lower in disaccharides and higher sugars (dextrins), and contains much less acid. The higher amount of mineral salts (ash) in honeydew gives it a less active acidity (higher pH). The nitrogen content, reflecting the amino acid and protein content, is also higher in honeydew.
The main sugars in the common types of honey are shown in table 2. Levulose is the major sugar in all the samples, but there are a few types, not on the list, that contain more dextrose than levulose (dandelion and the blue curls). This excess of levulose over dextrose is one way that honey differs from commercial invert sugar. Even though honey has less dextrose than levulose, it is dextrose that crystallizes when honey granulates, because it is less soluble in water than is levulose. Even though honey contains an active sucrose-splitting enzyme, the sucrose level in honey never reaches zero.
Honey varies tremendously in color and flavor, depending largely on its floral source. Its composition also varies widely, depending on its floral sources (table 2). Although hundreds of kinds of honey are produced in this country, only about 25 or 30 are commercially important and available in large quantities. Until the comprehensive survey of honey composition was published in 1962, the degree of compositional variation was not known. This lack of information hindered the widespread use of honey by the food industry.
The natural moisture of honey in the comb is that remaining from the nectar after ripening. The amount of moisture is a function of the factors involved in ripening, including weather conditions and original moisture of the nectar. After extraction of the honey, its moisture content may change, depending on conditions of storage. It is one of the most important characteristics of honey influencing keeping quality, granulation, and body.
Beekeepers as well as honey buyers know that the water content of honey varies greatly. It may range between 13 and 25 percent. According to the United States Standards for Grades of Extracted Honey, honey may not contain more than 18.6 percent moisture to qualify for U.S. grade A (U.S. Fancy) and U.S. grade B (U.S. Choice). Grade C (U.S. Standard) honey may contain up to 20 percent water; any higher amount places a honey in U.S. grade D (Substandard).
These values represent limits and do not indicate the preferred or proper moisture content for honey. If honey has more than 17 percent moisture and contains a sufficient number of yeast spores, it will ferment. Such honey should be pasteurized, that is, heated sufficiently to kill such organisms. This is particularly important if the honey is to be “creamed” or granulated, since this process results in a slightly higher moisture level in the liquid part. On the other hand, it is possible for honey to be too low in moisture from some points of view. In the West, honey may have a moisture content as low as 13 to 14 percent. Such honey is somewhat difficult to handle, though it is most useful in blending to reduce moisture content. It contains over 6 percent more honey solids than a product of 18.6 percent moisture.
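The moisture limits from the grade standards can be expressed as a small helper; note that moisture is only one of several grading factors, so this sketch classifies by moisture alone:

```python
def us_moisture_limit_grade(moisture_pct):
    """Classify honey by the moisture limits in the U.S. Standards for
    Grades of Extracted Honey (moisture is only one grading factor)."""
    if moisture_pct <= 18.6:
        return "meets A/B moisture limit"
    elif moisture_pct <= 20.0:
        return "meets C (U.S. Standard) moisture limit"
    else:
        return "D (Substandard)"

print(us_moisture_limit_grade(17.2))  # → meets A/B moisture limit
print(us_moisture_limit_grade(19.5))  # → meets C (U.S. Standard) moisture limit
```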
In the 490 samples of honey analyzed in the Department’s Technical Bulletin 1261, the average moisture content was 17.2 percent. Samples ranged between 13.4 and 22.9 percent, and the standard deviation was 1.46. This means that 68 percent of the samples (or of all U.S. honey) will fall within the limits of 17.2 ± 1.46 percent moisture (15.7 – 18.7); 95.5 percent of all U.S. honey will fall within the limits of 17.2 ± 2.92 percent moisture (14.3 – 20.1).
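The quoted ranges are just the mean plus or minus one and two standard deviations, as a quick check shows:

```python
mean, sd = 17.2, 1.46  # percent moisture, from Technical Bulletin 1261

# About 68 percent of samples fall within one standard deviation of the mean,
# and about 95.5 percent within two (assuming a roughly normal distribution).
one_sd = (round(mean - sd, 1), round(mean + sd, 1))
two_sd = (round(mean - 2 * sd, 1), round(mean + 2 * sd, 1))
print(one_sd)  # → (15.7, 18.7)
print(two_sd)  # → (14.3, 20.1)
```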
In the same bulletin, a breakdown of average moisture contents by geographic regions is shown. These values (percent) are North Atlantic, 17.3; East North Central, 18.0; West North Central, 18.2; South Atlantic, 17.7; South Central, 17.5; Intermountain West, 16.0; and West, 16.1.
Honey is above all a carbohydrate material, with 95 to 99.9 percent of the solids being sugars, and the identity of these sugars has been studied for many years. Sugars are classified according to their size or the complexity of the molecules of which they are made. Dextrose (glucose) and levulose (fructose), the main sugars in honey, are simple sugars, or monosaccharides, and are the building blocks for the more complex honey sugars. Dextrose and levulose account for about 85 percent of the solids in honey.
Until the middle of this century, the sugars of honey were thought to be a simple mixture of dextrose, levulose, sucrose (table sugar), and an ill-defined carbohydrate material called “honey dextrin.” With the advent of new methods for separating and analyzing sugars, workers in Europe, the United States, and Japan have identified many sugars in honey after separating them from the complex honey mixture. This task has been accomplished using a variety of physical and chemical methods.
Dextrose and levulose are still by far the major sugars in honey, but 22 others have been found. All of these sugars are more complex than the monosaccharides, dextrose and levulose. Ten disaccharides have been identified: sucrose, maltose, isomaltose, maltulose, nigerose, turanose, kojibiose, laminaribiose, α,β-trehalose, and gentiobiose. Ten trisaccharides are present: melezitose, 3-α-isomaltosylglucose, maltotriose, 1-kestose, panose, isomaltotriose, erlose, theanderose, centose, and isopanose. Two more complex sugars, isomaltotetraose and isomaltopentaose, have been identified. Most of these sugars are present in quite small quantities.
Most of these sugars do not occur in nectar, but are formed either as a result of enzymes added by the honeybee during the ripening of honey or by chemical action in the concentrated, somewhat acid sugar mixture we know as honey.
The flavor of honey results from the blending of many “notes,” not the least being a slight tartness or acidity. The acids of honey account for less than 0.5 percent of the solids, but this level contributes not only to the flavor, but is in part responsible for the excellent stability of honey against microorganisms. Several acids have been found in honey, gluconic acid being the major one. It arises from dextrose through the action of an enzyme called glucose oxidase. Other acids in honey are formic, acetic, butyric, lactic, oxalic, succinic, tartaric, maleic, pyruvic, pyroglutamic, α-ketoglutaric, glycollic, citric, malic, 2- or 3-phosphoglyceric acid, α- or β-glycerophosphate, and glucose 6-phosphate.
Proteins and Amino Acids
It will be noted in table 1 that the amount of nitrogen in honey is low, 0.04 percent on the average, though it may range to 0.1 percent. Recent work has shown that only 40 to 65 percent of the total nitrogen in honey is in protein, and some nitrogen resides in substances other than proteins, namely the amino acids. Of the 8 to 11 proteins found in various honeys, 4 are common to all, and appear to originate in the bee, rather than the nectar. Little is known of many proteins in honey, except that the enzymes fall into this class.
The presence of proteins causes honey to have a lower surface tension than it would have otherwise, which produces a marked tendency to foam and form scum and encourages formation of fine air bubbles. Beekeepers familiar with buckwheat honey know how readily it tends to foam and produce surface scum, which is largely due to its relatively high protein content.
The amino acids are simple compounds obtained when proteins are broken down by chemical or digestive processes. They are the “building blocks” of the proteins. Several of them are essential to life and must be obtained in the diet. The quantity of free amino acids in honey is small and of no nutritional significance. Breakthroughs in the separation and analysis of minute quantities of material (chromatography) have revealed that various honeys contain 11 to 21 free amino acids. Proline, glutamic acid, alanine, phenylalanine, tyrosine, leucine, and isoleucine are the most common, with proline predominating.
Amino acids are known to react slowly, or more rapidly by heating, with sugars to produce yellow or brown materials. Part of the darkening of honey with age or heating may be due to this.
When honey is dried and burned, a small residue of ash invariably remains, which is the mineral content. As shown in table 1, it varies from 0.02 to slightly over 1 percent for a floral honey, averaging about 0.17 percent for the 490 samples analyzed.
Honeydew honey is richer in minerals, so much so that its mineral content is said to be a prime cause of its unsuitability for winter stores. Schuette and his colleagues at the University of Wisconsin have examined the mineral content of light and dark honey. They reported the following average values:
One of the characteristics that sets honey apart from all other sweetening agents is the presence of enzymes. These conceivably arise from the bee, pollen, nectar, or even yeasts or micro-organisms in the honey. Those most prominent are added by the bee during the conversion of nectar to honey. Enzymes are complex protein materials that under mild conditions bring about chemical changes, which may be very difficult to accomplish in a chemical laboratory without their aid. The changes that enzymes bring about throughout nature are essential to life.
Some of the most important honey enzymes are invertase, diastase, and glucose oxidase.
Invertase, also known as sucrase or saccharase, splits sucrose into its constituent simple sugars, dextrose and levulose. Other more complex sugars have recently been found to form in small amounts during this action, which in part explains the complexity of the minor sugars of honey. Although the work of invertase is completed when honey is ripened, the enzyme remains in the honey and retains its activity for some time. Even so, the sucrose content of honey never reaches zero. Since the enzyme also synthesizes sucrose, perhaps the final low value for the sucrose content of honey represents an equilibrium between splitting and forming sucrose.
Diastase (amylase) digests starch to simpler compounds but no starch is found in nectar. What its function is in honey is not clear. Diastase appears to be present in varying amounts in nearly all honey and it can be measured. It has probably had the greatest attention in the past, because it has been used as a measure of honey quality in several European countries.
Glucose oxidase converts dextrose to a related material, a gluconolactone, which in turn forms gluconic acid, the principal acid in honey. Since this enzyme previously was shown to be in the pharyngeal gland of the honey bee, this is probably its source. Here, as with other enzymes, the amount varies in different honeys. In addition to gluconolactone, glucose oxidase forms hydrogen peroxide during its action on dextrose, which has been shown to be the basis of the heat-sensitive antibacterial activity of honey.
Other enzymes are reported to be present in honey, including catalase and an acid phosphatase. All the honey enzymes can be destroyed or weakened by heat.
Properties of Honey
Because of honey’s complex and unusual composition, it has several interesting attributes. In addition, honey has some properties, because of its composition, that make it difficult to handle and use. With modern technology, however, methods have been established to cope with many of these problems.
An ancient use for honey was in medicine as a dressing for wounds and inflammations. Today, medicinal uses of honey are largely confined to folk medicine. On the other hand, since milk can be a carrier of some diseases, it was once thought that honey might likewise be such a carrier. Some years ago this idea was examined by adding nine common pathogenic bacteria to honey. All the bacteria died within a few hours or days. Honey is not a suitable medium for bacteria for two reasons – it is fairly acid and it is too high in sugar content for growth to occur. This killing of bacteria by high sugar content is called the osmotic effect; it seems to function by literally drying out the bacteria. Some bacteria, however, can survive in the resting spore form, though they cannot grow in honey.
Another type of antibacterial property of honey is that due to inhibine. The presence of an antibacterial activity in honey was first reported about 1940 and confirmed in several laboratories. Since then, several papers have been published on this subject. Generally, most investigators agree that inhibine (the name used by Dold, its discoverer, for antibacterial activity) is sensitive to heat and light. The effect of heat on the inhibine content of honey was studied by several investigators. Apparently, heating honey sufficiently to reduce markedly or to destroy its inhibine activity would deny it a market as first-quality honey in several European countries. The use of sucrase and inhibine assays together was proposed to determine the heating history of commercial honey.
Until 1963, when White showed that the inhibine effect was due to hydrogen peroxide produced and accumulated in diluted honey, its identity remained unknown. This material, well known for its antiseptic properties, is a byproduct of the formation of gluconic acid by an enzyme that occurs in honey, glucose oxidase. The peroxide can inhibit the growth of certain bacteria in the diluted honey. Since it is destroyed by other honey constituents, an equilibrium level of peroxides will occur in a diluted honey, its magnitude depending on many factors such as enzyme activity, oxygen availability, and amounts of peroxide-destroying materials in the honey. The amount of inhibine (peroxide accumulation) in honey depends on floral type, age, and heating.
A chemical assay method has been developed that rapidly measures peroxide accumulation in diluted honey. By this procedure, different honeys have been found to vary widely in the sensitivity of their inhibine to heat. In general, the sensitivity is about the same as or greater than that of invertase and diastase in honey.
Honey is primarily a high-energy carbohydrate food. Because its distinct flavors cannot be found elsewhere, it is an enjoyable treat. The honey sugars are largely the easily digestible “simple sugars,” similar to those in many fruits. Honey can be regarded as a good food for both infants and adults.
The protein and enzymes of honey, though used as indicators of heating history and hence table quality in some countries, are not present in sufficient quantities to be considered nutritionally significant. Several of the essential vitamins are present in honey, but in insignificant levels. The mineral content of honey is variable, but darker honeys have significant quantities of minerals.
Dextrose, a major sugar in honey, can spontaneously crystallize from almost any honey in the form of its monohydrate. This sometimes occurs when the moisture level in honey drops below a certain level.
A large part of the honey sold to consumers in the United States is in the liquid form, much less in a finely granulated form known as “honey spread” or finely granulated honey, and even less as comb honey. The consumer appears to be conditioned to buying liquid honey. At least sales of the more convenient spread form have never approached those of liquid honey.
Since the granulated state is natural for most of the honey produced in this country, processing is required to keep it liquid. Careful application of heat to dissolve “seed” crystals and avoidance of subsequent “seeding” will usually suffice to keep a honey liquid for 6 months. Damage to color and flavor can result from excessive or improperly applied heat. Honey that has granulated can be returned to liquid by careful heating. Heat should be applied indirectly by hot water or air, not by direct flame or high-temperature electrical heat. Stirring accelerates the dissolution of crystals. For small containers, temperatures of 140ºF for 30 minutes usually will suffice.
If unheated honey is allowed to granulate naturally, several difficulties may arise. The texture may be fine and smooth or granular and objectionable to the consumer. Furthermore, a granulated honey becomes more susceptible to spoilage by fermentation, caused by natural yeast found in all honeys and apiaries. Quality damage from poor texture and fermented flavors usually is far greater than any caused by the heat needed to eliminate these problems.
Finely granulated honey may be prepared from a honey of proper moisture content (17.5 percent in summer, 15 percent in winter) by several processes. All involve pasteurization to eliminate fermentation, followed by addition at room temperature of 5 to 10 percent of a finely granulated “starter” of acceptable texture, thorough mixing, and storage at 55º to 60ºF in the retail containers for about a week. The texture remains acceptable if storage is below about 80º to 85ºF.
Deterioration of Quality
Fermentation. – Fermentation of honey is caused by the action of sugar-tolerant yeasts upon the sugars dextrose and levulose, resulting in the formation of ethyl alcohol and carbon dioxide. The alcohol in the presence of oxygen then may be broken down into acetic acid and water. As a result, honey that has fermented may taste sour.
The yeasts responsible for fermentation occur naturally in honey, in that they can germinate and grow at much higher sugar concentrations than other yeasts, and, therefore, are called “osmophilic.” Even so there are upper limits of sugar concentration beyond which these yeasts will not grow. Thus, the water content of a honey is one of the factors concerned in spoilage by fermentation. The others are extent of contamination by yeast spores (yeast count) and temperature of storage.
Honey with less than 17.1 percent water will not ferment in a year, irrespective of the yeast count. Between 17.1 and 18 percent moisture, honey with 1,000 yeast spores or less per gram will be safe for a year. When moisture is between 18.1 and 19 percent, not more than 10 yeast spores per gram can be present for safe storage. Above 19 percent water, honey can be expected to ferment even with only one spore per gram of honey, a level so low as to be very rare.
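These rules of thumb amount to a small decision table; a sketch (how moistures falling between the quoted ranges, e.g. 18.05 percent, should be treated is an assumption here):

```python
def fermentation_safe(moisture_pct, yeast_spores_per_gram):
    """Rules of thumb above: will this honey keep for a year unpasteurized?"""
    if moisture_pct < 17.1:
        return True                        # safe regardless of yeast count
    if moisture_pct <= 18.0:
        return yeast_spores_per_gram <= 1000
    if moisture_pct <= 19.0:
        return yeast_spores_per_gram <= 10
    return False                           # above 19 percent: expect fermentation

print(fermentation_safe(16.5, 100_000))  # → True
print(fermentation_safe(18.5, 50))       # → False
```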
When honey granulates, the resulting increased moisture content of the liquid part is favorable for fermentation. Honey with a high moisture content will not ferment below 50ºF or above about 80º. Honey even of relatively low water content will ferment at 60º. Storing at temperatures over 80º to avoid fermentation is not practical as it will damage honey.
E. C. Martin has studied the mechanism and course of yeast fermentation in honey in conjunction with his work on the hygroscopicity of honey. He confirmed that when honey absorbs moisture, which occurs when it is stored above 60-percent relative humidity, the moisture content at first increases mostly at the surface before the water diffuses into the bulk of the honey. When honey absorbs moisture, yeasts grow aerobically (using oxygen) at the surface and multiply rapidly, whereas below the surface the growth is slower.
Fermenting honey is usually at least partly granulated and is characterized by a foam or froth on the surface. It will foam considerably when heated. An odor as of sweet wine or fermenting fruit may be detected. Gas production may be so vigorous as to cause honey to overflow or burst a container. The off-flavors and odors associated with fermentation probably arise from the acids produced by the yeasts.
Honey that has been fermented can sometimes be reclaimed by heating it to 150ºF for a short time. This stops the fermentation and expels some of the off-flavor. Fermentation in honey may be avoided by heating to kill yeasts. Minimal treatments to pasteurize honey are as follows:
The following summarize the important aspects of fermentation:
1. All honey should be considered to contain yeasts.
2. Honey is more liable to fermentation after granulation.
3. Honey of over 17 percent water may ferment; honey of over 19 percent water will ferment.
4. Storage below 50ºF will prevent fermentation during such storage, but not later.
5. Heating honey to 150ºF for 30 minutes will destroy honey yeasts and thus prevent fermentation.
Quality loss by heating and storing – The other principal types of honey spoilage, damage by over-heating and by improper storing, are related to each other. In general, changes that take place quickly during heating also occur over a longer period during storage with the rate depending on the temperature. These include darkening, loss of fresh flavor, and formation of off-flavor (caramelization).
To keep honey in its original condition of high quality and delectable flavor and fragrance is possibly the greatest responsibility of the beekeeper and honey packer. At the same time it is an operation receiving perhaps less attention from the producer than any other and one requiring careful consideration by packers and wholesalers. To do an effective job, one must know the factors that govern honey quality, as well as the effects of various beekeeping and storage practices on honey quality. The factors are easily determined, but only recently are the facts becoming known regarding the effects of processing temperatures and storage on honey quality.
To be of highest quality, a honey – whether liquid, crystallized, or comb – must be well ripened with proper moisture content; it must be free of extraneous materials, such as excessive pollen, dust, insect parts, wax, and crystals if liquid; it must not ferment; and above all it must be of excellent flavor and aroma, characteristic of the particular honey type. It must, of course, be free of off-flavors or odors of any origin. In fact, the more closely it resembles the well-ripened honey as it exists in the cells of the comb, the better it is.
Several beekeeping practices can reduce the quality of the extracted product. These include combining inferior floral types, either by mixing at extracting time or removing the crop at incorrect times, extraction of unripe honey, extraction of brood combs, and delay in settling and straining. However, we are concerned here with the handling of honey from its extraction to its sale. During this time improper settling, straining, heating, and storage conditions can make a superb honey into just another commercial product.
The primary objective of all processing of honey is simple – to stabilize it. This means to keep it free of fermentation and to keep the desired physical state, be it liquid or finely granulated. Methods for accomplishing these objectives have been fairly well worked out and have been used for many years. Probably improvements can be made. The requirements for stability of honey are more stringent now than in the past, with honey a world commodity and available in supermarkets the year around. Government price support and loan operations require storage of honey, and market conditions also may require storage at any point in the handling chain, including the producer, packer, wholesaler, and exporter.
The primary operation in the processing of honey is the application and control of heat. If we consider storage to be the application of or exposure to low amounts of heat over long periods, it can be seen that a study of the effects of heat on honey quality can have a wide application.
Any assessment of honey quality must include flavor considerations. The objective measurement of changes in flavor, particularly where they are gradual, is most difficult. We have measured the accumulation of a decomposition product of the sugars (hydroxymethylfurfural or HMF) as an index of heat-induced chemical change in the honey. Changes in flavor, other than simple loss by evaporation, also may be considered heat-induced chemical changes.
To study the effects of treatment on honey, we must use some properties of honey as indices of change. Such properties should relate to the quality or commercial value of honey. The occurrence of granulation of liquid honey, liquefaction or softening of granulated honey, and fermentation as functions of storage conditions has been reported; also, color is easily measured.
As indicators of the acceptability of honey for table use, Europeans have for many years used the amount of certain enzymes and HMF in honey. They considered that heating honey sufficiently to destroy or greatly lower its enzyme content or produce HMF reduced its desirability for most uses. A considerable difference has been noted in the reports by various workers on the sensitivity to heat of enzymes, largely diastase and invertase, in honey. Only recently has it been noted that storage alone is sufficient to reduce enzyme content and produce HMF in honey. Since some honey types frequently exported to Europe are naturally low in diastase, the response of diastase and invertase to storage and processing is of great importance for exporters.
A study was made of the effects of heating and storage on honey quality and was based on the results with three types of honey stored at six temperatures for 2 years. The results were used to obtain predictions of the quality life of honey under any storage conditions. The following information is typical of the calculations based on this work.
At 68ºF, diastase in honey has a half-life of 1,500 days, nearly 4 years. Invertase is more heat sensitive, with a half-life at 68º of 800 days, or about 2-1/4 years. Thus there are no problems here. By increasing the storage temperature to 77º, half the diastase is gone in 540 days, or 1-1/3 years, and half the invertase disappears in 250 days, or about 8 months. These periods are still rather long and there would seem to be nothing to be concerned about. However, temperatures in the 90s for extended periods are not at all uncommon: 126 days (4 months) will destroy half the diastase and about 50 days (2 months) will eliminate half the invertase. As the temperature increases, the periods involved become shorter and shorter until the processing temperatures are reached. At 130º, 2-1/2 days would account for half the diastase and in 13 hours half the invertase is gone.
A recommended temperature for pasteurization of honey is 145ºF for 30 minutes. At this temperature diastase has a half-life of 16 hours and invertase only 3 hours. At first glance this might seem to present no problems, but it must be remembered that unless flash heating and immediate cooling are used, many hours will be required for a batch of honey to cool from 145º to a safe temperature.
If we proceed further to a temperature often recommended for preventing granulation, 160ºF for 30 minutes, the necessity of prompt cooling becomes highly important. At 160º, 2-1/2 hours will destroy half of the diastase, but half of the more sensitive invertase will be lost in 40 minutes. This treatment then cannot be recommended for any honey in which a good enzyme level is needed, as for export.
The damage done to honey by heating and by storage is the same. For the lower storage temperatures, simply a much longer time is required to obtain the same result. It must be remembered that the effects of processing and storage are additive. It is for this reason that proper storage is so important. A few periods of hot weather can offset the benefits of months of cool storage – 10 days at 90ºF are equivalent to 100 to 120 days at 70º. An hour at 145º in processing will cause changes equivalent to 40 days’ storage at 77º.
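The additivity claim follows from first-order (half-life) kinetics; a sketch using the diastase half-lives quoted above, where the 93ºF entry is an assumed stand-in for "temperatures in the 90s":

```python
# Diastase half-lives in days at several temperatures (ºF), from the figures above;
# the 93 ºF entry is an assumption standing in for "temperatures in the 90s".
DIASTASE_HALF_LIFE = {68: 1500, 77: 540, 93: 126, 130: 2.5}

def fraction_remaining(days, half_life_days):
    """First-order decay: enzyme activity surviving a given exposure."""
    return 0.5 ** (days / half_life_days)

# Diastase surviving 10 hot days in the low 90s:
print(round(fraction_remaining(10, DIASTASE_HALF_LIFE[93]), 3))  # → 0.946

# Days of cool (~70ºF, using the 68º figure) storage causing the same loss:
equivalent_days = 10 * DIASTASE_HALF_LIFE[68] / DIASTASE_HALF_LIFE[93]
print(round(equivalent_days))  # → 119, matching the "100 to 120 days" rule of thumb
```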
An easy way for beekeepers to decide whether they have storage or processing deterioration is to take samples of the fresh honey, being careful that the samples are fairly representative of the batch, and place them in a freezer for the entire period. At the end of this time, they should warm the samples to room temperature and compare them by color, flavor, and aroma with the honey in common storage. In some parts of the United States, the value of the difference can reach 1-1/2 cents per pound in a few months. Such figures certainly would justify expenditures for temperature control.
People who store honey are in a dilemma. They must select conditions that will minimize fermentation, undesirable granulation, and heat damage. Fermentation is strongly retarded below 50ºF and above 100º. Granulation is accelerated between 55º and 60º and initiated by fluctuation at 50º to 55º. The best condition for storing unpasteurized honey seems to be below 50º, or winter temperatures over much of the United States. Warming above this range in the spring can initiate active fermentation in such honey, which is usually granulated and thus even more susceptible.
DONER, L. W.
1977. THE SUGARS OF HONEY-A REVIEW. Journal of the Science of Food and Agriculture.
TOWNSEND, G. F.
1961. PREPARATION OF HONEY FOR MARKET. 24 p. Ontario Department of Agriculture Publication 544.
WHITE, J. W. JR.
1975. HONEY. In Grout, R. A., ed., The hive and the honey bee, p. 491-530. Dadant & Sons, Inc., Hamilton, Ill.
1975. COMPOSITION AND PHYSICAL PROPERTIES OF HONEY. In E. Crane, ed., Honey: a comprehensive survey, p. 157-239. Heinemann, London.
______ M. L. RIETHOP, M. H. SUBERS, and I. KUSHNIR.
1962. COMPOSITION OF AMERICAN HONEYS. 124 p. U.S. Department of Agriculture Technical Bulletin 1261.
The Bird Man Symbol
Native American Indians were a deeply spiritual people, and they communicated their history, thoughts, ideas, and dreams from generation to generation through symbols and signs such as the Bird Man symbol. The Bird Man symbol originated in the ancient Mississippian culture of the Mound Builders of North America and was a major element in the Southeastern Ceremonial Complex (S.E.C.C.) of American prehistory. Some Indian tribes, including the Creek, Choctaw, Cherokee, Seminole, and Chickasaw, still retain elements of the Mississippian culture. Their sacred rites, myths, and symbols, such as the Thunderbird symbol, are presumed to descend from the Mississippians. For additional information refer to Mythical Creatures.
The Meaning of the Bird Man Symbol
The Bird Man symbol featured strongly in the Mississippian culture. The bird man was believed to be a supernatural deity who resided in the Upperworld with the spirits of the Sun, Moon and Stars. A Bird Man therefore represented the Upperworld, order, and light, and bird man dancers would perform in ceremonies supplicating the spirits of the Upperworld. The sky linked the Upperworld (heaven) and the earth, and the bird man was able to move between the two realms as a messenger to the gods. The bird man was portrayed in the guise of an eagle, hawk or falcon, all strong, high-flying predators. As creatures of the sky they were in constant warfare with the spirits of the underworld. The Mississippians used dances, gestures and sounds as symbolic powers, and they wore ceremonial clothes and carried sacred objects and weapons to symbolize their power. The bird men also wore masks, which were believed to hold spiritual powers that never left them; the masks would identify the wearers with the spirits and activate their power. The bird man cut a powerful, intimidating figure and was associated with warfare. The Bird Man symbol pictured above shows a headdress bearing a horn. Antlers and horns signified spiritual power, especially when applied to animals that did not ordinarily have them, such as Birds, Panthers, Avanyu and Snakes (Serpents). Performing rituals and bird man dances were the Mississippians' way of aligning themselves with the spirits of the Upperworld and gaining favor for victory in battle or in important competitions such as Chunkey, during which fortunes could be won or lost.
The Bird Man Symbol - Mississippian culture
The most ancient Native American Indian symbols, like the Bird Man symbol, came from the Mississippian culture, which was established around 1000 AD and continued to about 1550 AD. The Mississippian Native Americans were the last of the mound-building cultures of North America in the Midwestern, Eastern, and Southeastern United States. The Mississippian culture was based on warfare, which was represented by an array of emblems, motifs and symbols. Mississippian warrior icons like the Bird Man symbol provide interesting history and ideas for tattoos that include cosmic imagery depicting animals, humans and mythical beasts. The Mississippian Native Americans practiced body painting, tattooing and piercing.
Acetylene stored in a free state under pressure greater than 15 psi can be made to break down by heat or shock and possibly explode. Under pressure of 29.4 psi, acetylene becomes self-explosive, and a slight shock will cause it to explode spontaneously. However, when dissolved in acetone, it can be compressed into cylinders at pressures up to 250 psi.
Figure 15-27.-Acetylene cylinder.
The acetylene cylinder (fig. 15-27) is filled with porous materials, such as balsa wood, charcoal, and shredded asbestos, to decrease the size of the open spaces in the cylinder. Acetone, a colorless, flammable liquid, is added until about 40 percent of the porous material is filled. The filler acts as a large sponge to absorb the acetone, which, in turn, absorbs the acetylene. In this process, the volume of the acetone increases as it absorbs the acetylene, while acetylene, being a gas, decreases in volume. The acetylene cylinders are equipped with safety plugs, which have a small hole through the center. This hole is filled with a metal alloy, which melts at approximately 212°F or releases at 500 psi. When a cylinder is overheated, the plug will melt and permit the acetylene to escape before a dangerous pressure can build up. The plug hole is too small to permit a flame to burn back into the cylinder if the escaping acetylene should become ignited.
WELDING TORCHES.-The oxyacetylene welding torch is used to mix oxygen and acetylene gas in the proper proportions, and to control the volume of these gases burned at the welding tip. The torch has two needle valves, one for adjusting the flow of acetylene and the other for adjusting the flow of oxygen. In addition, there are two tubes, one for oxygen and the other for acetylene; a mixing head; inlet nipples for the attachment of hoses; a tip; and a handle.
Figure 15-28.-Mixing head for injector-type welding torch.
Figure 15-29.-Equal pressure welding torch.
The tubes and handle are made of seamless hard brass, copper-nickel alloy, stainless steel, or other noncorrosive metals of adequate strength. There are two types of welding torches: the low-pressure or injector type and the equal-pressure type. In the low-pressure or injector type (fig. 15-28), the acetylene pressure is less than 1 psi. A jet of high-pressure oxygen is used to produce a suction effect to draw in the required amount of acetylene. This is accomplished by the design of the mixer in the torch, which operates on the injector principle. The welding tips may or may not have separate injectors designed integrally with each tip.
The equal pressure torch (fig. 15-29) is designed to operate with equal pressures for the oxygen and acetylene. The pressure ranges from 1 to 15 psi. This torch has certain advantages over the low-pressure type because the flame can be more readily adjusted, and since equal pressures are used for each gas, the torch is less susceptible to flashbacks.
The welding tips are made of hard-drawn electrolytic copper or 95-percent copper and 5-percent tellurium. They are made in various styles and types, some having a one-piece tip either with a single orifice or a number of orifices, and others with two or more tips attached to one mixing head. The diameters of the tip orifices differ to control the quantity of heat and the type of flame. These tip sizes are designated by numbers that are arranged according to the individual manufacturer's system. In general, the smaller the number, the smaller the tip orifice.
No matter what type or size tip you select, the tip must be kept clean. Quite often the orifice becomes clogged with slag. When this happens, the flame will not burn properly. Inspect the tip before you use it. If the passage is obstructed, you can clear it with wire tip cleaners of the proper diameter, or with soft copper wire. Tips should not be cleaned with machinist's drills or other sharp instruments. These devices may enlarge or scratch the tip opening and greatly reduce the efficiency of the torch tip.
HOSE.–The hose used to make the connection between the torch and the regulators is strong, nonporous, light, and flexible to make the torch movements easy. It is made to withstand high internal pressures, and the rubber used in its manufacture is chemically treated to remove sulfur to avoid the danger of spontaneous combustion. The oxygen hose is GREEN, and the acetylene hose is RED. The hose is a rubber tube with braided or wrapped cotton or rayon reinforcements and a rubber covering. The hoses have connections at each end so they can be connected to their respective regulator outlet and torch inlet connections. To prevent a dangerous interchange of acetylene and oxygen hoses, all threaded fittings used for the acetylene hookup are left-handed threads, and all threaded fittings for oxygen hookup are right-handed threads. The hoses are obtainable as a single hose for each gas or with the hoses bonded together along their length under a common outer rubber jacket. This type prevents the hose from kinking or becoming entangled during the welding operation.
LIGHTERS.-A flint lighter is provided for igniting the torch. The lighter consists of a file-shaped piece of steel, usually recessed in a cuplike device, and a piece of flint that can be drawn across the steel, which produces the sparks required to light the torch.
Matches should never be used to ignite a torch; their length requires bringing the hand too close to the tip to ignite the gas. Accumulated gas may envelop the hand and, when ignited, cause a severe burn.
GOGGLES.-Welding goggles are fitted with colored lenses to keep out heat and light rays and to protect the eyes from sparks and molten metal. Regardless of the shade of lens used, goggles should be protected by a clear cover glass. The welding operator should select the shade or density of color that is best suited for his/her particular work. The desired lens is the darkest shade that will show a clear definition of the work without eyestrain. Goggles should fit closely around the eyes, and should be worn at all times during welding and cutting operations. Special goggles, using standard lenses, are available for use with spectacles.
WELDING (FILLER) RODS. -The use of the proper type of filler rod is very important in oxyacetylene welding operations. This material not only adds reinforcement to the weld area, but also adds desired properties to the finished weld. By selecting the proper type of rod, either tensile strength or ductility can be secured in a weld. Similarly, rods can be selected that will help retain the desired amount of corrosion resistance. In some cases, a suitable rod with a lower melting point will eliminate possible cracks from expansion and contraction.
Welding rods are classified as ferrous and nonferrous. The ferrous rods include carbon and alloy steel rods as well as cast iron rods. Nonferrous rods include brazing and bronze rods, aluminum and aluminum alloy rods, magnesium and magnesium alloy rods, copper rods, and silver rods. The diameter of the rod used is governed by the thickness of the metals being joined. If the rod is too small, it will not conduct heat away from the puddle rapidly enough, and a burned weld will result. A rod that is too large will chill the puddle. As in selecting the proper size welding torch tip, experience will enable the welder to select the proper diameter welding rod.
On December 10th, we celebrate Human Rights Day in order “to bring to the attention of ‘people of the world’ the Universal Declaration of Human Rights (UDHR) as the common standard of achievement for all peoples and all nations” (UN General Assembly, 1950).
Adopted on 10 December 1948, the UDHR is the first international document, which formally sets out fundamental human rights to be universally protected.
Article 26 of the UDHR recognises education as a right:
- Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all on the basis of merit.
- Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.
- Parents have a prior right to choose the kind of education that shall be given to their children.
After the adoption of the UDHR, the right to education has been reaffirmed and further developed in a number of international and regional treaties. Today, all States (except South Sudan) have ratified at least one human rights treaty, accepting the legal obligation to respect, protect and fulfil the right to education for all, without discrimination.
However, there are still huge challenges in fully realising the right to education. Every day, everywhere in the world, this right is violated. As we celebrate Human Rights Day, it is important to remember that the right to education is more than words written in legal instruments.
The law is fundamental in guaranteeing the right to education, but the law alone does not mean everyone is enjoying their universal human right to education.
I recently attended the World Forum on Human Rights, held in Marrakech from 27 to 30 November, where I presented on the state of the right to education in the world during the education session, giving an overview of international and national law, as well as flagging the on-going issues in practice, such as: non-access and discrimination against marginalised groups, the violation of free education due to the levying of indirect costs and fees, bad quality education, and lack of financing linked to the growth of privatisation in education.
Colleagues from different regions of the world were also raising key issues. Beatriz Pérez, from the Bolivian Campaign for the Right to Education, focused on gender inequality and the importance of quality education, which should promote dignity.
Rene Raya, from the Asia South Pacific Association for Basic and Adult Education (ASPBAE), highlighted that Asia-Pacific hosts the majority of the world's illiterate adults (64% of the global total) and 31% of its out-of-school children. In the region, over 100 million youth (15-24 years) have not completed primary education, and gender disparity remains large, with women making up two-thirds of illiterate adults and more girls out of school than boys. At the same time, Asia-Pacific spends the least on education, which explains why the right to education is not realised in practice, despite most countries having constitutional and legal provisions guaranteeing free and compulsory education.
Rene noted that in recent years, due to the lack of public financing, Asia-Pacific has seen a stronger push towards privatisation, with an increase in private school enrolment, the rise of low-fee private schools, the expansion of private tutoring, and the emergence of corporate chain schools, all of which negatively impact the enjoyment of the right to education. In Cambodia, for instance, private tutoring has become so widespread that poor students, whose parents cannot afford it, fall behind. Privatisation doesn't just affect students; teachers working in private schools are paid extremely low salaries, often without benefits or social security.
Refaat Sabah from the Arabic Campaign for Education for All (ACEA), reiterated the importance of teachers’ salaries for a good quality education, and firmly denounced the money spent on buying arms rather than using it to protect the right to education of everyone. He told us the stories of Palestinian students who have been shot on their way to school reminding us that in some parts of the world, particularly in emergency contexts, students risk their lives to access education.
A few moments later, Alberto Croce, from the Argentinian Campaign for the Right to Education (CADE) stood up and asked for solidarity for the 43 missing Mexican students who disappeared two months ago while they were demonstrating against the Mexican Government’s plan to reduce education budgets.
What happened next moved me a lot.
We opened the floor for questions and many hands shot up – too many to count. The room was packed and I had the impression that everyone wanted to speak. About 15 people were able to take the floor, and each of them had their own personal story to tell.
Almost all of them highlighted discrimination issues: the lack of access to early childhood care and education in rural areas, particularly for poor families; the lack of access to school for disabled children, even more so if they are orphans; the challenge for nomadic people to get to schools, as the availability of mobile schools is limited; the exclusion from education of persons in detention; the gender inequalities from primary to higher education; and the difficulty in receiving education in minority languages. They also raised quality issues; for instance, in some remote areas students are taught in tents, and materials, such as books, are unavailable. Students also denounced the prohibitively high university fees.
Nothing of what I heard from them was new to me. I know these realities. But nonetheless I was deeply moved by people’s need to express them. Many participants came to me after the session to ask for more information and continue talking about the challenges they face in their particular context. It reiterated to me that, for these people, the right to education is more than just words written in legal instruments. It is a daily challenge to effectively enjoy it.
It is essential to create more spaces where people can be heard and find support in their fight for the full realisation of their right to education. On the Right to Education Project website, we have a multilingual discussion forum and a blog where I encourage everyone to share their stories. This space is yours. And if you need any information or support, do get in touch with us.
This year, the Human Rights Day’s slogan, ‘Human Rights 365’, encompasses the idea that every day is Human Rights Day. It celebrates the fundamental proposition in the Universal Declaration that each one of us, everywhere, at all times is entitled to the full range of human rights.
Today, let’s call on States to comply with their obligation to guarantee the right to education to everyone, including through the adoption of special measures towards marginalised groups.
No one can be left behind. It is essential for the well-being and development of each individual, as well as for society, that everyone can access and receive a good quality education.
Delphine Dorsi has been Legal and Communication Officer for the Right to Education Project since September 2012. Prior to this position, she worked at UNESCO for the Right to Education Programme, where she carried out research, produced publications and monitored the implementation of the right to education, in close collaboration with UN treaty bodies and the UN Special Rapporteur on the Right to Education. She also worked with a number of NGOs in Europe and Africa. She holds a Master's Degree in Human Rights from the University of Strasbourg.
So these are the instructions:
Put your finger on the capital “A” and lowercase “a” at the top of the page.
Write the remaining letters of the alphabet in order to Z on the lines across your page.
You may use capital letters, lowercase letters, or both.
This is the scoring Rubric:
4- Exceeds expectations – Student correctly forms all letters of the alphabet, both capital and lowercase. Letters are properly sequenced from A-Z.
3- Meets expectations- Student correctly sequences the letters from A-Z. Most letters are correctly formed. The student may use all capital letters, all lowercase letters, or a combination of the two.
2- Approaches expectations- Student correctly forms at least 20 letters of the alphabet; the letters may be out of sequence.
1- Below expectations- Student correctly forms fewer than 20 letters of the alphabet.
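Read literally, the four levels can be expressed as a simple decision procedure (a sketch; the function, its parameter names, and the reading of "most" as 20 or more letters are my own, not part of the rubric):

```python
def rubric_score(in_sequence: bool, letters_formed: int, both_cases_correct: bool) -> int:
    """Score an alphabet-writing sample against the four rubric levels above.

    in_sequence: letters are in A-Z order.
    letters_formed: number of letters formed correctly (0-26).
    both_cases_correct: ALL capital and lowercase letters formed correctly.
    Note that levels 4/3 and 2/1 hinge on different criteria.
    """
    # 4 - Exceeds: all letters, both cases, properly sequenced
    if in_sequence and both_cases_correct and letters_formed == 26:
        return 4
    # 3 - Meets: sequenced, "most" letters correct (read here as 20+)
    if in_sequence and letters_formed >= 20:
        return 3
    # 2 - Approaches: at least 20 correct, sequence not required
    if letters_formed >= 20:
        return 2
    # 1 - Below: fewer than 20 correct
    return 1
```

Writing it out this way makes the mixed criteria easy to see: the top two levels depend on sequencing and case, while the bottom two depend only on a letter count.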
So 23 or 24 of my students (out of 25) correctly sequenced the letters from A-Z, formed them correctly, AND had BOTH upper and lower case letters formed correctly to the following standard. I accept "some" reversals; this is a developmental thing with 5-6 year olds, and they don't have much control over it. They may form the letters correctly 9 times out of 10, but being tired, hungry, or stressed in some other way can cause a higher incidence of reversals. Since we mostly don't write on lined paper yet, I don't care if the lower case letters (like g or j) "hang" below the line. I say that IF all letters were formed correctly to my standard, and all properly sequenced, then to me they exceed expectations. Meeting expectations only requires one of each letter, upper or lower case or a combination of them, properly sequenced, with "most" formed correctly. I say that my kids were WAY closer to #4 than #3 and graded them accordingly.
My problem with the rubric is that the different levels focus on different things, not just better levels of the same things. There are two different things being measured: alphabetical order and forming the letters. Furthermore, the DIRECTIONS give the kids the option of "You may use capital letters, lowercase letters, or both"; there is no real direction given to TRY to form ALL upper and lowercase letters. (I left that part of the directions out with my kids, leaving them with the understanding that they were to form both upper and lower case letters; then at least they TRY.)
As summer became autumn in 1918, the Spanish Flu struck hard at the eastern shores of New England. Cases emerged in Boston, Brockton, Quincy, and Gloucester and at Camp Devens. By mid-September, 21 flu-related deaths were reported in Boston alone. By October 1, 85,000 cases had been reported statewide and the city was experiencing deaths at a rate of about 200 a day. Doctors, health officials, and scientists rushed to control and find a treatment for the influenza epidemic as they watched victims die within days, or even hours, of the appearance of their first symptoms.
John Owen, my grandfather, was just 17 years old and living in Lawrence, Massachusetts at the time. He remembered seeing the wagons come around the streets of Lawrence to collect the bodies of those who had fallen victim to the flu. As he explained it, the horse-drawn wagons would approach from the end of the street and collect bodies placed on the sidewalks, or carried out from houses. Coffins became scarce at the height of the epidemic and workers were forced to pile the bodies one atop the next, and carry them to mass graves.
As influenza cases and death tolls mounted in Boston and statewide, panic emerged. Misinformation and fear abounded. What were the symptoms of the flu? How could it be distinguished from the common cold? The US Public Health Service soon released a pamphlet, informing the public that the Spanish Flu came on suddenly, striking the victim with pain and soreness throughout his body, especially in the eyes, ears, back and head. Some experienced dizziness and nausea; most suffered fevers as high as 104°F that lasted as long as four days. The US Public Health Service advised that flu sufferers looked sick and likely would have bloodshot eyes, a runny nose, and a cough.
Beyond the pamphlets, schools became a means of disseminating information about the disease to children and their families. During the height of the epidemic, the National School Boards Association advised the worried public to avoid sick people, crowds, and badly ventilated places, to keep warm, and to change from wet clothes quickly. This is familiar advice, even for us today. Local papers took it a step further, perhaps sensationalizing their advice somewhat, in order to attract more readers.
To slow the spread of the disease, public buildings, schools included, in Lawrence and many other New England cities and towns were closed. Haverhill, Massachusetts went a step further and prohibited its schoolchildren from attending motion picture houses or other public meetings during the epidemic. In Marblehead, Massachusetts, the high school building was converted into a hospital by the Board of Health in October 1918. Boston’s Committee of Public Safety asked school teachers to attend flu victims; most did as schools were closed indefinitely.
During the epidemic, fresh air was thought paramount in protecting against infection. Open-air emergency camps were set up in many Massachusetts cities and towns to treat the infected. The first opened on Brookline's Corey Hill on September 9, 1918. Gloucester, Ipswich, Brockton, Waltham, Haverhill, Springfield, and Barre soon followed. My grandfather would have also seen Lawrence open its own emergency camp for flu victims from the city, as well as cases from the neighboring communities of Methuen, Andover, and North Andover. Controversy surrounded the opening of the Lawrence camp, named Emery Hill, which had been a large dairy farm that supplied milk to nearby residents. The residents objected loudly to Lawrence's Mayor Hurley, but in the end city officials protested that they had little say in the matter. They maintained that the state had chosen the site and was running the hospital. By October 12, 1918, the camp had 150 patients.
In the end, more than one in every four people in the US suffered some form of the Spanish Flu, and within one year, the average life expectancy for a US citizen was shortened by 12 years. Worldwide, more than 50 million people died – more than three times the number of lives claimed by World War I.
The Influenza Epidemic of 1918 is frequently overlooked in discussions about the history of the United States. Does your family have a story about its experiences with the epidemic? Did you lose any family members to it?
Today there are well over a thousand different species of scorpion and they can be found throughout the tropics and sub-tropics. The biggest of them, aptly called the imperial scorpion, lives in the humid rain forest of West Africa. Fully outstretched, it can measure 8 inches (21 cm). Some desert scorpions on the other hand are only a few millimeters long. All are very similar in form, with a pair of large and formidable pincers in front, eight legs, and a long segmented tail which is usually carried arched over the animal's back. At the end of it hangs a large tear-shaped stinger loaded with poison.
The potency of scorpion's poison varies - as does the reaction of different animals when stung. As a general rule, the bigger and more formidable a scorpion's pincers, the less virulent its sting is likely to be. So the imperial relies more on its strength to overcome the prey and has a comparatively mild sting, whereas smaller species with fat tails and thin pincers have a venom so virulent that their sting can kill a dog in seven minutes and a human being in a few hours.
Although scorpions are normally only found in warm countries, their tolerance to different climatic conditions is extraordinary. They can withstand freezing for several weeks and will survive underwater for two days. Their external skeleton retains liquid so effectively that they can live in the hottest deserts. Their appetite is so small that individuals of some species can go without any food or water for twelve months. And they have a life span of up to thirty years.
If you try to pick up a scorpion, you quickly realize that it is almost impossible to catch the animal unaware. One way to make the attempt is to take a pair of forceps and grip it by its tail, just beneath the sting. But this is not easy. No matter from which direction you approach, the animal will be aware of you and will swivel to face the danger, often jerking its stinging tail forward in a threatening way. Almost as alarming, some species, such as a greenish-black one that lives in southern India, will hiss at you, producing the noise by rasping a small patch of tiny spines on its claws.
They can see you whichever way you approach, for they have up to six pairs of simple eyes distributed around the carapace, together with a rather larger pair close to its back margin. So a scorpion can see both forwards and backwards simultaneously. These eyes consist of groups of individual light-sensitive cells.
But vision is by no means a scorpion's primary sense. It will certainly be aware of your approach even if it cannot see you. The slightest movement on sand causes a minute vibration that is transmitted from grain to grain. The scorpion detects it with a slit-shaped organ on the upper part of each leg. These organs are so sensitive they can detect the footfall of a beetle on sand a meter away. Airborne noises are picked up by the scorpion in a different way, by minute hairs on the claws. With these it can detect the beat of an insect wing.
In addition to all these receptors the scorpion also has sensory devices that have virtually no parallel in any other animal - comb-like structures on the under-surface of its last pair of legs. They are called pectines and they are certainly sensitive for they are packed with nerve endings. It now seems certain that these organs are chemo-receptors. Whatever their precise role turns out to be, part of it will involve the smelling or tasting of chemical substances on the surface over which they walk.
Israeli black scorpion, Scorpio maurus fuscus, a subspecies of Scorpio maurus (photo courtesy of BioLib).
Scorpions do not generally attack man and can be carefully brushed off the body without danger; but if they are suddenly disturbed they may inflict a painful sting. The sting of some species may even be fatal, particularly to children because of their small size. One of the more venomous species in the United States is Centruroides exilicauda, commonly known as the Arizona bark scorpion. This species was previously known as C. sculpturatus. The specific name exilicauda means "slender tail".
New York: Centuries from now, a large swathe of the West Antarctic ice sheet is likely to be gone, its hundreds of trillions of tons of ice melted, causing a 4-foot (1.2-metre) rise in already swollen seas.
Scientists reported last week that the scenario may be inevitable, with new research concluding that some giant glaciers had passed the point of no return, possibly setting off a chain reaction that could doom the rest of the ice sheet.
For many, the research signalled that changes in the earth’s climate have reached a tipping point, even if global warming halted immediately.
“We as people see it as closing doors and limiting our future choices,” said Richard Alley, a professor of geosciences at Pennsylvania State University. “Most of us personally like to keep those choices open.”
But these glaciers are just the latest signs that the thawing of earth’s icy regions is accelerating. While some glaciers are holding steady or even growing slightly, most are shrinking, and scientists believe they will continue to melt until greenhouse gas emissions are reined in.
“It’s possibly the best evidence of real global impact of warming,” said Theodore A. Scambos, lead scientist at the National Snow and Ice Data Center.
Furthest along in melting are the smallest glaciers in the high mountainous regions of the Andes, the Alps and the Himalayas and in Alaska. By itself, their melting does not pose a grave threat; together they make up only 1 per cent of the ice on the planet and would cause sea level to rise only by 1 to 2 feet.
But the mountain glaciers have been telling scientists what the West Antarctica glacier disintegration is now confirming: In the coming centuries, more land will be covered by water and more of nature will be disrupted. A full melt would cause sea level to rise 215 feet.
During recent ice ages, glaciers expanded from the poles and covered nearly a third of the continents. And in the distant past there were episodes known as Snowball Earth, when the entire planet froze over. At the other extreme, a warm period near the end of the age of dinosaurs may have left the earth ice-free. Today the amount of ice is modest — 10 per cent of land areas, nearly all of that in Greenland and Antarctica.
Glaciers are, simply, rivers of ice formed from snow in regions that are frozen year-round. The snow compacts over time into granular, porous ice, which glaciologists call firn. When firn compacts even more, it becomes glacier ice, which flows, usually slowly, down mountainsides. Depending on how fast new snow accumulates at the top, or melts at the bottom, a glacier grows or shrinks in length and thickness.
Not long ago, the only way to measure glaciers was to put stakes in the ice. Using surveying tools, glaciologists would mark the location and return later to see how far the ice had moved. The method gave scientists a sense of only the areas measured during that study period. “We had these point measurements which were very labour-intensive,” said Tad Pfeffer, a glaciologist at the University of Colorado.
Today, satellites provide a global view. Images show where the glaciers are and how areas change over the years. Most useful has been NASA’s Gravity Recovery and Climate Experiment, or GRACE. Two identical spacecraft have been measuring the earth’s gravity. When glaciers melt, the water flows elsewhere, and that part of the planet weighs less, slightly weakening its gravitational pull. GRACE isn’t precise enough to measure the mass changes in an individual glacier, but it does provide data on regional shifts.
Another NASA satellite, IceSat, bounced lasers off the ice to precisely measure glaciers' height. (In operation from 2003 through 2009, when the last of its lasers stopped working, it is scheduled to be replaced by IceSat-2 in 2017.)
In an analysis last year of the satellite and ground measurements, a team of scientists led by Alex S. Gardner, an earth scientist at Clark University in Worcester, Massachusetts, who is moving to NASA’s Jet Propulsion Laboratory, concluded that, on average, glaciers in all regions were withering away, dumping 260 billion metric tons of water into the ocean every year.
“I can’t think of any major glacier region that’s growing right now,” Scambos said. “Almost everywhere we look we’re seeing mass loss.”
The melting from the mountain glaciers alone raises sea level about 0.7 millimetres a year.
The ice sheets of Antarctica and Greenland together possess about 100 times as much ice as all of the mountain glaciers combined, but contribute only slightly more to the sea level rise: 310 billion tons a year, Scambos said. That is because most of the mountain glaciers lie in areas where temperatures are closer to the melting point than they are in Greenland or Antarctica, and so slight warming tips them to melting.
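The 0.7-millimetre figure can be checked with a widely used rule of thumb: melting roughly 360 gigatons of ice raises global mean sea level by about 1 millimetre. A quick sketch (the 360 Gt/mm conversion factor is an approximation brought in here, not a number from the article):

```python
# Rule-of-thumb conversion (assumption): melting ~360 gigatons of ice
# raises global mean sea level by about 1 millimetre.
GT_PER_MM = 360.0

def sea_level_rise_mm_per_year(melt_gt_per_year):
    """Approximate sea-level contribution from a given ice-mass loss rate."""
    return melt_gt_per_year / GT_PER_MM

# Figures quoted in the article:
glaciers = sea_level_rise_mm_per_year(260)    # mountain glaciers
ice_sheets = sea_level_rise_mm_per_year(310)  # Greenland + Antarctica combined

print(f"Mountain glaciers: {glaciers:.2f} mm/yr")  # close to the quoted ~0.7 mm
print(f"Ice sheets:        {ice_sheets:.2f} mm/yr")
```

The 260 Gt/yr glacier figure works out to about 0.72 mm/yr, consistent with the "about 0.7 millimetres" stated above.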
Greenland, with 10 per cent of the world’s ice, has enough to raise sea level 23 feet. “I still think Greenland is the most important thing to watch for this century,” Scambos said.
In 2012, when summer Arctic temperatures were particularly warm, surface melting was observed almost everywhere on Greenland’s glaciers, even in the mountains. That had not happened for decades.
Researchers from Dartmouth found that another side effect from global warming, forest fires, made the melting even worse. Soot from fires elsewhere in the world landed on Greenland snow, making it darker, causing it to absorb more heat.
A new study of Greenland, published Sunday in the journal Nature Geoscience, paints an even bleaker picture. The melting is accelerated because many of the glaciers flow into the warming waters around Greenland. However, scientists had believed that the melting would slow once the bottoms of the glaciers melted back and were no longer touching the water.
The new research indicates otherwise. Researchers at the University of California, Irvine, including Eric Rignot, the lead author of one of last week’s papers concluding that the melt in West Antarctica is irreversible, discovered long, deep canyons below sea level and under the ice sheet. That means the glaciers will have to retreat farther and longer before they lose contact with the water, and as a result, more ice will melt. “They will contribute more to sea level rise,” said Mathieu Morlighem, lead author of the Nature Geoscience paper.
Antarctica is the largest frozen mass on the planet, accounting for about 90 per cent of the earth’s ice. Most of it is in East Antarctica, which is generally higher and colder and less likely to melt. By some estimates global warming is leading to increased snowfall there, which is limiting the loss. But as in West Antarctica, some of the ice resides in bowl-shape depressions, which are similarly vulnerable to melting.
Overall, data from the European Space Agency’s CryoSat satellite, published on Monday, indicates that the continent shed 160 billion tons a year from 2010 to 2013.
Scientists say that the melting will continue as long as the heat-trapping carbon dioxide in the atmosphere increases. Even if carbon dioxide and temperatures stabilise, the melting and shifting of glaciers will continue for decades or centuries as they adjust to the new equilibrium.
But a vast majority of the ice is not yet destined to melt. “We have not committed to a lot more that could be committed if we keep turning up the thermostat,” said Alley of Penn State. |
As human populations and their impacts on the world increase, tropical forests are changing in many different ways. Forests are being cleared, burned, logged, fragmented, and overhunted at an unprecedented pace, and they are also being altered in insidious ways by global climatic and atmospheric changes. "The evidence for global effects suggests that a massive reorganization of the structure and dynamics of tropical forests is already underway," writes ecologist S. Joseph Wright. "The tropics support over half of all species and over two-thirds of all people. Without an appropriate commitment from the scientific community, the two are unlikely to continue to coexist," concludes the scientist.
Tropical forest landscapes are changing rapidly in the eyes of scientists working on tropical monitoring plots around the globe, while human populations and their economic activities grow. Old-growth forests become agricultural lands, degraded land is abandoned, urbanization intensifies, and the populations of tropical countries will increase by two billion over the next 25 years.
But what is happening in protected areas? Globally, 18% of all tropical and subtropical moist forest and 9% of all tropical dry forests are nominally protected by governments. Increasingly, even these areas seem to be bearing the indelible marks of human activity.
On Barro Colorado Island (BCI), administered by the Smithsonian Tropical Research Institute (STRI) in the Republic of Panama, the old-growth forest has escaped fire and agriculture for at least 1,500 years. STRI's Center for Tropical Forest Science (CTFS) has repeatedly censused all stems one centimeter in diameter or more at 1.3 m height in a 50-hectare plot every five years since 1985. Wright notes that the aboveground biomass on BCI has been almost constant since the first census, but lianas increased substantially (from 9% to 13% of all leaf biomass) between 1986 and 2002.
In another protected area, Kibale National Park in Uganda, a 30-year record suggests that reproductive activity by forest trees is increasing; and at La Selva, Costa Rica, diameter-growth rates decreased among surviving individuals for cohorts of nine species measured annually for 17 years. These mysterious changes may be caused by large-scale drivers, such as increasing carbon dioxide in the atmosphere, intense droughts, or other poorly understood phenomena.
Wright, who has studied tropical forests and their plant and animal inhabitants at STRI since the late 1970s, encourages tropical scientists to conduct assessments based on existing long-term records. Basic research will help us to understand the dimensions and mechanisms of forest responses to anthropogenic forcing. Conservation scientists must help to mitigate the number of species lost to extinction by enhancing the effectiveness of the network of protected areas. Other applied research will help to rehabilitate degraded lands and to improve agricultural yields and living standards.
According to William F. Laurance, Wright's colleague at STRI and frequent spokesman for conservation efforts in Africa and the Amazon, "the commitment of tropical biologists must go a step further to include effective communication of their findings to decision makers and the general public. It is those who will eventually demand that governments invest in research and conservation of tropical forests and who will work to slow the rapid, unsustainable growth of human populations in the tropics."
Ref.: Wright, S. Joseph. 2005. "Tropical forests in a changing environment." Trends in Ecology & Evolution Online.
The Smithsonian Tropical Research Institute (STRI), with headquarters in Panama City, Panama, is one of the world's leading centers for basic research on the ecology, behavior and evolution of tropical organisms. http://www.stri.org
Egg - The white, oblong egg is about 1 mm long and becomes light brown just before hatching.
Nymph - The wingless nymph resembles the adult in shape but is slightly smaller. The first instar is orange; instars two through four are yellow; the last nymphal instar is pale green.
Host Plants - This general feeder infests over 400 species of plants. It is particularly injurious to alfalfa, red clover, wheat, oats, corn, and strawberries.
Damage - Nymphs and adults extract plant juices through their needle-like mouthparts. Unlike most sap-sucking insects, however, they do not cause the foliage to turn yellow. Most plants wilt and become stunted. On alfalfa, a rosetting of the terminal plant growth often occurs. A severe infestation (100 bugs/plant) of meadow spittlebugs may drastically reduce seed yields and lower hay production 25 to 50 percent. The hay that is harvested often is too wet to cure. Such an infestation is most likely to result after an unusually dry spring.
Life History - Meadow spittlebugs overwinter as egg masses between the leaf sheath and the stem. Located 8 to 15 cm above the ground, these masses of up to 30 eggs hatch from late March in North Carolina to early June in Maine. The nymphs seek sheltered, humid areas of plants. Once they begin feeding, the nymphs exude a white, frothy spittle mass which protects them from natural enemies and desiccation. The nymphs feed for a month or longer, depending on temperature, and finally develop into adults in late May or June. During the summer, adults feed on the crop upon which they developed. As the foliage dries out, adults migrate to new hosts. In late August or early September, females each begin to deposit 18 to 51 eggs on succulent plant tissue. Each mass consists of a row of 2 to 20 eggs lying side by side and glued together with a frothy cement. Meadow spittlebugs produce only one generation each year. |
Compiled by Copy Editor Susan Johnson
Lake-effect snow is produced during cooler atmospheric conditions when cold winds move across long expanses of warmer lake water, providing energy and picking up water vapor, which freezes and is deposited on leeward shores. The same effect occurs over bodies of salt water, producing ocean-effect or bay-effect snow. The effect is enhanced when the moving air mass is uplifted by the orographic (relating to mountains) influence of higher elevations on the downwind shores. This uplifting can produce narrow, but intense, bands of precipitation, depositing many inches of snow each hour. This often results in large snowfall totals.
The areas affected by lake-effect snow are called snowbelts. This effect is found in many locations around the world, but is best known in populated areas of the Great Lakes of North America, and especially central and western New York, northwestern Pennsylvania, northeastern Ohio, southwestern and central Ontario, northeastern Illinois (along the shoreline of Lake Michigan), northwestern and north-central Indiana.
Under certain conditions, strong winds can accompany lake-effect snows, creating blizzard-like conditions. But the duration of this event is often a little less than that required for a blizzard warning in both the U.S. and Canada. If the air temperature is not low enough to keep the precipitation frozen, it falls as lake-effect rain. For lake-effect rain or snow to form, the air moving across the lake must be significantly cooler than the surface air (which is likely to be near the temperature of the water surface).
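How much cooler the air must be can be made concrete with a commonly cited forecasting rule of thumb: a temperature contrast of roughly 13 °C between the lake surface and the air at the 850 hPa level favors lake-effect snow. The threshold below is that rule of thumb, brought in as an assumption rather than taken from this article:

```python
# Rough lake-effect check. The 13 degC lake-to-850-hPa contrast is a commonly
# cited forecasting rule of thumb, used here as an assumption.
LAKE_EFFECT_THRESHOLD_C = 13.0

def lake_effect_possible(lake_surface_c, air_850hpa_c):
    """Return True when the lake-air temperature contrast supports lake-effect snow."""
    return (lake_surface_c - air_850hpa_c) >= LAKE_EFFECT_THRESHOLD_C

print(lake_effect_possible(4.0, -12.0))  # strong contrast over an unfrozen lake
print(lake_effect_possible(4.0, -5.0))   # contrast too weak for squalls
```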
Lake-effect snow is produced as cold winds blow clouds over warm waters. Several key elements are required to form lake-effect precipitation. These determine its characteristics: instability, fetch (the distance an airmass travels over a body of water), wind shear, upstream moisture, upwind lakes, synoptic (large)-scale forcing, orography/topography, and snow or ice cover.
As a lake gradually freezes over, its ability to produce lake-effect precipitation decreases for two reasons. First, the open ice-free liquid surface area of the lake shrinks, which reduces fetch distances. Second, the water temperature nears freezing, reducing overall latent heat energy available to produce squalls. A complete freeze is often not necessary to end production of lake-effect precipitation.
Even when precipitation is not produced, cold air passing over warmer water may produce cloud cover. Fast-moving mid-latitude cyclones (Alberta clippers) often cross the Great Lakes. After a cold front passes, winds tend to switch to the northwest, and often a long-lasting low-pressure area will form over the Canadian Maritimes, which may pull cold northwestern air across the Great Lakes for a week or more.
The southern and eastern shores of the Great Lakes typically receive heavy snowfall each winter, especially from late November to early January. This lake-effect snow may lead to large regional differences. Researcher B. Geerts noted, “For instance, 50 cm of snow may accumulate over the course of a few days near the shore, and 50 km from the lake shore the ground may be bare. Lake-effect snow occurs elsewhere as well, e.g., near Lake Baikal in Russia, but nowhere is it so pronounced and has it such an effect on ground and air transportation.
“The local maxima in snowfall are not due to the proximity of mountains or an ocean. The difference is not because the southern and eastern shores are cooler than the surroundings; in fact, they are slightly warmer than the other shores. Snowfall typically occurs in this area after the passage of a cold front, when synoptic factors are not conducive to precipitation.”
Meteorologist Brian Edwards wrote for AccuWeather.com on Jan. 8, 2014, that with the arctic air still hanging over the Great Lakes, snow would continue to pile up in the snowbelts downwind of the Great Lakes. According to NOAA, ice coverage throughout the Great Lakes is limited to mostly coastal locations, and the lack of ice would lead to blinding snow developing downwind of the lakes. Some locations to the east and southeast of the Great Lakes could receive more than 3 feet of snow.
From the Feb. 19-25, 2014, issue |
Points to consider when evaluating student work include:
Seeing the many sides of the issue is critical: students who are well prepared will avoid becoming partisan or blaming one side or the other.
Students will demonstrate that they are familiar with the history and nature of the difficult relations of the Comanches with the Texans, based on their readings of primary sources.
Students will demonstrate an understanding of the differences between the various approaches.
Students will show they understand all sides of the issues, and in doing so demonstrate they would make a good Indian agent who will protect the rights of both Comanches and settlers.
Students will be able to respond with solutions to problems based on their readings of primary source documents.
Their responses will demonstrate that they can offer suggestions that could make the reservation idea work for both sides, based on approaches described in the resources. |
Did You Know?
The bands or rings on a clam's outer shell indicate its age, much like rings on a tree. One year of growth is one full ring.
~Photo courtesy of NOAA~
- Mollusks like clams, oysters, mussels, and conch are soft-bodied animals that build their own hard outer shell for protection. To create the shell, they use an organ called a "mantle" to secrete a hard substance known as "nacre" over their body. A mollusk's shell is often washed up on the beach after it dies.
- Pearls are formed inside mollusks. They form when a sand grain or other irritating particle gets stuck inside the mollusk's shell. To protect itself, the mollusk covers the particle with nacre, the same material that covers the inside of its shell. After several layers of nacre cover the sand grain, a pearl takes shape. Today, many pearls come from cultured or farmed mollusks; finding one in the wild is a one-in-a-million chance.
- Molluscan shellfish have siphons that filter food, such as microscopic plants and animals (phytoplankton and zooplankton), out of the water. This is known as filter feeding.
- Shellfish like oysters, clams, mussels and scallops are called "bivalves" because they have two halves that are held together by a ligament. A strong adductor muscle helps keep the two valves closed.
Harvesting Shellfish: Visit the Shellfishing webpages for information on recreationally and commercially harvesting these animals.
What is an air quality index (AQI)?
Where can I find air quality forecasts and measured concentrations of pollutants?
What are some seasonal patterns for ozone and particulate matter in my community?
5-10 minutes a day for one week for Part A, prior to doing Parts B and C
One block period or two traditional periods for reviewing Part A and doing Parts B and C
AQI charts for projection (included)
Daily newspaper and/or internet connection
Archived records of ozone and particulate matter (PM)
Video 2-3: Forecasting Air Quality (optional)
TOPICS: air pollution (local), air pollution (North Carolina), ozone (ground-level), particle pollution (or particulate matter), parts per million, parts per billion, Air Quality Index
TYPES: critical thinking, data collection, data analysis
NC ESSENTIAL STANDARDS for Earth/Environmental Science:
EEn.2.5 Understand the structure of and processes within our atmosphere.
EEn.2.5.1 Summarize the structure and composition of our atmosphere.
EEn.2.5.5 Explain how human activities affect air quality.
If you are only doing this activity, and not the others in Module 2, you may want to show the 2-3 Forecasting Air Quality Video to your students after they do this activity.
What's an Air Quality Index? Activity
In this activity, students will learn where they can find the daily Air Quality Index (AQI) forecast and how to interpret it. They will also identify seasonal patterns for ozone and particulate matter in their region, and learn some of the reasons behind those patterns.
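The AQI value students look up is computed from measured pollutant concentrations by linear interpolation between category breakpoints, using EPA's formula I = (I_hi - I_lo) / (C_hi - C_lo) * (C - C_lo) + I_lo. A sketch of that calculation follows; the 8-hour ozone breakpoints shown are illustrative assumptions, and the current official tables should be taken from EPA:

```python
# EPA AQI linear interpolation between category breakpoints.
# Breakpoints below are illustrative 8-hour ozone values (ppb); consult
# EPA's published tables for the official numbers.
OZONE_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi, category)
    (0,  54,   0,  50, "Good"),
    (55, 70,  51, 100, "Moderate"),
    (71, 85, 101, 150, "Unhealthy for Sensitive Groups"),
]

def ozone_aqi(ppb):
    """Map an 8-hour ozone concentration (ppb) to an AQI value and category."""
    for c_lo, c_hi, i_lo, i_hi, category in OZONE_BREAKPOINTS:
        if c_lo <= ppb <= c_hi:
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (ppb - c_lo) + i_lo
            return round(aqi), category
    raise ValueError("concentration outside the breakpoint table")

print(ozone_aqi(42))  # falls in the "Good" range
print(ozone_aqi(68))  # falls in the "Moderate" range
```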
What is An Air Quality Index? - Teacher to Teacher Tips
The quick video below has tips for doing this activity from Mark Townley, an award-winning, North Carolina high school teacher. Mark helped develop It’s Our Air and has used each of these activities with his students.
OTHER MODULE 2 (PREDICTING AIR POLLUTION) ACTIVITIES AND VIDEO
2-1 What's an Air Quality Index (this activity) |
July 22, 2010
If you think back to the time you were in school, high school or earlier, you probably remember classes where you did well and were engaged, those that bored you, and still others that you found too challenging.
Chances are, the ones you excelled in responded well to your needs as a learner while those that bored you or confused you did not. This is a common situation and highlights the problem faced by teachers who teach students with many different learning needs.
Solving this problem is the notion behind student-centered learning, where the individual needs of students are front and center. This is an idea that has been around for a while, but is finding currency again in the search for ways to improve educational outcomes and the need for students to develop the "21st-century skills" they will need in their life after school. In addition to literacy, science and math skills, and knowledge of government and economics, 21st-century skills include critical thinking, communication, collaboration and creativity.
Student-centered learning is tailored to learning styles, and students are encouraged to move at a pace that ensures they understand concepts before moving on. This goes a long way to improving student engagement.
If the pace is too slow, students can move faster and move on to more challenging material. Those who do not understand the concepts can take more time or find alternative approaches to the material that might better respond to their learning needs.
For both teachers and students, it represents a change. Rather than delivering information to passive recipients, teachers are more engaged in guiding students to discover meaning in the information they’re considering.
Information is also explored in new ways through projects, student-to-student instruction, team collaboration and presentations that use a variety of media to demonstrate knowledge. Technology is also being used increasingly since it allows the individualization of instruction at low cost.
As we consider this trend, logical questions are: What does this transformation mean for the physical environments of schools, and how would a student-centered environment look?
To answer these questions, let’s turn first to the characteristics of the environments that are not student-centered.
When you remember the school environment from your own experience, it probably had many of the following characteristics: classes taught in nearly identical rooms arranged along a central corridor; a strict time limit for the start and end of class; a teacher lecturing a class with students in rows facing one direction; a clear front and back of the room, with white boards or chalkboards on some of the walls and perhaps a few computers in the room located together in the back.
This environment, which is typical even in many new schools, is the result of the “industrial” model of education, which had as its goal to process a lot of students at minimum cost, providing them with basic skills.
It works well enough for students who respond to this sort of learning activity, but tends to leave students behind who don’t respond to this approach, and to bore students who don’t see the relevance of what they are learning or aren’t receptive to how it is being taught.
It is not student-centered because it ignores the individual needs of students. A certain amount of material needs to be covered, and whether or not the student has mastered it, it is necessary to move on to the next unit of material.
Many teachers try to personalize their teaching in this situation, but with limited time and support for customization, dedicated teachers are still limited by a system that was never designed for personalization.
A student-centered model
Student-centered learning environments, by contrast, will need to be very different. The examples that exist have some things in common: students tend to have a home base or work station to call their own; classrooms, while they still exist, are more limited and specialized, with more learning happening out of the classroom; spaces with a variety of furniture (soft seating, sofas and tables) are available for students who may find them more comfortable than the traditional desk arrangement; more spaces exist for working in groups; teachers are grouped together so they can collaborate more easily; areas for presentation are widely available; flexible spaces that accommodate a wide range of project activities, both for individuals and groups, are scattered throughout; students are encouraged to exercise creativity in both the development and rearrangement of spaces; and technology (computers, projectors, smart boards and so on) is widely available with easy wireless connectivity.
A school environment that displays many of these characteristics is High Tech High, a charter school in San Diego. Instead of classrooms, High Tech High has studios, seminar rooms, collaboration areas, spaces for project-based learning, and a variety of furniture available for student rearrangement and use.
The school is very flexible in terms of the kinds of projects and groupings it can accommodate. This physical arrangement supports the school’s cross-curricular approach by allowing students to develop knowledge in many subject areas through project-based learning.
A project might combine the development of math, science and social studies skills, for instance, by working on a project that requires exploration into all of these subjects to successfully complete the project. High Tech High is a student-centered learning environment.
Although student-centered learning has gained currency in discussions of educational reform, it is not common to see this approach wholly adopted in schools. The reasons for this go beyond the scope of this article, but as this trend gains momentum, it would be wise for facility planners and designers to plan for its consequences.
Those consequences are the need for spaces that are different from those we see today, spaces that allow more collaboration, encourage critical thinking and foster communication between students, faculty and the greater community. In short, student-centered learning environments will need to be more flexible and adaptable to the individualization and personalization trends of the future.
Greg Stack is the K-12 thought leader for NAC Architecture.
There are three major classes of Cephalopods. Which of the following is a Cephalopod?
Which type of Cnidarian body plan is less motile?
What moves the Gastric Mill?
What are the structures called where blood enters the heart?
Where is food ground up?
A. pyloric stomach
B. gastric stomach
What provides the major force for backward swimming?
What muscle causes the mandibles to come together?
A. mandibular adductor muscles
B. mandibular chewing muscles
C. abdominal flexor muscles
D. all of the above
What covers the Branchial Chamber?
What organs excrete excess Water and Ammonia?
B. green glands
C. malpighian tubules
Where does the female store sperm until she lays her eggs?
A. seminal vesicle
C. seminal receptacle
What are the first pair of walking legs called?
The male genital opening is located at the base of which pair of walking legs?
The female genital opening is located at the base of which pair of walking legs?
How many pairs of uropods does a crayfish have?
What is the posterior-most extension of the last body segment called?
What are the telson and Uropods used for?
A. forward swimming
B. food gathering
C. pollen collection
D. rapid backward motion to escape
What name is given to the paired appendages of the abdomen?
A. chelipeds and swimmerets
B. swimmerets and antennae
C. swimmerets and uropods
D. uropods and pleopods
What covers the Cephalothorax?
C. gill plate
D. gastric mill
What is the sharp front extension of the carapace called?
Name 2 different kinds of sensory organs of the crayfish.
A. antennae and eyes
B. legs and uropods
C. rostrum and eyes
D. eyes and legs
Which appendages are used for defense and food handling?
C. walking legs
How many pairs of swimmerets does a crayfish have?
How many pairs of walking legs does a crayfish have?
Which swimmerets are adapted for sperm transfer in the male crayfish?
A. the first 2 pair
B. the last 2 pair
C. the first and last pair
D. all of them
Various types of substances can be found in our surroundings. Some substances are very simple whereas some are very complex. Some substances can be broken down into other simpler substances while others cannot. A substance which cannot be further broken down into other simpler substances is called an element. Hydrogen, oxygen, nitrogen, chlorine, mercury, lead, etc. are examples of elements. There are altogether 118 elements known so far. Of these, 92 elements occur naturally and the remaining 26 have been prepared artificially by scientists. Elements combine together to form a new substance, which is called a compound. Salt, water, chalk, carbon dioxide, etc. are examples of compounds.
An atom is the smallest particle of an element which can take part in a chemical reaction. Atoms of different elements differ in size, mass, and chemical properties. For example, all atoms of hydrogen are alike in every respect, but they differ from the atoms of oxygen. The 118 known elements have 118 different types of atoms. So, different elements have different atoms.
The smallest particle of an element or compound that can exist independently is called a molecule. For example, a molecule of chlorine is made of two atoms of chlorine and is denoted by Cl2. The molecule of a compound contains two or more atoms of different elements. For example, a molecule of water contains two atoms of hydrogen and one atom of oxygen, and is represented by H2O.
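The idea that a formula like H2O or Cl2 encodes how many atoms of each element a molecule contains can be shown with a short sketch. The tiny parser below is purely illustrative: it assumes simple formulas with one- or two-letter element symbols and optional counts, and no parentheses.

```python
import re

def atom_counts(formula):
    """Count atoms in a simple formula such as 'H2O' or 'Cl2'.

    Illustrative only: handles an element symbol (capital letter plus an
    optional lowercase letter) followed by an optional count.
    """
    counts = {}
    for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] = counts.get(symbol, 0) + (int(num) if num else 1)
    return counts

print(atom_counts("H2O"))   # two hydrogen atoms, one oxygen atom
print(atom_counts("Cl2"))   # two chlorine atoms
print(atom_counts("NaCl"))  # one sodium atom, one chlorine atom
```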
An element is the simplest pure substance: it can neither be divided into simpler substances nor be made from two or more substances by any method, e.g. hydrogen, helium, etc.
A compound is a substance formed by the combination of two or more than two elements.
An atom is the smallest particle of an element which can take part in the chemical reaction.
The smallest particles of an element or a compound is called molecule.
Any two examples of compounds are sodium chloride and sodium oxide.
The smallest particle of an element is _______.
Molecule is the smallest unit of _______.
Cl2 is an example of ______.
NaCl is an example of ______.
Elements combine together to form a new substance, which is called ______.
A molecule of chlorine is made of ______ atoms of chlorine.
A substance which cannot be further broken down into other simpler substances is called ______.
Which of these are matter?
All the answers are correct |
An object placed on a tilted surface will often slide down the surface. The rate at which the object slides down the surface is dependent upon how tilted the surface is; the greater the tilt of the surface, the faster the rate at which the object will slide down it. In physics, a tilted surface is called an inclined plane. Objects are known to accelerate down inclined planes because of an unbalanced force. To understand this type of motion, it is important to analyze the forces acting upon an object on an inclined plane. The diagram at the right depicts the two forces acting upon a crate that is positioned on an inclined plane (assumed to be friction-free). As shown in the diagram, there are always at least two forces acting upon any object that is positioned on an inclined plane - the force of gravity and the normal force. The force of gravity (also known as weight) acts in a downward direction; yet the normal force acts in a direction perpendicular to the surface (in fact, normal means "perpendicular").
The Abnormal Normal Force
The first peculiarity of inclined plane problems is that the normal force is not directed in the direction that we are accustomed to. Up to this point in the course, we have always seen normal forces acting in an upward direction, opposite the direction of the force of gravity. But this is only because the objects were always on horizontal surfaces and never upon inclined planes. The truth about normal forces is not that they are always upwards, but rather that they are always directed perpendicular to the surface that the object is on.
The Components of the Gravity Force
The task of determining the net force acting upon an object on an inclined plane is a difficult matter since the two (or more) forces are not directed in opposite directions. Thus, one (or more) of the forces will have to be resolved into perpendicular components so that they can be easily added to the other forces acting upon the object. Usually, any force directed at an angle to the horizontal is resolved into horizontal and vertical components. However, this is not the process that we will pursue with inclined planes. Instead, the process of analyzing the forces acting upon objects on inclined planes will involve resolving the weight vector (Fgrav) into two perpendicular components. This is the second peculiarity of inclined plane problems. The force of gravity will be resolved into two components of force - one directed parallel to the inclined surface and the other directed perpendicular to the inclined surface. The diagram below shows how the force of gravity has been replaced by two components - a parallel and a perpendicular component of force.
The perpendicular component of the force of gravity is directed opposite the normal force and as such balances the normal force. The parallel component of the force of gravity is not balanced by any other force. This object will subsequently accelerate down the inclined plane due to the presence of an unbalanced force. It is the parallel component of the force of gravity that causes this acceleration. The parallel component of the force of gravity is the net force.
The task of determining the magnitude of the two components of the force of gravity is a mere matter of using the equations. The equations for the parallel and perpendicular components are:

Fparallel = m • g • sin (angle)
Fperpendicular = m • g • cos (angle)
In the absence of friction and other forces (tension, applied, etc.), the acceleration of an object on an incline is the value of the parallel component (m • g • sin (angle)) divided by the mass (m). This yields the equation

a = g • sin (angle)
(in the absence of friction and other forces)
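These relationships are easy to check numerically. The short sketch below (the function name and sample values are illustrative, with g taken as 9.8 N/kg) resolves the weight vector and recovers the frictionless acceleration g • sin (angle):

```python
import math

def incline_components(mass, angle_deg, g=9.8):
    """Resolve the weight vector into components parallel and
    perpendicular to an incline of the given angle."""
    f_grav = mass * g
    f_parallel = f_grav * math.sin(math.radians(angle_deg))
    f_perpendicular = f_grav * math.cos(math.radians(angle_deg))
    return f_parallel, f_perpendicular

# A 10-kg object on a 30-degree frictionless incline:
mass, angle = 10.0, 30.0
f_par, f_perp = incline_components(mass, angle)
print(round(f_par / mass, 2))  # 4.9 m/s/s, i.e. g * sin(30 degrees)
```

Note that the mass cancels out of the acceleration: doubling the mass doubles the parallel component of gravity but leaves g • sin (angle) unchanged.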
Simplifying an Inclined Plane Problem
In the presence of friction or other forces (applied force, tensional forces, etc.), the situation is slightly more complicated. Consider the diagram shown at the right. The perpendicular component of force still balances the normal force since objects do not accelerate perpendicular to the incline. Yet the frictional force must also be considered when determining the net force. As in all net force problems, the net force is the vector sum of all the forces. That is, all the individual forces are added together as vectors. The perpendicular component and the normal force add to 0 N. The parallel component and the friction force add together to yield 5 N. The net force is 5 N, directed along the incline towards the floor.
The above problem (and all inclined plane problems) can be simplified through a useful trick known as "tilting the head." An inclined plane problem is in every way like any other net force problem with the sole exception that the surface has been tilted. Thus, to transform the problem back into the form with which you are more comfortable, merely tilt your head in the same direction that the incline was tilted. Or better yet, merely tilt the page of paper (a sure remedy for TNS - "tilted neck syndrome" or "taco neck syndrome") so that the surface no longer appears level. This is illustrated below.
Once the force of gravity has been resolved into its two components and the inclined plane has been tilted, the problem should look very familiar. Merely ignore the force of gravity (since it has been replaced by its two components) and solve for the net force and acceleration.
As an example consider the situation depicted in the diagram at the right. The free-body diagram shows the forces acting upon a 100-kg crate that is sliding down an inclined plane. The plane is inclined at an angle of 30 degrees. The coefficient of friction between the crate and the incline is 0.3. Determine the net force and acceleration of the crate.
Begin the above problem by finding the force of gravity acting upon the crate and the components of this force parallel and perpendicular to the incline. The force of gravity is 980 N and the components of this force are Fparallel = 490 N (980 N • sin 30 degrees) and Fperpendicular = 849 N (980 N • cos30 degrees). Now the normal force can be determined to be 849 N (it must balance the perpendicular component of the weight vector). The force of friction can be determined from the value of the normal force and the coefficient of friction; Ffrict is 255 N (Ffrict = "mu"*Fnorm= 0.3 • 849 N). The net force is the vector sum of all the forces. The forces directed perpendicular to the incline balance; the forces directed parallel to the incline do not balance. The net force is 235 N (490 N - 255 N). The acceleration is 2.35 m/s/s (Fnet/m = 235 N/100 kg).
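The arithmetic of this worked example can be verified in a few lines of Python (a sketch, using g = 9.8 N/kg as in the text):

```python
import math

# The crate example from the text: 100 kg, 30-degree incline, mu = 0.3
m, angle, mu, g = 100.0, 30.0, 0.3, 9.8

f_grav = m * g                                            # 980 N
f_parallel = f_grav * math.sin(math.radians(angle))       # 490 N
f_perpendicular = f_grav * math.cos(math.radians(angle))  # ~849 N
f_norm = f_perpendicular          # balances the perpendicular component
f_frict = mu * f_norm                                     # ~255 N
f_net = f_parallel - f_frict      # directed down the incline
a = f_net / m

print(round(f_net), round(a, 2))  # 235 2.35
```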
The two diagrams below depict the free-body diagram for a 1000-kg roller coaster on the first drop of two different roller coaster rides. Use the above principles of vector resolution to determine the net force and acceleration of the roller coaster cars. Assume a negligible effect of friction and air resistance.
The effects of the incline angle on the acceleration of a roller coaster (or any object on an incline) can be observed in the two practice problems above. As the angle is increased, the acceleration of the object is increased. The explanation of this relates to the components that we have been drawing. As the angle increases, the component of force parallel to the incline increases and the component of force perpendicular to the incline decreases. It is the parallel component of the weight vector that causes the acceleration. Thus, accelerations are greater at greater angles of incline. The diagram below depicts this relationship for three different angles of increasing magnitude.
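This trend is easy to check numerically using the frictionless result a = g • sin (angle), with g taken as 9.8 N/kg (the angles below are illustrative):

```python
import math

g = 9.8
for angle in (15, 30, 45, 60):
    a = g * math.sin(math.radians(angle))  # frictionless acceleration
    print(angle, "degrees:", round(a, 2), "m/s/s")
```

The acceleration climbs steadily with the incline angle, approaching the free-fall value of 9.8 m/s/s as the angle approaches 90 degrees.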
Some Roller Coaster Physics
Roller coasters produce two thrills associated with the initial drop down a steep incline. The thrill of acceleration is produced by using large angles of incline on the first drop; such large angles increase the value of the parallel component of the weight vector (the component that causes acceleration). The thrill of weightlessness is produced by reducing the magnitude of the normal force to a value less than usual. It is important to recognize that the thrill of weightlessness is a feeling associated with a lower than usual normal force. Typically, a person weighing 700 N will experience a 700 N normal force when sitting in a chair. However, if the chair is accelerating down a 60-degree incline, then the person will experience a 350 Newton normal force. This value is less than normal and contributes to the feeling of weighing less than one's normal weight - i.e., weightlessness.
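The normal-force figures in this paragraph follow directly from Fnorm = Fgrav • cos (angle); a quick check:

```python
import math

weight = 700.0  # N, the person's weight from the example
for angle in (0, 60):
    f_norm = weight * math.cos(math.radians(angle))
    print(angle, "degrees:", round(f_norm), "N")  # 0 -> 700 N, 60 -> 350 N
```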
Check Your Understanding
The following questions are intended to test your understanding of the mathematics and concepts of inclined planes.
1. Two boys are playing ice hockey on a neighborhood street. A stray puck travels across the friction-free ice and then up the friction-free incline of a driveway. Which one of the following ticker tapes (A, B, or C) accurately portrays the motion of the puck as it travels across the level street and then up the driveway?
Explain your answer.
2. Little Johnny stands at the bottom of the driveway and kicks a soccer ball. The ball rolls northward up the driveway and then rolls back to Johnny. Which one of the following velocity-time graphs (A, B, C, or D) most accurately portrays the motion of the ball as it rolls up the driveway and back down?
Explain your answer.
3. A golf ball is rolling across a horizontal section of the green on the 18th hole. It then encounters a steep downward incline (see diagram). Friction is involved. Which of the following ticker tape patterns (A, B, or C) might be an appropriate representation of the ball's motion?
Explain why the inappropriate patterns are inappropriate.
4. Missy dePenn's eighth frame in the Wednesday night bowling league was a disaster. The ball rolled off the lane, passed through the freight door in the building's rear, and then down the driveway. Millie Meater (Missy's teammate), who was spending every free moment studying for her physics test, began visualizing the velocity-time graph for the ball's motion. Which one of the velocity-time graphs (A, B, C, or D) would be an appropriate representation of the ball's motion as it rolls across the horizontal surface and then down the incline? Consider frictional forces.
5. Three lab partners - Olive N. Glenveau, Glen Brook, and Warren Peace - are discussing an incline problem (see diagram). They are debating the value of the normal force. Olive claims that the normal force is 250 N; Glen claims that the normal force is 433 N; and Warren claims that the normal force is 500 N. While all three answers seem reasonable, only one is correct. Indicate which two answers are wrong and explain why they are wrong.
6. Lon Scaper is doing some lawn work when a 2-kg tire escapes from his wheelbarrow and begins rolling down a steep hill (a 30° incline) in San Francisco. Sketch the parallel and perpendicular components of this weight vector. Determine the magnitude of the components using trigonometric functions. Then determine the acceleration of the tire. Ignore resistance force.
Finally, determine which one of the velocity-time graphs would represent the motion of the tire as it rolls down the incline.
Explain your answer.
7. In each of the following diagrams, a 100-kg box is sliding down a frictional surface at a constant speed of 0.2 m/s. The incline angle is different in each situation. Analyze each diagram and fill in the blanks.
Teach English With This Week in History
This Week in History is a fun and interesting reading that is updated every Monday at English Club. You will find it on the English Club homepage. ESL learners can read about a different event that occurred in history each week. These short readings can also be printed out and used to inspire a wide range of activities in the ESL classroom. Below you will see one example of "This Week in History", along with a number of ideas on how to use this resource as a basis for various games and skill-building lessons.
NB: to help in planning for future lessons, you can find the whole year of weekly events in the This Week in History Archive. Also, the EnglishClub YouTube channel has an audio version of This Week in History with subtitles.
Sample from first week of August
1930: FIRST FIFA WORLD CUP IS HELD IN URUGUAY
In 1930, the first FIFA World Cup Football (soccer) Championship was held in Uruguay, with the final match being played in the country's capital, Montevideo, between the host nation Uruguay and their South American neighbours Argentina. Uruguay trailed 2-1 at half-time, but scored 3 goals in the second half to win 4-2 and become the first World Champions. The FIFA World Cup is now held every 4 years and is one of the world's most popular sporting events, with the final match regularly watched by over 2 billion television viewers.
- Practise writing questions: Have students write as many wh-questions as they can based on the short reading. Put students in pairs and have them answer each other's questions without looking at the text.
When was the first FIFA World Cup Championship held?
Where was the final match of the 1930 World Cup held?
Which teams played in the final match of the 1930 World Cup?
What was the final score of the first World Cup?
How often is the World Cup held?
How many people watch the World Cup Final each year?
- Key word search: Ask students to choose one keyword from the reading and do online research on it. Depending on whether or not your students require speaking or writing practice, the research can be presented orally or in a written report. The length and detail of the assignment can be based on the level of your students. This may be a good time to introduce the idea of plagiarism in essay writing. Teach students how to put sentences into their own words. You may even want to introduce how to properly cite references in English. Use this exercise to encourage students to use variety in sentence structure.
Montevideo is the capital of Uruguay. It is located in the southern region. Almost half of the country lives in this city. Montevideo is a popular tourist destination. People from Argentina love to visit its beautiful beaches. Fishing is one of the most important industries in Montevideo.
- Pronunciation Practice: Choose three to five words from the text and write them on the board. Time students while they try to come up with as many rhyming words as they can. Turn the activity into a contest by splitting students into two groups. The group that comes up with the most rhyming words wins. Have each group share their rhyming words out loud. People from the other group can participate by putting up their hands when they think a word doesn't rhyme.
"first": worst, thirst, burst
"match": catch, patch, latch
"half": laugh, calf
"goals": bowls, rolls, holes
- Spelling Bee: Choose a list of ten words from the text and tell students there will be a spelling bee based on "This Week in History". Do not tell them which words to study or when the test will be. Students can practise quizzing each other in pairs.
- Hot Seat: Choose vocabulary from the text. Separate your class into two. One player from each group goes to the front of the class and faces away from the board. You write the word on the board and the other students have to define the word and try to get their player to guess it. The student who shouts the word first gets a point for her team.
- Grammar Search: Hand out copies of the text. Have students circle all of the verbs, box all of the nouns, underline all of the modifiers, etc. Take up the answers on the board.
- This Week in the Future: Have students choose 5 vocabulary words from the original text. Tell them they will use the words to write a new story about the future (for example, 100 years later than the original story).
Vocabulary: trailed, host nation, capital, popular, regularly
- 2030: FIRST HUMAN FLYING COMPETITION
In 2030, the first human flying competition was held in England's capital city. People from 4 different countries participated in the event. The host nation won the competition. A 14-year-old British boy flew all the way from the London Zoo to Buckingham Palace in half an hour. The other flyers trailed behind by more than five minutes. Human flying was one of the most popular things for teens to do in England during this year. Students would regularly fly to and from school instead of taking the bus.
Monday Madness: Save yourself time and stress at the beginning of each week by starting Monday's class with this fun and simple resource. (NB: weekly update occurs at 00:01 hours GMT Mondays. You may need to refresh the page to get the latest story.)
Extra YouTube Activity
Give your students the link to the EnglishClub YouTube channel and let them find the This Week in History playlist. First, have them listen to a video with sound. Then have them turn the sound down and try to read the captions out loud as the words go across the screen. This will help learners practise speaking at an appropriate pace.
The 1st July 2009 marks the 440th anniversary of what was perhaps one of the first (in retrospect) ‘EU’-style unions on the European continent. The Union of Lublin (1st July, 1569) is often seen as a natural predecessor to the Maastricht Treaty (7th February, 1992). The Union of Lublin was a union of two states – the Kingdom of Poland and the Grand Duchy of Lithuania. The actual signing of the Union of Lublin may have been a defining point in history but it was only one moment in a whole series of acts of union and treaties that saw the eventual creation of a federal state.
Not only is the Union of Lublin seen as a precursor to the Maastricht Treaty, but the state that the Union of Lublin created is often seen as analogous to the modern European Union. Does this mean that the member states of the European Union will follow the same path as Poland and Lithuania prior to and after the Union of Lublin? Can the respective histories of Poland and Lithuania give us valuable insights into what might become of the European Union? In order to answer these questions or even attempt to answer these questions, it is useful to look at what happened before and after the Union of Lublin with the help of a simple timeline…
1385 – Union of Krewo (Grand Lithuanian Duke marries Polish Queen);
1401 – Union of Vilnius-Radom (relating to issues of royal authority);
1413 – Union of Horodło (uniting the nobilities of both states);
1432 – Union of Grodno (saw increased ties between the two states);
1499 – Union of Kraków-Vilnius (was a political-military alliance);
1501 – Union of Mielnik (renewed the personal dynastic union);
1569 – Union of Lublin (created a ‘Commonwealth’ – two states with one ruler, government and foreign policy);
1791 – Creation of a unitary state (and abolition of the two states);
1795 – The ‘Commonwealth’ disappears off the map (with the Partitions of Poland).
Will the European Union follow a similar path? We may argue that the deterioration of the Polish-Lithuanian state prior to the Partitions could well happen to the EU. The social collapse of the Polish-Lithuanian Commonwealth opened the gates for the Partitions. Perhaps this is already happening in the EU? Rising bureaucracy, a growing feeling of dissatisfaction, a general feeling of apathy. Are we witnessing the start of the collapse of the European Union or does Maastricht still have another 200 years left? Could the EU also end up on the rubbish heap of history?
A History of Unions
If we count the start of the development of Europe’s first ‘Union’ to have been 1385 and the end 1795 then 410 years is not a bad result, although in reality we should count the Union of Lublin as the Union’s inception date. In any case 1569 to 1795 still gives us 226 years. The Scandinavian Kalmar Union lasted from 1397 to 1523 (a ‘mere’ 126 years). The British Acts of Union began in 1707 and still exist (which gives 302 years and counting). In any case, these three examples – the Polish-Lithuanian Commonwealth, the United Kingdom and the Kalmar Union – demonstrate that the European continent has a history of unions and this is, by no means, something foreign to us. Why did the Commonwealth and Kalmar Union fail? Why is the United Kingdom still going? Two questions that may prove to be important for the future of the European Union.
Scientists have found the oldest known fossil of copulating insects in northeastern China, as reported November 6th in the open-access journal PLOS ONE by Dong Ren and colleagues at the Capital Normal University in China.
Fossil records of mating insects are fairly sparse, and therefore our current knowledge of mating position and genitalia orientation in the early stages of evolution is rather limited.
In this study, the authors present a fossil of a pair of copulating froghoppers, a type of small insect that hops from plant to plant much like tiny frogs. The well-preserved fossil of these two froghoppers shows a belly-to-belly mating position and depicts the male reproductive organ inserted into the female copulatory structure.
This is the earliest record of copulating insects to date, and suggests that froghoppers' genital symmetry and mating position have remained static for over 165 million years. Ren adds, "We found these two very rare copulating froghoppers which provide a glimpse of interesting insect behavior and important data to understand their mating position and genitalia orientation during the Middle Jurassic."
The above post is reprinted from materials provided by Public Library of Science. Note: Materials may be edited for content and length.
1. Embryo of Empire: Americans and the World Before 1789.
2. Independence, Expansion, and War, 1789–1815.
3. Extending and Preserving the Empire, 1815–1848.
4. Expansionism, Sectionalism, and Civil War, 1848–1865.
5. Establishing Regional Hegemony and Global Power, 1865–1895.
6. Imperialist Leap, 1895–1900.
7. Managing, Policing, and Extending the Empire, 1900–1914.
8. War, Peace, and Revolution in the Time of Wilson, 1914–1920.
Appendix: Makers of American Foreign Relations.
6. Diplomatic Crossroad: The Maine, McKinley, and War, 1898. The Venezuelan Crisis of 1895. American Men of Empire. Each in His Own Way: Cleveland and McKinley Confront Cuba Libre, 1895-1898. The Spanish-American-Cuban-Filipino War. Men versus “Aunties”: The Debate over Empire in the United States. Imperial Collisions in Asia: The Philippine Insurrection and the Open Door in China. The Elbows of a World Power, 1895-1900.
Molar tooth belonging to a Denisovan, thought to be a new branch of ancient humans that overlapped in time with Neanderthals and modern humans. © MPI-EVA, Leipzig
Denisovans, together with Neanderthals, are our closest extinct relatives. They are a recently discovered group of ancient humans from whom only a few fossil fragments, dated to about 40,000 years ago, have been found.
Not only did this group exist at the same time as modern humans, remarkable genetic research has revealed that they interbred with some populations.
In 2010, scientists analysed limited DNA from a fossilised finger and a molar tooth unearthed in Denisova Cave in the Altai Mountains, Siberia. The initial research suggested they were from a genetically distinct group of ancient humans that shared a common ancestor with modern humans (Homo sapiens) and Neanderthals (Homo neanderthalensis) about 1 million years ago. However, the whole Denisovan genome has now been reconstructed and indicates a closer link to the Neanderthals.
Just as remarkable was the discovery that the Denisovans, as this ancient human group has become known, are related to a particular group of humans alive today – Australasians, who live on some of the islands north of Australia and in Australia itself.
The study showed that Australasians share around 5% of their genetic material with the Denisovans. The most plausible explanation is that Denisovans were present further south as well as in Siberia, and that they encountered and interbred with pre-Australasian populations of modern humans migrating from Africa through southeast Asia around 60,000 years ago.
If the populations were very small, it wouldn’t take much interbreeding to make a genetic mark. As few as 50 Denisovans interbreeding with 1,000 pre-Australasians could result in their present-day descendants sharing 5% of their genetic material with Denisovans.
Genetic information suggests that Denisovans may have been part of the Homo heidelbergensis lineage. In Europe, Homo heidelbergensis gave rise to Neanderthals; in Africa, to us (modern humans); and in Asia, perhaps, to the Denisovans.
Words express our feelings about our surroundings, but actions demonstrate them. Words are only a reflection of action. Sometimes words are lies, so they have no effect. In fact, we may easily make promises but later break those promises. Therefore, I agree with the statement "actions speak louder than words". I want to explain why.
Firstly, everyone can say whatever they want, but what matters is how hard they try to achieve it. Some goals demand real effort to attain. For example, my brother wants to take the TOEFL exam so that he can study abroad, but he does not study or practise enough to improve his ability.
In addition, unlike words, actions have real effects. Anyone may say that they like you without ever showing it in their actions. By contrast, our parents not only express their love in words but also show it through their actions.
Finally, people usually evaluate a person by actions, not by words. In fact, one's character is revealed only through behaviour and gestures. Many people try to hide their real emotions by saying things that do not correspond with their thoughts. For example, in an emergency such as a fire, a genuine hero is the person who saves lives, while others just talk but do nothing. Therefore, actions are the factor that defines a person's character.
To sum up, I think actions are what set a person apart from others. So, we should think carefully before saying anything about whether we can do it or not; otherwise, it is possible to lose many important things in life.
You should spend about 20 minutes on this task.
Write about the following topic:
“Actions speak louder than words.” Do you agree or disagree with the statement?
You should write at least 250 words.
***Note:This PTE Essay was recently asked in the PTE Academic Exam.
Actions Speak Louder Than Words Essay
Actions speak louder than words
Model Answer 1:
People either communicate with their words or through their actions. Words are merely sounds uttered by the tongue. The action is what puts the meaning into the words said. I completely agree with the statement “Actions speak louder than words.”
To say something and to do something are two different things. It is a well-known fact that people often do not do what they say. That is why people value hard work more than mere promises. For example, politicians make many heart-touching promises before elections. But after the election is over and they get elected, we all know how their actions go.
Secondly, actions depict a person’s character and personality. What someone does and how he behaves in that particular situation reveals his real side. For example, if a town next to you experienced a flood, a real hero would do what he can do within his means to help the people instead of only feeling bad for them.
In conclusion, actions carry more meaning and importance than what a person says. A person must hence, think twice before saying anything because words can be easily expressed but doing it, in reality, takes real effort and dedication.
Other Recommended PTE Essays:
Cultural Shock Essay
Extreme Sports Essay
Influence Of Media On Society
Model Answer 2:[Submitted by - Nirav]
Words convey an offer or invitation for some action whereas performing the action is what completes a task. The only reason behind the famous phrase that “action speaks louder than words” is the fact that taking action brings the destination closer. Hence, I also firmly believe that actions speak louder than words.
In any environment, be it work, household chores or national politics, there are always people who only talk about taking action and those who get the work done. Though it is essential to convey the message of the action to be performed to all the stakeholders, it is imperative that the task actually gets worked on, since the spoken word is a double-edged sword: its authenticity can later be questioned against the work performed.
If you have ever worked at a job with a lazy boss, then you can relate to this. Employees always look up to upper management, the boss, for motivation. When they see their boss doing nothing and unmotivated, employees replicate the same traits. But when the boss gets their hands dirty and shows the passion they have for making things better, employees are more motivated to work and feel more passionate about what they are doing.
Accordingly, even though the spoken word is a vital step towards proper communication of the actions that need to be performed, it is much more important that the required action is taken in a timely manner.
The latest news about space exploration and technologies,
astrophysics, cosmology, the universe...
Posted: Apr 11, 2016
'Virtual Habitat' software - to Mars and back
(Nanowerk News) Space is the most hostile environment that we know of. The lack of pressure would bring our bodily fluids to the boil. Oxygen, heat, food and water are not present either. Yet people live there - on the International Space Station (ISS), thanks to the life support systems that are installed there. For extended space missions, such as a trip to Mars, the functional capability of these technologies is also crucial. Researchers at the Technical University of Munich (TUM) have developed software that can be used to simulate the systems.
In the movie, "The Martian", the astronaut Mark Watney is left stranded alone on Mars. It quickly becomes apparent how dependent his survival is on the life support systems. He needs oxygen, drinking water, food, normal pressure and heat. None of which the Red Planet can provide him with. The conditions are even more extreme in space. Nevertheless, NASA has long-term plans to send astronauts on missions, lasting several weeks or months, to an asteroid or even to Mars, for example.
Claas Olthoff (left) and Daniel Pütz with a model of the International Space Station.
"A crucial question must be posed here: do the life support systems run stably over such a long period?" explains Claas Olthoff, Research Associate at the TUM Institute of Astronautics. Interactions with other systems or even unforeseen disruptions and failures must be taken into account.
Man takes center stage of the simulation
Since 2006, scientists at the institute have been working on the "Virtual Habitat" software, which can calculate these problems precisely. "V-HAB" allows researchers to simulate models, ranging from a spacesuit right up to a ten-man lunar base crew. Even missions lasting several years are calculated. The advantage of the tool: numerous functional life support technologies have already been programmed here and interactions between different systems can be calculated.
A core element of the software is a model of the human body because humans produce carbon dioxide and urine, among other things. These are the raw materials that the life support system can in turn process. Methane gas and water are produced from carbon dioxide after a chemical reaction with hydrogen. The life support system pumps the methane overboard, the water is then available to the astronauts again and can be used as drinking water or to produce oxygen by electrolysis. Urine can also be converted into drinking water. These interactions between man and machine are very complex and "V-HAB" tries to map as many of them as possible.
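The carbon-dioxide-to-water step described here is the Sabatier reaction (CO2 + 4 H2 -> CH4 + 2 H2O). The sketch below gives a rough stoichiometric mass balance; the per-astronaut CO2 figure is an illustrative assumption, not a number from the article:

```python
# Sabatier reaction mass balance: CO2 + 4 H2 -> CH4 + 2 H2O
M_CO2, M_H2O = 44.01, 18.015  # molar masses in g/mol

def water_from_co2(co2_grams):
    """Grams of water produced if all of the CO2 is reacted:
    each mole of CO2 yields two moles of water."""
    return (co2_grams / M_CO2) * 2 * M_H2O

co2_per_day = 1000.0  # g CO2 per astronaut per day (illustrative assumption)
print(round(water_from_co2(co2_per_day)))  # ~819 g of recoverable water per day
```

The methane by-product, as the article notes, is simply pumped overboard, while the recovered water re-enters the loop as drinking water or as feedstock for oxygen-producing electrolysis.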
European life support system is coming to the ISS
The software is constantly being developed and supplemented with models of diverse systems; from a radiator that cools the spacesuits to algae cultures required to produce food. During a study visit at NASA’s Johnson Space Center in Houston, master's student, Daniel Pütz, had the opportunity to simulate the installation of a new life support system on the International Space Station (ISS) using "V-HAB" and therefore program and test other functions along the way.
Both an American and a Russian life support system are currently installed on the ISS. Now a European version is about to be added. Airbus developed the Advanced Closed Loop System (ACLS) for the European Space Agency (ESA). Thanks to a close connection between the individual subsystems, it is more compact and therefore saves space. It will be brought to the ISS with a Japanese spacecraft in 2017 and installed in the American laboratory module, Destiny, for test purposes.
Too much humidity can cause mold
But there are always risks with a new system, too, explains Olthoff. Because it can affect or even disrupt the existing systems. As ACLS uses a different technology for CO2 filtering than the systems already installed, there is a particular risk here of more water vapor reaching the air. Humidity must be between 40 and 60 percent on the space station. A higher figure would be dangerous because mold could form in poorly ventilated areas.
As Pütz discovered from the simulations, the filtering systems on ISS can easily make adjustments for the higher levels of humidity that the system produces. Even the other values were deemed acceptable.
NASA already used "V-HAB" to make calculations for an asteroid mission. Maybe the software will be used in the future to simulate longer-term missions - and perhaps give the real "Martians" a better chance of survival in the process.
The characteristically narrow-headed ringed map turtle (Graptemys oculifera) is one of the most poorly known turtles in the United States (2). The upper shell, or carapace, has flattened, black, spine-like projections running along the centre, and a slightly serrated edge (2). The carapace is mainly dark olive-green, with yellow or orange eye-like spots on the bony plates, or scutes, running along its back. A wide yellow semicircle patterns the outermost scutes (2). The underside of the shell, or plastron, is yellow or orange (2).
The skin covering the head and body is black with vibrant yellow stripes. The adult male differs from the female in having a long, thick tail, elongated foreclaws and a narrower head (2).
- Also known as
- Ringed sawback.
- Male carapace length: up to 11 cm (2)
- Female carapace length: up to 22 cm (2)
Ringed map turtle biology
Like many species of reptile, the ringed map turtle is cold-blooded and spends much of its day basking in the sun (2). At night, it rests on branches and other partially submerged wood (2). Its diet consists mainly of a range of insects including caddisfly larvae, aquatic beetles and their larvae, mayflies and dragonfly nymphs. It is also known to feed on plant matter and the material scraped from submerged logs (1) (2).
The male ringed map turtle becomes sexually mature when the plastron reaches 6.5 centimetres in length. Mating has been observed to occur in April (2), and the female is gravid for approximately two and a half weeks before laying a clutch of around three or four eggs (4). The eggs are deposited in nests on sandbars, approximately 18 metres from the water’s edge. The female can lay two clutches per year (4). The eggs are laid from mid-May to mid-July, peaking in mid-June, and the hatchlings begin to emerge after sunset in late July and early August (2) (4).
Ringed map turtle range
The ringed map turtle is endemic to the United States, where it is restricted to the Pearl River and its major tributaries in the states of Mississippi and Louisiana. It is not found in the lower-most section of the West Pearl River, which is tidally influenced (1) (2).
The total length of river occupied by the ringed map turtle is just 875 kilometres (1).
Ringed map turtle habitat
The ringed map turtle prefers wide, sand- or clay-bottomed rivers with strong currents and adjacent white sand beaches (2). An abundance of basking sites in the form of brush, logs and debris is also an important part of its habitat (2).
Ringed map turtle status
The ringed map turtle is classified as Vulnerable (VU) on the IUCN Red List (1) and listed on Appendix II of CITES (3).
Ringed map turtle threats
The eggs and juveniles of the ringed map turtle are preyed upon by numerous species, including ants, snakes, crows, racoons, canids and also humans (2). Boat traffic in some areas can cause mutilation and even death of adults, and they also often become hooked on fishing lines and are subsequently killed by fishermen (2).
Habitat modification and water-quality degradation have also had a detrimental impact on this species (1). In addition, it is suspected that illegal collecting, assumed to be for the pet trade, may have added to the decline of this species (3).
Ringed map turtle conservation
In Louisiana, the ringed map turtle is listed as a ‘Species of Special Concern’, meaning that any person involved in acquiring, handling, buying or selling this reptile must hold either a collector’s license or a reptile wholesale or retail dealer’s license (3). It is also protected by law in Mississippi (1).
A 19 kilometre stretch of the Pearl River has been designated as a turtle sanctuary, with a further 240 kilometres of river habitat suggested for future protection. Further studies on aspects of this species’ ecology would also be beneficial in understanding how best to conserve it (1).
Find out more
Find out more about the ringed map turtle and its conservation:
This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact:
- The top shell of a turtle or tortoise. In arthropods (insects, crabs etc), the fused head and thorax (the part of the body located near the head), also known as the ‘cephalothorax’.
- A species or taxonomic group that is only found in one particular country or geographic area.
- Carrying developing young or eggs.
- Stage in an animal’s lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- The lower shell of a turtle or tortoise.
- A large scale on the shell of a turtle or tortoise.
IUCN Red List (September, 2011)
Ernst, C.H. and Lovich, J.E. (2009) Turtles of the United States and Canada: Second Edition. The Johns Hopkins University Press, Baltimore.
CITES (September, 2011)
Jones, R.L. (2006) Reproduction and nesting of the endangered ringed map turtle, Graptemys oculifera, in Mississippi. Chelonian Conservation and Biology, 5(2): 195-209.
The star system Eta Carinae has been a mystery to astronomers for centuries. Scientists could not explain the changes in the brightness of the double star – for example, a striking flare-up in the nineteenth century that closely resembled a supernova explosion.
Astronomers from the University of Arizona (USA) have found that Eta Carinae "flashed" at least two more times before the nineteenth century, yet remained alive.
A star can become a supernova in two ways. If the star is large enough, it collapses on itself when it runs out of fuel; when the outer layers hit the core, the star explodes, ejecting its remains into the surrounding space. In the second scenario, matter accumulates on a white dwarf until the layer becomes so dense that it triggers an explosion.
Neither scenario fits Eta Carinae. In the mid-1800s, the star system became so bright that it outshone most other stars in the night sky, and then faded. Everyone spoke of a supernova explosion, and of the gas and "debris" that created the Homunculus nebula. But Eta Carinae is not dead. It brightened again in 1890, then in 1953, and dramatically doubled its brightness in 1998-1999.
“Eta Carinae is a supernova impostor,” says Megan Kiiminki from the University of Arizona. “The star was very bright when it threw off a lot of material in the 1800s, but it’s still here.”
By measuring the motion of 800 gas bubbles ejected by the star system, the scientists calculated the probable dates of the explosions. The team from Arizona found evidence of two earlier supernova-impostor eruptions – in the thirteenth and sixteenth centuries.
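The dating method used here is essentially back-extrapolation: if a gas bubble has been drifting at a roughly constant speed since it was ejected, its current distance from the star divided by that speed gives its time in flight. A minimal sketch of the arithmetic, with all numbers hypothetical rather than taken from the study:

```python
def ejection_year(observation_year, distance_from_star, speed_per_year):
    """Back-extrapolate when a bubble left the star, assuming it has
    moved at constant speed: time in flight = distance / speed."""
    return observation_year - distance_from_star / speed_per_year

# Hypothetical bubble observed in 2016, now 7.5 distance units out,
# moving 0.01 units per year -> ejected around the 13th century.
print(round(ejection_year(2016, 7.5, 0.01)))  # 1266
```

Averaging such estimates over hundreds of bubbles lets the scatter in individual measurements cancel out.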
Researchers have determined that Eta Carinae is a binary system, with two huge stars orbiting one another. The larger star of the system is in the final stages of its life.
At the end of the Mexican War, many new lands west of Texas were yielded to the United States, and the debate over the westward expansion of slavery was rekindled. Southern politicians and slave owners demanded that slavery be allowed in the West because they feared that a closed door would spell doom for their economy and way of life. Whig Northerners, however, believed that slavery should be banned from the new territories. Pennsylvanian congressman David Wilmot proposed such a ban in 1846, even before the conclusion of the war. Southerners were outraged over this Wilmot Proviso and blocked it before it could reach the Senate.
The Wilmot Proviso justified Southerners’ fears that the North had designs against slavery. They worried that if politicians in the North prevented slavery from expanding westward, then it was only a matter of time before they began attacking it in the South as well. As a result, Southerners in both parties flatly rejected the proviso. Such bipartisan support was unprecedented and demonstrated just how strongly the South felt about the issue.
The large land concessions made to the U.S. in the 1848 Treaty of Guadalupe Hidalgo only exacerbated tensions. Debates in Congress grew so heated that fistfights even broke out between Northerners and Southerners on the floor of the House of Representatives. In fact, sectional division became so pronounced that many historians label the Mexican War and the Wilmot Proviso the first battles of the Civil War.
Even though the Wilmot Proviso failed, the expansion of slavery remained the most pressing issue in the election of 1848. The Whigs nominated Mexican War hero General Zachary Taylor, a popular but politically inexperienced candidate who said nothing about the issue in hopes of avoiding controversy.
The Democrats, meanwhile, nominated Lewis Cass. Also hoping to sidestep the issue of slavery, Cass proposed allowing the citizens of each western territory to decide for themselves whether or not to be free or slave. Cass hoped that a platform based on such popular sovereignty would win him votes in both the North and South.
The election of 1848 also marked the birth of the Free-Soil Party, a hodgepodge collection of Northern abolitionists, former Liberty Party voters, and disgruntled Democrats and Whigs. The Free-Soilers nominated former president Martin Van Buren, who hoped to split the Democrats. He succeeded and diverted enough votes from Cass to throw the election in Taylor’s favor. (Taylor, however, died after only sixteen months in office and was replaced by Millard Fillmore.)
Although Taylor’s silence on the issue quieted the debate for about a year, the issue was revived when California and Utah applied for statehood. California’s population had boomed after the 1849 gold rush had attracted thousands of prospectors, while barren Utah had blossomed due to the ingenuity of several thousand Mormons. The question arose whether these states should be admitted as free states or slave states. The future of slavery in Washington, D.C., was likewise in question.
A great debate ensued in Congress over the future of these three regions as Southerners attempted to defend their economic system while Northerners decried the evils of slavery. In Congress, the dying John C. Calhoun argued that the South still had every right to nullify unconstitutional laws and, if necessary, to secede from the Union it created. Daniel Webster and Henry Clay, on the other hand, championed the Union and compromise. Webster in particular pointed out that discussion over the expansion of slavery in the West was moot because western lands were unsuitable for growing cotton.
In the end, the North and South agreed to compromise. Although Clay was instrumental in getting both sides to agree, he and Calhoun were too elderly and infirm to negotiate concessions and draft the necessary legislation. This task fell to a younger generation of politicians, especially the “Little Giant” Stephen Douglas, so named for his short stature and big mouth. A Democratic senator from Illinois, Douglas was responsible for pushing the finished piece of legislation through Congress.
The Compromise of 1850, as it was called, was a bundle of legislation that everyone could agree on. First, congressmen agreed that California would be admitted to the Union as a free state (Utah was not admitted because the Mormons refused to give up the practice of polygamy). The fate of slavery in the other territories, though, would be determined by popular sovereignty. Next, the slave trade (though not slavery itself) was banned in Washington, D.C. Additionally, Texas had to give up some of its land to form the New Mexican territory in exchange for a cancellation of debts owed to the federal government. Finally, Congress agreed to pass a newer and tougher Fugitive Slave Act to enforce the return of escaped slaves to the South.
Though both sides agreed to it, the Compromise of 1850 clearly favored the North over the South. California’s admission as a free state not only set a precedent in the West against the expansion of slavery, but also ended the sectional balance in the Senate, with sixteen free states to fifteen slave states. Ever since the Missouri Compromise, this balance had always been considered essential to prevent the North from banning slavery. The South also conceded to end the slave trade in Washington, D.C., in exchange for debt relief for Texans and a tougher Fugitive Slave Law. Southerners were willing to make so many concessions because, like Northerners, they truly believed the Compromise of 1850 would end the debate over slavery. As it turned out, of course, they were wrong.
Ironically, the 1850 Fugitive Slave Act only fanned the abolitionist flame rather than put it out. Even though many white Americans in the North felt little love for blacks, they detested the idea of sending escaped slaves back to the South. In fact, armed mobs in the North freed captured slaves on several occasions, especially in New England, and violence against slave catchers increased despite the federal government’s protests. On one occasion, it took several hundred troops and a naval ship to escort a single captured slave through the streets of Boston and back to the South. The Fugitive Slave Act thus allowed the abolitionists to transform their movement from a radical one to one that most Americans supported.
Even though few slaves actually managed to escape to the North, the fact that Northern abolitionists encouraged slaves to run away infuriated Southern plantation owners. One network, the Underground Railroad, did successfully ferry as many as several thousand fugitive slaves into the North and Canada between 1840 and 1860. “Conductor” Harriet Tubman, an escaped slave from Maryland, personally delivered several hundred slaves to freedom.
Another major boost for the abolitionist cause came via Harriet Beecher Stowe’s 1852 novel Uncle Tom’s Cabin, a story about slavery in the South. Hundreds of thousands of copies were sold, awakening Northerners to the plight of enslaved blacks. The book affected the North so much that when Abraham Lincoln met Stowe in 1863, he commented, “So you’re the little woman who wrote the book that made this great war!”
Despite the concessions of the Compromise of 1850 and the growing abolitionist movement, Southerners believed the future of slavery to be secure, so they looked for new territories to expand the cotton kingdom. The election of Franklin Pierce in 1852 helped the Southern cause. A pro-South Democrat from New England, Pierce hoped to add more territory to the United States, in true Jacksonian fashion.
Pierce was particularly interested in acquiring new territories in Latin America and went as far as to quietly support William Walker’s takeover of Nicaragua. A proslavery Southerner, Walker hoped that Pierce would annex Nicaragua as Polk had annexed Texas in 1844. The plan failed, however, when several other Latin American countries sent troops to depose the adventurer. Pierce’s reputation was also muddied over his threat to steal Cuba from Spain, which was revealed in a secret document called the Ostend Manifesto, which was leaked to the public in 1854.
Despite his failures in Nicaragua and Cuba, Pierce did have several major successes during his term. In 1853, he completed negotiations to make the Gadsden Purchase from Mexico—30,000 square miles of territory in the southern portions of present-day Arizona and New Mexico. In addition, Pierce successfully opened Japan to American trade that same year.
Presentation transcript: "How 3D Glasses Work" by Jacqueline DePue
In 1893, William Friese-Green created the first anaglyphic 3D motion picture by using a camera with two lenses. During this time, the first lenses on 3D glasses were typically red and green. The popularity of 3D motion pictures was reborn in 1950 and grew over the next couple of decades. By the 1970s, the lenses had improved to red and cyan for slightly better quality. Today, 3D pictures are used in movie theaters and on television.
To understand how 3D glasses work, we first must understand binocular vision— being able to see in three dimensions.
Having two eyes allows humans to tell how far away an object is. This is known as depth perception. Our eyes are about two inches apart, therefore each eye sees the world from a slightly different perspective. When the two different images come into the brain, the brain calculates the distance between each image. The brain combines the images and distances and allows you to see one 3D image.
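The geometry the brain is exploiting can be sketched with the standard stereo-triangulation formula from machine vision: distance is proportional to the separation of the two viewpoints divided by the disparity, i.e. how far the object appears to shift between the two images. The numbers below are purely illustrative, not physiological measurements:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Stereo triangulation: depth = baseline * focal length / disparity.
    Nearby objects shift a lot between the two views (large disparity);
    distant objects shift very little (small disparity)."""
    return baseline_m * focal_px / disparity_px

# Viewpoints ~0.05 m apart, with an assumed "focal length" of 1000 px:
print(round(depth_from_disparity(0.05, 1000, 100), 2))  # 0.5 m (near)
print(round(depth_from_disparity(0.05, 1000, 10), 2))   # 5.0 m (far)
```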
The View-Master is a great example of how binocular vision works. Each image is photographed from a slightly different position, so each eye is given a slightly different image. Each eye sees only one image, but the brain combines the pictures to form one 3D image.
3D glasses are actually simple. They work in a similar way to View-Masters. Each lens, no matter which type of 3D glasses it is in, feeds a different angle of the image into each eye. The screen or projector displays two images, and each image is fed through the corresponding lens into the eye. There are three common types of 3D glasses:
1. Anaglyph 3D glasses
2. Active Shutter 3D glasses
3. Polarized 3D glasses
Anaglyph glasses were the first type of 3D glasses created. These glasses have either one red and one green lens OR one red and one cyan lens. Two images are displayed on the screen, one in red and one in green/cyan. Each lens filters only one image through to each eye. Your brain does the rest: the images are combined and appear three-dimensional. These glasses are cheap; however, the quality of the 3D image is low. Today, they are typically used for movies watched at home.
Active Shutter glasses are much more complex than Anaglyph glasses and have great 3D quality. There is an on/off button and they must be charged. They are typically used in home theaters. Each lens has liquid crystal that darkens when signaled by the screen. For example, the left lens may be signaled to turn black or darken while the right lens allows the image intended for the right eye to enter. The screen typically communicates with the glasses through an infrared emitter or a radio frequency emitter.
Polarized glasses are used in many theaters today, including Universal Studios. Each lens has a different polarized filter that corresponds with one of the images on the screen. The images are projected by two synchronized projectors. The angles at which each lens is polarized restrict the amount of light passing through, allowing only one image to enter each eye. The brain combines both images to create the 3D effect. If you look at Woody’s legs, you can see two different images projected onto the screen. While wearing the glasses, Woody will appear 3-dimensional.
You can see that the left lens will only allow the vertical component of the light to pass through.
There are two types of polarized glasses: linear and circular.
1. Linear glasses do not allow the viewers to tilt their heads; if they do, they will lose some of the 3D effect. Tilting the head tilts the filters, causing parts of each image to bleed over into the other filter.
2. Circular glasses allow the viewers to tilt their heads without losing any 3D effect. The filters are circularly polarized in opposite directions. These are used more commonly today.
Studying 3D glasses relates directly to the Polarization and Interference Lecture (Monday 4/14). We learned that non-polarized light vibrates in all directions. Certain materials can polarize a ray of light, only allowing compatible components of the light to pass through. For example, when light enters a vertical medium, only the vertical component passes through, minimizing the amount of light seen. This is how polarized 3D glasses work: the lenses of the glasses are angled differently from each other as well as one of the images presented on the screen, only allowing compatible rays of light to pass through each lens. This allows different images to enter each lens to produce the 3D effect.
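The filtering described in this section follows Malus's law, which says a polarizing filter passes the fraction cos²θ of already-polarized light, where θ is the angle between the light's polarization axis and the filter's axis. A short sketch:

```python
import math

def transmitted_fraction(angle_deg):
    """Malus's law: fraction of polarized light passing a filter
    rotated angle_deg away from the light's polarization axis."""
    return math.cos(math.radians(angle_deg)) ** 2

print(transmitted_fraction(0))              # 1.0  (aligned: all light passes)
print(round(transmitted_fraction(90), 10))  # 0.0  (crossed: fully blocked)
# Tilting your head ~10 degrees in linearly polarized glasses lets about
# 3% of the wrong eye's image bleed through:
print(round(transmitted_fraction(80), 3))   # 0.03
```

This is also why linear polarized glasses lose the 3D effect when you tilt your head, while circularly polarized ones do not.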
While creating this project, I have learned that there is more than one kind of 3D glasses. I learned that Anaglyph glasses work differently from Polarized glasses. Anaglyph glasses have different colors entering each lens, while Polarized glasses have different angles of polarization that only allow matching light rays to enter each lens. The real-life application of polarized light to 3D glasses bettered my understanding of how light polarization works. Most importantly, I now understand why there are two images on the screen, and why the images on the movie screen look fuzzy when I take off my glasses at the movie theater.
Common replacement parts: magnetron, capacitor, fuse, or diode.
1 Archived Guide
These are some common tools used to work on this device. You might not need every tool for every procedure.
Background and Identification
A microwave oven, often colloquially shortened to microwave, is a kitchen appliance that heats food by bombarding it with electromagnetic radiation in the microwave spectrum, causing polarized molecules in the food to rotate and build up thermal energy in a process known as dielectric heating.
Microwave ovens heat foods quickly and efficiently because excitation is fairly uniform in the outer 25–38 mm of a dense (high-water-content) food item; food is heated more evenly throughout (except in thick, dense objects) than in most other cooking techniques. A microwave oven typically consists of:
- a high voltage power source
- a high voltage capacitor connected to the magnetron, transformer and via a diode to the case
- a cavity magnetron
- a magnetron control circuit
- a waveguide
- a cooking chamber
Body mass index, a measurement calculated using height and weight, is often used to help determine if someone is overweight, underweight or at a healthy weight. Healthy BMI ranges are determined using data on the BMIs of children and adolescents at varying ages to come up with a healthy range for each given age and gender. Since the development of the BMI-for-age charts, average BMIs have gone up significantly.
Healthy BMI Ranges for Adolescents
While there are set cutoff points for adults, adolescents' bodies are changing so much and so often that the cutoff points vary based on age and gender. Using a range is better than comparing an adolescent to the average; puberty can start anywhere from the age of 8 to the age of 14, and some children experience more rapid changes to their bodies during puberty than others.
For any given age, a measurement between the 5th and 85th percentile on the BMI-for-age charts is considered healthy. For a boy who's just turned 13, a healthy BMI is anything between 15.5 and 21.9. As adolescents get older, their BMI increases, so the healthy range for a boy who's just turned 18 is between 18.2 and 25.7.
Girls tend to be smaller than boys, but they can also sometimes develop sooner. The healthy range for a girl who just turned 13 is between 15.3 and 22.6, and for a girl who just turned 18, it's between 17.6 and 25.7.
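BMI itself is just weight in kilograms divided by the square of height in metres. The sketch below computes it and checks it against the cutoffs quoted above; note the table holds only the four example ranges given in the text, not a full BMI-for-age chart:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

# The four healthy 5th-85th percentile ranges quoted in the text,
# keyed by (sex, age just turned).
HEALTHY_RANGE = {
    ("boy", 13): (15.5, 21.9),
    ("boy", 18): (18.2, 25.7),
    ("girl", 13): (15.3, 22.6),
    ("girl", 18): (17.6, 25.7),
}

def in_healthy_range(sex, age, weight_kg, height_m):
    low, high = HEALTHY_RANGE[(sex, age)]
    return low <= bmi(weight_kg, height_m) <= high

print(round(bmi(50, 1.60), 1))                # 19.5
print(in_healthy_range("boy", 13, 50, 1.60))  # True
```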
Average BMIs for Adolescents
According to the BMI-for-age charts, the 50th percentile, the midrange for BMIs for a given age, for 13- to 18-year-old boys ranges from about 18.5 to about 22.4. For girls, the range is between 18.7 and 21.3. This isn't exactly the same as the current average BMI for adolescents, however, as the data used for these charts dates from 1963 to 1994. For example, the average BMI for an 18-year-old was 22 in the 1980s, but increased to 24.5 by the year 2000, according to a study published in the Journal of Adolescent Health in 2012.
Average BMI Changes Over Time
The number of children who are obese has been increasing over the past 40 or so years, according to the Centers for Disease Control and Prevention, but it has recently started to level off. This means that the average BMI for adolescents has been increasing as well. According to an article published in the CDC's publication Advance Data From Vital and Health Statistics in 2004, the average BMI for adolescents between the ages of 12 and 17 increased by more than 4 units between 1963 and 2002. This is because the average height only increased by 0.3 inches for girls and 0.7 inches for boys, while the average weight increased by over 12 pounds for girls and over 15 pounds for boys.
Much of this change appears to have occurred starting in the 1990s, according to the Journal of Adolescent Health study. However, beginning in 2003, the prevalence of obesity, and thus the average BMI, started to decrease slightly, according to the Centers for Disease Control and Prevention.
Differences Between Groups
Some groups of adolescents may be more likely to have a higher or lower average BMI than others. According to the CDC, as of 2012, about 20.5 percent of 12- to 19-year-olds were obese. Asian adolescents were less likely to be obese than non-Hispanic whites, who were less likely to be obese than either non-Hispanic blacks or Hispanic youth. Adolescents with a parent who completed college were less likely to be obese than those whose parents hadn't completed high school, and adolescents who come from higher income families are less likely to be obese than those whose families have lower incomes.
World literature is more than just interesting stories of characters in action. It is the selective recreation of reality which, when presented from a historical perspective, can unlock the universe by providing an understanding of important abstractions, concretized by the storyline and characters. Through the dramatization of a historical moment, world literature thrusts students into a new or familiar world, where abstracts, represented by the action and the characters, are given full life. Webster defines literature as "Writings having excellence of form or expression and expressing ideas of permanent or universal interest." I would like to add Aristotle's view. Unlike history, which represents life as it is, literature represents life as it might be or ought to be, and, for that reason, is of special importance to learning. What literature has that so few other tools for learning have is immediacy. It creates a moment that the reader can live vicariously. A slice of life is turned into a fully realized experience to support some abstract idea, which the entire story examines from different perspectives. Unlike real life with its share of irrelevant details, literature is more real than life itself. This is because good literature distills from life its essence by concentrating only on the relevant. Consequently, it can be a dynamic learning tool for students and, when properly presented, a profound influence on their intellectual development. To qualify as great literature, a story must have four indivisible parts -- theme, plot, characterization, and style. Selectively chosen and carefully integrated, these four parts make it possible for a story to unfold. Theme is the basic idea moving a story. It may be philosophical, a generalization, or a historical view. No restrictions are made as to what the theme must be. But no good novel can be without it, for it determines the material selected to dramatize a point.
Since all good stories are dramatizations of events, literature must have purposeful action, structured logically, connecting events progressively to a resolution or climax. What characters are introduced, what moments are developed, what views are presented, what dialogue is featured are all carefully determined by the theme. In good writing, nothing is left dangling, and all parts fit together like the pieces of a puzzle to create a clear picture. William Shakespeare, for example, conveys through his plays that man doesn't have volition and is driven by some tragic flaw. Leo Tolstoy in Anna Karenina believes social conformity is more important than man's happiness. Fyodor Dostoyevsky examines the psychological depths of human evil and, especially in The Possessed, dissects it exactly through his main characters.

To hold interest and to concretize the theme, the author must devise a plot made up of a series of events. These events must be carefully selected and sharply focused. But most importantly, they must be linked to the theme. By bringing these events together and linking them to a theme, the writer provides the reader with a glimpse of reality, stripped of irrelevance. But for these events to come alive meaningfully, they must be inter-connected and dramatized through conflicts and clashes, while progressively advancing toward an ultimate solution of a basic problem. For this, characters are needed. Ideally these characters should be memorable and interesting (like Quasimodo in Victor Hugo's The Hunchback of Notre Dame) with key personality traits that give them uniqueness and individuality. A good writer in selecting these traits is attentive to detail and only uses traits that will conform to the theme. Although there are many subordinate ways of characterizing a person (i.e., thoughts, feelings, descriptions, etc.), action and dialogue are the best way. Character traits (like kindness, etc.) to be believable are not simply stated as loose abstractions, but are demonstrated by the character's actions, thoughts, and dialogue, given purpose by his motivation. The key to distinguishing popular fiction from great literature is found in the depth of the character's motivation (i.e., his philosophical premise). Henryk Sienkiewicz's Quo Vadis, for example, would have been just another book without the brilliant portrayal of Petronius. The same applies to the main character in Sinclair Lewis' Dodsworth, or the colorful characters of the vicomte and marquise in Choderlos de Laclos' Les Liaisons Dangereuses. Without connecting the characters' motivation to its philosophical origin, these books would completely fall apart or at best be just readable.

Finally, there is the fourth element of good literature, style. This is the careful selection of words and material to describe an event or a character. (For example, does the author focus on important objective details that reflect reality, or does he use subjective abstractions to create a moment removed from reality?) Style by itself may have a certain appeal and charm like beautiful poetry, but without the other three qualities of literature, it can quickly lead to boredom. As these four parts of literature are interlaced, so is literature to its historical moment. A teacher who discusses literature removed from the times in which it lived is like an author trying to tell a story without a theme. Each period of history made its unique contribution to literature, which connected logically to the next. Understanding the economic and social conditions that link history is necessary to give depth to a literary moment. Good teachers have always known this and have tried to provide dimension to the teaching of literature by introducing as much historical information as necessary to give a literary moment life. The England that influenced Shakespeare, for example, had seen the end of the War of the Roses.
The peace following 200 years of strife was a period in which ancient Greek manuscripts were discovered and introduced into England. Minds, closed for centuries, were suddenly flung open to the beautiful universe which the Greeks had long ago discovered. Under the reign of King Henry VII and later Queen Elizabeth, England emerged as a world power and began to look outward toward the new world. This was the England which influenced Shakespeare. This was the world of Marlowe, Dekker, Beaumont and Fletcher, their new world! How much more alive literature becomes when taken in a context like this, how much more significant it is when linked with other countries, and outside influences. We are not an island, and neither is good literature. Instead, those great men and women of our past, those literary writers with a new and clear vision, are indivisibly linked to history. Teaching literature from a historical perspective provides students with the subject relevance they crave. New vistas open up the moment literary ideas are attached to their historical antecedent. These ideas no longer remain loose abstractions of limited importance, but instead become important links to the powerful ideas shaping the world! Opening the mind like this to order exposes the student to logic. Great literature by its selective and orderly arrangement of elements (and its strong relationship to history) does this. A wise teacher who highlights this order by integrating his lessons dramatically and methodically is succeeding at fulfilling the primary reason for teaching literature -- to develop the form and substance of thinking!
"Teaching World Literature" was originally published by Basic Education in 1994 and reprinted in Education in Focus in 1995.
Sunlight, Skyscrapers, and Soda Pop: The Wherever-You-Look Science Book
Where can you find science? All around you! Children can learn the science behind everyday activities with Sunlight, Skyscrapers, and Soda Pop. Follow Sally and Sammy, the cartoon siblings in this story, as they discover science in the kitchen, at the park, in the bathroom, at a friend’s house … everywhere! Every time they make a discovery, they stop to do a science activity that you and your children can do together. Sally and Sammy model how to do each activity with illustrations and clearly written instructions. After reading the story and doing the activities, you can take part in a science search challenge! Challenges and challenge answers are listed at the back of the book, where you’ll also find explanations for each science activity.
More Activities for Kids
The Best of WonderScience
Grades 3 – 6
This compilation teaches fundamental science concepts through 600+ hands-on activities. Includes student activity sheets, science background information for teachers, and explanations of how activities support the National Science Education Standards.
Apples, Bubbles, and Crystals
Preschool – 2
Science meets the ABCs in this activity book for beginner readers. Kids learn the alphabet while having fun discovering science through easy-to-follow hands-on activities.
Crustaceans are a diverse group, so their life history and ecology offer great variety. Crustaceans are the arthropods that dominate marine habitats, but they are also found in large numbers in freshwater, and a few groups have made their way successfully onto land. On land, crustaceans either live in moist, protected habitats such as under logs or in leaf litter in cool forests, or they are encysted (enclosed in a tough protective capsule, nearly dried out, and dormant).
It is possible to find a group of crustaceans that feeds in just about every way imaginable. There is even a great diversity of parasitic life history strategies used by various crustacean groups. And there's a lot to learn about non-parasitic crustacean feeding, mouthparts, and digestion. As crustaceans feed, they grow, and as they grow they must shed their exoskeleton and produce a larger one; this is called molting. Crustaceans molt as they grow throughout their lives but they molt most frequently during metamorphosis when they are changing from larvae to adults.
(At left, image of Odontadactylus scyllarus with eggs. Image used with permission from Roy Caldwell, U.C. Berkeley.)
The Mayflower Voyage
The group that set out from Plymouth, in southwestern England, in September 1620 included 35 members of a radical Puritan faction known as the English Separatist Church. In 1607, after illegally breaking from the Church of England, the Separatists settled in the Netherlands, first in Amsterdam and later in the town of Leiden, where they remained for the next decade under the relatively lenient Dutch laws. Due to economic difficulties, as well as fears that they would lose their English language and heritage, they began to make plans to settle in the New World. Their intended destination was a region near the Hudson River, which at the time was thought to be part of the already established colony of Virginia. In 1620, the would-be settlers joined a London stock company that would finance their trip aboard the Mayflower, a three-masted merchant ship. A smaller vessel, the Speedwell, had initially accompanied the Mayflower and carried some of the travelers, but it proved unseaworthy and was forced to return to port by September.
Some of the most notable passengers on the Mayflower included Myles Standish, a professional soldier who would become the military leader of the new colony; and William Bradford, a leader of the Separatist congregation who wrote the still-classic account of the Mayflower voyage and the founding of Plymouth Colony. While still on board the ship, a group of 41 men signed the so-called Mayflower Compact, in which they agreed to join together in a “civil body politic.” This document would become the foundation of the new colony’s government.
Settling at Plymouth
Rough seas and storms prevented the Mayflower from reaching its initial destination, and after a voyage of 65 days the ship reached the shores of Cape Cod, anchoring at the site of Provincetown Harbor in mid-November. After sending an exploring party ashore, the Mayflower landed at what the settlers would call Plymouth Harbor, on the western side of Cape Cod Bay, in mid-December. During the next several months, the settlers lived mostly on the Mayflower and ferried back and forth from shore to build their new storage and living quarters. The settlement’s first fort and watchtower was built on what is now known as Burial Hill (the area contains the graves of Bradford and other original settlers).
More than half of the English settlers died during that first winter, as a result of poor nutrition and housing that proved inadequate in the harsh weather. Leaders such as Bradford, Standish, John Carver, William Brewster and Edward Winslow played important roles in keeping the remaining settlers together. In April 1621, after the death of the settlement’s first governor, John Carver, Bradford was unanimously chosen to hold that position; he would be reelected 30 times and served as governor of Plymouth for all but five years until 1656.
Relations with Native Americans
The native inhabitants of the region around Plymouth Colony were the various tribes of the Wampanoag people, who had lived there for some 10,000 years before the Europeans arrived. Soon after the Pilgrims built their settlement, they came into contact with Tisquantum, or Squanto, an English-speaking Native American. Squanto was a member of the Pawtuxet tribe (from present-day Massachusetts and Rhode Island) who had been seized by the explorer John Smith’s men in 1614-15. Destined for slavery, he somehow managed to escape to England, and returned to his native land to find most of his tribe had died of plague. In addition to interpreting and mediating between the colonial leaders and Native American chiefs (including Massasoit, chief of the Pokanoket), Squanto taught the Pilgrims how to plant corn, which became an important crop, as well as where to fish and hunt beaver. In the fall of 1621, the Pilgrims famously shared a harvest feast with the Pokanokets; the meal is now considered the basis for the Thanksgiving holiday. After attempts to increase his own power by turning the Pilgrims against Massasoit, Squanto died in 1622, while serving as Bradford’s guide on an expedition around Cape Cod.
Other tribes, such as the Massachusetts and Narragansetts, were not so well disposed towards European settlers, and Massasoit’s alliance with the Pilgrims disrupted relations among Native American peoples in the region. Over the next decades, relations between settlers and Native Americans deteriorated as the former group occupied more and more land. By the time William Bradford died in 1657, he had already expressed anxiety that New England would soon be torn apart by violence. In 1675, Bradford’s predictions came true, in the form of King Philip’s War. (Philip was the English name of Metacomet, the son of Massasoit and leader of the Pokanokets since the early 1660s.) That conflict left some 5,000 inhabitants of New England dead, three quarters of those Native Americans. In terms of percentage of population killed, King Philip’s War was more than twice as costly as the American Civil War and seven times more so than the American Revolution.
The Pilgrim Legacy in New England
Repressive policies toward religious nonconformists in England under King James I and his successor, Charles I, had driven many men and women to follow the Pilgrims’ path to the New World. Three more ships traveled to Plymouth after the Mayflower, including the Fortune (1621), the Anne and the Little James (both 1623). In 1630, a group of some 1,000 Puritan refugees under Governor John Winthrop settled in Massachusetts according to a charter obtained from King Charles I by the Massachusetts Bay Company. Winthrop soon established Boston as the capital of Massachusetts Bay Colony, which would become the most populous and prosperous colony in the region.
Compared with later groups who founded colonies in New England, such as the Puritans, the Pilgrims of Plymouth failed to achieve lasting economic success. After the early 1630s, some prominent members of the original group, including Brewster, Winslow and Standish, left the colony to found their own communities. The cost of fighting King Philip’s War further damaged the colony’s struggling economy. Less than a decade after the war King James II appointed a colonial governor to rule over New England, and in 1692, Plymouth was absorbed into the larger entity of Massachusetts.
Bradford and the other Plymouth settlers were not originally known as Pilgrims, but as “Old Comers.” This changed after the discovery of a manuscript by Bradford in which he called the settlers who left Holland “saints” and “pilgrimes.” In 1820, at a bicentennial celebration of the colony’s founding, the orator Daniel Webster referred to “Pilgrim Fathers,” and the term stuck.
Most of us are familiar with the idea of image compression in computers. File extensions like ".jpg" or ".png" signify that millions of pixel values have been compressed into a more efficient format, reducing file size by a factor of 10 or more with little or no apparent change in image quality. The full set of original pixel values would occupy too much space in computer memory and take too long to transmit across networks.
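The compression idea is easy to demonstrate with a toy scheme. The sketch below uses run-length encoding, which is far cruder than JPEG and is shown purely to illustrate how redundancy in pixel data allows large size reductions; the roughly tenfold figure above refers to real codecs, not to this toy:

```python
def rle_encode(pixels):
    """Run-length encode a flat list of pixel values as [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Expand [value, count] pairs back into the original pixel list."""
    return [value for value, count in runs for _ in range(count)]

# A highly redundant "image": long runs of identical values compress well.
image = [0] * 500 + [255] * 500
encoded = rle_encode(image)
assert rle_decode(encoded) == image       # lossless round trip
ratio = len(image) / (2 * len(encoded))   # each run stores two numbers
print(ratio)                              # 250.0 for this extreme toy image
```

On realistic photographs, runs of identical pixels are short and run-length encoding does poorly; JPEG instead discards perceptually unimportant frequency components, which is how it achieves its ratios on natural images.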
The brain is faced with a similar problem. The images captured by light-sensitive cells in the retina are on the order of a megapixel. The brain does not have the transmission or memory capacity to deal with a lifetime of megapixel images. Instead, the brain must select out only the most vital information for understanding the visual world.
In today's online issue of Current Biology, a Johns Hopkins team led by neuroscientists Ed Connor and Kechen Zhang describes what appears to be the next step in understanding how the brain compresses visual information down to the essentials.
They found that cells in area "V4," a midlevel stage in the primate brain's object vision pathway, are highly selective for image regions containing acute curvature. Experiments by doctoral student Eric Carlson showed that V4 cells are very responsive to sharply curved or angled edges, and much less responsive to flat edges or shallow curves.
To understand how selectivity for acute curvature might help with compression of visual information, co-author Russell Rasquinha (now at University of Toronto) created a computer model of hundreds of V4-like cells, training them on thousands of natural object images. After training, each image evoked responses from a large proportion of the virtual V4 cells -- the opposite of a compressed format. And, somewhat surprisingly, these virtual V4 cells responded mostly to flat edges and shallow curvatures, just the opposite of what was observed for real V4 cells.
The results were quite different when the model was trained to limit the number of virtual V4 cells responding to each image. As this limit on responsive cells was tightened, the selectivity of the cells shifted from shallow to acute curvature. The tightest limit produced an eight-fold decrease in the number of cells responding to each image, comparable to the file size reduction achieved by compressing photographs into the .jpeg format. At this level, the computer model produced the same strong bias toward high curvature observed in the real V4 cells.
Why would focusing on acute curvature regions produce such savings? Because, as the group's analyses showed, high-curvature regions are relatively rare in natural objects, compared to flat and shallow curvature. Responding to rare features rather than common features is automatically economical.
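The economy argument can be sketched numerically. In the toy calculation below the feature frequencies are hypothetical, chosen only to illustrate the logic rather than taken from the study: a cell tuned to a rare feature fires on far fewer images, and by the standard self-information measure each of its responses carries more bits.

```python
import math

# Hypothetical per-image frequencies of two feature types
# (illustrative numbers, not measurements from the experiments).
p_flat, p_sharp = 0.80, 0.05

# Self-information in bits: rarer events are more informative per occurrence.
info_flat = -math.log2(p_flat)    # about 0.32 bits
info_sharp = -math.log2(p_sharp)  # about 4.32 bits

# Over many images, the rare-feature cell is active far less often,
# so a population of such cells forms a sparser, cheaper code.
n_images = 1000
firings_flat = round(p_flat * n_images)    # 800 activations
firings_sharp = round(p_sharp * n_images)  # 50 activations
print(round(info_flat, 2), round(info_sharp, 2), firings_flat, firings_sharp)
```

The point of the sketch is only the ordering: fewer activations per image with more information per activation is exactly the trade-off a compressed code exploits.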
Despite the fact that they are relatively rare, high-curvature regions are very useful for distinguishing and recognizing objects, said Connor, a professor in the Solomon H. Snyder Department of Neuroscience in the School of Medicine, and director of the Zanvyl Krieger Mind/Brain Institute.
"Psychological experiments have shown that subjects can still recognize line drawings of objects when flat edges are erased. But erasing angles and other regions of high curvature makes recognition difficult," he explained.
Brain mechanisms such as the V4 coding scheme described by Connor and colleagues help explain why we are all visual geniuses.
"Computers can beat us at math and chess," said Connor, "but they can't match our ability to distinguish, recognize, understand, remember, and manipulate the objects that make up our world." This core human ability depends in part on condensing visual information to a tractable level. For now, at least, the .brain format seems to be the best compression algorithm around.
A bank is located on the continental shelf and the water depth above it is relatively shallow. Banks have a continental origin and can cover extensive surface area but do not extend thousands of meters into the water column.
In contrast, seamounts are mainly volcanic in origin, rising to considerable heights from great depths along the continental rise, with summits of limited extent.
Seamounts, though common in the world's oceans, often have very different biological assemblages than the surrounding seafloor sediments, due likely to the complex, rocky and current-swept habitats. Rocky outcrops, particularly near seamount peaks, are inhabited by a suite of deep-sea corals and sponges that are typically absent or quite rare in more typical ocean settings.
In March of 2009, NOAA designated the Davidson Seamount Management Zone (DSMZ), increasing the Monterey Bay National Marine Sanctuary (MBNMS) and protecting Davidson Seamount, making it the first seamount within a national marine sanctuary. Several other seamounts - including Gumdrop, Pioneer, and Guide Seamounts - occur just beyond this sanctuary.
The Davidson Seamount Management Zone is located 75 miles (121 kilometers) due west of San Simeon. The shallowest point is 4,101 ft (1,250 m) below the ocean's surface and the deepest part of the DSMZ is 12,713 ft (3,875 m). Davidson Seamount itself is 7,480 ft (2,280 m) tall, as measured from the west-side base to the summit. The seamount is also 26 miles long and 8 miles wide. In total, the DSMZ covers an area of 775 square miles and increases the MBNMS to 6,094 square miles.
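The paired feet/metre figures in this paragraph can be cross-checked with a quick conversion sketch (the conversion factor is the standard 3.28084 ft per metre; the inputs are the numbers quoted above):

```python
FT_PER_M = 3.28084  # international foot-to-metre conversion factor

def ft_to_m(feet):
    """Convert feet to metres."""
    return feet / FT_PER_M

# Figures quoted for the DSMZ, rounded to the nearest metre:
print(round(ft_to_m(4101)))   # 1250 -- shallowest point
print(round(ft_to_m(12713)))  # 3875 -- deepest point
print(round(ft_to_m(7480)))   # 2280 -- height of Davidson Seamount
```

All three quoted metric values match the imperial ones to the nearest metre.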
The geological structure and origin of Davidson, Guide, Pioneer, and Gumdrop Seamounts have only recently been described, by scientists at the Monterey Bay Aquarium Research Institute, as an atypical type of oceanic volcanism, having northeast-trending ridges that reflect the ridge-parallel structure of the underlying crust. Unlike most intra-plate ocean island volcanoes, the seamounts are built on top of spreading center segments that were abandoned at the continental margin when the tectonic regime changed from subduction to a transform margin. The Davidson Seamount consists of about six subparallel linear volcanic ridges separated by narrow valleys that contain sediment. These ridges are aligned parallel to magnetic anomalies in the underlying ocean crust. The seamount is 12.2 ± 0.4 million years old and formed about 8 million years after the underlying mid-ocean ridge was abandoned.
Assemblages of large corals and sponges, along with many associated animals such as sea stars, anemones, crustaceans, octopus and fishes, are common on the seamount. Expeditions in 2002 and 2006 observed 18 species new to science. Recent studies at Davidson Seamount indicate a significant shift in species composition and relative abundances of species occurs with depth. In contrast, species diversity and density at Davidson Seamount do not change significantly with depth, and can vary greatly along a single isobath. Ecological processes influencing the distribution, abundance and dynamics of seamount fauna are less well known than those of other charismatic ecosystems such as kelp beds and coral reefs, making it difficult to develop management criteria.
Seamount environments may represent optimal habitats for particular faunal groups resulting in thriving and dense populations encountered only rarely in other habitats. Based on research at Davidson Seamount and nearby Monterey Canyon, preliminary evidence suggests seamount communities may serve as a source of larvae for non-seamount habitats.
Due to proximity to the coast, Davidson Seamount faces a number of anthropogenic threats, including but not limited to vessel traffic, sea temperature rise, ocean acidification, commercial harvest, underwater cables, cumulative research collection, bio-prospecting, and military activity. Sanctuary regulations provide important, although not comprehensive, defenses against some of these threats.
Deployment of CTD above Davidson Seamount.
Sea spiders (Class Pycnogonida) were found on the slope and base habitats of the Davidson Seamount (shown here at 1570 meters depth). A chiton (Order Neoloricata) is attached to a rock just to the right of the sea spider. Credit: NOAA/MBARI 2002
Sanctuary staff developed a management plan for the Davidson Seamount Management Zone. It contains background information on the Davidson Seamount, and the activities necessary for effective understanding and protection of this unique area. Goals are to develop and implement a resource protection plan for Davidson Seamount, increase understanding of the seamount through characterization and ecological studies, and develop education programs for this and other seamounts throughout the nation. The plan can be downloaded here.
Monitoring
Several studies have been conducted within the sanctuary at Davidson Seamount, and adjacent to the sanctuary at Pioneer Seamount. These include seamount characterization, a coral distribution study, marine mammal and seabird surveys, and passive acoustic monitoring.
Davidson Seamount Expedition 2002
In May 2002, scientists from the sanctuary, Monterey Bay Aquarium Research Institute (MBARI), Monterey Bay Aquarium, Moss Landing Marine Laboratories (MLML) and NOAA Fisheries embarked on an expedition to explore and characterize the Davidson Seamount. Depth-stratified species assemblages were found at the crest, slope and base habitats of the seamount. The crest of Davidson Seamount had the highest diversity of species, including large gorgonian corals and sponges. The majority of corals were observed almost exclusively on high-relief, ridge areas. Faunal assemblages were arranged in large, contiguous patches that are susceptible to physical disturbance.
Davidson Seamount 2006: Exploring Ancient Coral Gardens
In January 2006, scientists returned to Davidson Seamount with the following objectives:
- to investigate why deep-sea corals live where they do
- to determine age and growth patterns of these corals
- to improve species identifications
- to share the exploration with the general public
Marine Mammal and Seabird Surveys
The Davidson Seamount is typically regarded as having a greater abundance and diversity of marine mammals and seabirds than the surrounding area. In an effort to characterize marine mammal abundance above, and moderate distances away from, the seamount, several aerial surveys have been conducted (April 2009, January 2010, April 2010, July 2011). In addition, the first dedicated ship-based survey to record marine mammal and seabird observations above and surrounding the Davidson Seamount was conducted during July 2010. Zooplankton and CTD data were also collected above the seamount. Fin whales (Balaenoptera physalus) were the most commonly encountered marine mammal. The majority of fin whale sightings were above and to the west of the seamount where, based upon zooplankton net tows, krill abundance was greatest; and foraging behavior was noted by observers.
Pioneer Seamount Ocean Acoustic Observatory
A vertical array of four hydrophones was installed at Pioneer Seamount in August 2001 to passively monitor the Pacific Ocean in the region south of San Francisco. The hydrophones were connected to shore via a telephone cable that came ashore at Pillar Point.
Data from the hydrophones were relayed to NOAA Pacific Marine Environmental Laboratories and to San Francisco State University, where they were made available for public access. In this data set, the loudest and most obvious signals were created by passing ships. Ship traffic and relative ship size can be inferred from the data set. Scientists have also identified a method for analyzing the ship signals to determine the speed of each ship and its distance of closest approach to the array.
Blue, fin, humpback and sperm whales were heard on Pioneer Seamount. Humpback and blue whale calls appeared prominent. Seasonality was evident, with most calls appearing in winter and fall months. Sounds from blue whales were quite prominent over the eight months of monitoring. The cable was damaged and had not functioned since September 2002. The cable was removed in 2011.
DURING the ice-free Eocene period 56 to 34 million years ago, Earth was 10 °C warmer than today. “Tropical” animals like crocodiles thrived in the polar regions, and Antarctica – now a dry, frozen wasteland – was warm and covered with lush vegetation.
The discovery of certain plant fossils, however, suggests that Antarctica had a monsoon climate, something that only happens nowadays in the tropics. It could mean that in a warmer world monsoons will occur across wider areas of the planet.
Frédéric Jacques of the Xishuangbanna Tropical Botanical Garden in Mengla, China, and colleagues excavated fossil plants from an Antarctic island. As certain plants only grow in particular climates, their fossils provide evidence that such a climate prevailed in the past. Jacques’s fossils came from plants that experience seasonal rainfall.
He estimates that during the early and middle Eocene, 60 per cent of Antarctica’s annual rainfall fell in summer – about 6.4 millimetres per day (Gondwana Research, doi.org/h9n). This suggests there was a monsoon on the continent. Eocene Antarctica looks much like the monsoon climates of south Asia today, where there is heavy rainfall in summer but the winters are dry.
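Those figures imply roughly the following annual totals. The summer length used below is our own assumption (a 90-day season) for a back-of-envelope check, not a number from the paper:

```python
# Back-of-envelope check on the quoted Eocene Antarctic rainfall figures.
summer_days = 90                # assumed length of the summer season
summer_rate_mm = 6.4            # mm/day, from the fossil-based estimate
summer_total = summer_days * summer_rate_mm      # about 576 mm
annual_total = summer_total / 0.60               # summer = 60% of annual rain
print(round(summer_total), round(annual_total))  # 576 960
```

Under that assumption, the estimate corresponds to an annual total on the order of a metre of rain, most of it concentrated in the summer months.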
Finding a monsoon so close to the poles is unexpected, says Gill Martin of the UK Met Office in Exeter. “It tends to be thought of as a tropical thing.”
Monsoon rains are driven by seasonal changes in wind direction. Moist sea air blows onto the land during summer, but in winter the winds travel in reverse. For this to happen, the land must warm up much more than the ocean during summer. Today, this only happens in the tropics, but Jacques says that monsoons may proliferate in hotter climates. “It can create new monsoon circulations.”
Climatologists aren’t sure how climate change will affect monsoons. Their best guess is that monsoon winds will diminish, but because warmer air holds more moisture, the monsoon rains will get heavier and cause more floods (Nature Climate Change, doi.org/h9p). “In that sense the monsoon is stronger, but the circulation is weaker,” Martin says. Could the monsoons also spread? It’s unlikely we’ll reach Eocene temperatures, but Jacques says monsoons could spring up on other continents.
Martin is unconvinced. She says major weather systems like the jet streams operate close to the poles, and could interfere with any new monsoons.
Climate models by Matthew Huber of Purdue University in West Lafayette, Indiana, suggest that Eocene monsoons happened in the same places as monsoons today (Journal of Asian Earth Sciences, doi.org/bg4c3j). However, climate models struggle to simulate such hot climates. “Data to really answer the question are few and far between,” Huber says.
Answer: It could be that your tree isn't quite mature enough to produce fruit, or there may be some environmental conditions causing the immature fruit drop.
Mangos basically require a frost-free climate. Flowers and small fruit can be killed if temperatures drop below 40° F, even for a short period. Young trees may be seriously damaged if the temperature drops below 30° F, but mature trees may withstand very short periods of temperatures as low as 25° F. The mango must have warm, dry weather to set fruit. In southern California the best locations are in the foothills, away from immediate marine influence. It is worth a trial in the warmest cove locations in the California Central Valley, but is more speculative in the coastal counties north of Santa Barbara, where only the most cold-adapted varieties are likely to succeed. Mangos luxuriate in summer heat and resent cool summer fog. Wet, humid weather favors anthracnose and poor fruit set.
The yellowish or reddish flowers are borne in inflorescences which appear at branch terminals, in dense panicles of up to 2000 minute flowers. Pollinators are flies, hoverflies, and rarely bees. Few of the flowers in each inflorescence are perfect, so most do not produce pollen and are incapable of producing fruit. Pollen cannot be shed in high humidity or rain. Fertilization is also ineffective when night temperatures are below 55° F. Mangos are monoecious and self-fertile, so a single tree will produce fruit without cross pollination.
This is probably more information than you want to know, but it might shed some light as to why your mango tree is not producing fruit.
Best wishes with your tree!
During the previous lessons we have learnt that organs join with other organs that share the same function, creating a SYSTEM. All the systems together form the BODY.
Today we are going to work on the organization of the human body and the explanation of its different systems. But it is not going to be a normal lesson in which the teacher talks and you just listen.
This time you are going to prepare the lesson and explain it to your classmates. In your groups you are going to choose a SYSTEM; below these paragraphs you will find information about it.
FOLLOW THE STEPS IN ORDER TO SUCCEED.
STEP 1: Decide in each group which system you are going to work on.
STEP 2: Gather all the given information about it; if you need more, feel free to search for it on the net.
STEP 3: Design your poster. It should have:
- Title with the system.
- A big picture of the system.
- What vital function it covers.
- How the system does it (look at your class book).
- Label the main parts of it.
STEP 4: Prepare a speech to present it to your classmates. Remember, you will be their teacher for this topic, so please try to be clear and precise. Your teacher should check it before the final exhibition.
- Hi! Our names are (your names).
- We are going to present the (name of the system) system.
- As you can see this is a big picture about the (name of the system) system.
- The vital functions which it covers are ...
- The (name of the system) system works ...
- The main parts of the (name of the system) system are ...
- I hope you have understood.
- Thanks for your attention.
Urinary / Excretory System |
History of the Netherlands
The history of the Netherlands is the history of seafaring people thriving on a lowland river delta on the North Sea in northwestern Europe. Records begin with the four centuries during which the region formed a militarized border zone of the Roman empire. This came under increasing pressure from Germanic peoples moving westwards. As Roman power collapsed and the Middle Ages began, three dominant Germanic peoples coalesced in the area, Frisians in the north and coastal areas, Low Saxons in the northeast, and the Franks in the south.
During the Middle Ages, the descendants of the Carolingian dynasty came to dominate the area and then extended their rule to a large part of Western Europe. The region of the Netherlands therefore became part of Lower Lotharingia within the Frankish Holy Roman Empire. For several centuries, lordships such as Brabant, Holland, Zeeland, Friesland, Guelders and others held a changing patchwork of territories. There was no unified equivalent of the modern Netherlands.
By 1433, the Duke of Burgundy had assumed control over most of the lowlands territories in Lower Lotharingia; he created the Burgundian Netherlands which included modern Belgium, Luxembourg, and a part of France.
The Catholic kings of Spain took strong measures against Protestantism, which polarized the peoples of present-day Belgium and Holland. The subsequent Dutch revolt led to splitting the Burgundian Netherlands into a Catholic French and Dutch-speaking "Spanish Netherlands" (approximately corresponding to modern Belgium and Luxembourg), and a northern "United Provinces", which spoke Dutch and were predominantly Protestant with a Catholic minority. It became the modern Netherlands.
In the Dutch Golden Age, which had its zenith around 1667, there was a flowering of trade, industry, the arts and the sciences. A rich worldwide Dutch empire developed and the Dutch East India Company became one of the earliest and most important of national mercantile companies based on entrepreneurship and trade.
During the 18th century the power and wealth of the Netherlands declined. A series of wars with its more powerful British and French neighbors weakened it. Britain seized the North American colony of New Netherland, turning its capital New Amsterdam into New York. There was growing unrest and conflict between the Orangists and the Patriots. The French Revolution spilled over after 1789, and a pro-French Batavian Republic was established in 1795–1806. Napoleon made it a satellite state, the Kingdom of Holland (1806–1810), and later simply a French imperial province.
After the collapse of Napoleon in 1813–15, an expanded "United Kingdom of the Netherlands" was created with the House of Orange as monarchs, also ruling Belgium and Luxembourg. The King imposed unpopular Protestant reforms on Belgium, which revolted in 1830 and became independent in 1839. After an initially conservative period, in the 1848 constitution the country became a parliamentary democracy with a constitutional monarch. Modern Luxembourg became officially independent from the Netherlands in 1839, but a personal union remained until 1890. Since 1890 it is ruled by another branch of the House of Nassau.
The Netherlands was neutral during the First World War, but during the Second World War, it was invaded and occupied by Nazi Germany. The Nazis, including many collaborators, rounded up and killed almost all the Jews (most famously Anne Frank). When the Dutch resistance increased, the Nazis cut off food supplies to much of the country, causing severe starvation in 1944–45. In 1942, the Dutch East Indies was conquered by Japan, but first the Dutch destroyed the oil wells that Japan needed so badly. Indonesia proclaimed its independence in 1945. Suriname gained independence in 1975. The postwar years saw rapid economic recovery (helped by the American Marshall Plan), followed by the introduction of a welfare state during an era of peace and prosperity. The Netherlands formed a new economic alliance with Belgium and Luxembourg, the Benelux, and all three became founding members of the European Union and NATO. In recent decades, the Dutch economy has been closely linked to that of Germany, and is highly prosperous.
- 1 Prehistory (before 800 BC)
- 1.1 Historical changes to the landscape
- 1.2 Earliest groups of hunter-gatherers (before 5000 BC)
- 1.3 The arrival of farming (around 5000–4000 BC)
- 1.4 Funnelbeaker and other cultures (around 4000–3000 BC)
- 1.5 Corded Ware and Bell Beaker cultures (around 3000–2000 BC)
- 1.6 Bronze Age (around 2000–800 BC)
- 2 The pre-Roman period (800 BC – 58 BC)
- 3 Roman era (57 BC – 410 AD)
- 4 Early Middle Ages (411–1000)
- 5 High Middle Ages (1000–1432)
- 6 Burgundian and Habsburg period (1433–1567)
- 7 The Eighty Years' War (1568–1648)
- 8 Golden Age
- 9 Dutch Empire
- 10 Dutch Republic: Regents and Stadholders (1649–1784)
- 10.1 Refugees
- 10.2 Economic growth
- 10.3 Amsterdam
- 10.4 First Stadtholderless Period and the Anglo-Dutch Wars (1650–1674)
- 10.5 Anglo-Dutch wars
- 10.6 Franco-Dutch War and Third Anglo-Dutch War (1672–1702)
- 10.7 Second Stadtholderless Period (1702–1747)
- 10.8 Economic decline after 1730
- 10.9 Culture and society
- 10.10 The Orangist revolution (1747–1751)
- 10.11 Regency and indolent rule (1752–1779)
- 10.12 Fourth Anglo-Dutch War (1780–1784)
- 11 The French-Batavian period (1785–1815)
- 12 United Kingdom of the Netherlands (1815–1839)
- 13 Democratic and Industrial Development (1840–1900)
- 14 1900 to 1940
- 15 The Second World War (1939–1945)
- 16 Prosperity and European Unity (1945–present)
- 17 Historians and historiography
- 18 See also
- 19 Notes
- 20 Further reading
- 21 External links
Prehistory (before 800 BC)
Historical changes to the landscape
The prehistory of the area that is now the Netherlands was largely shaped by its constantly shifting, low-lying geography.
[Map legend: beach ridges and dunes; floodplain silt areas; peat marshes (including old river courses and riverbank breaches which have filled up with silt or peat); valleys of the major rivers (not covered with peat); river dunes (Pleistocene dunes); open water (sea, lagoons, rivers); Pleistocene landscape at elevations relative to NAP, ranging from below −6 m to 200 m.]
Earliest groups of hunter-gatherers (before 5000 BC)
The area that is now the Netherlands was inhabited by early humans at least 37,000 years ago, as attested by flint tools discovered in Woerden in 2010. In 2009 a fragment of a 40,000-year-old Neanderthal skull was found in sand dredged from the North Sea floor off the coast of Zeeland.
During the last ice age, the Netherlands had a tundra climate with scarce vegetation and the inhabitants survived as hunter-gatherers. After the end of the ice age, various Paleolithic groups inhabited the area. It is known that around 8000 BC a Mesolithic tribe resided near Burgumer Mar (Friesland). Another group residing elsewhere is known to have made canoes. The oldest recovered canoe in the world is the Pesse canoe. According to C14 dating analysis it was constructed somewhere between 8200 BC and 7600 BC. This canoe is exhibited in the Drents Museum in Assen.
Autochthonous hunter-gatherers from the Swifterbant culture are attested from around 5600 BC onwards. They are strongly linked to rivers and open water and were related to the southern Scandinavian Ertebølle culture (5300–4000 BC). To the west, the same tribes might have built hunting camps to hunt winter game, including seals.
The arrival of farming (around 5000–4000 BC)
Agriculture arrived in the Netherlands somewhere around 5000 BC with the Linear Pottery culture, whose bearers were probably central European farmers. Agriculture was practised only on the loess plateau in the very south (southern Limburg), but even there it was not established permanently. Farms did not develop in the rest of the Netherlands.
There is also some evidence of small settlements in the rest of the country. These people made the switch to animal husbandry sometime between 4800 BC and 4500 BC. Dutch archaeologist Leendert Louwe Kooijmans wrote, "It is becoming increasingly clear that the agricultural transformation of prehistoric communities was a purely indigenous process that took place very gradually." This transformation took place as early as 4300 BC–4000 BC and featured the introduction of grains in small quantities into a traditional broad-spectrum economy.
Funnelbeaker and other cultures (around 4000–3000 BC)
The Funnelbeaker culture was a farming culture extending from Denmark through northern Germany into the northern Netherlands. In this period of Dutch prehistory the first notable remains were erected: the dolmens, large stone grave monuments. They are found in Drenthe, and were probably built between 4100 BC and 3200 BC.
Corded Ware and Bell Beaker cultures (around 3000–2000 BC)
Around 2950 BC there was a transition from the Funnelbeaker farming culture to the Corded Ware pastoralist culture, a large archaeological horizon appearing in western and central Europe that is associated with the advance of Indo-European languages. This transition was probably caused by developments in eastern Germany, and it occurred within two generations.
The Corded Ware and Bell Beaker cultures were not indigenous to the Netherlands but were pan-European in nature, extending across much of northern and central Europe.
The first evidence of the use of the wheel dates from this period, about 2400 BC. This culture also experimented with working with copper. Evidence of this, including stone anvils, copper knives, and a copper spearhead, was found on the Veluwe. Copper finds show that there was trade with other areas in Europe, as natural copper is not found in Dutch soil.
Bronze Age (around 2000–800 BC)
The Bronze Age probably started somewhere around 2000 BC and lasted until around 800 BC. The earliest bronze tools have been found in the grave of a Bronze Age individual called "the smith of Wageningen". More Bronze Age objects from later periods have been found in Epe, Drouwen and elsewhere. Broken bronze objects found in Voorschoten were apparently destined for recycling. This indicates how valuable bronze was considered in the Bronze Age. Typical bronze objects from this period included knives, swords, axes, fibulae and bracelets.
Most of the Bronze Age objects found in the Netherlands have been found in Drenthe. One item shows that trading networks during this period extended a far distance. Large bronze situlae (buckets) found in Drenthe were manufactured somewhere in eastern France or in Switzerland. They were used for mixing wine with water (a Roman/Greek custom). The many finds in Drenthe of rare and valuable objects, such as tin-bead necklaces, suggest that Drenthe was a trading centre in the Netherlands in the Bronze Age.
The Bell Beaker cultures (2700–2100 BC) locally developed into the Bronze Age Barbed-Wire Beaker culture (2100–1800 BC). In the second millennium BC, the region was the boundary between the Atlantic and Nordic horizons and was split into a northern and a southern region, roughly divided by the course of the Rhine.
In the north, the Elp culture (c. 1800 to 800 BC) was a Bronze Age archaeological culture having earthenware pottery of low quality known as "Kümmerkeramik" (or "Grobkeramik") as a marker. The initial phase was characterized by tumuli (1800–1200 BC) that were strongly tied to contemporary tumuli in northern Germany and Scandinavia, and were apparently related to the Tumulus culture (1600–1200 BC) in central Europe. This phase was followed by a subsequent change featuring Urnfield (cremation) burial customs (1200–800 BC). The southern region became dominated by the Hilversum culture (1800–800 BC), which apparently inherited the cultural ties with Britain of the previous Barbed-Wire Beaker culture.
The pre-Roman period (800 BC – 58 BC)
The Iron Age brought a measure of prosperity to the people living in the area of the present-day Netherlands. Iron ore was available throughout the country, including bog iron extracted from the ore in peat bogs (moeras ijzererts) in the north, the natural iron-bearing balls found in the Veluwe and the red iron ore near the rivers in Brabant. Smiths travelled from small settlement to settlement with bronze and iron, fabricating tools on demand, including axes, knives, pins, arrowheads and swords. Some evidence even suggests the making of Damascus steel swords using an advanced method of forging that combined the flexibility of iron with the strength of steel.
In Oss, a grave dating from around 500 BC was found in a burial mound 52 metres wide (and thus the largest of its kind in western Europe). Dubbed the "king's grave" (Vorstengraf (Oss)), it contained extraordinary objects, including an iron sword with an inlay of gold and coral.
In the centuries just before the arrival of the Romans, northern areas formerly occupied by the Elp culture emerged as the probably Germanic Harpstedt culture while the southern parts were influenced by the Hallstatt culture and assimilated into the Celtic La Tène culture. The contemporary southern and western migration of Germanic groups and the northern expansion of the Hallstatt culture drew these peoples into each other's sphere of influence. This is consistent with Caesar's account of the Rhine forming the boundary between Celtic and Germanic tribes.
Arrival of Germanic groups
The Germanic tribes originally inhabited southern Scandinavia, Schleswig-Holstein and Hamburg, but subsequent Iron Age cultures of the same region, like Wessenstedt (800–600 BC) and Jastorf, may also have belonged to this grouping. A deteriorating climate in Scandinavia around 850–760 BC, and again more sharply around 650 BC, might have triggered migrations. Archaeological evidence suggests that around 750 BC a relatively uniform Germanic people inhabited the area from the Netherlands to the Vistula and southern Scandinavia. In the west, the newcomers settled the coastal floodplains for the first time, since in the adjacent higher grounds the population had increased and the soil had become exhausted.
One grouping, labelled the "North Sea Germanic" and also sometimes referred to as the "Ingvaeones", inhabited the northern part of the Netherlands (north of the great rivers) and extended along the North Sea and into Jutland. Included in this group are the peoples who would later develop into, among others, the early Frisians and the early Saxons.
A second grouping, which scholars subsequently dubbed the "Weser-Rhine Germanic" (or "Rhine-Weser Germanic"), extended along the middle Rhine and Weser and inhabited the southern part of the Netherlands (south of the great rivers). This group, also sometimes referred to as the "Istvaeones", consisted of tribes that would eventually develop into the Salian Franks.
Celts in the south
The Celtic culture had its origins in the central European Hallstatt culture (c. 800–450 BC), named for the rich grave finds in Hallstatt, Austria. By the later La Tène period (c. 450 BC up to the Roman conquest), this Celtic culture had, whether by diffusion or migration, expanded over a wide range, including into the southern area of the Netherlands. This would have been the northern reach of the Gauls.
In March 2005, 17 Celtic coins were found in Echt (Limburg). The silver coins, mixed with copper and gold, date from around 50 BC to 20 AD. In October 2008 a hoard of 39 gold coins and 70 silver Celtic coins was found in the Amby area of Maastricht. The gold coins were attributed to the Eburones people. Celtic objects have also been found in the area of Zutphen.
Although it is rare for hoards to be found, in past decades loose Celtic coins and other objects have been found throughout the central, eastern and southern part of the Netherlands. According to archaeologists these finds confirmed that at least the Maas river valley in the Netherlands was within the influence of the La Tène culture. Dutch archaeologists even speculate that Zutphen (which lies in the centre of the country) was a Celtic area before the Romans arrived, not a Germanic one at all.
Scholars debate the actual extent of the Celtic influence. The Celtic influence and the contacts between Gaulish and early Germanic culture along the Rhine are assumed to be the source of a number of Celtic loanwords in Proto-Germanic. But according to Belgian linguist Luc van Durme, toponymic evidence of a former Celtic presence in the Low Countries is almost entirely absent. Although there were Celts in the Netherlands, Iron Age innovations did not involve substantial Celtic intrusions and featured a local development from Bronze Age culture.
The Nordwestblock theory
Some scholars (De Laet, Gysseling, Hachmann, Kossack & Kuhn) have speculated that a separate ethnic identity, neither Germanic nor Celtic, survived in the Netherlands until the Roman period. They see the Netherlands as having been part of an Iron Age "Nordwestblock" stretching from the Somme to the Weser. Their view is that this culture, which had its own language, was being absorbed by the Celts to the south and the Germanic peoples from the east as late as the immediate pre-Roman period.
Roman era (57 BC – 410 AD)
During the Gallic Wars, the Belgic area south of the Oude Rijn and west of the Rhine was conquered by Roman forces under Julius Caesar in a series of campaigns from 57 BC to 53 BC. The tribes located in the area of the Netherlands at this time did not leave behind written records, so all the information known about them during this pre-Roman period is based on what the Romans and Greeks wrote about them. One of the most important sources is Caesar's own Commentarii de Bello Gallico. Two main tribes he described as living in what is now the Netherlands were the Menapii and the Eburones, both in the south, which is where Caesar was active. He established the principle that the Rhine defined a natural boundary between Gaul and Germania magna. But the Rhine was not a strong border, and he made it clear that there was a part of Belgic Gaul where many of the local tribes (including the Eburones) were "Germani cisrhenani", or in other cases, of mixed origin.
The Menapii stretched from the south of Zeeland, through North Brabant (and possibly South Holland), into the southeast of Gelderland. In later Roman times their territory seems to have been divided or reduced, so that it became mainly contained in what is now western Belgium.
The Eburones, the largest of the Germani Cisrhenani group, covered a large area including at least part of modern Dutch Limburg, stretching east to the Rhine in Germany, and also northwest to the delta, giving them a border with the Menapii. Their territory may have stretched into Gelderland.
In the delta itself, Caesar makes a passing comment about the Insula Batavorum ("Island of the Batavi") in the Rhine river, without discussing who lived there. Later, in imperial times, a tribe called the Batavi became very important in this region. Much later Tacitus wrote that they had originally been a tribe of the Chatti, a tribe in Germany never mentioned by Caesar. However, archaeologists find evidence of continuity, and suggest that the Chattic group may have been a small group, moving into a pre-existing (and possibly non-Germanic) people, who could even have been part of a known group such as the Eburones.
The approximately 450 years of Roman rule that followed would profoundly change the area that would become the Netherlands. Very often this involved large-scale conflict with the free Germanic tribes over the Rhine.
Other tribes who eventually inhabited the islands in the delta during Roman times and are mentioned by Pliny the Elder include the Cananefates in South Holland; the Frisii, covering most of the modern Netherlands north of the Oude Rijn; the Frisiabones, who apparently stretched from the delta into the north of North Brabant; the Marsacii, who stretched from the Flemish coast into the delta; and the Sturii.
Caesar reported that he had eliminated the name of the Eburones; in their place, the Texuandri inhabited most of North Brabant in imperial times, and the modern province of Limburg, with the Maas running through it, appears to have been inhabited by (from north to south) the Baetasii, the Catualini, the Sunuci and the Tungri. (Tacitus reported that the Tungri was a new name for the earlier Germani cisrhenani.)
North of the Old Rhine, apart from the Frisii, Pliny reports some Chauci reached into the delta, and two other tribes known from the eastern Netherlands were the Tuihanti (or Tubantes) from Twenthe in Overijssel, and the Chamavi, from Hamaland in northern Gelderland, who became one of the first tribes to be named as Frankish (see below). The Salians, also Franks, probably originated in Salland in Overijssel, before they moved into the empire, forced by Saxons in the 4th century, first into Batavia, and then into Toxandria.
Roman settlements in the Netherlands
Starting about 15 BC, the Rhine in the Netherlands came to be defended by the Lower Limes Germanicus. After a series of military actions, the Rhine became fixed around 12 AD as Rome's northern frontier on the European mainland. A number of towns and developments would arise along this line. The area to the south would be integrated into the Roman Empire. At first part of Gallia Belgica, this area became part of the province of Germania Inferior. The tribes already within, or relocated to, this area became part of the Roman Empire. The area to the north of the Rhine, inhabited by the Frisii and the Chauci, remained outside Roman rule but not beyond Roman presence and influence.
The Romans built military forts along the Limes Germanicus and a number of towns and smaller settlements in the Netherlands. The more notable Roman towns were at Nijmegen (Ulpia Noviomagus Batavorum) and at Voorburg (Forum Hadriani).
Perhaps the most evocative Roman ruin is the mysterious Brittenburg, which emerged from the sand at the beach in Katwijk several centuries ago, only to be buried again. These ruins were part of Lugdunum Batavorum.
Other Roman settlements, fortifications, temples and other structures have been found at Alphen aan de Rijn (Albaniana); Bodegraven; Cuijk; Elst, Overbetuwe; Ermelo; Esch; Heerlen; Houten; Kessel, North Brabant; Oss, i.e. De Lithse Ham near Maren-Kessel; Kesteren in Neder-Betuwe; Leiden (Matilo); Maastricht; Meinerswijk (now part of Arnhem); Tiel; Utrecht (Traiectum); Valkenburg (South Holland) (Praetorium Agrippinae); Vechten (Fectio) now part of Bunnik; Velsen; Vleuten; Wijk bij Duurstede (Levefanum); Woerden (Laurium or Laurum); and Zwammerdam (Nigrum Pullum).
The Batavians, Cananefates, and the other border tribes were held in high regard as soldiers throughout the empire, and traditionally served in the Roman cavalry. The frontier culture was influenced by the Romans, Germanic peoples, and Gauls. In the first centuries after Rome's conquest of Gaul, trade flourished, and Roman, Gaulish and Germanic material culture is found combined in the region.
However, the Batavians rose against the Romans in the Batavian rebellion of 69 AD. The leader of this revolt was the Batavian Gaius Julius Civilis. One of the causes of the rebellion was that the Romans had taken young Batavians as slaves. A number of Roman castella were attacked and burnt. Other Roman soldiers in Xanten and elsewhere, along with auxiliary troops of Batavians and Cananefates serving in the legions of Vitellius, joined the revolt, thus splitting the northern part of the Roman army. In April 70 AD, a few legions sent by Vespasian and commanded by Quintus Petillius Cerialis eventually defeated the Batavians and negotiated surrender with Gaius Julius Civilis somewhere between the Waal and the Maas near Noviomagus (Nijmegen), which was probably called "Batavodurum" by the Batavians. The Batavians later merged with other tribes and became part of the Salian Franks.
Dutch writers in the 17th and 18th centuries saw the rebellion of the independent and freedom-loving Batavians as mirroring the Dutch revolt against Spain and other forms of tyranny. According to this nationalist view, the Batavians were the "true" forefathers of the Dutch, which explains the recurring use of the name over the centuries. Jakarta was named "Batavia" by the Dutch in 1619. The Dutch republic created in 1795 on the basis of French revolutionary principles was called the Batavian Republic. Even today "Batavian" is a term sometimes used to describe the Dutch people. (This is similar to use of "Gallic" to describe the French and "Teutonic" to describe the Germans.)
Emergence of the Franks
Modern scholars of the Migration Period agree that the Frankish identity emerged during the first half of the 3rd century out of various earlier, smaller Germanic groups, including the Salii, Sicambri, Chamavi, Bructeri, Chatti, Chattuarii, Ampsivarii, Tencteri, Ubii, Batavi and Tungri, who inhabited the lower and middle Rhine valley between the Zuyder Zee and the river Lahn and extended eastwards as far as the Weser, but were most densely settled around the IJssel and between the Lippe and the Sieg. The Frankish confederation probably began to coalesce in the 210s.
The Franks eventually were divided into two groups: the Ripuarian Franks (Latin: Ripuari), who were the Franks that lived along the middle-Rhine River during the Roman Era, and the Salian Franks, who were the Franks that originated in the area of the Netherlands.
Franks appear in Roman texts as both allies and enemies (laeti and dediticii). By about 320, the Franks had the region of the Scheldt river (present day west Flanders and southwest Netherlands) under control, and were raiding the Channel, disrupting transportation to Britain. Roman forces pacified the region, but did not expel the Franks, who continued to be feared as pirates along the shores at least until the time of Julian the Apostate (358), when Salian Franks were allowed to settle as foederati in Toxandria, according to Ammianus Marcellinus.
Disappearance of the Frisii
Three factors contributed to the disappearance of the Frisii from the northern Netherlands. First, according to the Panegyrici Latini (Manuscript VIII), the ancient Frisii were forced to resettle within Roman territory as laeti (i.e., Roman-era serfs) in c. 296. This is the last reference to the ancient Frisii in the historical record. What happened to them, however, is suggested in the archaeological record. The discovery of a type of earthenware unique to 4th-century Frisia, called terp Tritzum, shows that an unknown number of them were resettled in Flanders and Kent, likely as laeti under Roman coercion. Second, the land in the low-lying coastal regions of northwestern Europe began to subside around 250 and gradually sank over the next 200 years. Tectonic subsidence, a rising water table and storm surges combined to flood some areas with marine transgressions. This was accelerated by a shift to a cooler, wetter climate in the region. Any Frisii remaining in Frisia would have been driven out by these floods. Third, after the collapse of the Roman Empire, there was a decline in population as Roman activity stopped and Roman institutions withdrew. As a result of these three factors, the Frisii and Frisiaevones disappeared from the area. The coastal lands remained largely unpopulated for the next two centuries.
Early Middle Ages (411–1000)
As climatic conditions improved, there was another mass migration of Germanic peoples into the area from the east. This is known as the "Migration Period" (Volksverhuizingen). The northern Netherlands received an influx of new migrants and settlers, mostly Saxons, but also Angles and Jutes. Many of these migrants did not stay in the northern Netherlands but moved on to England and are known today as the Anglo-Saxons. The newcomers that stayed in the northern Netherlands would eventually be referred to as "Frisians", although they were not descended from the ancient Frisii. These new Frisians settled in the northern Netherlands and would become the ancestors of the modern Frisians. (Because the early Frisians and Anglo-Saxons were formed from largely identical tribal confederacies, their respective languages were very similar. Old Frisian is the most closely related language to Old English and the modern Frisian dialects are in turn the closest related languages to contemporary English.) By the end of the 6th century, the Frisian territory in the northern Netherlands had expanded west to the North Sea coast and, by the 7th century, south to Dorestad. During this period most of the northern Netherlands was known as Frisia. This extended Frisian territory is sometimes referred to as Frisia Magna (or Greater Frisia).
In the 7th and 8th centuries, Frankish chronicles mention this area as the kingdom of the Frisians. This kingdom comprised the coastal provinces of the Netherlands and the German North Sea coast. During this time, the Frisian language was spoken along the entire southern North Sea coast. The 7th-century Frisian Kingdom (650–734) under King Aldegisel and King Redbad had its centre of power in Utrecht.
Dorestad was the largest settlement (emporium) in northwestern Europe. It had grown around a former Roman fortress. It was a large, flourishing trading place, three kilometers long and situated where the rivers Rhine and Lek diverge southeast of Utrecht near the modern town of Wijk bij Duurstede. Although inland, it was a North Sea trading centre that primarily handled goods from the Middle Rhineland. Wine was among the major products traded at Dorestad, likely from vineyards south of Mainz. It was also widely known because of its mint. Between 600 and around 719 Dorestad was often fought over between the Frisians and the Franks.
After Roman government in the area collapsed, the Franks expanded their territories until there were numerous small Frankish kingdoms, especially at Cologne, Tournai, Le Mans and Cambrai. The kings of Tournai eventually came to subdue the other Frankish kings. By the 490s, Clovis I had conquered and united all the Frankish territories to the west of the Meuse, including those in the southern Netherlands. He continued his conquests into Gaul.
After the death of Clovis I in 511, his four sons partitioned his kingdom amongst themselves, with Theuderic I receiving the lands that were to become Austrasia (including the southern Netherlands). A line of kings descended from Theuderic ruled Austrasia until 555, when it was united with the other Frankish kingdoms of Chlothar I, who inherited all the Frankish realms by 558. He redivided the Frankish territory amongst his four sons, but the four kingdoms coalesced into three on the death of Charibert I in 567. Austrasia (including the southern Netherlands) was given to Sigebert I. The southern Netherlands remained the northern part of Austrasia until the rise of the Carolingians.
The Franks who expanded south into Gaul settled there and eventually adopted the Vulgar Latin of the local population. However, a Germanic language was spoken as a second tongue by public officials in western Austrasia and Neustria as late as the 850s. It completely disappeared as a spoken language from these regions during the 10th century. During this expansion to the south, many Frankish people remained in the north (i.e. southern Netherlands, Flanders and a small part of northern France). A widening cultural divide grew between the Franks remaining in the north and the rulers far to the south in what is now France. Salian Franks continued to reside in their original homeland and the area directly to the south and to speak their original language, Old Frankish, which by the 9th century had evolved into Old Dutch. A Dutch-French language boundary came into existence (but this was originally south of where it is today). In the Maas and Rhine areas of the Netherlands, the Franks had political and trading centres, especially at Nijmegen and Maastricht. These Franks remained in contact with the Frisians to the north, especially in places like Dorestad and Utrecht.
Modern doubts about the traditional Frisian, Frank and Saxon distinction
In the late 19th century, Dutch historians believed that the Franks, Frisians, and Saxons were the original ancestors of the Dutch people. Some went further by ascribing certain attributes, values and strengths to these various groups and proposing that they reflected 19th-century nationalist and religious views. In particular, it was believed that this theory explained why Belgium and the southern Netherlands (i.e. the Franks) had become Catholic and the northern Netherlands (Frisians and Saxons) had become Protestant. The success of this theory was partly due to anthropological theories based on a tribal paradigm. Being politically and geographically inclusive, and yet accounting for diversity, this theory was in accordance with the need for nation-building and integration during the 1890–1914 period. The theory was taught in Dutch schools.
However, the disadvantages of this historical interpretation became apparent. This tribal-based theory suggested that external borders were weak or non-existent and that there were clear-cut internal borders. This origins myth provided an historical premise, especially during the Second World War, for regional separatism and annexation to Germany. After 1945 the tribal paradigm lost its appeal for anthropological scholars and historians. When the accuracy of the three-tribe theme was fundamentally questioned, the theory fell out of favour. Modern genetics, however, seems to show that this opposition to the three-tribe theme had more to do with anti-racist ideology than with actual science.
Due to the scarcity of written sources, knowledge of this period depends to a large degree on the interpretation of archaeological data. The traditional view of a clear-cut division between Frisians in the north and coast, Franks in the south and Saxons in the east has proven historically problematic. Archeological evidence suggests dramatically different models for different regions, with demographic continuity for some parts of the country and depopulation and possible replacement in other parts, notably the coastal areas of Frisia and Holland.
The emergence of the Dutch language
The language from which Old Dutch (also sometimes called Old West Low Franconian, Old Low Franconian or Old Frankish) arose is not known with certainty, but it is thought to be the language spoken by the Salian Franks. Even though the Franks are traditionally categorized as Weser-Rhine Germanic, Dutch has a number of Ingvaeonic characteristics and is classified by modern linguists as an Ingvaeonic language. Dutch also has a number of Old Saxon characteristics. There was a close relationship between Old Dutch, Old Saxon, Old English and Old Frisian. Because texts written in the language spoken by the Franks are almost non-existent, and Old Dutch texts scarce and fragmentary, not much is known about the development of Old Dutch. Old Dutch made the transition to Middle Dutch around 1150.
The Franks became Christians after their king Clovis I converted to Catholicism, an event which is traditionally set in 496. Christianity was introduced in the north after the conquest of Friesland by the Franks. The Saxons in the east were converted before the conquest of Saxony, and became Frankish allies.
Hiberno-Scottish and Anglo-Saxon missionaries, particularly Willibrord, Wulfram and Boniface, played an important role in converting the Frankish and Frisian peoples to Christianity by the 8th century. Boniface was martyred by the Frisians in Dokkum (754).
Frankish dominance and incorporation into Holy Roman Empire
In the early 8th century the Frisians came increasingly into conflict with the Franks to the south, resulting in a series of wars in which the Frankish Empire eventually subjugated Frisia. In 734, at the Battle of the Boarn, the Frisians in the Netherlands were defeated by the Franks, who thereby conquered the area west of the Lauwers. The Franks then conquered the area east of the Lauwers in 785 when Charlemagne defeated Widukind.
The linguistic descendants of the Franks, the modern Dutch-speakers of the Netherlands and Flanders, seem to have broken with the endonym "Frank" around the 9th century. By this time Frankish identity had changed from an ethnic identity to a national one, becoming localized and confined to modern Franconia and, principally, to the French province of Île-de-France.
Although the people no longer referred to themselves as "Franks", the Netherlands was still part of the Frankish empire of Charlemagne. Indeed, because of the Austrasian origins of the Carolingians in the area between the Rhine and the Maas, the cities of Aachen, Maastricht, Liège and Nijmegen were at the heart of Carolingian culture. Charlemagne is recorded as having stayed at his palatium in Nijmegen on at least four occasions.
The Carolingian empire would eventually include France, Germany, northern Italy and much of Western Europe. In 843, the Frankish empire was divided into three parts, giving rise to West Francia in the west, East Francia in the east, and Middle Francia in the centre. Most of what is today the Netherlands became part of Middle Francia; Flanders became part of West Francia. This division was an important factor in the historical distinction between Flanders and the other Dutch-speaking areas.
Middle Francia (Latin: Francia media) was an ephemeral Frankish kingdom that had no historical or ethnic identity to bind its varied peoples. It was created by the Treaty of Verdun in 843, which divided the Carolingian Empire among the sons of Louis the Pious. Situated between the realms of East and West Francia, Middle Francia comprised the Frankish territory between the rivers Rhine and Scheldt, the Frisian coast of the North Sea, the former Kingdom of Burgundy (except for a western portion, later known as Bourgogne), Provence and the Kingdom of Italy.
Middle Francia fell to Lothair I, the eldest son and successor of Louis the Pious, after an intermittent civil war with his younger brothers Louis the German and Charles the Bald. In acknowledgement of Lothair's imperial title, Middle Francia contained the imperial cities of Aachen, the residence of Charlemagne, as well as Rome. In 855, on his deathbed at Prüm Abbey, Emperor Lothair I again partitioned his realm amongst his sons. Most of the lands north of the Alps, including the Netherlands, passed to Lothair II and were thereafter known as Lotharingia. After Lothair II died in 869, Lotharingia was partitioned by his uncles Louis the German and Charles the Bald in the Treaty of Meerssen in 870. Although some of the Netherlands had come under Viking control, in 870 it technically became part of East Francia, which became the Holy Roman Empire in 962.
In the 9th and 10th centuries, the Vikings raided the largely defenceless Frisian and Frankish towns lying on the coast and along the rivers of the Low Countries. Although Vikings never settled in large numbers in those areas, they did set up long-term bases and were even acknowledged as lords in a few cases. In Dutch and Frisian historical tradition, the trading centre of Dorestad declined after Viking raids from 834 to 863; however, since no convincing Viking archaeological evidence has been found at the site (as of 2007), doubts about this have grown in recent years.
One of the most important Viking families in the Low Countries was that of Rorik of Dorestad (based in Wieringen) and his brother the "younger Harald" (based in Walcheren), both thought to be nephews of Harald Klak. Around 850, Lothair I acknowledged Rorik as ruler of most of Friesland. In 870 Rorik was received in Nijmegen by Charles the Bald, to whom he became a vassal. Viking raids continued during this period. Harald's son Rodulf and his men were killed by the people of Oostergo in 873. Rorik died sometime before 882.
Buried Viking treasures consisting mainly of silver have been found in the Low Countries. Two such treasures have been found in Wieringen. A large treasure found in Wieringen in 1996 dates from around 850 and is thought perhaps to have been connected to Rorik. The burial of such a valuable treasure is seen as an indication that there was a permanent settlement in Wieringen.
Around 879, Godfrid arrived in Frisian lands at the head of a large force that terrorised the Low Countries. Operating from a base at Ghent, his men ravaged Maastricht, Liège, Stavelot, Prüm, Cologne, and Koblenz. Controlling most of Frisia between 882 and his death in 885, Godfrid became known to history as Godfrid, Duke of Frisia. His lordship over Frisia was acknowledged by Charles the Fat, to whom he became a vassal. Godfrid was assassinated in 885, after which Gerolf of Holland assumed lordship and Viking rule of Frisia came to an end.
Viking raids of the Low Countries continued for over a century. Remains of Viking attacks dating from 880 to 890 have been found in Zutphen and Deventer. In 920, King Henry of Germany liberated Utrecht. According to a number of chronicles, the last attacks took place in the first decade of the 11th century and were directed at Tiel and/or Utrecht.
These Viking raids occurred about the same time that French and German lords were fighting for supremacy over the middle empire that included the Netherlands, so their sway over this area was weak. Resistance to the Vikings, if any, came from local nobles, who gained in stature as a result.
High Middle Ages (1000–1432)
Part of the Holy Roman Empire
The German kings and emperors ruled the Netherlands in the 10th and 11th centuries. Germany came to be called the Holy Roman Empire after the coronation of King Otto the Great as emperor. The Dutch city of Nijmegen was the site of an important domain of the German emperors; several German emperors were born and died there (the Byzantine empress Theophanu, for instance, died in Nijmegen). Utrecht was also an important city and trading port at the time.
The Holy Roman Empire was not able to maintain political unity. In addition to the growing independence of the towns, local rulers turned their counties and duchies into private kingdoms and felt little sense of obligation to the emperor who reigned over large parts of the nation in name only. Large parts of what now comprise the Netherlands were governed by the Count of Holland, the Duke of Gelre, the Duke of Brabant and the Bishop of Utrecht. Friesland and Groningen in the north maintained their independence and were governed by the lower nobility.
The various feudal states were in a state of almost continual war. Gelre and Holland fought for control of Utrecht. Utrecht, whose bishop had in 1000 ruled over half of what is today the Netherlands, was marginalised as it experienced continuing difficulty in electing new bishops. At the same time, the dynasties of neighbouring states were more stable. Groningen, Drenthe and most of Gelre, which used to be part of Utrecht, became independent. Brabant tried to conquer its neighbours, but was not successful. Holland also tried to assert itself in Zeeland and Friesland, but its attempts failed.
The language and culture of most of the people who lived in the area that is now Holland were originally Frisian. The sparsely populated area was known as "West Friesland" (Westfriesland). As Frankish settlement progressed, the Frisians migrated away or were absorbed and the area quickly became Dutch. (The part of North Holland situated north of Alkmaar is still colloquially known as West Friesland).
The rest of Friesland in the north continued to maintain its independence during this time. It had its own institutions (collectively called the "Frisian freedom") and resented the imposition of the feudal system and the patriciate found in other European towns. They regarded themselves as allies of Switzerland. The Frisian battle cry was "better dead than a slave". They later lost their independence when they were defeated in 1498 by the German Landsknecht mercenaries of Duke Albrecht of Saxony-Meissen.
The rise of Holland
The center of power in these emerging independent territories was in the County of Holland. Originally granted as a fief to the Danish chieftain Rorik in return for loyalty to the emperor in 862, the region of Kennemara (the region around modern Haarlem) rapidly grew under Rorik's descendants in size and importance. By the early 11th century, Dirk III, Count of Holland was levying tolls on the Meuse estuary and was able to resist military intervention from his overlord, the Duke of Lower Lorraine.
In 1083, the name "Holland" first appears in a deed referring to a region corresponding more or less to the current province of South Holland and the southern half of what is now North Holland. Holland's influence continued to grow over the next two centuries. The counts of Holland conquered most of Zeeland but it was not until 1289 that Count Floris V was able to subjugate the Frisians in West Friesland (that is, the northern half of North Holland).
Expansion and growth
Around 1000 AD there were several agricultural developments (described sometimes as an agricultural revolution) that resulted in an increase in production, especially food production. The economy started to develop at a fast pace, and the higher productivity allowed workers to farm more land or to become tradesmen.
Much of the western Netherlands was barely inhabited from the end of the Roman period until around 1100 AD, when farmers from Flanders and Utrecht began purchasing the swampy land, draining it and cultivating it. This process happened quickly, and the uninhabited territory was settled within a few generations. The settlers built independent farms that were not part of villages, something unique in Europe at the time.
Guilds were established and markets developed as production exceeded local needs. Also, the introduction of currency made trading a much easier affair than it had been before. Existing towns grew and new towns sprang into existence around monasteries and castles, and a mercantile middle class began to develop in these urban areas. Commerce and town development increased as the population grew.
The Crusades were popular in the Low Countries and drew many to fight in the Holy Land. At home, there was relative peace. Viking pillaging had stopped. Both the Crusades and the relative peace at home contributed to trade and the growth in commerce.
Cities arose and flourished, especially in Flanders and Brabant. As the cities grew in wealth and power, they started to buy certain privileges for themselves from the sovereign, including city rights, the right to self-government and the right to pass laws. In practice, this meant that the wealthiest cities became quasi-independent republics in their own right. Two of the most important were Brugge and Antwerp (in Flanders), which would later develop into some of the foremost cities and ports of Europe.
Hook and Cod Wars
The Hook and Cod Wars (Dutch: Hoekse en Kabeljauwse twisten) were a series of wars and battles in the County of Holland between 1350 and 1490. Most of these wars were fought over the title of Count of Holland, but some have argued that the underlying cause was the power struggle of the bourgeoisie in the cities against the ruling nobility.
The Cod faction generally consisted of the more progressive cities of Holland. The Hook faction consisted largely of the conservative noblemen. Some of the main figures in this multi-generational conflict were William IV, Margaret, William V, William VI, Count of Holland and Hainaut, John, and Philip the Good, Duke of Burgundy; but perhaps the best known is Jacqueline, Countess of Hainaut.
The conquest of the county of Holland by the Duke Philip the Good of Burgundy was an odd affair. Leading noblemen in Holland invited the duke to conquer Holland, even though he had no historical claim to it. Some historians say that the ruling class in Holland wanted Holland to integrate with the Flemish economic system and adopt Flemish legal institutions. Europe had been wracked by many civil wars in the 14th and 15th centuries, while Flanders had grown rich and enjoyed peace.
Burgundian and Habsburg period (1433–1567)
Most of what is now the Netherlands and Belgium was eventually united by the Duke of Burgundy in 1433. Before the Burgundian union, the Dutch identified themselves by the town they lived in, their local duchy or county or as subjects of the Holy Roman Empire. The Burgundian period is when the Dutch began the road to nationhood.
Holland's trade developed rapidly, especially in the areas of shipping and transport. The new rulers defended Dutch trading interests. The fleets of Holland defeated the fleets of the Hanseatic League several times. Amsterdam grew, and in the 15th century became the primary trading port in Europe for grain from the Baltic region. Amsterdam distributed grain to the major cities of Belgium, Northern France and England. This trade was vital to the people of Holland, because Holland could no longer produce enough grain to feed itself: land drainage had caused the peat of the former wetlands to subside to a level too low for drainage to be maintained.
Habsburg rule from Spain
Charles V (1500–58) was born and raised in the Flemish city of Ghent; he spoke French. Charles extended the Burgundian territory with the annexation of Tournai, Artois, Utrecht, Groningen and Guelders. The Seventeen Provinces had been unified by Charles's Burgundian ancestors, but nominally were fiefs of either France or the Holy Roman Empire. When he was a minor, his aunt Margaret acted as regent until 1515. France relinquished its ancient claim on Flanders in 1528.
From 1515 to 1523, Charles's government in the Netherlands had to contend with the rebellion of Frisian peasants (led by Pier Gerlofs Donia and Wijard Jelckama). Gelre attempted to build up its own state in northeast Netherlands and northwest Germany. Lacking funds in the 16th century, Gelre had its soldiers provide for themselves by pillaging enemy terrain. These soldiers were a great menace to the Burgundian Netherlands, as when they pillaged The Hague.
The dukes of Burgundy, over the years and through astute marriages, purchases and wars, had taken control of the Seventeen Provinces that made up the Low Countries: the present-day Netherlands in the north, the Southern Netherlands (now Belgium) in the south, and Luxembourg in the southeast. Known as the "Burgundian Circle," these lands came under the control of the Habsburg family. Charles (1500–58) became their ruler in 1506, but in 1515 he left to become king of Spain and later Holy Roman Emperor. Charles turned over control to regents (his close relatives), and in practice rule was exercised by Spaniards he controlled. The provinces each had their own governments and courts, controlled by the local nobility, and their own traditions and rights ("liberties") dating back centuries. Likewise the numerous cities had their own legal rights and local governments, usually controlled by the merchants. On top of this the Spanish had imposed an overall government, the Estates General of the Netherlands, with its own officials and courts. The Spanish officials sent by Charles ignored traditions and the Dutch nobility as well as local officials, inciting an anti-Spanish sense of nationalism and leading to the Dutch Revolt. With the emergence of the Protestant Reformation, Charles, now the Emperor, was determined to crush Protestantism and never compromise with it. Unrest began in the south, centered in the large, rich metropolis of Antwerp. The Netherlands was an especially rich part of the Spanish realm, especially after the Treaty of Cateau-Cambresis of 1559, which ended four decades of warfare between France and Spain and allowed Spain to reposition its army.
In 1548, Charles granted the Netherlands status as an entity in which many of the laws of the Holy Roman Empire became obsolete. The "Transaction of Augsburg" created the Burgundian Circle of the Holy Roman Empire, which comprised the Netherlands and Franche-Comté. A year later, the Pragmatic Sanction of 1549 stated that the Seventeen Provinces could only be passed on to his heirs as a composite entity.
During the 16th century, the Protestant Reformation rapidly gained ground in northern Europe, especially in its Lutheran and Calvinist forms. Dutch Protestants, after initial repression, were tolerated by local authorities. By the 1560s, the Protestant community had become a significant influence in the Netherlands, although it clearly formed a minority then. In a society dependent on trade, freedom and tolerance were considered essential. Nevertheless, the Catholic rulers Charles V, and later Philip II, made it their mission to defeat Protestantism, which was considered a heresy by the Catholic Church and a threat to the stability of the whole hierarchical political system. On the other hand, the intensely moralistic Dutch Protestants insisted their Biblical theology, sincere piety and humble lifestyle was morally superior to the luxurious habits and superficial religiosity of the ecclesiastical nobility. The rulers' harsh punitive measures led to increasing grievances in the Netherlands, where the local governments had embarked on a course of peaceful coexistence. In the second half of the century, the situation escalated. Philip sent troops to crush the rebellion and make the Netherlands once more a Catholic region.
The second wave of the Reformation came in the form of Anabaptism, which was popular among ordinary farmers in Holland and Friesland. Anabaptists were socially very radical and egalitarian; they believed that the apocalypse was very near. They refused to live the old way and began new communities, creating considerable chaos. A prominent Dutch Anabaptist was Menno Simons, who initiated the Mennonite church. The movement was allowed in the north, but never grew to a large scale.
The third and most permanent wave of the Reformation was Calvinism. It arrived in the Netherlands in the 1540s, converting many of the elite and of the common population, especially in Flanders. The Catholic Spanish responded with harsh persecution and introduced the Inquisition of the Netherlands. Calvinists rebelled. First came the iconoclasm of 1566, the systematic destruction of statues of saints and other Catholic devotional depictions in churches. In 1568 William the Silent, a Calvinist, started the Eighty Years' War to liberate all Dutch, of whatever religion, from Catholic Spain. Blum says, "His patience, tolerance, determination, concern for his people, and belief in government by consent held the Dutch together and kept alive their spirit of revolt." The provinces of Holland and Zeeland, being mainly Calvinist by 1572, submitted to the rule of William. The other states remained almost entirely Catholic.
Prelude to war
The Netherlands was a valuable part of the Spanish Empire, especially after the Treaty of Cateau-Cambresis of 1559. This treaty ended a forty-year period of warfare between France and Spain conducted in Italy from 1521 to 1559. The Treaty of Cateau-Cambresis was somewhat of a watershed—not only for the battleground that Italy had been, but also for northern Europe. Spain had been keeping troops in the Netherlands to be ready to attack France from the north as well as from the south.
With the settlement of so many major issues between France and Spain by the Treaty of Cateau-Cambresis, there was no longer any reason to keep Spanish troops in the Netherlands, and the people of the Netherlands could get on with their peacetime pursuits. As they did so, they found a great deal of demand for their products. Fishing had long been an important part of the economy of the Netherlands; now the herring fishery alone came to occupy 2,000 boats operating out of Dutch ports. Spain, still the Dutch traders' best customer, was buying fifty large ships full of furniture and household utensils from Flanders merchants. Additionally, Dutch woolen goods were desired everywhere. The Netherlands bought and processed enough Spanish wool to sell four million florins' worth of wool products through merchants in Bruges. So strong was the Dutch appetite for raw wool at this time that they bought nearly as much English wool as Spanish. Total commerce with England alone amounted to 24 million florins, and much of the export to England was pure profit to the Dutch because the exported items were of their own manufacture. The Netherlands was just starting to enter its "Golden Age." Brabant and Flanders were the richest and most flourishing parts of the Low Countries at the time, and the Netherlands was one of the richest places in the world. The population reached 3 million in 1560, with 25 cities of 10,000 people or more, by far the largest urban presence in Europe, the trading and financial center of Antwerp (population 100,000) being especially important. Spain could not afford to lose this rich land, nor allow it to fall from Catholic control. Thus came 80 years of warfare.
A devout Catholic, Philip was appalled by the success of the Reformation in the Low Countries, which had led to an increasing number of Calvinists. His attempts to enforce religious persecution of the Protestants, and his centralization of government, law enforcement and taxes, made him unpopular and led to a revolt. Fernando Álvarez de Toledo, Duke of Alba, was sent with a Spanish army to punish the unruly Dutch in 1567.
The only opposition the Duke of Alba faced on his march across the Netherlands came from nobles such as Lamoral, Count of Egmont, and Philippe de Montmorency, Count of Horn. With the approach of Alba and the Spanish army, William the Silent of Orange fled to Germany with his three brothers and his whole family on 11 April 1567. The Duke of Alba sought to meet and negotiate with the nobles who now faced him with armies; however, when the nobles arrived in Brussels they were all arrested, and Egmont and Horn were executed. Alba then revoked all the prior treaties that Margaret, Duchess of Parma, had signed with the Protestants of the Netherlands and instituted the Inquisition to enforce the decrees of the Council of Trent.
The Eighty Years' War (1568–1648)
The Dutch war for independence from Spain is frequently called the Eighty Years' War (1568–1648). The first fifty years (1568–1618) were purely a war between Spain and the Netherlands; during the last thirty years (1618–1648) the conflict between Spain and the Netherlands was submerged in the general European war that became known as the Thirty Years' War. The seven rebellious provinces of the Netherlands were eventually united by the Union of Utrecht in 1579 and formed the Republic of the Seven United Netherlands (also known as the "United Provinces"). The Act of Abjuration, or Plakkaat van Verlatinghe, signed on 26 July 1581, was the formal declaration of independence of the northern Low Countries from the Spanish king.
William of Orange (Slot Dillenburg, 24 April 1533 – Delft, 10 July 1584), the founder of the Dutch royal family, led the Dutch during the first part of the war, following the deaths of Egmont and Horn in 1568. The very first years were a success for the Spanish troops; however, the Dutch withstood subsequent sieges in Holland. At several points the Spanish soldiers committed massacres known as the Spanish Fury; the most notorious was the sack of Antwerp in 1576, in which some 10,000 were killed.
In a war composed mostly of sieges rather than battles, Governor-General Alexander Farnese proved his mettle. His strategy was to offer generous terms for the surrender of a city: there would be no more "Spanish furies" (massacres) or looting; historic urban privileges were retained; there was a full pardon and amnesty; return to the Catholic Church would be gradual. The conservative Catholics in the south and east supported the Spanish. Farnese recaptured Antwerp and nearly all of what became Belgium. Most of the Dutch-speaking territory in the Netherlands was taken from Spain, but not Flanders, which to this day remains part of Belgium even though it had been the most radically anti-Spanish territory. Many Flemings fled to Holland, among them half the population of Antwerp, three-quarters of that of Bruges and Ghent, and the entire populations of Nieuwpoort, Dunkerque and the surrounding countryside. Farnese's successful campaign gave the Catholics control of the lower half of the Low Countries, and was part of the Catholic Counter-Reformation.
The war dragged on for another half century, but the main fighting was over. The Peace of Westphalia, signed in 1648, confirmed the independence of the United Provinces from Spain. The Dutch had begun to develop a national identity in the 15th century, but they officially remained part of the Holy Roman Empire until 1648. National identity was shaped mainly by the province people came from; Holland was by far the most important province, and the Republic of the Seven United Netherlands came to be known as "Holland" across Europe.
The Catholics in the Netherlands were an outlawed minority that had been suppressed by the Calvinists. After 1572, however, they made a striking comeback (also as part of the Catholic Counter-Reformation), setting up seminaries, reforming their Church, and sending missionaries into Protestant districts. Laity often took the lead; the Calvinist government often arrested or harassed priests who seemed too effective. Catholic numbers stabilized at about a third of the population in the Netherlands; they were strongest in the southeast.
During the Eighty Years' War the Dutch provinces became the most important trading centre of Northern Europe, replacing Flanders in this respect. During the Golden Age, there was a great flowering of trade, industry, the arts and the sciences in the Netherlands. In the 17th and 18th centuries, the Dutch were arguably the most economically wealthy and scientifically advanced of all European nations. This new, officially Calvinist nation flourished culturally and economically, creating what historian Simon Schama has called an "embarrassment of riches". Speculation in the tulip trade led to one of the first stock market crashes, in 1637, but the economic crisis was soon overcome. Due to these developments, the 17th century has been dubbed the Golden Age of the Netherlands.
The invention of the sawmill enabled the construction of a massive fleet of ships for worldwide trading and for defence of the republic's economic interests by military means. National industries such as shipyards and sugar refineries expanded as well.
The Dutch, traditionally able seafarers and keen mapmakers, obtained an increasingly dominant position in world trade, a position previously occupied by the Portuguese and Spaniards. In 1602 the Dutch East India Company (Dutch: Verenigde Oostindische Compagnie or VOC) was founded. It was the first-ever multinational corporation, financed by shares, and it established the first modern stock exchange. It became the world's largest commercial enterprise of the 17th century. To finance the growing trade within the region, the Bank of Amsterdam was established in 1609, the precursor to, if not the first, true central bank.
Dutch ships hunted whales off Svalbard, traded spices in India and Indonesia (via the Dutch East India Company) and founded colonies in New Amsterdam (now New York), South Africa and the West Indies. In addition, some Portuguese colonies were conquered, in northeastern Brazil, Angola, Indonesia and Ceylon. In 1640 the Dutch East India Company began a trade monopoly with Japan through the trading post on Dejima.
The Dutch also dominated trade between European countries. The Low Countries were favorably positioned on a crossing of east-west and north-south trade routes and connected to a large German hinterland through the Rhine river. Dutch traders shipped wine from France and Portugal to the Baltic lands and returned with grain destined for countries around the Mediterranean Sea. By the 1680s, an average of nearly 1000 Dutch ships entered the Baltic Sea each year. The Dutch were able to gain control of much of the trade with the nascent English colonies in North America and following the end of war with Spain in 1648, Dutch trade with that country also flourished.
Renaissance humanism, of which Desiderius Erasmus (c. 1466–1536) was an important advocate, had also gained a firm foothold and was partially responsible for a climate of tolerance. Overall, levels of tolerance were sufficiently high to attract religious refugees from other countries, notably Jewish merchants from Portugal who brought much wealth with them. The revocation of the Edict of Nantes in France in 1685 resulted in the immigration of many French Huguenots, many of whom were shopkeepers or scientists. Still, tolerance had its limits, as the philosopher Baruch de Spinoza (1632–1677) would find out. Due to its climate of intellectual tolerance, the Dutch Republic attracted scientists and other thinkers from all over Europe. The renowned University of Leiden (established in 1575 by the Dutch stadtholder William of Orange as a token of gratitude for Leiden's fierce resistance against Spain during the Eighty Years' War) in particular became a gathering place for such people. The French philosopher René Descartes, for instance, lived in Leiden from 1628 until 1649.
Dutch lawyers were famous for their knowledge of international law of the sea and commercial law. Hugo Grotius (1583–1645) played a leading part in the foundation of international law. Again due to the Dutch climate of tolerance, book publishers flourished. Many books about religion, philosophy and science that might have been deemed controversial abroad were printed in the Netherlands and secretly exported to other countries. Thus during the 17th century the Dutch Republic became more and more Europe's publishing house.
Christiaan Huygens (1629–1695) was a famous astronomer, physicist and mathematician. He invented the pendulum clock, a major step forward towards exact timekeeping, and contributed to the field of optics. The most famous Dutch scientist in the area of optics is certainly Anton van Leeuwenhoek, who invented or greatly improved the microscope (opinions differ) and was the first to methodically study microscopic life, thus laying the foundations for the field of microbiology. The famous Dutch hydraulic engineer Jan Leeghwater (1575–1650) gained important victories in the Netherlands' eternal battle against the sea, adding a considerable amount of land to the republic by converting several large lakes into polders and pumping the water out with windmills.
Painting was the dominant art form in 17th-century Holland. Dutch Golden Age painting followed many of the tendencies that dominated Baroque art in other parts of Europe, as with the Utrecht Caravaggisti, but led the way in developing the subjects of still life, landscape, and genre painting. Portraiture was also popular, but history painting, traditionally the most elevated genre, struggled to find buyers. Church art was virtually non-existent, and little sculpture of any kind was produced. While art collecting and painting for the open market were also common elsewhere, art historians point to the growing number of wealthy Dutch middle-class and successful mercantile patrons as driving forces in the popularity of certain pictorial subjects. Today, the best-known painters of the Dutch Golden Age are the period's most dominant figure Rembrandt, the Delft master of genre Johannes Vermeer, the innovative landscape painter Jacob van Ruisdael, and Frans Hals, who infused new life into portraiture. Some notable artistic styles and trends include Haarlem Mannerism, Utrecht Caravaggism, the School of Delft, the Leiden fijnschilders, and Dutch classicism.
Due to the thriving economy, cities expanded greatly. New town halls, weighhouses and storehouses were built. Merchants who had gained a fortune ordered new houses built along the many new canals that were dug out in and around many cities (for defence and transport purposes), houses with ornamented façades that befitted their new status. In the countryside, many new castles and stately homes were built, though most have not survived. Starting in 1595, Reformed churches were commissioned, many of which are still landmarks today. The most famous Dutch architects of the 17th century were Jacob van Campen, Pieter Post, Philips Vingboons, Lieven de Key and Hendrick de Keyser. Overall, Dutch architecture, which generally combined traditional building styles with some foreign elements, did not develop to the level of painting.
The Golden Age was also an important time for developments in literature. Some of the major figures of this period were Gerbrand Adriaenszoon Bredero, Jacob Cats, Pieter Corneliszoon Hooft and Joost van den Vondel. Since Latin was the lingua franca of education, relatively few men could speak, write, and read Dutch all at the same time.
Music did not develop very much in the Netherlands since the Calvinists considered it an unnecessary extravagance, and organ music was forbidden in Reformed Church services, although it remained common at secular functions.
The Dutch in the Americas
The Dutch West India Company was a chartered company (known as the "GWC") of Dutch merchants. On 2 June 1621, it was granted a charter for a trade monopoly in the West Indies (meaning the Caribbean) by the Republic of the Seven United Netherlands and given jurisdiction over the African slave trade, Brazil, the Caribbean, and North America. Its area of operations stretched from West Africa to the Americas and the Pacific islands. The company became instrumental in the Dutch colonization of the Americas. The first forts and settlements in Guyana and on the Amazon River date from the 1590s. Actual colonization, with Dutch settling in the new lands, was not as common as with England and France. Many of the Dutch settlements were lost or abandoned by the end of that century, but the Netherlands managed to retain possession of Suriname and a number of Dutch Caribbean islands.
New Netherland, the Dutch colony in North America, was a private business venture to exploit the fur trade in beaver pelts. It was slowly settled during its first decades, partially as a result of policy mismanagement by the Dutch West India Company (WIC) and conflicts with Native Americans. During the 1650s, the colony experienced dramatic growth and became a major port for trade in the Atlantic world, tolerating a highly diverse ethnic mix. The English seizure of Fort Amsterdam in 1664 contributed to the outbreak of the Second Anglo-Dutch War, and the surrender was formalized in 1667. In 1673 the Dutch retook the area, but later relinquished it under the 1674 Treaty of Westminster ending the Third Anglo-Dutch War.
Descendants of the original settlers played a prominent role in the History of the United States, as typified by the Roosevelt and Vanderbilt families. The Hudson Valley still boasts a Dutch heritage. The concepts of civil liberties and pluralism introduced in the province became mainstays of American political and social life.
Although slavery was illegal inside the Netherlands, it flourished in the Dutch Empire and helped support the economy. In 1619 the Netherlands took the lead in building a large-scale slave trade between Africa and Virginia, and by 1650 it had become the pre-eminent slave-trading country in Europe, a position it lost to Britain around 1700. Historians agree that in all the Dutch shipped about 550,000 African slaves across the Atlantic, about 75,000 of whom died on board before reaching their destinations. From 1596 to 1829, Dutch traders sold 250,000 slaves in the Dutch Guianas, 142,000 in the Dutch Caribbean islands, and 28,000 in Dutch Brazil. In addition, tens of thousands of slaves, mostly from India and some from Africa, were carried to the Dutch East Indies, and slaves from the East Indies were carried to Africa and the West Indies.
The Dutch in Asia: The Dutch East India Company
The Dutch East India Company, known as the VOC, was founded in 1602, when the government granted it a monopoly on trade with Asia. It achieved many world firsts: it was the first multinational corporation, the first company to issue stock, and the first megacorporation, possessing quasi-governmental powers, including the ability to wage war, negotiate treaties, coin money, and establish colonial settlements.
England and France soon copied its model but could not match its record. Between 1602 and 1796 the VOC sent almost a million Europeans to work in the Asia trade on 4,785 ships. It returned over 2.5 million tons of Asian trade goods. The VOC enjoyed huge profits from its spice monopoly through most of the 17th century. The VOC was active chiefly in the Dutch East Indies, now Indonesia, where its base was Batavia (now Jakarta). Over the next two centuries the Company acquired additional ports as trading bases and safeguarded its interests by taking over surrounding territory. It remained an important trading concern and paid an 18% annual dividend for almost 200 years. Weighed down by corruption, the VOC went bankrupt in 1800. Its possessions were taken over by the government and turned into the Dutch East Indies.
The Dutch in Africa
In 1647, a Dutch vessel was wrecked in what is now Table Bay at Cape Town. The marooned crew, the first Europeans to attempt settlement in the area, built a fort and stayed for a year until they were rescued. Shortly thereafter, the Dutch East India Company (in the Dutch of the day: Vereenigde Oostindische Compagnie, or VOC) decided to establish a permanent settlement. The VOC, one of the major European trading houses sailing the spice route to East Asia, had no intention of colonizing the area, instead wanting only to establish a secure base camp where passing ships could shelter, and where hungry sailors could stock up on fresh supplies of meat, fruit, and vegetables. To this end, a small VOC expedition under the command of Jan van Riebeeck reached Table Bay on 6 April 1652.
To remedy a labour shortage, the VOC released a small number of VOC employees from their contracts and permitted them to establish farms with which they would supply the VOC settlement from their harvests. This arrangement proved highly successful, producing abundant supplies of fruit, vegetables, wheat, and wine; they also later raised livestock. The small initial group of "free burghers", as these farmers were known, steadily increased in number and began to expand their farms further north and east.
The majority of burghers had Dutch ancestry and belonged to the Calvinist Reformed Church of the Netherlands, but there were also numerous Germans as well as some Scandinavians. In 1688 the Dutch and the Germans were joined by French Huguenots, also Calvinists, who were fleeing religious persecution in France under King Louis XIV. The Huguenots in South Africa were absorbed into the Dutch population but they played a prominent role in South Africa's history.
From the beginning, the VOC used the cape as a place to supply ships travelling between the Netherlands and the Dutch East Indies. There was a close association between the cape and these Dutch possessions in the far east. Van Riebeeck and the VOC began to import large numbers of slaves, primarily from Madagascar and Indonesia. These slaves often married Dutch settlers, and their descendants became known as the Cape Coloureds and the Cape Malays.
During the 18th century, the Dutch settlement in the area of the cape grew and prospered. By the late 1700s the Cape Colony was one of the best developed European settlements outside Europe or the Americas. The two bases of the Cape Colony's economy for almost the entirety of its history were shipping and agriculture. Its strategic position meant that almost every ship sailing between Europe and Asia stopped off at the colony's capital Cape Town. The supplying of these ships with fresh provisions, fruit, and wine provided a very large market for the surplus produce of the colony.
As some free burghers continued to expand into the rugged hinterlands of the north and east, many began to take up a semi-nomadic pastoralist lifestyle, in some ways not far removed from that of the Khoikhoi they had displaced. In addition to its herds, a family might have a wagon, a tent, a Bible, and a few guns. As they became more settled, they would build a mud-walled cottage, frequently located, by choice, days of travel from the nearest European settlement. These were the first of the Trekboers (wandering farmers, later shortened to Boers): completely independent of official controls, extraordinarily self-sufficient, and isolated from the government and the main settlement in Cape Town.
The Dutch dialect that developed among these settlers, sometimes referred to as the "kitchen language" (kombuistaal), would eventually be recognised in the late 19th century as a distinct language, Afrikaans, and would replace Dutch as the official language of the Afrikaners.
As the 18th century drew to a close, Dutch mercantile power began to fade and the British moved in to fill the vacuum. They seized the Cape Colony in 1795 to prevent it from falling into French hands, then briefly returned it to the Dutch (1803), before definitively conquering it in 1806. British sovereignty over the area was recognised at the Congress of Vienna in 1815. By the time the Dutch colony was seized by the British in 1806, it had grown into an established settlement with 25,000 slaves, 20,000 white colonists, 15,000 Khoisan, and 1,000 freed black slaves. Outside Cape Town and the immediate hinterland, isolated black and white pastoralists populated the country.
Dutch interest in South Africa was mainly as a strategically located VOC port. Yet in the 17th and 18th centuries the Dutch created the foundation of the modern state of South Africa. The Dutch legacy in South Africa is evident everywhere, but particularly in the Afrikaner people and the Afrikaans language.
Dutch Republic: Regents and Stadholders (1649–1784)
The Netherlands gained independence from Spain as a result of the Eighty Years War, during which the Dutch Republic was founded. As the Netherlands was a republic, it was largely governed by an aristocracy of city-merchants called the regents, rather than by a king. Every city and province had its own government and laws, and a large degree of autonomy. After attempts to find a competent sovereign proved unsuccessful, it was decided that sovereignty would be vested in the various provincial Estates, the governing bodies of the provinces. The Estates-General, with its representatives from all the provinces, would decide on matters important to the Republic as a whole. However, at the head of each province was the stadtholder of that province, a position held by a descendant of the House of Orange. Usually the stadtholdership of several provinces was held by a single man.
After having gained its independence in 1648, the Netherlands tried in various coalitions to help to contain France, which had replaced Spain as the strongest nation of Europe. The end of the War of the Spanish Succession (1713) marked the end of the Dutch Republic as a major player. In the 18th century, it just tried to maintain its independence and stuck to a policy of neutrality.
The economy, based on Amsterdam's role as the center of world trade, remained robust. In 1670 the Dutch merchant marine totalled 568,000 tons of shipping—about half the European total. The province of Holland was highly commercial and dominated the country. Its nobility had little influence: it was numerically small, politically weak, and formed a strictly closed caste. Most land in the province of Holland was commercialized for cash crops and was owned by urban capitalists rather than nobles; there were few links between Holland's nobility and the merchants. By 1650 the burgher families which had grown wealthy through commerce and become influential in government controlled the province of Holland, and to a large extent shaped national policies. The other six provinces were more rural and traditional in lifestyle, had an active nobility, and played a small role in commerce and national politics. Instead they concentrated on their flood protection and land reclamation projects.
The Netherlands sheltered many notable refugees, including Protestants from Antwerp and Flanders, Portuguese and German Jews, French Protestants (Huguenots) (including Descartes) and English Dissenters (including the Pilgrim Fathers). Many immigrants came to the cities of Holland in the 17th and 18th centuries from the Protestant parts of Germany and elsewhere. The share of first-generation immigrants from outside the Netherlands in Amsterdam was nearly 50% in the 17th and 18th centuries. Indeed, Amsterdam's population consisted primarily of immigrants, if one includes second- and third-generation immigrants and migrants from the Dutch countryside. People in most parts of Europe were poor and many were unemployed, but in Amsterdam there was always work. Tolerance was important, because a continuous influx of immigrants was necessary for the economy. Travellers visiting Amsterdam reported their surprise at the lack of control over the influx.
The era of explosive economic growth is roughly coterminous with the period of social and cultural bloom that has been called the Dutch Golden Age, and it formed the material basis for that cultural era. Amsterdam became the hub of world trade, the center into which staples and luxuries flowed for sorting, processing, and distribution, before being re-exported around Europe and the world.
Between 1585 and 1622 there was a rapid accumulation of trade capital, often brought in by refugee merchants from Antwerp and other ports. The money was typically invested in high-risk ventures like pioneering expeditions to the East Indies to engage in the spice trade. These ventures were soon consolidated in the Dutch East India Company (VOC). There were similar ventures in other fields, however, such as the trade with Russia and the Levant. The profits of these ventures were ploughed back into the financing of new trade, which led to its exponential growth.
Industrialization led to rapid growth of the nonagricultural labor force and to rising real wages during the same period. In the half-century between 1570 and 1620 this labor supply increased 3 percent per annum, a truly phenomenal growth. Despite this, nominal wages were repeatedly increased, outstripping price increases. In consequence, real wages for unskilled laborers were 62 percent higher in 1615–1619 than in 1575–1579.
By the mid-1660s Amsterdam had reached the optimum population (about 200,000) for the level of trade, commerce and agriculture then available to support it. The city contributed the largest quota in taxes to the States of Holland, which in turn contributed over half the quota to the States General. Amsterdam was also one of the most reliable in settling tax demands and therefore was able to use the threat of withholding such payments to good effect.
Amsterdam was governed by a body of regents, a large but closed oligarchy with control over all aspects of the city's life, and a dominant voice in the foreign affairs of Holland. Only men with sufficient wealth and a long enough residence within the city could join the ruling class. The first step for an ambitious and wealthy merchant family was to arrange a marriage with a long-established regent family. In the 1670s one such union, that of the Trip family (the Amsterdam branch of the Swedish arms makers) with the son of Burgomaster Valckenier, extended the influence and patronage available to the latter and strengthened his dominance of the council. The oligarchy in Amsterdam thus gained strength from its breadth and openness. In the smaller towns family interest could unite members on policy decisions, but narrowing through intermarriage could lead to a degeneration in the quality of the members.
In Amsterdam the network was so large that members of the same family could be related to opposing factions and pursue widely separated interests. The young men who had risen to positions of authority in the 1670s and 1680s consolidated their hold on office well into the 1690s and even the new century.
Amsterdam's regents provided good services to residents. They spent heavily on the water-ways and other essential infrastructure, as well as municipal almshouses for the elderly, hospitals and churches.
Amsterdam's wealth was generated by its commerce, which was in turn sustained by the judicious encouragement of entrepreneurs whatever their origin. This open-door policy has been interpreted as proof of a tolerant ruling class, but toleration was practiced for the convenience of the city. Thus, wealthy Sephardic Jews from Portugal were welcomed and accorded all privileges except those of citizenship, while poor Ashkenazi Jews from Eastern Europe were far more carefully vetted, and those who became dependent on the city were encouraged to move on. Similarly, provision for the housing of Huguenot immigrants was made in 1681, when Louis XIV's religious policy was beginning to drive these Protestants out of France; no such encouragement was given to dispossessed Dutch from the countryside or other towns of Holland. By the 1670s the regents encouraged immigrants to build churches and provided sites or buildings for churches and temples for all except the most radical sects and the Catholics (although even the Catholics could practice quietly in a chapel within the Begijnhof).
First Stadtholderless Period and the Anglo-Dutch Wars (1650–1674)
During the wars a tension had arisen between the Orange-Nassau leaders and the patrician merchants. The former – the Orangists – were soldiers and centralizers who seldom spoke of compromise with the enemy and looked for military solutions. They included many rural gentry as well as ordinary folk attached to the banner of the House of Orange. The latter – the Republicans, led by the Grand Pensionary (a sort of prime minister) and the regents – stood for localism, municipal rights, commerce, and peace. In 1650, the stadtholder William II, Prince of Orange suddenly died; his son was a baby and the Orangists were leaderless. The regents seized the opportunity: there would be no new stadtholder in Holland for 22 years. Johan de Witt, a brilliant politician and diplomat, emerged as the dominant figure. Princes of Orange regained the stadtholderate and became almost hereditary rulers in 1672 and again in 1748. The Dutch Republic of the United Provinces was thus a true republic from 1650 to 1672 and from 1702 to 1748, periods known as the First Stadtholderless Period and the Second Stadtholderless Period.
The Republic and England were major rivals in world trade and naval power. Halfway through the 17th century the Republic's navy rivalled Britain's Royal Navy as the most powerful in the world. The Republic fought a series of three naval wars against England between 1652 and 1674.
In 1651, England imposed its first Navigation Act, which severely hurt Dutch trade interests. An incident at sea concerning the Act resulted in the First Anglo-Dutch War, which lasted from 1652 to 1654, ending in the Treaty of Westminster (1654), which left the Navigation Act in effect.
After the English Restoration in 1660, Charles II tried to serve his dynastic interests by attempting to make Prince William III of Orange, his nephew, stadtholder of the Republic, using some military pressure. King Charles thought a naval war would weaken the Dutch traders and strengthen the English economy and empire, so the Second Anglo-Dutch War was launched in 1665. At first many Dutch ships were captured and the English scored great victories. However, the Raid on the Medway, in June 1667, ended the war with a Dutch victory. The Dutch recovered their trade, while the English economy was seriously hurt and its treasury nearly bankrupt. The greatly expanded Dutch navy was for years after the world's strongest. The Dutch Republic was at the zenith of its power.
Franco-Dutch War and Third Anglo-Dutch War (1672–1702)
The year 1672 is known in the Netherlands as the "Disaster Year" (Rampjaar). England declared war on the Republic, (the Third Anglo-Dutch War), followed by France, Münster and Cologne, which had all signed alliances against the Republic. France, Cologne and Münster invaded the Republic. Johan de Witt and his brother Cornelis, who had accomplished a diplomatic balancing act for a long time, were now the obvious scapegoats. They were lynched, and a new stadtholder, William III, was appointed.
An Anglo-French attempt to land on the Dutch shore was barely repelled in three desperate naval battles under the command of Admiral Michiel de Ruyter. The advance of French troops from the south was halted by a costly inundation of the Dutch heartland, achieved by breaching the river dykes. With the aid of friendly German princes, the Dutch succeeded in driving back Cologne and Münster, after which peace was signed with both of them, although some territory in the east was lost forever. Peace was signed with England as well, in 1674 (Second Treaty of Westminster). In 1678, peace was made with France at the Treaty of Nijmegen, although France's Spanish and German allies felt betrayed by it.
In 1688, relations with England reached crisis level once again. Stadtholder William III decided he had to take a huge gamble when he was invited to invade England by Protestant British nobles feuding with William's father-in-law, the Catholic James II of England. This led to the Glorious Revolution and cemented the principle of parliamentary rule and Protestant ascendancy in England. James fled to France, and William ascended to the English throne as co-monarch with his wife Mary, James' eldest daughter. This manoeuvre secured England as a critical ally of the United Provinces in its ongoing wars with Louis XIV of France. William was the commander of the Dutch and English armies and fleets until his death in 1702. During William's reign as King of England his primary focus was leveraging British manpower and finances to aid the Dutch against the French. The combination continued after his death as the combined Dutch, British, and mercenary army conquered Flanders and Brabant, and invaded French territory before the alliance collapsed in 1713 due to British political infighting.
Second Stadtholderless Period (1702–1747)
The Second Stadtholderless Period (Dutch: Tweede Stadhouderloze Tijdperk) is the designation in Dutch historiography of the period between the death of stadtholder William III on 19 March 1702 and the appointment of William IV, Prince of Orange as stadtholder and captain general in all provinces of the Dutch Republic on 2 May 1747. During this period the office of stadtholder was left vacant in the provinces of Holland, Zeeland, and Utrecht, though in other provinces that office was filled by members of the House of Nassau-Dietz (later called Orange-Nassau) during various periods.
During the period, the Republic lost its Great-Power status and its primacy in world trade, processes that went hand-in-hand, the latter causing the former. Though the economy declined considerably, causing deindustrialization and deurbanization in the maritime provinces, a rentier class kept accumulating a large capital fund that formed the basis for the leading position the Republic achieved in the international capital market. A military crisis at the end of the period caused the fall of the States-Party regime and the restoration of the stadtholderate in all provinces. However, though the new stadtholder acquired near-dictatorial powers, this did not improve the situation.
Economic decline after 1730
The slow economic decline after 1730 was relative: other countries grew faster, eroding the Dutch lead and surpassing it. Wilson identifies three causes. First, Holland lost its world dominance in trade as competitors emerged, copied its practices, built their own ships and ports, and traded directly on their own account without going through Dutch intermediaries. Second, there was no growth in manufacturing, due perhaps to a weaker sense of industrial entrepreneurship and to the high wage scale. Third, the wealthy turned their investments to foreign loans. This helped jump-start other nations and provided the Dutch with a steady income from collecting interest, but it left them with few domestic sectors with a potential for rapid growth.
After the Dutch fleet declined, merchant interests became dependent on the goodwill of Britain. The main focus of Dutch leaders was reducing the country's considerable budget deficits. Dutch trade and shipping remained at a fairly steady level through the 18th century, but no longer had a near monopoly and also could not match growing English and French competition. The Netherlands lost its position as the trading centre of Northern Europe to London.
Although the Netherlands remained wealthy, profitable investments for the nation's capital became more difficult to find. Some investment went into purchases of land for estates, but most went into foreign bonds, and Amsterdam remained one of Europe's banking capitals.
Culture and society
Dutch culture also declined both in the arts and sciences. Literature for example largely imitated English and French styles with little in the way of innovation or originality. The most influential intellectual was Pierre Bayle (1647–1706), a Protestant refugee from France who settled in Rotterdam where he wrote the massive Dictionnaire Historique et Critique (Historical and Critical Dictionary, 1696). It had a major impact on the thinking of The Enlightenment across Europe, giving an arsenal of weapons to critics who wanted to attack religion. It was an encyclopaedia of ideas that argued that most "truths" were merely opinions, and that gullibility and stubbornness were prevalent.
Life for the average Dutchman became slower and more relaxed in the 18th century than it had been in the 17th. The upper and middle classes continued to enjoy prosperity and high living standards. The drive to succeed seemed less urgent. Unskilled laborers remained locked in poverty and hardship. The large underclass of unemployed beggars and riffraff required government and private charity to survive.
Religious life became more relaxed as well. Catholics grew from 18% to 23% of the population during the 18th century and enjoyed greater tolerance, even as they continued to be outside the political system. They became divided by the feud between moralistic Jansenists (who denied free will) and orthodox believers. One group of Jansenists formed a splinter sect, the Old Catholic Church, in 1723. The upper classes willingly embraced the ideas of the Enlightenment, tempered by a tradition of tolerance that meant less hostility to organized religion than in France.
The Orangist revolution (1747–1751)
During the term of Anthonie van der Heim as Grand Pensionary from 1737 to 1746, the Republic slowly drifted into the War of the Austrian Succession. This started as a Prusso-Austrian conflict, but eventually all the neighbours of the Dutch Republic became involved. On one side were Prussia, France and their allies and on the other Austria, Britain (after 1744) and their allies. At first the Republic strove to remain neutral in this European conflict, but it maintained garrisons in a number of fortresses in the Austrian Netherlands. French grievances and threats spurred the Republic into bringing its army up to European standards (84,000 men in 1743).
In 1744 and 1745 the French attacked Dutch fortresses at Menen and Tournai. This prompted the Dutch Republic in 1745 to join the Quadruple Alliance, but this alliance was severely defeated at the Battle of Fontenoy in May 1745. In 1746 the French occupied most of the large cities in the Austrian Netherlands. Then, in April 1747, apparently as an exercise in armed diplomacy, a relatively small French military force occupied Zeelandic Flanders, part of the Dutch Republic.
This relatively innocuous invasion fully exposed the rot underlying the Dutch defences. The consequences were spectacular. Still mindful of the French invasion in the "Disaster Year" of 1672, many fearful people clamored for the restoration of the stadtholderate. William IV, Prince of Orange, had been waiting impatiently in the wings since acquiring his princely title in 1732. Over the next year he and his supporters engaged in a number of political battles in various provinces and towns in the Netherlands to wrest control from the regents. The aim was for William IV to obtain a firm grip on government patronage and place loyal officials in all strategic government positions. Eventually he managed to achieve this aim in all provinces.
Willem Bentinck van Rhoon was a prominent Orangist. People like Bentinck hoped that gathering the reins of power in the hands of a single "eminent head" would soon help restore the state of the Dutch economy and finances. The regents they opposed included the Grand Pensionary Jacob Gilles and Adriaen van der Hoop. This popular revolt had religious, anti-Catholic and democratic overtones and sometimes involved mob violence. It eventually involved political agitation by Daniel Raap, Jean Rousset de Missy and the Doelisten; attacks on tax farmers (pachtersoproer); religious agitation for enforcement of the Sabbath laws and preference for followers of Gisbertus Voetius; and various demands by the civil militia.
The war against the French was itself brought to a not-too-devastating end for the Dutch Republic with the Treaty of Aix-la-Chapelle (1748). The French retreated of their own accord from the Dutch frontier. William IV died unexpectedly, at the age of 40, on 22 October 1751.
Regency and indolent rule (1752–1779)
William IV's son, William V, was three years old when his father died, and a long regency characterised by corruption and misrule began. His mother delegated most of the powers of the regency to Bentinck and her favorite, Duke Louis Ernest of Brunswick-Lüneburg. All power was concentrated in the hands of an unaccountable few, including the Frisian nobleman Douwe Sirtema van Grovestins. Still a teenager, William V assumed the position of stadtholder in 1766, the last to hold that office. In 1767, he married Princess Wilhelmina of Prussia, the daughter of Augustus William of Prussia and niece of Frederick the Great.
The position of the Dutch during the American War of Independence was one of neutrality. William V, leading the pro-British faction within the government, blocked attempts by pro-independence, and later pro-French, elements to drag the government into war. However, things came to a head with the Dutch attempt to join the Russian-led League of Armed Neutrality, leading to the outbreak of the disastrous Fourth Anglo-Dutch War in 1780. After the signing of the Treaty of Paris (1783), the impoverished nation grew restless under William's rule.
An English historian summed him up uncharitably as "a Prince of the profoundest lethargy and most abysmal stupidity." And yet he would guide his family through the difficult French-Batavian period and his son would be crowned king.
Fourth Anglo-Dutch War (1780–1784)
The Fourth Anglo–Dutch War (1780–1784) was a conflict between the Kingdom of Great Britain and the Dutch Republic. The war, tangentially related to the American Revolutionary War, broke out over British and Dutch disagreements on the legality and conduct of Dutch trade with Britain's enemies in that war.
Although the Dutch Republic did not enter into a formal alliance with the United States and their allies, U.S. ambassador (and future President) John Adams managed to establish diplomatic relations with the Dutch Republic, making it the second European country to diplomatically recognize the Continental Congress, in April 1782. In October 1782, a treaty of amity and commerce was concluded as well.
Most of the war consisted of a series of largely successful British operations against Dutch colonial economic interests, although British and Dutch naval forces also met once off the Dutch coast. The war ended disastrously for the Dutch and exposed the weakness of the political and economic foundations of the country. The Treaty of Paris (1784), according to Fernand Braudel, "sounded the knell of Dutch greatness."
The French-Batavian period (1785–1815)
After the war with Great Britain ended disastrously in 1784, there was growing unrest and a rebellion by the anti-Orangist Patriots. The French Revolution resulted first in the establishment of a pro-French Batavian Republic (1795–1806), then the creation of the Kingdom of Holland, ruled by a member of the House of Bonaparte (1806–1810), and finally annexation by the French Empire (1810–1813).
Patriot rebellion and its suppression (1785–1795)
Influenced by the American Revolution, the Patriots sought a more democratic form of government. The opening shot of this revolution is often considered to be the 1781 publication of a manifesto called Aan het Volk van Nederland ("To the People of the Netherlands") by Joan van der Capellen tot den Pol, who would become an influential leader of the Patriot movement. Their aim was to reduce corruption and the power held by the stadtholder, William V, Prince of Orange.
Support for the Patriots came mostly from the middle class. They formed militias called exercitiegenootschappen. In 1785, there was an open Patriot rebellion, which took the form of an armed insurrection by local militias in certain Dutch towns, with "Freedom" as the rallying cry. Herman Willem Daendels attempted to organise an overthrow of various municipal governments (vroedschap). The goal was to oust government officials and force new elections. "Seen as a whole this revolution was a string of violent and confused events, accidents, speeches, rumours, bitter enmities and armed confrontations", wrote French historian Fernand Braudel, who saw it as a forerunner of the French Revolution.
In 1785 the stadtholder left The Hague and moved his court to Nijmegen in Guelders, a city remote from the heart of Dutch political life. In June 1787, his energetic wife Wilhelmina (the sister of Frederick William II of Prussia) tried to travel to The Hague. Outside Schoonhoven, she was stopped by Patriot militiamen and taken to a farm near Goejanverwellesluis. Within two days she was forced to return to Nijmegen, an insult that did not go unnoticed in Prussia.
The House of Orange reacted with severity, relying on Prussian troops led by Charles William Ferdinand, Duke of Brunswick, and a small contingent of British troops to suppress the rebellion. Britain had its own reasons to be involved: Dutch banks at this time still held much of the world's capital, government-sponsored banks owned up to 40% of Great Britain's national debt, and there were close connections to the House of Stuart. The stadtholder, moreover, had supported British policies after the American Revolution.
This severe military response overwhelmed the Patriots and put the stadtholder firmly back in control. A small unpaid Prussian army was billeted in the Netherlands and supported itself by looting and extortion. The exercitiegenootschappen continued urging citizens to resist the government. They distributed pamphlets, formed "Patriot Clubs" and held public demonstrations. The government responded by pillaging those towns where opposition continued. Five leaders were sentenced to death (but fled first). Lynchings also occurred. For a while, no one dared appear in public without an orange cockade to show their support for Orangism. Many Patriots, perhaps around 40,000 in all, fled to Brabant, France (especially Dunkirk and St. Omer) and elsewhere. However, before long the French became involved in Dutch politics and the tide turned.
Batavian Republic (1795–1806)
The French Revolution was popular, and numerous underground clubs were promoting it when in January 1795 the French army invaded. The underground rose up, overthrew the municipal and provincial governments, and proclaimed the Batavian Republic (Dutch: Bataafse Republiek) in Amsterdam. Stadtholder William V fled to England and the States General dissolved itself. The new government was virtually a puppet of France. The Batavian Republic enjoyed widespread support and sent soldiers to fight in the French armies. The 1799 Anglo-Russian invasion of Holland was repulsed by Batavian–French forces. Nevertheless, Napoleon replaced it because the regime of Grand Pensionary Rutger Jan Schimmelpenninck (1805–06) was insufficiently docile.
The confederal structure of the old Dutch Republic was permanently replaced by a unitary state. The 1798 constitution had a genuinely democratic character, though a coup d'état of 1801 put an authoritarian regime in power. Ministerial government was introduced for the first time in Dutch history, and many of the current government departments date their history back to this period. Meanwhile, the exiled stadtholder handed over the Dutch colonies in "safekeeping" to Great Britain and ordered the colonial governors to comply. This permanently ended Dutch colonial rule in Guyana, Ceylon and the Cape Colony. The Dutch East Indies was returned to the Netherlands under the Anglo-Dutch Treaty of 1814.
Kingdom of Holland to William I (1806–1815)
In 1806 Napoleon restyled the Netherlands (along with a small part of what is now Germany) into the Kingdom of Holland, putting his brother Louis Bonaparte (1778–1846), on the throne. The new king was unpopular, but he was willing to cross his brother for the benefit of his new kingdom. Napoleon forced his abdication in 1810 and incorporated the Netherlands directly into the French empire, imposing economic controls and conscription of all young men as soldiers. When the French retreated from the northern provinces in 1813, a Triumvirate took over at the helm of a provisional government. Although most members of the provisional government had been among the men who had driven out William V 18 years earlier, the leaders of the provisional government knew that any new regime would have to be headed by his son, William Frederick. They also knew that it would be better in the long term if the Dutch people themselves installed the prince, rather than have him imposed on the country by the anti-French alliance. Accordingly, the Triumvirate called William Frederick back on November 30 and offered him the crown. He refused, but instead proclaimed himself "hereditary sovereign prince" on December 6.
The Great Powers had secretly agreed to merge the northern Netherlands with the more populated Austrian Netherlands and the smaller Prince-Bishopric of Liège into a single constitutional monarchy. Having a stronger country on France's northern border was considered (especially by Tsar Alexander) to be an important part of the strategy to keep France's power in check. In 1814, William Frederick gained sovereignty over the Austrian Netherlands and Liège as well. On March 15, 1815, with the encouragement of the powers gathered at the Congress of Vienna, William Frederick raised the Netherlands to the status of a kingdom and proclaimed himself King William I. This was made official later in 1815, when the Low Countries were formally recognized as the United Kingdom of the Netherlands, with the House of Orange-Nassau as hereditary rulers. William had thus fulfilled the nearly three-century quest of the House of Orange to unite the Low Countries under a single rule.
United Kingdom of the Netherlands (1815–1839)
William I became king and also became the hereditary Grand Duke of Luxembourg, which was part of the Netherlands but at the same time part of the German Confederation. The newly created country had two capitals: Amsterdam and Brussels. The new nation had two equal parts. The north (Netherlands proper) had 2 million people. They spoke chiefly Dutch but were divided religiously between a Protestant majority and a large Catholic minority. The south (which would be known as "Belgium" after 1830) had a population of 3.4 million people. Nearly all were Catholic, but it was divided between French-speaking Walloons and Dutch-speaking Flemings. The upper and middle classes in the south were mostly French-speaking. About 60,000 Belgians were eligible to vote, compared to about 80,000 Dutchmen. Officially Amsterdam was the capital, but in a compromise the government met alternately in Brussels and The Hague.
Adolphe Quetelet (1796–1874), the great Belgian statistician, calculated that the new nation was significantly better off than other states. Mortality was low, the food supply was good, education was good, public awareness was high and the charity rate was the highest in the world. The best years were in the mid-1820s.
The quality of schooling was dismal, however. According to Schama, about 1800 the local school teacher was the "humble auxiliary of the local priest. Despised by his co-villagers and forced to subsist on the gleanings of the peasants, he combined drumming the catechism into the heads of his unruly charges with the duties of winding the town clock, ringing the church bells or digging its graves. His principal use to the community was to keep its boys out of mischief when there was no labour for them in the fields, or setting the destitute orphans of the town to the 'useful arts' of picking tow or spinning crude flax. As one would expect, standards in such an occupation were dismal." But in 1806 the Dutch, led by Adriaan van den Ende, energetically set out to modernise education, focusing on a new system for advanced training of teachers with an elaborate system of inspectors, training courses, teacher examinations and teaching societies. By 1826, although the Netherlands was much smaller than France, its national government was spending 12 times more than Paris on education.
William I, who reigned from 1815–1840, had great constitutional power. An enlightened despot, he accepted the modernizing transformations of the previous 25 years, including equality of all before the law. However, he resurrected the estates as a political class and elevated a large number of people to the nobility. Voting rights were still limited, and only the nobility were eligible for seats in the upper house. The old provinces were reestablished in name only. The government was now fundamentally unitary, and all authority flowed from the center.
William I was a Calvinist and unsympathetic to the religious culture and practices of the Catholic majority. He promulgated the "Fundamental Law of Holland", with some modifications. This entirely overthrew the old order of things in the southern Netherlands: it abolished the privileges of the Catholic Church, and guaranteed equal protection to every religious creed and the enjoyment of the same civil and political rights to every subject of the king. It reflected the spirit of the French Revolution and in so doing did not please the Catholic bishops in the south, who had detested the Revolution.
William I actively promoted economic modernization. The first 15 years of the Kingdom showed progress and prosperity, as industrialization proceeded rapidly in the south, where the Industrial Revolution allowed entrepreneurs and labor to combine in a new textile industry, powered by local coal mines. There was little industry in the northern provinces, but most overseas colonies were restored, and highly profitable trade resumed after a 25-year hiatus. Economic liberalism combined with moderate monarchical authoritarianism to accelerate the adaptation of the Netherlands to the new conditions of the 19th century. The country prospered until a crisis arose in relations with the southern provinces.
Belgium breaks away
William was determined to create a united people, even though the north and south had drifted far apart in the past three centuries. Protestants were the largest denomination in the North (population 2 million), but formed a quarter of the population in the overwhelmingly Catholic South (population 3.5 million). Nevertheless, Protestants dominated William's government and army. The Catholics did not consider themselves an integral part of the United Netherlands, preferring instead to identify with mediaeval Dutch culture. Other factors that contributed to this feeling were economic (the South was industrialising, the North had always been a merchants' nation) and linguistic (French was spoken in Wallonia and by a large part of the bourgeoisie in Flemish cities).
After having been dominant for a long time, the French-speaking elite in the Southern Netherlands now felt like second-class citizens. In the Catholic South, William's policies were unpopular. The French-speaking Walloons strenuously rejected his attempt to make Dutch the universal language of government, while the population of Flanders was divided. Flemings in the south spoke a Dutch dialect ("Flemish") and welcomed the encouragement of Dutch with a revival of literature and popular culture. Other Flemings, notably the educated bourgeoisie, preferred to speak French. Although Catholics possessed legal equality, they resented their subordination to a government that was fundamentally Protestant in spirit and membership, after their church had been the state church in the south for centuries. Few Catholics held high office in state or army. Furthermore, political liberals in the south complained about the king's authoritarian methods. All southerners complained of underrepresentation in the national legislature. Although the south was industrializing and was more prosperous than the north, the accumulated grievances allowed the multiple opposition forces to coalesce. The outbreak of revolution in France in 1830 was a signal for action, at first on behalf of autonomy for Belgium, as the southern provinces were now called, and later on behalf of total independence. William dithered and his half-hearted efforts to reconquer Belgium were thwarted both by the efforts of the Belgians themselves and by the diplomatic opposition of the great powers.
At the London Conference of 1830, the chief powers of Europe ordered (in November 1830) an armistice between the Dutch and the Belgians. The first draft for a treaty of separation of Belgium and the Netherlands was rejected by the Belgians. A second draft (June 1831) was rejected by William I, who resumed hostilities. Franco-British intervention forced William to withdraw Dutch forces from Belgium late in 1831, and in 1833 an armistice of indefinite duration was concluded. Belgium was effectively independent but William’s attempts to recover Luxembourg and Limburg led to renewed tension. The London Conference of 1838–39 prepared the final Dutch-Belgian separation treaty of 1839. It divided Luxembourg and Limburg between the Dutch and Belgian crowns. The Kingdom of the Netherlands thereafter was made up of the 11 northern provinces.
Democratic and Industrial Development (1840–1900)
The Netherlands did not industrialize as rapidly as Belgium after 1830, but it was prosperous enough. Griffiths argues that certain government policies facilitated the emergence of a national economy in the 19th century. They included the abolition of internal tariffs and guilds, a unified coinage system, modern methods of tax collection, standardized weights and measures, and the building of many roads, canals, and railroads. However, compared to Belgium, which was leading in industrialization on the Continent, the Netherlands moved slowly. Possible explanations for this difference are the higher costs due to geography and high wages, and the emphasis of entrepreneurs on trade rather than industry. For example, in the Dutch coastal provinces agricultural productivity was relatively high. Hence, industrial growth arrived relatively late – after 1860 – because incentives to move to labour-intensive industry were quite weak. However, the provinces of North Brabant and Overijssel did industrialize, and they became the most economically advanced areas of the country.
As in the rest of Europe, the 19th century saw the gradual transformation of the Netherlands into a modern middle-class industrial society. The number of people employed in agriculture decreased, while the country made a strong effort to revive its stake in the highly competitive shipping and trade business. The Netherlands lagged behind Belgium until the late 19th century in industrialization, and caught up around 1920. Major industries included textiles and (later) the great Philips industrial conglomerate. Rotterdam became a major shipping and manufacturing center. Poverty slowly declined as begging largely disappeared along with steadily improving working conditions for the population.
1848 Constitutional reform and liberalism
In 1840 William I abdicated in favor of his son, William II, who attempted to carry on the policies of his father in the face of a powerful liberal movement. In 1848 unrest broke out all over Europe. Although there were no major events in the Netherlands, these foreign developments persuaded King William II to agree to liberal and democratic reform. That same year Johan Rudolf Thorbecke, a prominent liberal, was asked by the king to draft a constitution that would turn the Netherlands into a constitutional monarchy. The new constitution, accepted by the legislature and proclaimed on 3 November 1848, severely limited the king's powers, put the government under the control of the States General and made it accountable to an elected parliament, and protected civil liberties. The relationship between monarch, government and parliament has remained essentially unchanged ever since.
William II was succeeded by William III in 1849. The new king reluctantly chose Thorbecke to head the new government, which introduced several liberal measures, notably the extension of suffrage. However, Thorbecke's government soon fell, when Protestants rioted against the Vatican's reestablishment of the Catholic episcopate, in abeyance since the 16th century. A conservative government was formed, but it did not undo the liberal measures, and the Catholics were finally given equality after two centuries of subordination. Dutch political history from the middle of the 19th century until the First World War was fundamentally one of the extension of liberal reforms in government, the reorganization and modernization of the Dutch economy, and the rise of trade unionism and socialism as working-class movements independent of traditional liberalism. The growth in prosperity was enormous, as real per capita GNP soared from 106 guilders in 1804 to 403 in 1913.
Religion and pillarisation
Religion was a contentious issue with repeated struggles over the relations of church and state in the field of education. In 1816, the government took full control of the Dutch Reformed Church (Nederlands Hervormde Kerk). In 1857, all religious instruction was ended in public schools, but the various churches set up their own schools, and even universities. Dissident members broke away from the Netherlands Reformed Church in the Secession of 1834. They were harassed by the government under an onerous Napoleonic law prohibiting gatherings of more than 20 members without a permit. After the harassment ended in the 1850s, a number of these dissidents eventually created the Christian Reformed Church in 1869; thousands migrated to Michigan, Illinois, and Iowa in the United States. By 1900 the dissidents represented about 10% of the population, as against 45% in the Netherlands Reformed Church, which continued to be the only church to receive state money.
At mid-century, most Dutch belonged either to the Dutch Reformed churches (around 55%) or the Roman Catholic church (35 to 40%), together with smaller Protestant and Jewish groups. A large and powerful sector of nominal Protestants were in fact secular liberals seeking to minimize religious influence. In reaction a novel alliance developed with Catholics and devout Calvinists joining against secular liberals. The Catholics, who had been loosely allied with the liberals in earlier decades, turned against them on the issue of state support, which the liberals insisted should be granted only to public schools, and joined with Protestant political parties in demanding equal state support to schools maintained by religious groups.
The Netherlands remained one of the most tolerant countries in Europe towards religious belief, although conservative Protestants objected to the liberalization of the Dutch Reformed Church during the 19th century and faced opposition from the government when they tried to establish separate communities (Catholics and other non-Protestants were left unmolested by Dutch authorities). Some moved to the United States as a consequence, but as the century drew to a close, religious persecution had totally ceased.
Dutch social and political life became divided by fairly clear-cut internal borders that were emerging as the society pillarized into three separate parts based on religion. The economy was not affected. One of the people most responsible for designing pillarization was Abraham Kuyper (1837–1920), a leading politician, Protestant theologian, and journalist. Kuyper established orthodox Calvinist organizations, and also provided a theoretical framework by developing such concepts as "sphere-sovereignty" that celebrated Dutch society as a society of organized minorities. Verzuiling ("pillarization" or "pluralism") after 1850 became the solution to the danger of internal conflict. Everyone was part of one (and only one) pillar (zuil) based chiefly on religion (Protestant, Catholic, secular). The secular pillar eventually split into a socialist/working class pillar and a liberal (pro-business) secular pillar. Each pillar built a full set of its own social organizations, including churches (for the religious pillars), political parties, schools, universities, labor unions, sport clubs, boy scout unions and other youth clubs, and newspapers. The members of different zuilen lived in close proximity in cities and villages, spoke the same language, and did business with one another, but seldom interacted informally and rarely intermarried. In politics Kuyper formed the Anti-Revolutionary Party (ARP) in 1879, and headed it until 1905.
Pillarization was officially recognized in the Pacification of 1917, whereby socialists and liberals achieved their goal of universal male suffrage and the religious parties were guaranteed equal funding of all schools. In 1930 radio was organized so that each pillar had full control of its own network. When television began in the late 1940s, the pillars divided up time equally on the one station. In politics and civic affairs, leaders of the pillar organizations cooperated and acknowledged the rights of the other pillars, so public life generally ran smoothly.
Flourishing of art, culture and science
The late 19th century saw a cultural revival. The Hague School brought a revival of realist painting, 1860–1890. The most famous Dutch painter of the era was Vincent van Gogh, though he spent most of his career in France. Literature, music, architecture and science also flourished. A representative leader of science was Johannes Diderik van der Waals (1837–1923), a working-class youth who taught himself physics, earned a PhD at the nation's leading school, Leiden University, and in 1910 won the Nobel Prize for his discoveries in thermodynamics. Hendrik Lorentz (1853–1928) and his student Pieter Zeeman (1865–1943) shared the 1902 Nobel Prize in physics. Other notable scientists included biologist Hugo de Vries (1848–1935), who rediscovered Mendelian genetics.
1900 to 1940
In 1890, William III died after a long reign and was succeeded by his young daughter, Queen Wilhelmina (1880–1962). She would rule the Netherlands for 58 years. On her accession to the throne, the personal union between the Netherlands and Luxembourg ended because Luxembourg law excluded women from rule. Her remote cousin Adolphe became the Grand Duke of Luxembourg.
This was a time of further growth and colonial development, but it was marked by the difficulties of World War I (in which the Netherlands was neutral) and the Great Depression. The Dutch population grew rapidly in the 20th century, as death rates fell, more lands were opened up, and industrialisation created urban jobs. Between 1900 and 1950 the population doubled from 5.1 to 10 million people.
The Dutch empire comprised the Dutch East Indies (Indonesia), as well as Surinam in South America and some minor possessions. It was smaller in 1945 than in 1815 because the Netherlands was the only colonial power that did not expand into Africa or anywhere else. The empire was run from Batavia (in Java), where the governor and his technical experts had almost complete authority with little oversight from The Hague. Successive governors improved their bureaucratic and military controls, and allowed very little voice to the locals until the 1920s.
The colony brought economic opportunity to the mother country, and at the time its exploitation raised little concern. One exception came in 1860 when Eduard Dekker, under the pen name "Multatuli", wrote the novel Max Havelaar: Or the Coffee Auctions of the Dutch Trading Company, one of the most notable books in the history of Dutch literature. He criticized the exploitation of the colony and also had harsh words about the indigenous princes who collaborated with the governor. The book helped inspire the Indonesian independence movement in the mid-20th century as well as the "Fair trade" movement for coffee at the end of the century.
The military forces in the Dutch East Indies were controlled by the governor and were not part of the regular Dutch army. As the map shows, the Dutch slowly expanded their holdings from their base in Java to include all of modern Indonesia by 1920. Most islands were not a problem but there was a long, costly campaign against the Achin (Aceh) state in northern Sumatra.
The Netherlands had not fought a major military campaign since the 1760s, and the strength of its armed forces had gradually dwindled. The Dutch decided not to ally themselves with anyone and kept out of all European wars, especially the First World War, which swirled about it.
Neutrality during the First World War
The German war plan (the Schlieffen Plan) of 1905 was modified in 1908 to invade Belgium on the way to Paris but not the Netherlands. The Netherlands supplied many essential raw materials to Germany such as rubber, tin, quinine, oil and food. The British used their blockade to limit the supplies that the Dutch could pass on. There were other factors that made it expedient for both the Allies and the Central Powers for the Netherlands to remain neutral. The Netherlands controlled the mouths of the Scheldt, the Rhine and the Meuse Rivers. Germany had an interest in the Rhine since it ran through the industrial areas of the Ruhr and connected it with the Dutch port of Rotterdam. Britain had an interest in the Scheldt, and the Meuse flowed from France. All countries had an interest in keeping the others out of the Netherlands, so that no one's access to these rivers could be taken away or changed. If one country had invaded the Netherlands, another would certainly have counterattacked to defend its own interest in the rivers. It was too big a risk for any of the belligerent nations, and none wanted to open another front.
The Dutch were nevertheless affected by the war: troops were mobilized, and conscription was introduced in the face of harsh criticism from opposition parties. In 1918, mutinies broke out in the military. Food shortages were extensive, due to the control the belligerents exercised over the Dutch; each wanted its share of Dutch produce. The price of potatoes rose sharply because Britain demanded so much from the Dutch, and food riots even broke out in the country. A big problem was smuggling. When Germany conquered Belgium, the Allies saw it as enemy territory and stopped exporting to it. Food became scarce for the Belgian people, since the Germans seized all food, and this gave the Dutch the opportunity to smuggle. That, however, caused great problems in the Netherlands, including inflation and further food shortages. The Allies demanded that the Dutch stop the smuggling, and the government took measures to remain neutral. It placed many cities under a 'state of siege'. On 8 January 1916, a 5-kilometre (3.1 mi) zone was created by the government along the border, within which goods could be moved on main roads only with a permit. German authorities in Belgium had an electrified fence erected all along the Belgian–Dutch border, which caused many refugees from Belgium to lose their lives. The fence was guarded by older German Landsturm soldiers.
Although both houses of the Dutch parliament were elected by the people, only men with high incomes were eligible to vote until 1917, when pressure from socialist movements resulted in elections in which all men were allowed to vote. In 1919 women also obtained the right to vote.
The worldwide Great Depression of 1929 and the early 1930s had crippling effects on the Dutch economy, lasting longer than in most other European countries. The long duration of the Great Depression in the Netherlands is often explained by the very strict fiscal policy of the Dutch government at the time, and its decision to adhere to the gold standard for much longer than most of its trading partners. The depression led to high unemployment and widespread poverty, as well as increasing social unrest.
The rise of Nazism in Germany did not go unnoticed in the Netherlands, and there was growing concern at the possibility of armed conflict, but most Dutch citizens expected that Germany would again respect Dutch neutrality.
There were separate fascist and Nazi movements in the 1930s. Dutch fascists admired Mussolini's Italy and called for a traditional corporatist ideology. Their membership was small, elitist and ineffective. The pro-Nazi movement, however, won support from Berlin and attempted to build a mass base by 1935. It failed because most Dutch rejected its racial ideology and its calls for violence.
The defense budget was not increased until Nazi Germany remilitarized the Rhineland in 1936. The budget was further increased in 1938 (after the annexation of Austria and the occupation of the Czech Sudetenland). The colonial government also increased its military budget because of increasing tension with Japan. The Dutch did not mobilize their forces until shortly before France and Great Britain declared war in September 1939. Neutrality remained the policy, and the government tried to buy new arms for its badly equipped forces, but a considerable share of the ordered weapons never arrived.
The Second World War (1939–1945)
Nazi invasion and occupation
At the outbreak of World War II in 1939, the Netherlands once again declared its neutrality. However, on 10 May 1940, Nazi Germany launched an attack on the Netherlands and Belgium and quickly overran most of the two countries. Fighting against the Dutch army proved more of a burden than foreseen: the northern attack was stopped dead, the attack in the centre came to a grinding halt near the Grebbeberg, and many airborne assault troops were killed or taken prisoner in the west of the country. Only in the south did the defences break, but the passage over the river Maas at Rotterdam was held by the Dutch. By 14 May, fighting in many locations had ceased, yet the German army could make little or no headway, so the Luftwaffe bombed Rotterdam, the second largest city of the Netherlands, killing about 900 people, destroying the inner city and leaving 78,000 people homeless.
Following the bombing and German threats of the same treatment for Utrecht, the Netherlands capitulated on 15 May, except in the province of Zeeland, where French and French Moroccan troops stood side by side with the Dutch forces. The royal family and some armed forces, however, had fled to the United Kingdom. Some members of the royal family eventually moved to Ottawa, Canada, where they remained until the Netherlands was liberated; Princess Margriet was born in Canadian exile.
Resentment of the Germans grew as the occupation became more harsh, prompting many Dutch in the latter years of the war to join the resistance. But collaboration was not uncommon either; many thousands of young Dutch males volunteered for combat service on the Russian Front with the Waffen-SS and many companies worked for the Germans.
Holocaust in the Netherlands
About 140,000 Jews lived in the Netherlands at the beginning of the war. Persecution of Dutch Jews started shortly after the occupation. At the end of the war, 40,000 Jews were still alive. Of the 100,000 Jews who did not go into hiding, about 1,000 survived the war.
One who perished was Anne Frank, who gained worldwide fame posthumously when her diary, written in the achterhuis ('backhouse') while hiding from the Nazis, was found and published by her father, Otto Frank, the only survivor of the family.
The war in the Dutch East Indies
On 8 December 1941, the day after the attack on Pearl Harbor, the Netherlands declared war on Japan. The Dutch government in exile in London had long been working with London and with Washington to cut off oil supplies to Japan. Japanese forces invaded the Dutch East Indies on 11 January 1942, and the Dutch surrendered on 8 March after Japanese troops landed on Java. Dutch citizens and everybody with Dutch ancestry, the so-called "Indos", were captured and put to work in labour camps or interned. As in the homeland, many Dutch ships, planes and military personnel managed to reach safe territory, in this case Australia, from where they were able to fight again.
False hopes, the Hunger Winter and Liberation
In Europe, after the Allies landed in Normandy in June 1944, progress was slow until the Battle of Normandy ended in August 1944. German resistance collapsed in western Europe and the allied armies advanced quickly towards the Dutch border. The First Canadian Army and the Second British Army conducted operations on Dutch soil from September onwards. On 17 September a daring operation, Operation Market Garden, was executed with the goal of capturing bridges across three major rivers in the southern Netherlands. Despite desperate fighting by American, British and Polish forces, the bridge at Arnhem, across the Neder Rijn, could not be captured.
Areas south of the Rhine river were liberated in the period September–December 1944, including the province of Zeeland, which was liberated in October and November in the Battle of the Scheldt. This opened Antwerp to allied shipping. The First Canadian Army held a static line along the river Meuse (Maas) from December 1944 through February 1945.
The rest of the country remained occupied until the spring of 1945. In the face of Dutch defiance the Nazis deliberately cut off food supplies, resulting in near-starvation in the cities during the Hongerwinter (Hunger winter) of 1944–45. Soup kitchens were set up, but many of the weakest people died. A few days before the Allied victory the Germans allowed emergency shipments of food.
The First Canadian Army launched Operation Veritable in early February, cracking the Siegfried Line and reaching the banks of the Rhine in early March. In the final weeks of the war in Europe, the First Canadian Army was charged with clearing the Netherlands of German forces.
The Liberation of Arnhem began on 12 April 1945 and proceeded according to plan, as the three infantry brigades of the 49th Division leapfrogged each other through the city. Within four days Arnhem, now a ruined city, was totally under Allied control.
The Canadians then immediately advanced further into the country, encountering and defeating a German counterattack at Otterlo and Dutch SS resistance at Ede. On 27 April a temporary truce came into effect, allowing the distribution of food aid to the starving Dutch civilians in areas under German control (Operation Manna). On 5 May 1945, Generaloberst Blaskowitz agreed to the unconditional surrender of all German forces in the Netherlands, signing the surrender to Canadian general Charles Foulkes at Wageningen. (The fifth of May is now celebrated annually in the Netherlands as Liberation Day.) Three days later Germany unconditionally surrendered, bringing the war in Europe to a close.
After the euphoria and settling of scores had ended, the Dutch were a traumatized people with a ruined economy, a shattered infrastructure and several destroyed cities including Rotterdam, Nijmegen, Arnhem and part of The Hague.
After the war, there were reprisals against those who had collaborated with the Nazis. Artur Seyss-Inquart, Nazi Commissioner of the Netherlands, was tried at Nuremberg.
In the early post-war years the Netherlands made continued attempts to expand its territory by annexing neighbouring German territory. The larger annexation plans were continuously rejected by the United States, but the London conference of 1949 permitted the Netherlands to perform a smaller-scale annexation. Most of the annexed territory was returned to Germany on 1 August 1963.
Operation Black Tulip was a plan in 1945 by Dutch minister of Justice Kolfschoten to evict all Germans from the Netherlands. The operation lasted from 1946 to 1948 and in the end 3691 Germans (15% of Germans resident in the Netherlands) were deported. The operation started on 10 September 1946 in Amsterdam, where Germans and their families were taken from their homes in the middle of the night and given one hour to collect 50 kg of luggage. They were allowed to take 100 guilders. The rest of their possessions went to the state. They were taken to concentration camps near the German border, the biggest of which was Mariënbosch near Nijmegen.
Prosperity and European Unity (1945–present)
The post-war years were a time of hardship, shortages and natural disaster. This was followed by large-scale public works programmes, economic recovery, European integration and the gradual introduction of a welfare state.
Immediately after the war, there was rationing, including of cigarettes, textiles, washing powder and coffee. Even wooden shoes were rationed. There were severe housing shortages. In the 1950s, there was mass emigration, especially to Canada, Australia and New Zealand. Government-encouraged emigration efforts to reduce population density prompted some 500,000 Dutch people to leave the country after the war. The Netherlands failed to hold the Dutch East Indies, as Indonesia became independent and 300,000 Dutch inhabitants (and their Indonesian allies) left the islands.
Postwar politics saw shifting coalition governments. The 1946 parliamentary elections saw the Catholic People's Party (KVP) come in first, just ahead of the socialist Labour Party (PvdA). Louis J. M. Beel formed a new coalition cabinet. The United States began Marshall Plan aid in 1948 that pumped cash into the economy, fostered modernization of business, and encouraged economic cooperation. The 1948 elections led to a new coalition led by Labour's Willem Drees. He led four successive cabinets, Drees I, Drees II, Drees III and Drees IV, until late 1958. His terms saw four major political developments: the traumas of decolonization, economic reconstruction, the establishment of the Dutch welfare state, and international integration and co-operation, including the formation of Benelux, the OEEC, NATO, the ECSC, and the EEC.
Baby boom and economic reconstruction
Despite the problems, this was a time of optimism for many. A baby boom followed the war, as young Dutch couples started planning their families. They had lived through the hardships of depression and the hell of war. They wanted to start fresh and live better lives without the poverty, starvation, terror, and extreme frugality they knew so well. They had little taste for a strictly imposed rule-oriented traditional system with its rigid hierarchies, sharp pillarized boundaries and strictly orthodox religious doctrines. They made a best seller out of the translation of The Common Sense Book of Baby and Child Care (1946), by American pediatrician Benjamin Spock. His vision of family life as companionate, permissive, enjoyable and even fun took hold, and seemed the best way to achieve family happiness in a dawning age of freedom and prosperity.
Wages were kept low and the recovery of consumption to prewar levels was delayed to permit rapid rebuilding of the infrastructure. In the years after the war, unemployment fell and the economy grew at an astonishing pace, despite the high birth rate. The shattered infrastructure and destroyed cities were rebuilt. A key contribution to the recovery in the postwar Netherlands came from the Marshall Plan, which provided the country with funds, goods, raw materials and produce.
The Dutch became internationally active again. Dutch corporations, particularly Royal Dutch Shell and Philips, became internationally prominent. Business people, scientists, engineers and artists from the Netherlands made important international contributions. For example, Dutch economists, especially Jan Tinbergen (1903–1994), Tjalling Koopmans (1910–1985) and Henri Theil (1924–2000), made major contributions to the mathematical and statistical methodology known as econometrics.
Across Western Europe, the period from 1973 to 1981 marked the end of the booming economy of the 1960s. The Netherlands also experienced years of negative growth after that. Unemployment increased steadily, causing rapid growth in social-security expenditures. Inflation reached double digits; government surpluses disappeared. On the positive side, rich natural gas resources were developed, providing a current account trade surplus during most of the period. Public deficits were high. According to the long-term economic analysis of Horlings and Smits, the major gains in the Dutch economy were concentrated between 1870 and 1930 and between 1950 and 1970. Rates were much lower in 1930–45 and after 1987.
The last major flood in the Netherlands took place in early February 1953, when a huge storm caused the collapse of several dikes in the southwest of the Netherlands. More than 1,800 people drowned in the ensuing inundation.
The Dutch government subsequently decided on a large-scale programme of public works (the "Delta Works") to protect the country against future flooding. The project took more than thirty years to complete. The Oosterscheldedam, an advanced sea storm barrier, became operational in 1986. According to Dutch government engineers, the odds of a major inundation anywhere in the Netherlands are now one in 10,000 years.
Europeanisation, Americanisation and internationalisation
The European Coal and Steel Community (ECSC) was founded in 1951 by its six founding members: Belgium, the Netherlands and Luxembourg (the Benelux countries) and West Germany, France and Italy. Its purpose was to pool the steel and coal resources of the member states, and to support the economies of the participating countries. As a side effect, the ECSC helped defuse tensions between countries which had recently been enemies in the war. In time, this economic merger grew, adding members and broadening in scope, to become the European Economic Community, and later the European Union.
The United States started to have more influence. After the war higher education changed from a German model to more of an American model. American influences had been small in the interwar era, and during the war the Nazis had emphasised the dangers of a "degraded" American culture as represented by jazz. However, the Dutch became more attracted to the United States during the postwar era, perhaps partly because of antipathy towards the Nazis but certainly because of American movies and consumer goods. The Marshall Plan also introduced the Dutch to American management practices. NATO brought in American military doctrine and technology. Intellectuals, artists and the political left, however, remained more reserved about the Americans. According to Rob Kroes, the anti-Americanism in the Netherlands was ambiguous: American culture was both accepted and criticised at the same time.
The Netherlands is a founding member of the EU, NATO, OECD and WTO. Together with Belgium and Luxembourg it forms the Benelux economic union. The country is host to the Organization for the Prohibition of Chemical Weapons and five international courts: the Permanent Court of Arbitration, the International Court of Justice, the International Criminal Tribunal for the Former Yugoslavia, the International Criminal Court and the Special Tribunal for Lebanon. The first four are situated in The Hague, as is the EU's criminal intelligence agency Europol and judicial co-operation agency Eurojust. This has led to the city being dubbed "the world's legal capital".
Decolonisation and multiculturalism
By the first half of the 20th century, new organisations and leadership had developed in the Dutch East Indies. Under its Ethical Policy, the government had helped create an educated Indonesian elite. These profound changes constituted the "Indonesian National Revival". Increased political activism and Japanese occupation undermining Dutch rule culminated in nationalists proclaiming independence on 17 August 1945, two days after the surrender of Japan.
The Dutch East Indies had long been a valuable resource to the Netherlands, so the Dutch feared its independence. The Indonesian National Revolution followed as Indonesia attempted to secure its independence in the face of Dutch diplomatic and military opposition (sometimes brutal in nature). Increasing international pressure eventually led the Netherlands to withdraw, and it formally recognised Indonesian independence on 27 December 1949. The western part of New Guinea remained under Dutch control as Netherlands New Guinea until 1961, when the Netherlands transferred sovereignty of this area to Indonesia.
During and after the Indonesian National Revolution, around 300,000 people, predominantly "Indos" (Dutch-Indonesian Eurasians), left Indonesia for the Netherlands. This difficult, complex and messy mass migration was called repatriation, but the majority of this group had never set foot in the Netherlands before. This migration occurred in five distinct waves over a period of 20 years. It included Indos (many of whom spent the war years in Japanese concentration camps), former South Moluccan soldiers and their families, "New-Guinea Issue" Dutch citizens, Dutch citizens from Netherlands New Guinea (including Papuan civil servants and their families), and other Indos who had remained behind but later regretted their decision to take out Indonesian citizenship (called spijtoptanten in Dutch and warga negara in Indonesian).
The Indo community (now numbering around 680,000) is the largest minority group in the Netherlands. They are integrated into Dutch society, but they have also retained many aspects of their culture and have added a distinct Indonesian flavour to the Netherlands.
Although it was originally expected that the loss of the Dutch East Indies would contribute to an economic decline, the Dutch economy experienced exceptional growth (partly because a disproportionate amount of Marshall Aid was received) in the 1950s and 1960s. In fact, the demand for labour was so strong that immigration was actively encouraged, first from Italy and Spain, then later, in larger numbers, from Turkey and Morocco.
Suriname became independent on 25 November 1975. The Dutch government supported independence because it wanted to stem the flow of immigrants from Suriname and also to end its colonial status. However, about one third of the entire population of Suriname, fearing political unrest and economic decline, relocated to the Netherlands, creating a Surinamese community in the Netherlands that is now roughly as large as the population of Suriname itself.
When the postwar baby-boom children grew up, they led the revolt in the 1960s against all rigidities in Dutch life. The 1960s and 1970s were a time of great social and cultural change, such as rapid ontzuiling (literally: depillarisation), a term that describes the decay of the old divisions along class and religious lines. A youth culture emerged all across Western Europe and the U.S., characterised by student rebellion, informality, sexual freedom, informal clothes, new hair styles, protest music, drugs and idealism. Young people, and students in particular, rejected traditional mores, and pushed for change in matters like women's rights, sexuality, disarmament and environmental issues.
Secularization, or the decline in religiosity, first became noticeable after 1960 in the Protestant rural areas of Friesland and Groningen. Then, it spread to Amsterdam, Rotterdam and the other large cities in the west. Finally the Catholic southern areas showed religious declines. As the social distance between the Calvinists and Catholics narrowed (and they began to intermarry), it became possible to merge their parties. In 1977 the Anti-Revolutionary Party (ARP) merged with the Catholic People's Party (KVP) and the Protestant Christian Historical Union (CHU) to form the Christian Democratic Appeal (CDA). However, a countervailing trend later appeared as the result of a religious revival in the Protestant Bible Belt, and the growth of the Muslim and Hindu communities as a result of immigration and high fertility levels.
After 1982, there was a retrenchment of the welfare system, especially regarding old-age pensions, unemployment benefits, and disability pensions/early retirement benefits.
Following the election of 1994, in which the Christian democratic CDA lost a considerable portion of its representatives, the social-liberal Democrats 66 (D66) doubled in size and formed a coalition with the Labour Party (PvdA) and the People's Party for Freedom and Democracy (VVD). This "purple" coalition marked the first absence of the CDA from government in decades. During the Purple Coalition years, a period lasting until the rise of the populist politician Pim Fortuyn, the government addressed issues previously viewed as taboo under the Christian-influenced cabinets. At this time, the Dutch government introduced unprecedented legislation based on a policy of official tolerance (gedoogbeleid). Abortion and euthanasia were decriminalized, but stricter guidelines were set for their implementation. Drug policy, especially with regard to the regulation of cannabis, was reformed. Prostitution was legalised, but confined to brothels where the health and safety of those involved could be properly monitored. With the 2001 Same-Sex Marriage Act, the Netherlands became the first country to legalise same-sex marriage. In addition to social reforms, the Purple Coalition also presided over a period of remarkable economic prosperity.
In the 1998 election the Purple Coalition, consisting of Social Democrats and left- and right-wing Liberals, increased its majority. Both the social-democratic PvdA and the conservative liberal VVD grew at the cost of their junior partner in cabinet, the progressive liberal D66. The voters rewarded the Purple Coalition for its economic performance, which had included reduction of unemployment and the budget deficit, steady growth and job creation combined with wage freezes and trimming of the welfare state, together with a policy of fiscal restraint. The result was the second Kok cabinet.
The power of the coalition waned with the rise of the List Pim Fortuyn (LPF), a populist party which ran a distinctly anti-immigration and anti-purple campaign in the Dutch general election of 2002, citing "Purple Chaos" (Puinhopen van Paars) as the source of the country's social woes. In the first political assassination in three centuries, Fortuyn was murdered a little over a week before the election. In the wake of its leader's death, the LPF swept the elections, entering parliament with one sixth of the seats, while the PvdA (Labour) lost half of its seats. The ensuing cabinet was formed by CDA, VVD and LPF, led by Prime Minister Jan Peter Balkenende. Though the party succeeded in displacing the rival Purple Coalition, without the charismatic figure of Pim Fortuyn at its helm it proved to be short-lived, lasting 87 days in power.
Two events changed the political landscape:
- On 6 May 2002, the assassination of the politician Pim Fortuyn, who called for a very strict policy on immigration, shocked a nation not at all used to political violence in peacetime. His party won a landslide election victory, partly because of his perceived martyrdom. However, internal party squabbles, and the party's role in blowing up the coalition government it had helped to create, resulted in the loss of 70% of its support in early general elections in 2003.
- Another murder that caused great upheaval took place on 2 November 2004, when film director and publicist Theo van Gogh was assassinated by a Dutch-Moroccan youth with radical Islamic beliefs, because of Van Gogh's alleged blasphemy. One week later, several would-be Islamist terrorists were arrested; they were later found guilty of conspiracy with terrorist intent, though this verdict was reversed on appeal. All this sparked a debate on the position of radical Islam, and of Islam generally, in Dutch society, and on immigration and integration. The personal protection of most politicians, especially of the Islam critic Ayaan Hirsi Ali, was stepped up to unprecedented levels.
The Netherlands today
By 2000 the population had increased to 15.9 million people, making the Netherlands one of the most densely populated countries in the world. Urban development has led to the growth of a conurbation called the Randstad, which includes the four largest cities (Amsterdam, Rotterdam, The Hague and Utrecht) and the surrounding areas. With a population of 7,100,000 it is one of the largest conurbations in Europe.
This small nation has successfully developed into one of the most open, dynamic and prosperous countries in the world. It had the tenth-highest per capita income in the world in 2011. It has an open, market-based mixed economy, ranking 13th of 157 countries according to the Index of Economic Freedom. In May 2011, the OECD ranked the Netherlands as the "happiest" country in the world.
Historians and historiography
- Julia Adams, economic and social history
- Petrus Johannes Blok, survey
- J. C. H. Blom, survey
- M. R. Boxell, political history
- Pieter Geyl, Dutch revolt; historiography
- Johan Huizinga (1872–1945), cultural history
- Jonathan Israel, Dutch Republic, Age of Enlightenment, Baruch Spinoza
- Louis De Jong, World War II
- John Lothrop Motley, American historian of the Dutch Revolt
- Jan Romein (1893–1962), theoretical and world history
- Jan de Vries, economic history
The American John Lothrop Motley was the first foreign historian to write a major history of the Dutch Republic. In 3500 pages he crafted a literary masterpiece that was translated into numerous languages; his dramatic story reached a wide audience in the 19th century. Motley relied heavily on Dutch scholarship and immersed himself in the sources. His style no longer attracts readers, and scholars have moved away from his simplistic dichotomies of good versus evil, Dutch versus Spanish, Catholic versus Protestant, freedom versus authoritarianism. His theory of causation over-emphasized ethnicity as an unchanging characteristic, exaggerated the importance of William of Orange, and gave undue importance to the issue of religious tolerance.
The pioneering Dutch cultural historian Johan Huizinga (1872–1945) was the author of The Autumn of the Middle Ages (1919) (the English translation was called The Waning of the Middle Ages) and Homo Ludens: A Study of the Play Element in Culture (1935), works which expanded the field of cultural history and influenced the historical anthropology of younger historians of the French Annales School. He was influenced by art history and advised historians to trace "patterns of culture" by studying "themes, figures, motifs, symbols, styles and sentiments."
The "polder model" continues to strongly influence historians as well as Dutch political discussion. The polder model stresses the need for finding consensus; it discourages furious debate and angry dissent in both academia and politics – in contrast to the highly developed, intense debates in Germany.
The H-Net list H-Low-Countries is published free by email and is edited by scholars. Its occasional messages serve an international community with diverse methodological approaches, archival experiences, teaching styles, and intellectual traditions, and promote discussion relevant to the region and to the different national histories in particular, with an emphasis on the Netherlands. H-Low-Countries publishes conference announcements, questions and discussions; reviews of books, journals, and articles; and tables of contents of journals on the history of the Low Countries (in both Dutch and English). After World War II both research-oriented and teaching-oriented historians have been rethinking their interpretive approaches to Dutch history, balancing traditional memories and modern scholarship. In terms of popular history, there has been an effort to ensure greater historical accuracy in museums and historic tourist sites.
Once heralded as the leading event of modern Dutch history, the Dutch Revolt lasted from 1568 to 1648, and historians have worked to interpret it for even longer. Cruz (2007) explains the major debates among scholars regarding the Dutch bid for independence from Spanish rule. While agreeing that the intellectual milieus of late 19th and 20th centuries affected historians' interpretations, Cruz argues that writings about the revolt trace changing perceptions of the role played by small countries in the history of Europe. In recent decades grand theory has fallen out of favor among most scholars, who emphasize the particular over the general. Dutch and Belgian historiography since 1945 no longer says the revolt was the culmination of an inevitable process leading to independence and freedom. Instead scholars have put the political and economic details of the towns and provinces under the microscope, while agreeing on the weaknesses of attempts at centralization by the Habsburg rulers. The most influential new studies have been rooted in demographic and economic history, though scholars continue to debate the relationship between economics and politics. The religious dimension has been viewed in terms of mentalities, exposing the minority position of Calvinism, while the international aspects have been studied more seriously by foreign historians than by the Dutch themselves.
Pieter Geyl was the leading historian of the Dutch Revolt, and a highly influential professor at the University of London (1919–1935) and at the State University of Utrecht (1936–58). He wrote a six-volume history of the Dutch-speaking peoples. The Nazis imprisoned him during World War II. In his political views, Geyl adopted the views of the 17th-century Dutch Loevestein faction, led by Johan van Oldenbarneveldt (1547–1619) and Johan de Witt (1625–72). It stood for liberty, toleration, and national interests, in contrast to the Orange stadholders, who sought to promote their own self-interest. According to Geyl, the Dutch Republic reached the peak of its powers during the 17th century. He was also a staunch nationalist and suggested that Flanders could split off from Belgium and join the Netherlands. Later he decried what he called radical nationalism and stressed instead the vitality of Western civilization. Geyl was highly critical of the world history approach of Arnold J. Toynbee.
Jan Romein (1893–1962) created a "theoretical history" in an attempt to reestablish the relevance of history to public life in the 1930s, at a time of immense political uncertainty and cultural crisis, when Romein thought that history had become too inward-looking and isolated from other disciplines. Romein, a Marxist, wanted history to contribute to social improvement. At the same time, influenced by the successes of theoretical physics and his study of Oswald Spengler, Arnold J. Toynbee, Frederick John Teggart, and others, he spurred on the development of theoretical history in the Netherlands, to the point where it became a subject in its own right at the university level after the war. Romein used the term integral history as a substitute for cultural history and focused his attention on the period around the turn of the century. He concluded that a serious crisis occurred in European civilization in 1900 because of the rise of anti-Semitism, extreme nationalism, discontent with the parliamentary system, depersonalization of the state, and the rejection of positivism. European civilization waned as the result of this crisis, which was accompanied by the rise of the United States, the Americanization of the world, and the emergence of Asia. His interpretation is reminiscent of that of his mentor Johan Huizinga and was criticized by his colleague Pieter Geyl.
- Canon of Dutch History
- Culture of the Netherlands
- Demographics of the Netherlands
- Dutch diaspora
- Dutch East Indies
- Dutch Empire
- Economy of the Netherlands
- Geography of the Netherlands
- History of Belgium
- History of religion in the Netherlands
- History of Europe
- History of Germany
- History of Luxembourg
- List of Prime Ministers of the Netherlands
- List of monarchs of the Netherlands
- Politics of the Netherlands
- Provinces of the Netherlands
- Netherlands in World War II
- Issar, Arie S. (2003), Climate Changes during the Holocene and their Impact on Hydrological Systems, Cambridge: Cambridge University, ISBN 978-0-511-06118-9
- Louwe Kooijmans, L. P. (1974), The Rhine/Meuse Delta. Four studies on its prehistoric occupation and Holocene geology (PhD Dissertation), Leiden: Leiden University Press
- Bazelmans, Jos (2009), "The early-medieval use of ethnic names from classical antiquity: The case of the Frisians", in Derks, Ton; Roymans, Nico, Ethnic Constructs in Antiquity: The Role of Power and Tradition, Amsterdam: Amsterdam University, pp. 321–37, ISBN 978-90-8964-078-9
- Frisii en Frisiaevones, 25–08–02 (Dutch) Archived 3 October 2011 at the Wayback Machine., Bertsgeschiedenissite.nl. Retrieved 6 October 2011
- Kortlandt, Frederik (1999). "The origin of the Old English dialects revisited" (PDF). University of Leiden.
- Willemsen, A. (2009), Dorestad. Een wereldstad in de middeleeuwen, Walburg Pers, Zutphen, pp. 23–27, ISBN 978-90-5730-627-3
- MacKay, Angus; David Ditchburn (1997). Atlas of Medieval Europe. Routledge. p. 57. ISBN 0-415-01923-0.
- Hodges, Richard; David Whitehouse (1983). Mohammed, Charlemagne and the Origins of Europe. Cornell University Press. p. 99. ISBN 978-0-8014-9262-4.
- Milis, L.J.R., "A Long Beginning: The Low Countries Through the Tenth Century" in J.C.H. Blom & E. Lamberts History of the Low Countries, pp. 6–18, Berghahn Books, 1999. ISBN 978-1-84545-272-8.
- Holmes, U.T and A. H. Schutz (1938), A History of the French Language, p. 29, Biblo & Tannen Publishers, ISBN 0-8196-0191-8
- Dutch people#Genetics
- Blok, D.P. (1974), De Franken in Nederland, Bussum: Unieboek, 1974, pp. 36–38 on the uncertain identity of the Frisians in early Frankish sources; pp. 54–55 on the problems concerning “Saxon” as a tribal name.
- van Eijnatten, J. and F. van Lieburg, Nederlandse religiegeschiedenis (Hilversum, 2006), pp. 42–43, on the uncertain identity of the “Frisians” in early Frankish sources.
- de Nijs, T, E. Beukers and J. Bazelmans, Geschiedenis van Holland (Hilversum, 2003), pp. 31–33 on the fluctuating character of tribal and ethnic distinctions for the early Medieval period.
- Blok (1974), pp. 117 ff.; de Nijs et al. (2003), pp. 30–33
- van der Wal, M., Geschiedenis van het Nederlands, 1992[full citation needed], p.[page needed]
- "Charlemagne: Court and administration". Encyclopædia Britannica. ("Charlemagne relied on his palatium, a shifting assemblage of family members, trusted lay and ecclesiastical companions, and assorted hangers-on, which constituted an itinerant court following the king as he carried out his military campaigns and sought to take advantage of the income from widely scattered royal estates.")
- More info about Viking raids can be found online at L. van der Tuuk, Gjallar. Noormannen in de Lage Landen
- Baldwin, Stephen, "Danish Haralds in 9th Century Frisia". Retrieved 9 October 2011.
- "Vikingschat van Wieringen", Museumkennis.nl. Retrieved 9 October 2011.
- Jesch, Judith, Ships and Men in the Late Viking Age: The Vocabulary of Runic Inscriptions and Skaldic Verse, Boydell & Brewer, 2001. ISBN 978-0-85115-826-6. p. 82.
- James D. Tracy (2002). Emperor Charles V, Impresario of War: Campaign Strategy, International Finance, and Domestic Politics. Cambridge U.P. p. 258.
- H.G. Koenigsberger, "The Beginnings of the States General of the Netherlands," Parliaments, Estates and Representation (1988) 8#2 pp 101–14.
- Albert Guerard, France, A Modern History, (1959), pp. 134–36.
- Martin van Gelderen (2002). The Political Thought of the Dutch Revolt 1555-1590. Cambridge U.P. p. 18.
- Kamen, Henry (2005). Spain, 1469–1714: a society of conflict (3rd ed.). Harlow, United Kingdom: Pearson Education. ISBN 0-582-78464-6.
- R. Po-chia Hsia, ed. A Companion to the Reformation World (2006) pp. 118–34
- Jonathan I. Israel, The Dutch Republic: Its Rise, Greatness, and Fall 1477–1806 (1995) p. 104
- Hsia, ed. A Companion to the Reformation World (2006) pp. 3–36
- Israel, The Dutch Republic: Its Rise, Greatness, and Fall 1477–1806 (1995) p. 155
- Israel, The Dutch Republic: Its Rise, Greatness, and Fall, 1477–1806 (1995) pp. 374–75
- Israel, The Dutch Republic: Its Rise, Greatness, and Fall, 1477–1806 (1995) pp. 86–91
- Jerome Blum et al., The European World: A History (1970) pp. 160–61
- Israel, The Dutch Republic: Its Rise, Greatness, and Fall, 1477–1806 (1995) pp. 361–95
- Diarmaid MacCulloch, The Reformation (2005) pp. 367–72
- Claflin, W. Harold, ed. History of Nations: Holland and Belgium, (New York: P.F. Collier & Son, 1907), pp. 72–74, 103–05
- John Lathrop Motley, The Rise of the Dutch Republic (Harper & Bros.: New York, 1855) pp. 106–15, 121, 122, 207, 213
- Geoffrey Parker, ed. The Thirty Years' War, New York: Routledge Press, 1987, p. 2.
- Israel, The Dutch Republic, pp. 184–85
- Violet Soen, "Reconquista and Reconciliation in the Dutch Revolt: The Campaign of Governor-General Alexander Farnese (1578-1592)," Journal of Early Modern History (2012) 16#1 pp. 1–22.
- Bart de Groof, "Alexander Farnese and the Origins of Modern Belgium," Bulletin de l'Institut Historique Belge de Rome (1993) Vol. 63, pp. 195–219.
- see religion map
- Charles H. Parker, Faith on the Margins: Catholics and Catholicism in the Dutch Golden Age (Harvard University Press, 2008)
- Schama, Simon, The Embarrassment of Riches, Bath: William Collins & Sons, 1987. At p. 8: "The prodigious quality of their success went to their heads, but it also made them a bit queasy. Even their most uninhibited documents of self-congratulations are haunted by the threat of overvloed, the surfeit that rose like a cresting flood – a word heavy with warning as well as euphoria...But at the very least, the continuous pricking of conscience on complacency produced the self-consciousness that we think of as embarrassed."
- Sawmills (or "saagmolens" in Dutch) were invented in Uitgeest, according to the "Haarlemmermeer boeck" by Jan Adriaanszoon Leeghwater
- The maps used by Fernando Álvarez de Toledo, 3rd Duke of Alba to attack Dutch cities overland and by water were made by Dutch mapmakers.
- Quinn, Stephen. Roberds, William. The Big Problem of Large Bills: The Bank of Amsterdam and the Origins of Central Banking. August 2005.
- "Baltic Connections: Mercantilism in the West Baltic", BalticConnections.net. Retrieved 9 October 2011.
- Gardner, Helen, Fred S. Kleiner, and Christin J. Mamiya, Gardner's Art Through the Ages, Belmont, CA: Thomson/Wadsworth, 2005, pp. 718–19.
- Jaap Jacobs, The Colony of New Netherland: A Dutch Settlement in Seventeenth-Century America (2nd ed. 2009) online
- Postma, Johannes, The Dutch in the Atlantic Slave Trade, 1600–1815 (2008)[full citation needed], p.[page needed]
- van Welie, Rik, "Slave Trading and Slavery in the Dutch Colonial Empire: A Global Comparison", NWIG: New West Indian Guide / Nieuwe West-Indische Gids, 2008, Vol. 82 Issue 1/2, pp. 47–96, Table 2 & Table 3. Retrieved 9 October 2011.
- Vink, Markus, "'The World's Oldest Trade': Dutch Slavery and Slave Trade in the Indian Ocean in the Seventeenth Century", Journal of World History, 14.2 (2003): 76 pars.. Retrieved 9 October 2011.
- Ames, Glenn J. (2008). The Globe Encompassed: The Age of European Discovery, 1500-1700. pp. 102–103.
- Adrian Vickers, A History of Modern Indonesia (2005) p. 10
- Noble, John (1893). Illustrated Official Handbook of the Cape and South Africa; A résumé of the history, conditions, populations, productions and resources of the several colonies, states, and territories. J.C. Juta & Co. p. 141. Retrieved 25 November 2009.[unreliable source?]
- Smith, Adam (1776), Wealth of Nations, Penn State Electronic Classics Edition, republished 2005, p. 516
- "Afrikaans", Omniglot.com. Retrieved 9 October 2011.
- "Afrikaans language", Britannica.com. Retrieved 9 October 2011.
- Alatis, James E., Heidi E. Hamilton and Ai-Hui Tan (2002). Linguistics, language and the professions: education, journalism, law, medicine, and technology. Washington, DC: University Press. ISBN 978-0-87840-373-8. p.[page needed]
- Tim William Blanning (2007). The Pursuit of Glory: Europe, 1648–1815. Penguin. p. 96.
- E.H. Kossmann, "The Dutch Republic," in F. L. Carsten, ed. (1961). The New Cambridge Modern History: Volume 5, the Ascendancy of France, 1648–88. Cambridge U.P. pp. 275–76.
- Israel, The Dutch Republic: Its Rise, Greatness, and Fall 1477–1806 (1995) pp. 277–79, 284
- Joost Jonker (1996). Merchants, bankers, middlemen: the Amsterdam money market during the first half of the 19th century. NEHA. p. 32.
- Charles R. Boxer, The Dutch Seaborne Empire 1600–1800 (1965)
- Jan de Vries and A. van der Woude, The First Modern Economy. Success, Failure, and Perseverance of the Dutch Economy, 1500–1815 (1997) pp. 668–72
- Regin, Deric, Traders, Artists, Burghers: A Cultural History of Amsterdam in the 17th century Van Gorcum, 1976, p.[page needed]
- Edwards, Elizabeth, "Amsterdam and William III," History Today, (Dec 1993), Vol. 43, Issue 12 pp. 25–31
- Elise Van Nederveen Meerkerk; Griet Vermeesch (2010). Serving the Urban Community: The Rise of Public Facilities in the Low Countries. Amsterdam University Press. p. 158.
- Paolo Bernardini; Norman Fiering (2004). The Jews and the Expansion of Europe to the West, 1400–1800. Berghahn Books. p. 372.
- Jonathan Israel (2003). The Anglo-Dutch Moment: Essays on the Glorious Revolution and Its World Impact. Cambridge University Press. p. 111.
- Martin Dunford; et al. (2003). The Rough Guide to Amsterdam. Rough Guides. p. 58.
- Eugen Weber, A Modern History of Europe (1971) p. 290
- John Richard Hill (2002). The Oxford Illustrated History of the Royal Navy. Oxford University Press. pp. 68–75.
- Gijs Rommelse, "Prizes and Profits: Dutch Maritime Trade during the Second Anglo-Dutch War," International Journal of Maritime History (2007) 19#2 pp 139–59.
- D. R. Hainsworth, et al. The Anglo-Dutch Naval Wars 1652–1674 (1998)
- This is the date from the Gregorian calendar that was followed at the time in the Dutch Republic; according to the Julian calendar, still used in England at the time, the date of death was 8 March.
- C. H. Wilson, "The Economic Decline of the Netherlands," Economic History Review (1939) 9#2 pp. 111-127, esp. p. 113 in JSTOR
- Israel, Dutch Republic, pp. 999–1018
- Thomas M. Lennon and Michael Hickson, "Pierre Bayle," The Stanford Encyclopedia of Philosophy (2012) online
- Israel, Dutch Republic, pp. 1021, 1033–36
- Israel, Jonathan, The Dutch Republic: Its Rise, Greatness, and Fall 1477–1806 (Oxford: Oxford U.P., 1998) pp. 996–97, 1069–87
- Fulford, Roger Royal Dukes William Collins and Son London 1933
- Edler, Friedrich, The Dutch Republic and The American Revolution (1911, reprinted 2001) Honolulu, Hawaii: University Press of the Pacific, p. 88
- Braudel, Fernand, The Perspective of the World vol. III of Civilization and Capitalism 1984. p. 273.
- C. Cook & J. Stevenson, The Routledge Companion to European History since 1763 (Abingdon: Routledge, 2005), p. 66; J. Dunn, Democracy: A history (NY: Atlantic Books, 2005), p. 86.
- Palmer, R.R. "Much in Little: The Dutch Revolution of 1795," Journal of Modern History (1954) 26#1 pp. 15–35 in JSTOR
- Kossmann, Low Countries pp. 112–33
- Kossmann, Low Countries pp. 115–16
- Simon Schama, "The Rights of Ignorance: Dutch Educational Policy in Belgium 1815–30," History of Education (1972) 1:1, pp. 81–89 link
- Schama, "The Rights of Ignorance: Dutch Educational Policy in Belgium 1815–30," pp. 84–85
- Godefroid Kurth, "Belgium" in Catholic Encyclopedia (1907) online
- see online maps 1830, 1839
- Blom, J. C. H. (1999). History of the Low Countries. pp. 297–312.
- Richard T. Griffiths, Industrial Retardation in the Netherlands, 1830–1850 (The Hague: Martinus Nijhoff, 1979).
- Baten, Jörg (2016). A History of the Global Economy. From 1500 to the Present. Cambridge University Press. p. 19. ISBN 9781107507180.
- Richard T. Griffiths, "The Creation of a National Dutch Economy: 1795–1909," Tijdschrift voor Geschiedenis, 1982, Vol. 95 Issue 4, pp. 513–53 (in English)
- Joel Mokyr, "The Industrial Revolution in the Low Countries in the First Half of the Nineteenth Century: A Comparative Case Study," Journal of Economic History (1974) 34#2 pp. 365–99 in JSTOR
- Loyen, Reginald; et al. (2003). Struggling for Leadership: Antwerp-Rotterdam Port. Competition 1870-2000. Springer.
- E. H. Kossmann, The Low Countries 1780–1940 (1978) ch 5
- Corwin, "Holland" The New Schaff-Herzog Encyclopedia of Religious Knowledge, (1914) 5:319–22
- J. C. H. Blom and E. Lamberts, eds. History of the Low Countries (1999) pp. 387–403
- The oldest universities, in Leiden, Utrecht, and Groningen, had a secular-liberal character. In 1880 Kuyper opened a Protestant university in Amsterdam and in 1923 a Catholic one opened in Nijmegen. The Amsterdam municipal university, which opened in 1877, leaned toward secular-socialism, but was formally neutral.
- A Dutch rhyme forbade intermarriage thus: Twee geloven op één kussen, daar slaapt de Duivel tussen [Two religions on one pillow, there the Devil sleeps in between.] On the decline of intermarriage see Erik Beekink, et al. "Changes in Choice of Spouse as an Indicator of a Society in a State of Transition: Woerden, 1830–1930." Historical Social Research 1998 23(1–2): 231–53. ISSN 0172-6404
- Kossmann, Low Countries pp-57
- Arend Lijphart, The Politics of Accommodation. Pluralism, and Democracy in the Netherlands (1975) is the standard analysis from a leading political scientist; Michael Wintle, "Pillarisation, Consociation, and Vertical Pluralism in the Netherlands Revisited: a European View." West European Politics 2000 23(3): 139–52, defends the concept; more critical is J. C. H. Blom, "Pillarisation in Perspective." West European Politics (2000) 23(3): 153–64.
- Johan Sturm, et al. "Educational Pluralism: A Historical Study of So-Called "Pillarization" in the Netherlands, Including a Comparison with Some Developments in South African Education," Comparative Education, (1998) 34#3 pp. 281–97 in JSTOR
- Richard Bionda and Carel Blotkamp, eds. The Age of Van Gogh: Dutch Painting 1880–1895 (1997)
- Leo Beek, Dutch Pioneers of Science (1986)
- Michael Wintle, An Economic and Social History of the Netherlands, 1800–1920: Demographic, Economic, and Social Transition (2000) p. 342
- CBS Statline - Population; history. Statistics Netherlands. Retrieved on 2009-03-08.
- Jean Gelman Taylor, The Social World of Batavia: Europeans and Eurasians in Colonial Indonesia (1983)
- Antony Wild (2005). Coffee: A Dark History. W. W. Norton. pp. 258–62.
- Maartje M. Abbenhuis, The Art of Staying Neutral the Netherlands in the First World War, 1914–1918 (Amsterdam University Press, 2006).
- "De Dodendraad – Wereldoorlog I". Bunkergordel.be. Retrieved 19 March 2013.
- Erik Hansen, "Fascism and Nazism in the Netherlands 1929–39," European Studies Review (1981) 11#3 pp. 355–85. online at Sage
- "THE KINGDOM OF THE NETHERLANDS DECLARES WAR WITH JAPAN". ibiblio. Retrieved 5 October 2009.
- William I. Hitchcock, The Bitter Road to Freedom: The Human Cost of Allied Victory in World War II Europe (2009) pp. 98–129
- Waddy, John A Tour of the Arnhem Battlefields (Pen & Sword Books, 2001; first published 1999) (ISBN 0-85052-571-3), p. 192
- Stacey, Colonel Charles Perry, Official History of the Canadian Army in the Second World War, Volume III, The Victory Campaign: The Operations in North-West Europe 1944–1945 (The Queen's Printer and Controller of Stationery Ottawa, 1960) (Downloaded: 4 July 2009), pp. 576–614
- 'Eisch Duitschen grond!' 3 May 2001 In Dutch. Retrieved on 7 October 2006.
- Black Tulip. 13 September 2005. In Dutch. Retrieved on 7 October 2006.
- 'Eisch Duitschen grond!' (Web downloadable doc). 13 September 2005. In Dutch. Retrieved on 7 October 2006.
- Alan S. Milward (1987). The Reconstruction of Western Europe, 1945–1951. U. of California Press. pp. 18–.
- Nelleke Bakker and Janneke Wubs, "A Mysterious Success: Doctor Spock and the Netherlands in the 1950s," Paedagogica Historica (2002) 38#1 pp. 215–17.
- "Netherlands". Encyclopædia Britannica Online. Encyclopædia Britannica, Inc. Retrieved 8 September 2012.
- Nelleke Bakker and Janneke Wubs, "A Mysterious Success: Doctor Spock and the Netherlands in the 1950s", Paedagogica Historica (2002) 38#1 pp. 209–26.
- Van der Eng, Pierre (1987). De Marshall-Hulp, Een Perspectief voor Nederland 1947–1953. Houten: De Haan.
- Andrew J. Hughes Hallett, "Econometrics and the Theory of Economic Policy: The Tinbergen-Theil Contributions 40 Years On," Oxford Economic Papers (1989) 41#1 pp. 189–214
- Green-Pedersen, The Politics of Justification: Party Competition and Welfare-State Retrenchment in Denmark and the Netherlands from 1982 to 1998 p. 44
- Edwin Horlings and Jan-Pieter Smits, "De Welzijnseffecten Van Economische Groei In Nederland 1800–2000" ['The welfare effects of economic growth in the Netherlands, 1800-2000'] Tijdschrift voor Sociale Geschiedenis (2001) 27#3 pp. 266–80.
- Jan C. C. Rupp, "The Americanization of Dutch Academia in the Postwar Era," European Contributions To American Studies, (1996) 30#1 pp. 133–50
- Kees Wouters, "Fear of the "Uncivilized'": Dutch Responses to American Entertainment Music, 1920–1945," European Contributions to American Studies (1996) 30#1 pp. 43–61
- Jan Hoffenaar, "'Hannibal ante portas': The Soviet Military Threat and the Build-up of the Dutch Armed Forces, 1948–1958," Journal of Military History (2002) 66#1 pp. 163–91.
- Hans Renders, "Art, ideology and Americanization in post-war Dutch Mandril: Journalistic innovation of a conservative kind," Quaerendo (2006) 36#1 pp. 114–34 online
- Rob Kroes, "The Great Satan Versus the Evil Empire: Anti-Americanism in the Netherlands," European Contributions to American Studies (1987) 11#1 pp. 37–50.
- van Krieken, Peter J.; David McKay (2005). The Hague: Legal Capital of the World. Cambridge University Press. ISBN 90-6704-185-8., specifically, "In the 1990s, during his term as United Nations Secretary-General, Boutros Boutros-Ghali started calling The Hague the world's legal capital."
- Ricklefs, M.C. A Modern History of Indonesia, 2nd edition (Macmillan, 1991), Chapters 14–15
- The Associated Press (17 August 2005). "Dutch withhold apology in Indonesia". International Herald Tribune.
- Van Nimwegen, Nico De demografische geschiedenis van Indische Nederlanders, Report no.64 (Publisher: NIDI, The Hague, 2002) p. 23 ISSN 0922-7210 ISBN 978-90-70990-92-3 OCLC 55220176
- Sejarah Indonesia – An Online Timeline of Indonesian History – The Sukarno years: 1950 to 1965 Source: www.gimonco.com. Retrieved 24 November 2011.
- Archive of passenger lists showing arrival of people from the former Dutch East Indies from 1945 to 1964 Source: www.passagierslijsten1945-1964.nl. Retrieved 24 November 2011.
- The Indos (MCNL Project, UC Berkeley) Retrieved 23 November 2011
- Christopher G. A. Bryant, "Depillarisation in the Netherlands," British Journal of Sociology (1981) 32#1 pp. 56–74 in JSTOR
- Willem Frijhoff; Marijke Spies (2004). Dutch Culture in a European Perspective: 1950, prosperity and welfare. Uitgeverij Van Gorcum. p. 412.
- John Hendrickx, et al. "Religious Assortative Marriage in The Netherlands, 1938–1983," Review of Religious Research (1991) 33#2 pp. 123–45
- Herman Bakvis (1981). Catholic Power in the Netherlands. McGill-Queens. pp. 172–73, 216.
- Hans Knippenberg, "Secularization in the Netherlands in its historical and geographical dimensions," GeoJournal (1998) 45#3 pp. 209–20. online
- Tomáš Sobotka and Feray Adigüzel, "Religiosity and spatial demographic differences in the Netherlands" (2002) online
- Christoffer Green-Pedersen, The Politics of Justification: Party Competition and Welfare-State Retrenchment in Denmark and the Netherlands from 1982 to 1998 (Amsterdam University Press, 2002), p.13; online; Green-Pedersen, "The Puzzle of Dutch Welfare State Retrenchment," West European Politics (2001) 24#3 pp. 135–50
- Netherlands: Elections held in 1998 Inter-Parliamentary Union
- Aarts, Kees; Semetko, Holli A. (1999). "Representation and responsibility: the 1998 Dutch election in perspective". Acta Politica. Palgrave Macmillan. 34 (2): 111–29.
- Irwin, Galen A.; van Holsteyn, Joop J. M. (2003). "Never a dull moment: Pim Fortuyn and the Dutch parliamentary election of 2002". West European Politics. Taylor and Francis. 26 (2): 41–66. doi:10.1080/01402380512331341101.
- Netherlands, Index of Economic Freedom. heritage.org
- "Where is the happiest place on Earth? | The Search Office Space Blog | Searchofficespace". News.searchofficespace.com. 25 May 2011. Retrieved 28 October 2011.
- J. Adams. The Familial State: Ruling Families and Merchant Capitalism in Early Modern Europe. Ithica: Cornell University Press, 2005
- Petrus Johannes Blok. History of the People of the Netherlands (5 vol 1898–1912) part 1 to 1500, online from Google; part 2 to 1559, online from Google; part 3: The War with Spain 1559–1621, online from Google; part 4 on Golden Age online from Google; online edition from Google, vol 5 on the 18th and 19th centuries
- J. C. H. Blom, and E. Lamberts, eds. History of the Low Countries (2006). ISBN 978-1-84545-272-8. 504 pp. excerpt and text search; also complete edition online
- See Emmeline Besamusca and Jaap Verheul, eds. Discovering the Dutch: On Culture and Society of the Netherlands. Amsterdam: Amsterdam UP, 2010. Print.
- Pieter Geyl, The Revolt of the Netherlands: 1555–1609 (1958) online edition
- Peter Burke, "Historians and Their Times: Huizinga, Prophet of 'Blood and Roses.'" History Today 1986 36(Nov): 23–28. ISSN 0018-2753 Fulltext: EBSCO; William U. Bouwsma, "The Waning of the Middle Ages by Johan Huizinga." Daedalus 1974 103(1): 35–43. ISSN 0011-5266; R. L. Colie, "Johan Huizinga and the Task of Cultural History." American Historical Review 1964 69(3): 607–30 in JSTOR; Robert Anchor, "History and Play: Johan Huizinga and His Critics," History and Theory, Vol. 17, No. 1 (Feb. 1978), pp. 63–93 in JSTOR
- Jonathan Israel, The Dutch Republic: Its Rise, Greatness, and Fall, 1477–1806 (1995) ISBN 978-0-19-820734-4. complete online edition; also excerpt and text search
- Jonathan I. Israel, Democratic Enlightenment: Philosophy, Revolution, and Human Rights, 1750-1790 (2011) excerpt and text search
- J. C. H. Blom, "Ludovico Locuto, Porta Aperta: Enige Notities over Deel XII En XIII Van L. De Jongs Koninkrijk Der Nederlanden in De Tweede Wereldoorlog." [After Louis Spoke, Everything Was Clear: Some Notes on Volumes 12 and 13 of Louis De Jong's Kingdom of the Netherlands in World War Ii]. Bijdragen En Mededelingen Betreffende De Geschiedenis Der Nederlanden 1990 105(2): 244–64. ISSN 0165-0505. A review of a great masterpiece. (in Dutch)
- John Lothrop Motley, The Rise of the Dutch Republic, 1555–84 (2 vol. 1856) Gutenberg editions online and History of the United Netherlands, 1584–1609 (4 vol., 1860–67) Gutenberg editions online For a criticism of Motley see Robert Wheaton, "Motley and the Dutch Historians." New England Quarterly 1962 35(3): 318–36. in JSTOR
- A. C. Otto, "Theorie En Praktijk in De Theoretische Geschiedenis Van Jan Romein" [Theory and Practice in the "Theoretical History" of Jan Romein]. Theoretische Geschiedenis 1994 21(3): 257–70. ISSN 0167-8310 (in Dutch).
- See Arthur van Riel, "Review: Rethinking the Economic History of the Dutch Republic: The Rise and Decline of Economic Modernity Before the Advent of Industrialized Growth," The Journal of Economic History, Vol. 56, No. 1 (Mar. 1996), pp. 223–29 in JSTOR
- Robert Wheaton, "Motley and the Dutch Historians," New England Quarterly (1962) 35#3 pp. 318–36 in JSTOR
- Peter Burke, "Historians and Their Times: Huizinga, Prophet of 'Blood and Roses.'" History Today (Nov 1986) (36): 23–28; William U. Bouwsma, "The Waning of the Middle Ages by Johan Huizinga." Daedalus 1974 103(1): 35–43; R. L. Colie, "Johan Huizinga and the Task of Cultural History." American Historical Review (1964) 69#3 pp. 607–30 in JSTOR; Robert Anchor, "History and Play: Johan Huizinga and His Critics," History and Theory (1978) 17#1 pp. 63–93 in JSTOR
- Chris Lorenz, "Het 'Academisch Poldermodel' En De Westforschung in Nederland," [The Dutch Academic Polder Model and Westforschung in the Netherlands]. Tijdschrift Voor Geschiedenis 2005 118(2): 252–70. ISSN 0040-7518
- See home page, with discussion logs
- Alexander Albicher, "A forced but passionate marriage: The changing relationship between past and present in Dutch history education 1945-1979," Paedagogica Historica (2012) 48#6 pp 840–58
- Susan Broomhall, and Jennifer Spinks, "Interpreting place and past in narratives of Dutch heritage tourism," Rethinking History (2010) 14#2 pp 267–85.
- Laura Cruz, "The 80 Years' Question: the Dutch Revolt in Historical Perspective." History Compass 2007 5(3): 914–34.
- Three volumes appeared in English translation, The Revolt of the Netherlands (1555–1609) (1932); and The Netherlands in the Seventeenth Century (2 vol 1936, 1964).
- Herbert H. Rowen, "The Historical Work of Pieter Geyl." Journal of Modern History 1965 37(1): 35–49. ISSN 0022-2801 in Jstor
- A. C. Otto, "Theorie En Praktijk in De Theoretische Geschiedenis Van Jan Romein" [Theory and Practice in the "Theoretical History" of Jan Romein]. Theoretische Geschiedenis 1994 21(3): 257–70. ISSN 0167-8310; P. Blaas, "An Attempt at Integral History." Acta Historiae Neerlandica 1971 (5): 271–315. ISSN 0065-129X
- Arblaster, Paul. A History of the Low Countries. Palgrave Essential Histories Series New York: Palgrave Macmillan, 2006. 298 pp. ISBN 1-4039-4828-3.
- Barnouw, A. J. The Making of Modern Holland: A Short History (Allen & Unwin, 1948) online edition
- Blok, Petrus Johannes. History of the People of the Netherlands (5 vol 1898–1912) famous classic; part 1 to 1500, online from Google; part 2 to 1559. online from Google; part 3: The War with Spain 1559–1621, online from Google; part 4 on Golden Age online from Google; online edition from Google, vol 5 on the 18th and 19th centuries
- Blom, J. C. H. and E. Lamberts, eds. History of the Low Countries (2006) 504 pp. excerpt and text search; also complete edition online
- van der Burg, Martijn. "Transforming the Dutch Republic into the Kingdom of Holland: the Netherlands between Republicanism and Monarchy (1795-1815)," European Review of History (2010) 17#2, pp. 151–70 online
- Frijhoff, Willem; Marijke Spies (2004). Dutch Culture in a European Perspective: 1950, prosperity and welfare. Uitgeverij Van Gorcum.
- Geyl, Pieter. The Revolt of the Netherlands (1555–1609) (Barnes & Noble, 1958) online edition, famous classic
- Van Hoesel, Roger, and Rajneesh Narula. Multinational Enterprises from the Netherlands (1999) online edition
- Hooker, Mark T. The History of Holland (1999) 264 pp. excerpt and text search
- Israel, Jonathan. The Dutch Republic: Its Rise, Greatness, and Fall, 1477–1806 (1995) a major synthesis; complete online edition; also excerpt and text search
- Koopmans, Joop W., and Arend H. Huussen, Jr. Historical Dictionary of the Netherlands (2nd ed. 2007)excerpt and text search
- Kossmann, E. H. The Low Countries 1780–1940 (1978), detailed survey; full text online in Dutch (use CHROME browser for automatic translation to English)
- Kossmann-Putto, J. A. and E. H. Kossmann. The Low Countries: History of the Northern and Southern Netherlands (1987)
- Milward, Alan S. and S. B. Saul. The Economic Development of Continental Europe 1780-1870 (2nd ed. 1979), 552 pp.
- Milward, Alan S. and S. B. Saul. The Development of the Economies of Continental Europe: 1850-1914 (1977) pp. 142–214
- Moore, Bob, and Henk Van Nierop. Twentieth-Century Mass Society in Britain and the Netherlands (Berg 2006) online edition
- van Oostrom, Frits, and Hubert Slings. A Key to Dutch History (2007)
- Pirenne, Henri. Belgian Democracy, Its Early History (1910, 1915) 250 pp. history of towns in the Low Countries online free
Many of you already know that the sheer price of the silicon used for solar cells is one of the reasons solar energy hasn't become more widely used. As a result, researchers keep coming up with new solar technologies. One of those technologies is the luminescent solar concentrator (LSC), which traps the sun's rays and delivers the light onto a cell using a waveguide. Unlike solar trackers, which follow the sun's movements across the sky using mirror installations, an LSC has no moving parts.
A conventional LSC is usually a plastic sheet painted with a dye and stretched over a long, thin solar cell. The process of harnessing solar energy starts when the dye absorbs the sun's light. The dye then re-emits the light, but since the light is trapped within the plastic, it bounces around until it reaches the cell. Unfortunately, some of the light that bounces around is lost as heat. To address this problem, Michael Currie and Jonathan Mapel of MIT have come up with a new LSC technology that gets rid of the plastic altogether and uses glass instead.
*Image from MIT
A mixture of dyes and tris(8-hydroxyquinoline) aluminum is sprayed onto the glass. The glass and the dyes prevent light from escaping, while the combination of dyes and tris(8-hydroxyquinoline) aluminum eliminates the heat loss associated with reabsorption of light. To further improve the efficiency of this new LSC, the scientists placed two glass-dye sandwiches, one atop the other. In effect, the lower system absorbs whatever light passes through the first. This method reportedly increased the efficiency of the LSC tenfold compared with conventional solar cells.
You deal with pathnames every day to browse to your data and toolboxes. You probably don't give them much thought, nor do you need to, until it comes time to share your data and tools. This section delves into detail about pathnames, defining the different types and how ArcGIS manages them. Subsequent topics on sharing tools assume that you've reviewed the material presented here.
Path and pathname
A path is a slash-separated list of directory names followed by either a directory name or a file name. A directory is the same as a system folder.
E:\Data\MyStuff (path terminating in a directory name)
E:\Data\MyStuff\roads.shp (path terminating in a file name)
NOTE: In everyday usage, path and pathname are synonymous. Pathname is sometimes spelled path name.
Forward versus backward slashes
The Windows convention is to use a backward slash (\) as the separator in a path. UNIX systems use a forward-slash (/). Throughout Windows and ArcGIS, it doesn't matter whether you use a forward or backward slash in your path. When using the UNIX operating system, you must use a forward slash. ArcGIS will always translate forward and backward slashes to the appropriate operating system convention.
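This behaviour is easy to confirm with Python's standard library, which geoprocessing scripts commonly use. A minimal sketch (the paths are made up for illustration); `ntpath` applies Windows path rules no matter which operating system runs the script:

```python
import ntpath

# ntpath implements Windows path conventions, so both separator
# styles normalize to the same location.
p1 = ntpath.normpath("E:/Data/MyStuff/roads.shp")   # forward slashes
p2 = ntpath.normpath(r"E:\Data\MyStuff\roads.shp")  # backward slashes

print(p1)        # E:\Data\MyStuff\roads.shp
print(p1 == p2)  # True: both spellings refer to the same path
```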
System versus catalog path
In ArcGIS, you'll sometimes come across the term catalog path or ArcCatalog path. A catalog path is a pathname that only ArcGIS recognizes. For example:
refers to the powerlines feature class found in the EastValley feature dataset in the personal geodatabase Infrastructure. This is not a valid system path as far as the Windows operating system is concerned, since Infrastructure.mdb is a file, not a folder, and Windows doesn't know about feature datasets or feature classes. Of course, everything in ArcGIS knows how to deal with catalog paths.
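One practical consequence: a script that needs to hand the system part of a catalog path to the operating system must split it at the geodatabase file itself. The helper below is a simplified, hypothetical sketch for personal geodatabases only (ArcGIS itself handles this internally; the path is invented for illustration):

```python
def split_catalog_path(path):
    """Split a catalog path into the system part (up to and including
    the .mdb file) and the geodatabase-internal part that only ArcGIS
    understands. Hypothetical helper for personal geodatabases."""
    parts = path.split("\\")
    for i, part in enumerate(parts):
        if part.lower().endswith(".mdb"):
            system = "\\".join(parts[: i + 1])
            internal = "\\".join(parts[i + 1:])
            return system, internal
    return path, ""  # no geodatabase in the path: it is a plain system path

system, internal = split_catalog_path(
    r"D:\Data\Infrastructure.mdb\EastValley\powerlines"
)
print(system)    # D:\Data\Infrastructure.mdb  (valid for Windows)
print(internal)  # EastValley\powerlines       (only ArcGIS knows this part)
```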
Absolute and relative pathnames
Absolute, or full, path
An absolute, or full, path begins with a drive letter followed by a colon, such as D:
A relative path refers to a location that is relative to a current directory. Relative paths make use of two special symbols, a dot (.) and a double-dot (..), which translate into the current directory and the parent directory. Double-dots are used for moving up in the hierarchy. A single dot represents the current directory itself.
In the example directory structure below, assume you used Windows Explorer to navigate to D:\Data\Shapefiles\Soils. After navigating to this directory, a relative pathname will use D:\Data\Shapefiles\Soils as the current directory (until you navigate to a new directory, at which point the new directory becomes the current directory). The current directory is also known as the working directory.
If you wanted to navigate to the Landuse directory from the current directory (Soils), you could type in the following in the Windows Explorer Address edit box:
and Windows Explorer would navigate to D:\Data\Shapefiles\Landuse. A few more examples using D:\Data\Shapefiles\Landuse as the current directory are
. (D:\Data\Shapefiles\Landuse - the current directory)
NOTE: A relative path cannot span disk drives. For example, if your current directory is D:, you cannot use relative paths to navigate to any directory on E:
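The dot/double-dot arithmetic, and the drive-spanning limitation, can both be demonstrated with Python's `ntpath` module (paths from the example above):

```python
import ntpath

current = r"D:\Data\Shapefiles\Soils"

# Up one level from Soils, then down into Landuse:
rel = ntpath.relpath(r"D:\Data\Shapefiles\Landuse", start=current)
print(rel)  # ..\Landuse

# Resolving the dot/double-dot notation back into an absolute path:
print(ntpath.normpath(ntpath.join(current, rel)))  # D:\Data\Shapefiles\Landuse

# A relative path cannot span disk drives -- relpath refuses:
try:
    ntpath.relpath(r"E:\Other", start=current)
except ValueError as err:
    print("no relative path:", err)
```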
Absolute and relative pathnames in ArcMap
You cannot enter relative pathnames using the dot/double-dot notation described above in any ArcGIS desktop application. However, when you create an ArcMap (or ArcScene or ArcGlobe) document, you can specify that pathnames will be stored as relative pathnames. (Absolute pathnames are the default.) To set this option, look under the File menu, click Document Properties, then click the Data Source Options button found on the lower right. This will open the Data Source Options dialog box, and you can specify whether to store absolute or relative paths.
When you save the document with relative pathnames, the application converts all pathnames into relative pathnames (using the dot/double-dot notation) in relation to the location where you stored the document. For example, if your document is stored in
and the data in one of your layers is
what gets stored in Newmap.mxd is
When you open Newmap.mxd again, ArcMap converts the stored relative pathname from the dot/double-dot notation back into the absolute path representation, which is displayed as the data source for a layer. Note, however, that if you move Newmap.mxd to another directory, the data will not be found.
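The conversion the application performs on save and on open can be approximated in a few lines of Python. The document and data locations below are hypothetical, and the exact mechanics are internal to ArcMap; this only sketches the idea:

```python
import ntpath

doc_dir = r"D:\Maps\Project"        # folder where Newmap.mxd is saved
data = r"D:\Maps\Data\streets.shp"  # a layer's data source

# On save with the relative-path option: absolute -> dot/double-dot,
# relative to the document's folder.
stored = ntpath.relpath(data, start=doc_dir)
print(stored)  # ..\Data\streets.shp

# On open: stored notation -> absolute, again relative to the document.
# Move the .mxd to another folder and this resolution points elsewhere,
# which is why the data is then not found.
print(ntpath.normpath(ntpath.join(doc_dir, stored)))  # D:\Maps\Data\streets.shp
```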
Learn more about referencing data in a map
Absolute and relative paths in geoprocessing tools
Just like data in ArcMap, you can specify that pathnames in your tools are to be stored as relative paths. The current directory used for relative paths is the directory where the tool's toolbox resides.
The relative pathname option converts paths to
- Data in a model
- Script files
- Graphics in a model
- Compiled help files (.chm).
To store as relative paths, right-click the tool and click the General tab. At the bottom of the dialog box, check Store relative path names (instead of absolute paths), as shown below.
When you add a script tool, this option will also appear on the first panel of the Add Script wizard. Setting this option on the Add Script wizard is the same as setting it on the tool's property dialog box, so you can always reset this option when the Add Script wizard is completed.
Why use relative versus absolute pathnames?
Using absolute pathnames:
- You can move the document or toolbox anywhere on your computer and the data will be found when you reopen the document or tool.
- On most personal computers, the location of data is usually constant. That is, you typically don't move your data around much on your personal computer. In such cases, absolute pathnames are preferred.
- You can reference data on other disk drives.
Using relative pathnames:
- When moving a document or toolbox, the referenced data has to move as well.
- When delivering documents, toolboxes, and data to another user, relative pathnames should be used. Otherwise, the recipient's computer must have the same directory structure as yours.
- You cannot reference data on other disk drives.
For example, consider the directory structure below. In this example, D:\Tools\Toolboxes\Toolbox1 contains a script tool that uses D:\Tools\Scripts\MyScript.py.
Using absolute paths, if you moved the toolbox from
to a different disk, such as
ArcGIS will find D:\Tools\Scripts\MyScript.py and everything will work fine. If, however, you use relative paths, ArcGIS will not find the script and the tool will not work. The tool dialog will open but when you execute you'll get the error message "Script associated with this tool does not exist". You will have to open the tool's Properties and enter the correct pathname to the script. At the same time, you should probably uncheck the Store relative pathname option.
On the other hand, if you use relative pathnames, you can simply copy the folder D:\Tools anywhere on anyone's computer and everything will work. This won't work if you use absolute paths, because the recipient could copy the folder to F:\NewTools and the pathname D:\Tools\Scripts\MyScript.py won't exist on their computer.
- Relative paths cannot span disk drives.
- Absolute pathnames work best when data isn't moved, which is typical for disks on a personal computer.
- Relative pathnames work best when you're delivering documents and data to another user.
- Relative pathnames use dot/double-dot (. and ..) notation. You can enter relative pathnames with this notation in Windows Explorer or at the Windows command prompt.
- ArcGIS doesn't allow you to enter relative pathnames using dot/double-dot notation. Rather, relative pathnames are stored in the document or toolbox (once you check the Store relative pathnames option).
- Relative pathnames are relative to a current directory, which is the location of the saved document or toolbox.
UNC stands for Universal (or Uniform, or Unified) Naming Convention and is a syntax for accessing folders and files on a network of computers. The syntax is:
\\<computer name>\<shared directory>\
followed by any number of directories and terminated with a directory or file name.
The computer name is always preceded by a double backward-slash (\\).
In UNC, the computer name is also known as the host name.
A few rules for UNC pathnames are
- UNC paths cannot contain a drive letter (such as D:).
- You cannot navigate to directories above the shared directory.
- The Store relative pathnames option for documents and tools has no effect on UNC pathnames.
In ArcGIS, you can use a UNC pathname anywhere a pathname is requested. This is particularly advantageous for shared data on a local area network (LAN) or wide area network (WAN). Data can be stored on one computer and everyone with access to the computer can use the data.
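Python's `ntpath` module understands the UNC syntax as well, which is handy when a script needs to pull the \\computer\share portion out of a pathname (the server and share names below are invented):

```python
import ntpath

path = r"\\gisserver\public\Shapefiles\roads.shp"

# splitdrive treats the \\computer\share portion as the "drive"
drive, rest = ntpath.splitdrive(path)
print(drive)  # \\gisserver\public
print(rest)   # \Shapefiles\roads.shp

# A UNC path carries no drive letter; it starts with a double backslash.
print(drive.startswith("\\\\"))  # True
```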
There are two issues with sending documents or tools that contain UNC paths.
- The recipient doesn't have access to your computer, either because of security settings on the shared directory, or because they don't have access to your LAN.
- The computer or its shared folder is removed from the network.
In Windows, you can share a folder so that other users on your local area network can access it. In ArcCatalog or Windows Explorer, right-click a folder, click Sharing and Security, then follow the instructions on the dialog box that opens.
URL stands for Uniform Resource Locator and uniquely specifies the address of any document on the internet. The components of a URL are:
- The protocol used to access the resource, such as http (HyperText Transfer Protocol) or ftp (File Transfer Protocol)
- The host (server) to communicate with
- The path to the file on the host
Windows Internet Explorer lets you type "www.esri.com" in the address bar and adds http:// for you. It's more correct, however, to specify the protocol explicitly. Besides http, other protocols include https (Secure HyperText Transfer Protocol), ftp (File Transfer Protocol), mailto (e-mail address), and news (Usenet newsgroups), among others.
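The three components listed above map directly onto what Python's `urllib` reports when it parses an address (the host comes from the text; the document path is illustrative):

```python
from urllib.parse import urlsplit

parts = urlsplit("http://www.esri.com/support/index.html")
print(parts.scheme)  # http                 (the protocol)
print(parts.netloc)  # www.esri.com         (the host to communicate with)
print(parts.path)    # /support/index.html  (the path on the host)

# Without the protocol the parse is ambiguous: the host lands in 'path'
# and netloc comes back empty -- one reason to always include it.
print(urlsplit("www.esri.com").netloc)  # '' (empty string)
```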
In ArcGIS, you can only use URLs where permitted. In general, the user interface will tell you whether a URL is permitted or needed. In geoprocessing, URLs can be used in the Documentation Editor when creating links, or in labels within ModelBuilder. When using URLs in ArcGIS, it's recommended that you include the protocol, as in |
Put out that fire in your throat by going to the doctor and getting a throat culture. If the test results show strep throat, you can usually get rid of it pretty quickly with antibiotics. Learn more about this throat infection below.
Strep Throat Basics
Strep throat is caused by the bacterium Streptococcus pyogenes (group A Streptococcus). Although a sore throat is a telltale symptom of strep throat, not all sore throats are caused by this bacterial infection. In fact, most sore throats are the result of viruses.
Other strep throat symptoms include red and white patches in the throat; lower stomach pain; fever; general discomfort, uneasiness, or an ill feeling; loss of appetite; nausea; difficulty swallowing; tender or swollen lymph nodes in the neck; red and enlarged tonsils; headache; and a rash that is often worse under the arms and in skin creases (scarlet fever).
Strep throat responds quickly to antibiotics. Although the illness is relatively common, that doesn't mean it can't be dangerous. Untreated strep throat can lead to the serious disease rheumatic fever, although this happens only in rare cases.
Who's at Risk for Strep Throat
Anyone can get strep throat, but it is most common in children who are between the ages of 5 and 15. Watch for strep during the school year, particularly during the winter months when large groups of children and teenagers are in close quarters. Your physician will need to diagnose strep throat using a laboratory test, such as a throat culture.
Defensive Measures Against Strep Throat
The most effective way to keep clear of strep throat is to wash your hands thoroughly (or at the very least use a liquid antibacterial hand sanitizer) and push the practice on your kids. The bacterium that causes strep throat hangs out in the nose and throat like 15-year-olds at the mall, so when someone who is infected coughs or sneezes, that gunk can potentially be spread to everything they come in contact with.
If someone in your family gets infected with strep throat, you can take some other precautions (besides washing your hands often) at home to keep everyone else from feeling as though their throats are on fire:
- Don't allow the sick person to share drinks, foods, napkins, tissues, or even towels with other family members.
- Be sure the sick person covers his or her mouth and nose with a tissue when sneezing or coughing and then throws it away to prevent passing infectious fluid.
- Keep the sick person's eating utensils, dishes, and drinking glasses separate from everyone else's.
- Thoroughly wash eating utensils, dishes, and drinking glasses after each use; if using the dishwasher, select the "sanitize," "heat dry," and/or similar options.
- Never share a toothbrush.
- Don't kiss anyone with strep throat.
Don't let throat infections get the best of you. Whether you're battling cold sores, mono, or strep throat, the measures outlined in this article will help nip the infection in the bud. Also take note of the suggestions for preventing another outbreak to keep your family healthy and happy.
©Publications International, Ltd.
ABOUT THE AUTHOR:
Laurie L. Dove is an award-winning Kansas-based journalist and author whose work has been published internationally. A dedicated consumer advocate, Dove specializes in writing about health, parenting, fitness, and travel. An active member of the National Federation of Press Women, Dove also is the former owner of a parenting magazine and a weekly newspaper.
This information is solely for informational purposes. IT IS NOT INTENDED TO PROVIDE MEDICAL ADVICE. Neither the Editors of Consumer Guide (R), Publications International, Ltd., the author nor publisher take responsibility for any possible consequences from any treatment, procedure, exercise, dietary modification, action or application of medication which results from reading or following the information contained in this information. The publication of this information does not constitute the practice of medicine, and this information does not replace the advice of your physician or other health care provider. Before undertaking any course of treatment, the reader must seek the advice of their physician or other health care provider. |
What is auditory processing disorder (APD)?
APD is a term used to describe a number of hearing disorders caused by problems with how the brain processes the information it receives from the ear. People with APD have normal hearing when tested using standard tests, but experience problems with understanding and making sense of sounds, especially complicated and fast-changing sounds like speech.
APD is usually present from a young age, and there is some evidence to link it with other conditions, such as autism spectrum disorder, and attention deficit hyperactivity disorder. But as the brain ages, its ability to process sounds diminishes and auditory processing difficulties can also be experienced by older people whose hearing may appear to be normal.
What is hyperacusis?
Hyperacusis is an increased sensitivity to sounds that are not normally considered to be uncomfortably loud. People who have hyperacusis are very sensitive to sounds above a certain volume, and often find everyday sounds, such as running water or the rustling of a newspaper, troubling and sometimes painful. Many people with hyperacusis also experience a feeling of fullness in their ear(s).
Hyperacusis can develop over time, or come on suddenly. It can affect one or both ears. While it can be a nuisance to some people, it's very distressing for others and can have a big impact on their life.
If you think you have hyperacusis, see your GP, who may refer you to a specialised audiologist (hearing specialist) or the ear, nose and throat (ENT) department of your local hospital where you will receive advice on managing your condition, usually through sound therapy and counselling.
For more information about hyperacusis, its causes, treatments, and where you can find support, see our leaflet hyperacusis.
Hearing loss from trauma can be temporary or permanent, depending on the extent of the damage to your ear. Types of trauma that can affect your hearing include:
- Head injuries that can directly affect the inner ear, or the structures within it, and cause hearing loss.
- Loud blasts that can cause damage to the middle and inner ear.
- Sudden large changes in air pressure (although this is quite rare).
- Ear surgery also carries risks for your hearing so you should discuss this with your doctor before the operation.
Hearing loss and other health conditions
New evidence highlights that hearing loss increases the risk or impact of various other long-term conditions. At the same time, because many health conditions are associated with ageing, they are likely to occur alongside hearing loss.
Research shows that hearing loss doubles the risk of developing depression and increases the risk of anxiety and other mental health issues - although hearing aids reduce these risks.
There's also increasing evidence of a strong link between hearing loss and dementia. With mild hearing loss, the risk of developing dementia doubles; with moderate hearing loss the risk is three times as high; and with severe hearing loss it is five times as high.
Research has shown a link between any type of diabetes and hearing loss. There's also evidence that hearing loss is linked to cardiovascular disease, stroke and obesity.
Where can I get more information and support?
For more information about conditions that can affect your hearing, see our section on ear problems. You can also read our leaflets and factsheets, or contact our information line to find out about services and support in your area. |
Literacy - Key Stage 3 (11-14 year olds)
Speaking and Listening
In this animation the characters Jack and Amy demonstrate both good and bad practices when giving a speech. It has good advice for improving delivery.

Plan, Rehearse and Deliver
A video which demonstrates both good and bad techniques for delivering a successful presentation.

School Rules!
Speaking and listening activities based on the theme of religious issues conflicting with school policy. There are opportunities for reading and writing, teacher notes and support materials.

Hoodie Trouble
An interactive speaking and listening resource based on the issue of the 'hoodie culture' common in many places. The resource provides reading and writing opportunities and has teacher notes and support material.

You are What you Wear
Speaking and listening activities based on identity and belonging. It considers how clothes are used to establish identity. These group discussion resources have teachers' notes and classroom support materials.
21 August 2012
With a diameter of about 6800 kilometres, Mars is only half as large as Earth. Its internal structure is not well known. With the geophysical InSight lander planned for 2016/17, NASA expects to gain new insights into the structure and thermal evolution of the planet. Mars, much like all rocky planets, has a shell structure. The relatively small, hot iron core is surrounded by a presumably largely solidified shell of iron-rich, siliceous rocks. There is evidence that Mars, in its early history, had a (probably weak) magnetic field.
DLR's HP3 experiment uses an electromechanical impact mechanism capable of driving an instrument container into the Martian surface to a depth of up to five metres. Behind the cylindrical drill is a flat cable with thermal sensors. These sensors measure the temperature profile and the heat conductivity of the soil, from which the heat flow can be determined. Until now, a fully-automatic mole of this kind has never been used on any planetary body in our Solar System.
DLR (CC-BY 3.0).
The 'mole' experiment 'HP3' (Heat Flow and Physical Properties Package) is intended to determine the heat flow in the Martian interior and to determine the planet's physical composition. For this purpose, a special electromechanical impact mechanism drives an instrument container down to a depth of up to five metres below the surface of Mars (left). In these first five metres below the surface, the geoelectrical properties of the ground are to be measured as part of the search for subsurface ice deposits.
Logo of the NASA InSight mission, scheduled to reach Mars in 2016. On the lander itself is the DLR Experiment HP3 (Heat Flow and Physical Properties Package. It will penetrate the Martian surface to a depth of five metres to measure the heat flow and thus provide new insights into the thermal evolution of Mars and hidden water sources under the surface.
After the successful landing of the Mars Science Laboratory Curiosity rover, NASA has selected one more lander mission to Mars. The InSight mission will reach Mars in September 2016, after a six-month journey; it has been designed to take a 'look' into the deep interior of the Red Planet; it will do this with geophysical experiments including DLR's HP3, which will penetrate several metres into the Martian subsurface to measure the soil's thermo-physical and electrical properties.
"The selection of the mission InSight by NASA demonstrates the importance of exploring our planetary neighbour. I am very pleased that DLR can contribute with their own experiment on this lander to unveiling the mysteries of the Red Planet," said Johann-Dietrich Wörner, Chairman of the DLR Executive Board.
InSight stands for 'Interior Exploration using Seismic Investigations, Geodesy and Heat Transport'. The mission name clearly explains that geophysical experiments are conducted on and underneath the Martian surface; for example, measuring the velocity of seismic waves or the heat flow. One of the aims of the mission is to understand the structure and state of the core and crust, as well as the thermal evolution of Mars. The HP3 experiment for the InSight mission was developed at the German Aerospace Center (Deutsches Zentrum für Luft-und Raumfahrt; DLR). HP3 is short for 'Heat Flow and Physical Properties Package'.
"So far, the exploration of Mars has been carried out by observations from orbit and directly on the surface," explains Tilman Spohn, scientific lead of HP3 and Director of the DLR Institute of Planetary Research in Berlin-Adlershof. "But the study of the planet's interior is only just beginning. HP3 could drive Mars research significantly forward." After the Radiation Assessment Detector (RAD), which was jointly developed by the University of Kiel and DLR for the current Curiosity mission, and the High Resolution Stereo Camera (HRSC) on board ESA's Mars Express spacecraft, HP3 is the third DLR experiment to investigate our planetary neighbour.
Into the Martian surface with an electro-mechanical mole
For some time now, planetary research has dealt with the question of why Mars is so different compared to Earth; for example, the lack of plate tectonics or continental drift. This process is fundamental to the carbon cycle on Earth, and could determine why, on Earth, conditions for life are much more favourable than on Mars. "In its earlier history, water flowed on Mars, so it is likely that the conditions were favourable for life at some point; for example, those influenced by the effect of volcanic activity on the temperatures of the atmosphere," explains Professor Spohn. "Mars continues to be the likeliest place beyond Earth in the Solar System where life could have originated."
DLR's HP3 experiment uses an electromechanical impact mechanism capable of driving an instrument container into the Martian surface to a depth of up to five metres. "Until now, a fully-automatic mole of this kind has never been used on any planetary body in our Solar System," states Tim van Zoest, a physicist at the DLR Institute of Space Systems in Bremen, where the impact mechanism was developed. "Comparable experiments to analyse material below the planet's surface have only been conducted manually on the Moon during the US Apollo missions 15 and 17 in the early seventies. But the tools used then were similar to conventional drills."
The sensors on HP3 were developed at the DLR Institute of Planetary Research in collaboration with the Space Research Institute of the Austrian Academy of Science in Graz. In particular, the mole will monitor the heat flow inside the Martian surface. The precise and direct measurement of heat flow under the surface will enable the determination of the heat produced deep inside Mars. This will give insights into the composition of the Red Planet and its ongoing cooling process, which is related to its present volcanic activity. HP3 will also study the geological stratification of the first five metres below the Martian surface – especially the presence of ice – through the measurement of the geoelectrical properties of the ground.
InSight is the twelfth mission in NASA's Discovery programme, which is characterised by cost-efficient projects with a relatively small budget of about 500 million US dollars. The trademark of the Discovery missions is their strong focus on specific scientific questions. The mission is headed by Bruce Banerdt of the Jet Propulsion Laboratory (JPL), one of the United States' most prestigious Mars researchers. In addition to DLR, the French Space Agency CNES is also involved.
The Austrian physicist Erwin Schrödinger made fundamental advances in establishing the groundwork of the wave mechanics approach to quantum theory. Born in Vienna, Schrödinger was raised in a household where both English and German were commonly spoken, his mother being part English and part Austrian. Thus, he was bilingual at a very early age and was initially educated at home by a private tutor. In 1898, Schrödinger was enrolled at the Akademisches Gymnasium, graduating from the institution in 1906 with a broad education. Subsequently he entered the University of Vienna, where he studied mathematics, analytical mechanics, and theoretical physics. In 1910, Schrödinger received his doctorate and accepted a research post, but was obliged to leave upon the outbreak of World War I. Following his military service, in 1921, he settled in Zurich, where he taught at the university. Five years later, Schrödinger would develop his foundational work in the field of quantum wave mechanics.
In 1924, a French graduate student named Louis de Broglie completed his thesis on quantum physics. Within the document was a groundbreaking theory that helped revolutionize the field and led Schrödinger to develop the famous equations that carry his name. According to de Broglie, electrons and other forms of matter exhibit a dual nature that enables them to sometimes behave as particles and sometimes as waves. Prior to de Broglie's work, several scientists were convinced of the duality of light, but he was the first to suggest that the same was true for other forms of matter. Influenced by de Broglie's work, which had gained additional weight through the support of Albert Einstein, Schrödinger attributed the quantized energies of the electron orbits thought to exist in the atom to the vibration frequencies of electron matter waves, now known as de Broglie waves, around the nucleus of the atom. This idea led to Schrödinger's notion that an electron wave would exhibit a fixed quantum of energy, an idea that was fundamental to the development of his wave mechanics, which was based upon calculations more familiar to most scientists than those used by Werner Heisenberg to establish his rival matrix mechanics description of electrons. Because of this familiarity, and because it made atomic events easier to visualize, Schrödinger's wave mechanics quickly gained acceptance among many physicists as an alternative to Heisenberg's matrix mechanics. A certain amount of dissent remained in the field, however, until Schrödinger proved that matrix mechanics and wave mechanics give equivalent results, so that they are essentially the same theory expressed in different ways.
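In modern notation, the two relations at the heart of this passage can be written compactly (these are the standard textbook forms, not Schrödinger's original typography):

```latex
% de Broglie's relation: a particle of momentum p behaves as a wave
% of wavelength lambda, with h being Planck's constant
\lambda = \frac{h}{p}

% Time-independent Schrodinger equation (one dimension) for a particle
% of mass m in a potential V(x): bound solutions psi exist only for
% certain energies E, which is how fixed quanta of energy arise.
-\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi}{dx^{2}} + V(x)\,\psi = E\,\psi
```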
The year after Schrödinger published his groundbreaking work on wave mechanics, he was offered the prominent post previously held at the University of Berlin by Max Planck, which he accepted. In 1933, however, Schrödinger decided to leave Germany due to the rise of Hitler and the Nazi party, even though, as a Catholic, he was not in any direct danger at the time. He first went to Oxford, where he held a fellowship for a time, and was then offered a position at Princeton, but did not accept it. Eventually, in 1936, he decided to accept a post at the University of Graz in Austria, but when the Nazis invaded the country soon after, Schrödinger was dismissed due to his earlier affront to the party. He and his wife, Anny, fled Austria and eventually established themselves in Ireland, where Schrödinger joined the Institute for Advanced Studies in Dublin.
Despite his political problems and frequent moves in the 1930s, Schrödinger continued to carry out his theoretical work. He is particularly known during this period for positing a vivid example of the paradoxes associated with quantum mechanics. Often referred to simply as Schrödinger’s cat, the famous hypothetical scenario involves a cat placed in a steel chamber with a Geiger counter containing an amount of radioactive material so small that there is an equal probability that in one hour one of the atoms will or will not decay. If one of the atoms does decay, it will cause a chain reaction that results in the release of hydrocyanic acid into the steel chamber, poisoning the cat. However, until the chamber is actually opened, it is impossible to know if the cat has been poisoned or not. Indeed, according to Schrödinger and quantum law, the cat is paradoxically both alive and dead after an hour in the box in a superposition of states. This superposition is only resolved when the fate of the cat is determined by opening the chamber. Schrödinger’s cat is often used as an argument against a blurred model of reality, in which the cat in the chamber would be smeared into a state between the two possibilities, being both partially alive and partially dead.
In his later years, Schrödinger renewed an earlier correspondence with Einstein and, like him, began to concentrate his efforts on developing a unified field theory. His work in this area, however, was received no better than that of Einstein, who was often said to be wasting his time on a fruitless endeavor. Schrödinger, though, had many other interests and published such diverse works as What is Life (1944), Nature and the Greeks (1954), and My View of the World (1961), the last of which expounded an outlook similar to the Vedanta branch of Hindu philosophy. In 1955, he retired from the Institute for Advanced Studies and subsequently returned to Vienna. He died following a prolonged illness on January 4, 1961. For his significant contributions to science, Schrödinger received many honors, including the Nobel Prize in Physics, which he shared with Paul Dirac in 1933.
© 1995-2017 by Michael W. Davidson and The Florida State University.
Traditional ways of constructing buildings create pollution and waste. Building materials contain vast amounts of embedded energy. According to Architecture 2030, building construction and materials account for 5.5 percent of global greenhouse gas emissions. In addition, while exact numbers aren’t available, trucks and cranes transporting and installing materials at construction sites produce considerable amounts of greenhouse gas emissions.
(Source: Architecture 2030)
Typically, materials from torn-down buildings and sites are carted off to the landfill. The U.S. Environmental Protection Agency says only 40 percent of building and construction material is now “recycled, reused, or sent to waste-to-energy facilities, while the remaining 60 percent of the materials is sent to landfills.” Many sustainable architects, landscape architects, and construction firms are now moving towards a more sustainable construction process to reduce waste and greenhouse gas emissions. (Source: Environmental Protection Agency)
In a sustainable reconstruction, building materials are reused or recycled, dramatically reducing waste. For example, a new park can be created out of old building materials. Once the materials have been separated, some are kept at the construction site and reprocessed. Reclaimed soils, concrete rubble, glass, wood, and steel can be reused or recycled to serve new functions, reducing greenhouse gas emissions in the process. With climate change, any new construction methods that help landscape architects avoid producing additional emissions are a major benefit both to the project and society as a whole. In a sustainable landscape, everything old is made new again. (Source: Reuse Alliance)
This animation is designed to be a basic introduction to sustainable design concepts, created for the general public and students of all ages. We look forward to receiving your comments.
The consequences of man's use of fossil fuels (coal, oil and natural gas) for global warming have not escaped anyone's attention. Ocean acidification is another, much less known, result of the approximately 79 million tons of carbon dioxide (CO2) released into the atmosphere every day, not only from fossil fuel burning but also from deforestation and cement production (7). Since the beginning of the industrial revolution, about one third of the CO2 released into the atmosphere by anthropogenic (human-caused) activities has been absorbed by the world's oceans, which play a key role in moderating climate change (5). Without this capacity of the oceans, the CO2 content of the atmosphere would have been much higher and global warming and its consequences more dramatic. The impacts of ocean acidification on marine ecosystems are still poorly known, but one of the most likely consequences is slower growth of organisms that form calcareous skeletons or shells, such as corals and mollusks.
The carbon cycle
In order to understand ocean acidification and its possible impacts, one needs to understand the behaviour of carbon in nature. Carbon, like other elements, circulates in different chemical forms and between different parts of the Earth system (the atmosphere, the biosphere and the oceans). These fluxes of carbon in inorganic (e.g. CO2) and organic forms (sugars and more complex carbohydrates in the biosphere) constitute the carbon cycle. In a very short time span, human activities are drawing on an old reservoir of carbon (fossil fuels) that took millions of years to accumulate, creating a new and massive flux of CO2 into the atmosphere. The oceans can mitigate this additional carbon dioxide flux and thus help moderate global warming, but not without consequences.
The world's oceans play a fundamental role in the exchange of CO2 with the atmosphere and constitute an important sink for atmospheric CO2. Once dissolved in sea water, carbon dioxide has two possible fates. It can either be used in photosynthesis or other physiological processes, or remain free in its different dissolved forms in the water. The latter leads to ocean acidification.
The chemical process of ocean acidification
There is a constant exchange between the upper layers of the oceans and the atmosphere. Nature strives towards equilibrium, that is, for the ocean and the atmosphere to contain equal concentrations of CO2. Carbon dioxide in the atmosphere therefore dissolves in the surface waters of the oceans so as to establish a concentration in equilibrium with that of the atmosphere. As CO2 dissolves in the ocean, it generates dramatic changes in sea water chemistry. CO2 reacts with water molecules (H2O) to form the weak acid H2CO3 (carbonic acid). Most of this acid dissociates into hydrogen ions (H+) and bicarbonate ions (HCO3-). The increase in H+ ions reduces pH (the measure of acidity) and the oceans acidify; that is, they become more acidic, or rather less alkaline, since although the ocean is acidifying, its pH is still greater than 7 (the pH of neutral water). The average pH of today's surface waters is 8.1, approximately 0.1 pH units less than the estimated pre-industrial value of 200 years ago (2,3).
Projections of future changes
Modeling demonstrates that if CO2 continues to be released on current trends, average ocean pH will reach 7.8 by the end of this century, about 0.5 units below the pre-industrial level and a pH that has not been experienced for several million years (1). A change of 0.5 units might not sound like a very big change, but the pH scale is logarithmic, meaning that such a change is equivalent to a roughly threefold increase in H+ concentration. All this is happening at a speed 100 times greater than anything observed in the geological past. Several marine species, communities and ecosystems might not have the time to acclimate or adapt to such fast changes in ocean chemistry.
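Since pH is defined as -log10[H+], the fold-increase in hydrogen ion concentration for a pH drop of d units is simply 10 to the power d. The short, purely illustrative Python sketch below reproduces the figures quoted in the text (the 0.1 unit drop already observed and the projected 0.5 unit drop):

```python
def fold_increase(delta_ph):
    """Fold-increase in [H+] for a drop of delta_ph pH units.

    Follows directly from the definition pH = -log10[H+]:
    a drop of d units multiplies [H+] by 10**d.
    """
    return 10.0 ** delta_ph

# The ~0.1 unit drop observed since pre-industrial times:
print(round(fold_increase(0.1), 2))  # 1.26, i.e. about a 26% rise in [H+]

# The ~0.5 unit drop projected for 2100 under current emission trends:
print(round(fold_increase(0.5), 2))  # 3.16, the roughly threefold increase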
Possible consequences on marine organisms
The dissolution of carbon dioxide in sea water not only provokes an increase in hydrogen ions and thus a decline in pH, but also a decrease in a very important form of inorganic carbon: the carbonate ion (CO32-). Numerous marine organisms such as corals, mollusks, crustaceans and sea urchins rely on carbonate ions to form their calcareous shells or skeletons in a process known as calcification. The concentration of carbonate ions in the ocean largely determines whether aragonite and calcite, the two natural polymorphs of calcium carbonate (CaCO3) secreted as shells or skeletons by these organisms, dissolve or precipitate. Today, surface waters are supersaturated with respect to aragonite and calcite, meaning that carbonate ions are abundant. This supersaturation is essential not only for calcifying organisms to produce their skeletons or shells, but also to keep these structures intact. Existing shells and skeletons might dissolve if pH reaches lower values and the oceans turn corrosive for these organisms. The decrease in carbonate ions could therefore be catastrophic for calcifying organisms, which play an important role in the food chain and form diverse habitats that help maintain biodiversity.
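Whether a calcium carbonate mineral precipitates or dissolves is commonly summarized by the saturation state, Omega = [Ca2+][CO32-] / Ksp, where values above 1 mean supersaturated (shell formation is favoured) and values below 1 mean the water is corrosive to that mineral. The sketch below only illustrates the calculation: the carbonate concentrations and the solubility product are assumed, order-of-magnitude values chosen for the example, not measured data.

```python
def saturation_state(ca, co3, ksp):
    """Omega = [Ca2+][CO3 2-] / Ksp for a calcium carbonate mineral.

    Omega > 1: supersaturated, shells and skeletons are stable.
    Omega < 1: undersaturated, existing structures begin to dissolve.
    """
    return (ca * co3) / ksp

# Illustrative values (mol/kg); the Ksp is an assumed figure for aragonite.
CA = 0.0103            # seawater calcium, roughly constant
KSP_ARAGONITE = 6.5e-7

omega_today = saturation_state(CA, 2.0e-4, KSP_ARAGONITE)      # carbonate-rich
omega_acidified = saturation_state(CA, 5.0e-5, KSP_ARAGONITE)  # carbonate-poor

print(omega_today > 1)      # True: calcification favoured
print(omega_acidified > 1)  # False: corrosive to aragonite shells
```

The point of the sketch is the direction of the effect: as dissolved CO2 lowers the carbonate ion concentration, Omega falls toward and eventually below 1.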
The magnitude of ocean acidification can be predicted with a high level of confidence because ocean chemistry is well known. The impacts of acidification on marine organisms and their ecosystems, however, are much less predictable. Calcifying organisms are not the only ones potentially affected: other key physiological processes such as reproduction, growth and photosynthesis may also be impacted, possibly resulting in an important loss of marine biodiversity. It is also possible that some species, like the seagrasses that use CO2 for photosynthesis, will be positively influenced by ocean acidification. Ocean acidification research is still in its infancy, and more studies are required to answer the numerous questions about its biological and biogeochemical consequences.
1) Caldeira, K., Wickett, M.E., 2003. Anthropogenic carbon and ocean pH. Nature 425 (6956): 365–365.
2) Key, R.M.; Kozyr, A.; Sabine, C.L.; Lee, K.; Wanninkhof, R.; Bullister, J.; Feely, R.A.; Millero, F.; Mordy, C. and Peng, T.-H. (2004). "A global ocean carbon climatology: Results from GLODAP". Global Biogeochemical Cycles 18
3) Orr J. C., Fabry V. J., Aumont O., Bopp L., Doney S. C., Feely R. A. et al. 2005. "Anthropogenic ocean acidification over the twenty-first century and its impact on calcifying organisms". Nature 437 (7059): 681–686.
4) Raven, J. A. et al. 2005. Ocean acidification due to increasing atmospheric carbon dioxide. Royal Society, London, UK.
5) Sabine C. L. et al., 2004. The oceanic sink for anthropogenic CO2. Science 305:367-371.
6) Martin S. et al. 2008. Ocean acidification and its consequences. French ESSP Newsletter 21: 5-16.
7) IPCC 2007. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Summary for Policymakers.
For more information on ocean acidification, carbonate chemistry and the carbon cycle, see the key documents and web resources.
Acclimate - To accustom or become accustomed to a new environment or situation.
Aragonite - An orthorhombic (system of crystallization characterized by three unequal axes at right angles to each other) mineral form of crystalline calcium carbonate, dimorphous with calcite
Biosphere – The living organisms and their environment
Calcite - A common crystalline form of natural calcium carbonate, CaCO3, that is the basic constituent of limestone, marble, and chalk. Also called calcspar.
Inorganic - Involving neither organic life nor the products of organic life
Ocean acidification – The process by which carbon dioxide dissolves in seawater, giving rise to a decrease in pH and other changes in ocean carbonate chemistry
Organic - Of, relating to, or derived from living organisms
pH – Measure of acidity (pH= -log[H+])
Photosynthesis - The process in green plants and certain other organisms by which carbohydrates are synthesized from carbon dioxide and water using light as an energy source. Most forms of photosynthesis release oxygen as a byproduct.
Phytoplankton - Minute, free-floating aquatic plants (algae, protists, and cyanobacteria).
Polymorph – Chemistry: A specific crystalline form of a compound that can crystallize in different forms.
3D printing is a well-established industrial technology for prototyping and manufacturing, particularly popular with the aerospace and defence sectors. Also known as additive manufacturing (AM), 3D printing is the process of making a solid 3D object from a digital computer aided design (CAD) file. The printer adds successive layers of material together until the final object has been created. This is different from traditional manufacturing methods like CNC machining, which removes material from a solid block using rotating tools or cutters.
3D printing is a rapid production method with minimal waste material. Its design flexibility means users can manufacture bespoke objects for a low cost. These advantages have made it increasingly popular as a production method in the manufacturing industry.
Understanding and using this growing technology can benefit children’s learning, particularly in science, technology, engineering and mathematics (STEM) subjects but also beyond these more traditional fields in music, design technology, history, geography and biology. In 2013, a pilot project introduced 3D printers into 21 schools to investigate learning through 3D printing. This project highlighted the need for robust training and good technical support for the widespread incorporation of 3D printing into the curriculum to be successful.
This project confirmed the potential for 3D printers as a teaching resource, providing that teachers can access adequate training for the technology. Many of the schools reported increased pupil motivation when engaged in 3D printing projects. Exciting and innovative projects are also a simple way to keep pupils engaged in STEM subjects, which is a vital step forward in addressing the STEM skills shortage. Since the pilot project in 2013, 3D printing has become more accessible and popular as a classroom technology.
The rise of 3D printers in schools
The increasing number of 3D printers in schools is due not only to growing recognition of 3D printing as a relevant and engaging educational tool, but also to the number and availability of low cost 3D printing machines. It is now possible for schools to buy a 3D printer for around £500, whereas previous versions were cost prohibitive. The falling price tag is drastically improving the technology's uptake in the education sector.
Advances in resources available for teachers and other education professionals are also making 3D printing more widely accessible. Teachers can now download design software and access it via tablets and mobile phones. Easy tutorials for beginners are available for those without basic knowledge of the technology.
3D printing software is considerably more user friendly than it was two years ago, which makes it ideal for younger children to grasp. Innovative apps for mobile phones and tablets make it easy and efficient to create designs and send them to a 3D printer for production. These apps build up students’ skills using design platforms. However, the primary reason the technology is able to positively influence the learning process in design is the ability to learn through trial and error.
Developing new skills
Using 3D printing as a production method enables students and pupils to move from the conception of an idea to producing a physical object with relative ease. The technology provides the ability to produce a part quickly, which is an advantage for students learning about design, particularly the limitations and constraints of the different technologies. Interrogating a physical object can make it easier for pupils to spot mistakes in designs. This allows them to gain valuable problem solving skills in a creative, hands-on way; without the ability to print prototypes, it would be considerably more difficult for students to identify weaknesses in their designs and improve upon them.
In recent years, the price of consumer 3D printers has dropped as the market has expanded. This makes the purchase of a machine easier to justify in the education sector, but for those schools that feel unable to justify the cost of owning a 3D printer despite recognising the benefits it can offer to learning, a purchase is not always necessary. Facilities such as the Fabrication Development Centre (FDC) at the Renishaw Miskin site, near Cardiff, contains five 3D printers that local schools use during their design and technology lessons.
Believed to be the only facility of its kind in the UK that is attached to a manufacturing site, Renishaw’s FDC enriches pupils' learning experience further by showing them how industrial metal additive manufacturing machines are made and used to produce medical devices and dentures within the co-located Healthcare Centre of Excellence. This gives students the opportunity to see Renishaw manufactured metal 3D printers in action — producing objects such as dental frameworks and facial implants. Students are able to relate their learning in the classroom with practical applications in industry, a link that may otherwise be difficult to grasp.
3D printing has a number of benefits to a wide range of school subject areas, from design and technology to physics and even model building for subjects such as biology and geography. A major hurdle to overcome in the education sector was mastering 3D printing machines. However, the emergence of simple software packages and the availability of online tutorials have greatly improved accessibility to the technology. With the reduction in cost of materials and printers, and schools’ focus on active learning and addressing the skills gap, it would be logical for 3D printers to become a widely used educational tool in years to come. Who knows, they might even prove as popular as the electronic calculator.
In a context of oil shortages, biofuels are an option to consider. Locally sourced and easily renewable, they are also cleaner. Hemp has several benefits as a future source of energy, especially the large quantities of biomass it produces. It is an alternative to non-renewable hydrocarbons that has a great potential.
GREEN ENERGIES FROM PLANTS
Several types of biofuels have been developed over time, thanks to different techniques and different plants. Hemp can be used as a basis for creating at least three different types of energies.
The oil extracted from the seeds can be converted into biodiesel, but it is above all hemp's lignocellulosic biomass that is interesting: it produces 7 to 10 tons of dry matter per hectare (while still allowing the seeds to be harvested for food use).
Through a fermentation process (by microbes inside large bioreactors), hemp biomass can be converted into bioethanol, which is already widely used as an oil additive. Another method could produce methane, a gas that is burned by combustion engines and is also used for heating.
It is worth recalling that the inventor of the diesel engine originally designed it to run on many kinds of fuels, especially those of plant origin.
ADVANTAGES OF HEMP TO PRODUCE BIOFUELS
- A generous and rapidly renewable resource. One hectare of hemp produces 7 to 10 tonnes of dry matter in just 4 months.
- A resource produced in an ecological way. Unlike other plants used to make biofuels, hemp does not disturb the ecological balance. It is a non-GMO crop that does not require herbicides or pesticides, little fertilizer and often no irrigation.
- A local resource. Growing in a wide variety of climates (from the tropics to the northern climate of Canada and Russia), hemp could in the future be transformed locally into fuel to limit transportation.
- A beneficial crop for the soil. Far from damaging the environment, hemp improves soils through its root system, natural resistance to weeds and nutrients given back to the soil by leaves and stems left in the fields. This makes it a good crop to add to rotations, without compromising food production.
- An agricultural outlet and a new agro-industrial activity favorable to the local economy.
- Does not contribute to the greenhouse effect. Hydrocarbons are made of fossilized plants. When burned, they release the carbon dioxide accumulated by those plants hundreds of millions of years ago. Their massive use over the past century has dramatically increased the amount of carbon dioxide in the atmosphere, driving the well-known greenhouse effect. The biofuels produced today also release carbon dioxide, but only in amounts equivalent to what the plants stored during their growth: the cycle is therefore balanced.
- Agrofuels are an additional source of fuel, conducive to energy independence and, eventually, great substitutes for oil, which is becoming scarce.
We have met middle C in both the treble clef
and bass clef
and found out that the little line that goes through the middle of the note is called a ledger line, and that it makes extra room on the stave for us to use.
We can add more ledger lines to make more space on the stave. We can add ledger lines to the top of the stave, and to the bottom.
The first note we use a ledger line on is the A.
Let’s now add them to the top:
and to the bottom:
In Grade Two Music Theory (ABRSM and Trinity), you will need to be able to read notes written with up to 2 ledger lines.
Hover your mouse at the stave to reveal the answers. (Tap on mobile devices)
1. What are the names of these notes (treble clef)?
2. What are the names of these notes (bass clef)?
Highest and Lowest
3. Name the highest and the lowest note in each of these melodies.
"The history of Buddhism spans from the 6th century BCE to the present. Buddhism arose in the eastern part of Ancient India, in and around the ancient Kingdom of Magadha (now in Bihar, India), and is based on the teachings of Siddhārtha Gautama. The religion evolved as it spread from the northeastern region of the Indian subcontinent through Central, East, and Southeast Asia. At one time or another, it influenced most of the Asian continent. The history of Buddhism is also characterized by the development of numerous movements, schisms, and schools, among them the Theravāda, Mahāyāna and Vajrayāna traditions, with contrasting periods of expansion and retreat."
The Rise and Decline of Buddhism in India (file size: about 20 MB)
Salad of stories
One of the keys to understanding how the transmedia phenomenon has evolved is the flexibility of telling stories using the old collage technique, now known as the mashup: a piece composed of two or more fragments of earlier pieces that combine to produce a new piece and, of course, a new message. This phenomenon has many variants, in both fiction and reality, and is largely related to blurring the boundary, at times, between what is considered true and false, or at least between versions of differing reliability. For this reason, this activity can be developed around real content, news or historical events, as well as around fiction or imaginary events. The aim is to collect several short fragments of national and world history and connect them to produce analyses of historical processes and connections.
This activity adapts a classic strategy proposed by Gianni Rodari in Grammar of Fantasy.
NARRATIVE AND AESTHETIC
Recognize and describe
Evaluate and reflect
IDEOLOGY AND ETHICS
Recognize and describe
Evaluate and reflect
- 20 x 15 cm cardboard cards of different colours, five baskets or bag to mix the fragments
- Have any of you been wrong in locating a historical event in time?
- How does someone know at what time an event occurred and see if it is true or not? Does it only depend on memory, books or experts?
In the previous session, each student is asked to choose a historical event from those seen in class; this event must be told on five different cards. One card talks about the particular event without giving details of the people or the names of the places ("when this war began people from two different countries came, but then they joined with other nations, it was the first time that ... and at this moment he got up ..."). Another shows what that historical time was like ("at this time people worked in ... to have fun ... when they got married ... and their families were ... related to each other ... and when they died ..."). Two more describe two characters or groups that are part of this historical moment or process ("this group was very brave, they had confronted all the communities around them ..." or "this character lived a short life ... he died very young because ..."). Finally, one shows the effects of the event ("because of this event ... and something very important was that from that time ...").
Each student brings their cards, on coloured cardboard or different paper, all the same size. These are put into bags labelled: who, when, what, and then. Once all the stories are together, the students, working in pairs, take two cards from who, one from what, one from when and one from then, and use them to build a complete story. The task is to make it as credible as possible and to decide beforehand whether it is true or false. Once the story is designed, each group prepares its presentation without using proper names or specific dates or places.
Each group will have the opportunity to change up to two cards on a single occasion within the development of the activity if they find that one or the other does not fit or if they would like to find something that meets the conditions for their story better.
To develop the challenge, two groups face off in a direct-elimination round. Each group gives the activity coordinator a paper stating whether its story is true or false, along with the time, subjects, place and event it describes. The first group makes its presentation and the second responds; without a verdict being given, the groups then swap roles. After the presentations, the coordinator evaluates the responses. A point is awarded for correctly declaring one's own story true or false, another for locating it correctly, another for correctly guessing whether the opposing group's story was true or false, and another for identifying its event, context, epoch and actors.
The coordinator does not clarify the validity of the answers, but only assigns the points. Thus the group can use the same presentation in the successive rounds.
The group that has achieved the highest number of points goes to the next round, the losing group is added to the winning group and everyone can improve their version of the story for the next phase.
In this way in the next round the groups are of four people, then eight and finally by two large groups.
At the end of the session the group works to sort the data they have shared among everyone. Faults are considered and the way in which it was possible to find whether the story was true or false is debated.
Rodari, Gianni. 2002. Gramática de la Fantasía. Barcelona: Del Bronce.
G. Eduardo Gutiérrez. Pontificia Universidad Javeriana (Colombia), [email protected]
Though scholars know the Anunnaki as the gods of ancient Mesopotamia, fringe theorists believe they are ancient alien invaders from the planet Nibiru.
Before the Greeks exalted Zeus or the Egyptians praised Osiris, the Sumerians worshipped the Anunnaki.
These ancient gods of Mesopotamia had wings, wore horned caps, and possessed the ability to control all of humanity. Sumerians revered the Anunnaki as heavenly beings who shaped the destiny of their society.
A carving depicting the Anunnaki, the ancient Sumerian gods that some believe were aliens. (Wikimedia Commons)
But were they more than deities? Some theorists claim that the Anunnaki were aliens from another planet. Even more shocking, they use ancient Sumerian texts to back up this wild idea. Here’s what we know.
Why The Sumerians Worshipped The Anunnaki
The Sumerians lived in Mesopotamia — present-day Iraq and Iran — between the Tigris and Euphrates rivers from about 4500 to 1750 B.C.
Despite being an ancient civilization, their reign was marked by a number of impressive technological advancements. For example, the Sumerians invented the plow, which played a huge role in helping their empire grow.
Sumerian statues depicting male and female worshippers, circa 2800-2400 B.C. (Wikimedia Commons)
They also developed cuneiform, one of the earliest known systems of writing in human history. In addition, they came up with a method of keeping time — which modern people still use to this day.
But according to the Sumerians, they didn’t do it alone; they owed their historic breakthroughs to a group of gods called the Anunnaki. In their telling, the Anunnaki mostly descended from An, a supreme deity who could control both the fate of human kings and his fellow gods.
Though much remains unknown about the Sumerians and their way of life, they left evidence of their beliefs in ancient texts, including the Epic of Gilgamesh, one of the oldest written stories in human history.
And if one thing is clear, it’s that the Anunnaki gods were highly revered. To worship these deities, ancient Sumerians would create statues of them, dress them in clothing, give them food, and transport them to ceremonies.
Millennia later, some scholars would speculate on what made these Anunnaki so special — and why they were held in such high regard. But it wasn’t until the 20th century that the “ancient alien” theory really took off.
Why Some Think The Anunnaki Were Actually Ancient Aliens
A Sumerian cylinder seal, which some theorists believe is evidence of ancient aliens visiting the Earth. (Wikimedia Commons)
Much of what we know about the Sumerian civilization comes from clues that they left behind in thousands of clay tablets. To this day, these tablets are still being researched. But one author claimed that some of the texts hold an incredible revelation — the Anunnaki were actually aliens.
In 1976, a scholar named Zecharia Sitchin wrote a book called The 12th Planet, which shared translations of 14 tablets related to Enki, a child of the Sumerian supreme deity An. His book claimed that the Sumerians believed that the Anunnaki came from a far-off planet called Nibiru.
According to Sitchin, Nibiru has an elongated orbit of 3,600 years. At one point, this planet passed close by Earth. And its people, the Anunnaki, decided to make contact with our world around 500,000 years ago.
But the Anunnaki sought more than just a friendly exchange. They wanted gold, which they desperately needed to repair their planet’s atmosphere. Since the Anunnaki weren’t able to mine gold themselves, they decided to genetically engineer primitive humans to mine gold for them.
And by the time the Sumerians emerged as a civilization, the Anunnaki had given people the ability to write, solve math problems, and plan cities — which led to the future development of life as we know it.
Wikimedia Commons: A depiction of the ancient Sumerian god Enki, pictured in the middle.
This may seem like a truly out-of-this-world claim. But Sitchin — who spent decades studying ancient Hebrew, Akkadian, and Sumerian until his death at age 90 in 2010 — once said that skeptics didn’t have to take his word for it.
“This is in the texts; I’m not making it up,” Sitchin told The New York Times. “[The aliens] wanted to create primitive workers from the homo erectus and give him the genes to allow him to think and use tools.”
As it turned out, The 12th Planet — and Sitchin’s other books on this topic — sold millions of copies around the world. At one point, Sitchin even joined forces with Swiss author Erich von Däniken and Russian author Immanuel Velikovsky as a triumvirate of pseudo-historians who believed that the ancient Sumerian texts were not just mythological stories.
Instead, they believed that the texts were more like scientific journals of their time. And if these theorists were hypothetically correct on all counts, this would mean that the Anunnaki were not deities invented by people to explain life — but actual aliens who had landed on Earth to create life.
Humans, in their telling, were made to serve alien masters who needed the Earth’s gold to sustain their civilization. And as chilling as that sounds, millions are apparently willing to entertain this theory — at least for fun.
Controversy Over The “Ancient Aliens” Theory
Wikimedia Commons: Ancient figurines that depict Anunnaki figures wearing traditional headpieces.
Most mainstream academics and historians reject the ideas put forth by Sitchin and his colleagues. They often say that these theorists have either mistranslated or misunderstood the ancient Sumerian texts.
One Smithsonian writer outright panned the History Channel show that explores some of these theories, writing: “Ancient Aliens is some of the most noxious sludge in television’s bottomless chum bucket.”
Though some skeptics admit that ancient Sumerian texts may include some unusual-sounding beliefs, they think that's mostly because the Sumerians lived in a time before people had a sophisticated understanding of things like floods, astronomy, animals, and other parts of life.
Meanwhile, authors like Sitchin took the Sumerians’ texts literally — and were confident in the translations that they made despite the backlash.
British Museum: Clay tablets inscribed with cuneiform.
However, one thing cannot be denied — the people of Sumer were advanced for their time. A clay tablet translated in 2015 shows that ancient astronomers made extremely accurate mathematical calculations for the orbit of Jupiter — a full 1,400 years before Europeans did.
And the Babylonians — who succeeded the Sumerians — may have also created trigonometry 1,000 years before the ancient Greeks.
Although the Sumerian civilization collapsed thousands of years ago, they arguably laid the seeds for humanity to grow and flourish. But did they have help from an otherworldly civilization? Could the ancient Sumerians have had alien visitors who taught them advanced math and science?
Ancient alien theorists would argue yes. They would point to translations like Sitchin’s, the advanced abilities of the people of Sumer, and the fact that some ancient Sumerian texts appear to reference “flying machines” (although this could be a mistranslation).
For now, there is no confirmed evidence that Sitchin’s theories are true. However, no one knows for sure whether or not some of his ideas might’ve been correct. At this point, scholars still have much to learn about the Sumerians. Many of their ancient clay texts are still being translated — and other texts have not even been excavated from the ground yet.
Perhaps most challenging, we also have to recognize that humans today can’t even agree on whether or not aliens exist in our own time. So it’s doubtful that we’ll be able to agree on the existence of ancient aliens anytime soon. Only time will tell if we’ll ever know the real answer. |
- Compare and contrast the baby objects of today with those in Colonial America
The night prior to this lesson, students will use newspapers and magazines to cut out pictures of objects that represent babyhood, such as cribs, walkers, and clothing. In class we will look at pictures of artifacts from colonial America and make comparisons. We will also see how the objects have evolved and/or remained the same. Students will then choose one item from colonial America and describe how it was used, how it is used today, and how it looks today.
Shift Up: To change to a higher gear.
Example usage: I need to shift up to get up this hill.
Most used in: Urban and suburban areas with hills or inclines.
Most used by: Commuting cyclists.
Comedy Value: 4/10.
Understanding the Cycling Term 'Shift Up'
If you're new to cycling, you may have heard the term 'shift up' and wondered what it means. In cycling, shifting up means changing to a higher gear ratio, which requires more effort from the cyclist to move the bike forward.
When you shift up, you move the chain to a smaller cog in the rear cassette (or to a larger chainring at the front). A smaller rear cog has fewer teeth, so the wheel turns further for each pedal revolution. This is useful when you're riding on flat terrain or picking up speed, as it lets you go faster without having to spin the pedals excessively.

Shifting up is also important when you're descending a hill. By shifting up, you can maintain a higher speed without having to pedal impossibly fast. This is particularly useful if you're cycling with a group and need to keep up with the group's pace.

It's important to note that shifting up too much can cause problems. If the gear is too high, each pedal stroke becomes a strain, making it hard to accelerate and tiring you quickly. It's equally important to shift down when you slow down or stop, so that you can get moving again easily and keep control of the bike.
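The relationship between cog size and distance travelled can be sketched with a little arithmetic. The tooth counts and wheel circumference below are illustrative values, not figures from this glossary:

```python
# Sketch: how gear choice affects distance covered per pedal revolution.
# Tooth counts and wheel circumference are example values.

WHEEL_CIRCUMFERENCE_M = 2.1  # roughly a 700c road wheel, in metres

def metres_per_pedal_rev(chainring_teeth, rear_cog_teeth):
    """Gear ratio (chainring / cog) times the wheel's roll-out."""
    return (chainring_teeth / rear_cog_teeth) * WHEEL_CIRCUMFERENCE_M

# Shifting up = smaller rear cog = a harder gear that covers more ground.
easy_gear = metres_per_pedal_rev(50, 28)   # large cog: easy gear
hard_gear = metres_per_pedal_rev(50, 11)   # small cog: hard gear

print(f"50x28: {easy_gear:.2f} m per revolution")   # 3.75 m
print(f"50x11: {hard_gear:.2f} m per revolution")   # 9.55 m
```

Shifting up trades pedal force for distance: the same cadence moves the bike further, which is why higher gears suit flats and descents.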
Shifting up is an important skill for cyclists of all levels. With practice and experience, you'll be able to quickly and confidently shift up and down to adjust your speed and control your bike.
The History of the Cycling Term 'Shift Up'
The phrase 'Shift Up' has been used by cyclists for over a century, with the earliest known use of the term appearing in the New York Times in 1904. This reference, which described a cyclist shifting up to 'second speed,' suggests the phrase was already in common use.
The term 'Shift Up' is believed to have originated in the United Kingdom, where it was commonplace for cyclists to refer to the process of changing gears as 'shifting up.' This usage of the phrase was popularized in the early 1900s by cycling magazines and books, and by the 1920s, it had become a well-known term among cyclists.
Today, the term 'Shift Up' is still widely used by cyclists around the world, and it has become a part of the cycling lexicon. By shifting up, cyclists can increase their speed and make their rides more efficient. It is an essential part of the cycling experience, and it has been a part of the sport since its earliest days. |
Last week’s focus was on using ten frames to help with students’ number sense and conceptual development of number bonds for amounts 1-10. This post will feature ways to use ten frames to enhance students’ understanding of addition and subtraction. Look for freebies and a video!
There are many addition and subtraction strategies to help students memorize the basic facts such as these below. The ten frame is a very good tool for students of all grade levels to make these strategies more concrete and visual. I will focus on some of these today.
add or take away 1 (or 2)
doubles, near doubles
facts of 10
make a ten
add or sub. 10
add or sub. 9
add or sub. tens and ones
Doubles and near doubles (doubles +1, -1, +2, or -2): If the doubles are memorized, then problems near doubles can be solved strategically.
Show a doubles fact on a single ten frame (for up to 5 + 5). Use a double ten-frame template for 6 + 6 and beyond.
With the same doubles fact showing, show a near doubles problem. This should help students see that the answer is just one or two more or less.
Repeat with other examples.
Help students identify what a doubles + 1 more (or less) problem looks like. They often have a misconception that there should be a 1 in the problem. Make sure they can explain where the “1” does come from. Examples: 7 + 8, 10 + 11, 24 + 25, 15 + 16, etc.
For subtraction, start with the doubles problem showing and turn over the 2-color counters or remove them.
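The near-doubles strategy above can be expressed as a tiny sketch (my own illustration, not from the post):

```python
# Near-doubles: solve a + b by anchoring on the memorized double of the
# smaller addend, then counting on the difference.

def near_doubles(a, b):
    small, large = min(a, b), max(a, b)
    double = small + small      # the memorized doubles fact
    extra = large - small       # the "1 more" or "2 more"
    return double + extra

assert near_doubles(7, 8) == 15    # 7 + 7 = 14, one more
assert near_doubles(24, 25) == 49  # 24 + 24 = 48, one more
assert near_doubles(6, 8) == 14    # 6 + 6 = 12, two more
```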
Facts of 10: These are important to grasp for higher level addition / subtraction problems as well as rounding concepts.
The focus in this post will be an introduction to ten frames and ways they can help your students gain number sense. Then stay tuned because ten frames can also be a great tool for addition, subtraction, multiplication, and division.
Subitizing: This is the ability to recognize an amount without physically counting. Looking at the picture of red counters: If the top row is full, does the student automatically know there are 5? Doing a Number Talk is a great way to practice subitizing using a ten frame:
Use your own or pre-made dot cards. Flash the card for 1-2 seconds. Observe students. Are any of them trying to point and count? Or do they seem to know right away? Here’s a great video I recommend: KG Number Talk with ten frames
Tell students to put their thumb in front of their chest (quietly) to signal they know how many there are.
Ask a few students to name the amount.
Then ask this very important question, “How did you know?”
For the top picture you might hope a child says, “I knew there were 5 because when the top row is full, there are 5.”
For the bottom picture, you might hope for these types of responses: “I saw 4 (making a square) and 1 more.” or “I saw 3 and 2 more.” or “I pictured the 2 at the bottom moving up to the top row and filling it up, which is 5.”
The idea is to keep building on this.
What if I showed 4 in the top row? Can the student rationalize that it was almost 5? Do they see 2 and 2?
What if I showed 5 in the top row and 1 in the bottom row? Can the student think “5 and 1 more is 6?”
Here are some resources you might like to help with subitizing using ten frames.
Number Bonds: Using ten frames to illustrate number bonds assists students with composing and decomposing numbers. Students then see that a number can be more than a counted amount or a digit on a jersey or phone number. Here is an example of number bonds for 6:
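For reference, the complete set of number bonds for any total can be enumerated; a quick sketch:

```python
# Enumerate the number bonds (part-part-whole pairs) for a total,
# as in the ten-frame example for 6 above.

def number_bonds(total):
    return [(part, total - part) for part in range(total + 1)]

print(number_bonds(6))
# [(0, 6), (1, 5), (2, 4), (3, 3), (4, 2), (5, 1), (6, 0)]
```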
Last time I focused on some basics about learning the number bonds (combinations) of 10 as well as adding 10 to any number. Today I want to show the benefits of making a 10 when adding numbers with sums greater than 10 (such as 8 + 5). Then I’ll show how to help students add up to apply that to addition and subtraction of larger numbers. I’ll model this using concrete and pictorial representations (which are both important before starting abstract forms).
Using a 10 Frame:
A ten frame is an excellent manipulative for students to experience ways to “Make a 10.” I am attaching a couple of videos I like to illustrate the point.
Model this process with your students using 2 ten frames.
Put 8 counters on one ten frame. (I love using 2-color counters.)
Put 5 counters (in another color) on the second ten frame.
Determine how many counters to move from one ten frame to the other to “make a 10.” In this example, I moved 2 to join the 8 to make a 10. That left 3 on the second ten frame. 10 + 3 = 13 (and 8 + 5 = 13).
The example below shows the same problem, but this time move 5 from the first ten frame to the second ten frame to “make a 10.” That left 3 on the first ten frame. 3 + 10 = 13 (and 8 + 5 = 13).
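The two-frame manipulation above follows a fixed recipe, which can be sketched as:

```python
# "Make a 10": move just enough counters from the second ten frame to
# fill the first frame to 10, then add what is left over.

def make_a_ten(a, b):
    moved = 10 - a          # counters needed to fill the first frame
    leftover = b - moved    # counters remaining on the second frame
    return 10 + leftover

assert make_a_ten(8, 5) == 13   # move 2: 10 + 3
assert make_a_ten(9, 6) == 15   # move 1: 10 + 5
assert make_a_ten(7, 6) == 13   # move 3: 10 + 3
```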
I’m sure everyone would agree that learning the addition / subtraction facts associated with the number 10 is very important. Or maybe you are thinking, aren’t they all important? Why single out 10? My feeling is that of all the basic facts, being fluent with 10 and the combinations that make 10 enables the user to apply more mental math strategies, especially when adding and subtracting larger numbers. Here are a few of my favorite activities to promote ten-ness! Check out the card trick videos below – a great way to get kids’ attention, practice math, and give them something to practice at home.
Northern Eurasia, one of the so-called ‘hot-spot’ areas, exerts particularly important climatic controls because of its potential for large carbon storage or loss in a changing environment. What will happen with the carbon stored in forests and soils, as well as in wetlands and underlying permafrost in the boreal and arctic zone of Eurasia under warming conditions? Increased carbon storage due to a prolonged vegetation period will likely be counterbalanced by enhanced microbial activity that accelerates the release of carbon through respiration. Changes in precipitation amount and distribution patterns are likewise important – also because they impact frequency and distribution of forest fires and insect outbreaks. Direct anthropogenic impacts are at present still relatively small in the region, but industrial development and changes in land use and management may become increasingly important. Many questions regarding the above processes and their interactions remain unanswered.
ZOTTO the Zotino Tall Tower Observatory
In the framework of the ISTC partner project “Biogeochemical Responses to Rapid Climate Changes in Eurasia”, and as part of a global cooperative effort to answer the above questions, the Zotino Tall Tower Observatory (ZOTTO) was established in 2006. It is intended to serve the scientific community as one of the world’s major continental research platforms for at least 30 years, documenting and helping to quantify changes in biogeochemical cycles in this important region of the globe.
Every human being born into the world has a golden period of brain development, which falls in the first one to five years of life – the toddler years. During this time, parents are expected to provide appropriate education to shape a child’s personality from an early age. What parents instill during this golden period can affect a child’s brain development and personality in adulthood.
On this occasion we will discuss the periodic system of elements, namely the periodic table of complete chemical elements by reading the information from the periodic table itself and you will get some images of free printable periodic table of elements for kids.
Look at the periodic table starting from the top left and ending at the last line, near the bottom right. Tables are arranged from left to right based on increasing atomic numbers. The atomic number is the number of protons in an atom.
Not all rows or columns are filled. Even though there are empty spaces in the middle, the table is still read from left to right. For example, hydrogen has atomic number 1 and is located in the upper left. Helium has atomic number 2 and is located in the upper right. Elements 57 to 71 (the lanthanides) and 89 to 103 (the actinides) are usually drawn in two separate rows below the main body of the table; the lanthanides are often called the “rare earths.”
Look for an element’s “class” (group) in each table column. There are 18 columns, and each group is read from top to bottom (“reading class down”). The group numbers are usually written above the columns; however, some numbering can appear below other categories, such as the metals.
The numbering used on periodic tables varies from one version to another. It can be Roman numerals (IA), Arabic numerals (1A), or simply the numbers 1 to 18.
Hydrogen can be placed in the alkali metal family or the halogen family, and some tables show it in both positions.
Look for an element’s “period” in each table row. There are 7 periods, each read from left to right (“reading a period across”).

Periods are usually numbered 1 to 7 on the left side of the table. Each period is longer than the previous one, which reflects the increasing electron energy levels of atoms further down the periodic table.
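Since each period ends at a fixed atomic number (2, 10, 18, 36, 54, 86, 118), an element's period can be found directly from its atomic number; a sketch:

```python
# Sketch: find an element's period from its atomic number, using the
# last atomic number in each row of the standard 18-column table.

PERIOD_ENDS = [2, 10, 18, 36, 54, 86, 118]  # He, Ne, Ar, Kr, Xe, Rn, Og

def period_of(atomic_number):
    for period, last in enumerate(PERIOD_ENDS, start=1):
        if atomic_number <= last:
            return period
    raise ValueError("atomic number out of range")

assert period_of(1) == 1    # hydrogen
assert period_of(11) == 3   # sodium
assert period_of(79) == 6   # gold
```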
Understand additional classification based on metals, semi-metals and non-metals. The element color will be very different.
Metals share a single color on the table. However, hydrogen is usually given the same color as the nonmetals and grouped with them. Metals are shiny, usually solid at room temperature, conduct heat and electricity, and are malleable and ductile.
Nonmetals share another single color. They include H-1 (hydrogen) and elements from C-6 (carbon) to Rn-86 (radon) toward the upper right of the table. Nonmetals are not shiny, conduct heat and electricity poorly, and cannot be forged into shape. They are usually gases at room temperature, though they can be solids, gases, or liquids.
Semi-metals (metalloids) are usually purple or green, a combination of the other two colors. These elements form a diagonal line extending from B-5 to At-85. Semi-metals have some metallic properties and some nonmetallic properties. Note that elements are also sometimes grouped by family: the alkali metals (1A), alkaline earth metals (2A), halogens (7A), noble gases (8A), and the carbon family (4A). The numbering can be Roman, Arabic, or standard.
Intensive farming practices, including overuse of antibiotics, high animal numbers, and low genetic diversity, increase the risk of animal pathogens transferring to humans. That warning comes from an international team of researchers led by the Universities of Bath and Sheffield in the UK, working with support from the United States Department of Agriculture (USDA).
The research, published in the journal Proceedings of the National Academy of Sciences, looks at the evolution of Campylobacter jejuni. According to a press release from the University of Bath, this bacterium is the leading cause of gastroenteritis in high-income countries.
Facts about Campylobacter, as outlined in the release:
- Carried in the feces of chickens, pigs, cattle and wild animals;
- Estimated to be present in the feces of 20% of cattle worldwide;
- Transferred to humans from eating contaminated meat and poultry;
- Causes bloody diarrhea in humans;
- Causes serious illness in patients with underlying health issues and can cause lasting damage, though not as dangerous as typhoid, cholera or E.coli;
- An estimated 1 in 7 people suffer from an infection at some point in their life;
- Causes three times more cases than E. coli, Salmonella and listeria combined;
- Very resistant to antibiotics due to the use of antibiotics in farming.
The research team studied the genetic evolution of Campylobacter jejuni. What they found: Cattle-specific strains of the bacterium emerged at the same time as a dramatic rise in cattle numbers in the 20th Century. The study authors suggest that changes in cattle diet, anatomy, and physiology triggered gene transfer between general and cattle-specific strains with significant gene gain and loss. As is explained in the release, this helped the bacterium cross the species barrier and infect humans. Add in the increased movement of animals globally, they say, and intensive farming practices have provided the “perfect environment” in which to spread globally through trade networks.
“There are an estimated 1.5 billion cattle on Earth, each producing around 30 kg of manure each day; if roughly 20% of these are carrying Campylobacter, that amounts to a huge potential public health risk,” explained Professor Sam Sheppard, Director of Bioinformatics from the Milner Centre for Evolution at the University of Bath, in the release. “Over the past few decades, there have been several viruses and pathogenic bacteria that have switched species from wild animals to humans: HIV started in monkeys; H5N1 came from birds; now COVID-19 is suspected to have come from bats.”
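Professor Sheppard's figures can be checked with simple arithmetic; a sketch using the numbers quoted above:

```python
# Back-of-envelope check of the figures quoted above: 1.5 billion cattle,
# about 30 kg of manure per head per day, roughly 20% carrying Campylobacter.

cattle = 1.5e9
manure_kg_per_head_per_day = 30
carrier_rate = 0.20

daily_manure_tonnes = cattle * manure_kg_per_head_per_day / 1000
carriers = cattle * carrier_rate

print(f"{daily_manure_tonnes:.1e} tonnes of manure per day")        # 4.5e+07
print(f"{carriers:.1e} cattle potentially carrying the bacterium")  # 3.0e+08
```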
Sheppard added that research suggests that environmental change and increased contact with farm animals are additional factors that have caused bacterial infections to cross over to humans. “I think this is a wake-up call to be more responsible about farming methods, so we can reduce the risk of outbreaks of problematic pathogens in the future.”
Related: Danone’s 2020 Grant Program Promotes Sustainable Food Systems
SHP Publishes Sustainability & Regenerative Practices Toolkit
Organic Farming is Worse for Climate Change? Not So, Says The Organic Center
In the study, the authors note: “Further understanding of the genetic and functional basis of host adaptation, particularly to livestock that constitute the majority of mammal biomass on Earth, is important for the development of novel strategies, interventions, and therapies to combat the increasing risk of pathogens with the capacity to spread from livestock to humans.” The hope going forward, the researchers say, is that this study can help scientists predict potential problems so those problems can be prevented before they result in another epidemic. |
About This Course
The whole of the French language can be broken down into several different structures. If you take any sentence from any French book or any utterance, you will see that it fits into one of these structures.
I remember one weekend, I was writing some lessons for the week ahead, when I suddenly realised this. I noticed that there are a certain number of structures in French, and that every sentence follows one of these structures. I spent the rest of the weekend working out all the structures, and I wrote them all down.
Every structure you learn gives you the ability to say a huge amount. Some structures are used more than others, but all the structures together make up the whole French language. Once you’ve learnt how a structure works, all you have to do is insert different words into the slots and you have a sentence.
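The slot idea can be illustrated with a toy sentence template. The template and vocabulary below are invented for this sketch; they are not the course's actual Structure 4:

```python
# Toy illustration of the "slots" idea described above. The template and
# vocabulary are invented for this sketch, not taken from the course.

template = "{opener} {verb} {noun}"

vocab = {
    "opener": ["je veux", "tu veux", "il veut"],   # I want / you want / he wants
    "verb": ["manger", "acheter"],                 # to eat / to buy
    "noun": ["quelque chose", "le fromage"],       # something / the cheese
}

sentences = [
    template.format(opener=o, verb=v, noun=n)
    for o in vocab["opener"]
    for v in vocab["verb"]
    for n in vocab["noun"]
]

print(len(sentences))   # 3 * 2 * 2 = 12 sentences from seven vocabulary items
print(sentences[0])     # je veux manger quelque chose
```

Each new word learned for a slot multiplies the number of sentences available, which is the force of the structure approach.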
This course introduces you to Structure 4. I’ve limited each course to one structure so as not to overburden you. By looking at just one structure at a time, you can really get to grips with it and understand its usage. It will help to clarify the French language and make it more like a reflex rather than something you have to think about as if it were a maths equation.
Each structure can also help to propel you to fluency; if you can manipulate the structures at high speed, you can start to say anything you want without having to think about how to say it.
This course contains plenty of practice opportunities for you to revise what you’ve learnt, and it also contains some hints and tips on how best to learn and memorise the structures and the vocabulary that goes with them. You’ll learn how to make questions out of Structure 4, how to make statements, and how to turn positive statements negative.
The Building Structures in French series is set out using the same learning techniques as the 3 Minute French courses. You can work through the course in three minute chunks, enabling anybody to learn French, no matter how little time you have.
- Ideally, you should be a little familiar already with the French language, but if you’re not, panic not! Everything in this course is fully explained, so you won’t get lost
- I recommend you start by enrolling on the “Building Structures in French – Structures 1, 2 and 3” courses
Who this course is for:
- You want to explore the French language a little more deeply
- You are interested in starting to learn about the next structure of the French language and how to manipulate it.
- You enjoy the 3 minute methodology used in other 3 Minute French courses
- You enjoyed learning about the first three structures and would like to learn about the next structure in French
Our Promise to You
By the end of this course, you will have learned French language structure.
30 Day Money Back Guarantee. If you are unsatisfied for any reason, simply contact us and we’ll give you a full refund. No questions asked.
Get started today and learn more about building structures in French.
Section 1 - Introduction And Chapter 1
Section 2 - Chapter 2
Section 3 - Chapter 3
Section 4 - Chapter 4
Section 5 - Chapter 5
Section 6 - Chapter 6
Section 7 - Chapter 7
Section 8 - Chapter 8
Section 9 - Chapter 9
Section 10 - Chapter 10
Section 11 - Chapter 11
The batfishes are a family of lophiiform fishes that includes 68 species distributed among 10 genera. These fishes are marine bottom-dwellers that feed on small invertebrates and fishes. The esca appears to exude a fluid that may function as a chemical lure; however, the chemical properties of this substance remain unknown. Although batfishes are taken regularly in commercial fishing operations, they are rarely eaten and do not support a fishery.
Small to medium-sized lophiiform fishes (to 25 cm), dorsoventrally compressed (except for species of Coelophrys). Head large, triangular or circular in outline, forming a disc. Esca within cavity just above mouth, in most species a smooth-skinned glandular structure that can be extended in front of the mouth a short distance; dorsal margin of cavity a protective rostrum formed of close-set tubercles. Eyes of moderate size, about 7 to 15% of standard length; skin surrounding the iris often covered with prickle-like scales, visible in dorsal view but directed anterodorsally. Mouth small, overhung by rostrum in some species; lips usually thickened; jaw teeth minute, arranged in rows on pads. Palatines and vomer with or without teeth. Gill openings small, round, located behind the pectoral fin attachments, directed dorsally. Branchiostegals 6. Dorsal fin small, only 4 to 7 short rays (sometimes absent), located on tail halfway between disc and caudal fin. Anal fin slender, lappet-like, only 3 or 4 rays. Pectoral fins attached to sides of disc, appearing leg-like. Pelvic fins attached to ventral surface of disc anterior to pectoral fins, with 1 spine and 5 soft rays. Lateral-line organs a series of free neuromasts appearing as fleshy knobs, most prominent on ventral margins of disc and lateral sides of tail. Scales highly modified to form conical tubercles, variable in size from minute prickles to large strong spiny structures. Short hair-like extension of skin (cirri) often present, especially around edges of disc and sides of tail.
Color variable, fresh specimens often pink to reddish; dark markings may be present on dorsal surface of disc in the form of reticula, rings, or blotches.
Evolution and Systematics
Discussion of Phylogenetic Relationships
The hypothesized relationships of the Ogcocephalidae as presented by Endo and Shinohara (1999). Numbers on the cladogram represent transformation series as "character number (primitive-derived)." Character reversals are indicated by an asterisk. The character states are listed below.
The interrelationships of ogcocephalid genera were analyzed by Endo and Shinohara (1999) based on characters discussed by Bradbury (1967). Endo and Shinohara (1999) reported a sister relationship between Coelophrys and Halieutopsis supported by the following synapomorphies: normal frontal bones, spine-like illicial bone, and pectoral fins not forming an elbow. They also reported that Coelophrys and Halieutopsis form a larger clade with Dibranchus, Halicmetus, Malthopsis, Zalieutes, and Ogcocephalus supported by the following synapomorphies: an interrupted lateral line on tail and holobranchs present on second and third gill arches, absent on fourth gill arch. The genus Solocisquama Bradbury (1999) was not included in the Endo and Shinohara (1999) analysis because it had not yet been described.
- Lateral line on ventral surface of tail absent (0), present and uninterrupted (1), present and interrupted (2);
- large conical scales (“bucklers”) absent (0), present (1);
- papillary operculum of iris absent (0), present (1);
- teeth present on jaws and palatines present (0), absent on palatines (1), absent or reduced on jaws, palatine, and tongue (2);
- frontal bones normal (0); forming a groove (1), forming a tube (2);
- holobranchs present on second and third gill arches, hemibranchs present on fourth gill arch (0); holobranchs present on second and third gill arches; fourth gill arch without gill filaments (1);
- esca ovular (0), triangular (1), trilobed (2);
- shape of illicial bone unmodified (0), spine-like (1);
- pectoral fins without elbow (0), with elbow (1).
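The "character number (primitive-derived)" notation above can be made concrete with a small lookup of state descriptions. Only a few of the listed characters are encoded here, and no per-genus scoring is assumed since the text does not give one:

```python
# Sketch: a few of the ogcocephalid characters above encoded as a lookup
# from (character number, state) to its description. Only definitions
# given in the text are used; no per-genus scoring is assumed.

characters = {
    1: {0: "lateral line on ventral surface of tail absent",
        1: "lateral line present and uninterrupted",
        2: "lateral line present and interrupted"},
    5: {0: "frontal bones normal",
        1: "frontal bones forming a groove",
        2: "frontal bones forming a tube"},
    8: {0: "illicial bone unmodified",
        1: "illicial bone spine-like"},
    9: {0: "pectoral fins without elbow",
        1: "pectoral fins with elbow"},
}

def describe(char_no, state):
    """Render a state in the cladogram's 'character (primitive-derived)' style."""
    return f"character {char_no}({state}): {characters[char_no][state]}"

# The spine-like illicium is one synapomorphy cited for Coelophrys + Halieutopsis.
print(describe(8, 1))
```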
Molecular Biology and Genetics
Statistics of barcoding coverage
Specimens with Sequences: 156
Specimens with Barcodes: 150
Species with Barcodes: 28
The Ogcocephalidae are a family of bottom-dwelling, specially adapted fish. They are sometimes referred to as seabats, batfishes, or anglerfishes. They are found in tropical and subtropical oceans worldwide. They are mostly found at depths between 200 and 3,000 m (660 and 9,840 ft), but have been recorded as deep as 4,000 m (13,000 ft). A few species live in much shallower coastal waters and exceptionally may enter river estuaries.
They are dorsoventrally compressed fishes similar in appearance to rays, with a large circular, triangular, or box-shaped (in Coelophrys) head and a small tail. The largest members of the family are about 50 cm (20 in) in standard length. The illicium (a modified dorsal fin ray on the front of the head supporting the esca, a bulbous lure) can be retracted into an illicial cavity above the mouth. The esca is not luminous as in most other groups of anglerfishes, but secretes a fluid thought to act as a chemical lure, attracting prey. Analysis of their stomach contents indicates that batfishes feed on fish, crustaceans, and polychaete worms.
- Froese, Rainer, and Daniel Pauly, eds. (2009). "Ogcocephalidae" in FishBase. January 2009 version.
- Bertelsen, F. & Pietsch, T.W. (1998). Paxton, J.R. & Eschmeyer, W.N., ed. Encyclopedia of Fishes. San Diego: Academic Press. pp. 139–140. ISBN 0-12-547665-5.
- Theodore W. Pietsch (2005). "Ogcocephalidae". Tree of Life web project. Retrieved 4 April 2006.
Before the early 1800s, the United States of America was only about half the size that it is today. It was in 1803 when the then President Thomas Jefferson began to expand westward after the Louisiana Purchase. One of the biggest migrations towards the western part of the United States took place on what is now known as the Oregon Trail.
1-5 Oregon Trail Facts
1. The Oregon Trail was more than one path. While there was the main trail that pioneers followed to Oregon, eventually many other branches of the trail were developed as pioneers headed to different parts of Oregon Territory and California.
2. As traffic along the Oregon Trail increased, the paths became a dumping ground for garbage and supplies that the pioneers could not continue to take with them.
3. While the pioneers feared attacks from the Native American as they started the journey, they soon learned that Native Americans were more helpful than anything else and that attacks were very rare.
4. While Oregon Territory was the first big draw for the pioneers and the trail is named the Oregon Trail, most pioneers settled in places outside of Oregon. Pioneers went to Washington, California, and Wyoming as well as Oregon.
5. If you travel in around where the Oregon Trail and its branches were, you can still see wheel ruts today from the thousands of wagons that traveled along the Oregon Trail.
6-10 Oregon Trail Facts
6. The total length of the Oregon Trail was over 2,000 miles. The trip would take anywhere from 4 to 7 months for a pioneer family to complete and preparing for the trip took over a year.
7. The year 1843 has become known as the Great Migration of 1843. That year about 1,000 people and 120 wagons joined a wagon train and headed to Oregon.
9. Unfortunately, not everyone would survive the Oregon Trail. Many people died of disease, accidents, or drownings and were buried alongside the trail. By 1860, there were graves along every mile of the Oregon Trail.
10. Between 1830 and 1860 around 300,000 pioneers traveled the Oregon Trail and its branches, with the biggest waves happening in the 1840s and 1850s.
11-15 Oregon Trail Facts
11. The Oregon Trail was not planned out; it developed gradually as the fur traders, and then the pioneers, headed toward their destinations.
12. The Oregon Trail is also known as the Oregon-California Trail, since emigrants bound for California followed the same route for much of its length. Pioneers started heading to California more than to Oregon and other areas along the trail once the California Gold Rush began in 1848.
13. Usually, the Oregon Trail began at Independence, Missouri.
14. In 1850, the government passed the Oregon Donation Land Act which pushed for pioneers to settle in Oregon. A single person could claim about 320 acres of land and a married couple could claim about 640 acres of land.
15. Most of the wagon trains would leave in April or May because they wanted to reach their destination before the cold winter weather arrived.
It consists of fibrous rings (dense regular connective tissue, with a lot of collagen and few elastic fibers) that surround the valve orifices (the atrioventricular orifices and the orifices that host the semilunar valves) and support the heart valves; it also has connective formations to which the myocardial bundles attach. The space between these four rings is filled with fibrous connective tissue that forms:
- the right fibrous trigone, placed between the aortic orifice (aortic valve) and the two atrioventricular orifices (tricuspid and bicuspid valves);
- the left fibrous trigone, placed between the left atrioventricular orifice and the aortic orifice.
The fibrous ring of the pulmonary orifice does not take part in the fibrous skeleton, but from it originates the tendon that attaches to the right flap of the aortic valve.
Function of the fibrous skeleton
The fibrous skeleton is a fundamental element, as it underlies the shape of the heart: the muscular bundles that form the walls of the atria and ventricles attach to it. It also underlies the structure of the heart valves, which attach to it, among other things, via connective laminae. Another very important function of the fibrous skeleton of the heart is the electrical isolation of the atria from the ventricles; because the fibrous tissue does not conduct the electrical impulse, excitation can pass from the atria to the ventricles only through the heart's conduction system.
What is Cation Exchange Capacity?
Fertilizing the soil is similar in many ways to feeding ourselves. All life needs carbohydrates, minerals, and other nutrients in a healthy balance for optimal health. And just like eating too much fast food and not enough vegetables can lead to a variety of health problems in our own bodies, over-fertilizing or feeding an imbalanced diet can harm plants as well. While plants need carbohydrates, proteins and fats just as we do, plants are quite different in how they ingest their food. When a fertilizer is applied, your plants will not use it in its whole form the way we eat food.
Plants absorb almost all their nutrients through specialized cells on their roots, and sometimes also through their leaves. These cells can only absorb nutrients that are in the form of ions dissolved in the soil’s water. When you apply fish meal, for example, the meal must be broken down and dissolved into the water in the soil into simple ions such as NO3-, K+, H2PO4-, and H+ that the plants can absorb. Fast acting fertilizers such as Liquid Grow will reach this ionic state quickest, whereas slow release fertilizers such as Nutri-Rich will break down and release these ions slowly over time. The plants will then use these ions to build proteins, starches and fats, as well as enzymes, hormones, and other compounds they need to grow and thrive. Good soil has the ability to hold these nutritive ions with minimal leaching, giving the plants more time to absorb them and thus requiring less fertilizer be used.
The measurement for this ability is called Cation Exchange Capacity, or CEC. The CEC measures the soil’s negative charge; a stronger negative charge will hold more positively charged ions (these are called cations). The CEC is measured on a scale of 0 (low) to 50 (high), with values under 10 having poor cation holding ability. CEC is higher in clay soil, because clay particles are negatively charged. Sand, on the other hand, has little to no charge and thus has a low CEC. CEC is also higher in soils with good levels of organic matter. Beneficial microorganisms such as mycorrhizal fungi can help to improve the CEC, as well as working to break down fertilizers into their ionic components. Soil with a higher CEC is buffered against changes in soil nutrients; while this is a good thing if your soil has a good level of soil nutrients and pH, it also means that it will be more difficult to change your soil nutrient levels if there is some aspect that needs correcting. For example, if you have overly acidic soil, the amount of free H+ ions is too high. If the CEC is high, the soil will strongly hold onto those H+ ions and resist efforts to counter them. More lime will be needed in such soils, compared to soils with a low CEC that readily release the H+ ions and change the pH. Knowing your soil’s CEC level can help you make better fertilization decisions, and will give you a deeper understanding of your soil. Find out your soil’s CEC with a Peaceful Valley Complete Soil Analysis.
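The buffering behavior described above can be sketched in a few lines of code. This is a hypothetical illustration, not an agronomic tool: the 0-50 scale and the "under 10 is poor" threshold come from the text, while the cutoff between moderate and high (25 here) and the wording of the notes are illustrative assumptions.

```python
def describe_cec(cec: float) -> str:
    """Classify a soil's cation exchange capacity on the 0-50 scale.

    Thresholds below 10 follow the article; the 25 cutoff is an
    assumed, illustrative boundary between moderate and high.
    """
    if not 0 <= cec <= 50:
        raise ValueError("CEC is expected on a 0-50 scale")
    if cec < 10:
        category = "low"
        note = ("holds cations poorly; nutrients leach quickly, but pH "
                "is easier to shift with smaller lime applications")
    elif cec < 25:
        category = "moderate"
        note = "reasonable nutrient retention and moderate pH buffering"
    else:
        category = "high"
        note = ("strong nutrient retention; pH is well buffered, so "
                "correcting acidity takes more lime")
    return f"CEC {cec}: {category} - {note}"

print(describe_cec(6))   # a sandy soil
print(describe_cec(32))  # a clay or high-organic-matter soil
```

A soil test report would supply the actual CEC number; the helper simply restates the trade-off from the text: low-CEC soils leach nutrients but change pH easily, while high-CEC soils hold nutrients well but resist amendment.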
Themes are the fundamental and often universal ideas explored in a literary work.
One of the most important themes in The Giver is the significance of memory to human life. Lowry was inspired to write The Giver after a visit to her aging father, who had lost most of his long-term memory. She realized that without memory, there is no pain—if you cannot remember physical pain, you might as well not have experienced it, and you cannot be plagued by regret or grief if you cannot remember the events that hurt you. At some point in the past the community in The Giver decided to eliminate all pain from their lives. To do so, they had to give up the memories of their society’s collective experiences. Not only did this allow them to forget all of the pain that had been suffered throughout human history, it also prevented members of the society from wanting to engage in activities and relationships that could result in conflict and suffering, and eliminated any nostalgia for the things the community gave up in order to live in total peace and harmony. According to the novel, however, memory is essential. The Committee of Elders does recognize the practical applications of memory—if you do not remember your errors, you may repeat them—so it designates a Receiver to remember history for the community. But as Jonas undergoes his training, he learns that just as there is no pain without memory, there is also no true happiness.
Related to the theme of memory is the idea that there can be no pleasure without pain and no pain without pleasure. No matter how delightful an experience is, you cannot value the pleasure it gives you unless you have some memory of a time when you have suffered. The members of Jonas’s community cannot appreciate the joys in their lives because they have never felt pain: their lives are totally monotonous, devoid of emotional variation. Similarly, they do not feel pain or grief because they do not appreciate the true wonder of life: death is not tragic to them because life is not precious. When Jonas receives memories from the Giver, the memories of pain open him to the idea of love and comfort as much as the memories of pleasure do.
At the Ceremony of Twelve, the community celebrates the differences between the twelve-year-old children for the first time in their lives. For many children, twelve is an age when they are struggling to carve out a distinct identity for themselves, differentiating themselves from their parents and peers. Among other things, The Giver is the story of Jonas’s development into an individual, maturing from a child dependent upon his community into a young man with unique abilities, dreams, and desires. The novel can even be seen as an allegory for this process of maturation: twelve-year-old Jonas rejects a society where everyone is the same to follow his own path. The novel encourages readers to celebrate differences instead of disparaging them or pretending they do not exist. People in Jonas’s society ignore his unusual eyes and strange abilities out of politeness, but those unusual qualities end up bringing lasting, positive change to the community.
As of September 2015, 87 percent of teachers surveyed by Harris Poll don’t use social media in the classroom. However, many educators find social media to be a helpful tool in learning:
“I have found the quietest students in my class speak the loudest on social media.” –Gail Leicht, 8th grade Language Arts/Literature teacher
“I apply social media in my classroom to help students view it as something that can–and will–influence their academic and professional life, hence the value of its responsible and ethical use.” – Michael-Ann, Senior School history teacher
However, for many educators, a lack of training and knowledge in how to use social media for learning causes them to refrain. Others are nervous about the privacy concerns and giving students access to sites that can be very non-kid-friendly.
Luckily, you don’t have to bring these sites directly into your classroom—at least not completely. If the Twitterverse is just too big and uncertain for you, use these tips to still reap the benefits.
Use it to Connect Outside the Classroom
One of the greatest benefits of social media in education is that it allows teachers to reach their students outside the four walls of the classroom. Rather than using social media during the school day, you can use it as a means to connect with students after the bell rings. Here are a few ways to make the most of this:
Private Facebook group: Create a private Facebook group for your classroom. Use this to share important resources, reminders and tips with students.
Class-wide Pinterest account: Create a Pinterest account for your classroom, creating boards for each specific lesson, unit, book, etc. Post resources to these boards, and send students there if they’re looking for a specific article or piece of information.
Instagram updates: One teacher, Nicole Long, uses Instagram to share student accolades. Being a photo-driven platform, she shares pictures of the projects they do, which is exciting for the student being recognized.
Use it for Specific Projects
Instead of bringing social media into the classroom on a regular basis, turn to it for smaller projects, perhaps once a month or once a quarter. There are many ways students can use these social sites as tools to learn more, organize their knowledge, and put classroom lessons into real-world context. Here are a few ideas to get you started:
Facebook historical profile: Assign each student a historical character to research. After compiling information, students then have to create a Facebook account for that person, using photos from online and the information they’ve gathered. They’ll love using Facebook in this context and get to see learning applied in a real-world environment.
Emoji poetry: Assign each student a poem and ask them to sum it up with just emojis—you can do this on Twitter or in a private Facebook group. They’ll have a blast getting creative while having to truly understand the poem to find the right emojis to represent it.
Pinterest project planning: Give each student a Pinterest account to use as an organizational tool. Students can create boards for specific topics or pieces of their project and easily add resources, images, videos and more. Because Pinterest is a very creative-focused social site, your students may even be inspired to create something they wouldn’t have otherwise thought of.
Hashtag discussion: Twitter is a great way to get students talking outside of class. Bring one homework assignment a week onto Twitter. Create a hashtag (e.g. #MrsSmithScience) and give students a topic to discuss. You can follow the conversation to see who’s participating and who’s not; you can also join the conversation, tagging students and answering questions.
Instagram photo gallery: Assign students a research project that culminates in a gallery of photos on Instagram. This can be art-specific, more abstract, or tackle topics like digital citizenship or politics. This type of assignment forces students to think about the subject in a different way—how do I depict this information in the form of photos?
Use Tools With Social Features
One of the benefits of social media in the classroom is both the excitement and collaboration it sparks among students. There are a variety of organizations that harness this, making these social features kid-friendly, and building them into their app or learning tool. Here are a few that will help you ease into social media in your classroom:
Bookopolis: This website allows students to connect with other students, share book reviews, and get reading recommendations from their peers. It’s a powerful way to help students discover the books they love.
Twiducate: This is a Twitter-like environment that encourages collaboration among students. Students don’t need an email address or their own Twitter handle—when you sign up, you get a code, and they log in with that.
Whooo’s Reading: This reading log tool encourages student collaboration with a Facebook-style newsfeed—in fact, many students call it “Facebook for reading.” Within their private (class-wide only) newsfeed, they can “like” what their peers are reading and comment on comprehension responses.
Social media can be a valuable tool in the learning environment, and you don’t need to be live-Tweeting your lesson to take advantage. Consider how you can reap the benefits of social media without developing every lesson or discussion around it.
Bio: Jessica Sanders is the Director of Social Outreach for Whooo’s Reading, a San Diego-based education organization that motivates students to read more every day. It’s available to teachers, schools and districts. Jessica grew up reading books like The Giver and Holes, and is passionate about making reading as exciting for young kids today as it has always been for her. Follow Learn2Earn on Twitter and Facebook, and check out their new ebook, How to Bring Technology Into the Classroom, just $2.99 on Amazon.com.
The Mississippi River has built a series of deltas over time, resulting in overlapping lobes of deltaic sediment. The current delta, the Balize, or "Bird Foot," is approximately 1,000 years old. This delta is southeast of New Orleans, but the river is slowly shifting its course to the Atchafalaya distributary. This long-term process of river mouth shifting has taken place repeatedly in the Holocene epoch.
Most of the greater Mississippi River delta consists of marshland and mud flats with numerous shallow lakes and intertwining channels. The marsh features aquatic plants and abundant waterfowl. The large amount of sediment from the river has influenced the growth of barrier islands and cheniers. Mangrove coasts and marshes are also common in this area.
Folks: the posting below discusses some important ways to help students improve their writing. It is by David Smit of Kansas State University and is from the POD-IDEA Center Notes on Instruction series. Children are developing a range of oracy skills in the languages they are learning, and the majority enjoy their experience of language learning and teaching. This reiterates what the QCA has stated to be the benefits of language learning in the primary school. How to improve your child's creative writing skills: develop your child's curiosity in order to develop his creative writing skills. She loves to read books, but is unable to organize her sentences and thoughts in a correct way.
European Journal of Social Sciences – Volume 13, Number 3 (2010), 478: a study on the cultural values, perceptual learning styles, and attitudes toward oracy skills of Malaysian tertiary students. National agencies, notably QCDA and the previous government's National Strategies, have attempted to encourage these developments through their own initiatives. Through the work of School 21's teachers, our own research and that of others, we know there are some very effective ways of teaching oracy; for example, teachers quite often comment that.
The need to develop oracy skills, both to improve communication and link ideas and to promote metacognition, was very apparent. However, the questions from pupils that followed the lab were generally answered to a much higher standard than they have been previously, without the additional preparation. 3. Articulation: correct pronunciation of words supports a listener's comprehension. 4. Attending skills: being able to recall what was heard and expect a response about it, as a way of developing their skills of taking a stance and justifying it. Following instructions: the Oracy Project (in Hubbard, 2005, p. 35). Ways of developing one's intellectual abilities: nowadays a lot of people are concerned with the topic of developing one's mental abilities. For the past few decades scientists have been developing new approaches in order to understand the human brain, its structure, and potential abilities, and so a lot of different techniques have emerged. Some scientists strongly believe it's possible to.
Only affect the oracy of EAL learners, but further develop literacy skills, and connect to and enhance performance in other academic subjects (Fox, 2000). The ways in which we are able to communicate are, most of the time, dominated by speech. Language, literacy and communication skills consist of the progressive development of children's skills in: speaking, listening, reading, writing, and communicating. Oracy (speaking and listening) is essential. The Voice 21 oracy improvement programme supports schools to develop pupils' use of speech to express their thoughts and communicate effectively. Participating schools were asked to devote one hour a week of lesson time to developing spoken language, and received materials and training in oracy. Strategic oral language instruction in ELD: teaching oracy to develop literacy, by Dr. Connie Williams, EdD, and Dorothy Roberts. Is oral language development "in" or "out"?
Oracy – developing speaking and listening through planned opportunities: drama, debate, role play, storytelling, news, interviews, talk partners, circle time, etc. Reading – structured schemes for individual and guided reading, real books, phonics and.
The old way of thinking of the developing world as this place where there's been no progress is not that helpful. That's not to say that income classification is a waste of time; it just. On the other hand, if the right activities are taught in the right way, speaking in class can be a lot of fun, raising general learner motivation and making the English language classroom a fun and dynamic place to be. In 2012, School 21 in Stratford, East London opened its doors. Recognising the value of developing students' speaking skills to support them in learning and in life, the school places talk at the heart of every lesson and nurtures a whole-school culture of oracy. Although the writing process is the approach taught and used in all Time4Writing courses, there are two distinct elementary writing courses that focus on helping students internalize the process so that it becomes their natural way of approaching writing assignments.