A precisely leveled resource that addresses the comprehension needs of individual students and/or small groups, aligned with the instructional philosophy of the Engage Literacy Program. By using the Comprehension Strategy Kit, students build confidence in their reading by gaining access to graduated, leveled content of increasing text complexity and experience in making meaning from text. The Engage Literacy Comprehension Strategy Kit includes explicit instruction for teaching and modeling 9 key comprehension strategies, along with engaging, leveled fiction and nonfiction student text cards, all richly illustrated, that model a range of text types and text forms. The accompanying teacher resource material includes Comprehension Strategy Cards for modeling and teaching the 9 key strategies, and a Teacher’s Resource book that helps teachers systematically develop students’ comprehension skills and strategies.
We’ve put up quite a few posts looking at VOICES IN TEXTS. Now we want to look at the other aspect of the VOICE course concept as identified in the SCSA glossary: NARRATIVE AND AUTHORIAL VOICE.
The NARRATIVE voice is the voice of the NARRATOR and/or CHARACTER in a text. It may be written from a 1ST PERSON, 2ND PERSON or 3RD PERSON point of view.
AUTHORIAL voice refers to the voice of the author and is a part of that author's writing style.
In the COMPREHENDING SECTION of your exam, you could be asked to identify and/or discuss the VOICE in an unseen text. Students find this quite tricky, but there are KEY THINGS to look for:
- 1st, 2nd or 3rd person POV (We did a post on the significance of this earlier - it’s worth reading if you missed it! 😊)
- Male / female
- Approximate age
- Accent / dialect (and therefore perhaps ethnicity)
- Attitudes or values identifiable from dialogue, actions and/or thoughts
We’ve included some excerpts here with brief notes on each. Hopefully you can see how the tips above can help you identify the VOICE in an unseen text. 😊
First person POV is used in the excerpt above, so the VOICE is subjective and personal. The narrator talks about his brother being buried, worrying about ‘how he would breathe’ and whether he should ‘put some fruit in the grave’ in case he gets hungry.
The VOICE is obviously that of a young child, since the narrator doesn’t seem to understand death. We could guess an age between 5 and 10. Gender is not obvious, and there is no detectable accent or dialect.
The NARRATIVE VOICE conveys a sense of loneliness (attitude). ‘Nobody really talked’ (repetition) and the narrator tells us the teachers and kids at school stayed away. We could also say the VOICE is one of innocence (attitude). We’ve already identified that the narrator doesn’t understand about death, but the final sentence - ‘finding him in a drain without his clothes on was worse’ - also conveys this. This small detail has a lot more meaning for us as readers than it does for the child narrator (innocent). We know the narrator’s brother has likely been sexually abused and murdered.
We chose this example (Text 1) to show you how important the CONTEXTUAL INFORMATION provided by the examiners can be! 😊 Students were asked to identify the VOICE in the excerpt and explain how it POSITIONED them to view Berlin.
1st person POV, so again the VOICE is very subjective (biased) and personal. The AUTHORIAL VOICE is a female Australian voice conveying her experiences of travelling in Berlin, Germany. All of this is given in the CONTEXTUAL INFORMATION provided.
We can expand on this by making some inferences. An Australian travelling in Germany means the VOICE is that of a foreigner or outsider. Additionally, the VOICE is unwell or ill. Anna Funder admits in the first sentence that she is hungover, so not in the best state of mind to be objectively making observations about a foreign city. Her hangover may adversely affect her attitudes towards, and opinions of, Berlin and consequently POSITION READERS to view it in a negative light also.
Katz uses the 1st person POV – subjective/biased, influenced by personal values, attitudes and beliefs. The VOICE is that of a 49-year-old male and is COLLOQUIAL (i.e. relaxed, conversational), shown through the use of words such as ‘kid, fella, gramps, little bugger’. Traditional ATTITUDES regarding manners and respect are conveyed by the VOICE, who expects the boy to move for him because he is older, snazzily dressed and carrying an expensive cake.
The writer also imagines what the kid is thinking, and in doing so creates a disrespectful, cheeky VOICE for the young boy.
Finally, the VOICE of the article is sarcastic (tone) and classist (attitude) when it says ‘carrying a cheap novelty footy that was probably stitched together by Bangladeshi orphans.’ The VOICE is also humorous (tone) – ‘disrespect was just oozing out of him, mostly from his little snout.’
IS THIS ARTICLE BY KATZ NARRATIVE OR AUTHORIAL VOICE? This is a tricky question to answer because it could be BOTH! Danny Katz may be recalling an experience he has had; in this case, the VOICE would be AUTHORIAL. OR, he might have created this lemon-tart-carrying 49-year-old to explore the generation gap that exists in society. In this case, the VOICE would be NARRATIVE.
At the end of the day, whatever excerpt you are given in your examination/assessment, the examiner will be looking to see if you can identify and discuss the VOICE. It’s also likely they will call it narrative voice, authorial voice or simply voice in the question. So don’t worry too much about this 😊
We chose the example above because it uses the 2nd person POV which is unusual. It could be that the narrator wants to make us part of the story, to put us into the shoes of the character as a way of better understanding the issues. In this case, the ‘you’ (which is us, the reader) can feel under attack from the VOICE, as if we should have known better.
Another possibility is that they used 2nd person POV because the narrator is older and talking to her younger self. Either interpretation can be correct, just be sure to support your interpretation with evidence from the text (i.e. a quote). 😊
NOW, let’s identify the VOICE. The gender and age are not clear. But the TONE is clear. The VOICE seems to be critical of Americans and America itself. The narrator talks about the ‘big hot dog with yellow mustard that nauseated you.’ An iconic symbol of American culture, the hot dog looks great but left you feeling sick on eating it. This perhaps represents that America is not as good as it may seem, particularly for migrants.
The VOICE is also satirical (tone). The narrator mocks America saying ‘they were desperately trying to look diverse. They included a photo of him in every brochure, even those that had nothing to do with his unit.’ The narrator is poking fun at America’s desire to look multicultural, when in fact it is not, so the VOICE is sarcastic.
“The technique of reducing the physical world into mathematical abstractions… played a key role in producing a new physics, and stands as a distinctive feature of the Scientific Revolution” (p. 73). Would it also be accurate to say that this is what’s distinctive of science, and in particular, what distinguishes science from the humanities? Explain.
Today more than ever, the sciences and humanities have become intertwined and rely on each other to answer the questions of how and why regarding human nature. While science may focus on mathematical abstraction, deriving knowledge about the world through mathematical operations, the humanities focus on a different kind of abstraction. The humanities derive meaning from the simplification of complex concepts into smaller notions, then strategically analyze these smaller concepts to gain insight into human life and the reasons for what we do. Science examines big-picture ideas and narrows down in succession, while the humanities follow small ideas to understand the bigger picture. These distinctions work together rather than oppose each other to create a holistic understanding of the world. Galileo references the “Book of Nature,” writing that “this grand book, I mean the universe… is written in the language of mathematics and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it” (p. 73). The Book of Nature is a philosophical and religious concept which views nature as a “book” to be read to gain knowledge and insight and to truly understand the world. This reveals the need for both the humanities and science to understand the human person and the physical world. Science, through the use of mathematics, communicates the fundamentals of our existence, and the humanities go beneath the surface to analyze the human experience.
What, today, can be considered revolutionary? Does this term carry the same meaning as it once did or have we reached the capacity for revolution as it was once defined?
Parents are the only people who observe their children in all their dimensions. Other adults see children in narrow and limited environments such as school, sports, the arts, and occasional contacts. As parents, you see your children as unique people because you observe their talents, challenges, and personalities across all parts of life.
When parents try to talk about ways in which they see their children as “smart” or “intelligent,” they are often hampered by the limited way in which society defines intelligence. In school, intelligence is often associated with the quality of performance in language, logic, and math. Other measures of “smart” are supplied in extracurricular activities, clubs, or sports. But intelligence is far broader and more complex than the three primary areas around which schools are organized.
Eight Different Types of Intelligence
Years ago, Harvard University professor Howard Gardner, Ph.D., researched intelligence and created a framework that gave us a broader view of the qualities that constitute being “smart.” He reported that there are eight different types of intelligence, which he labeled Multiple Intelligences. Each person is strong in one or several of these different intelligences.
In my private practice, high school and college students have significantly benefited from using the Highlands Ability Battery (HAB) to determine how their natural mental abilities form their multiple intelligences. Each intelligence requires different types of learning, problem solving, and interaction with the world. Because the HAB assesses natural abilities in memory, problem solving, and personal style, students get a deeper understanding of their multiple intelligences. This understanding helps them make major life decisions about college, majors, and careers.
The eight multiple intelligences identified by Dr. Gardner are
- Interpersonal or “people” smarts such as creating strong relationships
- Intrapersonal or “self” smarts such as knowing what you like and don’t like
- Musical or “music” smarts such as composing, enjoying, and playing music
- Logic-Math or “numbers and reasoning” smarts such as solving advanced math problems
- Linguistic or “word” smarts such as writing articles or enjoying reading
- Spatial or “picture” smarts such as painting, drawing, or designing
- Body-Kinesthetic or “body” smarts such as dancing and playing sports
- Naturalistic or “nature” smarts such as in gardening or caring for animals
If you use Gardner’s broad definitions about the various ways to define “smart,” you will view your children’s intelligences and talents through a different lens, and improve the ways in which you support their development. I have found that the Highlands Ability Battery gives high school and college students an easy framework to explore how to build their lives around their unique multiple intelligences.
1. Students should understand these 3 things about being a good digital citizen:
A. They need to make correct choices when on the computer, even when no one is watching them.
B. They should ask for help when they get any pop-ups or accidentally land on an inappropriate site. Honesty is best in these types of situations.
C. They can still have fun while being a good citizen. Following guidelines provides safety and allows them to be comfortable while using these new technology devices.
2. I will use BrainPOP because it has some good videos on digital citizenship. It is kid-friendly and the kids enjoy it. We would also discuss as a class what is important.
3. I would teach the idea of digital citizenship by making the connection that the students are all citizens in life
and in their own community. They have to make good choices to represent themselves, and this is the same for using computers and technology devices. They can be tracked, and we can watch to see if they are behaving, so making good choices will allow them to have lots of use of these new devices. If a student makes incorrect choices, they may lose the privilege of using the technology.
4. I would share this idea of digital citizenship with the parents by sending an email or posting on my class blog all of the guidelines and discussion points that we talked about as a class.
5. I will also have the kids and parents sign an agreement to show they understand the guidelines, and they are going to be held accountable for following them.
Sir Isaac Newton Biography for Kids – Founder of Calculus
Sir Isaac Newton was born in the county of Lincolnshire, England, in 1643. His father died just months before he was born, and when he was three years old, his mother left him in the care of his grandmother. Isaac was always a top student, and went off to the University of Cambridge at age 19. While at Cambridge, Newton was influenced by the writings of Galileo, Nicolaus Copernicus, and Johannes Kepler. By 1665, Newton had begun developing a mathematical theory that would lead to the development of calculus, one of the fundamental branches of mathematics. Newton would go on to discover other important mathematical results such as Newton’s Identities and Newton’s Method.
In 1670, Newton moved on to the study of optics and developed theories relating to the composition of white light and the spectrum of colors. In one of his famous experiments, he refracted white light with a prism, resolving it into its constituent colors: red, orange, yellow, green, blue, indigo, and violet. As a result of his experiments, he developed Newton’s Theory of Color, which claimed that objects appear certain colors because they absorb and reflect different amounts of light. Newton was the first scientist to maintain that color was determined solely by light, and his findings created much controversy, as most scientists thought that prisms colored light. Newton also created the world’s first color wheel, which arranged different colors around the circumference of a circle. He is also credited as the first scientist to explain the formation of a rainbow – from water droplets dispersed in the atmosphere.

In 1679, Newton continued his work on gravitation and its effects on the planets. In 1687, he published Philosophiae Naturalis Principia Mathematica. In this landmark work, Newton explained his three laws of motion and his theory of gravity. According to Newton, gravity is the reason that objects fall to the ground when dropped. Moreover, gravity is the reason why planets orbit the sun, why moons orbit planets, and why ocean tides exist. Newton’s theories remain among the most important concepts in the history of science. There is some evidence that Newton’s ideas concerning gravity were inspired by apples falling from trees. There is no evidence to suggest, however, that any of the apples hit him in the head (as cartoons and fables suggest).
Below are Newton’s three laws of motion:
- Newton’s First Law (the Law of Inertia) states that an object at rest tends to stay at rest and that an object in uniform motion tends to stay in uniform motion unless acted upon by an external force.
- Newton’s Second Law states that an applied force on an object equals the time rate of change of its momentum.
- Newton’s Third Law states that for every action there is an equal and opposite reaction.
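The Second Law above can also be written compactly in modern calculus notation (a standard modern rendering; Newton himself stated his laws in words and geometry rather than in these symbols):

```latex
% Newton's second law: force equals the time rate of change of momentum.
% For a body of constant mass m, momentum is p = mv, so the law
% reduces to the familiar F = ma.
F = \frac{dp}{dt} = \frac{d(mv)}{dt} = m\,\frac{dv}{dt} = m a
```

For everyday objects whose mass does not change, this is why doubling the force on an object doubles its acceleration.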
Following the publication of his work, Newton became instantly famous throughout Europe. In the later years of his life he wrote several articles on interpretation of the Bible. He was also appointed a member of the British Parliament and spent many years reforming the Royal Mint (Britain’s coin-making agency). He died on March 20, 1727.
As the twentieth century came to an end, Americans grew increasingly interested in the history of the nineteenth. Civil War commemorations, re-enactments, books and films proliferated, depicting the nation’s traumatic division, as well as chronicling its redemptive narratives of suffering, heroism and, of course, freedom. The emancipation struggle, however, rested a bit uncertainly in this Civil War enthusiasm, sometimes ignored, often subordinated in a story that focused on battle. Yet the eagerness to understand national identity that has animated this attention to history cannot be separated from the American dilemma of race, with its roots in slavery and branches stretched across our own time. As the 1990s saw a doubling in the number of Civil War books published each year, it also witnessed an extraordinary flourishing of interest in the Underground Railroad, a phenomenon that in the course of the decade yielded two federal laws, state programs involving millions of dollars for preservation and memorialization, and activities in schools and communities across the nation.
Under the directive of the 1998 National Underground Railroad Network to Freedom Act, the National Park Service now manages a program designed to link state and local efforts into a “mosaic of community, regional and national stories” by certifying the authenticity of Underground Railroad sites and encouraging preservation and education programs. The Underground Railroad, the National Park Service proclaims, “bridged the divides of race, religion, sectional differences and…joined the American ideals of liberty and freedom expressed in the Declaration of Independence and the Constitution to the extraordinary actions of ordinary men and women.” The Network to Freedom program emphasizes the “historical significance of the Underground Railroad in the eradication of slavery and the evolution of our national civil rights movement, and its relevance in fostering the spirit of racial harmony and national reconciliation.”
The assumptions and attractions of this history are clear and explicit. Ordinary people undertake acts of heroism motivated by our national principles. Their efforts are effective in diluting the stain of slavery. Blacks and whites work together in a spirit of racial harmony that remains an unrealized national ideal and a still-elusive reality more than a century later. Just as the Underground Railroad freed nineteenth-century slaves from bondage, so it can offer modern Americans a promise of freedom from –or at least an alternative to–the historical burden of racism and slavery. This is an inspirational story, one far easier to present in school curriculums and in popular commemoration than the divisive and complex experience of American slavery. The Underground Railroad speaks powerfully to what historian David Blight has called “our need to find an ennobling past.”
But we should be wary of a history built on need and desire. The present always uses the past for its own purposes, but the best historical writing attempts to contain that inescapable impulse in order to understand history in its own terms. The Underground Railroad has always made that difficult. It is a metaphor; it was never a railroad, nor was it under the ground. The term emerged in the 1830s to describe networks of support and communication assisting slaves escaping from bondage to the North and freedom. The imperatives of secrecy prevented extensive documentation of illegal and necessarily clandestine activities; romanticized memories exaggerated levels of formal organization, coherence and extensiveness; prevailing racial assumptions shaped recollection. By the end of the nineteenth century, a popular mythology of the Underground Railroad had arisen that celebrated altruistic whites serving as “conductors” for the benefit of largely passive black “passengers.” Legend claimed locations across the North as former “stations” on the Railroad, often with a hidden passage or room offered as definitive evidence for a site’s authenticity.
The civil rights movement of the 1950s and ’60s sparked a revolution in the historical understanding of slavery and abolition and brought significant revisions to the Underground Railroad narrative, as the activities of blacks both as fugitives and “conductors” claimed a more central place in the story. Nevertheless, the power of myth continued to challenge historical accuracy, even as interest in the Railroad grew.
Fergus Bordewich’s Bound for Canaan: The Underground Railroad and the War for the Soul of America enters this complex historical–and political–tradition. His very subtitle suggests how passionately he identifies with the moral agenda of the Underground Railroad story, which he regards as “an answer to slavery’s legacy of hurt and shame.” But his claims for it are hardly limited to its moral significance. The Underground Railroad was, he asserts, “the country’s first racially integrated civil rights movement,” “one of the most ambitious political undertakings in American history,” “an even greater record of personal bravery and self-sacrifice than is generally known,” “a movement with far-reaching political and moral consequences that changed relations between the races in ways more radical than any that…would be seen again until the second half of the twentieth century.”
To make such claims, it is necessary to establish what the Underground Railroad actually was, to capture this poorly documented, ill-defined, elusive and allusive concept. The term has been employed so imprecisely that it has frequently been applied to all flights from slavery to freedom in the antebellum period. This would include escapes that predated the name itself, which only came into use when the emergence of railroads supplied the metaphor. Bordewich follows this custom and invokes the Underground Railroad regardless of the formal or organized nature of a slave’s passage to freedom and regardless of the fugitive’s self-conscious awareness of a structured network of assistance. For example, one of Bordewich’s heroes, a bondsman named Arnold Gragston, who ferried many slaves across the Ohio before eventually fleeing himself, later responded to a question about the Underground Railroad by noting, “I don’t know as we called it anything.” But Bordewich does not quote this remark. Instead he uses Gragston as one of many examples to show that the Underground Railroad by the late 1830s had “taken recognizable form,” developed “trunk lines,” an identifiable leadership structure of “conductors” and members and patterns of “sophisticated coordination.” The Underground Railroad becomes a reality for the modern historian that it was not for the nineteenth-century participant.
Indeed, most slaves who fled bondage did not benefit from an organized network of assistance and communication. The majority of those who ran away stayed within the South, hiding in woods and swamps, disappearing into cities, causing steady and significant disruption of the South’s “peculiar institution,” but never crossing the Mason-Dixon line to freedom. Fugitives who did escape northward came overwhelmingly from the border states. As Bordewich himself acknowledges, “Very few successful freedom seekers were from the Deep South.” Eighty percent of Southern-born blacks counted in the Ontario census of 1861 were from the three states of Virginia, Maryland and Kentucky.
Bordewich writes of a “hemorrhaging of fugitive slaves.” Yet these slaves were more a trickle than a flood, their origins geographically limited, their numbers a tiny fraction in comparison to those who remained captive or never succeeded in their efforts to claim freedom. Bordewich generously estimates that the Railroad transported approximately 100,000 “passengers” in the course of the nineteenth century. Even if Bordewich’s figure is right, it is a small fraction of the slave population. In 1860 there were 4 million slaves living in the United States, and we can only estimate the total number of those caught in bondage from 1800 to 1860, which is of course the population from which Bordewich’s figure of 100,000 came. Each successful fugitive meant a life transformed and saved from slavery’s oppression, and we must not discount the importance of every act of self-liberation by daring and resourceful fugitives. But if we emphasize the reassuring myths of the Underground Railroad over the exploitative realities of the slave system, we distort the past by ignoring the experience of the many for the dramatic and inspiring stories of an exceptional and unrepresentative few – overwhelmingly from the border states, overwhelmingly young, overwhelmingly male.
Although the numerical impact of the Underground Railroad on the slave system was limited, the political and symbolic force of fugitives was, as Bordewich argues, significant. White Southerners came to believe in an Underground Railroad far more organized and vast than anything that actually existed. Their consequent sense of threat and anger would contribute meaningfully to mounting sectional tensions. The South’s escalating demands that fugitives be returned from Northern states led to the 1850 Fugitive Slave Act, a measure that proved incendiary by putting the weight of federal law on the side of slave catchers who came north to retrieve runaways, some of whom had been living in freedom for decades. The battles over fugitives across the North made it impossible to compartmentalize slavery as a Southern concern. The fugitive law made slavery the property of the nation.
Bordewich explores a number of these confrontations between slavery and freedom in some detail, using them as a vehicle for expanding his consideration of antislavery activism. He does so, however, in a manner that further muddles his definition of the Underground Railroad. John Brown’s 1859 raid on Harpers Ferry establishes him, in Bordewich’s view, as “the apotheosis of the Underground Railroad,” while the federal policy of welcoming tens of thousands of slaves who fled to Union lines during the Civil War is treated as a version of the Underground Railroad on a “scale that would help destroy the plantation economy of the Confederacy.” Redefining every action that weakened slavery as part of the work of the Underground Railroad makes it far easier to argue for its impact and significance, but it also blurs definition and understanding.
The abolition movement and the Underground Railroad, Bordewich acknowledges, “were never completely congruent.” But his wishful exaggeration of their convergences has produced a book that is about both and fully focused on neither. He admits he has “not written an encyclopedic survey of the underground,” though he relates many of its most dramatic tales and vividly demonstrates the extraordinary bravery and commitment of its black and white participants. But he tells these stories as part of a broad overview of slavery and antislavery that is distorted by its subordination to the Underground Railroad narrative. His presentation of slavery includes fundamental factual errors; for example, 4,000 slave owners did not own two-thirds of the South’s slaves in 1860; US Census data, as cited in Peter Kolchin’s Unfree Labor, indicates that 1 percent of slaveholders (3,830 in 1860) owned 2.4 percent of slaves. And the portrait of abolition that emerges here is a curious one. Without the Underground Railroad, Bordewich suggests, it might “never have become anything more than a vast lecture hall in which right-minded white Americans could comfortably agree that slavery was evil.” The last half-century of scholarship on abolition has advanced quite a different view.
The Gag Rule controversy, all but ignored by Bordewich, offers an illustration of the point. The Congressional battle over how to treat the tens of thousands of petitions forced upon Washington by antislavery activists–particularly women–became, under the leadership of John Quincy Adams, almost as powerful a challenge to the North’s compartmentalization of slavery as the Fugitive Slave Act would later prove. The near absence of this conflict from Bordewich’s book illustrates an important shortcoming of his portrait of the antislavery movement. In its focus on individuals acting heroically within the loose networks of the Underground Railroad, it makes little room for the role of politics in the fight against slavery. In Bordewich’s interpretation, the Underground Railroad, rather than any actions in the political realm, set the stage for slavery’s demise. The Mexican War, the Wilmot Proviso, shifting party politics, debates over the territories–all of which heightened the tensions around slavery–are marginalized or entirely overlooked.
A study of the Underground Railroad need not address every dimension of the antebellum struggle over slavery. But a book that seeks to establish the Underground Railroad as the defining force in a “war for the soul of America” and that portrays a range of other antislavery actions as virtual manifestations of the Railroad must carefully weigh the impact of the Underground Railroad in relation to other antislavery endeavors. Bordewich does not offer such a balanced evaluation but provides instead a fragmented version of the movement against slavery, which he claims had the Underground Railroad at its heart.
In 2004 the National Underground Railroad Freedom Center opened in Cincinnati, Ohio, signaling a continuing American fascination with the extraordinary tales of individuals who acted decisively for freedom. Fergus Bordewich’s telling of these stories will find a ready audience, for his book represents a highly readable, richly detailed, inspiring contribution that integrates many discoveries and revisions of recent decades – most notably, stories of black agency – into an accessible narrative for the general reader. But it offers a vision of the past too much determined by the hopes and needs of the present. The story of slavery and of the efforts to overthrow it should not be seen primarily or exclusively through the lens of a movement whose contours can only hazily be identified or defined, a movement that offers a feel-good story of the past that may divert us from the real burdens of slavery’s legacy in American life.
The aortic valve allows blood to flow from the heart’s left ventricle up into the aorta and on to the rest of the body. Aortic regurgitation (also called aortic insufficiency or aortic incompetence) occurs when the valve does not close properly. If the valve does not close all the way because it is weakened or widened, blood leaks backward, and the left ventricle overfills with each heartbeat. Aortic valve regurgitation is graded from mild to severe based on the amount of backflow. In time, the left ventricle is forced to pump more blood than normal, and it gradually enlarges. Untreated valve disease can lead to congestive heart failure, cardiomyopathy, arrhythmia, or blood clots.
What Causes Aortic Valve Regurgitation?
Aortic valve regurgitation can be caused by a congenital valve deformity (e.g., having only two leaflets instead of three), an infection (e.g., rheumatic fever or infective endocarditis), aortic root disease (e.g., Marfan syndrome), or another disease process taking place in the body, such as severe high blood pressure. It is most common in men aged 30 to 60.
Symptoms depend on the patient and the type and severity of the regurgitation. Some patients experience no symptoms. In other cases, valve disease may take its toll over many years. Symptoms may not develop until the left ventricle has been affected, and they include:
• fatigue (especially during physical activity)
• shortness of breath
• fluid retention or edema (e.g., in the ankles)
• arrhythmias (including a fast or fluttering pulse)
• angina pectoris or chest pain that worsens during exercise
These symptoms are likely to occur or worsen during exercise or physical exertion.
Heart valve disease is diagnosed by listening to the heart with a stethoscope; diseased heart valves make distinct clicking sounds or murmurs. Other diagnostic tests that are used to determine the underlying cause, complexity, and severity of heart valve disease include a chest x-ray, blood tests, an echocardiogram, electrocardiography (ECG) or Holter monitoring, exercise stress testing, electrophysiology (EP) studies, cardiac catheterization, and diagnostic imaging scans, such as computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET). A careful and complete diagnostic workup will help determine the timing and composition of the treatment plan.
Treatment for aortic valve regurgitation depends on its type and severity. Asymptomatic patients may not need treatment. In some cases, careful monitoring is all that is needed.
Medicine cannot cure the valve, but some types of medicine (digitalis, diuretics, anticoagulants, beta blockers, calcium channel blockers, and ACE inhibitors) may ease the pain or symptoms, reduce the workload on the left ventricle, or regulate an associated arrhythmia.
If symptoms worsen, become difficult to manage, or if medicine no longer relieves symptoms, an intervention or open heart surgery may be needed. The aortic valve can be repaired or replaced. Repair may involve reinforcing the valve or reconstructing its anatomy to restore normal function. Replacement is used to treat an aortic valve that cannot be repaired. It involves removing the defective heart valve and replacing it with a prosthetic valve. Prosthetic valves can be mechanical (made from materials such as plastic, carbon, or metal) or biological (made from human or animal tissue). Mechanical valves carry the risk of developing a blood clot on the new valve, so patients with mechanical heart valves must take blood-thinning medicine for the rest of their lives to prevent this complication.
Preventative Antibiotics and Heart Valve Disease
Patients with heart valve disease who have an abnormal heart valve or who have had heart surgery are at risk of developing endocarditis. The American Heart Association no longer recommends routine antibiotics before certain dental or surgical procedures, except for people at the highest risk of bad outcomes if they do develop endocarditis.
Texas Heart Institute www.texasheartinstitute.com/HIC/Topics/Cond/valvedis.cfm
Texas Heart Institute www.texasheartinstitute.com/HIC/Topics/Cond/vaortic.cfm
Mayo Clinic www.mayoclinic.org/heart-valve-disease/index.html
American Heart Association www.americanheart.org/presenter.jhtml?identifier=4448 |
Bluetooth is a protocol for wireless communication. Devices such as mobile phones, laptops, PCs, printers, digital cameras and video game consoles can connect to each other, and exchange information. This is done using radio waves. It can be done securely. Originally, Bluetooth was developed to reduce the number of cables needed to connect such devices to a PC. Bluetooth is only used for relatively short distances, like a few metres.
There are different standards. Data rates vary. Currently, they are at 1-3 MBit per second. Typical Bluetooth applications are to connect a headset to a mobile phone, or to connect a computer mouse, keyboard or printer.
Bluetooth devices use the ISM band around 2.4 GHz. This band can be used worldwide without paying license fees, but many other devices use it too, such as DECT cordless telephones, RFID smart tags, and baby monitors. Bluetooth uses the same band as some WLANs, but the modulation technique is different: Bluetooth uses frequency-hopping spread spectrum.
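The frequency hopping mentioned above can be sketched in a few lines. Classic Bluetooth hops among 79 channels of 1 MHz each in the 2.4 GHz band; the real hop sequence is derived from the master device's address and clock, but a seeded pseudo-random generator is enough to illustrate the idea (the seed and helper names here are illustrative, not part of any Bluetooth API):

```python
import random

NUM_CHANNELS = 79     # classic Bluetooth: 79 channels of 1 MHz each
BASE_FREQ_MHZ = 2402  # channel 0 is centred at 2402 MHz

def hop_sequence(seed, hops):
    """Illustrative pseudo-random hop sequence (real Bluetooth derives
    the sequence from the master's address and clock)."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(hops)]

def channel_freq_mhz(channel):
    """Centre frequency of a given channel number."""
    return BASE_FREQ_MHZ + channel

# Two devices sharing the same seed stay synchronised on the same channels,
# while interference on any single channel only affects occasional hops.
master = hop_sequence(seed=42, hops=10)
slave = hop_sequence(seed=42, hops=10)
```

Because both sides compute the same sequence, they meet on the same channel at each hop; a narrowband interferer (say, a baby monitor parked on one frequency) corrupts only the hops that land on that channel.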
Comparing Bluetooth to Wireless LAN
There are many situations today where either Bluetooth or Wireless LANs can be used. Examples include setting up networks, printing or transferring files and data from a PDA to a computer. Both are versions of unlicensed wireless technology. WLANs can span bigger distances and offer higher throughput. Bluetooth does not require expensive hardware. It also needs less power to operate.
Both use the same radio frequency range, but their modulation techniques are different. Both were developed to replace cabling: Bluetooth to get rid of peripheral-equipment cables, and WLANs to get rid of the cables needed to set up a LAN.
Bluetooth devices
Bluetooth exists in many products, such as telephones, the Wii or PlayStation 3. It has recently been built into some electronic watches, modems and headsets. The technology is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e. with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth uses protocols that make it easy to discover devices that are nearby, and to set up a connection between these devices. Bluetooth devices can advertise all the services they offer. Using services becomes easier; a lot of the setup related to security, networking or permissions is automatic.
Wireless LAN
WLANs are more like traditional LANs. More configuration is needed to set up the network, and to make resources accessible by other devices in the network. WLAN devices also consume more power than Bluetooth ones. Sometimes WLANs are called wireless Ethernet. WLAN needs more resources to be set up, but it also offers faster transfer speeds. The security is also better than that offered by Bluetooth. |
Focus: Landmarks: Invention of the CD-Player Laser
Focus Landmarks feature important papers from the archives of the Physical Review. During 2010, the 50th anniversary of the invention of the laser, we’re highlighting some laser-related papers, as part of LaserFest.
The invention of the semiconductor laser took lasers from the scientist’s lab and action hero’s arsenal to every living room DVD player and grocery store scanner. It began with the serendipitous discovery in 1962 that gallium arsenide could be made to produce surprisingly intense light. Later that year, the first gallium arsenide laser was reported in Physical Review Letters. The modern descendants of that device are the tiny lasers that abound in countless modern appliances.
Following the 1948 invention of the transistor [see Focus, 15 May 2009], semiconductor technology–mainly with silicon and germanium–developed rapidly. Eventually some researchers began using gallium arsenide (GaAs) to make certain specialized kinds of diodes, components that let current flow in only one direction.
A diode works by combining two types of material. Depending on what elements are added to it, a semiconductor can be either n-type, meaning that current flows in the form of electrons, or p-type, which carries current as “holes”–positively-charged vacancies corresponding to missing electrons. A diode has a two-layered structure with an internal p-n boundary. Applying a positive voltage to the n-type layer pulls electrons and holes in opposite directions, away from the p-n boundary, and no current flows. A “forward” voltage, with the opposite polarity, draws both types of carrier into the boundary region, and the diode then conducts current.
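The one-way behavior described above is commonly modeled by the Shockley diode equation, I = I_S(e^{V/(nV_T)} − 1). A minimal numerical sketch follows; the saturation current and ideality factor below are typical illustrative values, not measurements for any specific device:

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    """Shockley diode equation: current (A) for an applied voltage v (V).
    i_s: reverse saturation current (A), n: ideality factor,
    v_t: thermal voltage at room temperature (~25.85 mV).
    All parameter values here are illustrative assumptions."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

i_forward = diode_current(0.7)   # forward bias: substantial current flows
i_reverse = diode_current(-0.7)  # reverse bias: only the tiny leakage, about -i_s
```

The exponential makes the asymmetry stark: a forward bias of 0.7 V yields a current on the order of hundreds of milliamps for these parameters, while the same voltage reversed yields only the picoamp-scale leakage current.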
At a conference in the summer of 1962, researchers reported successful demonstration of GaAs diodes, but with a remarkable twist: the diodes emitted prodigious quantities of infrared light. The phenomenon of electroluminescence from semiconductors was not new, but the intensity of emission from GaAs diodes was a surprise to the meeting participants. It corresponded to a conversion efficiency from electric to radiative energy approaching 100 percent. Physicists at the time understood that electrons and holes combining at the p-n boundary could release photons, but they didn’t know why a GaAs diode emitted so much light; silicon and germanium diodes released none at all.
Nevertheless, with the 1960 invention of the laser fresh in their minds [see Focus 27 Jan 2005], several researchers saw that light emission from a diode junction could potentially be the foundation of a new kind of laser. From a quantum mechanical perspective, they reasoned, merging of electrons and holes could be “stimulated” by the presence of radiation of the correct frequency, leading to a cascade of coherent emission, just as in a conventional laser.
The goal was first achieved by Robert Hall and his colleagues at the General Electric Research Laboratory in Schenectady, New York. They turned a roughly cubic GaAs diode, 0.44 millimeters on a side, into a laser by polishing two of the opposing faces to make an optical cavity in which light reflected back and forth through the crystal. The reflecting light stimulated further emission from the electron-hole pairs in the boundary layer. Hall had the idea of polishing the crystal because, as a child, he had polished glass to make his own telescope, says Russell Dupuis of the Georgia Institute of Technology in Atlanta.
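The polished faces form a Fabry-Perot cavity, whose longitudinal mode spacing is given by Δν = c/(2nL). For the 0.44 mm GaAs crystal described above (taking a refractive index of about 3.6 for GaAs, an assumed typical value rather than a figure from the paper), the modes come out roughly 95 GHz apart:

```python
C = 2.998e8  # speed of light, m/s

def mode_spacing_hz(length_m, refractive_index):
    """Longitudinal mode spacing of a Fabry-Perot cavity: c / (2 n L)."""
    return C / (2.0 * refractive_index * length_m)

# Hall's diode: 0.44 mm cavity; n ~ 3.6 is a typical value for GaAs (assumption)
spacing = mode_spacing_hz(0.44e-3, 3.6)
```

The very short cavity is what makes semiconductor lasers so compact compared with gas lasers, at the cost of widely spaced modes and broader emission.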
In their Physical Review Letters report of 1 November 1962, Hall and his colleagues surmised that light-emission in GaAs occurs because the electrons and holes occupy quantum states that allow them to produce only photons when they combine. In silicon and germanium, by contrast, a momentum mismatch between electron and hole states forces them to produce phonons, or quantized lattice vibrations, that carry off the excess momentum along with the energy. This picture was essentially correct, as theorists later showed. Light-emitting diodes (LEDs) also radiate by electron-hole merging, but they are constructed without the optical cavity needed for lasing.
Two other groups followed quickly with their own GaAs lasers. A few weeks later, Nicholas Holonyak and his colleagues at General Electric in Syracuse, New York, made a laser with gallium arsenide phosphide, an alloy, that emitted visible red rather than infrared light. This first successful use of a semiconducting alloy, rather than a pure compound, was of huge significance in opening up a new realm of semiconducting materials with a wide variety of desirable properties, says Dupuis.
David Lindley is a freelance writer in Alexandria, Virginia, and author of Uncertainty: Einstein, Heisenberg, Bohr, and the Struggle for the Soul of Science (Doubleday, 2007).
- M. I. Nathan, W. P. Dumke, G. Burns, F. H. Dill, Jr., and G. Lasher, “Stimulated Emission of Radiation from GaAs p-n Junctions,” Appl. Phys. Lett. 1, 62 (1962); T. M. Quist, R. H. Rediker, R. J. Keyes, W. E. Krag, B. Lax, A. L. McWhorter, and H. J. Zeiger, “Semiconductor Maser of GaAs,” Appl. Phys. Lett. 1, 91 (1962)
- N. Holonyak, Jr., and S. F. Bevacqua, “Coherent (Visible) Light Emission from Ga(As1-xPx) Junctions,” Appl. Phys. Lett. 1, 82 (1962)
Invention of the Maser and Laser (Focus 2005)
Birth of Modern Electronics (Focus 2009)
Russell D. Dupuis, The Diode Laser–the First Thirty Days Forty Years Ago, LEOS (IEEE), 2003
PRL Milestones write-up by Martin Blume |
A familiar and widely used example is the inductive ballast used in fluorescent lamps to limit the current through the tube, which would otherwise rise to a destructive level due to the negative differential resistance of the tube's voltage-current characteristic.
Ballasts vary in complexity. They can be as simple as a series resistor or inductor, or a capacitor, or a combination of these. They may be as complex as the electronic ballasts used in fluorescent lamps and high-intensity discharge lamps.
- 1 Current limiting
- 2 Resistors
- 3 Reactive ballasts
- 4 Electronic and magnetic ballasts
- 5 Fluorescent lamp ballasts
- 6 ANSI Ballast factor
- 7 Ballast triode
- 8 See also
- 9 References
- 10 External links
An electrical ballast is a device that limits the current through an electrical load. Ballasts are most often used when a load (such as an arc discharge) has a terminal voltage that declines as the current through the load increases. If such a device were connected to a constant-voltage power supply, it would draw an increasing amount of current until it was destroyed or caused the power supply to fail. To prevent this, a ballast provides a positive resistance or reactance that limits the current, allowing the negative-resistance device to operate properly.
A gas-discharge lamp is an example of a device which, under certain conditions, has negative differential resistance. In such a situation (after lamp ignition), every small increase in the lamp current tends to reduce the voltage "dropped" across it (supposing the lamp to be connected in series with other circuit elements). Let ΔI represent a change in the current I, and ΔV the corresponding change in the voltage V. Each variation is positive (or negative) if its variable increases (or decreases). The differential resistance is the ratio between ΔV and ΔI, and it can be either positive or negative (and sometimes even zero). This is quite a different concept from the resistance V/I, which is always considered positive. In the case of a gas-discharge lamp, the differential resistance (i.e., dV/dI) really does become negative, because a positive variation in the current (dI) causes a negative variation in the voltage (dV) across the lamp.
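The sign of dV/dI can be checked numerically with a toy characteristic. A curve of the form V(I) = V₀ + k/I (an illustrative model, not fitted to any real lamp) has dV/dI = −k/I², negative everywhere, even though the ordinary ratio V/I stays positive:

```python
def lamp_voltage(i, v0=50.0, k=10.0):
    """Toy discharge-lamp characteristic: voltage falls as current rises.
    v0 and k are illustrative constants, not data for a real lamp."""
    return v0 + k / i

def differential_resistance(i, di=1e-6):
    """Numerical dV/dI via a central difference."""
    return (lamp_voltage(i + di) - lamp_voltage(i - di)) / (2 * di)

r_diff = differential_resistance(0.5)  # negative: more current, less voltage
static_r = lamp_voltage(0.5) / 0.5     # ordinary V/I ratio: still positive
```

At I = 0.5 A the differential resistance of this toy curve is −40 Ω while V/I is +140 Ω, illustrating why the two concepts must be kept distinct.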
For a mechanical analogy to negative resistance behavior, and how a ballast can limit current, imagine trying to push a heavy weight across a smooth surface. Applying a force (voltage) to the weight initially does not cause it to move (have positive current) because the static friction of the weight against the surface is more than the applied force. Once enough force (voltage) is applied, the static friction is overcome and the weight starts to move (now dynamic friction opposes the movement of the weight instead of static), but if the amount of force that was used to overcome the static friction is not changed, the weight will be pushed too fast across the surface because the dynamic friction is of a smaller magnitude than the static friction. If we flood the surface with some depth of viscous oil (our ballast), we can continue to push as hard as it took to get the weight moving (leave our voltage constant) while simultaneously not allowing the weight to move too fast (limit current). The transition between static friction (high resistance) and dynamic friction as the weight moves (low resistance) is the negative resistance region: less force (a negative ΔV in the analogy) creates more movement (a positive ΔI).
Ballasts can also be used simply to limit the current in an ordinary, positive-resistance circuit. Prior to the advent of solid-state ignition, automobile ignition systems commonly included a ballast resistor to regulate the voltage applied to the ignition system.
Series resistors are used as ballasts to control the current through LEDs.
For simple, low-powered loads such as a neon lamp or an LED, a fixed resistor is commonly used. Because the resistance of the ballast resistor is large, it determines the current in the circuit, even in the face of negative resistance introduced by the neon lamp.
The term also referred to a (now obsolete) automobile engine component that lowered the supply voltage to the ignition system after the engine had been started. Because cranking the engine causes a very heavy load on the battery, the system voltage can drop quite low during cranking. To allow the engine to start, the ignition system was designed to operate on this lower voltage. But once cranking is finished, the normal operating voltage would overload the ignition system. To avoid this problem, a ballast resistor was inserted in series with the ignition system.
Occasionally, this ballast resistor would fail and the classic symptom of this failure was that the engine ran while being cranked (while the resistor was bypassed) but stalled immediately when cranking ceased (and the resistor was reconnected in the circuit via the ignition switch). Modern electronic ignition systems (those used since the 1980s or late '70s) do not require a ballast resistor as they are flexible enough to operate on the lower cranking voltage or the normal operating voltage.
Another common use of a ballast resistor in the automotive industry is adjusting the ventilation fan speed. The ballast is a fixed resistor with usually two center taps, and the fan speed selector switch is used to bypass portions of the ballast: all of them for full speed, and none for the low speed setting. A very common failure occurs when the fan is being constantly run at the next-to-full speed setting (usually 3 out of 4). This will cause a very short piece of resistor coil to be operated with a relatively high current (up to 10 A), eventually burning it out. This will render the fan unable to run at the reduced speed settings.
In some consumer electronic equipment, notably in television sets in the era of valves (vacuum tubes), but also in some low-cost record players, the vacuum tube heaters were connected in series. Since the voltage drop across all the heaters in series was usually less than the full mains voltage, it was necessary to provide a ballast to drop the excess voltage. A resistor was often used for this purpose, as it was cheap and worked with both AC and DC.
Some ballast resistors have the property of increasing in resistance as current through them increases, and decreasing in resistance as current decreases. Physically, some such devices are often built quite like incandescent lamps. Like the tungsten filament of an ordinary incandescent lamp, if current increases, the ballast resistor gets hotter, its resistance goes up, and its voltage drop increases. If current decreases, the ballast resistor gets colder, its resistance drops, and the voltage drop decreases. Therefore the ballast resistor reduces variations in current, despite variations in applied voltage or changes in the rest of an electric circuit. These devices are sometimes called "barretters" and were used in the series heating circuits of 1930s to 1960s AC/DC radio and TV home receivers.
This property can lead to more precise current control than merely choosing an appropriate fixed resistor. The power lost in the resistive ballast is also reduced because a smaller portion of the overall power is dropped in the ballast compared to what might be required with a fixed resistor.
Earlier household clothes dryers sometimes incorporated a germicidal lamp in series with an ordinary incandescent lamp, the incandescent lamp operating as the ballast for the germicidal lamp. A common household light in 220–240 V countries in the 1960s was a circline tube ballasted by an under-run regular mains filament lamp. Self-ballasted mercury-vapor lamps incorporate an ordinary tungsten filament within the overall envelope of the lamp to act as the ballast; the filament also supplements the otherwise lacking red area of the light spectrum produced.
Because of the power that would be lost, resistors are not used as ballasts for lamps of more than about two watts. Instead, a reactance is used. Losses in the ballast due to its resistance and losses in its magnetic core may be significant, on the order of 5 to 25% of the lamp input electric power. Practical lighting design calculations must allow for ballast loss in estimating the running cost of a lighting installation.
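The ballast loss mentioned above feeds directly into running-cost estimates: the input power drawn from the mains is the lamp power plus the ballast loss. A quick sketch with assumed figures (the 20 % loss is within the 5–25 % range quoted; the operating hours and tariff are illustrative):

```python
def annual_energy_cost(lamp_watts, ballast_loss_fraction, hours_per_year, price_per_kwh):
    """Running cost including ballast loss (input power = lamp power + loss).
    All numeric arguments are supplied by the caller; nothing here is a
    standard figure."""
    input_watts = lamp_watts * (1 + ballast_loss_fraction)
    kwh = input_watts * hours_per_year / 1000.0
    return input_watts, kwh * price_per_kwh

# Assumed example: 40 W tube, 20 % ballast loss, 3000 h/year at 0.15 per kWh
watts, cost = annual_energy_cost(40, 0.20, 3000, 0.15)
```

For these assumed numbers the system draws 48 W rather than the nominal 40 W, so ignoring ballast loss would understate the annual energy cost by a fifth.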
An inductor is very common in line-frequency ballasts to provide the proper starting and operating electrical condition to power a fluorescent lamp, neon lamp, or high intensity discharge (HID) lamp. (Because of the use of the inductor, such ballasts are usually called magnetic ballasts.) The inductor has two benefits:
- Its reactance limits the power available to the lamp with only minimal power losses in the inductor
- The voltage spike produced when current through the inductor is rapidly interrupted is used in some circuits to first strike the arc in the lamp.
A disadvantage of the inductor is that current is shifted out of phase with the voltage, producing a poor power factor. In more expensive ballasts, a capacitor is often paired with the inductor to correct the power factor. In ballasts that control two or more lamps, line-frequency ballasts commonly use different phase relationships between the multiple lamps. This not only mitigates the flicker of the individual lamps, it also helps maintain a high power factor. These ballasts are often called lead-lag ballasts because the current in one lamp leads the mains phase and the current in the other lamp lags the mains phase.
In Europe, and most 220-240 V territories, the line voltage is sufficient to start lamps over 20W with a series inductor. In North America and Japan however, the line voltage (120 V or 100 V respectively) may not be sufficient to start lamps over 20 W with a series inductor, so an autotransformer winding is included in the ballast to step up the voltage. The autotransformer is designed with enough leakage inductance (short-circuit inductance) so that the current is appropriately limited.
Because of the large inductors and capacitors that must be used, reactive ballasts operated at line frequency tend to be large and heavy. They commonly also produce acoustic noise (line-frequency hum).
Electronic and magnetic ballasts
An electronic ballast uses solid state electronic circuitry to provide the proper starting and operating electrical conditions to power discharge lamps. An electronic ballast can be smaller and lighter than a comparably-rated magnetic one. An electronic ballast is usually quieter than a magnetic one, which produces a line-frequency hum by vibration of the transformer laminations.
Electronic ballasts are often based on the SMPS topology, first rectifying the input power and then chopping it at a high frequency. Advanced electronic ballasts may allow dimming via pulse-width modulation or via changing the frequency to a higher value. Ballasts incorporating a microcontroller (digital ballasts) may offer remote control and monitoring via networks such as LonWorks, DALI, DMX512, DSI or simple analog control using a 0-10 V DC brightness control signal. Systems with remote control of light level via a wireless mesh network have been introduced.
Electronic ballasts usually supply power to the lamp at a frequency of 20,000 Hz or higher, rather than the mains frequency of 50 – 60 Hz; this substantially eliminates the stroboscopic effect of flicker, a product of the line frequency associated with fluorescent lighting (see photosensitive epilepsy). The high output frequency of an electronic ballast refreshes the phosphors in a fluorescent lamp so rapidly that there is no perceptible flicker. The flicker index, used for measuring perceptible light modulation, has a range from 0.00 to 1.00, with 0 indicating the lowest possibility of flickering and 1 indicating the highest. Lamps operated on magnetic ballasts have a flicker index between 0.04–0.07 while digital ballasts have a flicker index of below 0.01.
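The flicker index quoted above is defined as the area of the light-output waveform lying above its average, divided by the total area under the waveform over one cycle. Computing it numerically for a sinusoidally modulated output L(t) = 1 + m·cos(2πt/T) reproduces the quoted ranges; the modulation depths 0.2 and 0.03 below are illustrative choices, not measured values:

```python
import math

def flicker_index(modulation_depth, samples=100000):
    """Flicker index of L(t) = 1 + m*cos(2*pi*t/T) over one cycle:
    (area above the mean) / (total area under the curve)."""
    m = modulation_depth
    vals = [1 + m * math.cos(2 * math.pi * k / samples) for k in range(samples)]
    mean = sum(vals) / samples
    area_above = sum(v - mean for v in vals if v > mean)
    return area_above / sum(vals)

fi_magnetic = flicker_index(0.20)    # lands in the 0.04-0.07 range quoted
fi_electronic = flicker_index(0.03)  # below 0.01
```

Analytically the index for this waveform is m/π, so a 20 % modulation depth gives about 0.064 (matching the 0.04–0.07 range for magnetic ballasts) and a 3 % depth gives about 0.0095 (below the 0.01 figure for digital ballasts).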
Because more gas remains ionized in the arc stream, the lamp operates at about 9% higher efficacy above approximately 10 kHz. Lamp efficiency increases sharply at about 10 kHz and continues to improve until approximately 20 kHz. Trials are ongoing in some Canadian provinces to assess cost savings potential of digital ballast retrofits to existing street lights.
With the higher efficiency of the ballast itself and the higher lamp efficacy at higher frequency, electronic ballasts offer higher system efficacy for low pressure lamps like the fluorescent lamp. For HID lamps, there is no improvement of the lamp efficacy in using higher frequency, but for these lamps the ballast losses are lower at higher frequencies and also the light depreciation is lower, meaning the lamp produces more light over its entire lifespan. Some HID lamp types like the ceramic discharge metal halide lamp have reduced reliability when operated at high frequencies in the range of 20 – 200 kHz; for these lamps a square wave low frequency current drive is mostly used with frequency in the range of 100 – 400 Hz, with the same advantage of lower light depreciation.
Application of electronic ballasts is growing in popularity. Most newer-generation electronic ballasts can operate both high-pressure sodium (HPS) lamps and metal-halide lamps, reducing costs for building managers who use both types. The ballast first acts as a starter for the arc, supplying a high-voltage pulse, and then as a regulator that limits the current through the circuit. Electronic ballasts (digital ballasts) also run much cooler and are lighter than their magnetic counterparts.
Fluorescent lamp ballasts
This technique uses a combination filament–cathode at each end of the lamp in conjunction with a mechanical or automatic (bi-metallic or electronic) switch that initially connects the filaments in series with the ballast to preheat them. When the filaments are disconnected, an inductive pulse from the ballast starts the lamp. This system is described as "preheat" in North America and "switch start" in the UK, and has no specific name elsewhere. It is common in 200–240 V countries (and for 100–120 V lamps up to about 30 watts).
Although an inductive pulse makes it more likely that the lamp will start when the starter switch opens, it is not actually necessary; the ballast in such systems can equally be a resistor. A number of fluorescent lamp fittings used a filament lamp as the ballast from the late 1950s through the 1960s. Special lamps were manufactured, rated at 170 volts and 120 watts, with a thermal starter built into the 4-pin base. The power consumption was much larger than with an inductive ballast (though the current drawn was the same), but the warmer light from the filament-lamp ballast was often preferred by users, particularly in a domestic environment.
Resistive ballasts were the only type usable when the only supply available to power the fluorescent lamp was DC. Such fittings used the thermal type of starter (mostly because they had gone out of use long before the glow starter was invented), though a choke could be included in the circuit whose sole purpose was to provide a pulse on opening of the starter switch to improve starting. DC fittings were complicated by the need to reverse the polarity of the supply to the tube each time it started; failure to do so vastly shortened the life of the tube.
An instant start ballast does not preheat the electrodes, instead using a relatively high voltage (~600 V) to initiate the discharge arc. It is the most energy efficient type, but yields the fewest lamp-start cycles, as material is blasted from the surface of the cold electrodes each time the lamp is turned on. Instant-start ballasts are best suited to applications with long duty cycles, where the lamps are not frequently turned on and off. Although these were mostly used in countries with 100-120 volt mains supplies (for lamps of 40 W or above), they were briefly popular in other countries because the lamp started without the flicker of switch start systems. The popularity was short lived because of the short lamp life.
A rapid start ballast applies voltage and heats the cathodes simultaneously. It provides superior lamp life and more cycle life, but uses slightly more energy as the electrodes in each end of the lamp continue to consume heating power as the lamp operates. Again, although popular in 100-120 volt countries for lamps of 40 W and above, rapid start is sometimes used in other countries particularly where the flicker of switch start systems is undesirable.
A dimmable ballast is very similar to a rapid start ballast, but usually has a capacitor incorporated to give a power factor nearer to unity than a standard rapid start ballast. A quadrac type light dimmer can be used with a dimming ballast, which maintains the heating current while allowing lamp current to be controlled. A resistor of about 10 kΩ is required to be connected in parallel with the fluorescent tube to allow reliable firing of the quadrac at low light levels.
A programmed-start ballast, used in high-end electronic fluorescent fixtures, applies power to the filaments first, allowing the cathodes to preheat, and then applies voltage to the lamps to strike an arc. Lamps typically last up to 100,000 on/off cycles when driven by programmed-start ballasts. Once started, the filament voltage is reduced to increase operating efficiency. This ballast gives the best life and the most starts from lamps, and so is preferred for applications with very frequent power cycling, such as vision examination rooms and restrooms with motion-detector switches.
A hybrid ballast has a magnetic core-and-coil transformer and an electronic switch for the electrode-heating circuit. Like a magnetic ballast, a hybrid unit operates at line power frequency—50 Hz in Europe, for example. These types of ballasts, which are also referred to as cathode-disconnect ballasts, disconnect the electrode-heating circuit after they start the lamps.
ANSI Ballast factor
For a lighting ballast, the ANSI ballast factor is used in North America to compare the light output (in lumens) of a lamp operated on a given ballast to that of the same lamp operated on an ANSI reference ballast. The reference ballast operates the lamp at its ANSI-specified nominal power rating. The ballast factor of practical ballasts must be considered in lighting design; a low ballast factor may save energy, but will produce less light. With fluorescent lamps, the ballast factor can vary considerably from the reference value of 1.0.
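In design calculations the ballast factor simply scales the lamp's rated lumens. A minimal sketch; the lamp rating and ballast factors below are illustrative assumptions, not catalog data:

```python
def delivered_lumens(rated_lumens, ballast_factor):
    """Light output of a lamp on a given ballast, per the ANSI definition:
    ballast factor = (lumens on this ballast) / (lumens on the reference ballast)."""
    return rated_lumens * ballast_factor

# Assumed 2850 lm lamp: a low-ballast-factor unit saves energy but delivers less light
normal = delivered_lumens(2850, 1.00)  # reference ballast
low = delivered_lumens(2850, 0.77)     # low-ballast-factor unit
```

A designer choosing the low-ballast-factor unit would need proportionally more fixtures to reach the same illuminance, which is why ballast factor and energy savings must be traded off together.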
- Compact fluorescent lamp (CFL)
- Fluorescent lamp
- High-intensity discharge lamp (HID)
- Iron-hydrogen resistor
- Mercury-vapor lamp
- Neon lamp
- Sodium lamp
- Sinclair, Ian Robertson (2001). Sensors and transducers, 3rd Ed. Newnes. pp. 69–70. ISBN 0750649321.
- Kularatna, Nihal (1998). Power Electronics Design Handbook. Newnes. pp. 232–233. ISBN 0750670738.
- Aluf, Ofer (2012). Optoisolation Circuits: Nonlinearity Applications in Engineering. World Scientific. pp. 8–11. ISBN 9814317004. This source uses the term "absolute negative differential resistance" to refer to active resistance
- "Understanding Transformer Noise" (PDF). federalpacific.com. Federal Pacific. Retrieved 8 August 2015.
- "infiNET dimmer datasheet" (PDF). Crestron Electronics, Inc. 9 March 2005. Retrieved 22 July 2013.
- Specifier Reports: Electronic Ballasts p.18, National Lighting Product Information Program, Volume 8 Number 1, May 2000. Retrieved 13 May 2013.
- IES Lighting Handbook 1984
- "What Are Fluorescent Ballast Starting Methods?". Bulbsdepot.com. 2014-02-14. Retrieved 2014-03-11.
- IEEE Std. 100 "Dictionary of IEEE Standards Terms, Standard 100" , ISBN 0-7381-2601-2, page 83
- ANSI standard C82.13-2002, "Definitions for Fluorescent Lamp Ballasts", page 1
- "Ballast factor". Lawrence Berkeley National Laboratory. Archived from the original on March 19, 2013. Retrieved April 12, 2013.
A spacecraft is a craft or machine designed for spaceflight. Spacecraft are used for a variety of purposes, including communications, earth observation, meteorology, navigation, planetary exploration and space tourism. Spacecraft and space travel are common themes in works of science fiction.
On a sub-orbital spaceflight, a spacecraft enters space and then returns to the surface, without having gone into an orbit. For orbital spaceflights, spacecraft enter closed orbits around the Earth or around other celestial bodies. Spacecraft used for human spaceflight carry people on board as crew or passengers, while those used for robotic space missions operate either autonomously or telerobotically. Robotic spacecraft used to support scientific research are space probes. Robotic spacecraft that remain in orbit around a planetary body are artificial satellites. Only a handful of interstellar probes, such as Pioneer 10 and 11, Voyager 1 and 2, and New Horizons, are currently on trajectories that leave our Solar System.
A spacecraft's bus comprises subsystems such as thermal control (TCS), propulsion, and structures; payloads are typically attached to the bus.
The first reusable spacecraft, the X-15, was air-launched on a suborbital trajectory on July 19, 1963. The first partially reusable orbital spacecraft, the Space Shuttle, was launched by the USA on the 20th anniversary of Yuri Gagarin's flight, on April 12, 1981. During the Shuttle era, six orbiters were built, all of which have flown in the atmosphere and five of which have flown in space. The Enterprise was used only for approach and landing tests, launching from the back of a Boeing 747 and gliding to deadstick landings at Edwards AFB, California. The first Space Shuttle to fly into space was the Columbia, followed by the Challenger, Discovery, Atlantis, and Endeavour. The Endeavour was built to replace the Challenger when it was lost in January 1986. The Columbia broke up during reentry in February 2003.
The first automatic partially reusable spacecraft was the Buran (Snowstorm), launched by the USSR on November 15, 1988, although it made only one flight. This spaceplane was designed for a crew and strongly resembled the U.S. Space Shuttle, although its drop-off boosters used liquid propellants and its main engines were located at the base of what would be the external tank in the American Shuttle. Lack of funding, complicated by the dissolution of the USSR, prevented any further flights of Buran. The Space Shuttle has since been modified to allow for autonomous re-entry via a control cable running from the control cabin to the mid-deck, which would allow automated deployment of the landing gear in the event an un-crewed re-entry were required after the orbiter had been abandoned at the ISS because of damage.
Per the Vision for Space Exploration, the Space Shuttle is due to be retired in 2010, mainly because of its old age and the program's high cost, which has reached over a billion dollars per flight. The Shuttle's human transport role is to be replaced by the partially reusable Crew Exploration Vehicle (CEV) no later than 2014. The Shuttle's heavy cargo transport role is to be replaced by expendable rockets such as the Evolved Expendable Launch Vehicle (EELV) or a Shuttle-Derived Launch Vehicle.
Scaled Composites' SpaceShipOne was a reusable suborbital spaceplane that carried pilots Mike Melvill and Brian Binnie on consecutive flights in 2004 to win the Ansari X Prize. The Spaceship Company will build its successor SpaceShipTwo. A fleet of SpaceShipTwos operated by Virgin Galactic should begin reusable private spaceflight carrying paying passengers in 2009.
A spacecraft is a vessel that can safely move people and cargo outside the Earth's atmosphere, through space to other planetary bodies, space stations, or orbits. Spacecraft which are launched from the surface of a planet are called launch vehicles and usually take off from launch pads at spaceports.
Most spacecraft today are propelled by rocket motors, which expel hot gases in the direction opposite to travel. Other forms of propulsion are used when appropriate. Spacecraft which do not need to escape from strong gravity may use ion thrusters or other more efficient methods.
Because of the very large amount of energy needed to leave the Earth's gravity, spacecraft are usually very expensive to build, launch, and operate. Plans for future spacecraft often focus on reducing these costs so more people can participate in space. But today, costs are still very high, and until recently most spacecraft were sponsored by wealthy national governments.
Most expensive of all is sending people into space, because of their needs for food, water, air, living space, safety, and control. People who take part in spaceflight have special names: Americans call them astronauts; Russians call them cosmonauts; the Chinese call them taikonauts.
Some of the most important spacecraft today are artificial satellites. Artificial satellites are smaller, unmanned spacecraft which are mostly sent to geostationary orbit to act as points to reflect radio signals from one part of Earth to another, or to watch events on Earth from a high point of view. Without such spacecraft, people on Earth would not be able to communicate as well with each other, accurately predict the weather, or confidently ensure the security of their countries.
UFOlogists have proposed that the moon, generally regarded as a natural satellite of Earth, is in fact a huge spaceship, a gigantic UFO, parked in orbit around the Earth by an advanced technological civilization. This proposal is known as the Spaceship Moon Theory, Artificial Moon Theory, or Alien Moon Theory.
According to proponents of the Spaceship Moon Theory in the UFO community, there is evidence to suggest that the moon was built by an alien civilization with science and technology much more developed than ours.
The Spaceship Moon Theory claims that the moon, as an alien UFO parked in orbit around the Earth, has a hollow inside. In other words, the moon is a hollowed-out artificial structure containing an underground base serving also as the interior of a gigantic UFO spaceship.
The startling hypothesis was first proposed in 1970 by two Russian scientists, members of the Soviet Academy of Science, Michael Vasin and Alexander Shcherbakov, in an article, “Is the Moon the Creation of Alien Intelligence?”
Vasin and Shcherbakov suggested that the moon was a natural space body converted into an artificial structure by alien engineers who melted the original solid core, deposited the molten lava on the lunar surface, and created an inner lunar space protected by an artificial shell below the outer shell we know as the lunar surface.
The alien race then placed their gigantic UFO in orbit around the Earth for reasons we can only speculate about.
Part of the evidence that Vasin and Shcherbakov presented for an inner shell made of strong, high-tensile engineering material was that lunar craters formed by the impact of large space rocks are generally shallower than expected, and that the bottoms of the craters tend to be flat or convex.
According to the theorists, this suggests that large meteors impacting on the lunar surface are unable to dig deeper because they hit an impenetrable inner shell made of a high-tensile material.
Vasin and Shcherbakov thus suggested that the moon consists of a natural rocky outer layer that is only about five miles thick and an inner shell that is up to 20 miles thick. Below the inner shell is a cavity that could contain an “atmosphere” to support alien life.
Other proponents of the hollowed-out Spaceship Moon Theory include Don Wilson, who published a book Our Mysterious Spaceship Moon in 1975, and George H. Leonard in his 1976 book, Someone Else is On the Moon.
One of the major reasons for believing the moon is hollow, according to Spaceship Moon theorists, is its calculated mean density of 3.3 g/cm³, compared with Earth's 5.5 g/cm³.
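The quoted mean density is easy to check from standard textbook figures for the Moon's mass and radius (the values below are conventional reference numbers, not taken from the article):

```python
import math

# Standard reference values (assumptions of this sketch):
MOON_MASS_KG = 7.342e22
MOON_RADIUS_M = 1.7374e6

volume_m3 = (4.0 / 3.0) * math.pi * MOON_RADIUS_M**3
density_g_cm3 = (MOON_MASS_KG / volume_m3) * 1e-3  # kg/m^3 -> g/cm^3
print(round(density_g_cm3, 2))  # ≈ 3.34, close to the 3.3 g/cm³ quoted above
```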
Proponents of the Spaceship Moon Theory claim that the unexpectedly low mean density of the moon caused NASA Scientist Gordon MacDonald to remark, “[T]he data require that the interior of the Moon is more like a hollow than a homogeneous sphere.”
MIT’s Sean C. Solomon is also quoted as having written that “the Lunar Orbiter experiments vastly improved our knowledge of the Moon’s gravitational field… indicating the frightening possibility that the Moon might be hollow.”
An incident that Spaceship Moon theorists often cite as proof that the moon is hollow occurred on November 20, 1969, when the Apollo 12 crew conducted the Passive Seismic Experiment by crashing the Apollo 12 Lunar Module into the Moon. The impact created an artificial moonquake that caused the moon to reverberate “like a gong” for nearly an hour.
A similar observation was made during the Apollo 13 mission, with instruments recording reverberations that lasted for more than three hours, compared with a few minutes for the Earth, even in large earthquakes.
According to a NASA document relating to the 1970 Apollo 13 mission, “Nothing comparable happens when objects strike Earth.”
“Back in November 1969, the Apollo 12 astronauts had sent their Lunar Module crashing into the Moon following their return to the command craft after the lunar landing mission. That Lunar Module struck with a force of one ton of TNT. The shock waves built up to a peak in eight minutes and continued for nearly an hour. The information from these two artificial moonquakes led to reconsideration of theories proposed about the lunar interior. Among puzzling features are the rapid build-up to the peak and the prolonged reverberations. Nothing comparable happens when objects strike Earth.”
Spaceship Moon theorists claim that as a natural satellite, the moon is an outrageous anomaly, being obviously too big to have been captured naturally in the Earth’s orbit, as Isaac Asimov acknowledged in his book Asimov on Astronomy, published in 1974.
The debate about the origin of the moon, supposedly a natural satellite of the Earth, continues to rage, and proponents of the Spaceship Moon theory point out that the seemingly intractable difficulties in accounting for the moon as a supposed natural satellite of the Earth point to its artificial, or more precisely, alien technological origin.
As far as UFOlogists are concerned, recent sightings of UFO fleets over the moon suggest there could be a massive underground hangar for spacecraft in the moon.
UFOlogists poring over Google Moon images claim regular discoveries of anomalies on the lunar surface that indicate that the moon has a hollow inside that serves as an alien base and interior of a giant UFO spaceship.
UFOlogists have also held that the reason why NASA has not returned to the moon since the Apollo missions is that aliens are there.
If you have been inside all winter and then go sit out in the sun on a bright spring day, it is very easy to get sunburned. Over the course of several hours, exposed skin turns bright red and becomes extremely painful when touched. The skin will often feel very warm as well.
When you get a sunburn, you're basically killing skin cells.
The outer layer of skin on your body is called the epidermis. The outermost cells of the epidermis -- the cells you see and feel on your arm, for example -- are dead. But just below the dead cells is a layer of living cells. These living cells continuously produce new dead cells to replenish your skin.
By sitting in the sun, you expose yourself to ultraviolet light, which has the ability to kill cells. Ultraviolet light hits the layer of living cells in the epidermis and starts damaging and killing them.
As your body senses the dead cells, two things happen:
- Your immune system comes in to clean up the mess. It increases blood flow in the affected areas, opening up capillary walls so that white blood cells can come in and remove the damaged cells. The increased blood flow makes your skin warm and red.
- The nerve endings for pain begin sending signals to your brain. If you have read How Aspirin Works, you know that damaged cells release chemicals that activate pain receptors. This is why sunburned skin is so sensitive.
The ways to avoid sunburn (without having to stay inside) are to use a sunscreen, which blocks ultraviolet light, or pace yourself so you get a tan first. When you get a tan, your body essentially creates its own sunscreen using special pigment cells in the epidermis. See How Sunburns and Sun Tans Work for details.
1973 – Women’s movements win legalization of abortion in USA
Many women seeking abortion in the USA prior to the 1960s were at great danger, as the procedure was not only illegal but often practiced by back-street abortionists using unsafe techniques. A small group of determined activists had been campaigning for abortion law reform for decades, but the emergence of the civil rights and women’s movements changed the climate, enabling the Roe vs Wade court case to eventually make abortion legal.
In 1964, the Association for the Study of Abortion (ASA) was founded, consisting mainly of doctors and other professionals calling for abortion law reform to allow women to access medically necessary abortions. ASA was joined by Planned Parenthood, which advocated for women’s reproductive rights. These two organizations gave the movement prestige and legitimacy, aiding the movement in the eventual court case that legalized abortion.
While these movements advocated reform, the National Association for the Repeal of Abortion Laws (NARAL) was the first national organization created solely to campaign for the legalization of abortions. From its inception, NARAL chose to engage in controversial direct actions, and by 1969 – with abortion now a central feminist issue – had gained much support from the women’s liberation movement.
NARAL organized regular demonstrations and events, and women’s groups across the US organized ‘speak-outs’, where women who had undergone abortions could share their experiences. NARAL also publicly advocated referral services in which women were provided with resources and referrals to abortionists, and grass-roots education initiatives to raise awareness of issues relating to sexuality, women’s reproductive rights, and women’s health.
In 1970, the Roe v. Wade case was filed in the US District Court of Texas on behalf of a woman under the alias Jane Roe, who wished to have an abortion. When the court ruled against Roe, her lawyers decided to take the case to the US Supreme Court. On January 22, 1973, the abortion repeal movement was victorious, with Roe v. Wade finally legalizing abortion in the USA (only India had ever legislated for this prior to 1973). The campaign succeeded in all of its goals, and inspired many women to assert their reproductive rights.
CLASSIFICATION OF HUMAN RIGHTS
Human rights can be classified in a number of different ways, and some rights may fall into more than one category. One of the most widely used classifications distinguishes two general categories: classic rights (civil and political rights) and social rights, which also include economic and cultural rights. Classic rights generally restrict the powers of the government in respect of actions affecting the individual and his or her autonomy (civil rights) and confer an opportunity upon people to contribute to the determination of laws and to participate in government (political rights). Social rights require governments to act in a positive, interventionist manner so as to create the necessary conditions for human life and development. Governments are expected to take active steps, out of social solidarity, toward promoting the well-being of all members of society. It is believed that everyone, as a member of society, has the right to social security and is entitled to realization of the economic, social and cultural rights (ESCR) indispensable for his or her dignity and the free development of his or her personality.
All human rights carry corresponding obligations that must be translated into concrete duties to guarantee these rights. For many years, traditional human rights discourse was dominated by the misperception that civil and political rights require only negative duties while economic, social and cultural rights require positive duties. In this view, the right to free speech is guaranteed when the state leaves people alone, whereas the state must take positive action to guarantee the right to health by building health clinics and providing immunization.
This positive versus negative dichotomy has been discredited recently in favor of the understanding that all human rights have both positive and negative components. It is a matter of common sense that civil and political rights, including free speech, require the positive outlay of state resources in terms of providing a functioning judicial system and educating people about their rights. Conversely, all ESCR have negative aspects; some states prevent people from freely exercising ESCR, for example by blocking food or medical supplies to disfavored groups or regions.
Most scholars and activists now agree that duties for all human rights -- civil and political as well as ESCR -- can be divided into several discrete categories based on the type of duties. Although there is some variation in these typologies, they converge along the following basic categories: the duties to respect, protect, and fulfill.
The duty to respect is the negative obligation. It requires responsible parties to refrain from acting in a way that deprives people of the guaranteed right. Regarding the right to health, for example, a government may not deprive certain communities of access to health care facilities. The duty to protect is the obligation concerning third parties. It requires responsible parties to ensure that third parties do not deprive people of the guaranteed right. For example, a government must pass and enforce laws prohibiting private companies from releasing hazardous chemicals that impair public health. The duty to fulfill is the positive obligation. It requires responsible parties to establish political, economic, and social systems that provide access to the guaranteed right for all members of society. For example, a government must provide essential health services such as accessible primary care and clean water.
A resistor ladder is an electrical circuit made from repeating units of resistors. Two configurations are discussed below, a string resistor ladder and an R–2R ladder.
An R–2R Ladder is a simple and inexpensive way to perform digital-to-analog conversion, using repetitive arrangements of precise resistor networks in a ladder-like configuration. A string resistor ladder implements the non-repetitive reference network.
String resistor ladder network (analog to digital conversion, or ADC)
A string of many, often equally dimensioned, resistors connected between two reference voltages forms a resistor string ladder network. The resistors act as voltage dividers between the reference voltages. Each tap of the string produces a different voltage, which can be compared with another voltage: this is the basic principle of a flash ADC (analog-to-digital converter). Often a voltage is converted to a current, making it possible to use an R–2R ladder network instead.
- Disadvantage: for an n-bit ADC the number of resistors grows exponentially, since 2^n resistors are required, while the R–2R resistor ladder grows only linearly with the number of bits, since it needs only 2n resistors.
- Advantage: higher impedance values can be reached using the same number of components.
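The component-count comparison above can be made concrete (assuming the usual convention that an n-bit flash converter needs a 2^n-resistor string, while an R–2R ladder needs one R and one 2R per bit):

```python
def flash_adc_resistors(n_bits: int) -> int:
    # a resistor string with 2^n taps needs 2^n resistors
    return 2 ** n_bits

def r2r_resistors(n_bits: int) -> int:
    # one "R" and one "2R" per bit
    return 2 * n_bits

# bits, string-ladder count, R-2R count
for n in (4, 8, 12):
    print(n, flash_adc_resistors(n), r2r_resistors(n))
```

The exponential gap is why flash-style resistor strings are limited to low resolutions, while R–2R ladders remain practical at higher bit counts.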
R–2R resistor ladder network (digital to analog conversion)
A basic R–2R resistor ladder network is shown in Figure 1. Bit aₙ₋₁ (most significant bit, MSB) through bit a₀ (least significant bit, LSB) are driven from digital logic gates. Ideally, the bit inputs are switched between V = 0 (logic 0) and V = Vref (logic 1). The R–2R network causes these digital bits to be weighted in their contribution to the output voltage Vout. Depending on which bits are set to 1 and which to 0, the output voltage (Vout) will have a corresponding stepped value between 0 and Vref minus the value of the minimal step, corresponding to bit 0. The actual value of Vref (and the voltage of logic 0) will depend on the type of technology used to generate the digital signals.
For a digital value VAL of an R–2R DAC with N bits and 0 V/Vref logic levels, the output voltage Vout is:
- Vout = Vref × VAL / 2^N.
For example, if N = 5 (hence 2^N = 32) and Vref = 3.3 V (typical CMOS logic 1 voltage), then Vout will vary between 0 volts (VAL = 0 = 00000₂) and the maximum (VAL = 31 = 11111₂):
- max Vout = 3.3 × 31 / 2^5 = 3.196875 volts
with steps (corresponding to VAL = 1 = 00001₂)
- ΔVout = 3.3 × 1 / 32 = 0.103125 volts.
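The formula and the worked example above can be sketched in a few lines of Python:

```python
def r2r_vout(val: int, n_bits: int, vref: float) -> float:
    """Ideal R-2R DAC output: Vout = Vref * VAL / 2^N."""
    if not 0 <= val < 2 ** n_bits:
        raise ValueError("VAL out of range for this bit width")
    return vref * val / 2 ** n_bits

# Reproducing the 5-bit, Vref = 3.3 V example above:
print(r2r_vout(31, 5, 3.3))  # maximum output, ≈ 3.196875 V
print(r2r_vout(1, 5, 3.3))   # one step, ≈ 0.103125 V
```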
The R–2R ladder is inexpensive and relatively easy to manufacture, since only two resistor values are required (or even one, if R is made by placing a pair of 2R in parallel, or if 2R is made by placing a pair of R in series). It is fast and has fixed output impedance R. The R–2R ladder operates as a string of current dividers, whose output accuracy is solely dependent on how well each resistor is matched to the others. Small inaccuracies in the MSB resistors can entirely overwhelm the contribution of the LSB resistors. This may result in non-monotonic behavior at major crossings, such as from 01111₂ to 10000₂. Depending on the type of logic gates used and the design of the logic circuits, there may be transitional voltage spikes at such major crossings even with perfect resistor values. These can be filtered with capacitance at the output node (the consequent reduction in bandwidth may be significant in some applications). Finally, the 2R resistance is in series with the digital-output impedance. High-output-impedance gates (e.g., LVDS) may be unsuitable in some cases. For all of the above reasons (and doubtless others), this type of DAC tends to be restricted to a relatively small number of bits; although integrated circuits may push the number of bits to 14 or even more, 8 bits or fewer is more typical.
Accuracy of R–2R resistor ladders
Resistors used with the more significant bits must be proportionally more accurate than those used with the less significant bits; for example, in the R–2R network discussed above, inaccuracies in the bit-4 (MSB) resistors must be insignificant compared to R/32 (i.e., much better than 3%). Further, to avoid problems at the 10000₂-to-01111₂ transition, the sum of the inaccuracies in the lower bits must be significantly less than R/32. The required accuracy doubles with each additional bit: for 8 bits, the accuracy required will be better than 1/256 (0.4%). Within integrated circuits, high-accuracy R–2R networks may be printed directly onto a single substrate using thin-film technology, ensuring the resistors share similar electrical characteristics. Even so, they must often be laser-trimmed to achieve the required precision. Such on-chip resistor ladders for digital-to-analog converters achieving 16-bit accuracy have been demonstrated. On a printed circuit board, using discrete components, resistors of 1% accuracy would suffice for a 5-bit circuit; with higher bit counts, however, the cost of ever more precise resistors becomes prohibitive. For a 10-bit converter, even using 0.1% precision resistors would not guarantee monotonicity of output. That said, high-resolution R–2R ladders formed from discrete components are sometimes used, with the nonlinearity corrected in software. One example of this approach can be seen in the Korad 3005 power supply.
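The doubling rule of thumb above can be tabulated (a sketch assuming the simple "better than 1 part in 2^n" criterion stated in the text):

```python
def required_tolerance(n_bits: int) -> float:
    """Rough matching tolerance needed for an n-bit R-2R ladder,
    following the text's rule of thumb: better than 1 part in 2^n."""
    return 1.0 / 2 ** n_bits

for n in (5, 8, 10):
    print(f"{n} bits: matching better than {required_tolerance(n):.3%}")
```

The printed figures line up with the text: about 3% at 5 bits, about 0.4% at 8 bits, and under 0.1% at 10 bits, which is why 0.1% discrete resistors cannot guarantee 10-bit monotonicity.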
Resistor ladder with unequal rungs
It is not necessary that each "rung" of the R–2R ladder use the same resistor values. It is only necessary that the "2R" value matches the sum of the "R" value plus the Thévenin-equivalent resistance of the lower-significance rungs. Figure 2 shows a linear 4-bit DAC with unequal resistors.
This allows a reasonably accurate DAC to be created from a heterogeneous collection of resistors by forming the DAC one bit at a time. At each stage, resistors for the "rung" and "leg" are chosen so that the rung value matches the leg value plus the equivalent resistance of the previous rungs. The rung and leg resistors can be formed by pairing other resistors in series or parallel in order to increase the number of available combinations. This process can be automated.
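The matching condition described above can be sketched as follows. Since each leg must equal its rung plus the Thévenin resistance of the lower bits, the Thévenin resistance seen after any stage works out to half that stage's leg value (the resistor values below are illustrative):

```python
def rung_values(legs):
    """legs[i] is the chosen 'leg' (2R-position) resistor of bit i, LSB first,
    in ohms. Returns (terminator, rungs) that keep the ladder linear: each leg
    must equal its rung plus the Thevenin resistance of the lower bits, and the
    Thevenin resistance seen after stage i is then legs[i] / 2."""
    terminator = legs[0]       # parallels the LSB leg, giving legs[0] / 2
    thevenin = legs[0] / 2
    rungs = []
    for leg in legs[1:]:
        rung = leg - thevenin  # enforce leg = rung + thevenin
        if rung <= 0:
            raise ValueError("leg too small for the accumulated ladder")
        rungs.append(rung)
        thevenin = leg / 2     # leg || (rung + thevenin) = leg || leg = leg / 2
    return terminator, rungs

# A uniform choice reproduces the classic ladder: legs of 2R give rungs of R.
print(rung_values([2000, 2000, 2000, 2000]))  # (2000, [1000.0, 1000.0, 1000.0])
```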
The stag beetle (Lucanus cervus) is one of the biggest insects in the Iberian fauna. The male, with its enlarged mandibles, is particularly spectacular-looking. There are more than 40 species throughout Europe, reaching as far as western Russia.
Its scientific name is Scarabaeus laticollis. There are related groups such as Scarabaeus, Sisyphus and Phanaeus vindex. It is an insect that feeds on feces. The dung beetle is medium-sized, with strong, hairy legs. Its antennae have nine segments that end in clubs. It can be black, blue, green, brownish, yellow or red, sometimes iridescent, metallic or with dark spots.
'Mosquito' is a generic term used for insects of several families in the order Diptera, more specifically in the suborder Nematocera.
Scorpions are arthropods of the class Arachnida, which also includes animals such as dust mites, spiders and ticks.
Butterflies belong to the order Lepidoptera (from Greek 'lepis' meaning 'scale' and 'pteron' meaning 'wing').
There are lots of types of wasps, with different habits and structural features. They can be roughly divided into social and solitary wasps.
Flies are insects of the order Diptera. Their body is divided into head, thorax and abdomen. They have a pair of fully developed wings and a pair of reduced hind wings (halteres) that help flies to balance during flight.
Drones first came to the world’s attention when they were used by the US military to carry out first reconnaissance and then airstrikes without the need for pilots to be put in harm’s way.
Now the drone is being deployed in another kind of battle – one to conserve our forests.
About 30% of the world is covered by forests, but that figure is falling at an alarming rate. Swathes half the size of England are lost every year. This rapid deforestation accounts for around 11% of human-caused greenhouse gas emissions.
Cheap, easy to use and able to operate in even the most inaccessible areas quickly and quietly, drones are proving to be the ideal conservation tool.
Whether operating as a silent eye in the sky on the lookout for illegal logging or hovering above the forest floor scattering seeds to promote new growth, the drone is now a key weapon in the fight to conserve one of earth’s most precious resources.
Drones help conservationists branch out
The drone’s ability to access difficult areas makes it ideal for conservation projects which are, by their nature, often remote and spread over large areas.
Illegal deforestation has often been difficult to prevent because vast areas of forest are nearly impossible to patrol effectively on foot.
Drones can cover large distances in a short time and can even be preprogrammed to take off, fly a reconnaissance mission and return on their own, meaning those on the ground can spend their time reviewing footage, pinpointing problems and dealing with them in a targeted way.
Drones can also monitor the health of forests – increasingly important as climate change brings changing environmental conditions.
The aircraft can count and measure trees of different species in two-thirds of the time it would take a human to do the same job.
This allows unprecedented understanding of the health of a forest and can help identify any problems as early as possible.
Deep-learning software holds the key
While the use of drones is vital in obtaining the necessary imagery, the real results are delivered by the data crunching that goes on behind the scenes.
Tata Consultancy Services’ (TCS) deep-learning neural network algorithm uses pictures taken by drones to identify the number, diameter and height of trees to an accuracy of around 95%.
TCS is currently in the process of training its algorithm to be able to tell the difference between different species of trees.
Tirwari Shashwat, TCS innovation evangelist, explains: “Our algorithm identifies the edges of each tree as seen from the air and based on those edges it works out what the diameter is, as well as where one tree stops and another one starts so we can count the trees.
“When the drone moves from point A to point B it has different angles to look at the trees, so we are also able to calculate the height of these trees.”
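As an illustration of the parallax principle Shashwat describes (this is not TCS's actual algorithm), a height can be recovered from two sightings of the same treetop taken a known baseline apart:

```python
import math

def tree_height_from_two_angles(baseline_m, angle1_deg, angle2_deg):
    """Illustrative parallax estimate of treetop height above the viewpoints.

    The two viewpoints lie a known baseline apart on a line toward the tree;
    angle1 is the elevation angle to the treetop from the farther point and
    angle2 from the nearer point. Then
        h = b * tan(a1) * tan(a2) / (tan(a2) - tan(a1)).
    """
    t1 = math.tan(math.radians(angle1_deg))
    t2 = math.tan(math.radians(angle2_deg))
    return baseline_m * t1 * t2 / (t2 - t1)

# Moving 10 m toward a tree raises the treetop angle from 30° to 45°:
print(round(tree_height_from_two_angles(10, 30, 45), 2))  # ≈ 13.66 m
```

A drone gets the same geometry for free by photographing the tree from two known positions along its flight path.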
While US military drones carry payloads of missiles worth millions of dollars, their conservation counterparts are being used to carry simple tree seeds.
One company has designed a drone that quickly and efficiently plants trees by using a tiny cannon to shoot pods containing germinated seeds into the ground.
This should allow rapid replanting of areas which have been logged or damaged by fire and which might otherwise take generations to regrow.
The drone not only makes this a cheap and quick process, it allows it to be carried out in areas which would simply be impractical for humans to get to.
The many uses of drone technology
It’s not just trees that are benefitting from drone technology.
It has been adopted by conservation groups to monitor and protect endangered species, such as the one-horned rhino.
Other environmental uses include monitoring huge new solar power farms, measuring air quality and watching for changes in temperature over the polar ice caps.
Drones are also being deployed to create a more connected planet, bringing wireless internet access to remote communities in developing countries.
The evolution of the drone from a machine that causes destruction to one that builds better societies is now almost complete.
While studying echolocation, the author discovers a new and abundant species of pipistrelle...
Natural selection is a powerful force, shaping the appearance and behavior of animals in ways that allow efficient survival and reproduction. Because bats are nocturnal, many do not use visual signals to communicate, so natural selection does not always promote visual differences among them. For bats that echolocate, however, acoustic communication is of paramount importance, and natural selection can favor unique calls rather than visual differences. This is apparently what has happened in Europe’s most common bat, the pipistrelle. We now know that bats formerly known as Pipistrellus pipistrellus comprise two cryptic species (species that are difficult to distinguish visually). However, we can readily identify them by using a bat detector. They also use different “social calls” to communicate with one another.
Pipistrelles are widely distributed over Europe, and sometimes form maternity roosts that include more than 1,000 individuals. I began to suspect that there was more than meets the eye with pipistrelles in the early 1990s, when I was researching the ecology of bat echolocation at the University of Bristol. Scientists have long appreciated the fact that the echolocation calls of pipistrelles are highly variable. The calls are short (typically 5-7 thousandths of a second [ms]). They begin with a rapid downward frequency sweep, and end with a “tail” that is almost of constant frequency. This tail is used to detect flying insects, and is the part of the call that contains the most energy. To our ears, it is the loudest part of the call. Over much of Europe, pipistrelle calls tend to have constant frequency tails of either 45 kilohertz (kHz) or 55 kHz. This split between the two frequencies was first described by Swiss biologist Peter Zingg, who believed that pipistrelles simply used different calls in different habitats.
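The two clusters of constant-frequency tails lend themselves to a trivial classifier (a sketch with an assumed 50 kHz decision threshold; the frequencies below are made up for illustration):

```python
def phonic_type(tail_khz: float) -> str:
    """Assign a pipistrelle call to a phonic type by the constant-frequency
    tail of its echolocation call, splitting at the 50 kHz midpoint between
    the two clusters described above (an assumed, illustrative threshold)."""
    return "55 kHz" if tail_khz >= 50.0 else "45 kHz"

# Tail frequencies (kHz) recorded from bats leaving one hypothetical roost:
calls = [44.1, 45.8, 46.0, 44.9]
types = {phonic_type(f) for f in calls}
print(types)            # all one phonic type, as observed at real roosts
print(len(types) == 1)  # True: no mixing of phonic types within a roost
```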
I personally believed that the distinction in call frequency was too sharp to be caused by habitat differences. In 1987 I recorded the calls from many bats exiting a maternity roost in western Wales. All the bats called with constant frequency tails close to 55 kHz, despite flying into a wide variety of habitats. This discovery prompted me to record bats from other maternity roosts. In 1992, I was joined in this venture by Sofie van Parijs, an undergraduate from Cambridge University who has since moved on to research the vocal behavior of seals. In every case, all bats exiting from a given roost emitted calls of either 45 kHz or 55 kHz. There was no mixing of the two “phonic types” within a roost. When I recorded pipistrelles of both types in a standard habitat, they still called at the frequency used by their roost mates. I was therefore able to conclusively reject the theory that differences in call frequency were due to differences in habitat use.
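The roost-by-roost sorting described above amounts to a simple frequency classifier: every bat is assigned to a phonic type by the frequency of its constant-frequency tail. A rough Python sketch of that idea (the 3 kHz tolerance and the function name are my own illustrative choices, not values from the article):

```python
def classify_phonic_type(tail_khz, tolerance=3.0):
    """Assign a pipistrelle call to a phonic type by the frequency
    of its constant-frequency tail (in kHz)."""
    for center in (45.0, 55.0):
        if abs(tail_khz - center) <= tolerance:
            return f"{center:.0f} kHz"
    return "unclassified"

# Every bat exiting a given roost should fall into a single type:
roost_calls = [44.2, 45.8, 46.1, 43.9]
types = {classify_phonic_type(f) for f in roost_calls}
print(types)  # {'45 kHz'}
```

With real recordings, the consistent result — one phonic type per roost, with no mixing — is what ruled out the habitat explanation.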
I then recorded foraging pipistrelles in a wide range of locations throughout Europe. Over much of Britain, colonies using these two phonic types could be found in the same area. In other parts of Europe, the situation was more complex. Only 55 kHz bats were found in much of Scandinavia and in several Mediterranean regions. The 45 kHz bats appeared to be more abundant in France, and were the only phonic type that I recorded in Holland. Other biologists recorded both types in Switzerland. There was no clear geographic pattern to the distribution of the two types, except that 55 kHz bats appeared to be more widespread toward the edge of the pipistrelle’s range. Because the two types occurred together in several countries, they were clearly not geographical races or “subspecies.”
In the mid-1990s, Kate Barlow came to my lab to begin working on a Ph.D. on the ecological differences between the pipistrelle phonic types. (Kate is now researching the foraging behavior of penguins in the Antarctic!) It was soon apparent that the two pipistrelles filled quite different ecological niches. The 45 kHz bats mainly eat owl-midges, window-midges and dung flies, and bat detector surveys carried out by Nancy Vaughan at Bristol showed that they feed in a wide range of habitats. The 55 kHz bats eat mainly biting and non-biting midges, and are much more often associated with riparian (streamside) habitats. In southwestern England, bat detector surveys suggest that the two types are equally abundant, forming maternity roosts in modern buildings. The 55 kHz bats form larger maternity colonies, however, which sometimes contain more than 1,000 individuals. While a median roost size for 45 kHz maternity roosts was 76 bats before the young began to fly, it was 203 bats for 55 kHz roosts. The reasons the 55 kHz bats form larger colonies are not clear, but it could be associated with a greater dependence on unpredictable food resources and the colony’s need to exchange information about the location of food patches.
On a cool summer’s evening in Britain, people often become aware of foraging bats by hearing high-pitched squeaks. These squeaks are not echolocation calls, but social calls that pipistrelles use in communication. The squeaks usually contain the most energy between 16 and 23 kHz. Because many humans can hear sounds up to 18-20 kHz, the social calls of pipistrelles are often quite obvious.
Kate Barlow and I also studied social calls in pipistrelles. To determine the function of these calls, we recorded the rate of social calling while simultaneously measuring insect abundance with a suction trap. The rate of social calling increased when insects were more scarce. Why then do pipistrelles make more social calls when prey are scarce? Are they trying to call in others of the same species to help track down patches of insect prey in the same way that cliff swallows do during the day? Or, are they warning other bats to stay away from their own patch of precious insects? We determined the function of the calls by performing playback experiments in the field. Playback experiments with bats are tricky to conduct. First, because we were working in the dark, it was not easy to see how the animals responded. We therefore determined how many bats were in the area by recording their echolocation calls. Second, because the playbacks were largely ultrasonic, we were unable to hear them, and so had to check frequently to ensure that the equipment was working properly.
We already knew that the social calls of the 45 kHz and 55 kHz pipistrelles were subtly different. The social calls of 45 kHz pipistrelles usually have four components, and are slightly lower in frequency than the social calls of 55 kHz pipistrelles, which usually have three components. When Kate made playbacks of the social calls of 45 kHz bats, the activity of 45 kHz bats in the immediate area dropped significantly. The 55 kHz bats showed no such reduction in activity in response to playbacks of 45 kHz bat social calls, but they did reduce activity in response to playbacks of social calls from 55 kHz bats.
Thus, the social calls of pipistrelles clearly function in defending feeding patches. Because social calls cause other bats to leave the area, they are perhaps better termed “anti-social calls!” In addition, bats respond to the social calls of only their own phonic type. This suggests that competition between phonic types may be minimal, perhaps because of their differing diets.
In the fall, male bats repeat very similar social calls in a “song-flight.” The function of the calls now seems to change from one of repulsion to one of attraction. Males use the song-flight to defend roost sites to which females are attracted. The mating system of pipistrelles is termed “resource defense polygyny.” Resource defense refers to males defending roost sites; polygyny refers to males mating with several females.
In Europe, males defend roosts, which are often in manmade bat houses. Bat houses normally contain a single sexually active male and one to three females. In collaboration with Kirsty Park and John Altringham of the University of Leeds, we showed that these mating groups always contain bats of just one phonic type, suggesting that the phonic types are reproductively isolated. Because reproductive isolation is one of the criteria for species designation, it seemed likely that the phonic types were two distinct species.
In order to prove that the echolocating types were indeed distinct species, I joined forces with Elisabeth Barratt of the Institute of Zoology in London and Paul Racey of the University of Aberdeen. We sequenced the DNA and discovered that the two phonic types were genetically distinct, despite being strikingly similar in appearance. The genetic sequence differed by more than 11 percent, which is a greater divergence than is seen in many bat species that look very different.
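The simplest way to express such a percentage divergence between aligned sequences is an uncorrected "p-distance" — the fraction of sites that differ. A minimal sketch with toy sequences (these are not real pipistrelle data, and published figures typically use more sophisticated corrections):

```python
def percent_divergence(seq_a, seq_b):
    """Uncorrected p-distance between two aligned sequences,
    expressed as a percentage."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

# Toy aligned fragments, for illustration only:
a = "ACGTACGTAC"
b = "ACGTTCGTAA"
print(percent_divergence(a, b))  # 20.0
```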
Discovering that one of Europe’s most abundant bats is really two species has been one of the highlights of my academic career. I have suggested to the International Commission on Zoological Nomenclature that the 45 kHz phonic type remain the common pipistrelle, and the 55 kHz type become the soprano pipistrelle (Pipistrellus pygmaeus), since it was first discovered through its high-pitched voice.
Most new species that are described are obscure or rare. I’ve been privileged to discover a common one. I believe that many additional bat species will be described from acoustic characteristics in the future. In Malaysia, for example, Tigga Kingston recently found a previously undiscovered leaf-nosed bat (family Hipposideridae, not yet named) that is almost identical to the bicolored roundleaf bat (Hipposideros bicolor), but which calls at a frequency difference of almost 10 kHz. In conservation terms, these findings are important because they suggest that considerable biodiversity may be hidden. Many bat species probably remain undiscovered because of our biased sensory perception of the world. Because sound is more important than vision for bats, we need to pay more attention to their acoustic world. By doing so we will paint a clearer picture of the diversity of life on earth. |
Oxygen is the 8th element on the periodic table, and is the 3rd most abundant element in the universe. Although the amount of oxygen in the universe is dwarfed by hydrogen and helium, oxygen is the most common element on Earth, and it is a crucial component of life. Plants release oxygen in the process of photosynthesis, and humans and other animals take in oxygen when they breathe. Oxygen makes up 46% of the Earth's crust, with silicon making up 27%, so unsurprisingly the most abundant minerals present on Earth are a combination of the two, known as silicates.
Some properties of oxygen include:
- Density (at 0 °C): 1.429 g/L
- Boiling point: 90.2 K
- Melting point: 54.8 K
Oxygen is by far the most common element on Earth, making up around 21% of the atmosphere, 90% of the ocean and two thirds of the human body. However, free oxygen wasn't always present in the atmosphere, and around 2.5 billion years ago the only life-forms on Earth were very small bacteria that didn't require oxygen to live. These organisms were in fact poisoned by oxygen, so when a photosynthetic life-form called cyanobacteria began to flourish by converting sunlight into energy and producing oxygen, the other life-forms began to die off. This is known as the Great Oxygenation Event, and it made way for the life-forms of today. For more information visit our page on atmospheric oxygen.
Oxygen is found in many forms on Earth, most commonly bonded with hydrogen to form H2O, water, but is also a key part of other molecules important to life. Oxygen combines with almost every other element to form "oxides": For example it combines with silicon in the Earth's crust to form silicon dioxide found in granite and quartz, with iron to form iron oxide found in rust, and with calcium and carbon to form calcium carbonate found in limestone. Visit UC Davis' ChemWiki to learn more about how oxygen interacts with the other elements.
Aside from oxygen's role in the chemistry of life, a different form of oxygen is also vitally important on our planet. Ozone (O3) is found in the upper layers of the atmosphere, forming a thin layer around the Earth, known as the ozone layer. It is important because it absorbs most of the Sun's ultraviolet radiation that is harmful to life. Ozone is created when molecular oxygen (O2) is split apart by sunlight into single oxygen atoms, which then recombine to form ozone. Ozone is destroyed when it reacts with certain molecules, some of which are put into the atmosphere by humans. Human-caused destruction of ozone is contributing to the well-known "ozone hole," a serious environmental concern.
Oxygen is important for its role in the burning of fuels, which is a large focus of this encyclopedia. Fossil fuels such as methane go through the process of combustion, in which they react with oxygen to produce carbon dioxide, water vapour and heat, as shown in Figure 3. This process supplies the world with most of its primary energy - around 95%. This process is actually quite fascinating: dead plants and animals are what make the fossil fuels, live plants are what make the oxygen, and the result is energy for human needs.
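The combustion of methane follows the balanced equation CH4 + 2O2 → CO2 + 2H2O (plus heat). A quick atom-count check in Python confirms that the equation balances; the molecule definitions here are written out by hand for illustration:

```python
from collections import Counter

# Atom counts per molecule, for the reaction CH4 + 2 O2 -> CO2 + 2 H2O
CH4 = Counter(C=1, H=4)
O2  = Counter(O=2)
CO2 = Counter(C=1, O=2)
H2O = Counter(H=2, O=1)

def total(side):
    """Sum atom counts over (coefficient, molecule) pairs."""
    out = Counter()
    for coeff, mol in side:
        for atom, n in mol.items():
            out[atom] += coeff * n
    return out

reactants = total([(1, CH4), (2, O2)])
products  = total([(1, CO2), (2, H2O)])
print(reactants == products)  # True: same atoms on both sides
```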
The video below is from the University of Nottingham's periodic videos project. They have created a complete suite of short videos on every element on the periodic table of elements. This video talks about oxygen, but also discusses ozone. |
American Sign Language For Dummies Cheat Sheet
Successfully communicating with others in American Sign Language (ASL) starts with learning to sign the manual alphabet, numbers 1 through 10, important expressions, and important one-word questions. And because good communication also involves manners, learning some basic do’s and don’ts of Deaf etiquette is also helpful.
One-Word Questions in American Sign Language
Signing one-word questions in American Sign Language (ASL) is a way to initiate small talk, get to know people, and gather information. When you sign these one-word questions, look inquisitive; the facial expression will come naturally when you are genuinely interested. Also, tilt your head and lean forward a little as you sign the question.
ASL: Signing Essential Expressions
Practice signing these basic expressions in American Sign Language (ASL) to meet and greet people, join in on conversations, answer questions, and be polite and courteous.
ASL: Signing the Manual Alphabet
Learning the manual alphabet in American Sign Language (ASL) will help you when you don’t know a sign as you begin communicating. If you don’t know the sign for something, you need to use the manual alphabet to spell the word, or fingerspell. Check out and practice the manual alphabet:
Note: If you need to fingerspell a word that has two letters that are the same, make a small bounce between the letters or simply slide the repeated letter over slightly.
ASL: Signing Numbers 1 through 10
In American Sign Language (ASL), knowing how to sign the cardinal (counting) numbers helps you in everyday situations like banking and making appointments. Pay attention to the way your palm faces when you sign numbers. For 1 through 5, your palm should face yourself. For 6 through 9, your palm should face out toward the person who’s reading the sign.
Deaf Etiquette Do’s and Don’ts
As you become more confident in your ability to communicate through American Sign Language (ASL) and begin to meet Deaf acquaintances and form friendships, keep some simple etiquette do’s and don’ts in mind.
To get a Deaf person’s attention, tap him or her on the shoulder or flick the light switch.
Let a Deaf person know that you can hear and that you’re learning Sign.
If you’re at a Deaf social function, allow the Deaf friend you came with to introduce you to others.
Introduce yourself using your first and last name.
Converse about sports, the weather, politics, pop culture, or whatever else you’d discuss with your hearing friends.
Don’t barge into a Deaf person’s house because you think they can’t hear the doorbell.
Avoid ordering for a Deaf person in a restaurant, unless he or she asks you to do so.
Never try to correct a Deaf person’s signing or lecture them that they don’t sign the way your instructor does.
Don’t initiate a conversation about a Deaf person’s hearing loss. Asking such questions implies that you think of the person as broken or inferior. |
Fruits and Veggies and Kids!
We all know that fruits and vegetables are good for us, but how do you get your kids to eat them? Here are some tips for getting your kids to eat the fruits and veggies they need to stay healthy.
It starts with choosing the fruits or veggies for your family to eat:
- Let your child choose a fruit or vegetable that looks appealing at the grocery store.
- For fresh fruits and veggies, select them when they are in season – they taste better and are usually cheaper.
- If buying frozen or canned vegetables or fruit, choose those with low or no sodium and no added sugar.
- You might even want to grow some veggies at home – try planting a vegetable plant in a pot and let your kids take care of it with you. They will be very proud of what they’ve helped to grow.
Then, involve your kids in preparing the fruits and veggies:
- Involve your child in preparing meals so that he can become familiar with the foods.
- They can snap beans or break the florets off of broccoli.
- If you are putting veggies in a quesadilla or on a pizza, let your kids arrange them in fun patterns – smiley faces, butterflies or hearts, for example.
- Remember, vegetables such as carrots and corn may pose a choking hazard for children under 4 years of age. Grate carrots and remove strings from celery for younger children.
- Have a raw and cooked vegetable option so that your child can choose the one she likes best. Some children like the crunch in raw vegetables, while others like vegetables to be soft and mushy.
Make snack time healthy:
- Keep a bowl of fruit on the kitchen table for a quick, easy snack.
- Freeze fruits such as bananas or grapes for a frozen treat (but remember, raw fruits such as grapes may pose a choking hazard for children under 4 years of age; cut grapes in quarters for younger children).
- Always have freshly cut vegetable sticks in the refrigerator.
- Eat fruit and vegetables with your children – you are their best role model!
What if your child is picky?
- Be patient; children can be very picky. It may take as many as 10 to 15 tries with a new food before your child is willing to accept it.
- Think about color, smell and texture when introducing your child to a new food – he may enjoy raw crunchy broccoli but not cooked broccoli in casseroles, or soft canned peaches but not freshly sliced peaches.
- Add raisins, bananas and other fresh or dried fruits to hot or cold cereals.
- Be sneaky – add broccoli florets or julienne carrots to pasta or potato salad; add spinach, mushrooms or zucchini to spaghetti sauce; mash beans and add corn and carrots in chili; or shred zucchini and carrots into meat loaf or casseroles.
- Don’t give up – you are responsible for what your children eat; they are responsible for when and how much.
A few more things to keep in mind:
- Learn what counts as a cup of fruit or vegetables, for example: 1 small apple; 8 large strawberries, 12 baby carrots, or 1 large ear of corn are all about 1 cup.
- Fresh fruit is a better choice than juice – while whole fruit contains some natural sugars that make it taste sweet, it also has lots of vitamins, minerals and fiber – which makes it more filling and nutritious than a glass of fruit juice. |
Huntington's disease is an inherited neurological disorder that causes the progressive degeneration of nerve cells in the brain.
Huntington's disease has a severe impact on movement and cognitive and emotional abilities. Symptoms worsen over a period of 15 to 20 years and result in death. There is no cure for the condition so treatment aims to manage the symptoms.
Huntington's disease used to be called Huntington's chorea (from the Greek word for dance) because of the jerky involuntary movements which are a common symptom of the condition. Other symptoms include:
- Lack of coordination
- Muscle rigidity
- Problems speaking/swallowing
- Walking difficulties
- Mood swings
- Lack of emotion
- Memory difficulties
- Lack of concentration
- Lack of self-awareness in behaviour
The disease affects both men and women; although the faulty gene is present from birth, symptoms tend to develop between the ages of 30 and 50. Juvenile Huntington's disease occurs before the age of 20 and is usually more aggressive in its progress.
The disease is caused by an inherited defective gene. Huntington's disease is called an autosomal dominant disorder, which means that only one copy of the faulty gene from either parent is necessary to produce the disease. Therefore a child of a parent who carries the defective gene has a 50 per cent chance of inheriting the defective gene. The disease occurs in about 4 to 8 people in 100,000. |
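The 50 per cent figure follows from standard Punnett-square reasoning, assuming the affected parent carries one faulty copy and one normal copy (the most common case for a dominant disorder). A small Python sketch of that calculation:

```python
from itertools import product

# One affected parent carries a single faulty copy ("H") alongside a
# normal copy ("h"); the other parent has two normal copies.
affected_parent   = ["H", "h"]
unaffected_parent = ["h", "h"]

# Each child inherits one copy from each parent.
offspring = list(product(affected_parent, unaffected_parent))
affected = [pair for pair in offspring if "H" in pair]

# Because the disorder is dominant, one faulty copy is enough.
print(len(affected) / len(offspring))  # 0.5
```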
During the Middle Ages, knights used a coat of arms to identify themselves, which was especially useful in battle. In a society where few people could read and write, pictures were very important.
Classroom Activity: Create Your Coat of Arms
Traditional Colors: Black, Royal Purple, Emerald Green, Royal Blue or Sky Blue, Bright Red
Metals: Gold (yellow) and Silver (white)
The basic rule is "metal on color or color on metal, but not metal on metal or color on color." This means that the field (the background) on the shield can be either a metal or a color.
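The tincture rule above can be expressed as a tiny checker. A sketch in Python, using the colors and metals listed earlier (the function and set names are my own, not heraldic terminology):

```python
METALS = {"gold", "silver"}
COLORS = {"black", "royal purple", "emerald green",
          "royal blue", "sky blue", "bright red"}

def placement_allowed(charge, field):
    """Metal on color or color on metal is allowed,
    but never metal on metal or color on color."""
    return ((charge in METALS and field in COLORS) or
            (charge in COLORS and field in METALS))

print(placement_allowed("gold", "royal blue"))   # True: metal on color
print(placement_allowed("gold", "silver"))       # False: metal on metal
```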
Animals were frequently used as a main charge. They were not drawn to look three-dimensional, but were shown as if they were flat. The pictures were to represent the animal as a symbol: Lion, Bear, Boar, Eagle, Horse, Dragon, Griffin.
Use these medieval symbols and mythical creatures to create your own shield or family coat of arms and explain the reason for your choices. |
A key issue for the development of fusion energy to generate electricity is the ability to confine the superhot, charged plasma gas that fuels fusion reactions in magnetic devices called tokamaks. This gas is subject to instabilities that cause it to leak from the magnetic fields and halt fusion reactions.
Now a recently developed imaging technique can help researchers improve their control of instabilities. The new technique, developed by physicists at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL), the University of California-Davis and General Atomics in San Diego, provides new insight into how the instabilities respond to externally applied magnetic fields.
This technique, called Electron Cyclotron Emission Imaging (ECEI) and successfully tested on the DIII-D tokamak at General Atomics, uses an array of detectors to produce a 2D profile of fluctuating electron temperatures within the plasma. Standard methods for diagnosing plasma temperature have long relied on a single line of sight, providing only a 1D profile. Results of the ECEI technique, recently reported in the journal Plasma Physics and Controlled Fusion, could enable researchers to better model the response of confined plasma to external magnetic perturbations that are applied to improve plasma stability and fusion performance.
Explore further: A fast new method for measuring hard-to-diagnose 3-D plasmas in fusion facilities
More information: Tobias, B. et al. 2013. Boundary perturbations coupled to core 3/2 tearing modes on the DIII-D tokamak, Plasma Physics and Controlled Fusion. Article first published online: July 5, 2013. DOI:10.1088/0741-3335/55/9/095006 |
Extraordinarily so. The ear can detect a sound wave so small it moves the eardrum just one angstrom, 100 times less than the diameter of a hydrogen molecule. Murray Sachs, director of biomedical engineering, likes to say that if there were nothing between you and the airport, 10 miles away, and if there were no other sounds, nothing for sound to reflect from--then theoretically, you could hear a piece of chalk drop at the airport.
What does hearing do for us?
It helps humans communicate by hearing and understanding speech, other species by hearing its less elaborate cousin, vocalization. More generally, says Eric Young, director of the Johns Hopkins Center for Hearing Sciences, "it's our far sense. It notifies us of things we can't see but that may be important," be it a prowler or the baby whimpering. "Hearing does that by being extraordinarily sensitive, and also by being able to compute where a sound is in space."
What the nervous system gets is two streams of sound, one in the left ear and one in the right; it then calculates a sound's time of arrival at each ear, the difference revealing roughly where the sound is in space (within about 1° of a circle). ("Ha! The left ear got it sooner, so it's off to the left, about there.") Compared with vision, human hearing locates objects crudely. "But it's good enough," says Young, "that you can turn your eyes toward the object and try to find it."
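The time-of-arrival calculation can be approximated with a simplified plane-wave model, where the path difference between the ears is separation × sin(azimuth). The ear separation and speed of sound below are assumed typical values, not figures from this article:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
EAR_SEPARATION = 0.18    # m, a rough figure for an adult head

def interaural_time_difference(azimuth_deg):
    """Approximate arrival-time difference between the two ears for a
    distant source at the given angle off center (simplified model:
    path difference = separation * sin(azimuth))."""
    path_diff = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return path_diff / SPEED_OF_SOUND

# A source 90 degrees to one side gives the largest difference:
print(round(interaural_time_difference(90) * 1e6))  # ~525 microseconds
```

Differences of only tens of microseconds are enough for the brainstem to shift its estimate of where a sound sits in space.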
What good is earwax?
It does unpleasant things to insect intruders.
How does hearing work?
Mechanically, it's like a Swiss watch. Any engineer would be proud to have invented a device of such precision.
You can think of the system as a relay race, except that the baton keeps transforming into something else: Energy enters the ear (see diagram) in the form of a sound wave, to be converted at the eardrum into mechanical vibrations of the middle-ear bones (the ossicles, the smallest bones in the body). These mechanical vibrations become pressure waves in the fluid of the inner ear (the cochlea), and the waves bend bundles of the cilia (Latin for hairs) of what are called hair cells. Each time cilia bend, hair cells start electrical signals firing toward the brain.
Moreover, at the same time it's doing hey presto change-o, your ear mechanically boosts the signal by some 25 decibels in our best range of hearing.
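A 25-decibel boost is substantial in linear terms; the conversion follows directly from the definition of the decibel (these particular ratios are my own arithmetic, not figures from the article):

```python
gain_db = 25.0

# For sound pressure, dB = 20*log10(ratio); for power, dB = 10*log10(ratio).
pressure_ratio = 10 ** (gain_db / 20)
power_ratio    = 10 ** (gain_db / 10)

print(round(pressure_ratio, 1))  # 17.8 (roughly an 18-fold pressure gain)
print(round(power_ratio))        # 316 (roughly a 300-fold power gain)
```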
How does the brain manage to get all the subtleties of sound and speech out of vibrations alone?
The auditory system does a lot of work before the cortex gets involved, more than other senses. Smell sensations go directly from receptor to olfactory bulb, while signals for sight and touch make three stops before they reach the cortex. But in hearing, there are five waystations, nerve cells that Young calls "calculational centers, right in the brain stem."
The brainstem is the stemlike structure that connects the spinal column with the cerebral hemispheres, and its processing starts almost from scratch: the sound wave that enters your ear is inchoate. It might include bagpipe droning, trees rustling, air conditioner hissing, keyboard clicking, ambulance ululating, fax teeping, several people talking, and more. The nervous system must pick this jumble apart so you can tell one sound from another and pay attention to what matters--the person you're talking to, let's say.
Step one is pitch, which is handled by the hair cells in the cochlea. (Young people with normal hearing have about 15,000 in each ear.) The cells are arranged rather like a piano keyboard, on a long narrow membrane that spirals the length of the (spiral) cochlea, and each hair cell is sensitive to a particular frequency at a particular loudness. At one end of the membrane, hair cells react to high-pitched sounds, at the other to low ones, in between to in between.
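The piano-keyboard arrangement of hair cells is often modeled with Greenwood's place-frequency function. The constants below are the commonly quoted values for the human cochlea, and their use here is an assumption on my part rather than something drawn from this article:

```python
def greenwood_frequency(position):
    """Greenwood's place-frequency map for the human cochlea (assumed
    standard constants). `position` runs from 0 (apex, low frequencies)
    to 1 (base, high frequencies), as a fraction of membrane length."""
    return 165.4 * (10 ** (2.1 * position) - 0.88)

# The map spans roughly the 20 Hz - 20 kHz range of human hearing:
for x in (0.0, 0.5, 1.0):
    print(f"{x:.1f} -> {greenwood_frequency(x):,.0f} Hz")
```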
Then nuclei in the brainstem take over, to locate the source of the sounds in space (as discussed), and to sort all those hundreds of tones into units by timbre, families of resonance. Between the two distinctions, we all know, with seeming lack of effort, that one set of sounds represents a bagpipe, another set footsteps.
Auditory signals also get sharper, because the clever brain stem deletes a clutter of echoes, so they never reach awareness. As your friend's voice and piano playing bounce off the walls, fireplace, and ceiling, a processing center picks out the echoes as duplicates because they arrive a tad later. It deletes all but the original signal--a neat trick, given the complexity of the sound.
We do hear echoes like halloos at the Grand Canyon, of course. That's because they come at longer intervals, so the brain stem construes them as separate sounds and sends them on to conscious awareness.
New and unfamiliar sounds do not get deleted, however. On the contrary, they tend to attract our attention, as you may have noticed the first time you heard an icemaker dropping ice cubes into the bin. In such a case, the brain stem may even trigger the motor cortex, making you jump and look around--a startle reaction, which is reflexive; the conscious mind is not involved.
The brain stem also handles the first steps of understanding speech: it ascertains that a particular series of sounds is speech. Then it deletes all but the sounds that matter to meaning in the hearer's native language. Such sounds (for example oh and ah, puh and tuh) are called phonemes.
There are at least 60 phonemes, depending on what sounds you count (English uses 40-some), and by the time a child is 6 months old, its brain is already specialized for its own language. The classic example is English vs. Japanese. Native English speakers have a notoriously hard time learning Japanese, because the meaning of individual English words does not depend on rising and falling inflections.
Many Japanese, conversely, cannot distinguish between L and R, because R does not exist in their language. "Of course they can hear R," says Stewart Hulse, a psychologist in Arts & Sciences whose field is auditory processing. "If you test them--'Is this sound, ruh, the same as this sound, luh?'--they'll say no. They can hear it. But they can't hear R in spoken language, because their brain stem has thrown it out, before conscious awareness. It's almost impossible to hear these things."
By the time a sound arrives at the cortex, then, it has been analyzed for pitch, timbre, salience, and where it comes from, at a minimum.
What happens once the signals reach the cortex?
More processing. In general, the cortex is arranged in anatomical columns, literally stacks of cells that work together to store, decode, and process information (a discovery made in the somatosensory cortex by Hopkins's great neuroscientist emeritus, Vernon Mountcastle).
At the point where sound signals reach the auditory cortex, columns initially correspond to frequencies reported by the hair cells. A single tone may activate a large area of cortex, though, in ways that are only murkily understood. Suffice it to say that as complex patterns of firing develop, the rest of the cortex gets involved, comparing the patterns with stored templates to tell you, "Oh! that's just the refrigerator. Pay no attention."
Music is thought to be processed in the right hemisphere, language in the left, both in structures that evolved from the auditory cortex itself. Note that the auditory cortex reports to the language center, not the other way around.
If you sit quietly and catalog the sounds around you, you may be surprised at how very many signals are out there. Yet what you consciously hear depends on which sounds you pay attention to, if any. If you're reading, you may feel you hear nothing. If you're deep in conversation, you hear the other person's voice. But you won't be aware of the icemaker's clatter unless you've never heard it before; attention suppresses stimuli that are non- salient. Otherwise we would all go mad.
Context helps in the work of integrating signals, too. Next time you are listening to someone who mumbles or has a strong accent, notice how much it helps if you have some idea what the person is going to say, or at least what the topic is.
Why do so many elderly people have hearing problems? Is the auditory system especially fragile?
Actually, the ear protects itself well. The outer ear keeps the eardrum warm and out of harm's way, while the middle ear can dampen most sounds that are loud enough to hurt the all-important hair cells. And when hair cells do get overexercised, they tend to quit for a time. That's why the universe seems muted right after a loud concert.
Probably because of continued insults, however, hearing problems seem to be more widespread in the industrialized world than elsewhere. In the U.S., the major causes of hearing loss are thinning hair cells and sclerosis of the middle ear.
Loss of hair cells is permanent. ("You have all the hair cells you'll ever have at birth," says Young.) It mainly affects soft sounds and high frequencies, the range where women and children tend to speak.
To date, even the best hearing aid is "a very crude instrument," Young concedes. In boosting soft sounds, it also amplifies the too-loud and loud-enough. (No wonder people turn off their hearing aids.)
Nevertheless, Young strongly recommends that both children and adults get a hearing aid early rather than late, to keep auditory stimuli coming to that part of the brain. Otherwise, other sensory systems will take over the area. The cortex reorganizes, and later, a hearing aid may find no receptive tissue. As for any stigma, Young says that wearing a hearing aid is "just like wearing eyeglasses."
The other major cause of hearing loss is stiffening of the middle ear. The tissues become sclerotic, so the ossicles can't relay sound waves. Happily, surgery can cure this one.
Hulse says that perceived pitch also rises by half a tone in old age, a fact that is noticed mostly by that .01 percent of the population with "perfect" pitch--primarily professional musicians. Speech is fine, but music becomes agony, because everything is half a tone off-key. "To the trained ear, it's very disturbing, that's what they tell me," says Hulse.
So-called tone-deafness, by the way, may be a myth. "Some people can carry a tune better than others," says Young. But in his opinion, "if you were really tone-deaf, you couldn't understand speech. Sound would make no sense to you at all." You wouldn't even be able to separate one sound from another.
Which is worse, going deaf or blind?
Young people can adapt successfully to either, say gerontologists, but it's harder in old age. And of the two, for most people deafness is worse, because the deaf person tends to become socially isolated.
The deaf retiree whose greatest joy lies in reading and e-mail may do well enough, but most of the newly deaf miss normal social interaction. It's no fun when dinner parties turn into a lot of nodding and smiling and frustration. Delivering monologues is more comfortable, but also leaves you alone. Gestures only go so far, and people just won't write notes for the inconsequential chitchat of daily life. Life can get lonely.
Your students will have two different opportunities to practice the skill of deciding whether a group of words is a fragment or a complete sentence. Two different center activities are included, and all activity sheets come in black and white as well as in color. In Sentence Savers: Sentence Fragment or Complete Sentence?, students read each sentence and decide whether it is a fragment or a complete sentence, writing an F or S on the blank. (You can run off copies for students to complete individually, or laminate them or put them in plastic sleeves so they are reusable.) Four different sheets are included, with 10 questions on each page. The second activity has 24 sentences divided into fragments that students must match back together. I would run them off on different colors so they are easier to match.
Thank you for looking at my product. Be sure to check out my store for more activities. |
The testicle or testis is the male gonad in animals. Like the ovaries to which they are homologous, the testicles (testes) are components of both the reproductive system and the endocrine system. The primary functions of the testes are to produce sperm (spermatogenesis) and to produce androgens, primarily testosterone.
Both functions of the testicle are influenced by gonadotropic hormones produced by the anterior pituitary. Luteinizing hormone (LH) results in testosterone release. The presence of both testosterone and follicle-stimulating hormone (FSH) is needed to support spermatogenesis. It has also been shown in animal studies that if testes are exposed to either too high or too low levels of estrogens (such as estradiol; E2) spermatogenesis can be disrupted to such an extent that the animals become infertile.
Almost all healthy male vertebrates have two testicles. They are typically of similar size, although in sharks, that on the right side is usually larger, and in many bird and mammal species, the left may be the larger. The primitive jawless fish have only a single testis, located in the midline of the body, although even this forms from the fusion of paired structures in the embryo.
The testicles of a dromedary camel are 7–10 cm (2.8–3.9 in) long, 4.5 cm (1.8 in) deep and 5 cm (2.0 in) in width. The right testicle is often smaller than the left. The testicles of a male red fox attain their greatest weight in December–February. Spermatogenesis in male golden jackals occurs 10–12 days before the females enter estrus and, during this time, males' testicles triple in weight.
In mammals, the testes are often contained within an extension of the abdomen called the scrotum. In mammals with external testes it is most common for one testicle to hang lower than the other. While testicle size varies, an estimated 21.9% of men have the left testicle positioned higher, and 27.3% of men report equally positioned testicles. This is due to differences in the vascular anatomy on the right and left sides.
In healthy European adult humans, average testicular volume is 18 cm³ per testis, with normal size ranging from 12 cm³ to 30 cm³. The average testicle size after puberty measures up to around 2 inches long, 0.8 inches in breadth, and 1.2 inches in height (5 x 2 x 3 cm). Measurement in the living adult is done in two basic ways:
- comparing the testicle with ellipsoids of known sizes (orchidometer).
- measuring the length, depth and width with a ruler, a pair of calipers or ultrasound imaging.
The volume is then calculated using the formula for the volume of an ellipsoid: 4/3 π × (length/2) × (width/2) × (depth/2).
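As an illustration, the ellipsoid formula above can be checked against the average adult dimensions quoted earlier (roughly 5 × 2 × 3 cm); the function name and sample values here are just for demonstration.

```python
import math

def ellipsoid_volume(length_cm, width_cm, depth_cm):
    """Testicular volume via the ellipsoid formula:
    V = 4/3 * pi * (L/2) * (W/2) * (D/2)."""
    return (4.0 / 3.0) * math.pi * (length_cm / 2) * (width_cm / 2) * (depth_cm / 2)

# Average adult dimensions quoted above: about 5 x 2 x 3 cm
v = ellipsoid_volume(5, 2, 3)
print(f"{v:.1f} cm^3")  # ~15.7 cm^3, within the normal 12-30 cm^3 range
```

The result falls inside the normal range given above, though below the 18 cm³ average, since the quoted linear dimensions are themselves rounded.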
The size of the testicles is among the parameters of the Tanner scale for the maturity of male genitals, from stage I which represents a volume of less than 1.5 ml, to stage V which represents a testicular volume of greater than 20 ml.
Under a tough membranous shell, the tunica albuginea, the testis of amniotes and some teleost fish, contains very fine coiled tubes called seminiferous tubules. The tubules are lined with a layer of cells (germ cells) that from puberty into old age, develop into sperm cells (also known as spermatozoa or male gametes). The developing sperm travel through the seminiferous tubules to the rete testis located in the mediastinum testis, to the efferent ducts, and then to the epididymis where newly created sperm cells mature (see spermatogenesis). The sperm move into the vas deferens, and are eventually expelled through the urethra and out of the urethral orifice through muscular contractions.
Amphibians and most fish do not possess seminiferous tubules. Instead, the sperm are produced in spherical structures called sperm ampullae. These are seasonal structures, releasing their contents during the breeding season, and then being reabsorbed by the body. Before the next breeding season, new sperm ampullae begin to form and ripen. The ampullae are otherwise essentially identical to the seminiferous tubules in higher vertebrates, including the same range of cell types.
Primary cell types
- Within the seminiferous tubules
- Here, germ cells develop into spermatogonia, spermatocytes, spermatids and spermatozoa through the process of spermatogenesis. The gametes contain DNA for fertilization of an ovum.
- Sertoli cells – the true epithelium of the seminiferous epithelium, critical for the support of germ cell development into spermatozoa. Sertoli cells secrete inhibin.
- Peritubular myoid cells surround the seminiferous tubules.
- Between tubules (interstitial cells)
- Leydig cells – cells localized between seminiferous tubules that produce and secrete testosterone and other androgens important for sexual development and puberty, secondary sexual characteristics like facial hair, sexual behavior and libido, supporting spermatogenesis and erectile function. Testosterone also controls testicular volume.
- Also present are:
Blood supply and lymphatic drainage
Blood supply and lymphatic drainage of the testes and scrotum are distinct:
- The paired testicular arteries arise directly from the abdominal aorta and descend through the inguinal canal, while the scrotum and the rest of the external genitalia is supplied by the internal pudendal artery (itself a branch of the internal iliac artery).
- The testis has collateral blood supply from 1. the cremasteric artery (a branch of the inferior epigastric artery, which is a branch of the external iliac artery), and 2. the artery to the ductus deferens (a branch of the inferior vesical artery, which is a branch of the internal iliac artery). Therefore, if the testicular artery is ligated, e.g., during a Fowler-Stevens orchiopexy for a high undescended testis, the testis will usually survive on these other blood supplies.
- Lymphatic drainage of the testes follows the testicular arteries back to the paraaortic lymph nodes, while lymph from the scrotum drains to the inguinal lymph nodes.
Many anatomical features of the adult testis reflect its developmental origin in the abdomen. The layers of tissue enclosing each testicle are derived from the layers of the anterior abdominal wall. Notably, the cremasteric muscle arises from the internal oblique muscle.
The blood–testis barrier
Large molecules cannot pass from the blood into the lumen of a seminiferous tubule due to the presence of tight junctions between adjacent Sertoli cells. The spermatogonia are in the basal compartment (deep to the level of the tight junctions) and the more mature forms such as primary and secondary spermatocytes and spermatids are in the adluminal compartment.
The function of the blood–testis barrier may be to prevent an auto-immune reaction. Mature sperm (and their antigens) arise long after immune tolerance is established in infancy. Therefore, since sperm are antigenically different from self tissue, a male animal can react immunologically to his own sperm; in fact, he is capable of making antibodies against them.
Injection of sperm antigens causes inflammation of the testis (auto-immune orchitis) and reduced fertility. Thus, the blood–testis barrier may reduce the likelihood that sperm proteins will induce an immune response, reducing fertility and so progeny.
The testes work best at temperatures slightly below core body temperature. Spermatogenesis is less efficient at temperatures both above and below 33 °C. This is presumably why the testes are located outside the body. There are a number of mechanisms to maintain the testes at the optimum temperature.
The cremasteric muscle is part of the spermatic cord. When this muscle contracts, the cord is shortened and the testicle is moved closer up toward the body, which provides slightly more warmth to maintain optimal testicular temperature. When cooling is required, the cremasteric muscle relaxes and the testicle is lowered away from the warm body and is able to cool. It also occurs in response to stress (the testicles rise up toward the body in an effort to protect them in a fight). There are persistent reports that relaxation indicates approach of orgasm. There is a noticeable tendency to also retract during orgasm.
The cremaster muscle can reflexively raise each testicle individually if properly triggered. This phenomenon is known as the cremasteric reflex. The testicles can also be lifted voluntarily using the pubococcygeus muscle, which partially activates related muscles.
There are two phases in which the testes grow substantially; namely in embryonic and pubertal age.
During mammalian development, the gonads are at first capable of becoming either ovaries or testes. In humans, starting at about week 4 the gonadal rudiments are present within the intermediate mesoderm adjacent to the developing kidneys. At about week 6, sex cords develop within the forming testes. These are made up of early Sertoli cells that surround and nurture the germ cells that migrate into the gonads shortly before sex determination begins. In males, the sex-specific gene SRY that is found on the Y-chromosome initiates sex determination by downstream regulation of sex-determining factors, (such as GATA4, SOX9 and AMH), which leads to development of the male phenotype, including directing development of the early bipotential gonad down the male path of development.
Testes follow the "path of descent" from high in the posterior fetal abdomen to the inguinal ring and beyond to the inguinal canal and into the scrotum. In most cases (97% full-term, 70% preterm), both testes have descended by birth. In most of the remaining cases, only one testis fails to descend (cryptorchidism), and it will probably descend within a year.
The testes grow in response to the start of spermatogenesis. Size depends on lytic function, sperm production (amount of spermatogenesis present in testis), interstitial fluid, and Sertoli cell fluid production. After puberty, the volume of the testes can be increased by over 500% as compared to the pre-pubertal size. Testicles are fully descended before one reaches puberty.
The basal condition for mammals is to have internal testes. However, boreoeutherian land mammals, the large group of mammals that includes humans, have externalized testes. Their testes function best at temperatures lower than their core body temperature, so they are located outside the body, suspended by the spermatic cord within the scrotum. The testes of non-boreoeutherian mammals such as the monotremes, armadillos, sloths, and elephants remain within the abdomen. There are also some boreoeutherian mammals with internal testes, such as the rhinoceros.
Marine boreotherian mammals, such as whales and dolphins, also have internal testes. As external testes would increase drag in the water they have internal testes which are kept cool by special circulatory systems that cool the arterial blood going to the testes by placing the arteries near veins bringing cooled venous blood from the skin.
There are several hypotheses for why most boreotherian mammals have external testes that operate best at a temperature slightly below core body temperature: for example, that the lineage is stuck with enzymes that evolved at a colder temperature because external testes evolved for other reasons, or that the lower temperature of the testes is simply more efficient for sperm production.
1) More efficient. The classic hypothesis is that cooler temperature of the testes allows for more efficient fertile spermatogenesis. In other words, there are no possible enzymes operating at normal core body temperature that are as efficient as the ones evolved, at least none appearing in our evolution so far.
The early mammals had lower body temperatures, and thus their testes worked efficiently within their bodies. However, it is argued that boreotherian mammals have higher body temperatures than other mammals and had to develop external testes to keep them cool. It is argued that those mammals with internal testes, such as the monotremes, armadillos, sloths, elephants, and rhinoceroses, have lower core body temperatures than those mammals with external testes.
However, the question remains why birds despite having very high core body temperatures have internal testes and did not evolve external testes. It was once theorized that birds used their air sacs to cool the testes internally, but later studies revealed that birds' testes are able to function at core body temperature.
2) Irreversible adaptation to sperm competition. It has been suggested that the ancestor of the boreoeutherian mammals was a small mammal that required very large testes (perhaps rather like those of a hamster) for sperm competition and thus had to place its testes outside the body. This led to enzymes involved in spermatogenesis, spermatogenic DNA polymerase beta and recombinase activities evolving a unique temperature optimum, slightly less than core body temperature. When the boreoeutherian mammals then diversified into forms that were larger and/or did not require intense sperm competition they still produced enzymes that operated best at cooler temperatures and had to keep their testes outside the body. This position is made less parsimonious by the fact that the kangaroo, a non-boreoeutherian mammal, has external testicles. The ancestors of kangaroos might, separately from boreotherian mammals, have also been subject to heavy sperm competition and thus developed external testes, however, kangaroo external testes are suggestive of a possible adaptive function for external testes in large animals.
3) Protection from abdominal cavity pressure changes. One argument for the evolution of external testes is that it protects the testes from abdominal cavity pressure changes caused by jumping and galloping.
Testicular size as a proportion of body weight varies widely. In the mammalian kingdom, there is a tendency for testicular size to correspond with multiple mates (e.g., harems, polygamy). Production of testicular output sperm and spermatic fluid is also larger in polygamous animals, possibly a spermatogenic competition for survival. The testes of the right whale are likely to be the largest of any animal, each weighing around 500 kg (1,100 lb).
Among the Hominidae, gorillas have little female promiscuity and sperm competition and the testes are small compared to body weight (0.03%). Chimpanzees have high promiscuity and large testes compared to body weight (0.3%). Human testicular size falls between these extremes (0.08%).
There is some evidence to suggest that average human testicle size and weight has been progressively shrinking in recent years among younger cohorts in Western industrialized nations. This has been suggested to be associated with a possible decline in sperm counts in some world regions. The recent changes suggest involvement of environmental or lifestyle factor(s) such as increasing exposure to endocrine disruptors.
Protection and injury
- The testicles are well-known to be very sensitive to impact and injury. The pain involved travels up from each testicle into the abdominal cavity, via the spermatic plexus, which is the primary nerve of each testicle. This will cause pain in the hip and the back. The pain usually goes away in a few minutes.
- Testicular torsion is a medical emergency. Treatment within 4–6 hours of onset can prevent necrosis of the testis.
- Testicular rupture is a medical emergency caused by blunt force impact, sharp edge, or piercing impact to one or both testicles, which can lead to necrosis of the testis in as little as 30 minutes.
- Penetrating injuries to the scrotum may cause castration, or physical separation or destruction of the testes, possibly along with part or all of the penis, which results in total sterility if the testicles are not reattached quickly.
- Some jockstraps are designed to provide support to the testicles.
Diseases and conditions that affect the testes
Some prominent conditions and differential diagnoses include:
- Testicular cancer and other neoplasms. To improve the chances of catching possible cases of testicular cancer or other health issues early, regular testicular self-examination is recommended.
- Varicocele, swollen vein(s) from the testes, usually affecting the left side, the testis usually being normal
- Hydrocele testis, swelling around testes caused by accumulation of clear liquid within a membranous sac, the testis usually being normal
- Endocrine disorders can also affect the size and function of the testis.
- Certain inherited conditions involving mutations in key developmental genes also impair testicular descent, resulting in abdominal or inguinal testes which remain nonfunctional and may become cancerous. Other genetic conditions can result in the loss of the Wolffian ducts and allow for the persistence of Müllerian ducts.
- Bell-clapper deformity is a deformity in which the testicle is not attached to the scrotal walls, and can rotate freely on the spermatic cord within the tunica vaginalis. It is the most common underlying cause of testicular torsion.
- Epididymitis, a painful inflammation of the epididymis or epididymides frequently caused by bacterial infection but sometimes of unknown origin.
Effects of exogenous hormones
To some extent, it is possible to change testicular size. Short of direct injury or subjecting them to adverse conditions, e.g., higher temperature than they are normally accustomed to, they can be shrunk by competing against their intrinsic hormonal function through the use of externally administered steroidal hormones. Steroids taken for muscle enhancement (especially anabolic steroids) often have the undesired side effect of testicular shrinkage.
In all cases, the loss in testes volume corresponds with a loss of spermatogenesis.
Society and culture
In the Middle Ages, men who wanted a boy sometimes had their left testicle removed. This was because people believed that the right testicle made "boy" sperm and the left made "girl" sperm. As early as 330 BC, Aristotle prescribed the ligation (tying off) of the left testicle in men wishing to have boys.
Usage and etymology
There are two senses of the word testicle. One is equivalent to the narrowest sense of testis, referring specifically to each olive-shaped sperm-producing gland inside the scrotum, thus not including the scrotal sac itself, nor the epididymis and vas deferens. The other is equivalent to an entire half of the scrotum and its contents. Thus a man can speak, for example, of his left testicle slipping out of the opening of his underpants and getting caught in the zipper of his trousers. In this sense, the plural testicles is equivalent to the whole scrotum and all of its contents (as in "grabbing him by his testicles"). This subtle duality of word senses can also apply to testis and testes, so that in the narrower sense they refer to the two "balls" themselves inside the "ball sac" (as cruder speech would have it) but in the broader sense are equivalent to the aforementioned broader sense of testicle(s). In medicine the narrower sense prevails, whereas in general vocabulary both are common.
One theory about the etymology of the word testis is based on Roman law. The original Latin word "testis", "witness", was used in the firmly established legal principle "Testis unus, testis nullus" (one witness [equals] no witness), meaning that testimony by any one person in court was to be disregarded unless corroborated by the testimony of at least another. This led to the common practice of producing two witnesses, bribed to testify the same way in cases of lawsuits with ulterior motives. Since such "witnesses" always came in pairs, the meaning was accordingly extended, often in the diminutive (testiculus, testiculi).
Another theory says that testis is influenced by a loan translation, from Greek parastatēs "defender (in law), supporter" that is "two glands side by side".
- Cryptorchidism (cryptorchismus)
- List of homologues of the human reproductive system
- Sterilization (surgical procedure), vasectomy
- Spermatic cord
- WikiSaurus:testicles — the WikiSaurus list of synonyms and slang words for testicles in many languages
Catalysis is the change in speed (rate) of a chemical reaction due to the help of a catalyst. Unlike other chemicals which take part in the reaction, a catalyst is not consumed by the reaction itself. A catalyst may participate in many chemical reactions. Catalysts that speed the reaction are called positive catalysts. Catalysts that slow the reaction are called negative catalysts, or inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons.
A catalyst is something which changes the rate of a chemical reaction. An example is when manganese dioxide (MnO2) is added to hydrogen peroxide (H2O2): the hydrogen peroxide starts to break up into water and oxygen. Catalysts are either of natural or synthetic origin. Catalysts are useful because they leave no residue in the solution whose reaction they have sped up, and a catalyst can be used in a reaction again and again because it is not consumed. There are many catalysts in our bodies which play an important part in many biochemical reactions; these are called enzymes. Most catalysts work by lowering the activation energy of a reaction. This allows less energy to be used, thus speeding up the reaction. The opposite of a catalyst is an inhibitor. Inhibitors slow down reactions. Some of them are found in snake venom and are dangerous to our nervous system or heart.
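The effect of lowering the activation energy can be made concrete with the Arrhenius equation, k = A·exp(−Ea/RT). The activation energies below are made-up illustrative values, not measurements for any particular reaction.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea_joules_per_mol, T_kelvin):
    """Arrhenius equation: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea_joules_per_mol / (R * T_kelvin))

# Hypothetical activation energies at room temperature (298 K):
T = 298.0
k_uncatalyzed = rate_constant(1e13, 75_000, T)  # Ea = 75 kJ/mol
k_catalyzed = rate_constant(1e13, 50_000, T)    # catalyst lowers Ea to 50 kJ/mol

# Lowering Ea by 25 kJ/mol speeds the reaction up by a factor of roughly 24,000
print(f"speed-up: {k_catalyzed / k_uncatalyzed:.0f}x")
```

Because the activation energy sits inside an exponential, even a modest reduction produces an enormous change in rate, which is why catalysts matter so much.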
Related pages
- Enzymes are biological catalysts |
Lesson 2 of 7
Objective: SWBAT sequence the stages of a pumpkin's life cycle for a story retell.
This lesson addresses the CCSS by having the students engage in whole group reading with the intention of gaining an understanding of the lifecycle of a pumpkin. During the whole group reading I will ask my students questions about the story. It is important for my ELL students to engage in multiple listening and speaking activities to help them learn English vocabulary and content comprehension. We will then use our reading and writing skills to label pictures of the life cycle of a pumpkin.
I gather my students on the carpet around me and introduce today's lesson.
Today we are going to talk about pumpkins. If you could go and get a pumpkin, where would you go? Let's write our answers on a circle map.
I use my name sticks to call on students to tell me where they could get a pumpkin. And I write their answers on the circle map.
There are many places that we can get a pumpkin. Someone mentioned we could grow a pumpkin. Does anyone know how a pumpkin grows? Hmmmm, you know a lot about the pumpkin Let's watch a video to see how a pumpkin grows.
We watch the video a few times because they like the song.
Reading the Story
Because I showed the video prior to reading the story, the students seemed interested in hearing the story. We are still gathered on the carpet as I begin the lesson.
"We have learned a fun song that teaches us the stages of the pumpkin lifecycle. Now I am going to read you a story about a little boy who grows his own pumpkin. It is titled; Pumpkin Pumpkin .Let's see if his pumpkin grows like the one on the video. When we get to each stage, lets put up one of our pictures. That is called sequencing the story."
As I read the story I stop at each stage of growth and pull a name stick for someone to put the correct picture on the board. I found some real pictures of each part of the pumpkin life cycle for my students to use for this activity. I use name sticks so that I can give everyone the opportunity to participate in our activity. Stopping at each stage and discussing the growth increases student comprehension and vocabulary.
"I love what the boy did with his pumpkin. Will you make a jack-o-lantern out of a pumpkin this year? That will be so much fun. You can tell your moms how the pumpkin grew out of a tiny seed."
For the writing piece of this lesson, my students will be sequencing and labeling the lifecycle of the pumpkin. I will use the same pictures as my students to model the sequencing activity. I cut the pictures and the words out as two separate circles so they can match the words to the pictures. We will first sequence the pictures and then label each picture. We will sequence the pictures on a sentence strip. I will call on students to help me and to keep their interest.
"Hmm, Which picture shows the first stage of the pumpkin lifecycle? Maybe I need help. I will use my name sticks and call on my friends to help me put the pictures in the right order."
The colored pictures from the reading of the story are still on the board in proper sequence so the students can use it as a reference if they forget which picture comes next. My students were able to sequence the life cycle pretty fast. We move on to labeling the pictures.
"Wow, we were able to sequence the lifecycle really fast. Now we need to label the stages of the lifecycle. Labeling something means we tell what it is. So what do you think I should label the first picture? A seed, great, you remembered. I need help with the labeling so I will use my name sticks again to call on students that are sitting criss cross apple sauce."
We label all the pictures on the sentence strip. I then send them to their tables to do this activity independently.
"Now you will get to sequence the lifecycle of a pumpkin. I will leave my sequence up here on the board for you to look at. You will need to color the pictures and then cut the circles out. Remember to cut the word off the picture so you can glue all the pictures down first and then glue the words."
I use my paper passers to pass out the papers as the other students are called by rows to get their pencil boxes from their cubbies. All my students have a daily job to foster a sense of responsibility and community. I walk around the room to prompt and assist. As they finish they sit quietly and read library books.
When the students have finished their sequencing activity we gather back on the carpet. I have them retell the life cycle of the pumpkin to me. They then sit quietly on the carpet for the other students to finish. Each student has the opportunity to orally sequence the life cycle of the pumpkin. After each reading we applaud and cheer. It is important for my ELL students to hear the sequencing multiple times so they can learn the vocabulary and comprehend the process.
Just like I use a video to introduce my lesson topic, I like to end my day with a video of the story we read. Each time my students hear a story they learn more vocabulary and gain better comprehension of the story events. I usually show the story reading video at the end of the day. It's kind of a nice way to end the day, with a review.
NASA's Kepler Space Telescope Discovers Five Exoplanets
Orbiting Telescope Designed to Find Earth-Like Planets
MOFFETT FIELD, Calif. -- NASA's Kepler space telescope, designed to find Earth-size planets in the habitable zone of sun-like stars, has discovered its first five new exoplanets, or planets beyond our solar system.
Kepler's high sensitivity to both small and large planets enabled the discovery of the exoplanets, named Kepler 4b, 5b, 6b, 7b and 8b. The discoveries were announced Monday, Jan. 4, by the members of the Kepler science team during a news briefing at the American Astronomical Society meeting in Washington.
"These observations contribute to our understanding of how planetary systems form and evolve from the gas and dust disks that give rise to both the stars and their planets," said William Borucki of NASA's Ames Research Center in Moffett Field, Calif. Borucki is the mission's science principal investigator. "The discoveries also show that our science instrument is working well. Indications are that Kepler will meet all its science goals."
Known as "hot Jupiters" because of their high masses and extreme temperatures, the new exoplanets range in size from similar to Neptune to larger than Jupiter. They have orbits ranging from 3.3 to 4.9 days. Estimated temperatures of the planets range from 2,200 to 3,000 degrees Fahrenheit, hotter than molten lava and much too hot for life as we know it. All five of the exoplanets orbit stars hotter and larger than Earth's sun.
"It's gratifying to see the first Kepler discoveries rolling off the assembly line," said Jon Morse, director of the Astrophysics Division at NASA Headquarters in Washington. "We expected Jupiter-size planets in short orbits to be the first planets Kepler could detect. It's only a matter of time before more Kepler observations lead to smaller planets with longer period orbits, coming closer and closer to the discovery of the first Earth analog."
Launched on March 6, 2009, from Cape Canaveral Air Force Station in Florida, the Kepler mission continuously and simultaneously observes more than 150,000 stars. Kepler's science instrument, or photometer, already has measured hundreds of possible planet signatures that are being analyzed.
While many of these signatures are likely to be something other than a planet, such as small stars orbiting larger stars, ground-based observatories have confirmed the existence of the five exoplanets. The discoveries are based on approximately six weeks' worth of data collected since science operations began on May 12, 2009.
Kepler looks for the signatures of planets by measuring dips in the brightness of stars. When planets cross in front of, or transit, their stars as seen from Earth, they periodically block the starlight. The size of the planet can be derived from the size of the dip. The temperature can be estimated from the characteristics of the star it orbits and the planet's orbital period.
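The size estimate described above follows from simple geometry: the fractional dip in brightness is roughly the ratio of the planet's disk area to the star's disk area. Here is a minimal sketch of that calculation; the function name and the example values are illustrative, not part of the Kepler data pipeline.

```python
import math

def planet_radius_from_transit(depth, star_radius_km):
    """Estimate a transiting planet's radius from the fractional dip
    in its star's brightness.

    The transit depth is approximately (R_planet / R_star) ** 2,
    so R_planet is approximately R_star * sqrt(depth).
    """
    return star_radius_km * math.sqrt(depth)

# A Jupiter-size planet crossing a Sun-size star (radius ~696,000 km)
# blocks about 1% of the starlight:
r_p = planet_radius_from_transit(0.01, 696_000)
print(round(r_p))  # → 69600 (km), close to Jupiter's radius
```

This is why Jupiter-size planets were expected to be found first: a 1% dip is far easier to detect than the roughly 0.01% dip an Earth-size planet would produce against the same star.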
Kepler will continue science operations until at least November 2012. It will search for planets as small as Earth, including those that orbit stars in a warm habitable zone where liquid water could exist on the surface of the planet. Since transits of planets in the habitable zone of solar-like stars occur about once a year and require three transits for verification, it is expected to take at least three years to locate and verify an Earth-size planet.
According to Borucki, Kepler's continuous and long-duration search should greatly improve scientists' ability to determine the distributions of planet size and orbital period in the future. "Today's discoveries are a significant contribution to that goal," Borucki said. "The Kepler observations will tell us whether there are many stars with planets that could harbor life, or whether we might be alone in our galaxy."
Kepler is NASA's 10th Discovery mission. Ames is responsible for the ground system development, mission operations and science data analysis. NASA's Jet Propulsion Laboratory in Pasadena, Calif., managed the Kepler mission development. Ball Aerospace & Technologies Corp. of Boulder, Colo., was responsible for developing the Kepler flight system. Ball and the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder are supporting mission operations.
Ground observations necessary to confirm the discoveries were conducted with ground-based telescopes: the Keck I in Hawaii; the Hobby-Eberly and Harlan J. Smith 2.7-meter telescopes in Texas; the Hale and Shane in California; the WIYN, MMT, and Tillinghast in Arizona; and the Nordic Optical in the Canary Islands, Spain.
For more information about the Kepler mission, visit:
- end -
Here are the main grammatical elements in Spanish and some useful information about them:
A noun is a word which is mostly used to refer to a person or thing. All nouns in Spanish have a gender, meaning that they are either masculine or feminine. For example, "niño" (boy) is masculine and "niña" (girl) is feminine. The best way to identify gender is undoubtedly experience, although here are some general guidelines which may be useful at the beginning: usually nouns ending in –o are masculine and nouns ending in –a are feminine. Of course there are always exceptions.
For example, "mano" (hand) and "radio" (radio) are feminine. On the other hand, words of Greek origin ending in –ma, such as "dilema" (dilemma) or "problema" (problem), are masculine. When you are learning new vocabulary, it is advisable to learn each noun together with its corresponding article; that will help you remember its gender. For example: "el niño", "la niña", "la mano", and "el problema".
Adjectives are used to qualify a particular noun, to say something about it. It is important to remember that in Spanish they are usually placed after the noun. Since adjectives are always related to a noun, they have to agree with them in gender and number.
This means that if you want to say something about the noun "niño", which is masculine and singular, the adjective that you use will also have to be masculine and singular. Thus, you can say "niño alto" (tall boy), "niño pequeño" (small boy), etc. If, on the other hand, you were talking about a girl, you would have to say "niña alta" and "niña pequeña".
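The ending-based gender rules above, and their exceptions, can be sketched as a small lookup heuristic. This is only an illustration of the guidelines in the text; the exception sets contain just the words mentioned above, and real Spanish has many more — which is exactly why learning each noun with its article is recommended.

```python
# Illustrative heuristic for the gender guidelines described above.
# The exception sets are a tiny sample, not a complete list.
FEMININE_EXCEPTIONS = {"mano", "radio"}        # end in -o but are feminine
MASCULINE_EXCEPTIONS = {"dilema", "problema"}  # Greek -ma words, masculine

def guess_article(noun):
    """Guess the definite article ("el" or "la") for a Spanish noun."""
    if noun in FEMININE_EXCEPTIONS:
        return "la"
    if noun in MASCULINE_EXCEPTIONS:
        return "el"
    if noun.endswith("o"):
        return "el"
    if noun.endswith("a"):
        return "la"
    return None  # the ending gives no clue; the article must be memorized

for word in ["niño", "niña", "mano", "problema"]:
    print(guess_article(word), word)
```

Running the loop prints "el niño", "la niña", "la mano", and "el problema" — the same pairings recommended above for memorization.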
The debate over the institution of slavery is a primary reason the United States entered the bloodiest war in its history. At the core of this war were African Americans and equality. Victimized by the shackles of slavery, the treatment of African Americans has been an elephant in the room throughout American history. This precedent was key during the Civil War. Due to slavery, blacks were prevalent in the armies of the South as labor or servants, but after the Emancipation Proclamation by Abraham Lincoln in 1863, blacks were volunteering to join the Union forces at a remarkable rate. This rampant enlistment played a vital role in the Union's victory. Black regiments, like the Massachusetts 54th, were well known for their heroism and valor in many Union victories.
Yet what fueled African Americans to fight for a country that had treated them with inequality and enslavement was the promise of freedom. The Emancipation Proclamation promised freedom for blacks: it proclaimed that all blacks who enlisted in the Union army would, after a contract of service, be considered free men and citizens. The text list that I compiled is full of primary sources, artwork, books, and websites that tackle this dominant theme of the Civil War: blacks and emancipation.
These texts cover the relationship between emancipation and the historical significance of black soldiers. I chose this theme to be used within an eleventh grade history classroom. The reason I decided to integrate this theme in an eleventh grade classroom is that it shows a social connection between the Civil War and civil rights. It asks students to analyze the significance of African American soldiers and their role in race relations. The Civil War is covered in eighth and eleventh grade social studies classrooms, but I feel that this theme is not suitable for eighth grade, because eighth grade classrooms only go up to the Civil War in their standards, whereas eleventh grade social studies classrooms cover all of U.S. history through the 21st century. This allows them to delve deeper into the importance of black soldiers, the impact that they had on racism in America, and their impact on future civil rights.
Annotated Text List
1. Zwick, Edward. (Director) & Fields, Freddie. (Producer), 1989. Glory [Movie]. Tri-Star Pictures.
This movie provides an accurate visual depiction of the all-black Massachusetts 54th regiment that was led by Colonel Robert Shaw. The director follows the regiment from its beginning to its final battle at Battery Wagner. This visual text does a very good job portraying the inequalities and aspirations of black soldiers who joined the Union: the maltreatment they faced in regard to payment, supplies, and orders, as well as their hopes and dreams of freedom as a result of their service. According to the 10 Factors of Text Assessment by Graves, this text would be appropriate for 11th grade students because of the graphic nature of some scenes, the length of the movie, and how the movie elaborates on black soldiers and their role in altering views toward African Americans. This text would be useful to show what black soldiers went through, and also how their impact was so essential that a movie was created for them.
Democratic Vistas: Civic Life, History, and American Art. “Robert Gould Shaw and the 54th Massachusetts Regiment Memorial.” November 2009. http://www.youtube.com/watch?v=YZ6pJ0HlXtA
This video clip shows a tour guide explaining the historical significance of black soldiers to a group visiting the Robert Shaw memorial. The guide explains the importance of Shaw himself and the societal importance of his all-black regiment. I would use this tour guide's lecture as a supplemental piece to show the importance of the Massachusetts 54th and how they were immortalized in a national memorial with the Colonel, at his...
Goiter - simple
A simple goiter is an enlargement of the thyroid gland. It is usually not cancer.
Simple goiter; Endemic goiter; Colloidal goiter; Nontoxic goiter; Toxic nodular goiter
Causes, incidence, and risk factors
There are different kinds of goiters.
- A simple goiter can occur without a known reason. It can occur when the thyroid gland is not able to make enough thyroid hormone to meet the body's needs. This can be due to, for example, a lack of iodine in a person's diet. To make up for the shortage of thyroid hormone, the thyroid gland grows larger.
- Toxic nodular goiter is an enlarged thyroid gland that has a small, rounded growth or many growths called nodules. These nodules produce too much thyroid hormone.
Iodine is needed to produce thyroid hormone.
- Simple goiters may occur in people who live in areas where the soil and water do not have enough iodine. People in these areas might not get enough iodine in their diet.
- The use of iodized salt in many food products in the United States today prevents a lack of iodine in the diet.
In many cases of simple goiter, the cause is unknown. Other than lack of iodine, other factors that may lead to the condition include:
- Certain medicines (lithium, amiodarone)
- Cigarette smoking
- Certain foods (soy, peanuts, vegetables in the broccoli family)
Simple goiters are also more common in:
- Persons over age 40
- People with a family history of goiter
The main symptom is an enlarged thyroid gland. The size may range from a single small nodule to a large neck lump.
Some people with a simple goiter may have symptoms of an underactive thyroid gland.
In rare cases, an enlarged thyroid can put pressure on the windpipe (trachea) and food tube (esophagus). This can lead to:
- Breathing difficulties (with very large goiters)
- Swallowing difficulties
Signs and tests
The doctor will do a physical exam. This involves feeling your neck as you swallow. Swelling in the area of the thyroid may be felt.
If you have a very large goiter, you may have swelling in your neck vein. As a result, when the doctor asks you to raise your arms above your head, you may feel dizzy.
Blood tests may be ordered to measure thyroid function.
Other tests may be done to look for abnormal and possibly cancerous areas in the thyroid gland.
If nodules are found on an ultrasound, a biopsy may be needed to check for thyroid cancer.
A goiter only needs to be treated if it is causing symptoms.
Treatments for an enlarged thyroid include:
- Thyroid hormone replacement pills, if the goiter is due to an underactive thyroid
- Small doses of Lugol's iodine or potassium iodine solution if the goiter is due to a lack of iodine
- Radioactive iodine to shrink the gland, especially if the thyroid is producing too much thyroid hormone
- Surgery (thyroidectomy) to remove all or part of the gland
A simple goiter may disappear on its own, or may become larger. Over time, the thyroid gland may stop making enough thyroid hormone. This condition is called hypothyroidism.
Calling your health care provider
Call your health care provider if you experience any swelling in the front of your neck or any other symptoms of goiter.
Using iodized table salt prevents most simple goiters.
Kim M, Ladenson P. Thyroid. In: Goldman L, Schafer AI, eds. Goldman’s Cecil Medicine. 24th ed. Philadelphia, Pa.: Elsevier Saunders; 2011:chap 233.
Schlumberger MJ, Filetti S, Hay ID. Nontoxic diffuse and nodular goiter and thyroid neoplasia. In: Melmed S, Polonsky KS, et al., eds. Williams Textbook of Endocrinology. 12th ed. Philadelphia, Pa.: Elsevier Saunders; 2011:chap 14.
Last reviewed 5/28/2013 by Brent Wisse, MD, Associate Professor of Medicine, Division of Metabolism, Endocrinology & Nutrition, University of Washington School of Medicine. Also reviewed by David Zieve, MD, MHA, Bethanne Black, and the A.D.A.M. Editorial team.
Hoary marmots are one of the most widespread alpine mammals in North America, ranging from Alaska south through northwest Canada to Washington, Idaho, and Montana (Karels et al. 2004). They have a wide distribution in Alaska, including the Alaska Peninsula, Alaska Range, and White Mountains. In Canada, hoary marmots inhabit the Ogilvie Mountains in the Yukon Territory. They are found in the Cascades Mountain range, northern and central Rocky Mountains, the Beaverhead and Flint Creek Mountains of northwestern Montana, and the Salmon River mountains of central Idaho (Hoffmann et al. 1979). (Hoffman, et al., 1979; Karels, et al., 2004)
Hoary marmots occupy areas with rocky talus slopes and alpine tundra vegetation (Kyle et al. 2007) and dig their burrows in these areas. Burrows provide shelter from predators and weather, marmots spend about 80% of their lives in them (Barash 1989). Entrances to marmot burrows are not easily identified because they simply appear as spaces between and/or under large boulders (Karels et al. 2004). The elevational range of hoary marmot habitats varies latitudinally; they are found at sea level in Alaska and only at higher elevations in the southern portions of their range. (Barash, 1989; Karels, et al., 2004; Kyle, et al., 2007)
Hoary marmots weigh 8 to 10 kg and are from 45 to 57 cm in length, with males being slightly larger than females (Kyle et al. 2007). Tail length is 7 to 25 cm. Their coats are mostly black and white with hoary tips to the fur; base fur color varies geographically (Barash 1989, Hoffmann et al. 1979). They have a white patch between the eyes and across the rostrum, and the tips of their noses are white. Hoary marmots differ from other marmots in that they have black feet. This is the basis of the species’ name caligata, which means “booted” (Hoffmann et al. 1979). They also have a black cap on their head that is larger than similar caps found in other species of marmots. Marmots generally undergo a single annual molt. The onset of the molt varies, but it can begin as soon as the animals emerge from hibernation. By midsummer, molting is advanced in all individuals except the young of the year (Barash 1989). Hoary marmots have small eyes and small, rounded, furred ears. They have well-developed claws on their front feet for burrowing, with 5 pads on their forepaws and 6 on their hind paws. (Barash, 1989; Hoffman, et al., 1979; Kyle, et al., 2007)
Mating occurs shortly after emergence in the spring. Typical mating behavior involves the male approaching the female, sniffing her (possibly to determine if she is reproductive), then mounting her dorsoventrally. The female, when mounted, lifts her tail and holds it to one side. Successful mounts may last 30 seconds to 8 minutes. Non-reproductive females usually fight against an attempted mount, while reproductive females are more tolerant (Barash 1989). Sniffing, fighting, and chasing are all examples of marmot reproductive behavior. Originally, northern populations of hoary marmots were thought to be predominantly monogamous, while southern populations were thought to be both monogamous and polygynous. Recent studies suggest that mating among hoary marmots is more flexible than previously thought, varying between monogamy and polygyny. This may reflect local variation and resource availability (Kyle et al. 2007). (Barash, 1989; Kyle, et al., 2007)
Females reproduce every other year, with an average litter size of 3.3, range 2 to 5 (Barash 1989, Armitage 2003). Yearlings remain in their natal colony and disperse the following year as two-year-olds, which is the age of sexual maturity. The reproductive cycle lasts 10 weeks, and gestation lasts about 4 weeks. Estrus in reproductive females occurs about 1 to 2 weeks after emergence from hibernation and only occurs once yearly (Barash 1989). (Armitage, 2003; Barash, 1989)
As the breeding season progresses, adult males and females become less closely associated. When the young of the year are born, females provide more parental care than males and are the most watchful during the two-week period of their young’s emergence (Barash 1989). Young of the year are born blind and naked, except for vibrissae and short hair on the muzzle, chin, and head. Crawling (backward and forward) and teat seeking are the first movements to occur (Armitage 2003). Young of the year are weaned between the third week of July and the first week of August. Even though adult males are larger than adult females, increases in size are relatively constant between sexes during development. Hair gradually develops from head to tail, and more rapidly on the back than on the belly (Armitage 2003). (Barash, 1989)
Weight at hibernation is significantly related to overwinter mortality, which is highest among young of the year. Winter mortality during hibernation is often more predictable than predation, and the mortality of males is higher than that of females (Barash 1989). (Barash, 1989)
Hoary marmots are highly social animals, and greeting is a frequently exhibited behavior. The exact function of greeting is unknown, but it is thought to be involved in recognizing individuals. Greeting is frequent after the animals emerge from their burrows, and begins with nose-to-nose or nose-to-mouth contact. Adult males and young of the year initiate most greetings (Barash 1989). Marmots live in colonies. The basic social structure of a colony consists of one adult male, one satellite male, one or more adult females, two-year-olds, yearlings, and young of the year. In order to avoid the dominant male, satellite males are more likely to occur in colonies located in large meadows. Because marmots are social animals, they also tend to play. Play fighting is common among young and yearlings (Barash 1989). In addition to social behavior, hoary marmots exhibit surveillance behaviors. Approximately 30% of time above ground is spent on surveillance; looking up, and looking out while sunning, are the two most common behaviors (Tyser 1980). Look-up and upright-alert postures usually occur during foraging and tend to be associated with rock size. (Barash, 1989; Tyser, 1980)
Hoary marmots hibernate during the winter. They emerge in mid-May, become lethargic by late August, and re-enter the burrows as early as early September. The onset of hibernation is gradual, with a steady decline in social activity, foraging, and time spent above ground (Barash 1989). All members of family groups hibernate together (Kyle et al. 2007). In the summer, activity above ground peaks in the morning and late afternoon. Marmots may facilitate their energy intake by adjusting their behavior to capture radiant energy during low temperatures. They do this by sunning themselves on rocks and sprawling on the ground near their burrows. On sunny days in July hoary marmots spend 44% of their above ground time in early morning sunning themselves (Barash 1989). In the early spring, late summer, and in inclement weather, above-ground activity only peaks at midday. (Barash, 1989; Kyle, et al., 2007)
Home range sizes vary regionally and with local food availability. Areas with poor forage may make it impossible for males to control family groups because individuals are too widely dispersed.
Hoary marmot alarm calls tend to be loud, relatively short, and are associated with predators or agitation (Barash 1989, Blumstein and Armitage 1997). Hoary marmots also use visual signals to communicate. The clearest visual signal is an upraised tail, which appears to signal aggression towards other members of the colony (Barash 1989). (Barash, 1989; Blumstein and Armitage, 1997)
Hoary marmots are mostly herbivorous. Vetches, sedges, fleabanes, fescues, mosses, lichens, and willows collectively comprise about 90% of the overall diet of populations living on the Kenai Peninsula, while populations from mountainous regions prefer flowers and flower heads (Barash 1989, Hansen 1975). Marmot populations from different regions have similar diet-habitat characteristics even if less plant biomass is available. Hoary marmots do not select vegetation in proportion to the amount available, but rather show a preference for certain plants. Hoary marmots spend most of their above-ground time foraging. They appear to prefer each other’s company, feeding in groups. Feeding groups of up to 8 animals can occur. However, these groups are loosely organized and dynamic in terms of membership (Barash 1989). (Barash, 1989; Hansen, 1975)
Hoary marmots are eaten by a variety of predators, including golden eagles, lynx, coyotes, grizzlies, and wolverines. Predator avoidance appears to exert a strong influence on foraging patterns, and marmots have been known to remain in their burrows for many hours following the appearance of a predator (Barash 1989). They also use alarm calls to alert one another if a predator has entered their foraging area. (Barash, 1989)
Hoary marmots are good candidates as indicator species because alpine ecosystems are particularly vulnerable to climate change (Krajick 2004). Compared to other alpine species, they have little commercial value in North America and hardly experience any human-related mortality. Changes in their populations could be indicative of other large-scale impacts. Long-term population dynamics of hoary marmots may also provide an indication of changes in alpine snowpack, plant phenology and abundance, or predators (Karels et al. 2004). (Karels, et al., 2004; Krajick, 2004)
The feces of hoary marmots are important to pikas, which have been observed consuming these droppings. Dried fecal pellets of hoary marmots have been found in haypiles made by pikas (MacDonald and Jones 1987). Marmot feces may be important for soil as well. Soil surrounding marmot burrows is thought to be quite high in nutrients because marmots tend to deposit fecal matter in these areas (Bowman and Seastedt 2001). (Bowman and Seastedt, 2001; MacDonald and Jones, 1987)
Hoary marmot hides were prized by northwestern Native Americans, mainly for clothing. Marmots were hunted after the molt, and their hides were used in potlatch ceremonies. Their hides were also used as a sort of currency, measuring wealth among Tlingit and Gitksan tribes (Armitage 2003). (Armitage, 2003)
There are no known negative impacts associated with hoary marmots. Hoary marmots inhabit areas with low human population densities.
Hoary marmots have a stable population trend and are considered a species of least concern. The state of Alaska, however, considers two subspecies of hoary marmots to be of conservation concern: Montague Island marmots (M. c. sheldoni) and Glacier Bay marmots (M. c. vigilis). Montague Island marmots were last seen at the turn of the 20th century and are considered a species of concern because of lack of sightings. Because these marmots are endemic to Montague Island, they may face a higher risk of extinction. The state of Alaska also considers Glacier Bay marmots to be a subspecies of concern due to its endemism and presumed small population size. In addition, there is lingering taxonomic uncertainty regarding both of these subspecies. (MacDonald and Cook, 2007)
Recent studies have shown that fecal pellet counts can provide an accurate estimate of group size in hoary marmots, thus allowing for better monitoring of population changes (Karels et al. 2004). (Karels, et al., 2004)
Tanya Dewey (editor), Animal Diversity Web.
Danielle Gunderman (author), University of Alaska Fairbanks, Link E. Olson (editor, instructor), University of Alaska Fairbanks.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of North America as far south as the highlands of central Mexico.
uses sound to communicate
young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
uses smells or other chemicals to communicate
animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
an animal that mainly eats leaves.
A substance that provides both nutrients and energy to a living thing.
Referring to a burrowing life-style or behavior, specialized for digging or burrowing.
An animal that eats mainly plants or parts of plants.
the state that some animals enter during winter in which normal physiological processes are significantly reduced, thus lowering the animal's energy requirements. The act or condition of passing winter in a torpid or resting state, typically involving the abandonment of homoiothermy in mammals.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
Having one mate at a time.
having the capacity to move from one place to another.
This terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation.
the area in which the animal is naturally found, the region in which it is endemic.
having more than one female as a mate at one time
breeding is confined to a particular season
reproduction that includes combining the genetic contribution of two individuals, a male and a female
associates with others of its species; forms social groups.
digs and breaks up soil so air and water can get in
uses touch to communicate
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
A terrestrial biome with low, shrubby or mat-like vegetation found at extremely high latitudes or elevations, near the limit of plant growth. Soils usually subject to permafrost. Plant diversity is typically low and the growing season is short.
uses sight to communicate
reproduction in which fertilization and development take place within the female body and the developing embryo derives nourishment from the female.
Armitage, K. 2003. Wild Mammals of North America: biology, management, and conservation. Maryland, USA: John Hopkins University Press.
Barash, D. 1989. Marmots: Social Behavior and Ecology. Stanford, California, USA: Stanford University Press.
Blumstein, D., K. Armitage. 1997. Does Sociality Drive the Evolution of Communicative Complexity? A Comparative Test with Ground-dwelling Sciurid Alarm Calls. The American Naturalist, 150: 179-200.
Bowman, W., T. Seastedt. 2001. Structure and Function of an Alpine Ecosystem: Niwot Ridge, Colorado. USA: Oxford University Press.
Hall, E. 1981. The Mammals of North America, Second Edition. Caldwell, New Jersey, USA: The Blackburn Press.
Hansen, R. 1975. Foods of the hoary marmot on the Kenai Peninsula, Alaska. American Midland Naturalist, 94: 348-353.
Hoffman, R., J. Koeppl, C. Nadler. 1979. The relationships of the amphiberingian marmots (Mammalia: Sciuridae). Occasional Papers of the Museum of Natural History, University of Kansas, 83: 1-56.
Karels, T., L. Koppel, D. Hik. 2004. Fecal pellet count as a technique for monitoring an alpine-dwelling social rodent, the hoary marmot (Marmota caligata). Arctic, Antarctic, and Alpine Research, 36: 490-494.
Krajick, K. 2004. All Downhill from Here?. Science Magazine, 303: 1600-1602.
Kyle, C., T. Karels, C. Davis, B. Mebs, C. Clark, C. Strobeck, D. Hik. 2007. Social structure and facultative mating systems of hoary marmots (Marmota caligata). Molecular Ecology, 16: 1245-1255.
MacDonald, S., J. Cook. 2007. Mammals and amphibians of Southeast Alaska. The Museum of Southwestern Biology, Special Publication, 8: 1-191.
MacDonald, S., C. Jones. 1987. Ochotona collaris. Mammalian Species, 281: 1-4.
Steppan, S., M. Akhverdyan, E. Lyapunova, D. Fraser, N. Vorontsov, R. Hoffman, M. Braun. 1999. Molecular phylogeny of the marmots (Rodentia: Sciuridae): tests of evolutionary and biogeographic hypotheses. Systematic Biology, 48: 715-734.
Tyser, R. 1980. Use of substrate for surveillance behaviors in a community of talus slope mammals. American Midland Naturalist, 104: 32-38. |
Education for Social Change: From Theory to Practice
1.1 More than a century ago, Emile Durkheim rejected the idea that education could be the force to transform society and resolve social ills. Instead, Durkheim concluded that education “can be reformed only if society itself is reformed.” He argued that education “is only the image and reflection of society. It imitates and reproduces the latter…it does not create it” (Durkheim, 1897/1951: 372-373).
1.2 Most mainstream proposals for improving education in the United States
assume that our society is fundamentally sound, but that for some reason,
our schools are failing. Different critics target different villains:
poor quality teachers, pampered, disruptive or ill-prepared students,
the culture of their families, unions, bureaucrats, university schools
of education, tests that are too easy, or inadequate curriculum. But if
Durkheim was correct, a society has the school system it deserves. Denouncing
the poor quality of education is like blaming a mirror because you do
not like your reflection.
1.3 The first step in improving education is to recognize that the problems
plaguing our schools are rooted in the way our society is organized. We
live in a competitive economy where businesses and individuals continually
seek advantage and higher profits, and where people on the bottom rung
of the economic ladder are stigmatized as failures and blamed for their
condition. Our culture glorifies violence in sports, movies, video games, and on evening news broadcasts that celebrate the death of others through hygienic
strategic bombings. It is a society where no one feels obligated to pay taxes for
the broader social good and where welfare “reform” means denying benefits
to children if their parents cannot find work; a society that promotes
the need for instant gratification and uses youthful alienation to sell
products; a society where those who do not fit in are shunned (Bowles
& Gintis, 1976).
1.4 Under the circumstances, it is not surprising that our school system is
designed to sort children out and leave many uneducated. To legitimize
the way our society is organized, its schools teach competitive behavior
and social inequality as if they were fundamental laws of nature. Just
as with the economy, some are rewarded in school, others are punished,
and both groups are taught that rewards and punishment are the result
of their own efforts (Kohn, 1999).
1.5 As a teacher educator and a public high school social studies teacher,
we try to avoid being overwhelmed by pessimism during debates over school
reform. Even though we believe that education will not be changed in isolation,
we recognize that efforts to improve schools can be part of a long term
struggle to create a more equitable society in the United States. We also
believe that students, especially high school students, must be part of
this struggle and that an important part of our job as teachers is to
help prepare them to participate as active citizens in a democratic society.
1.6 Should teachers encourage high school students to work for social change?
Thomas Jefferson believed that, in a democratic society, teachers do not
really have a choice. According to Jefferson, freedom and republican government
rest on two basic principles: “the diffusion of knowledge among the people”
and the idea that “a little rebellion now and then is a good thing.” Jefferson
supported the right to rebel because he recognized that the world was
constantly changing. The crucial question was not whether it would change,
but the direction of change. Education was essential so that ordinary
citizens could participate in this process, defending and enhancing their rights.
1.7 In the United States, there has frequently been a close connection between
advocacy for mass public education and demands for expanding democracy,
social equity, and political reform. For example, in the mid-19th century,
Horace Mann championed public education because he believed that the success
of the country depended on “intelligence and virtue in the masses of the
people.” He argued that, “If we do not prepare children to become good
citizens...then our republic must go down to destruction” (The New York Times, 1953).
1.8 John Dewey (1939) saw himself within this intellectual tradition. He believed
that democratic movements for human liberation were necessary to achieve
a fair distribution of political power and an “equitable system of human
liberties.” However, criticisms have been raised about limitations in
Deweyan approaches to education, especially the way they are practiced
in many elite private schools. Frequently, these schools are racially,
ethnically, and economically segregated, and therefore efforts to develop
classroom community ignore the spectrum of human difference and the continuing
impact of society’s attitudes about race, class, ethnicity, gender, social
conflict, and inequality on both teachers and students. In addition, because
of pressure on students to achieve high academic scores, teachers
maintain an undemocratic level of control over the classroom. Both of
these issues are addressed by Paulo Freire, who calls on educators to
aggressively challenge both injustice and unequal power arrangements in
the classroom and society.
1.9 Paulo Freire was born in Recife in northeastern Brazil, where his ideas
about education developed in response to military dictatorship, enormous
social inequality, and widespread adult illiteracy. As a result, his primary
pedagogical goal was to provide the world’s poor and oppressed with educational
experiences that make it possible for them to take control over their
own lives. Freire (1970; 1995) shared Dewey’s desire to stimulate students
to become “agents of curiosity” in a “quest for...the ‘why’ of things,”
and his belief that education provides possibility and hope for the future
of society. But he believes that these can only be achieved when students
are engaged in explicitly critiquing social injustice and actively organizing
to challenge oppression.
1.10 For Freire, education is a process of continuous group discussion (dialogue)
that enables people to acquire collective knowledge they can use to change
society. The role of the teacher includes asking questions that help students
identify problems facing their community (problem posing), working with
students to discover ideas or create symbols (representations) that explain
their life experiences (codification), and encouraging analysis of prior
experiences and of society as the basis for new academic understanding
and social action (conscientization) (Shor, 1987).
1.11 In a Deweyan classroom, the teacher is an expert who is responsible for
organizing experiences so that students learn content, social and academic
skills, and an appreciation for democratic living. Freire is concerned
that this arrangement reproduces the unequal power relationships that
exist in society. In a Freirean classroom, everyone has a recognized area
of expertise that includes, but is not limited to, understanding and explaining
their own life, and sharing this expertise becomes an essential element
in the classroom curriculum. In these classrooms, teachers have their
areas of expertise, but they are only one part of the community. The responsibility
for organizing experiences and struggles for social change belongs to the entire community;
as groups exercise this responsibility, they are empowered to take control
over their lives.
1.12 We agree with Freire’s concern that teachers address social inequality
and the powerlessness experienced by many of our students. We also recognize that it is difficult to imagine secondary school social
studies classrooms where teachers are responsible for covering specified
subject matter organized directly on Freirean principles. Maxine Greene
(1993a; 1993b;1993c), an educational philosopher who advocates a “curriculum
for human beings” integrating aspects of Freire, Dewey, and feminist thinking,
offers ways for teachers to introduce Freire’s pedagogical ideas into their classrooms.
1.13 Greene believes that, to create democratic classrooms, teachers must learn
to listen to student voices. Listening allows teachers to discover what
students are thinking, what concerns them, and what has meaning to them.
When teachers learn to listen, it is possible for teachers and students
to collectively search for historical, literary, and artistic metaphors
that make knowledge of the world accessible to us. In addition, the act
of listening creates possibilities for human empowerment; it counters
the marginalization experienced by students in school and in their lives,
it introduces multiple perspectives and cultural diversity into the classroom,
and it encourages students to take risks and contribute their social critiques
to the classroom dialogue.
1.14 Greene’s ideas are especially useful to social studies teachers. Just
as historians discuss history as an ongoing process that extends from
the past into the future, Greene sees individual and social development
as processes that are "always in the making." For Greene, ideas,
societies, and people are dynamic and always changing. She rejects the
idea that there are universal and absolute truths and predetermined conclusions.
According to Greene, learning is a search for “situated understanding”
that places ideas and events in their social, historical, and cultural contexts.
1.15 Greene believes that the human mind provides us with powerful tools for
knowing ourselves and others. She encourages students to combine critical
thinking with creative imagination in an effort to empathize with and
understand the lives, minds, and consciousness of human beings from the
past and of our contemporaries in the present. She sees the goal of learning
as discovering new questions about ourselves and the world, and this leads
her to examine events from different perspectives, to value the ideas
of other people, and to champion democracy.
1.16 During the Great Depression, striking Harlan County, Kentucky coal miners
sang a song called “Which Side Are You On?” (lyrics available on the web
at www.geocities.com/Nashville/3448/whichsid.html). In a book he co-authored
with Paulo Freire, Myles Horton (1990) of the Highlander School argued
that educators cannot be neutral either. He called neutrality “a code
word for the existing system. It has nothing to do with anything but agreeing
to what is and will always be. It was to me a refusal to oppose injustice
or to take sides that are unpopular” (p. 102).
1.17 James Banks (1991; 1993), an educational theorist whose focus is on the
development of social studies curriculum, shares the ideas that “knowledge
is not neutral,” and that “an important purpose of knowledge construction
is to help people improve society.” Although Banks is a strong advocate
of a multicultural approach to social studies, he argues that a “transformative”
curriculum depends less on the content of what is taught than on the willingness
of teachers to examine their own personal and cultural values and identities,
to change the ways they organize classrooms and relate to students, and
to actively commit themselves to social change.
1.18 The main ideas about education and society at the heart of the philosophies
of Dewey, Freire, Greene, Horton, and Banks are that society is always
changing and knowledge is not neutral—it either supports the status quo
or a potential new direction for society; people learn primarily from
what they experience; active citizens in a democratic society need to
be critical and imaginative thinkers; and students learn to be active
citizens by being active citizens. Assuming that we agree with these ideas,
we are still left with these questions: How do we translate educational
theory into practice? What do these ideas look like in the classroom?
1.19 Before becoming a teacher educator, Alan Singer promoted transformative
goals in his high school social studies classes through direct student
involvement in social action projects as part of New York State’s “Participation
in Government” curriculum. In New York City, periodic budget crises, ongoing
racial and ethnic tension, and the need for social programs in poor communities
provided numerous opportunities to encourage students to become active
citizens. Class activities included sponsoring student forums on controversial
issues, preparing reports on school finances and presenting them as testimony
at public hearings, writing position papers for publication in local newspapers,
and organizing student and community support for a school-based public
health clinic. One of our most successful programs was organizing students
across the city to struggle for a condom availability program in the high schools.
1.20 During each activity, social studies goals included making reasoned decisions
based on an evaluation of existing evidence, researching issues and presenting
information in writing and on graphs, exploring the underlying ideas that
shape our points of view, giving leadership by example to other students,
and taking collective and individual responsibility for the success of the projects.
1.21 Singer now works with a number of teachers who are part of the Hofstra
University New Teachers Network and who share a commitment to empower
students as social activists and critical thinkers. Michael Pezone is a
high school social studies teacher in a working-class, largely African
American and Caribbean public high school in New York City where many
of his students have histories of poor performance in school. Pezone is
a former student in the Hofstra University School of Education and Allied
Human Services, a cooperating teacher in the program, and a mentor teacher
in our alumni group. Virtually every social studies teacher education student
in the Hofstra program at one time or another visits Pezone’s classroom,
where he has involved his students and the pre-service teachers in exploring
the possibility of political action.
1.22 During the Fall semester of 2001, in response to the destruction of the
World Trade Center, the New York City Board of Education required all
public schools to lead students in the Pledge of Allegiance at the beginning
of each school day and at all school-wide assemblies and school events
(Pezone, 2002). Pezone’s students were confused about the law governing
behavior during the flag salute and concerned with defending the first
amendment rights of fellow students. They contacted the New York Civil
Liberties Union to clarify legal issues and learned that participation
was not required by law. They decided to monitor both compliance with
the directive’s requirement that the Pledge of Allegiance be recited each
day and freedom of dissent. They also circulated a questionnaire in the
school that asked students about their opinions on the issues, encouraged
students to behave respectfully and responsibly during the pledge, informed
them of their legal right not to participate, and asked them to report
violations of the law. The results of the student survey and student comments
were later distributed in the school’s magazine.
1.23 The next year (Fall, 2002), New York City initiated a new metal detector
program that made students up to one hour late for class every morning.
Pezone’s students organized to petition fellow students while they were
waiting for admission to the building. As a result of their efforts, the
problem was highlighted on a television news broadcast and finally addressed
by district administrators.
1.24 At the center of Pezone’s pedagogy is a project he calls the democratic
dialogue (Pezone and Singer, 1997; Pezone, Palacio & Rosenberg, 2003).
It has been adopted by a number of colleagues in the New Teachers Network
who first participated in the project when they visited Pezone’s classroom.
Pezone believes that the success of the dialogues depends on the gradual
development of caring, cooperative communities over the course of a year.
To encourage these communities, he works with students to create an atmosphere
where they feel free to expose their ideas, feelings, and academic proficiencies
in public without risking embarrassment or attack and being pressed into
silence. He stresses with students that the dialogues are not debates;
that as students learn about a topic, the entire class “wins or loses.”
1.25 The student dialogues are highly structured. Pezone believes that structure
maximizes student freedom by insuring that all students have an opportunity
to participate. It also helps to insure that classes carefully examine
statements, attitudes, and practices that may reflect biases and demean others.
1.26 Pezone uses dialogues to conclude units; however, preparation for the
dialogues takes place constantly. At the start of the semester, he and
his students decide on the procedures for conducting dialogues so that
everyone in class participates and on criteria for evaluating team and
individual performance. Usually students want the criteria to include
an evaluation of how well the team works together; the degree to which
substantive questions are addressed; the use of supporting evidence; the
response to statements made by the other team; whether ideas are presented
effectively; and whether individual students demonstrate effort and growth.
These criteria are codified in a scoring rubric that is reexamined before
each dialogue and changed when necessary. Students also help to define
the question being discussed. After the dialogue, students work in small
groups to evaluate the overall dialogue, the performance by their team,
and their individual participation.
1.27 During a unit, the class identifies a broad social studies issue that
they want to research and examine in greater depth. For example, after
studying the recent histories of India and China, they discussed whether
violent revolution or non-violent resistance is the most effective path
to change. On other occasions they have discussed if the achievements
of the ancient world justified the exploitation of people and whether
the United States and Europe should intervene in the internal affairs
of other countries because of the way women are treated in some cultures.
1.28 The goal of a dialogue is to examine all aspects of an issue, not to score
points at the expense of someone else. Teams are subdivided into cooperative
learning groups that collect and organize information supporting different
views. The teams also assign members as
either opening, rebuttal, or concluding speakers. During dialogues, teams
“huddle-up” to share their ideas and reactions to what is being presented
by the other side. After dialogues, students discuss what they learned
from members of the other team and evaluate the performance of the entire class.
1.29 An important part of the dialogue process is the involvement of students
in assessing what they have learned. In Pezone’s classes students help
develop the parameters for class projects and decide the criteria for
assessing their performance in these activities. The benefit of this involvement
for students includes a deeper understanding of historical and social
science research methods; insight into the design and implementation of
projects; a greater stake in the satisfactory completion of assignments;
and a sense of empowerment because assessment decisions are based on rules
that the classroom community has helped to shape.
1.30 Pezone uses individual and group conferences to learn what students think
about the dialogues and their impact on student thinking about democratic
process and values. Students generally feel that the dialogues give them
a personal stake in what happens in class and they feel responsible for
supporting their teams. Students who customarily are silent in class because
of fear of being ridiculed or because they are not easily understood by
the other students, become involved in speaking out. For many students,
it is a rare opportunity to engage in both decision making and open public
discussion “in front of other people.”
1.31 From the dialogues, students start to learn that democratic society involves
a combination of individual rights and initiatives with social responsibility,
collective decision-making, and shared community goals. They discover
that democracy frequently entails tension between the will of the majority
and the rights of minorities and that it cannot be taken for granted.
It involves taking risks and is something that a community must continually
work to maintain and expand. Another benefit of the dialogue process is
that it affords students the opportunity to actively generate knowledge
without relying on teacher-centered instructional methods.
1.32 Pezone finds that the year long process of defining, conducting, and evaluating
dialogues involves students in constant reflection on social studies concepts,
class goals, student interaction, and the importance of community. It
makes possible individual academic and social growth, encourages students
to view ideas critically and events from multiple perspectives, and supports
the formation of a cooperative learning environment. He believes that
when students are able to analyze educational issues, and create classroom
policy, they gain a personal stake in classroom activities and a deeper
understanding of democracy.
1.33 A number of the teachers related to the Hofstra New Teachers Network consider
themselves transformative educators, yet none of them, including either
of us, has created a model transformative classroom. It may simply be
that, although the educational goals discussed above provide a vision
of a particular kind of classroom, transformative education, like history,
is part of a process that is never finished.
Banks, J. (1991). A curriculum for empowerment, action and change. In C. Sleeter (Ed.), Empowerment through multicultural education (pp. 125-142). Albany, NY: SUNY Press.
Banks, J. (1993). The canon debate, knowledge construction, and multicultural education. Educational Researcher.
Bowles, S., & Gintis, H. (1976). Schooling in capitalist America: Educational reform and the contradictions of economic life. New York: Basic Books.
Dewey, J. (1939). Freedom and culture. New York: G. P. Putnam’s Sons.
Durkheim, E. (1897/1951). Suicide: A study in sociology. New York: Free Press.
Freire, P. (1970). Pedagogy of the oppressed. New York: Seabury.
Freire, P. (1995). Pedagogy of hope. New York: Continuum.
Greene, M. (1993a). Diversity and inclusion: Towards a curriculum for human beings. Teachers College Record, 95(2), 211-221.
Greene, M. (1993b). Reflections on post-modernism and education. Educational Policy, 7(2), 106-111.
Greene, M. (1993c). The passions of pluralism: Multiculturalism and expanding community. Educational Researcher.
Horton, M., & Freire, P. (1990). We make the road by walking. Philadelphia: Temple University Press.
Kohn, A. (1999). Punished by rewards: The trouble with gold stars, incentive plans, A's, praise, and other bribes. Boston, MA: Houghton Mifflin.
The New York Times (1953, September 15). Horace Mann.
Pezone, M. (Summer-Fall, 2002). Defending First Amendment rights in schools. Social Science Docket, 3(1).
Pezone, M., Palacio, J., & Rosenberg, L. (Winter-Spring, 2003). Using student dialogues to teach social studies. Social Science Docket, 3(1).
Pezone, M., & Singer, A. (February 1997). Empowering immigrant students through democratic dialogues. Social
Shor, I. (1987). Educating the educators: A Freirean approach to the crisis in teacher education. In I. Shor (Ed.), Freire for the classroom (pp. 7-32). Portsmouth, NH: Heinemann.
Humans depend on the natural world for food, fuel and other resources. If they are depleted or altered then there is a cost.
Some species are pests of agricultural or other products, which can dramatically reduce yields and the income they generate.
By studying these species, Museum scientists can learn more about their distribution and behaviour, and suggest ways of helping to control them. Find out more about the species below.
Agathiphaga vitiensis is an unusual moth found in the south western Pacific. Its larvae live on Kauri pine trees, where they develop in seeds within the pine cone. It is rarely seen in the wild, but in the 1970s, Museum scientists raised adults from infested pine seeds. This primitive moth is now helping scientists understand the evolutionary relationships between moth species and their close relatives, the caddis flies.
The spiralling whitefly, Aleurodicus dispersus, is a widespread pest that is costing millions of dollars in lost yield in agricultural crops across the tropics. Its widespread distribution is an example of failed quarantine procedures. Find out more about the spiralling whitefly and what is being done to manage its impact.
Leafcutter ants are the subject of the Veolia Environnement Wildlife Photographer of the Year 2010 winning photograph. Ants in the genus Atta harvest leaves to cultivate fungus that they then eat. Castes of ants fulfil a range of tasks including collecting vegetation, tending fungus gardens, construction and defence. Find out more about this fascinating species.
Azolla filiculoides is a tiny invasive fern that has spread around the world, but can be put to good use. It thrives in nutrient-rich ponds and ditches, where it forms potentially damaging thick mats of foliage. Azolla can be a useful fertiliser, thanks to a nitrogen-fixing alga that lives in it, and so is often used in paddy fields to improve rice yields. Take a closer look at this floating fern.
Buddenbrockia plumatellae is a tiny worm, parasitic in freshwater bryozoans, that has remained so poorly understood since it was first described in 1910 that it has only recently been assigned a place in the animal kingdom. Find out more about this species.
The Chacoan peccary, Catagonus wagneri, was discovered in the remote Chaco forest in Paraguay in 1975. Before its surprise discovery, scientists assumed it was extinct, as it was only known from fossilised remains. Find out what threatens the Chacoan peccary today and what more can be done to protect this living fossil.
Ceiba chodatii is a deciduous tree commonly found in the Dry Chaco of Paraguay. Its characteristic bottle-shaped trunk makes it easy to spot and helps it survive in its arid habitat. Local people use the tree in a variety of ways, from crafting canoes to curing headaches. Read on to discover more about palo borracho (the drunken tree) and its many uses.
Conopeum seurati is found in estuarine habitats from Northern Europe to as far as New Zealand. Find out more about this bryozoan.
Find out more about the widespread Dendrolimus pini, including how it came to be discovered in Britain and the impact it is having on forestry trees.
Derogenes varicus is a tiny flatworm that lives in fish inhabiting cold waters around the world. The flatworm has a complex lifecycle that involves three hosts: a marine snail, another invertebrate and a fish. The worm's tail is central to its success; it uses it for protection, propulsion and as a way of infecting one of its hosts. Read on to find out more about this intriguing parasite.
Fredericella sultana is a freshwater bryozoan commonly found in flowing or turbulent waters such as rivers and streams. It forms colonies of identical zooids, can resemble plants and is preyed on by waterfowl and fish. Find out more about one of the most common freshwater bryozoans in the world, Fredericella sultana.
Ivy is one of the few woody vines growing in Britain, and is commonly found as a ‘living curtain’ clinging to buildings and trees. It flowers and produces fruit late in the year and is often used as part of Christmas decorations. There are many ivy varieties, but it is still unclear how many species exist. Find out more about this climbing evergreen plant.
Melolontha melolontha is the largest species of chafer beetle in the UK. It is seen flying between the months of May and July and often enters homes through open windows or chimneys, attracted by the artificial light. Find out more about this species.
Paralecanium expansum metallicum is a scale insect with a remarkable appearance, like a splash of metal solder on the surface of the leaves it feeds on. Find out more about this species.
Tetracapsuloides bryosalmonae is a multicellular endoparasite known for causing proliferative kidney disease in trout and salmon, which has resulted in significant economic losses for aquaculture and has threatened wild fish populations. Find out more about Tetracapsuloides bryosalmonae.
The pineapple, Ananas comosus, was first discovered by the Tupi-Guaraní Indian tribe in what is now Paraguay. Ananas comes from the Tupi word meaning excellent fruit, and the name pineapple was coined by European explorers who noticed the fruit’s similarity to pine cones. Find out more about this succulent fruit and its many uses.
Ilex paraguariensis is a member of the holly family and is an evergreen tree that grows up to 18 metres tall. For generations, it has been used in Paraguay and other parts of South America to make a herbal infusion known as yerba mate, or Paraguayan tea. Discover the plant’s many beneficial properties and how yerba mate has become one of South America’s biggest exports.
Leptoglossus occidentalis is an invasive insect that has spread from North America to Europe in the last 10 years. It is an agricultural pest that feeds on pine trees and can cause significant seed loss in commercial tree crops such as the Douglas fir. Find out more about this 'leaf-footed' bug.
Macrocystis pyrifera, is a giant among seaweeds. Its fronds can grow up to 45m long in a single season and it forms extensive underwater forests that create the base for an ecosystem of hundreds of marine animals. For years it has been harvested for commercial purposes. Find out how this seaweed is exploited, and what threatens its survival.
Sminthurus viridis is a springtail species that is native to Europe but, since its introduction to the southern hemisphere, has become an agricultural pest. This tiny animal can decimate crops such as clover and lucerne as numbers reach a million per square metre. Discover more about the life of the lucerne flea, and how recent DNA studies are helping scientists explore the springtail's evolutionary relationship with insects.
The capercaillie is a most charismatic grouse, found in Scotland’s pinewood forests. It feeds on plants, seeds and even pine needles. The birds use open spaces within the woodland to perform an unusual mating ritual called ‘lekking’. Discover more about the habits of this majestic bird and find out what conservation efforts are underway to bolster its dwindling numbers.
Welwitschia mirabilis is a remarkable plant that can live for over a thousand years in an inhospitable desert habitat. It has a short stem and 2 huge leaves that take in water from fog and dew. The plant is a relic from the Jurassic Period and has changed little over millions of years. Discover more about the life of this ‘living fossil’. |
Could power plant waste help cut water pollution?
November 18, 2016
(sciencemag.org Nov. 17, 2016) CAMPBELLSPORT, WISCONSIN—When it rains, a river flows through a shed on Dan Johnson’s farm here. The runoff trickles through his crop fields, then beneath a small white structure where a pump sucks up small water samples. When the water fills a 20-liter jug, researchers collect it and test it for the presence of phosphorus.
The setup is part of an experiment aimed at testing an unusual water pollution control scheme that uses gypsum, a waste product from coal-fired power plants, to reduce nutrient runoff from farms.
Here in a heartland of U.S. agriculture, a growing number of farmers are spraying manure produced by animal feeding operations—which can raise thousands of animals on relatively small plots of land—across vast swaths of cropland. The phosphorus and nitrogen in the dung help fertilize crops. But when the nutrients wash into waterways, they can spur algal blooms that ultimately suffocate aquatic ecosystems.
To prevent such damage, researchers have long sought ways to keep nutrients from leaching from farm soils. And recently, they've taken a fresh look at using gypsum, a soft white or gray mineral also known as calcium sulfate dihydrate, to help keep phosphorus where it is wanted.
It’s an old concept. U.S. farmers have been treating fields with gypsum since George Washington was president. In part, that’s because sulfate in the gypsum binds with magnesium in the soil, helping the soil hold water. But pollution specialists are more interested in the calcium in the gypsum; it binds with phosphate in soil, forming a larger particle that resists being washed away.
Fertilizer companies once routinely mixed gypsum into their products, but that practice faded. And because the mineral traditionally came from mines, shipping was prohibitively expensive. “Somewhere in that shuffle we forgot about gypsum,” says Francisco Arriaga, a soil scientist at the University of Wisconsin (UW) in Madison.
But researchers saw a possible comeback for gypsum as nutrient pollution problems grew and coal-fired power plants proliferated. Many plants are fitted with devices—dubbed scrubbers—that use lime to remove pollutants. The chemical reactions involved produce a form of gypsum known as flue-gas desulfurization (FGD) gypsum. A lot of FGD gypsum ends up in landfills, but companies also use it to make wallboard and cement. In part, that’s because it is relatively cheap: A ton of mined gypsum can cost as much as $140, whereas a ton of FGD gypsum costs $38.
Farmers are also allowed to use FGD gypsum on their fields, and over the past decade scientists have begun research projects in seven states examining how it affects crops and soils. Here in Wisconsin, Dan Johnson’s farm is one of three study sites selected by Arriaga and other UW researchers. They are working in cooperation with the Sand County Foundation, a nonprofit based in Madison, and We Energies, an energy company that runs a coal-fired power plant just 160 kilometers away in Milwaukee, Wisconsin.
On Johnson’s farm, the potential for polluted runoff from steep slopes is high, making it an ideal study site, says Greg Olson, the field projects director for the Sand County Foundation. The project is of particular interest to Olson’s group because of growing concern about polluted runoff creating oxygen-poor dead zones in the Great Lakes.
In 2014, Johnson started applying gypsum, obtained from a We Energies facility just 10 kilometers from his farm, to about 4 hectares. (The treatment lasts 2 to 3 years.) “We’re doing a chemistry experiment in the soil,” Olson says. Gypsum can not only make phosphorus particles “less mobile,” but also increase the amount of water available to crops and reduce runoff.
Preliminary results—which Arriaga will present at a Soil Science Society of America conference this month—suggest gypsum is helping keep phosphorus in Johnson’s soils. And previous field experiments, including projects in Georgia and Ohio, have found that the mineral can also reduce levels of toxic aluminum and pathogens in soils, as well as provide a source of calcium and sulfur, two nutrients plants need to grow. (Now that power plants emit less sulfur, which used to fall back to land in the form of acid precipitation and dust, some soils are deficient in that nutrient.)
Still, FGD gypsum may have some downsides. For one, if heavy rains wash gypsum into waterways, it could liberate phosphorus stored in river sediments, adding the nutrient to the water column. (Heavier rains are one anticipated effect of climate change in the Midwest.)
This story was made possible in part by reporting support from the Institute for Journalism & Natural Resources. |
In this quick tutorial you'll learn how to draw a Cuttlefish in 5 easy steps - great for kids and novice artists.
The images above represent how your finished drawing is going to look and the steps involved.
Below are the individual steps - you can click on each one for a High Resolution printable PDF version.
At the bottom you can read some interesting facts about the Cuttlefish.
Make sure you also check out any of the hundreds of drawing tutorials grouped by category.
How to Draw a Cuttlefish - Step-by-Step Tutorial
Step 1: A cuttlefish is an aquatic animal related to the squid. To draw one, let's start with the head. Make two humps for the top of the head. Then draw a long curve that bends right and down.
Step 2: Now draw the face tentacles. Each tentacle is just two lines that meet at the end. Try to make the endings of your tentacles bend, and point in slightly different directions.
Step 3: Cuttlefish have very special, weird looking eyes. Make a circle with a pie-piece missing from the top to make the outside of the eye. Then draw a small sideways hourglass shape to make the interior of the eye, and you're done!
Step 4: Now draw the body. It will be one long loop that starts with a flat line that is drawn to the left, from above the head. Near the end there should be a small dip down, before turning up. Then there should be a gentle curve to make the rear of the animal, and a curved line should bend down and to the right to finish.
Step 5: Now draw the 'skirt' around the body of your Cuttlefish. It should be a wobbly line that starts at the divot you made near the back of the animal, that curves up, around, and then below the Cuttlefish to then join to the body just left of the head. Then draw another line between the skirt and the lower tentacles, and you're done! You've drawn a Cuttlefish.
Interesting Facts about Cuttlefish
Cuttlefish are not actually fish. Cuttlefish are mollusks. They live in ocean waters. Cuttlefish have green blood and three hearts. They have the ability to change their color and pattern in order to attract a mate and camouflage with their surroundings. Sometimes male cuttlefish make themselves look like females so that they can steal another male’s mate.
Did You Know?
- Cuttlebone is the oval-shaped, calcium-rich skeleton of the cuttlefish.
- Cuttlefish eat small fish, crabs, and shrimp. They are carnivores.
- Cuttlefish can see backwards, but they cannot see colors.
- Female cuttlefish lay about 200 eggs at a time, and they die soon after laying their eggs.
- There are 120 species of cuttlefish. The largest, appropriately named the giant cuttlefish, is about three feet long. The smallest species is the flamboyant cuttlefish, which is only a few inches long.
- Because cuttlefish are excellent at camouflaging, there might be many more species people have yet to discover.
Cuttlefish are related to octopus and squid. Like their relatives, they have tentacles (which surround their mouth), and they eject ink to scare off predators. Due to their size, they have many predators, which include dolphins, sharks, birds, and humans. If you’ve ever eaten calamari, you might have eaten cuttlefish. |
Official Blog of The Yellin Center for Mind, Brain, and Education
Monday, November 1, 2010
Vocabulary in Middle School
The authors of an article in the October issue of the journal Educational Leadership look at existing research and conclude that "a system of cross-content, whole-school vocabulary instruction can result in better reading comprehension." What do they mean by that? Words that students encounter frequently, in various academic settings, and in somewhat different formats, need to be not just familiar to students, but thoroughly understood. They recommend The Academic Word List as one source of these words - such words as "distribute," "perceive" and "contrast," which students may encounter in such diverse subjects as literature, science, and history. They go on to note that it is important to consider the difficulty and frequency of specific words in an academic context and suggest such online tools as Word Count to help select those words which students might most benefit from studying and understanding. The authors found that students need multiple exposures to these important words, across content areas, to fully understand their meaning. They suggest that vocabulary instruction be limited to only a few words each week, with teachers of different subjects using the same words (hence the "whole school" aspect of this instruction) and demonstrating that they can take on different meanings from one area of academic study to another.
For situations where teachers may have difficulty with defining specific words in a clear enough manner to instruct their classes, the authors suggest such resources as the Longman Dictionary of Contemporary English. The authors conclude their article with lists of resources to support the instruction of vocabulary in middle schools. Even where schools may not adopt this type of program, parents can implement some of its elements at home.
This was one of the most influential maps of North America published in the late 17th Century. Coronelli was a Franciscan monk who trained as a cartographer for the Republic of Venice. In 1681, he was commissioned by King Louis XIV to produce two globes, one celestial and one terrestrial. This map by Coronelli was made a few years later and is cartographically identical to his more famous globe. While the Great Lakes and the Atlantic coastline are mapped fairly accurately, there are a couple of notable errors on this map. Based on what might have been an intentional navigational error by French explorer Rene Robert Cavelier, the Sieur de La Salle, who landed in Texas and claimed it for France in 1685, Coronelli placed the Mississippi River about 600 miles west of its actual location. And note the “island” of California – for almost a century, until the Gulf of California was finally explored, it was not known if California itself was an island or a peninsula.
Did you know that cattle don’t get all their permanent teeth until they’re 5 years old? The lower front teeth, known as incisors, come in over a period of years, 2 pair at a time, starting with the two center teeth. This means that you can tell the age of your animals by how many of the front incisors they have. You can also tell if a cow is nearing the end of her productive years and is ready to be marketed by looking at the condition of her incisors. This 4 minute video from Mississippi State University will show you how to open the cow’s mouth safely to look inside, and then how to figure out how old your animal is based on what you see inside. Enjoy!
A Rotary Or Mechanical Dial is an impulse sending device. When the subscriber lifts his handset, the D.C. loop is completed and steady current flows through the line provided by the exchange. The impulsing cam of the dial breaks the circuit as many times as the number dialed, thus producing pulses of current and sends it over the subscriber's line to the exchange.
The Rotary Or Mechanical Dial consists of a finger plate with ten holes in it. These holes are equally spaced around 2/3rd of the outer ring of the finger plate. The numbers 1 to 0 are written on a number ring below the finger plate. These numbers can be seen through the ten holes of the finger plate. There is a finger stop adjacent to the digit "0". The rotation of the finger plate is clockwise against the tension of a spring that restores it to its normal position after dialing a digit. During the anticlockwise or reverse movement of the finger plate, the speed of the plate is kept constant with the help of a "Governor". The governor is a mechanical device and consists of a number of weights on the spring. It moves along with the dial through a gear assembly.
- When dialing, the interruption of current produces clicks; the off-normal contacts bypass the speech circuit while current from the exchange flows through them, so these clicks are suppressed.
- When the contacts close, the impedance of the line is reduced, hence the current increases. The constructional detail of the rotary or mechanical dial is shown in the figure.
With the premiere of his play The Weavers (in German, Die Weber, 1893), Gerhart Hauptmann was recognized as the top dramatist of his generation. A realistic play based on the meager, dismal lives of the Silesian weavers, it dramatized the weavers’ 1844 revolt. It was considered the most gripping and humane of Hauptmann’s naturalistic dramas. It was also the most objectionable to the political authorities at the time of its publication.
Hauptmann, who was born in Silesia, Prussia, in what is now Germany, studied sculpture in Breslau (now Poland), and science and philosophy in Jena (now Germany). From 1889, he was known as a dramatist writing about contemporary problems. He won the Nobel Prize for Literature in 1912.
The play, written in five acts, was first performed by the Freie Buhne on February 23, 1893. Hauptmann wrote it in a Silesian dialect, but rewrote it in high German flavored with the dialect. Both versions were published in 1892. Hauptmann dedicated the play to his father, whose oral account of the Silesian weavers’ riots provided the playwright with the idea for his play. The Weavers presents contemporary social tensions against this historical background, creating a compassionate dramatization of the crises faced by the weavers. Hauptmann was a literary innovator, presenting the weavers as a collective protagonist. Neither hero nor villain is clearly identified, and the poverty of the newly industrialized society is painful for the characters as well as for the audience. |
In this Ask the Expert, climatologist Brian Fuchs, National Drought Mitigation Center at the University of Nebraska-Lincoln, discusses the U.S. Drought Monitor, the USDA programs triggered by the Monitor data and how the data can be used to inform decision-making by America‘s agricultural producers.
What is the U.S. Drought Monitor (USDM)?
The U.S. Drought Monitor (USDM) is an online, weekly map showing the location, extent, and severity of drought across the United States. It categorizes the entire country as being in one of six levels of drought. The first two, None and Abnormally Dry (D0), are not considered to be drought. The next four describe increasing levels of drought: Moderate (D1), Severe (D2), Extreme (D3) and Exceptional (D4). The map is released on Thursdays and depicts conditions for the week that ended the preceding Tuesday.
Why is the USDM important to agricultural producers?
The USDM provides producers with the latest information about drought conditions where they live, enabling producers to best respond and react to a drought as it develops or lingers. In some cases, the USDM may help a producer make specific decisions about their operation, such as reducing the stocking rate because forage is not growing. For others, it may provide a convenient big-picture snapshot of broader environmental conditions.
Drought not only affects how farmers, ranchers, and forest managers run their day-to-day operations, it also impacts our municipal water supply and water supply in general, the quality of our fish and wildlife habitats, and influences a variety of industries and livelihoods like landscaping, energy production, river navigation, and more.
USDA uses the USDM to determine a producer’s eligibility for certain drought assistance programs, like the Livestock Forage Disaster Program (LFP) and Emergency Haying or Grazing on CRP acres. Additionally, the Farm Service Agency uses the Drought Monitor to trigger and “fast track” Secretarial Disaster Designations which then provides producers impacted by drought access to emergency loans that can assist with credit needs.
How do you determine drought location and severity?
The USDM incorporates varying data – rain, snow, temperature, streamflow, reservoir levels, soil moisture, and more – as well as first-hand information submitted from on-the-ground sources such as photos, descriptions, and experiences. But sometimes the data tell different stories, especially over different lengths of time. For example, a rainy week may erase drought in people’s minds, but may not make up for long-term deficits and may not soak hard, dry soil.
The levels of drought are connected to the frequency of occurrence across several different drought indicators. A Moderate drought is one that happens, statistically, up to 20% of the time, and an Exceptional drought, up to 2% of the time, or once every 50 years. In reality, Exceptional droughts may happen in back-to-back years, or may not happen at all in 100 years.
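The frequency-of-occurrence idea above can be expressed as a simple lookup. In this sketch, the 20% (Moderate) and 2% (Exceptional) cutoffs come from the text; the function name and the intermediate D0/D2/D3 cutoffs are illustrative assumptions, not official USDM thresholds:

```python
def usdm_category(percentile):
    """Map a frequency-of-occurrence percentile to a drought level.

    The 20% (D1) and 2% (D4) cutoffs are stated in the text; the
    other cutoffs here are assumptions for illustration only.
    """
    if percentile <= 2:
        return "D4 Exceptional"
    if percentile <= 5:        # assumed cutoff
        return "D3 Extreme"
    if percentile <= 10:       # assumed cutoff
        return "D2 Severe"
    if percentile <= 20:
        return "D1 Moderate"
    if percentile <= 30:       # assumed cutoff
        return "D0 Abnormally Dry"
    return "None"
```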
Who creates the drought map?
The science that goes into detecting drought is climatology, which involves comparing data over time to understand what is normal for a particular place. The map authors are trained climatologists or meteorologists from the National Drought Mitigation Center at the University of Nebraska-Lincoln (the academic partner and website host of the USDM), the National Oceanic and Atmospheric Administration, and the U.S. Department of Agriculture.
What makes the USDM unique is that it is not a strictly numeric product. The mapmakers rely on their judgment and a nationwide network of 450-plus experts to interpret conditions for each region. They synthesize their discussion and analysis into a single depiction of drought for the entire country.
How does the U.S. Drought Monitor provide information for producers?
Earlier this month, the USDM added a feature where users can view drought data or maps for 323 tribal areas. While this information is not new, we now provide deeper insight into drought conditions and compute statistics for these areas, allowing users to shine a spotlight on how drought affects tribal communities.
The project to enhance tribal area data on the U.S. Drought Monitor site is the latest effort to make the USDM more accessible to historically underserved populations. A Spanish-language version of the USDM was previously created to provide weekly drought information for Spanish-speaking populations in the U.S.
How can people contribute to the USDM?
There are multiple ways to contribute your observations to the USDM process:
- Talk to your state climatologist - Find the current list at the American Association of State Climatologists website.
- Email - Emails sent to [email protected] inform the USDM authors.
- Become a CoCoRaHS observer - Submit drought reports along with daily precipitation observations to the Community Collaborative Rain, Hail & Snow Network.
- Submit Condition Monitoring Observer Reports (CMOR) - go.unl.edu/CMOR.
- Use the drought.gov contact form - Contact us online at www.drought.gov/drought/contact-us.
On farmers.gov, the Disaster Assistance Discovery Tool, Disaster-at-a-Glance fact sheet, and Farm Loan Discovery Tool can help you determine which program or loan options may be right for you. For assistance with a crop insurance claim, contact your crop insurance agent. For FSA and NRCS programs, contact your local USDA Service Center.
Ciji Taylor is a public affairs specialist with USDA |
Data recovery is the process of restoring data that has been lost, accidentally deleted, corrupted or made inaccessible.
In enterprise IT, data recovery typically refers to the restoration of data to a desktop, laptop, server or external storage system from a backup.
Data recovery can be performed on a variety of storage devices including internal storage drive of laptop or desktop, external hard disk drive, solid-state drive, USB flash drive, optical storage medium (CD/DVD/BD), and memory card (SD, SDHC, SDXC).
Most data loss is caused by human error rather than malicious attacks; in reported incidents, human error accounted for almost two-thirds of cases. The most common type of breach occurred when someone sent data to the wrong person.
Other common causes of data loss include power outages, natural disasters, equipment failures or malfunctions, accidental deletion of data, unintentionally formatting a hard drive, damaged hard drive read/write heads, software crashes, logical errors, firmware corruption, continued use of a computer after signs of failure, physical damage to hard drives, laptop theft, and spilling coffee or water on a computer.
The data recovery process varies, depending on the circumstances of the data loss.
Data recovery is possible because a file and the information about that file are stored in different places. For example, the Windows operating system uses a file allocation table to track which files are on the hard drive and where they are stored. The allocation table is like a book’s table of contents, while the actual files on the hard drive are like the pages in the book.
When data needs to be recovered, it’s usually only the file allocation table that’s not working properly. The actual file to be recovered may still be on the hard drive in flawless condition. If the file still exists — and it is not damaged or encrypted — it can be recovered. If the file is damaged, missing or encrypted, there are other ways of recovering it. If the file is physically damaged, it can still be reconstructed.
So, when data is deleted or a drive is formatted, the data is not removed from the storage drive. Rather, it remains intact in the storage medium in an inaccessible state, ready to be overwritten by new data. This marooned data can be retrieved by using data recovery software that uses file signatures to scan the entire storage drive bit by bit. You can then preview and select the required data from the scanned items and recover it to a separate storage drive to avoid overwriting. If the data you are trying to recover has been overwritten or is corrupt, it cannot be fixed by data recovery software.
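The file-signature scanning just described can be illustrated with a minimal sketch. This example only carves JPEG-like regions (real start marker FF D8 FF, end marker FF D9) out of a raw byte buffer; real recovery tools handle many formats, fragmentation, and damaged structures:

```python
# Minimal signature-based carving sketch: find JPEG-like regions in raw bytes.
JPEG_START = b"\xff\xd8\xff"  # JPEG file signature (magic bytes)
JPEG_END = b"\xff\xd9"        # JPEG end-of-image marker

def carve_jpegs(raw):
    """Return (start, end) byte offsets of JPEG-like regions in raw."""
    found = []
    pos = raw.find(JPEG_START)
    while pos != -1:
        end = raw.find(JPEG_END, pos + len(JPEG_START))
        if end == -1:
            break  # start marker without a matching end: stop scanning
        found.append((pos, end + len(JPEG_END)))
        pos = raw.find(JPEG_START, end + len(JPEG_END))
    return found
```

Running `carve_jpegs` over an image of a formatted drive would return the offsets of any intact JPEG data still present, even though the file allocation table no longer references it.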
Budding Software Engineer
Operators are used to perform operations on data stored inside variables. In Python, we learn 7 types of operators, namely:
1. Arithmetic Operators
Arithmetic operators make mathematical operations possible on operands in a program.
I know! I know! This is a basic concept! But let’s make it fun!
Addition 9 Subtraction 8
Multiplication 314.0 Division 2.0
Floor division — // : rounds the result down to the nearest whole number.
Modulus — % : produces the remainder of the numbers.
Floor Division 3 Modulus 1
Exponentiation — **: produces the power of given numbers
Exponentiation 0.025517964452291125 Exponentiation 37.78343433288728
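The outputs above were produced by code that is not shown here. The snippet below is a reconstruction with operand values of my own choosing (10 and 3), so the printed numbers differ from the originals except for floor division and modulus, which happen to match:

```python
a, b = 10, 3
print("Addition", a + b)         # 13
print("Subtraction", a - b)      # 7
print("Multiplication", a * b)   # 30
print("Division", a / b)         # 3.3333333333333335
print("Floor Division", a // b)  # 3  (rounds down)
print("Modulus", a % b)          # 1  (remainder)
print("Exponentiation", a ** b)  # 1000
```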
When it comes to binary numbers, bitwise operators are the choice.
Bitwise operators are used to perform operations on binary numbers.
AND, OR, XOR operators
AND 82 OR 2039 XOR 1957
Ha Ha, surprised about the outputs?!
The outputs are a result of the binary representations of a and b, which get converted back into an integer each time a bitwise operation is performed.
NOT ~ operator inverts all the bits. In Python, the result is the inverted signed number: ~x evaluates to -(x + 1).
Right shift 277 Left shift 4444
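The code behind these outputs is not shown, but the printed results are all reproduced by a = 1111 and b = 1010 with a shift amount of 2 (values inferred from the outputs, so treat them as a reconstruction):

```python
a, b = 1111, 1010  # one pair of values that reproduces the outputs shown
print("AND", a & b)           # AND 82
print("OR", a | b)            # OR 2039
print("XOR", a ^ b)           # XOR 1957
print("NOT", ~a)              # -1112, since ~x is -(x + 1)
print("Right shift", a >> 2)  # Right shift 277
print("Left shift", a << 2)   # Left shift 4444
```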
So, basically, comparison operators are used to compare two values that are numbers.
If we level up to be geeky, comparison operators can also be used to compare other data types.
Now, let’s start with equality checks and I hope you like spider-man movies.
== Equal comparison operator
!= Not Equal comparison operator
Alright, I’m sure that you are aware of how to use other operators to compare two number values, right? OK, now’s the time to level up to be geeky.
For the rest of the operators let us compare the letters from the Alphabet.
Wait, what?! You heard me right!
Let me explain it at the end of this post.
> Greater than comparison operator
False True False
< Less than comparison operator
True True False
>= Greater than or equal to comparison operator
False True True
<= Less than or equal to comparison operator
False True True
Here’s the answer for the above craziness.
When we compare two letters (or characters), it gets converted into ASCII code. You can check the link where the table contains ‘DEC’ (Decimal values) for the characters from Alphabet.
Now that the characters are converted into ASCII code, which is nothing but numbers, we are back to square one. That is, we can compare the values as numbers and return true or false.
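The ASCII-code idea can be checked directly with Python's built-in ord(), which returns a character's code point:

```python
# Characters compare by their ASCII/Unicode code points.
print(ord('A'), ord('B'), ord('a'))  # 65 66 97
print('A' < 'B')   # True: 65 < 66
print('a' > 'B')   # True: 97 > 66 — lowercase sorts after uppercase
print('A' == 'a')  # False: different code points
```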
Assignment operators are used to assign values to variables.
That is to store values in variables we use = assignment operator.
OK, now comes the real fun. Have you ever been tired of typing x = x + 5, where the variable x appears twice? There's actually a shortcut for this called augmented assignment operators.
Augmented assignment operators can be used as a replacement as follows:
x += 3   --->  x = x + 3
x -= 3   --->  x = x - 3
x *= 3   --->  x = x * 3
x /= 3   --->  x = x / 3
x %= 3   --->  x = x % 3
x //= 3  --->  x = x // 3
x **= 3  --->  x = x ** 3
x &= 3   --->  x = x & 3
x |= 3   --->  x = x | 3
x ^= 3   --->  x = x ^ 3
x >>= 3  --->  x = x >> 3
x <<= 3  --->  x = x << 3
Here is the Code and Output
9 6 18 6.0
64 1 0
0 3 24
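The three rows of output above are consistent with runs like the following; the starting values of x (6, 4, and 8) are inferred from the outputs, so treat this as an illustration:

```python
x = 6
x += 3; print(x)   # 9
x -= 3; print(x)   # 6
x *= 3; print(x)   # 18
x /= 3; print(x)   # 6.0 — true division always yields a float

x = 4
x **= 3; print(x)  # 64
x %= 3; print(x)   # 1
x //= 3; print(x)  # 0

x = 8
x &= 3; print(x)   # 0
x |= 3; print(x)   # 3
x <<= 3; print(x)  # 24
```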
So, while coding, make sure you practice using print statements after each operation.
Logical operators are used to combine more than two conditional statements.
These operators are very useful for writing logical conditions in control flow statements of a programming language.
Let’s code them one by one.
and operator returns the boolean value True only if both statements are true.
or operator returns the boolean value True if any statement is true.
not operator is a unary operator that inverts a boolean value: it returns False if the statement is true, and True if it is false.
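A short example of all three logical operators on simple conditions (values chosen for illustration):

```python
x = 7
print(x > 5 and x < 10)  # True: both conditions hold
print(x < 5 or x < 10)   # True: the second condition holds
print(not x > 5)         # False: not inverts the True result of x > 5
```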
Identity operators are used to check whether two objects are the same object or not.
True False True
True False True
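The outputs above were shown without their code; the following sketch (values are my own) illustrates the difference between identity (is) and equality (==):

```python
a = [1, 2, 3]
b = a          # b refers to the same object as a
c = [1, 2, 3]  # c is an equal but distinct object
print(a is b)  # True
print(a is c)  # False — same value, different object
print(a == c)  # True — == compares values, is compares identity
```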
Membership operators are used to test whether a sequence with the specified value is present in the given object.
Let’s go code through each of them.
not in operator
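A quick demonstration of in and not in on a list and a string (values chosen for illustration):

```python
fruits = ["apple", "banana", "cherry"]
print("banana" in fruits)     # True
print("mango" in fruits)      # False
print("mango" not in fruits)  # True
print("an" in "banana")       # True — also tests for substrings
```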
Code along and have fun ;)
Disease has been a part of the human condition since the beginning of recorded history – and no doubt earlier – decimating populations and causing widespread social upheaval. Among the worst infections recorded is the plague which is fairly well documented in the West starting with the Plague of Justinian (541-542 CE) and continuing on through the Black Death (1347-1352 CE). Outbreaks of plague following the Black Death already had a body of literature to draw upon and so, in the West, are also well documented.
The same cannot be said for the plagues of the Near East which claimed millions of lives between 562-1486 CE throughout the regions now known as Iran, Iraq, Syria, Turkey, Lebanon, Israel, Saudi Arabia, and Egypt among others. The initial plague is thought to have been a continuation of Justinian's Plague, although other theories as to origin have been suggested, and the epidemics which followed are considered either a resurgence of this plague or another strain brought to the region through trade or the return of troops from campaign. These outbreaks are sporadically mentioned in histories of the plague owing to a number of factors including:
- Reluctance of Near Eastern writers of primary sources to address the issue
- Terminology used by these writers which confused plague with cholera
- Tendency of Near East scribes to ignore affected regions beyond their own
- Religious interpretation of the outbreaks which ignored practical details
- Lack of translations of primary documents into Western languages
- Reliance of Western historians on earlier Western historians/travel writers
Although later Muslim Arab writers would attempt to chronicle the outbreaks, they were working with meager source material, which was often confusing, and their works are therefore incomplete. Most scholars in the modern-day, therefore, focus on periods within the plague years which are better documented, the most famous being the Plague of Sheroe of 627-628 CE which helped topple the Sassanian Empire (224-651 CE) and contributed to the destabilization of the region. Even so, enough source material does exist to enable one to chart the plagues of the Near East through 1486 CE, by which time more complete records of the disease were being kept.
First Recorded Plague
Plague is defined as a contagious bacterial disease which, since the 19th century CE, is known to be caused by the bacterium Yersinia pestis which was only localized and identified in 1894 CE. Prior to that date, no one knew what caused the plague and it was routinely attributed to the anger of the gods or God for the sins of humanity.
Symptoms of plague include fever, body aches, nausea and vomiting, diarrhea, dehydration and, with bubonic plague, the presence of buboes, swollen nodes of the lymph glands. The three types of plague are bubonic, septicemic (infecting the blood), and pneumonic (infecting the lungs), and all are usually fatal.
The first definitive outbreak of plague was the Plague of Justinian as recorded by the historian Procopius (l. 500-565 CE) which killed an estimated 50 million people. Although this plague is routinely dated to 541-542 CE – the period when it struck Constantinople the hardest – it continued until c. 750 CE. It takes its name from the reign of the Byzantine emperor Justinian I (527-565 CE) and Procopius blamed him for the disease, claiming he had angered God by his unjust and capricious actions.
In his work History of the Wars Volume II, Procopius describes the plague as originating in the East and traveling to Egypt before reaching the Byzantine capital of Constantinople from which it spread further. Procopius describes the disease as sparing no region and respecting no season:
It seemed to move by fixed arrangement and to tarry for a specified time in each country, casting its blight slightingly upon none, but spreading in either direction right out to the ends of the world, as if fearing lest some corner of the earth might escape it. For it left neither island nor cave nor mountain ridge which had human inhabitants; and if it had passed by any land, either not affecting the men there or touching them in indifferent fashion, still at a later time it came back; then those who dwelt round about this land, whom formerly it had afflicted most sorely, it did not touch at all, but it did not remove from the place in question until it had given up its just and proper tale of dead, so as to correspond exactly to the number destroyed at the earlier time among those who dwelt round about. (II.xxii. 7-11, Lewis, 470)
Symptoms began with a fever – which Procopius describes as seeming light at first and barely discernible by doctors – and then fatigue followed by dehydration, the appearance of buboes, delirium or coma, and then death. He writes:
Death came in some cases immediately, in others after many days; and with some the body broke out with black pustules about as large as a lentil and these did not survive even one day, but all succumbed immediately. With many also a vomiting of blood ensued without visible cause and straightaway brought death. Moreover, I am able to declare this, that the most illustrious physicians predicted that many would die who unexpectedly escaped entirely from suffering shortly afterwards and that they declared that many would be saved who were destined to be carried off almost immediately. So it was that, in this disease, there was no cause which came within the province of human reasoning. (II.xxii.30-36, Lewis, 473)
This would be the paradigm that defined later outbreaks of the plague in the Near East. The disease seemed to descend upon a population swiftly, take many lives, and move on. Procopius makes clear that, when it left Constantinople, it traveled to the land of the Persians where it killed many more than it had in the Byzantine Empire.
Djazirah Outbreak of 562 CE
The plague had been present in the East before it arrived at Constantinople, however. Scholar Michael G. Morony, citing the historian John of Ephesus (l. c. 507 - c. 588 CE), notes:
Whenever it invaded a city or village, it fell furiously and quickly upon it and its suburbs as far as three miles. It would not move on until it had run its course in one place. After becoming firmly rooted, it moved along slowly. This allowed word of the plague to precede its arrival. The people of Constantinople learned about the progress of the plague by hearsay over a period of one or two years. (Little, 64)
The Byzantines of Constantinople seem to have felt the plague of the East had nothing to do with them, however, and only found they were in error after it was too late. When it left Constantinople, it returned to the East – following the course described by Procopius – and struck at Mesopotamia, although precisely where is unknown. The later Arab writers describe this as the Plague of Djazirah (also given as Jazeera, “island”) which was their name for Mesopotamia (“the land between two rivers”). Where it first struck and how long it lingered is unknown but, in 562 CE, it killed 30,000 people in the city of Amida (present-day Diyarbakir in southeastern Turkey) and struck again in 599 CE. Morony notes how “there was another outbreak of bubonic plague in 600 CE when many houses were left without inhabitants and fields went unharvested, but we are not told where” (Little, 65). This is characteristic of the reports of plagues in the East because many writers recorded news of the plague without specifying where it had struck unless it was close at hand.
It is for this reason that most discussions of the plague in the Near East focus on Sheroe's Plague because, even though the details of the epidemic itself are often unclear, its effects are certain.
Sheroe's Plague of 627-628 CE
The Plague of Sheroe takes its name from the Sassanian monarch Kavad II (r. 628 CE) whose birth name was Sheroe (also given as Shiroe). Kavad II came to power following the disastrous wars of his father Kosrau II (r. 590-628 CE) who drained Sassanian resources in his efforts to destroy the Byzantine Empire. The Sassanian nobility finally overthrew Kosrau II and crowned the prince Sheroe as Kavad II in his place.
Kavad II had all his brothers, half-brothers, and stepbrothers killed so they could not challenge his claim to the throne and then initiated peace talks with the Byzantines and reconstruction of the many cities damaged or ruined during Kosrau II's wars. He did not have the time to complete any of his plans, however, as the plague – which had been sweeping the region since 627 CE – killed him in the fall of 628 CE, only a few months into his reign. Having executed all the legitimate male heirs who could have then taken the throne, he was succeeded by his seven-year-old son Ardashir III (r. 628-629 CE) whose reign was administered by the vizier Mah-Adur Gushnasp who was quickly overthrown and both he and the young emperor assassinated.
The death of Kavad II, and the aftermath, destabilized the Sassanian Empire which was still trying to recover from the losses incurred by Kosrau II's wars and the plague. When the Arab Muslims invaded during the reign of Yazdegerd III (632-652 CE), the Sassanian Empire had no strength to repel them and so the plague is recognized as contributing to the empire's decline and fall.
Later Plagues as Prelude to Black Death
The plague continued to prowl Mesopotamia afterwards and flared up again in 688-689 CE when the city of Basra alone lost 200,000 people in three days. In 698-699 CE the plague swept through Syria and in 704-705 CE it returned to northwestern Mesopotamia. This trend continued throughout the century until the Great Outbreak of 749-750 CE when the bubonic plague killed millions.
Unfortunately, the specifics of where these plagues struck are not always given, nor are the death tolls, beyond vague references to “many” or “the whole city” or “the region” which do not define which city or region or how many people were lost. There was a flare-up of bubonic plague between 746-749 CE – referred to as the Great Outbreak – in Constantinople, Greece, and Italy, with a death toll upwards of 200,000, but in 750 CE the disease seemed to vanish; for this reason, 750 CE is usually given as the end of the plague, but it is now believed that it only lay dormant before resurfacing as the infamous Black Death. The last date given for plagues in Persia and the Near East, however, is 689 CE. Scholar Ehsan Mostafavi writes:
To the best of our knowledge, there is not any concrete documentation about plague outbreaks and plague's impact on Persia between 689 and 1270 CE; it seems, though, that plague continued to spread throughout Persia, remaining endemic after the outbreaks of 689 CE, until the middle of the thirteenth century. (5)
Why the plague went dormant c. 689 CE in the East and 750 CE in the West is unknown. Theories concerning the effects of weather conditions on the rat population, and on how the rats were or were not transported to new regions, seem untenable, since Procopius clearly states the plague was unaffected by the weather or by any human action.
Black Death of 1346 - c. 1360 CE
For whatever reason, the plague slept until 1218 CE when there was an outbreak in Egypt that claimed 67,000 people before vanishing again (Ben-Menahem, 663). When the plague returned in 1332 CE, it struck in isolated areas at first and then gained momentum, engulfing the East beginning in 1346 CE and spreading to Europe by 1347 CE. This was the bubonic plague, but the epidemic – which became a pandemic – also carried with it the other two types, septicemic and pneumonic, and came from Central Asia, most likely China.
Symptoms began, as with the Plague of Justinian, with a fever, body aches, and fatigue before the buboes emerged in the groin, armpits, and around the infected person's ears. The plague also struck dogs, cats, and other animals – even mice – as was also reported of Justinian's Plague. The mortality rate was astonishing with daily numbers given of 20,000 or more dead and final tallies of between 20-30 million people. Morony comments on the numbers given:
Can we take the numbers recorded in the [primary] sources literally? One cannot discount the presence of rhetorical hyperbole in these accounts, or the fact that the large round numbers they give can only have been estimates at best. There are at least two considerations to remember in dealing with this kind of information. One is that recording the number of fatalities was one of the ways these authors attempted to express the magnitude of the disaster. The other is that the number of fatalities is meaningless in demographic terms without knowing the size of the total population. (Little, 72)
While this observation has merit, it does nothing to diminish the widespread devastation of the human population of the Near East. People died so quickly, and in such large numbers, that proper mortuary rituals had to be abandoned and the dead disposed of as quickly as possible. Even so, as the death toll rose, bodies were simply thrown out of doors, tucked into the corners of buildings or left in alleys, on the porches of churches and mosques, or dragged into fields. Those who died in the street were left there because others were too afraid to go near them. Corpses that were dumped by streams or irrigation canals infected the water which then spread the disease downstream. The stench of the decaying bodies, and the fear of imminent death, made any semblance of returning to one's former daily routine impossible and people tended to avoid the streets of any given city as well as each other.
The plague traveled from the East to the West in 1347 CE to ravage Europe via Genoese ships from the port city of Caffa (also given as Kaffa) on the Black Sea (modern-day Feodosia in Crimea). Caffa had been under siege by the Mongol Golden Horde, who had brought the plague with them, under the command of Khan Djanibek (also given as Jani Beg, r. 1342-1357 CE). As Mongol soldiers began dying of the plague, Djanibek ordered their corpses catapulted over Caffa's walls, infecting the city's population. Merchant ships from Caffa then fled the city for Italy, stopping at Sicily, then Marseilles, and Valencia, from whence the plague spread throughout Europe.
The disease is also thought to have arrived in Europe via the Silk Road with merchants coming from the East and this theory on point-of-origin, along with the Genoese ship account, provided European chroniclers with the starting point for their narratives on the Black Death. There was no need for the author of the Chronicle of the Black Death (c. 1350 CE), for example, to expend effort in researching the origin of the plague in Europe since it was clear it had come “from the east” and any further details were considered irrelevant as the authors focused on the plague's effects. European writers also based their point-of-origin theories on earlier travel writers who reported regions such as Iran as plague free. The work of the Moroccan traveler and writer Ibn Battuta (l. 1304-1368/69 CE) – who reported on the plague in the East – was not available to Western writers.
Even if medieval and Renaissance European writers had made the effort to research plagues in the East, it is doubtful they would have had much success owing to the reasons cited above. As noted, primary documents of the Near East do not always mention the outbreaks and, sometimes, information on the devastation of a region only comes from town or city records or accounts by Muslim writers many centuries later. According to scholars Ahmad Fazlinejad and Farajollah Ahmadi, one difficulty in determining where and when the plague struck in the East is the terminology used:
Early historians used the word 'plague' to refer to any epidemic illness with a large death toll. Muslim writers generally use the Arabic word “ta`un” for “plague” but it seems that they were unable to distinguish plague from cholera because, in many cases describing the Black Death, the term is used interchangeably with the Arabic word “vaba” (cholera). (56-57)
Another difficulty in defining where and when the plague struck, as noted, is simply a scribe's tendency to ignore areas beyond his own town, city, or surrounding region. The religious interpretation of the plague also affected how it was recorded in the Near East. Because it was thought to have been sent by God, scribes tended to focus on how one should respond on the spiritual rather than the physical plane, and precisely where the plague struck, or for how long, was considered less important than how a believer should behave in the face of it. The question of whether God would want one to flee a plague-stricken land or remain, for example, is dealt with at length, while what one should do on a practical level to avoid the disease is ignored; it seems this was not even considered, since the plague was supernatural in origin, sent by divine will.
The plague continued through 1486 CE, though on a much smaller scale (except for periodic flare-ups) than in 1346 - c. 1360 CE, and would continue to make appearances in the Near East up into the 18th, 19th, and early 20th centuries CE before its cause was understood and steps could be taken to control it. Even so, and contrary to popular opinion, the plague continues to significantly affect populations around the world in the present day, many of whom continue to attribute the disease to the will of God and ignore practical measures which would save lives. |
Why have so many of us kept a sketchbook or journal at some point in our lives — no matter how brief — and what did they stimulate in us? The value of using illustration and text in order to make sense of ourselves and our surroundings has been recognized by humanity for much of our art history. In this course, students will keep sketchbooks wherein they use illustration, doodling, text, paint, collage, and/or other mediums, to document and digest the happenings of their current lives. As the pages fill with work, one goal of this course is for students to note and investigate a theme within their sketchbooks, and create a final piece further investigating said theme. But another equally important goal is for students to note their exceptional creative output during our time together, and continue to explore the work within their sketchbooks long after the course has ended. |
A Telescope Made of Moondust
July 9, 2008: A gigantic telescope on the Moon has been a dream of astronomers since the dawn of the space age. A lunar telescope the same size as Hubble (2.4 meters across) would be a major astronomical research tool. One as big as the largest telescope on Earth—10.4 meters across—would see far more than any Earth-based telescope because the Moon has no atmosphere. But why stop there? In the Moon's weak gravity, it might be possible to build a telescope with a mirror as large as 50 meters across, half the length of a football field—big enough to analyze the chemistry on planets around other stars for signs of life.
"If we lift all materials from Earth, we're limited by what a rocket can carry to the Moon," explains Peter Chen of NASA's Goddard Space Flight Center. "But on the Moon, you're absolutely surrounded by lunar dust" – a prized natural resource in the eyes of Chen, an expert in composite materials.
Image: Astronauts erect a telescope on the Moon (artist's concept).
Composite materials are synthetic materials made by mixing fibers or granules of various materials into epoxy and letting the mixture harden. Composites combine two valuable properties: ultralight weight and extraordinary strength. On Earth, for example, bicycle frames made of a composite of carbon fibers and epoxy are favorites of racing cyclists.
Excited, Chen made a small telescope mirror using a long-known technique called spin-casting. First he formed a 12-inch (30-cm) diameter disk of lunar-simulant/epoxy composite. Then he poured a thin layer of straight epoxy on top, and spun the mirror at a constant speed while the epoxy hardened. The top surface of the epoxy assumed a parabolic shape—just the shape needed to focus an image. When the epoxy hardened, Chen inserted it into a vacuum chamber to deposit a thin layer of reflective aluminum onto the parabolic surface to create a 12-inch telescope mirror.
Above: A 12-inch parabolic moondust mirror made by spincasting. The mirror consists of a bottom layer of lunar soil simulant JSC-1A Coarse mixed with a small quantity of carbon nanotubes and bonded with thinned epoxy. Photo credit: Peter C. Chen, NASA/GSFC
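The geometry behind spin-casting can be sketched quantitatively. A liquid spinning at constant angular speed ω settles into the paraboloid z = ω²r²/(2g), whose focal length is f = g/(2ω²), so the required spin rate follows directly from the desired focal length and the local gravity. A minimal Python sketch, assuming lunar surface gravity of 1.62 m/s² (the 2.4 m focal length is an illustrative choice, not a figure from the article):

```python
import math

G_MOON = 1.62  # lunar surface gravity, m/s^2

def spin_rate_for_focal_length(f, g=G_MOON):
    """Angular speed (rad/s) giving a spinning liquid a paraboloid of
    focal length f: the surface is z = omega^2 r^2 / (2 g), so
    f = g / (2 omega^2) and omega = sqrt(g / (2 f))."""
    return math.sqrt(g / (2.0 * f))

def rpm(omega):
    """Convert angular speed in rad/s to revolutions per minute."""
    return omega * 60.0 / (2.0 * math.pi)

# Illustrative example: a mirror with a 2.4 m focal length on the Moon
omega = spin_rate_for_focal_length(2.4)
print(f"{omega:.3f} rad/s = {rpm(omega):.1f} rpm")
```

Note that the focal length depends only on gravity and spin rate, not on the mirror's diameter — one reason the technique scales to very large mirrors.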
The carbon nanotubes make the composite a conductor. Conductivity would allow a large lunar telescope mirror to reach thermal equilibrium quickly with the monthly cycle of lunar night and day. Conductivity would also allow astronomers to apply an electric current as needed through electrodes attached to the back of the mirror, to maintain the mirror's parabolic shape against the pull of lunar gravity as the large telescope was tilted from one part of the sky to another.
To make a Hubble-sized moondust mirror, Chen calculates that astronauts would need to transport only 130 pounds (60 kg) of epoxy to the Moon along with 3 pounds (1.3 kg) of carbon nanotubes and less than 1 gram of aluminum. The bulk of the composite material—some 1,300 pounds (600 kilograms) of lunar dust—would be lying around on the Moon for free.
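As a rough check on those figures, the Earth-launched share of the mirror's mass can be totaled up. The numbers below are the article's, converted to kilograms (the aluminum is taken at its stated upper bound of 1 gram):

```python
# Mass budget for a Hubble-sized moondust mirror (figures from the article)
epoxy_kg = 60.0        # epoxy carried from Earth
nanotubes_kg = 1.3     # carbon nanotubes carried from Earth
aluminum_kg = 0.001    # reflective coating, < 1 gram
dust_kg = 600.0        # lunar dust gathered on site, free

launched = epoxy_kg + nanotubes_kg + aluminum_kg
total = launched + dust_kg
print(f"Earth-launched share: {100 * launched / total:.1f}% of mirror mass")
```

Under these figures, less than a tenth of the mirror's mass has to ride on a rocket, which is the core of the cost argument.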
"I think we've discovered a simple method of making big astronomical telescopes on the Moon at 'non-astronomical' prices," Chen declares. "Building a large space-based astronomical observatory using locally available material is something that is possible only on the Moon. That capability can be a major scientific justification for a return to the Moon."
"It’s a great idea in principle, but nothing is simple on the Moon," cautions physicist James F. Spann, who leads the Space and Exploration Research Office at Marshall Space Flight Center. "Launching a big spinning table to the Moon would be a challenge. If we got the machine spinning in the Moon's dusty environment, how long would it take the dust to settle?" he asks.
Sputtering aluminum vapor onto a large mirror in the presence of ambient dust would be another challenge, because "coating mirrors on Earth is done in a clean environment. There are practical issues about manufacturability that must be resolved."
Despite his concerns, Spann sees real promise in Chen's work and he's enthusiastic about starting out to make simple composite structures on the Moon, such as casting basic blocks from epoxy and lunar dust. "The blocks could be useful for building igloos or habitats for the lunar astronauts," he points out. Then astronauts could work up to making rods, tubes, and other composite structures, to learn how epoxy cures in the Moon's vacuum, and how robust the composites are under solar ultraviolet light. In the end, telescopes might prove practical. "We have a lot of work to do to find out what's possible," he says.
One thing is clear: The sky's the limit, especially when you have so much moondust to work with.
Author: Trudy E. Bell | Editor: Dr. Tony Phillips | Credit: Science@NASA
The paper "Moon Dust Telescopes, Solar Concentrators, and Structures," which Peter C. Chen and three colleagues presented at a poster session at the American Astronomical Society meeting in June 2008, appears here. Chen is now in the process of preparing a technical paper for publication.
For background about lunar simulants, see "True Fakes: Scientists make simulated lunar soil," and "Development of Standardized Lunar Regolith Simulant Materials."
The fact that a spinning liquid naturally assumes a parabolic shape was known at least as early as the nineteenth century. Spin-casting of astronomical telescope mirrors was tried in the 1960s by General Electric (see "Electroforming of Large Mirrors," by F. J. Schmidt, Applied Optics, vol. 5 (5), pp. 719–725, May 1966). The modern pioneer of spin-casting glass mirrors is widely acknowledged to be Arizona physicist Roger Angel at the Steward Observatory Mirror Laboratory; SOML has spin-cast glass as large as 8.4 meters in diameter.
Examples of the kinds of astronomical science that could be done with large telescopes on the Moon are given in Chapter 4 of the report "Heliophysics Science and the Moon" (September 2007) available here.
Presentation transcript: "Copy and Answer on a SEPARATE SHEET"
1 Copy and Answer on a SEPARATE SHEET:
How do you think you did on the ch. 8 quiz?
How long did you study (outside of class)?
Have you been to tutoring since the last test/quiz?
What could YOU do to improve your grade?
What could Coach D do to help you?
2 Vocabulary to know
Demographic Transition
Fertility Rate
Age Structure
Changes in Population
Exponential Growth
Demography
Life Expectancy
Survivorship
3 Chapter 9: The Human Population
Remember to write the slides that show the clipboard symbol. Examples written in italics do not need to be written down. We will just discuss them, along with the other slides.
5 Objectives
Describe how the size and growth rate of the human population has changed in the last 200 years.
Define four properties that scientists use to predict population sizes.
Make predictions about population trends based on age structure.
Explain why different countries may be at different stages of the demographic transition.
6 Studying Human Populations
Demography is the study of the characteristics of populations, especially human populations.
Demographers study the historical size and makeup of the populations of countries to make comparisons and predictions.
Demographers also study properties that affect population growth, such as economics and social structure.
7 Studying Human Populations
Countries with similar population trends are often grouped into two general categories: developed and developing countries.
Developed countries: higher average incomes, slower population growth, diverse industrial economies, & stronger social support systems
Developing countries: lower average incomes, simpler agriculture-based economies, & rapid population growth
8 The Human Population Over Time
Exponential growth in the 1800s (population growth rates increased during each decade)
Mostly due to decreased death rates:
- increases in food production
- clean water
- improvements in hygiene
- safe sewage disposal
- discovery of vaccines
However, it is unlikely that the Earth can sustain this growth for much longer.
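The exponential growth described on this slide can be made concrete with a short calculation: a population growing at a fixed percentage per year doubles on a predictable schedule. A minimal sketch (the 2% rate and 1-billion starting population are illustrative choices, not figures from the slides):

```python
import math

def project(pop0, annual_growth_rate, years):
    """Exponential projection: P(t) = P0 * (1 + r)**t."""
    return pop0 * (1.0 + annual_growth_rate) ** years

def doubling_time(annual_growth_rate):
    """Years for a population to double at a constant annual growth rate."""
    return math.log(2.0) / math.log(1.0 + annual_growth_rate)

# Illustrative: 1 billion people growing at 2% per year
print(f"doubling time: {doubling_time(0.02):.1f} years")
print(f"population after 35 years: {project(1e9, 0.02, 35):.2e}")
```

At 2% annual growth the doubling time is about 35 years, which is why even modest-sounding growth rates produced the population explosion of the 19th and 20th centuries.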
10 Age Structure
Classification of members of a population into groups by age, or the distribution of members of a population in terms of age groups
Helps make predictions
Countries with high rates of growth usually have more young people. Countries that have slow or no growth usually have an even distribution of ages in the population.
Graphed in a population pyramid
11 Survivorship
The % of newborns in a population that can be expected to survive to a given age
Used to predict population trends
To predict survivorship, demographers study a group of people born at the same time & note when each member of the group dies.
12 Survivorship
Type I = most people live to be very old (wealthy developed countries like Japan & Germany)
Type II = populations have a similar death rate at all ages
Type III = many children die (very poor, underdeveloped countries)
Type I & Type III may result in populations that remain the same size or grow slowly.
13 Fertility Rate
The number of births (usually per year) per 1,000 women of childbearing age (usually 15 to 44)
Replacement level is the average number of children each parent must have in order to “replace” themselves. This number is slightly more than 2 because not all children born will survive & reproduce.
14 Migration
Any movement of individuals or populations from one location to another
Movement INTO an area = Immigration
Movement OUT of an area = Emigration (Exit)
15 Life Expectancy
Average length of time that an individual is expected to live
Most affected by infant mortality
Expensive medical care is not needed to prevent infant deaths. Infant health is more affected by the parents’ access to education, food, fuel, and clean water.
16 The Demographic Transition
The general pattern of demographic change from high birth & death rates to low birth & death rates, as observed in the history of more-developed countries
Industrial development causes economic & social progress that then affects population growth rates
17 Stages of the Transition
First Stage: A society is in a preindustrial condition. The birth & death rates are both at high levels. Population size is stable.
Second Stage: A population explosion occurs. Death rates decline as hygiene, nutrition, & education improve. Birth rates remain high.
Third Stage: Population growth slows because the birth rate decreases. Population size stabilizes, but the population is much larger than before the demographic transition.
Fourth Stage: Birth rate drops below replacement level. Population begins to decrease.
It has taken from 1-3 generations for the demographic transition to occur.
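The four stages can be caricatured in a few lines of code. The birth and death rates below (per 1,000 people per year) are illustrative assumptions chosen to mimic the described pattern, not data from the presentation:

```python
# Assumed (birth rate, death rate) per 1,000 people per year, by stage
STAGES = {
    1: (40, 40),  # preindustrial: high birth & death rates -> stable
    2: (40, 15),  # death rate falls first -> population explosion
    3: (20, 12),  # birth rate falls -> growth slows
    4: (10, 12),  # birth rate below replacement -> slow decline
}

def simulate(pop, stage, years):
    """Apply one stage's net growth rate year by year."""
    births, deaths = STAGES[stage]
    for _ in range(years):
        pop += pop * (births - deaths) / 1000.0
    return pop

pop = 1_000_000.0
for stage in (1, 2, 3, 4):
    pop = simulate(pop, stage, 50)
    print(f"after stage {stage}: {pop:,.0f}")
```

Running this shows the qualitative shape of the transition: flat, then explosive growth, then slowing growth at a much larger size, then a slight decline.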
18 Women and Fertility
The factors most clearly related to a decline in birth rates are increasing education & economic independence for women.
In the demographic transition model, the lower death rate of the second stage is usually the result of increased levels of education.
Educated women find that they don’t need to bear as many children to ensure that some will survive. They may also learn family planning techniques.
Women are able to contribute to their family’s increasing prosperity while spending less energy bearing & caring for children.
As countries modernize, parents are more likely to work away from home. If parents must pay for child care, children may become a financial burden rather than an asset.
20 Objectives
Describe three problems caused by rapid human population growth.
Compare population growth problems in more-developed countries and less-developed countries.
Analyze strategies countries may use to reduce their population growth.
Describe worldwide population projections into the next century.
21 Changing Population Trends
Throughout history, populations that have high rates of growth create environmental problems.
A rapidly growing population uses resources at an increased rate & can overwhelm the infrastructure of a community.
Infrastructure is the basic facilities of a country or region, such as roads, bridges, sewers, power plants, subways, schools, & hospitals.
22 Problems of Rapid Growth
Uses resources faster than the environment can renew them
Symptoms of overwhelming populations include suburban sprawl, polluted rivers, barren land, inadequate housing, & overcrowded schools.
23 Shortage of Fuelwood
Standards of living decline when wood is removed from local forests faster than it can grow back.
Fuelwood ensures that people can boil water & cook food. Without fuelwood, people suffer from disease & malnutrition.
24 Unsafe Water
In places that lack infrastructure, the local water supply may be used not only for drinking & washing but for sewage disposal.
The water supply then becomes a breeding ground for organisms that can cause diseases.
Many cities have populations that are doubling every 15 years, & water systems can’t be expanded fast enough to keep up with this growth.
25 Impacts on Land
Arable land is farmland that can be used to grow crops.
Growing populations make trade-offs between competing uses for land such as agriculture, housing, or natural habitats.
For example, Egypt has a population of more than 69 million that depends on farming within the narrow Nile River valley. Most of the country is desert, and less than 4 percent of Egypt’s land is arable. The Nile River valley is also where the jobs are located, and where most Egyptians live. They build housing on what was once farmland, which reduces Egypt’s available arable land.
Urbanization is an increase in the ratio or density of people living in urban areas rather than in rural areas. People often find work in the cities but move into suburban areas around the cities. This leads to traffic jams, inadequate infrastructure, & reduction of land for farms, ranches, & wildlife habitat. Meanwhile, housing within cities becomes more costly, more dense, & in shorter supply.
26 A Demographically Diverse World
Not every country in the world is progressing through each stage of demographic transition.
In recent years, the international community has begun to focus on the least developed countries.
Least developed countries are countries that have been identified by the United Nations as showing the fewest signs of development in terms of income, human resources, & economic diversity.
These countries may be given priority for foreign aid & development programs to address their population & environmental problems.
27 A Demographically Diverse World Populations are still growing rapidly in less developed countries, with most of the world’s population now within Asia.
28 Managing Development & Population Growth
Today, less-developed countries face the likelihood that continued population growth will prevent them from imitating the development of the world’s economic leaders.
Countries such as China, Thailand, & India have created campaigns to reduce the fertility rates of their citizens. These campaigns include public advertising, family planning programs, economic incentives, or legal punishment.
In 1994, the United Nations held the International Conference on Population & Development (ICPD), which involved debates about the relationships between population, development, & the environment.
Many countries favor stabilizing population growth through investments in development, especially through improvements in women’s status.
30 Managing Development and Population
With these goals, worldwide fertility rates are dropping as shown below.
31 Growth Is Slowing
Fertility rates have declined in both more-developed & less-developed regions.
Demographers predict that this trend will continue & that worldwide population growth will be slower this century than in the last century.
If current trends continue, most countries will have replacement-level fertility rates. If so, world population growth would eventually stop.
32 Projections to 2050
Looking at the graph below, most demographers predict the medium growth rate, and a world population of 9 billion in 2050.
33 Define infrastructure & list 4 examples.
The 3 main problems with rapid population growth are ____, ____, and ____.
Land that can be used for growing crops is called _____.
______ is an increase in the ratio or density of people living in urban areas rather than in rural areas.
Students learn about plant reproduction through the use of this worksheet. The worksheet covers how the ovule, which contains the female gamete, is formed and how the embryo grows. After fertilization, the ovary at the base of the flower develops into the fruit, and the ovules become the seeds. The reproductive cycle of flowering plants is also explained in a short video. You can find the worksheet and Answer Key below.
Reproduction is essential for all living things, and plant reproduction is a complex process. In sexual reproduction, the male and female gametes join; pollination is often carried out by wind or by insects, which carry pollen from one plant to another. After pollination and fertilization, seeds are produced in the flower. Flowering plants can also reproduce in other ways, such as vegetative propagation.
In sexual reproduction, plants combine male and female gametes after pollen is carried by wind or insects to other plants. The resulting seeds carry a new combination of the parents' DNA, so each seedling is genetically distinct from its parents. Once the flowers have formed their fruits, the seeds will mature, disperse, and grow into new plants.
Plants can also reproduce asexually. Sexual reproduction creates new individuals from the gametes of two parents, while asexual reproduction creates new individuals without the fusion of male and female gametes; the offspring are clones, genetically identical to the parent. Asexual reproduction can be achieved by binary fission, budding, spore formation, or vegetative propagation.
Plants reproduce in various ways, and the sexual reproduction of flowering plants is broadly similar to that of animals in that two gametes fuse. Many species use both sexual and asexual strategies, and the reproductive life cycle of flowering plants is cyclical. Whether a plant reproduces sexually or asexually affects the genetic makeup of the next generation.
Plant reproduction can thus be either asexual or sexual. Asexual reproduction produces individuals genetically identical to the parent and may occur through binary fission, budding, spore formation, or vegetative propagation. By contrast, sexual reproduction involves the fusion of male and female gametes and ends with the dispersal of seeds, which then germinate.
Sexual reproduction in plants involves the joining of male and female gametes, with insects and wind acting as the main carriers of pollen. Flowering plants reproduce by pollination and by spreading seeds, and their reproductive process is cyclical: the plant grows and develops in one stage, then flowers and reproduces in another.
Sexual reproduction produces new individuals from two parents, and the reproductive cycle of a flowering plant involves five stages, each of which is distinct. The offspring are genetically distinct from their parents. Reproduction is essential for the continuation of the species, and each stage of the cycle is vital to a plant's success.
The most common form of plant reproduction is through seeds; other forms occur asexually, without seeds. Many species can use both strategies, though asexual methods are less effective for some plants and are not always efficient for propagation.
Plants employ a number of sexual reproductive strategies. In most cases, flowering plants (angiosperms) produce seeds within their fruits and then release them. Pollen from the anther contacts the stigma of the female part of the flower, and the fertilized ovule becomes a zygote that develops into the embryo. These reproductive steps are similar across flowering species.
A hardy member of the genus Tulipa in the lily family. Most tulips in gardens and bouquets are varieties of Tulipa gesneriana; they have deep bell-shaped blossoms in gorgeous rich colours, long sword-shaped leaves, and bulbous roots.
Tulips are indigenous to the northern temperate regions of Europe and Asia, and grow most abundantly on the Central Asian steppes. Tulips were apparently first cultivated in Turkey, and Ottoman arts such as pottery and poetry incorporate lovely images of tulips. Ottoman growers developed upwards of 2,000 varieties and gave them romantic names such as Meadow Beauty, Fountain of Life, Diamond's Envy, and Cloth of Love. The flower's heyday occurred during the reign of Ahmet III (1703-1730), a period of Ottoman history known as the Tulip Era.
In 1554 Ogier Ghiselin de Busbecq, ambassador of the court of Ferdinand I in Vienna to Suleiman I the Magnificent in Constantinople, noted the abundance of beautiful flowers that he (erroneously) thought were called "tulipan". Actually, the Turks called the flower "lale"; the confusion apparently arose because de Busbecq misunderstood his interpreter, who was in fact telling him that people wore the flowers in their turbans ("tulipan") as decoration. In any case, the ambassador brought seeds and bulbs back to Vienna with him, and within five years tulips were flourishing in that city.
Botanist Carolus Clusius was given some of the bulbs and in 1592 brought them to Holland, introducing the flower to the country with which it has become inextricably associated. He began to grow tulips in his home garden and in the botanical gardens of Leiden, of which he was director. Over the next decades the Dutch were taken by the blossom, and the flower became a national obsession. A "futures market" in tulips developed, and people began to pay exorbitant prices for bulbs expected to produce exotic variations in colour, size, and shape. Particularly popular were blossoms with colour flecking or striping known as "breaking", now known to be caused by a virus carried by aphids. However the variations came about, the prices paid were high: one bulb is recorded as being sold for "a load of grain, 4 oxen, 12 sheep, 5 pigs, 2 tubs of butter, 1,000 pounds of cheese, 4 barrels of beer, 2 hogsheads of wine, a suit of clothes and a silver drinking cup." Tulipomania raged from 1632 to 1637, when the government intervened and the market subsequently collapsed. The Dutch regained their sanity, but the tulip remains the national flower of the Netherlands and an important export for the country. It was also a last-resort food source during World War II, when the starving Dutch were reduced to eating the bulbs.
Dutch settlers to the United States brought the tulip with them to the New World; Pennsylvania Dutch potters featured tulips in their wares, often called "tulip ware", and Holland, Michigan holds a popular annual tulip festival, as does Ottawa, the capital of Canada. Canada sheltered the displaced Dutch royal family in Ottawa during World War II and played an important role in liberating Holland from German occupation. In appreciation, the royal family presented Ottawa with 100,000 tulip bulbs after the war, and sends thousands more each year. The millions of blooms are at their height of beauty in May.
Every data type within the Java language can be categorized as a primitive data type or a reference data type. As discussed in a prior lesson, there are eight primitive types in Java (byte, short, int, long, float, double, char, and boolean). Every other data type in Java is a reference type. Two examples of reference types have already been discussed: Strings and Arrays.
The primary difference between a primitive type and a reference type is the data that is stored.
A primitive type stores its own data.
For example, when programmers write this:
int myExample = 3;
The variable myExample stores the actual value 3.
A reference type, by contrast, does not store the actual value; it stores a reference to the data. The variable does not hold the data itself, only the location in memory where the data can be found.
String message = "Perfect";
The string “Perfect” is not stored in the variable message.
Instead, the string “Perfect” is stored at some other location in the computer’s memory, and the variable message stores only a reference to that location.
For now, this is all that will be covered about reference types. Since this lesson is aimed at beginners, the deeper significance of reference types will not be discussed further. Just keep in mind the primary difference between primitive types and reference types.
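To make the distinction concrete, here is a small sketch (the class name and values are our own, not from the lesson) showing that copying a primitive copies the value itself, while copying a reference leaves two variables pointing at the same data:

```java
public class ReferenceDemo {
    public static void main(String[] args) {
        // Primitives: each variable stores its own copy of the value.
        int a = 3;
        int b = a;   // the value 3 is copied into b
        b = 5;
        System.out.println(a); // prints 3; changing b did not affect a

        // Reference types: the variable stores only a reference to the data.
        int[] first = {1, 2, 3};
        int[] second = first;  // the reference is copied, not the array
        second[0] = 99;
        System.out.println(first[0]); // prints 99; both variables refer to the same array
    }
}
```

Strings behave the same way: assigning one String variable to another copies the reference, not the characters.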
Thyroid cancer refers to formation of malignant cells in the thyroid gland. Accumulation of these cells forms a cancerous growth in the thyroid which often takes the form of a nodule (thyroid nodule) at the beginning.
There are various types of thyroid cancer, named according to the group of cells affected:
- Papillary thyroid cancer – papillary thyroid cancer is the most common type of thyroid cancer; it mostly affects women aged between 30 and 40. Usually, papillary thyroid cancer grows slowly and tends to have a good prognosis when diagnosed early. The tumor develops in the cells that produce triiodothyronine, one of the thyroid hormones.
- Follicular thyroid cancer – this type of thyroid cancer, although still relatively slow growing, tends to grow more quickly than papillary carcinoma. It occurs mostly in women, principally women over 50 years old. The prognosis of follicular cancers is good when they are diagnosed and treated early. Usually, follicular thyroid cancers develop in the cells of the thyroid gland that produce and secrete thyroxine (thyroid epithelial cells, also called follicular cells or principal cells).
- Medullary thyroid cancer – this form of thyroid cancer is more aggressive than papillary and follicular cancers and has a poorer prognosis. It tends to spread to distant organs in the body: liver, bone, brain, and adrenal medulla. Medullary cancer usually begins in the parafollicular cells (or C cells). This group of thyroid cells produces and secretes calcitonin, a hormone that helps keep blood calcium at normal levels. About 25 percent of medullary thyroid cancers are due to genetic disorders.
- Anaplastic thyroid cancer – unlike papillary carcinoma and follicular thyroid cancer, anaplastic thyroid cancer is a life-threatening condition; it tends to grow rapidly and to resist treatment. Because it metastasizes readily, anaplastic thyroid cancer has a very poor prognosis. Usually, the tumor begins in the follicular cells of the thyroid, and it most often affects people aged 60 and over. Anaplastic thyroid cancer is rare, representing about 3% of all thyroid cancers.
- Thyroid lymphoma – this is a very rare form of thyroid cancer. Usually, the cancer begins in the immune system cells of the thyroid (lymphoma cells). Development of thyroid lymphoma is often associated with a preexisting chronic autoimmune disease of the thyroid gland called thyroiditis (inflammation of the thyroid gland).
Questions and Answers
How do you prove air is matter?
It took mankind tens of thousands of years to figure out that air existed, let alone that it was matter. It was only in recent human history that we figured out anything about air. Proving that air is matter is analogous to today's physics experiments where you cannot see the object of your study, but have to define its properties and its existence from indirect evidence.
We define matter as something which occupies space, is affected by gravity and has weight. Make a vessel that won't collapse if there is no air inside of it. Weigh the vessel when it is full of air. Then pump all of the air out and weigh the vessel again. The difference in weight is the weight of the air.
There is a famous experiment done by Otto von Guericke in 1654 in Regensburg, Germany. Regensburg was a Roman outpost on the banks of the Danube River. If you ever go there, I highly recommend the Wurstkuchl, an 850 year old restaurant near the river. It was there when Guericke was studying air, and he may have had a dinner or two there. Anyway, to prove that air exists and has pressure he made a hollow sphere out of two copper halves and sealed it with a gasket. He used an air pump, which he also invented, to pump the air out of the sphere. Air pressure held the two halves of the sphere together. He then took two teams of horses and had them try to pull the sphere apart. They failed. Guericke then opened a valve that let the air back in, and that is when the sphere fell apart under its own weight. The sphere was 14" in diameter, meaning the air pressure exerted a force of approximately 4.5 tons.
The force would have been the same if one side of the sphere was attached to something fixed, like a really big rock, instead of another team of horses. Guericke might not have understood that or he might have just appreciated the drama of using two teams. Showmanship, you see, is important even in science.
Matter is anything that has mass and takes up space. So, in order to prove that air is matter, we need to prove that air has mass and takes up space. It's easier to prove that air takes up space, so let's do that part of the problem first.
Go and get a balloon. While you're at it, get two balloons. Go ahead and inflate the balloons with air. The balloons get larger as you put air into them. The only way that air could make them get larger is if air takes up space, so half of our proof is complete. Tie the balloons closed so that they stay inflated - we will need both balloons for the second half of this problem.
Although air has mass, a small volume of air, such as the air in the balloons, doesn't have too much. Air just isn't very dense. We can show that the air in the balloon has mass by building a balance. For this, you will need a meter stick, some tape, some string and a sharp needle. Take some of the string and tie one end to the middle of the meter stick. Take the other end of the string and tape it to the top of a table or a counter, just make certain that the meter stick is free to move around. Tie a section of string to each balloon. On one balloon, make an "X" with two pieces of tape (if you want to be fair, you can make a tape "X" on the second balloon as well, but we really only need one). Take the balloons and tie each one to the meter stick, one on each end of the meter stick. Balance the meter stick by repositioning the balloons, if necessary.
So, at the moment, you should have two balloons hanging from a meter stick, one from each end. If one of the balloons changes mass, we will be able to tell because the meter stick will 'tilt' towards the more massive object. So, all you need to do is to let the air out of one of the balloons. Take the needle and CAREFULLY poke a hole in the center of the "X". You don't want to pop the balloon - you just want to make a hole so that the air will leak out. Hopefully, the tape will keep the balloon together...
What happened? If all went well, one balloon lost its air in a very calm, controlled fashion without sending its balloon guts all over the room. The end of the meter stick with the deflated balloon should have risen into the air. It did this because there was less mass in the balloon after it deflated. The only way the balloon could have lost mass is if the air that was inside it has mass.
With this experiment you have shown that air takes up space and has mass, so you have proven that air is matter.
Answer 1 - Brian Kross, Chief Detector Engineer (Other answers by Brian Kross)
Answer 2 - Steve Gagnon, Science Education Specialist (Other answers by Steve Gagnon) |
Contrary to previous thought, a gigantic planet in wild orbit does not preclude the presence of an Earth-like planet in the same solar system – or life on that planet.
What’s more, the view from that Earth-like planet as its giant neighbor moves past would be unlike anything it is possible to view in our own night skies on Earth, according to new research led by Stephen Kane, associate professor of planetary astrophysics at UC Riverside.
The research was carried out against the backdrop of a planetary system called HR 5183, which is about 103 light years away in the constellation of Virgo. It was there that an eccentric giant planet was discovered earlier this year.
Normally, planets orbit their stars on a trajectory that is more or less circular. Astronomers believe large planets in stable, circular orbits around our sun, like Jupiter, shield us from space objects that would otherwise slam into Earth.
Sometimes, planets pass too close to each other and knock one another off course. This can result in a planet with an elliptical or “eccentric” orbit. Conventional wisdom says that a giant planet in eccentric orbit is like a wrecking ball for its planetary neighbors, making them unstable, upsetting weather systems, and reducing or eliminating the likelihood of life existing on them.
Questioning this assumption, Kane and Caltech astronomer Sarah Blunt tested the stability of an Earth-like planet in the HR 5183 solar system. Their modeling work is documented in a paper newly published in The Astronomical Journal.
Kane and Blunt calculated the giant planet’s gravitational pull on an Earth analog as they both orbited their star. “In these simulations, the giant planet often had a catastrophic effect on the Earth twin, in many cases throwing it out of the solar system entirely,” Kane said.
“But in certain parts of the planetary system, the gravitational effect of the giant planet is remarkably small enough to allow the Earth-like planet to remain in a stable orbit.”
The team found that the smaller, terrestrial planet has the best chance of remaining stable within an area of the solar system called the habitable zone — which is the territory around a star that is warm enough to allow for liquid-water oceans on a planet.
These findings not only increase the number of places where life might exist in the solar system described in this study — they increase the number of places in the universe that could potentially host life as we know it.
This is also an exciting development for people who simply love stargazing. HR 5183 b, the eccentric giant in Kane’s most recent study, takes nearly 75 years to orbit its star. But the moment this giant finally swings past its smaller neighbor would be a breathtaking, once-in-a-lifetime event.
“When the giant is at its closest approach to the Earth-like planet, it would be fifteen times brighter than Venus — one of the brightest objects visible with the naked eye,” said Kane. “It would dominate the night sky.”
Going forward, Kane and his colleagues will continue studying planetary systems like HR 5183. They’re currently using data from NASA's Transiting Exoplanet Survey Satellite and the Keck Observatories in Hawaii to discover new planets, and examine the diversity of conditions under which potentially habitable planets could exist and thrive. |
First off, it is important to make a distinction: a physical or hardware port is a socket in which device cables – such as those for monitors, routers, modems – are plugged. A networking port is software-based. These ports allow multiple applications on the same device to access network resources at the same time, and will be the focus of this article.
A port number can be considered an extension or add-on to the already available IP address, which identifies the device that uses the network. To run multiple applications on a network, an IP address alone is not enough. As such, port numbers are used to identify which application or service is using the network.
An IP address can be thought of as a telephone area code, while the port number can be considered the phone number itself. The area code identifies the geographic region (the device’s IP on the network), while the phone number identifies a user in that region (an application used by that particular IP). The combination of IP+port is called a socket.
What Is TCP/IP?
It stands for Transmission Control Protocol/Internet Protocol and is essentially a set of protocols (rules) that work together and govern how devices interact on a network. Nowadays, TCP/IP is the most commonly available and widely used set of protocols (or protocol suite).
Protocol suites such as TCP/IP work in layers so that protocol responsibilities can be delegated efficiently. TCP/IP is generally considered to have four layers:
- Application layer;
- Transport layer;
- Network layer;
- Data link layer
What Are TCP and UDP?
TCP and UDP (User Datagram Protocol) are two protocol types, both present on the transport layer of the TCP/IP suite. There are many other protocols, but these are the two most commonly used on this layer. Each deals with a different type of connection between two devices:
- TCP – used where a reliable connection between two devices is required. Basically, TCP is used when information loss is undesirable during a transfer (e.g. downloading a file);
- UDP – used where a reliable connection is not required. Information loss while streaming a video, for example, can result in a loss of quality. But the video can still be watched;
Whether TCP or UDP is used, a device connects to another using a network socket. TCP and UDP both have their own separate port numbers for applications, ranging from 0 to 65535.
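As a small, hedged illustration of the IP + port "socket" idea (the class name SocketDemo and the use of the loopback address are our own choices for the sketch, not part of any standard), the code below asks the operating system for a free ephemeral port, binds a listener to it, and then connects to that socket locally:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketDemo {
    public static void main(String[] args) throws IOException {
        // Passing port 0 asks the OS for any free ephemeral port.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            System.out.println("Server listening on port " + port);

            // The client connects to an IP + port pair: the socket.
            try (Socket client = new Socket("127.0.0.1", port);
                 Socket accepted = server.accept()) {
                // Each end of the connection is itself identified by IP + port.
                System.out.println("Connected from local port "
                        + client.getLocalPort() + " to 127.0.0.1:" + port);
            }
        }
    }
}
```

Server software that clients must find by number, such as HTTP on 80 or DNS on 53, binds to a fixed well-known port instead of letting the OS choose one.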
Ports 0-1023 are the well-known ports on TCP/IP. They are also called system ports and are used by devices to access various network services. The following ports are the most commonly used by the average web user:
- 20, 21 – File Transfer Protocol (FTP) – designed for transferring files of any kind. It can also act as a file manager (view, add, modify or remove files on a server);
- 22 – Secure Shell (SSH) – a “shell” can be considered the interface of an operating system, which can be graphical (like Windows Explorer), or text-based (in Linux). Secure Shell is a protocol that allows for an encrypted connection to increase security while communicating over an unsecured network;
- 23 – TELNET (TCP/IP Terminal Emulation Protocol) – similar to SSH but without the added security. Can be used by a client device to access a server remotely. Everyone can see what a user is doing just by watching the network communication between the two computers;
- 25 – Simple Mail Transfer Protocol (SMTP) – used by both email clients and servers to understand each other and be able to send emails;
- 53 – Domain Name System (DNS) – associates a server name (www.example.com) with an IP address, so a user is not required to know the IP of the website they are trying to access;
- 80 – HyperText Transfer Protocol (HTTP) – if FTP is used for file transfer, HTTP is used to “transfer” websites (or rather the code that makes up one) from a server to a client;
- 110 – Post Office Protocol (POP3) – a type of protocol used by email clients to communicate with a server. Downloads new email from the email server and then wipes the downloaded emails from the server;
- 123 – Network Time Protocol (NTP) – a protocol used to synchronize clocks between all computers on a network with authoritative sources;
- 143 – Internet Message Access Protocol (IMAP4) – another type of protocol to communicate with an email server, but the emails are not wiped from the server unless they are deleted from the client. This allows users to access their emails across multiple devices.
- 161 – Simple Network Management Protocol (SNMP) – a description of SNMP and its uses can be found here; essentially, it is used to manage devices (such as printers) on a network;
- 443 – HTTP with Secure Sockets Layer (SSL) – HTTP transfers websites and their data insecurely from one computer to another. HTTPS encrypts that data to add a layer of security
Ports 1024-49151 are called registered ports. They are usually used for specific services (such as multiplayer game servers) after they are registered with the IANA, which maintains the official list of registered ports. Ports 49152–65535 are called dynamic or ephemeral ports. They cannot be registered with IANA, and according to Wikipedia:
This range is used for private or customized services, for temporary purposes, and for automatic allocation of ephemeral ports.
What Is Port Blocking?
Firewalls can be used to block incoming and outgoing traffic on ports set by the user. If, for example, somebody does not want to receive emails on their device, they would simply have to block ports 110 and 143.
Schools and enterprises that wish to block access to the web for their students or employees would block port 53. This would leave most users unable to use websites unless they know a site's specific IP address. At the same time, ports 80, 110, 143, and 443 remain unblocked, so web and email access is still possible.
One can also block out certain port numbers and use firewall exceptions to allow communication with specific IP addresses. Thus, a school that wants its students to access only one or two educational websites and blocks all other web traffic would employ these exceptions.
Denial-of-service (DoS) and distributed denial-of-service (DDoS) have one thing in common - they both try to shut down a network or service, or otherwise clog them up and make them difficult to access for legitimate users.… Read More »
IPv4 (Internet Protocol version 4) is the most commonly used version of the Internet Protocol, a group of rules that define how data is sent and received over a network. As its name implies, it… Read More » |
Rockalingua has a free 11-page set of printable activities to teach children Spanish colors and numbers. This is a wonderful set of materials for children who are beginning to learn this vocabulary or as a review. The printable activities correspond to a song and video, allowing for flexibility in presenting the material and using the worksheets.
These printable activity sheets:
– have a corresponding song and video. The music video makes the meaning of the language clear, teaches correct pronunciation and uses the words in context.
– include matching, coloring, answering ¿Cuántos hay? questions, two levels of word search and bingo boards.
– include support for speaking activities.
– use the language in a context by linking colors to fruit and the rainbow and including other relevant language: El arcoíris sale cuando llueve y luego hace sol. Si fuerte quieres crecer, mucha fruta tienes que comer.
– provide visual support for all of the words in all of the activities.
– provide necessary repetition in an entertaining and effective way.
The song, video and activities fit well into any elementary Spanish program and are excellent materials for home schoolers and other parents introducing their children to the language at home. The varied formats and activities create depth of exposure and tap into multiple intelligences and different learning styles.
Link to Spanish Color and Number Activities
Video – Colors and Numbers (you can hear the song in this video).
Activity sheets for the other Rockalingua videos and songs are also available with a subscription. The subscription is one of the best values I have found because the materials are so easy to use and fit with any curriculum. Most important, kids love the songs and learn so much Spanish! |
When you set up virtualization, you are creating a virtual network. Before we discuss a virtual network, let's explore a normal network and see how it works. This will help you understand how a virtual network operates.
When setting up a network, you normally have a server machine and then you have client machines. The client machines connect to the server machine to access physical resources (files, folders, applications, etc.) or networking services (DNS, DHCP, etc.). You need physical machines to run the server operating systems. These machines must be able to handle the higher-end operating systems. When client machines connect to the server systems, they normally connect using the server name or the TCP/IP address of the server.
Virtual networking works a lot like normal networking except that you don't need as much hardware. You instead set up virtual servers that run on your network just like physical servers. The end users (clients) cannot tell the difference between a physical server and a virtual server.
When setting up virtual servers, you assign those virtual servers Ethernet adapters (just like a normal server) and give those Ethernet adapters TCP/IP addresses and Media Access Control (MAC) addresses. When setting up virtual network adapters, keep in mind that you can assign only one virtual network to a physical adapter. Also, wireless network adapters can't be used with Hyper-V virtual machines. You must be physically plugged into your network when setting up virtual networking. Your clients can still be wireless but not your Hyper-V virtual machines.
Utilities are available to help you with the configuration tasks. For example, Microsoft's Virtual Network Manager, shown in Figure 3.13, lets you add, remove, modify, and manage virtual networks from one location.
When discussing virtual networking, there are a few concepts that you need to understand. These concepts will not only help you set up a Hyper-V network, they are also covered in detail on the Hyper-V exam.
Virtual Local Area Network (VLAN) A virtual local area network (VLAN) refers to the virtual network. It is the virtual network that the client machines access to get to their resources and network services.
Virtual Switches Virtual switches help Hyper-V secure and control the network packets that enter and exit the virtual machines. You can limit the communications to or from a virtual machine and the VLAN. When setting up your network adapters, you can associate a single virtual switch with that adapter.
VLAN Tagging One problem that a virtual network could run into is that you have multiple virtual machines using the same physical network adapter. This is where VLAN tagging comes in handy. VLAN tagging allows multiple virtual machines on the same physical machine to use the same physical network adapter in that machine.
External/Private/Internal Settings When setting up your network adapters you have three choices. You can configure the communications to use the External setting, Internal setting, or the Private setting:
- External This option creates a connection from the physical adapter and the virtual machine. It allows a virtual machine to access the network through the network adapter.
- Internal This option allows communications between the virtualization servers and the virtual machines.
- Private This option allows communications only among the virtual machines; they can talk to each other but not to the host or the external network.
PXE Boot Hyper-V supports the Pre-Boot Execution (PXE) environment on the virtual network adapters that you configure. PXE booting allows a network card to be configured without the need of a hard drive or operating system. This enables the network cards to access a network without operating system assistance. The host network must be configured to use PXE if you want to take advantage of this feature.
Virtual Machine Quarantine One advantage to using Hyper-V on Windows Server 2008 is that you get to use many of the services offered with the Windows Server 2008 environment. One of those services is the Network Access Protection (NAP) feature. NAP enables you to quarantine machines that do not meet specific network or corporate policies. The noncompliant machines will not be permitted to access the network until they comply with the organization's policies. NAP is discussed in detail in MCTS: Windows Server 2008 Network Infrastructure Configuration Study Guide by William Panek, Tylor Wentworth, and James Chellis (Sybex, 2008).
In Exercise 3.7 we will use the Virtual Network Manager to configure a network adapter in Hyper-V. The Virtual Network Manager is included with the Hyper-V Manager.
EXERCISE 3.7
Creating a Virtual Network Connection
1. Start the Hyper-V Manager by clicking Start > Administrative Tools > Hyper-V Manager.
2. Open the Virtual Network Manager by clicking Virtual Network Manager in the right-hand window under Actions.
3. Make sure that External is highlighted under What Type Of Virtual Network Do You Want To Create? and click Add.
5. A warning box may appear stating that you will temporarily lose your network connection while the virtual adapter is being configured. Click Yes.
6. Close the Hyper-V Manager. We will discuss how to use and configure this new adapter we just created in Chapter 4, "Creating Virtual Machines."
A solid understanding of virtual networking is critical because the virtual environment runs within the virtual network. Being able to create virtual adapters and set up virtual networking are key components of setting up a virtual environment. Now let's take a look at configuring Hyper-V remotely.
Printed with permission from Wiley Publishing Inc. Copyright 2009. MCTS: Windows Server Virtualization Configuration Study Guide: (Exam 70-652) by William Panek. For more information about this title and other similar books, please visit http://www.wiley.com. |
Lung Cancer Awareness
Lung Cancer, the Number One Cancer Killer
Each year, about 200,000 people in the United States are told they have lung cancer and more than 150,000 people die from this disease. Deaths from lung cancer represent about one out of every six deaths from cancer in the U.S.
Research has found several causes and risk factors for lung cancer. A risk factor is anything that changes the chance of getting a disease. Lung cancer risk factors include—
- Cigarette smoking, the leading risk factor for lung cancer.
- Secondhand smoke from other people's cigarettes.
- Radon gas in the home.
- Things around home or work, including asbestos, ionizing radiation, and other cancer-causing substances.
- Medical exposure to radiation to the chest.
- Chronic lung disease such as emphysema or chronic bronchitis.
- Increased age.
You can reduce your risk of developing lung cancer in several ways.
- Don't smoke. If you do smoke, quit now.
- Avoid secondhand smoke.
- Have your home tested for radon and take corrective actions if high levels are found.
- Be aware of your exposure to radiation from medical imaging. Ask your doctor about the need for medical tests that involve images of the chest.
- Follow health and safety guidelines in the workplace when working with toxic materials.
- Avoid diesel exhaust and other harmful air pollutants.
CDC helps support a national network of quitlines that makes free "quit smoking" support available by telephone to smokers anywhere in the United States. The toll-free number is 1-800-QUITNOW (1-800-784-8669), or visit smokefree.gov.
Different people have different symptoms for lung cancer. Some people don't have any symptoms at all when first diagnosed with lung cancer. Lung cancer symptoms can be due to the direct effect of growth of cancer cells in the lung, or due to the effect of cancer cells that have spread to other parts of the body. Lung cancer symptoms due to growth of cancer cells in the lung may include—
- Shortness of breath.
- Coughing that doesn't go away.
- Coughing up blood.
- Chest pain.
- Repeated respiratory infections such as bronchitis or pneumonia.
These symptoms can happen with other illnesses, too. Talk to your doctor if you have symptoms that concern you.
Lung cancer is treated in several ways, depending on the type of lung cancer and how far it has spread. Treatments include surgery, chemotherapy, and radiation. People with lung cancer often get more than one kind of treatment.
People with lung cancer may want to take part in a clinical trial. Clinical trials study new potential treatment options. Learn more about clinical trials at the National Cancer Institute.
People who have been treated for lung cancer may continue to have symptoms caused by the cancer or by cancer treatments (side effects). People who want information about symptoms and side effects should talk to their doctors. Doctors can help answer questions and make a plan to control symptoms.
For more information about symptoms and side effects, visit the National Cancer Institute's Coping with Cancer.
For information about finding or providing support for people with lung cancer and their caregivers, visit CDC's Cancer Survivorship.
The Aviation world has been all aflutter about Boeing completing a critical part of the 787 Dreamliner testing programs — the all-important flutter tests. They went off without a hitch and the Dreamliner is cleared to fly throughout its normal performance range, which includes speeds up to Mach 0.85 and altitudes just over 40,000 feet. During flight testing the airplane achieved speeds of Mach 0.97 and altitudes of more than 43,000 feet.
Flutter phenomena are seen when vibrations occurring in an aircraft match the natural frequency of the structure. If they aren’t properly damped, the oscillations can increase in amplitude, leading to structural damage or even failure. A similar problem occurs in bridges and brought down the Tacoma Narrows Bridge in 1940.
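The role of damping can be illustrated with a toy model. The sketch below, a rough illustration rather than aircraft data, models a vibrating surface as x(t) = exp(-damping * t) * cos(wt); the sign of the damping term decides whether an excited oscillation dies away or diverges:

```python
import math

def oscillation_envelope(damping, freq_hz=10.0, t_end=2.0, steps=2000):
    """Peak |displacement| over the last 10% of a simulated vibration.

    Positive damping means the oscillation dies out; negative damping
    is the flutter case, where amplitude grows without bound.
    """
    omega = 2 * math.pi * freq_hz
    dt = t_end / steps
    peak = 0.0
    for i in range(int(steps * 0.9), steps):
        t = i * dt
        peak = max(peak, abs(math.exp(-damping * t) * math.cos(omega * t)))
    return peak

assert oscillation_envelope(3.0) < 0.01     # well damped: vibration dies away
assert oscillation_envelope(-3.0) > 100.0   # flutter: amplitude diverges
```

The frequency and damping values here are arbitrary; the point is only the qualitative behavior flight-test engineers are watching for.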
In the video below, astronaut Fred Haise is piloting a Piper PA-30 Twin Comanche during flutter tests NASA conducted with general aviation aircraft in the late 1960s. Once the vibrations are introduced into the tail of the aircraft, in this case the horizontal stabiliser, the flutter increases dramatically, causing tremendous oscillations that make the stabiliser flex as if it were made of rubber.
According to NASA, Haise said of the experience, “I’m fearless, but that scares me.”
That kind of motion is extreme, but flutter phenomena have caused several crashes during the history of aviation. One of the most dramatic was the breakup of a Lockheed F-117 Nighthawk, after a loose elevator led to flutter strong enough to cause a structural failure.
Over the years, engineers have used several techniques to “excite” a flight surface. The purpose is to purposely create oscillations in parts of the wing or tail and then confirm that they damp out quickly and do not lead to flutter. Small thrusters at the end of each wing have been used to introduce a pulse, as have aerodynamic vanes that lead to oscillations. Rotating or vibrating weights inside the wings or tail also have been used. But the most common way to introduce the oscillations is to simply manipulate the flight controls.
A pulse test is perhaps the simplest form of flutter testing and simply consists of a pilot abruptly moving the controls — a deliberate smack forward of the yoke, for example. Engineers on the ground observe how the abrupt motion introduced to the flight surface is damped over time. Once the oscillations disappear in a time the engineers have calculated to be safe enough not to cause any issues, the team moves on to the next test.
During a sweep test, a wide range of oscillations are introduced to a flight-control surface such as the ailerons, rudder or elevator. In modern fly-by-wire aircraft, this sweep across a range of frequencies is done with a device called a function generator. It works in tandem with the computer controlling the fly-by-wire system to introduce frequencies that increase over time. A typical range of frequencies in flight testing may be from 5 to 60 Hertz. Pilots control the device from the cockpit, and engineers monitor both the introduction of oscillations and the damping to ensure a safe result.
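A sweep of that kind can be sketched as a linear chirp. The 5 to 60 Hz range comes from the article; the duration and sample rate below are illustrative assumptions, not anything a real function generator is known to use:

```python
import math

def linear_chirp(t, f0=5.0, f1=60.0, duration=10.0):
    """Sweep-test command signal: a sine wave whose frequency ramps
    linearly from f0 to f1 Hz over `duration` seconds."""
    k = (f1 - f0) / duration                      # sweep rate, Hz per second
    phase = 2.0 * math.pi * (f0 * t + 0.5 * k * t * t)
    return math.sin(phase)

# Sample the command at 1 kHz, as a function generator might feed it
# to the fly-by-wire computer driving a control surface.
signal = [linear_chirp(i / 1000.0) for i in range(10_000)]
```

Engineers would compare the surface's measured response against this commanded input to confirm that every frequency in the band damps out safely.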
Flutter testing has led to changes in several aircraft over the years. During flutter testing of the Boeing 747 in the late 1960s, damping in the wing was not satisfactory in some circumstances, especially with certain fuel loadings. Design changes were made to stiffen the wing structure and no flutter was reported once the fix was made.
So the next time you have a window seat and you’re watching the wing move up and down, you can rest assured — thanks to flutter testing — those movements are totally normal and you won’t have to deal with the same excitement Fred Haise encountered during his test in 1966.
Following the test when the video was shot, Haise went on to become the lunar-module pilot on the ill-fated Apollo 13 mission and was the pilot during the first glide flight of the Space Shuttle. Haise and fellow astronauts conducted several flutter tests of the Enterprise during the space shuttle’s development in the 1970s. |
What is temperature, and why are there three different temperature scales? Young scientists learn the true nature of temperature with an informative video. The narrator discusses all three temperature scales and the relationship between temperature and kinetic energy.
- Use the resource to introduce the Kelvin scale before beginning the Gas Laws
- Show examples of Fahrenheit temperatures and their corresponding Celsius and Kelvin temperatures to give students some perspective
- Explain the conditions present at absolute zero for learners that are not familiar with them
- Side-by-side comparison of the three temperature scales is helpful for pupils unaccustomed to using the Celsius and Kelvin scales
- Illustrations and graphs present the concept of temperature as an average of kinetic energy of particles |
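A side-by-side comparison like the one suggested above can be generated with the standard conversion formulas; a short Python sketch:

```python
def fahrenheit_to_celsius(f):
    """C = (F - 32) x 5/9."""
    return (f - 32.0) * 5.0 / 9.0

def celsius_to_kelvin(c):
    """K = C + 273.15."""
    return c + 273.15

# Freezing point of water, normal body temperature, boiling point of water.
for f in (32.0, 98.6, 212.0):
    c = fahrenheit_to_celsius(f)
    print(f"{f:6.1f} F = {c:6.1f} C = {celsius_to_kelvin(c):6.2f} K")
# prints:
#   32.0 F =    0.0 C = 273.15 K
#   98.6 F =   37.0 C = 310.15 K
#  212.0 F =  100.0 C = 373.15 K
```

Note that only the Kelvin scale starts at absolute zero, which is why it pairs naturally with the kinetic-energy view of temperature in the video.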
X-Linked Agammaglobulinemia (XLA) is an inherited immunodeficiency in which the body is unable to produce the antibodies needed to defend against bacteria and viruses.
Frequently called Bruton's Agammaglobulinemia, XLA is caused by a genetic mistake in a gene called Bruton's Tyrosine Kinase (BTK), which prevents B cells from developing normally. B cells are responsible for producing the antibodies that the immune system relies on to fight off infection.
The most common bacteria causing infection in XLA are Streptococcus, Staphylococcus and Haemophilus.
XLA Symptoms & Diagnosis
XLA often becomes apparent in infancy due to recurrent and severe bacterial infections including:
• Ear infections
• Diarrhea due to a parasite called Giardia
When a baby is first born, it is protected from infection by IgG antibodies that are passed through the placenta from the mother. This maternal IgG only lasts for several months, and then the infant needs to start producing antibodies on its own. An infant with XLA cannot do this, and becomes susceptible to these recurrent infections.
XLA can be detected through screening tests that measure immunoglobulin levels or the number of B cells in the blood.
XLA Treatment & Management
There is no cure for XLA, but the condition can be successfully treated. Immunoglobulin replacement therapy is a life-long and life-saving treatment that restores some of the missing antibodies. In addition, some people benefit from a daily course of oral antibiotics to prevent or treat infections.
Most individuals with XLA who receive immunoglobulin on a regular basis can lead relatively normal lives.
Live viral vaccines, such as those for polio, measles, mumps or rubella, are not considered safe for people with XLA. Though rare, these vaccines can infect the recipient with the very disease they were intended to prevent. This is true for most B and T cell immune defects.
To learn more about PIDDs visit the Immune Deficiency Foundation website. |
Fifth Grade (5th)
NUMBERS AND OPERATIONS
Access the mathematical lessons on Numbers and Operations in Fifth Grade, with which to learn and develop number sense (numerical literacy) and operability (numerical concepts, properties, strategies and procedures):
"Good teachers are expensive, but bad teachers are even more so"
QUANTITIES AND MEASUREMENTS
Choose any of the teachings and lessons from Quantities and Measurements in 5th Grade (also known as "Grade 5" or "Year 6" in different educational systems) to enjoy thousands of free worksheets ready to download and print in PDF format:
Units of Length
Units of Capacity
Units of Mass or Weight
Units of Time
Money and Finances
Units of Area
Units of Volume
"Do not spare your children the difficulties of life, rather teach them to overcome them"
GEOMETRIC AND SPATIAL REASONING
Select the Geometry lessons for Fifth Grade (10 years old) and study the classification, description and analysis of the relationships and/or properties of figures in the plane (two dimensions) and in space (three dimensions), as well as the curricular contents related to orientation and spatial representation, and the location, description and knowledge of objects in space:
"The goal of Education is virtue and the desire to become a good citizen"
DATA ANALYSIS AND PROBABILITY
Choose the fundamental educational experiences and knowledge related to Statistics and Probability for Grade 5 and learn to collect, classify, record and represent data graphically through our exercises and mathematical educational games:
Normality is a paved road where you can walk comfortably ... but beautiful flowers will never grow there
Vincent Van Gogh |
Last month, geneticists Jeffrey C. Hall, Michael Rosbash and Michael W. Young were awarded the Nobel Prize in Physiology/Medicine for their work on circadian rhythms – the biological clocks driving our sleep-wake rhythms. Geneticist Charalambos Kyriacou told Sputnik why the research is so important for people living in the modern world.
Hall, Rosbash and Young were awarded the Nobel for work spanning decades to discover the mechanisms governing circadian rhythms in human beings, animals and plants, and how plant and animal life adapts its biological rhythms to changes in the environment which take place through evolution.
Dr. Charalambos Kyriacou, a professor of behavioral genetics at the University of Leicester, who has worked alongside and encouraged Drs. Hall and Rosbash in their research at various stages in the 1970s and 80s, spoke to Radio Sputnik about what the Nobel-winning research means for the study of biological rhythms, and why its findings are so important for individuals and society as a whole.
Sputnik: Before we go any further and talk about the practical implications of this discovery, could you explain to us please what exactly it’s about? Scientists have known about the biological clock for ages. What makes this particular discovery worthy of a Nobel Prize?
Dr. Charalambos Kyriacou: These daily biological clocks – we call them ‘circadian clocks’ (‘circa’ is the Latin for ‘about’, and ‘diem’ is ‘a day’), ticking with a cycle of about a day, have been known since – well one of Alexander the Great’s soldiers actually noticed circadian rhythms in the way that plants hold their flowers up towards the sun during the day, and drop during the night. The first scientific study of plant clocks was done in the 1720s by a French philosopher-scientist.
So we’ve known about them for hundreds [if not thousands] of years, but they’ve only started to be taken seriously since around the 1930s and 40s by a few scientists. And really up to the 1950s and 60s, the whole field of biological timing was thought to be a bit weird and flaky by the rest of the scientific community. That only changed when a scientist called Ron Konopka, working with famous geneticist Seymour Benzer, discovered clock mutations in the fruit fly. Fruit flies have a sleep/wake cycle of 24 hours, just like we do. In 1971, Konopka found three ‘mutants’ in the fly that changed the clock: one mutant’s clock ticked very quickly and had a 19 hour cycle, another ticked slowly and had a 29 hour cycle, and another mutant was an insomniac and was completely arrhythmic.
That was the very first study showing that the clock was genetically encoded within us. What Hall, Rosbash and Young have done is take these mutations that Konopka and Benzer found and describe what they do molecularly. All three mutations were found in the same gene (called the period gene for obvious reasons, because if you change the timing of the clock you’re changing the period of the clock). Molecularly describing what this period gene was doing: that’s their great contribution.
Because since the 1980s, what we’ve known is that the clock percolates through everything that we do – our behavior, our physiology, our biochemistry. The clock is [extremely] important in everything that we do, because we evolved on a rotating planet. So it’s not surprising that natural selection favors organisms that anticipate the regular changes in light and dark, cold and hot temperature that are a feature of our rotating planet. That’s why they won the Nobel Prize – because basically, the circadian clock underlies everything that’s medical about us.
Sputnik: So these scientists actually kind of took apart the time-keeping machinery that’s all-important for us…
Dr. Kyriacou: Exactly! And the remarkable thing is…that all the genes in the fruit fly also build the clock in you and me, with very slight modifications. When evolution solves a problem – how to build a clock, it’s very conservative. It keeps that solution for different organisms. So mammals have ostensibly the same molecular components of the clock as the fly, which is another bonus, and another reason why the work of Hall, Rosbash and Young is so generalizable to other organisms.
Sputnik: Does this mean that we now know all there is to know about this subject, or is there more research to be done?
Dr. Kyriacou: No no, not at all. The biological clock is generated by many genes. A lot of the core genes have now been identified, but we are always being surprised by unexpected results. For example, a colleague of mine at Cambridge recently discovered that it’s not just neurons that are important for generating mammalian rhythms, but also glial cells [the cells that surround neurons and hold them in place in the brain –ed.], which are also extremely important in generating normal rhythms. That was published only a couple of months ago. So it’s a fascinating field, and we’re always finding unexpected results.
Sputnik: So what are some of the practical implications of this discovery?
Dr. Kyriacou: One of the [mutations in the fruit fly genes] was called the period short, because it had a 19 hour rhythm. The molecular lesion in the period gene was identified many years ago. There was an extended family of people that showed this strange behavior: people in this family got up very early and went to bed very early; they couldn’t help it. Well it turned out that they had a mutation in their period gene, making their clock tick faster. So how do people with a fast clock of 21-22 hours relate to our 24 hour world? What they do is move all their behavior forward by a few hours; they are what we call extreme larks, as opposed to night owls…[And] it turned out that the molecular lesion in these people was exactly the same as the one in the fruit fly ‘period short’ mutation.
So yes, different people have variation in their clock genes. Michael Young just picked up another natural mutation in the Turkish population, finding that a mutation in another gene, the cryptochrome leads to late sleeping – meaning getting up late and going to bed late. These are natural variations found in a certain percentage of the population.
Sputnik: Does this mean that different nationalities have different clocks based on their geographic location? And can this clock be inherited?
Dr. Kyriacou: Well there have been some reports that different populations carry different variants, but that’s not been generally reproduced [scientifically]. For example, there’s one variant found in northern populations in the northern hemisphere, and southern populations in the southern hemisphere. So [scientists] are still trying to wrap their heads around that, and whether it’s [a trend], or just an accident. But it is absolutely true that variations in these core clock genes in humans have been shown to have large implications on their behavior.
Sputnik: How does light figure into all of this?
Dr. Kyriacou: In plants and fruit flies, there’s a photoreceptor called the cryptochrome, and it responds to blue light…In humans there’s also a photoreceptor, called melanopsin, which again is blue light responsive. So let’s look at our lifestyles. I have two teenage kids. What do they do? They sit on their phones or their computers until late into the night, and what comes out of computers? Blue light. And what blue light does is it keeps you awake…So what we’re doing by working late at night, watching television, computer and phone screens is delaying how long it takes us to get to sleep by about an hour. So yes, we’ve become sleep-deprived, because we’re absolutely addicted to this blue light technology. Some companies are now recognizing this, and so are trying to turn down the amount of blue light coming out of their screens.
Sputnik: It’s said that all of Western society is suffering from chronic sleep deprivation…
Dr. Kyriacou: It is, and that’s because people tend to work harder, have personal computers and are taking their work home, and working late at night. We are sleep deprived, our sleep hygiene is pretty terrible, and it does have implications. Probably the worst group of people in terms of sleep hygiene are shift workers. They comprise about 25% of [the workforce of industrialized countries]. If you take any index of health…you find that these are not happy bunnies. People who have rotating shifts [which advance from week to week are] changing the complete circadian lifestyle, and this disruption of their clocks leads to all sorts of health problems, including things like cancer, cardiovascular disease, depression, and of course sleep disorders. The latter have knock-on effects on all sorts of behavior, including things like how bright you are during the day.
In fact, I don’t think it’s any coincidence that the three great industrial accidents of my generation – Bhopal, Three Mile Island and Chernobyl – were all caused by operator error at about 3 in the morning, which is when the circadian clock is at its trough. Your body temperature is at its trough; your vigilance rhythms are at their trough; and in all three of those terrible accidents, operator error was a big contribution. So yes, disruptive sleep patterns, particularly in shift workers, have serious health implications for a big proportion of our population.
Sputnik: So what, if anything, can be done? How does one remedy this?
Dr. Kyriacou: Education. There’s a wonderful study done many years ago where they took a factory in Ohio that had this rotating shift, where workers would advance their shift by eight hours every week. These were not happy people, taking lots of time off for sickness. Then a Harvard circadian biologist went there and talked to [management]. He said ‘look, the human circadian clock responds much better if, instead of advancing the shift by eight hours, you delay it by eight hours.’…They tried it, and found that their productivity was so much better, the number of days off was so much lower, and they had a much happier working population.
So education can help in reducing the effects of our shiftwork society, our 24 hour society. [This also means] educating people not to have their computers on late at night – to stop using their computers a couple of hours before they plan to go to bed.
I think the Nobel Prize will help, because it actually says that this is a serious area of scientific endeavor, and people, [particularly people in medicine] will take note. |
According to the law in all 50 states, a pedestrian is defined as being a person who is traveling on foot, meaning they are walking or running. Some states maintain a broader definition of the term “pedestrian,” defining pedestrians as people who ride on skateboards, scooters, or roller skates. Additionally, pedestrians may also include people riding bicycles, tricycles, or using a wheelchair. However, as some states consider a bicycle to be a vehicle, this definition varies considerably from state to state.
Pedestrians are required to share the road with motorists. Motorists are generally defined as being people who are operating motor vehicles, including cars, trucks, and motorcycles. As such, a pedestrian accident is what occurs when a pedestrian is injured by a motorist.
Pedestrian accidents can occur for a wide variety of reasons. Under some circumstances, the fault is with the motorist. An example of this would be how a motorist who speeds, operates a vehicle while impaired, or fails to watch where they are going can injure an innocent pedestrian. Under other circumstances, the fault actually lies with the pedestrian. Pedestrians, similar to motorists, must comply with state traffic regulations and signage. When a pedestrian fails to do so, a motorist who is otherwise observing the rules of the road may injure them.
In terms of risk of injury, pedestrians with a “failure to appreciate risk” generally stand a higher risk of injury. Most commonly, this refers to children. However, pedestrians who are impaired by drugs and/or alcohol stand a higher risk of injury for the same reason. Additionally, people who have limited or decreased mobility are at greater risk of injury, which may include elderly people. Other examples of high-risk pedestrians would be people with a physical impairment, such as those who use a cane or a wheelchair.
Are Sidewalk Accidents The Same As A Pedestrian Crash?
Sidewalk accidents are personal injury accidents that happen on a sidewalk, which generally involve a pedestrian being injured by another party. An example of this would be a bicyclist who is riding on the sidewalk instead of in the street. The bicyclist collides with a pedestrian, and causes injuries. The resulting injuries may be minor or severe, depending on the collision and/or the health of the pedestrian.
Another example would be how a slip and fall accident could be a sidewalk accident. Slip and fall accidents occur when a person slips, trips, or slides on the ground. Because a person can slip on a snowy and icy sidewalk that has not been properly maintained by a property owner, a slip and fall accident could be considered a sidewalk accident.
It is important to note that sidewalk accidents are not the same as pedestrian crashes. A pedestrian crash, or pedestrian accident as was previously mentioned, generally refers to an automobile accident involving a pedestrian. These accidents generally happen when a person is walking in the street, and not on the sidewalk.
Do Pedestrians Have Any Legal Duties?
To reiterate, pedestrians must adhere to all state traffic laws and regulations. These laws and regulations prohibit pedestrians from engaging in specific activities, such as:
- Darting Out: Darting out, or running out, into a street or public thoroughfare is generally prohibited by law. Here, a pedestrian “jumps out” into a public area with traffic, from an area that a motorist cannot see. An example of this would be the pedestrian coming out from behind a tree; or
- Jaywalking: A pedestrian who crosses a roadway is legally required to comply with all traffic lights, signals, and rules while doing so. Jaywalking refers to crossing a street against the traffic signal, such as when the light is green for oncoming traffic. Jaywalking is also defined as crossing the street outside of the area that is marked as the “crosswalk” area. Jaywalking laws are intended to prevent pedestrians from crossing a street in moving traffic.
Generally speaking, a pedestrian who is walking on a roadway must use a sidewalk if the sidewalk is present, and the pedestrian can safely use it. A pedestrian who is walking on a roadway without a sidewalk must generally walk only on the left side of the roadway. Additionally, the pedestrian must walk against traffic that can approach from the other direction. State laws most commonly dictate that a pedestrian must move as far to the left as is practicable when a vehicle approaches from the other direction.
Additionally, pedestrians are prohibited from walking, jogging, or running on freeways or interstate highways. The absence of stop signs, traffic lights, and crosswalks makes it especially dangerous for pedestrians to do such activities. Additionally, pedestrians are generally prohibited from “hitchhiking,” or soliciting a ride, while on highways and freeways. Some state laws do permit pedestrians to walk on a freeway or highway, but only under specific exceptions.
Examples of such exceptions are generally limited to emergency circumstances, such as when a motorist’s vehicle has broken down. When their vehicle has broken down, most states permit a driver and passenger to walk to the nearest exit on the side of the freeway in which the vehicle is broken down.
Who Can Be Held Liable For Sidewalk Accidents?
Who is at fault for a sidewalk accident largely depends on whether one of the parties was negligent. Negligence is the legal theory which allows injured parties to recover for the carelessness of others. A person is said to be negligent when they fail to use the same amount of care that an ordinary person would use in the same or similar circumstances. The person causing the accident may have been responsible for protecting the victim from harm; however, under specific circumstances, the victim may be entirely or partially responsible for the accident because of their own negligence. This would serve as a defense to liability.
Negligence has four elements that must be shown in order to recover a monetary damages award for injuries:
- Duty: A duty is a responsibility that one person owes to another person. Generally, people who are going about their business owe a duty of reasonable care to each other. Reasonable care refers to the level of care that an ordinary and prudent person would use in the same situation. An example of this would be how if a person is driving during inclement weather, they would be exercising their duty of reasonable care by driving slower and having their headlights on in order to increase visibility. Alternatively, a person would not be exercising reasonable care if they instead were driving forty miles per hour over the speed limit;
- Breach: Breach of duty occurs when a person’s level of care falls below the level that is required by their duty. The person who was driving forty miles per hour in the above example breached their duty of reasonable care by speeding during inclement weather;
- Causation: The breach of a duty must be the cause of injury. Essentially, the legal test for causation is “but for” one person’s actions, the injury would not have occurred. As such, if the person who was speeding during inclement weather did not have enough time to stop before hitting another car, they have breached their duty of reasonable care which then caused injury to the other car and driver; and
- Damages: There must be some sort of harm that occurred. The specific type of injury can vary from property damage and emotional stress, to lost wages.
All of the above elements must be present in order to successfully determine that the other party was negligent. If one of the above elements cannot be proven, then negligence cannot be established.
Do I Need An Attorney For Sidewalk Accidents?
If you were involved in a sidewalk accident, you should consult with an experienced personal injury attorney. A personal injury lawyer can help you understand your legal rights and options according to your state’s specific personal injury laws, and will also be able to represent you in court as needed. |
Aerial images of marsh plant communities reveal information that is not possible to obtain from ground level. High resolution aerial photographs display fine details of vegetation composition and distribution. Since 2006, The Meadowlands Environmental Research Institute has employed balloon photography to capture images of marsh and landfill surfaces. These images are a key resource for MERI scientists working to improve the Hackensack River marshland ecosystem.
High resolution images are captured using consumer grade cameras at altitudes between 150 to 300 feet. The current camera being flown is a 14.3 megapixel Canon G1X. The state of the art camera rig attaches the camera to an inflated 3 meter balloon. The balloon is tethered to a reel operated from the ground. This setup is cost efficient, easily deployable, and is highly mobile.
The high resolution images are assembled into a mosaic of the surveyed area which can encompass many acres. Individual images are stitched together using Adobe Photoshop. The mosaic is then geo-referenced using known geographic feature locations within the site. These mosaics are used to ground truth high-resolution hyperspectral image classifications. They are also used to identify training sites from satellite images and to develop spectral libraries for vegetation classification. Current applications of balloon photography include:
- Monitoring of plant establishment in closed landfill sites
- Mapping invasive species distribution
- Mapping endangered plant populations throughout the district
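As a back-of-the-envelope sketch of what such a rig sees, a simple pinhole-camera model relates altitude to ground coverage and resolution. The sensor and lens numbers below are assumptions chosen for illustration, not the G1X's actual specifications:

```python
def ground_footprint(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Pinhole-camera estimate of one frame's ground coverage.

    Returns (ground strip covered in metres, ground sample distance
    in centimetres per pixel).
    """
    footprint_m = altitude_m * sensor_width_mm / focal_length_mm
    gsd_cm = footprint_m / image_width_px * 100.0
    return footprint_m, gsd_cm

# Illustrative values only: ~60 m (about 200 ft) altitude, an assumed
# 18.7 mm sensor width, 15 mm focal length, and 4352-pixel-wide image.
width_m, gsd_cm = ground_footprint(60.0, 18.7, 15.0, 4352)
print(f"each frame covers ~{width_m:.0f} m at ~{gsd_cm:.1f} cm/pixel")
# prints: each frame covers ~75 m at ~1.7 cm/pixel
```

Centimetre-scale pixels are what make it possible to distinguish individual plants in the mosaics, something satellite imagery at metre-scale resolution cannot do.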
The megalodon was not only the biggest shark in the world but one of the largest fish ever to exist. Estimates suggest that the megalodon shark grew up to 15 to 18 meters in length, three times longer than the largest recorded great white shark. The bite of these sharks was far more dangerous as well: according to the NHM, human beings have been measured to have a bite force of around 1,317 newtons, whereas researchers have estimated that the megalodon shark had a bite force between 108,514 and 182,201 newtons. Its massive teeth are almost three times larger than the teeth of a modern great white shark. The megalodon, which went extinct millions of years ago, was the largest shark ever to roam the oceans and one of the largest fish on record. The scientific name, Carcharocles megalodon, means “giant tooth,” and the megalodon’s fossilized bones and teeth give scientists major clues about what the creature was like and when it died off.
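Taking the bite-force figures quoted above at face value, a quick ratio shows just how far outside human scale that bite is (simple arithmetic on the article's numbers, not new data):

```python
# Bite-force figures quoted above, in newtons.
human_bite_n = 1_317.0
meg_low_n, meg_high_n = 108_514.0, 182_201.0

low_ratio = meg_low_n / human_bite_n    # lower estimate vs. a human bite
high_ratio = meg_high_n / human_bite_n  # upper estimate vs. a human bite
print(f"{low_ratio:.0f}x to {high_ratio:.0f}x a human bite")
# prints: 82x to 138x a human bite
```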
J.A. Cooper is a former member of the NSERC Canadian Integrated Multi-Trophic Aquaculture Network. Dr. Cooper led researchers investigating the potential effects of aquaculture nutrients on wild species colonization and growth, to develop IMTA (integrated multi-trophic aquaculture) as a strategic enhancement within economically sustainable aquaculture production systems. According to him, the most accurate size estimates of the ancient megalodon shark to date suggest that the creatures could grow up to 59 feet in length, and that the sharks lived between 4 million and 23 million years ago.
The movie “The Meg” pits modern humans against an enormous megalodon. In reality, the beast died out before humans even evolved, but it is difficult to pinpoint the exact date the megalodon went extinct because the fossil record is incomplete.
In 2014, a research group at the University of Zurich studied megalodon fossils using a technique called optimal linear estimation to determine their age. The study found that most of the fossils date from the middle Miocene epoch to the Pliocene epoch, and that all signs of the creature’s existence end 2.6 million years ago in the current fossil record. For comparison, according to the University of California Museum of Paleontology, our earliest Homo sapiens ancestors emerged only 2.5 million years ago, during the Pleistocene epoch.
A very small portion of the Zurich study’s simulations — 6 out of 10,000 — left open the possibility that these giant sharks could still be alive. But no one has discovered any recent evidence of the monster, nor any fossils younger than 2.6 million years. If you happened to be swimming in Earth’s oceans anywhere from around 4 million to 23 million years ago, the last creature you’d have wanted to run into was a megalodon shark.
Now, researchers in the UK have come up with what they say is the most accurate measurement of an adult megalodon, and the figures are almost too wild to be believed. The megalodon fossil record consists mainly of teeth, and while that might not sound like a lot to go on, by working backwards and comparing the size of the teeth to those of modern shark species, the researchers were able to paint a picture of what a full-sized adult would have looked like. Even so, scientists agree that megalodons are long gone.
RD Sharma Solutions Class 12 Chapter 29
A plane is a flat, two-dimensional surface that extends infinitely but has zero thickness, such that if any two points are taken on it, the line segment joining them lies completely on the surface. A plane in 3-dimensional space has the following equation
ax + by + cz + d = 0,
where at least one of a, b, c is non-zero.
The three coordinate planes in the 3-D coordinate system are
- xy plane, where the value of z coordinates is zero.
- y-z plane, where the value of x coordinate is zero.
- x-z plane, where the value of y coordinate is zero.
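As a quick check of the general equation, a short snippet (illustrative only, not part of the RD Sharma solutions) can test whether a point satisfies ax + by + cz + d = 0:

```python
def on_plane(point, coeffs, tol=1e-9):
    """Return True if (x, y, z) satisfies ax + by + cz + d = 0."""
    x, y, z = point
    a, b, c, d = coeffs
    return abs(a * x + b * y + c * z + d) < tol

# The x-y plane is z = 0, i.e. a = b = 0, c = 1, d = 0:
xy_plane = (0, 0, 1, 0)
print(on_plane((3, -2, 0), xy_plane))  # True: the z coordinate is zero
print(on_plane((3, -2, 5), xy_plane))  # False: the point lies off the plane
```

The same function works for any plane, e.g. x + y + z − 3 = 0 contains the point (1, 1, 1).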
This chapter provides you with RD Sharma solutions for Class 12 maths on the topic of ‘Planes.’ Practice the questions to gain a better understanding of the topic; the questions in this chapter are important from the exam point of view.
The Plane Class 12th RD Sharma Exercises |
Definition of extravasate in English:
verb [with object] (usually as adjective extravasated) chiefly Medicine
Let or force out (a fluid, especially blood) from the vessel that contains it into the surrounding area: this established the presence of extravasated blood. [no object]: some cells may extravasate and form secondary tumours.
More example sentences
- Scattered areas of lymphocytic exocytosis and extravasated red blood cells within the superficial dermis and epidermis were also present.
- Aggressive angiomyxoma has extravasated red blood cells and thick-walled vessels, some of which may be large.
- The infant should be assessed for pallor, petechiae, extravasated blood, excessive bruising, hepatosplenomegaly, weight loss, and evidence of dehydration.
Derivative: extravasation /ɪkstravəˈseɪʃ(ə)n/, /ɛkstravəˈseɪʃ(ə)n/ noun
Example sentences
- Tumors show a spindle cell neoplasm with prominent extravasation.
- Purpura results from the extravasation of blood from the vasculature into the skin or mucous membranes.
- This pain is a potential warning sign of tissue damage and possible phlebitis or extravasation.
The investigation from yesterday continues--encourage students to choose a level that challenges them in thinking about these new problems. I give students as much time as possible today to answer the questions on this assessment. The questions are written in a way that allows multiple possible answers--there are still right and wrong answers, but there are many different ways to describe a relationship, which hopefully urges students to think more deeply about all of the questions.
Instructor's Note: You can fit this section into the lesson anywhere that feels convenient--it can be right after the warm-up, or you can use this as the closing.
Some students noticed that their partners wrote different equations than they did. This is because, if you choose 10 as one of your numbers, both 40 and -20 have a difference of 30 from this number. When students found different equations from each other, they assumed one of them was wrong. The purpose of this quick task is for them to realize that both answers can be correct.
I frame this task just by telling students that these are two solutions I saw yesterday--and I want to know if either of them are correct. The point of this discussion is to create some argument or disagreement between students. Eventually, somebody will start to realize that perhaps these are both accurate solutions.
The end of this lesson is a good opportunity to focus on the "Big Ideas." As much as possible when I use a differentiated lesson like this, I want to emphasize the fact that there are key ideas that all students should be grappling with.
In this lesson, some of the key ideas are:
1) Some verbal descriptions lead to linear relationships and others lead to quadratic relationships. What are the key properties of each?
2) When you look at numbers with a constant sum, this creates a linear relationship with a negative slope. When you look at numbers with a constant difference, this creates a linear relationship with a positive slope. Why is this? How do these relationships show up in the parabolas when you look at the products of the numbers?
These questions are both worth having students discuss and write about. I like to use the same questions for lesson closings that I will ask them to write about as part of their assessment, so that they get several chances to think about these questions. Also, I have noticed that the students who learn the content best are the ones who can articulate answers to these types of questions clearly, because they have a big picture understanding. So I like to ask them to do a Think-Pair-Share with these two questions. Alternatively, they can do a "Think-Pair-Write" if you want to get some writing from them on their way out of class. The specific format doesn't matter, as long as students are held accountable to actually thinking about the questions. |
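The relationships in the second "Big Idea" can be sketched numerically. This is a hypothetical example (the constants 30 are my own choice, echoing the warm-up, not the lesson's actual numbers):

```python
S = 30   # constant sum:        x + y = 30  ->  y = 30 - x
D = 30   # constant difference: x - y = 30  ->  y = x - 30

xs = list(range(0, 31))
sum_ys = [S - x for x in xs]    # linear, slope -1 (constant sum)
diff_ys = [x - D for x in xs]   # linear, slope +1 (constant difference)

# The products trace parabolas: x(30 - x) opens downward with its vertex
# at x = 15, while x(x - 30) opens upward with its vertex at x = 15 too.
sum_products = [x * (S - x) for x in xs]
diff_products = [x * (x - D) for x in xs]
print(max(sum_products), min(diff_products))  # 225 -225
```

Students can read the slopes straight off consecutive y-values, and see the maximum product for a constant sum (and minimum product for a constant difference) land at the midpoint.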
What Role can Reinforcement and Punishment Play in Shaping Your Child’s Behavior?
Being a parent has been called the best thing ever BUT also the most challenging endeavor you will encounter in your lifetime. Parents strive to raise a healthy and happy child who will one day grow up to be a mature and independent adult. To successfully accomplish this goal, a parent must set forth structure and rules throughout childhood to guide the child and correct course when behavior needs to be modified. When a parent recognizes the need to change a behavior, they will likely end up using either reinforcement, punishment, or a mixture of both. When you are trying to decrease the frequency of a child’s negative behavior, having reinforcement and punishment methods in your toolkit can help you shape and implement the desired behavior.
How does Reinforcement help with changing behavior?
There are two basic kinds of reinforcement, positive and negative reinforcement. Both can be useful if applied correctly to shape a child’s behavior and to help teach them the correct skills to use in the future. To name just a few, reinforcement can be used to teach and implement communication, social, self-help and table manner skills.
Positive Reinforcement: When a parent uses positive reinforcement, what they are essentially doing is providing something, known as an object or stimulus, that will increase the chances of a certain desired behavior happening again in the future. For example, you might reward polite behavior with access to the child’s favorite toy or by giving them a sticker to place on their token board. Praise can also help a child feel good about doing something right, which makes them want to repeat that action. Please note that each child’s interests are different, so you’ll need to tailor the positive reinforcement accordingly by identifying what motivates them.
Negative Reinforcement: With negative reinforcement, you increase a certain behavior by the removal of a certain stimulus/object. For example, let’s assume that a parent is attempting to establish the picture exchange communication system (PECS) and wants to use negative reinforcement to do so. If the child does not like a certain fruit, they may learn that holding up the PECS ‘No’ card results in the disliked fruit being taken away. In this example the behavior being reinforced is the use of the PECS ‘No’ card and the negative reinforcement is the removal of the disliked fruit.
The role of Punishment in making behavioral changes.
Punishment does not need to be extreme. It is simply a stimulus used to discourage or decrease an undesirable behavior. Although punishment does not replace the negative behavior the way reinforcement does, it is still a resourceful technique.
Positive Punishment: While this may sound odd, it is actually what most of us are familiar with. It is the introduction of a stimulus/object which will decrease the chances of a specific undesirable behavior from happening again in the future. For example, the verbal warning you received as a child for misbehaving in class, or for doing something inappropriate was the stimulus that discouraged your unwanted behavior.
Negative Punishment: When using negative punishment, the parent or teacher must remove a certain stimulus to lower the chances of an unwanted behavior from happening again. For instance, a child may find that their favorite toy is taken away from them if they are messy or do not clear up after themselves. This then lowers the chances of the child cluttering up their room or doing a messy job with their work in the future and can be attributed to negative punishment.
It is important to always teach a replacement behavior that serves the same function as the unwanted behavior you are trying to decrease. Since reinforcement focuses on increasing a desired behavior and punishment focuses on reducing an unwanted behavior but does not teach a replacement for it, it is typically recommended to use positive reinforcement when trying to make a behavior change. Yet, whether you choose to use punishment or reinforcement, the key to successfully using these approaches, is to remain consistent. Remain hopeful even when you don’t see results right away; it will take time, patience, kindness, love and understanding. Yet when the desired behavior starts to occur again, it will help you believe in the whole process, so stick with it and know that you’re not alone in this journey. |
The nuclear arms race between the United States of America and the former Soviet Union effectively ended with the collapse of the latter in 1991. In current times, such competition between states has moved into the realms of science and technology: a global race in which different countries embark on quests to build the world’s fastest supercomputer.
According to Wikipedia’s definition, a supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. It also boasts massive compute prowess that is able to simulate and model real-life systems that are impossible to replicate in laboratories due to scale and cost issues.
With such simulated data, supercomputers are able to reduce the time spent on scientific research and advance our understanding of the world, as well as to provide immediate benefits – the United States' National Oceanic and Atmosphere Administration uses supercomputers to help make more accurate weather forecasts, which can have life and death consequences when dangerous storms occur.
To vested organisations and governments, being top dog in the supercomputing realm is an important goal. According to the TOP500 project that tracks the performance of supercomputers, the fastest of the lot today is the Cray XK7 Titan that belongs to the Oak Ridge National Laboratory (ORNL) in the United States. This laboratory comes under the purview of the US Department of Energy and the US had been at the top of the game since 2004.
However, in 2010, China managed to snatch the honor of having the fastest supercomputer in the world from the US. It was a shocking defeat, as China’s Tianhe-1A took the crown from the US’ Jaguar system, which was also based at the ORNL. Another Asian power delivered a wake-up call to the United States when Japan’s K Computer relegated the US to third place.
With this sudden turn of events, it seemed that the dominance of the United States in the realm of supercomputing was gravely under threat from the East. At the same time, there was also a paradigm shift in the engineering principles of supercomputing. Instead of just gunning for best performance in supercomputing systems, there's now a pressing need to address the power consumption of these energy-hungry computing monstrosities. Take for example, the K Computer from Japan, it required 9.89 MW (megawatt) of power during operation, which translated to the energy consumption of almost 10,000 suburban homes.
By June 2012, the United States had regained the crown with IBM’s Sequoia supercomputer, and its lead was further cemented by the Cray XK7 Titan, which consumes about 8.9 MW of power while comfortably outperforming the K Computer, making it a relatively power-efficient supercomputer. Now, we wait with bated breath as the race shifts towards building supercomputers that run on less power but deliver more computing prowess. Whoever wins the race for the moment, the backers of supercomputing development argue that the ultimate beneficiary is the common man, who stands to benefit from the innovations derived from this competition.
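A rough performance-per-watt comparison makes the efficiency point concrete. The power figures below are the ones quoted in this article; the LINPACK petaflop figures are my own assumptions drawn from published TOP500 results, not from the article:

```python
# Performance per watt, expressed as petaflops per megawatt.
# Petaflop values are assumed from TOP500 listings (assumption, see above).
systems = {
    "K Computer":     {"pflops": 10.51, "mw": 9.89},
    "Cray XK7 Titan": {"pflops": 17.59, "mw": 8.9},
}

efficiency = {name: s["pflops"] / s["mw"] for name, s in systems.items()}
for name, eff in efficiency.items():
    print(f"{name}: {eff:.2f} petaflops per megawatt")
```

On these assumed figures, Titan delivers nearly twice the computation per megawatt of the K Computer, which is the sense in which the race has shifted from raw speed to efficiency.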
Wong Chung Wee / Tech Writer
Chung Wee has a penchant for recycling hardware hand-me-downs due to his strong 'waste not, want not' belief. |
International payment and exchange
- Balance-of-payments accounting
- Adjusting for fundamental disequilibrium
- Foreign exchange markets
- The gold standard
- The International Monetary Fund
- The IMF system of parity (pegged) exchange rates
- Equilibrating short-term capital movements
- Forward exchange
- Disequilibrating capital movements
- Stresses in the IMF system
- Special Drawing Rights
- Other efforts at financial cooperation
- The end of pegged exchange rates
- Floating exchange rates
- The international debt crisis
The IMF system of parity (pegged) exchange rates
When the IMF was established toward the end of World War II, it was based on a modified form of the gold standard. The system resembled the gold standard in that each country established a legal gold valuation for its currency. This valuation was registered with the International Monetary Fund. The gold valuations served to determine parities of exchange between the different currencies. As stated above, such fixed currencies are said to be pegged to one another. It was also possible, as under the old gold standard, for the actual exchange quotation to deviate somewhat on either side of the official parity. There was agreement with the International Monetary Fund about the range, on either side of parity, within which a currency was allowed to fluctuate.
But there was a difference in the technical mode of operation. The service of the arbitrageurs in remitting physical gold from country to country as needed was dispensed with. Instead the authorities were placed under an obligation to ensure that the actual exchange rates quoted within their own territories did not go outside the limits agreed upon with the International Monetary Fund. This they did by intervening in the foreign exchange market. If, for instance, the dollar was in short supply in London, the British authorities were bound to supply dollars to the market to whatever extent was needed to keep the sterling price of the dollar from rising above the agreed-upon limit. The same was true with the other currencies of the members of the International Monetary Fund. Thus, the obligation of the monetary authorities to supply the currency of any Fund member at a rate of exchange that was not above the agreed-upon limit took the place of the obligation under the old gold standard to give actual gold in exchange for currency.
It would be inconvenient for the monetary authorities of a country to be continually watching the exchange rates in its market of all the different currencies. Most authorities confined themselves to watching the rate of their own currency against the dollar and supplying from time to time whatever quantity of dollars might be required. At this point the arbitrageurs came into service again. They could be relied upon to operate in such a way that the exchange rates between the various currencies in the various foreign exchange markets could be kept mutually consistent. This use of the dollar by many monetary authorities caused it to be called a currency of “intervention.”
The official fixing of exchange rates as limits on either side of parity, outside of which exchange-rate quotations were not allowed to fluctuate, bears a family resemblance to the gold points of the old gold standard system. The question naturally arose why, in devising a somewhat different system, it was considered desirable to keep this range of fluctuation. In the old system it arose necessarily out of the cost of remitting gold. Since there was no corresponding cost in the new system, why did the authorities decide not to have a fixed parity of exchange from which no deviation would be allowed? The answer was that there was convenience in having a range within which fluctuation was allowed. Supply and demand between each pair of currencies would not be precisely equal every day. There would always be fluctuations, and if there were one rigidly fixed rate of exchange the authorities would have to supply from their reserves various currencies to meet them. In addition to being inconvenient, this would require each country to maintain much larger reserves than would otherwise be necessary.
Under a system of pegged exchange rates, short-term capital movements are likely to be equilibrating if people are confident that parities will be maintained. That is, short-term capital flows are likely to reduce the size of overall balance-of-payments deficits or surpluses. On the other hand, if people expect a parity to be changed, short-term capital flows are likely to be disequilibrating, adding to underlying balance-of-payments deficits or surpluses.
Grassland birds are the most imperiled habitat-based group of birds in North America. Grassland songbirds have been in decline since their populations were first estimated in the 1960s. This trend is largely consistent across types of grassland birds: migrants and residents, gamebirds, shorebirds, songbirds, hawks, and owls. This is alarming as birds are indicators of ecosystem health. People are largely unaware of the importance (economic, esthetic, cultural) of birds and that many changes in land development, land use policies, and grazing can occur that would benefit grassland habitats and birds, with minimal financial costs to ranchers, and perhaps even benefits.
Cliff Wallis Photo.
Examples of simple changes include consideration for: the timing of disturbances relative to bird breeding activities; the area requirements of birds beyond pasture fences to achieve habitats more appropriate in size; and the rotation of grazing among pastures to increase habitat heterogeneity with no change in animal production.
Conservation Relevance for the Northern Great Plains:
For millennia, grassland wildlife specialized on habitats created by free-ranging grazers. Modern land uses have created a vastly different landscape that is fragmented and altered in terms of dynamic ecosystem drivers which once maintained key habitats.
Interactions between bison, prairie dogs, human hunters, and fire created a diversity of large-scale habitats that shifted in space and time (see figure below). Such a shifting mosaic of habitats provided a wider array of habitats than is present now. Certain breeding birds like McCown’s Longspurs and Mountain Plovers prefer heavily grazed areas while Sprague’s Pipits and Baird’s Sparrows favor more lightly grazed grasslands for nesting.
For some birds the entire spectrum is needed during the life cycle: heavily grazed areas of short grass and bare ground may be used as staging areas for mating displays while taller vegetation provides nesting and feeding habitat as well as cover for young birds. Grassland birds require these habitats in relatively large areas; individuals require smaller patches, but many species are more productive at sites where groups of individuals can occur (on the order of 300-500 acres of one habitat type).
Simplified diagram (after Knopf 1996) representing habitats once maintained by interactions between bison, fire, predators, and other grazers. Some species require different habitats across their lifecycles. Fencing and grazing practices have managed to the middle such that the species associated with heavy and little grazing face the greatest shortages in habitats.
1. Threats identified on NPCN interactive map at https://npcn.net/npcnWebmap/index.html:
- Conversion – fragmentation, pesticides
- Oil & Gas – fragmentation and industrial development of otherwise intact grassland
- Wind Development – fragmentation of intact grasslands if inappropriately sited
- Land practices – lack of diverse grazing regimes, lack of fire
- Loss of sagebrush and spread of West Nile Virus by mosquitoes — impacted by climate change
2. U.S. Farm Bill – incentives for landowners that can be bad or good for grasslands (incentives to grow crops can further reduce grassland extent, while incentives to retain native cover can protect a wide array of grassland plants, animals and ecosystem services)
3. Canadian Prairie Farm Rehabilitation Administration (PFRA) lands — similar to U.S. federal lands abandoned by homesteaders during the Dust Bowl, and amounting to 1.6 million acres of native grassland in southern Saskatchewan — will no longer be under federal control, and their fate is uncertain. See: http://pfrapastureposts.wordpress.com/
Recently, Canada has further announced intentions to relinquish research stations such as Onefour, home to 23 Canadian listed species at risk. Environmental NGOs are working to secure long-term protection for this site with a focus on species at risk.
4. Industrial development for Oil and Gas production in the Little Missouri National Grassland, and around Theodore Roosevelt National Park, both in North Dakota.
1. The majority of land in the region is privately owned and grazing for cattle production is the dominant use. Several bird species have relatively simple habitat requirements that can be produced through grazing management — and in many cases may represent win-wins for conservation and livestock production.
2. Best Management Practices exist or are being developed for:
- Oil & Gas well & road densities
- Agricultural practices – grazing, fencing, water use
- Restoration – re-seeding, fire, grazing
- BLM Environmental Impact Statement/Resource Management Plan sage grouse revisions
3. Metrics are in development for gauging success
- Birds as indicators of ecosystem health — birds as umbrella species
- Possibilities for understanding benefits for other types of wildlife
4. Political initiatives:
- Neotropical Migratory Bird Conservation Act ($5 million appropriated annually)
- US Fish and Wildlife Service Migratory Bird Treaty Act — mitigation for incidental ‘take’
- Farm Bill: possible inclusion of incentives to promote landowners maintaining grass and removal of crop incentives that encourage the conversion of grasslands
- Canadian PFRA pasture and Onefour Research Station securement
- Possible re-funding of Montana Fish Wildlife and Parks Avian Conservation Biologist position
- Ballot initiative to dedicate tax receipts from Oil/Gas development in North Dakota to conservation, with particular focus on the Little Missouri National Grasslands and Theodore Roosevelt National Park.
| NPCN Priority Bird list | Habitat in NGP | Status (Canada; US) |
| --- | --- | --- |
| 1. Greater Sage Grouse | Sagebrush dependent | End.; ESA candidate |
| 2. Burrowing Owl | Burrowing mammal dependent | End.; None |
| 3. Mountain Plover | Prairie dog dependent | End.; warranted but precluded |
| 4. Long-billed Curlew | Native grasses, mixed grazing impacts | Sp. Concern; None |
| 5. Sprague’s Pipit | Native grasses, lightly grazed | Threatened; ESA candidate |
| 6. Baird’s Sparrow | Native grasses, lightly grazed | COSEWIC Sp. Concern |
| 7. Chestnut-collared Longspur | Native grasses, moderately to heavily grazed | Threatened; None |
| 8. Lark Bunting | Mix of shrubs and grass | None; None |

Discussed but tabled:

| Species | Habitat in NGP | Status (Canada; US) |
| --- | --- | --- |
| Grasshopper Sparrow | Native grasses, lightly to moderately grazed | None; None |
| McCown’s Longspur | Native grasses, heavily grazed | Sp. Concern; None |
The ancestry of most Dominicans is a combination of Taino Indians, Spanish colonists, and African slaves. Several original Taino words and meals have managed to survive in this melting pot nation, where family, food, and music are at the heart of the nation. Most Dominicans may not be rich, but they are always friendly and willing to share what they have.
The Dominican Republic’s first residents were friendly Tainos and cannibalistic Caribs, both of whose populations were dramatically diminished during the six years after Christopher Columbus’ famous 1492 voyage. The tiny island Columbus named La Hispaniola became home to both the New World’s first formal European settlement, La Isabela, and the starting point of Spain’s vast conquest of much of the southwestern hemisphere.
La Isabela, near present-day Puerto Plata, was soon abandoned after its settlers endured three years of hunger, disease, and mutiny. The remains of La Isabela are exhibited at the La Isabela National Historic Park (La Isabela), which includes the New World’s first Christian cemetery, the remains of Columbus’ modest home, and an ancient guayacán tree which has grown in the area since before Columbus’ time.
The next settlement Columbus founded, Santo Domingo, remains very much alive as the Dominican Republic’s modern national capital. Centuries of history are found within the small square mile known as the Colonial Zone, including Catedral Primada de América (Calle Arzobispo Merino, Santo Domingo), the New World’s oldest cathedral. In 1596, Sir Francis Drake used the church as his headquarters after he captured Santo Domingo and collected ransom to return it to Spanish rule.
The first African slaves were brought to the Dominican Republic in 1503 to replace hundreds of thousands of Taino who lost their lives to starvation, disease, massacres, and hard gold mining work. In 1605, the Spanish forcibly relocated their settlers on the west end of La Hispaniola closer to Santo Domingo to stop them from illegally trading with the Dutch, whom the Spanish were fighting at the time. Over half of these resettled colonists perished from disease or starvation. Spain ceded La Hispaniola’s west end, which would later become Haiti, to France in 1697.
Shortly after the 1791 Haitian Revolution, France seized control of all of La Hispaniola in 1795. Although the French were expelled from the eastern region in 1809, the newly independent nation of Haiti occupied the entire island from 1821 to 1844. The Dominican Republic gained its own freedom after the 1844 Dominican Independence War.
The Dominican Republic’s road to independence has been shaky. The territory briefly reverted to Spanish rule during the 1860s and was twice occupied by the United States — first from 1916 to 1924, and again from 1965 to 1966 — with several volatile dictatorships ruling in between these periods.
Ever since Joaquin Balaguer’s 30-year term as president came to an end in 1996, the country’s future has never been brighter or more stable. The Dominican Republic’s economy is now growing faster than nearly any other in the western hemisphere.
Family, music, and food are the three main cornerstones in this melting pot nation where 80 percent of residents have Taino, Spanish, and African roots. Several Taino words still survive in this predominantly Spanish speaking country. African influence is most evident in the merengue music which is loudly played in most Dominican homes, shops, streets, and guagua buses.
Most Dominicans are poor and live paycheck to paycheck. Locals always share their wages with family and take care of their neighbors. Each day ends with dancing in front of neighborhood convenience stores called colmados. Hip hop and reggae have joined merengue and bachata as the Dominican Republic’s most commonly played music. Baseball is the most popular sport, and many Major League baseball players are Dominican. The town of Sosua was founded by Jewish immigrants encouraged to settle in the Dominican Republic during WWII. |
Born into slavery and badly treated as a young girl, Harriet Tubman (c. 1822–1913) found a shining ray of hope in the Bible stories her mother told. The account of Israel’s escape from slavery under Pharaoh showed her a God who desired freedom for His people.
Eventually Harriet slipped over the Maryland state line and out of slavery. She couldn’t remain content, however, knowing so many were still trapped in captivity. So she led more than a dozen rescue missions back into slave states, dismissing the personal danger. “I can’t die but once,” she said.
Harriet knew the truth of the statement: “Do not be afraid of those who kill the body but cannot kill the soul” (Matt. 10:28). Jesus spoke those words as He sent His disciples on their first mission. He knew they would face danger, and not everyone would receive them warmly. So why expose the disciples to the risk? The answer is found in the previous chapter. “When he saw the crowds, [Jesus] had compassion on them, because they were harassed and helpless, like sheep without a shepherd” (Matt. 9:36).
When Harriet Tubman couldn’t forget those still trapped in slavery, she showed us a picture of Christ, who did not forget us when we were trapped in our sins. Her courageous example inspires us to remember those who remain without hope in the world.
Source: Our Daily Bread |
A group of “source” reefs have been identified that could form the basis of a life support system for the Great Barrier Reef, helping repair damage by bleaching, starfish and other disturbances.
Researchers from the University of Queensland, CSIRO, Australian Institute of Marine Science and the University of Sheffield searched the Great Barrier Reef for ideal areas that could potentially produce larvae and support the recovery of other damaged reefs.
The Great Barrier Reef is the world’s largest living structure and is made up of more than 3800 individual reefs, stretching 2300km down Australia’s eastern coastline.
The study found 112 “robust source reefs” – just 3% of the entire system – which had “ideal properties to facilitate recovery” of others by spreading fertilised eggs to replenish other areas.
“Finding these 100 reefs is a little like revealing the cardiovascular system of the Great Barrier Reef,” said Prof Peter Mumby, from the University of Queensland’s school of biological sciences and ARC Centre of Excellence in Coral Reef Studies.
Researchers had strict criteria – the reefs must be consistently well connected to other reefs through the constantly shifting currents, be less likely to die in a coral bleaching event and be less susceptible to crown-of-thorn starfish outbreaks.
These reefs were more likely to still be standing in the event of bleaching incidents, for example, and were in the right location to send fertilised eggs to the reefs that need them during the annual reproduction, Mumby said.
“It gives us a bit more hope that the capacity for the barrier reef to heal itself is greater than we expected.”
There have been four significant bleaching events on the Australian reef, including one this year. The longest and worst for the Great Barrier Reef was in 2016, when bleaching caused by climate change killed almost 25% of the reef.
Scientists have only recently been able to understand how connected the reefs are by ocean currents, Mumby said.
“The Great Barrier Reef is about the size of Italy and at any given time there are patches that have been damaged and patches that are pretty good, so it has an ability to heal itself if you like.”
The researchers used ocean circulation simulations to model the connectivity of the reef larvae across the Great Barrier Reef and generated 208 networks, through which the 112 strong reefs could reach almost half of all reefs through their “amazing capacity to connect the wider system”.
“It’s not perfect,” Mumby said. “There are areas in the northern barrier reefs where there are relatively few of these reefs identified.
“So some next steps are to relax the criteria and look at plan B and plan C.”
Associate professor John Alroy, from Macquarie University’s department of biological sciences, also noted the lack of the robust reefs in the north, saying the research paper was “thorough and interesting” but optimistic.
“That makes me wonder whether reefs in the far north can really be kept alive by being replenished from the south.”
Alroy also suggested many of the Great Barrier Reef’s animals would be absent from those reefs and suggested the paper did not fully acknowledge the worsening nature of climate change, which would probably also kill the robust reefs anyway.
Dr Andrew Lenton, principal research scientist at CSIRO Oceans and Atmosphere, said the report identified what was needed to maximise the capacity of coral to recover, including the protection of the robust reefs.
“It also recognises that this alone is not likely to be sufficient to ensure the longer-term viability of the Great Barrier as a whole and will need to be coupled with climate mitigation, local management and active management such as coral re-seeding.”
Mumby said new information about the reef and the way it functioned and repaired itself was readily adopted by the marine park authorities into its management plans, and received support from the federal government.
“This list of around 100 reefs is both a tangible and feasible set of intervention points to form part of a strategy for maintaining the systemic resilience of an ecosystem that is thousands of kilometres in scale,” the report said.
Tourism generated by the 2 million annual visitors to the reef contributes almost $6bn to the Australian economy.
The “life support” reefs identified in the study were good news, Mumby said, but more needed to be done to ensure the survival of the Great Barrier Reef.
In May scientists warned that the central goal of the government’s protection plan was no longer feasible because of the dramatic impact of climate change.
“It’s very clear that in order to maintain a beautiful reef into the future, we absolutely need to be much more aggressive in our response to climate change,” Mumby said. “We need coherent policies in government about what government is trying to achieve in our actions towards climate change and we need to continue to invigorate the local protections.” |
Use the lesson and student worksheet below to reinforce comprehension of the student article "The Deadly Effects of Tobacco Addiction."
This month's Heads Up article from the National Institute on Drug Abuse and Scholastic provides your students with science-based facts about tobacco addiction and secondhand smoke. The article summarizes scientific information and describes current research on the effects of nicotine on adolescents.
Your students will benefit greatly from science-based information about the effects of tobacco addiction, the dangers of secondhand smoke, and how tobacco addiction is treated. The Lesson Plan below is designed to enhance students' understanding of the article. We appreciate your ongoing efforts in providing young people with facts about addiction and how it affects them.
Nora D. Volkow, M.D.
Director of NIDA
In This Installment
- What causes tobacco addiction
- Why secondhand smoke is harmful to nonsmokers
- The latest research on tobacco addiction and teens
LESSON PLAN & STUDENT WORKSHEET
Preparation: Before conducting the lesson, make two photocopies of the Student Worksheet for a pre- and post-lesson quiz.
Use the Student Worksheet as an Assessment Quiz to determine what your students have learned about tobacco addiction and secondhand smoke.
OBJECTIVE: To test students' knowledge about tobacco addiction and secondhand smoke before and after reading the article
NATIONAL SCIENCE EDUCATION STANDARDS: Life Science: Science in Personal and Social Perspective
WHAT YOU WILL DO:
- Ask students: "What makes tobacco addictive?" and "What is secondhand smoke and why is it harmful?" Give students time for discussion.
- Distribute copies of the Student Worksheet. Tell students to write their name on the paper and answer the questions. Explain that they will answer the questions again after they read the article.
- Next, provide students with three questions to consider as they read the article "The Deadly Effects of Tobacco Addiction" in their magazine: Why is tobacco addiction a problem for adolescents? What health problems are caused by smoking? What are the dangers of secondhand smoke?
- After students read the article and discuss their answers, have them complete the Student Worksheet again. When they have finished, reveal the correct answers.
- Wrap up the lesson by asking students: How would you respond to a teen smoker who says, "I can quit whenever I want"? What would you say to someone you know who regularly smokes around children?
ANSWERS TO STUDENT WORKSHEET:
1. a; 2. b; 3. c; 4. d; 5. d; 6. a & b; 7. d; 8. true; 9. true; 10. true.
- For printable past and current articles in the HEADS UP series, as well as activities and teaching support, go to www.drugabuse.gov/parent-teacher.html or www.scholastic.com/headsup.
- For access to more information for teens on tobacco addiction research, visit www.teens.drugabuse.gov.
- For information on tobacco abuse and addiction, go to www.smoking.drugabuse.gov.
- Find information on how to quit smoking at www.smokefree.gov. |
In tectonically active regions of the Earth’s crust, or in areas connected to deep petroleum basins, methane streams up from the ocean floor. Elsewhere, such gaseous seeps are much less common. Now, though, scientists have discovered methane bubbling from at least 570 locations on the Atlantic Ocean floor where the continental shelf meets the deeper sea—generally, a tectonically calm place.
And some of these plumes are likely more than 1,000 years old.
At approximately 40 of the seeps, the methane could be originating in deeper reservoirs of the gas and traveling upward through layers of sediment.
Here’s Henry Fountain, writing for The New York Times:
But Dr. Ruppel said most of the seeps had been found in depths of about 800 to 2,000 feet, where the methane, which is produced by microbes, is most likely trapped in sediments near the seafloor, within cagelike molecules of ice called hydrates. Natural variability in water temperatures, caused by ocean circulation and other factors, may be warming these hydrates just enough to release the gas.
Hydrates at such relatively shallow depths “are exquisitely sensitive to small changes in temperature,” she said. “You don’t have to change things very much to get the methane to come out.”
In other words, the area (which extends from Cape Hatteras, N.C. to the Georges Bank southeast of Nantucket, Mass.) is a convenient spot for researchers to study the links between climate change, methane emission, and ocean acidification—particularly because it’s not a tectonically active place.
Here’s Sid Perkins, interviewing study co-author Adam Skarke for Nature:
Sampling the bubbles, along with the waters in and around the plumes, will help scientists to estimate the effects of the methane emissions, says Skarke. The gas reacts with, and thereby diminishes, dissolved oxygen, a process that creates carbon dioxide that will tend to acidify surrounding waters.
The scientists hope to use methane measurements from the area to determine how much of the gas is being produced, how it varies with ocean floor temperature, how the seeps have evolved over time, and whether there are any other unexplored ocean plume communities. These are big questions whose answers could contribute substantially to the study of our changing planet.
Raspberry sawfly
The raspberry sawfly (Monophadnoides geniculatus) is a small, black, wasp-like insect that appears in early summer. The female lays eggs on the leaves of primocanes. The larvae are small and green and look like little caterpillars. They feed on the leaves, while avoiding the larger veins. The result is a leaf that is peppered with small holes, creating a distinct netted look, a type of feeding referred to as skeletonizing. Be sure to look for the sawflies themselves, as this damage can be confused with that of Japanese beetles. Sawflies have one generation per year.
The skeletonizing done by sawflies can look serious on canes where an adult laid multiple eggs, but raspberry sawflies rarely cause a loss in yield. Typically, only a small number of canes have sawfly larvae. The damage only occurs in early summer. Individual primocanes can often outgrow the damage. Gardeners may find the middle of canes with damaged leaves, while the tops and bottoms of the same canes have normal, healthy leaves.
Sawflies rarely need to be managed in Minnesota. In small patches with minor infestations, the larvae can be removed by hand. If the majority of primocanes have sawfly damage, an insecticide may be necessary. Only spray insecticides if the little pale green "worms" are visible and still feeding on the leaves. If there are no worms, and the cane is forming healthy new leaves above the skeletonized leaves, the larvae may already be gone. Sawfly larvae are susceptible to 'soft' insecticides, such as insecticidal soap. Contact residual insecticides such as permethrin, malathion and carbaryl are also options. |
In many cultures (notably Western, Middle Eastern, and African) the family name is typically the last part of a person's name. In some other cultures, the family name comes first. The latter is often called the Eastern order because Europeans are most familiar with the examples of China, Korea, Japan and Vietnam. Because the family name is normally given last in English-speaking societies, the term last name is commonly used for family name.
Family names are most often used to refer to a stranger or in a formal setting, and are often used with a title or honorific such as Mr., Mrs., Ms., Miss, Dr., and so on. Generally the given name, Christian name, first name, forename, or personal name is the one used by friends, family, and other intimates to address an individual. It may also be used by someone who is in some way senior to the person being addressed.
The oldest use of family or surnames is unclear. Surnames have arisen in cultures with large, concentrated populations where single names for individuals became insufficient to identify them clearly. In many cultures, the practice of using additional descriptive terms in identifying individuals has arisen. These identifying terms or descriptors may indicate personal attributes, location of origin, occupation, parentage, patronage, adoption, or clan affiliation. Often these descriptors developed into fixed clan identifications which became family names in the sense that we know them today.
In China, according to legend, family names originated with Emperor Fu Xi in 2852 BC. His administration standardised the naming system in order to facilitate census-taking, and the use of census information. The surnames "Zhu," "Lee," "Chung" and "Chang" are among the most popular in Taiwan and China. In Japan, family names were uncommon except among the aristocracy until the 19th century.
In Ancient Greece, during some periods, it became common to use one's place of origin as a part of a person's official identification. At other times, clan names and patronymics ("son of") were also common. For example, Alexander the Great was known by the clan name Heracles and was, therefore, Heracleides (as a supposed descendant of Heracles) and by the dynastic name Karanos/Caranus, which referred to the founder of the dynasty to which he belonged. In none of these cases, though, were these names considered formal parts of the person's name, nor were they explicitly inherited in the manner which is common in many cultures today. Collective clan-style names did survive, however, in forms such as 'Greeks', 'Hellenes' and 'Minoans', as opposed to the toponymic 'Sea Peoples' used by the Egyptians, or 'Ionians', which survives in 'Younanis', one of the names Arabic speakers still use for the Greeks today.
In the Roman Empire, the bestowal and use of clan and family names waxed and waned with changes in the various subcultures of the realm. At the outset, they were not strictly inherited in the way that family names are inherited in many cultures today. Eventually, though, family names began to be used in a manner similar to most modern European societies. With the gradual influence of Greek/Christian culture throughout the Empire, the use of formal family names declined. By the time of the fall of the Roman Empire in the 5th century, family names were uncommon in the Eastern Roman (i.e. Byzantine) Empire. In Western Europe where Germanic culture dominated the aristocracy, family names were almost non-existent. They would not significantly reappear again in Eastern Roman society until the 10th century, apparently influenced by the familial affiliations of the Armenian military aristocracy. The practice of using family names spread through the Eastern Roman Empire and gradually into Western Europe although it was not until the modern era that family names came to be explicitly inherited in the way that they are today.
In the case of England, the most accepted theory of the origin of family names is to attribute their introduction to the Normans and the Domesday Book of 1086. As such, documents indicate that surnames were first adopted among the feudal nobility and gentry, and only slowly spread to the other parts of society. Some of the early Norman nobility arriving in England during the Norman Conquest differentiated themselves by affixing 'de' (of) in front of the name of their village in France. This is what is known as a territorial surname, a consequence of feudal landownership. In medieval times in France, those distinguishing themselves by this manner indicated lordship, or ownership, of their village. But some early Norman nobles in England chose to drop the French derivations and simply call themselves after the name of their new English holdings.
During the modern era, many cultures around the world adopted the practice of using family names, particularly for administrative reasons, especially during the imperialistic age of European expansion and particularly from the 17th to 19th centuries onwards. Notable examples include the Netherlands (1811), Japan (1870s), Thailand (1920), and Turkey (1934). Nonetheless, their use is not universal: Icelanders, Tibetans, Burmese, and Javanese do not use family names.
Most surnames of British origin fall into seven types: occupations, personal characteristics, geographical features, place names, estate names, patronymics and matronymics, and patronage.
The original meaning of the name may no longer be obvious in modern English (e.g., a Cooper is one who makes barrels, and the name Tillotson is a matronymic from a diminutive for Matilda). A much smaller category of names relates to religion, though some of this category are also occupations. The names Bishop, Priest, or Abbot, for example, may indicate that an ancestor worked for a bishop, a priest, or an abbot, respectively, or possibly took such a role in a popular religious play (see pageant play).
In the Americas, the family names of many African-Americans have their origins in slavery (i.e. slave name). Many of them came to bear the surnames of their former owners. Many freed slaves either created family names themselves or adopted the name of their former master. Others, such as Muhammad Ali and Malcolm X changed their name rather than live with one they believed had been given to their ancestors by a slave owner.
In England and cultures derived from there (though not in France, for example), there has long been the patriarchal tradition for a woman to change her surname upon marriage from her birth name to her husband's last name. From the first known instance of a woman keeping her birth name, Lucy Stone in the 19th century, there has been a general increase in the rate of women keeping their original name. This has gone through periods of flux, however, and the 1990s saw a decline in the percentage of name retention among women. As of 2004, roughly 60% of American women automatically assumed their husband's surname upon getting married. Even in families where the wife has kept her birth name, parents often choose to give their children their father's family name. In English-speaking countries, married women are traditionally known as Mrs [Husband's full name].
In the Middle Ages, when a man from a lower-status family married an only daughter from a higher-status family, he would often take the wife's family name. In the 18th and 19th centuries in Britain, bequests were sometimes made contingent upon a man's changing (or hyphenating) his name, so that the name of the testator continued. It is rare but not unknown for an English-speaking man to take the name of his wife, whether for personal reasons or as a matter of tradition (such as among Canadian aboriginal groups, especially the matrilineal Haida and Kwakiutl); it is increasingly common in the United States, where a married couple may choose a new last name entirely. This has become more widely popular in Southern California since the election of Antonio Villaraigosa as Los Angeles mayor.
As an alternative, both spouses may adopt a double-barrelled name. For instance, when John Smith and Mary Jones marry each other, they may become known as John Smith-Jones and Mary Smith-Jones. However, some consider the extra length of the hyphenated names undesirable. A spouse may also opt to use his or her birth name as a middle name. An additional option, although rarely practiced, is the adoption of a last name derived from a portmanteau of the prior names, such as "Simones". Some couples keep their own last names but give their children hyphenated or combined surnames.
In some jurisdictions, a woman's legal name used to change automatically upon marriage. That change is no longer a requirement (except in South Africa), but women may still easily change to their husband's surname. In the United States, men can easily change their surname upon marriage with the federal government, through the Social Security Administration, but may face difficulty at the state level in some states. In some places, civil rights lawsuits or constitutional amendments changed the law so that men could also easily change their married names (e.g., in British Columbia and California). (Note: many Anglophone countries are also common-law countries.)
Many people choose to change their name when they marry, while others do not. There are many reasons why people maintain their surname. One is that dropped surnames disappear throughout generations, while the adopted surname survives. Another reason is that if a person's surname is well known due to his or her particular family's heritage or prominence, he or she may choose to keep his or her birth surname. Yet another is the identity crisis people may experience when giving up their surname. People in academia, for example, who have previously published articles in academic journals under their birth name often do not change their surname after marriage, in order to ensure that they continue to receive credit for their past and future work. This practice is also common among physicians, attorneys, and other professionals, as well as celebrities for whom continuity is important. Though the practice of women's maintaining their surname after marriage is increasing, it has not caught on in the general population and there is great peer pressure for women to change their names. Practices among same-sex married couples do not at this point follow any discernible pattern, with some choosing to share surnames, while others do not.
In Southern gospel and folk music, families often perform together as groups. When female artists in these genres marry, they usually adopt double-barrelled surnames if the husband comes from a noted musical family as well (e.g. Allison Durham Speer, Kelly Crabb Bowling), or simply continue to go by their birth names if the husband is not from such a family (e.g. Karen Peck, Libbi Perry, Janet Paschal).
Spelling of names in past centuries is often assumed to be a deliberate choice by a family, but due to very low literacy rates, the reality is that many families could not provide the spelling of their surname, and so the scribe, clerk, minister, or official would write down the name on the basis of how it was spoken, or how they heard it. This results in a great many variations, some of which occurred when families moved to another country (e.g. Wagner becoming Wagoner, or Whaley becoming Whealy). With the increase in bureaucracy, officially-recorded spellings tended to become the standard for a given family.
In medieval times, a patronymic system similar to the one still used in Iceland emerged. For example, Álvaro, the son of Rodrigo would be named Álvaro Rodríguez. His son, Juan, would not be named Juan Rodríguez, but Juan Álvarez. Over time, many of these patronymics became family names and are some of the most common names in the Spanish-speaking world. Other sources of surnames are personal appearance or habit, e.g. Delgado ("thin") and Moreno ("tan"); occupations, e.g. Molinero ("miller"), Rey ("King") and Guerrero ("warrior"); and geographic location or ethnicity, e.g. Alemán ("German").
However, nowadays in Spain and in many Spanish-speaking countries (former Spanish colonies, e.g. Philippines, Dominican Republic, Puerto Rico, Cuba, Guatemala, Colombia, Peru, Chile, Venezuela), most people have two family names, although in some situations only the first is used. The first family name is the paternal one, inherited from the father's paternal family name. The second family name is the maternal one, inherited from the mother's paternal family name. (As an example, Mexican boxer Marco Antonio Barrera's full name is Marco Antonio Barrera Tapia, though Barrera is the only one used in general conversation.) In Spain, a new law approved in 1999 allows an adult to change the order of his/her family names, and parents can also change the order of their children's family names if they agree (if one of their children is at least 12 years old they need his/her agreement too).
Depending on the country, the family names may or may not be linked by the conjunction y ("and"), i ("and", in Catalonia), de ("of") and de la ("of the", when the following word is feminine). However, in many South American countries, people have now adopted the English-speaking custom of having a single family name (e.g., in Argentina). Sometimes a new father transmits his complete family name by creating a new one, combining his two family names, e.g., the paternal surname of the son of Javier (given name) Reyes (paternal family name) de la Barrera (maternal surname) may become the new paternal surname Reyes de la Barrera.
At present in Spain, women upon marrying keep their own two family names. In certain rare situations, especially among the nobility, a woman may be addressed as if her maternal surname had been replaced with her husband's paternal surname, often linked with de. For example, a woman named Ana García Díaz, upon marrying Juan Guerrero Macías, could be called Ana García de Guerrero. This custom, begun in medieval times, is decaying and only has legal validity in the Dominican Republic, Puerto Rico, Ecuador, Guatemala, Peru, Panama, and to a certain extent in Mexico, where its use is declining over time. In Mexico, married women traditionally kept their first family name followed by "de" and then the husband's last name. For example, María Martínez López, when married to Josué Vásquez Hernández, would become María Martínez de Vásquez. This usage is being discontinued; it survives mainly among older women and those who grew up with the custom, and it is also used to refer to a woman whose full name one does not know, substituting her husband's last name (as in the preceding example). In Peru and the Dominican Republic, women normally retain all family names after getting married. For example, if Rosa María Pérez Martínez marries Juan Martín De La Cruz Gómez, she will be called Rosa María Pérez Martínez de De La Cruz, and if the husband dies, she will be called Rosa María Pérez Martínez Vda. de De La Cruz (Vda. is the abbreviation for Viuda, "widow" in Spanish). In Ecuador, a couple can choose the order of their children's surnames. Most choose the traditional order (e.g., Guerrero García in the example above), but some invert it, putting the mother's paternal surname first and the father's paternal surname last (e.g., García Guerrero). Such inversion, if chosen, must be maintained for all the children.
Spanish surnames are also based on location; for example, "De La Torre" is Spanish for "Of The Tower" and is commonly used in Mexico. The name is thought to be of Spanish origin, possibly brought by Spanish conquistadors who settled in Mexico.
In Argentina only one family name, the father's paternal family name, is commonly used and registered, as in English-speaking countries, although many Argentinians (by no means all) do use two surnames as per Spanish usage. One reason the single surname predominates is that a large proportion of the dominant class is of Italian descent and therefore follows Italian conventions. Women, however, do not change their family names upon marriage and continue to use their birth family names instead of their husband's family names.
In Cuba, both men and women carry their two family names (first their father's, and second their mother's). Both are equally important and are mandatory for any official document. Married women never change their original family names for their husband's. Even when they migrate to other countries where this is a common practice, many prefer to adhere to their Cuban heritage and keep their maiden name.
There are about 1,000,000 different family names in German. German family names most often derive from given names, occupational designations, bodily attributes or geographical names. Hyphenations notwithstanding, they mostly consist of a single word; in those rare cases where the family name is linked to the given names by particles such as von or zu, they usually indicate noble ancestry. Not all noble families used these names (see Riedesel), while some farm families, particularly in Westphalia, used the particle von or zu followed by their farm or former farm's name as a family name (see Meyer zu Erpen).
Family names in German-speaking countries are usually positioned last, after all given names. There are exceptions, however: In parts of Austria and the Alemannic-speaking areas, the family name is regularly put in front of the first given name. Also in many - especially rural - parts of Germany, to emphasize family affiliation there is often an inversion in colloquial use, in which the family name becomes a possessive: Rüters Erich, for example, would be Erich of the Rüter family.
In Germany today, upon marriage, both partners can choose to keep their birth name or one of them can adopt a hyphenated name of their birth names (the latter case is forbidden for both partners and for the last names of children), or one of them can switch to their partner's name (if the partner keeps it). After that, they must decide on one family name for all their future children, by pretty much the same rules. (German name)
Changing one's family name for reasons other than marriage, divorce or adoption is only possible in Germany if the applicant can prove that they suffer extraordinarily due to their name.
In the case of Portuguese naming customs, the main surname (the one used in alphasorting, indexing, abbreviations, and greetings) appears last (the reverse of the Spanish order).
Each person usually has two family names: the first is the maternal family name; the last is the paternal family name. A person can have up to six names (two first names and four surnames — he or she may have two names from the mother and two from the father).
In ancient times a patronymic was commonly used — surnames like Gonçalves ("son of Gonçalo"), Fernandes ("son of Fernando"), Nunes ("son of Nuno"), Soares ("son of Soeiro"), Sanches ("son of Sancho"), Henriques ("son of Henrique") which along with many others are still in regular use as very prevalent family names.
Brazilians usually call people only by their given names, omitting family names, even in many formal situations (as in the press referring to authorities, e.g. "Former President Fernando Henrique", never Former President Cardoso), or "President Lula" ("Lula" was actually his nickname). When formality or a prefix requires a family name, the given name usually precedes the surname, e.g. João Santos, or Sr. João Santos.
Following the occupation of Azerbaijan by the Red Army, the country became part of the Soviet Union. As a result, Azeri people were forced to abandon their traditional Azeri surname suffixes, which were replaced by the Russian suffixes "-ov" and "-yev" for men and "-ova" and "-yeva" for women.
In 1991, Azerbaijan gained its independence from the Soviet Union. Since then, more and more Azeris are switching back to their original surnames.
In Western Finland, agrarian names dominated, and a person's last name was usually given according to the farm or holding they lived on. In 1921, surnames became compulsory for all Finns. At that point, the agrarian names were usually adopted as surnames. A typical feature of such names is the addition of the prefixes Ala- (Lower) or Ylä- (Upper), giving the location of the holding along a waterway in relation to the main holding (e.g. Yli-Ojanperä, Ala-Verronen).
A third, foreign tradition of surnames was introduced in Finland by the Swedish-speaking upper and middle classes, which used typical German and Swedish surnames. By custom, all Finnish-speaking persons who were able to get a position of some status in urban or learned society discarded their Finnish name, adopting a Swedish, German or (in the case of clergy) Latin surname. In the case of enlisted soldiers, the new name was given regardless of the wishes of the individual.
In the late 19th and early 20th century, the overall modernization process and especially, the political movement of fennicization caused a movement for adoption of Finnish surnames. At that time, many persons with a Swedish or otherwise foreign surname changed their family name to a Finnish one. The features of nature with endings -o/ö, -nen (Meriö < Meri "sea", Nieminen < Niemi "point") are typical of the names of this era, as well as more or less direct translations of Swedish names (Paasivirta < Hällström).
In 21st-century Finland, the use of surnames follows the German model. Every person is legally obliged to have a first and last name. At most, three first names are allowed. A Finnish married couple may adopt the name of either spouse, or either spouse (or both spouses) may decide to use a double-barrelled name. The parents may choose either surname or the double-barrelled surname for their children, but all siblings must share the same surname. All persons have the right to change their surname once without any specific reason. A surname that is un-Finnish, contrary to the usages of the Swedish or Finnish languages, or in use by any person resident in Finland cannot be accepted as the new name, unless valid family reasons or religious or national customs give a reason for waiving this requirement. However, persons may change their surname to any surname that has ever been used by their ancestors, if they can prove such a claim. Some immigrants have had difficulty naming their children, as they must choose from an approved list based on the family's household language.
In the Finnish language, the root of a surname can be regularly modified by consonant gradation when inflected for case. In contrast, first names do not undergo qualitative gradation (e.g. Hilta - Hiltan), only quantitative gradation (Mikko - Mikon).
Commonly, Greek male surnames end in -s, which is the common ending for Greek masculine proper nouns in the nominative case. Exceptionally, some end in -ou, indicating the genitive case of this proper noun for patronymic reasons.
Although surnames are static today, dynamic and changing patronym usage survives in middle names in Greece where the genitive of father's first name is commonly the middle name.
Because of their codification in the Modern Greek state, surnames have Katharevousa forms even though Katharevousa is no longer the official standard. Thus, the Ancient Greek name Eleutherios forms the Modern Greek proper name Lefteris, and former vernacular practice (prefixing the surname to the proper name) was to call John Eleutherios as Leftero-giannis.
Modern practice is to call the same person Giannis Eleftheriou: the proper name is vernacular (and not Ioannis), but the surname is an archaic genitive.
Female surnames are most often the Katharevousa genitive of a male name. This is an innovation of the Modern Greek state; Byzantine practice was to form a feminine counterpart of the male surname (e.g. masculine Palaiologos, Byzantine feminine Palaiologina, Modern feminine Palaiologou).
In the past, women would change their surname when married, to that of their husband (again in genitive case) signifying the transfer of "dependence" from the father to the husband. In earlier Modern Greek society, women were named with -aina as a feminine suffix on the husband's first name: "Giorgaina", "Mrs George", "Wife of George". Nowadays, a woman's surname does not change upon marriage, though she can use the husband's surname socially. Children usually receive the paternal surname, though in rare cases, if the bride and groom have agreed before the marriage, the children can receive the maternal surname.
Some surnames are prefixed with Papa-, indicating ancestry from a priest, e.g. "Papadopoulos", the "son of the priest (papas)". Others, like Archi- and Mastro-, signify "boss" and "tradesman" respectively.
Prefixes such as Konto-, Makro-, and Chondro-, describe body characteristics, such as "short", "tall/long" and "fat". "Gero-" and "Palaio-" signify "old" or "wise".
Other prefixes include Hadji-, an honorific derived from the Arabic Hajj, or pilgrimage, indicating that the person had made a pilgrimage (in the case of Christians, to Jerusalem), and Kara-, from the Turkish word for "black", dating from the Ottoman Empire era.
Arvanitic surnames are also common. For example, the Arvanitic word for "brave" or "pallikari" (in Greek), "çanavar", or its shortened form "çavar", was pronounced "tzanavar" or "tzavar", giving rise to traditional Arvanitic family names such as "Tzanavaras" and "Tzavaras".
Most Greek patronymic suffixes are diminutives, which vary by region. The most common Hellenic patronymic suffixes are:
Others, less common are:
In Hungarian, as in some Asian languages but unlike most other European ones (see French and German above for exceptions), the family name is placed before the given names. This usage does not apply to non-Hungarian names; for example, "Tony Blair" remains "Tony Blair" when written in Hungarian texts.
Names of Hungarian individuals, however, appear in Western order in English writing.
In Iceland, most people have no family name; a person's last name is most commonly a patronymic, i.e. derived from the father's first name. For example, when a man called Karl has a daughter called Anna and a son called Magnús, their full names will typically be Anna Karlsdóttir ("Karl's daughter") and Magnús Karlsson ("Karl's son"). The name is not changed upon marriage.
India is a country with numerous distinct cultures and language groups. Indian surnames, where formalized, thus fall into seven general types. Many people from the southern states of Tamil Nadu and Kerala do not use any formal surname, though most might have one.
In Northern India, most of the people have their family name after the given names, whereas in Southern India, the given names come after the family name.
The convention is to write the first name followed by middle names and surname. It is common to use the father's first name as the middle name or last name even though it is not universal. In some Indian states like Maharashtra, official documents list the family name first, followed by a comma and the given names.
It is customary for wives to take the surname of their husband after marriage. In modern times, in urban areas at least, this practice is not universal. In some rural areas, particularly in North India, wives may also take a new first name after their nuptials. Children inherit their surnames from their father.
In some parts of Southern India no formal surname is used, because the family has decided to forgo its existing clan name, although there has been a minor reversal of this trend in recent times. This practice is prevalent in Tamil Nadu and Kerala. For example, people from the Kongu Vellala Gounder community of Tamil Nadu have in general two titles: the caste title Gounder and the clan name, for example Perungudi. Nowadays it is common for people to use neither of these titles, so a Konguvel, son of Shanmuganathan, of, say, Erode, would call himself Konguvel Shanmughanathan instead of the traditional Erode Perungudi Konguvel Gounder. This practice is of very recent origin, though. A wife or child takes the given name of the husband or father (Usha, married to Satish, may therefore be called Usha Satish or simply S. Usha). In many communities, especially Christian ones, names are formed from the given name as the first name, the family name and house name as the middle name(s), and the father's or husband's given name as the last name. Thus the last name changes with each generation. The house name would also change as generations moved out of their consanguineal family homes, with the changing ownership of property upon the death of the patriarch. The Dravidian movement of the early 20th century was instrumental in abolishing the concept of surnames in Tamil Nadu. Since many companies in industrially rich Tamil Nadu managed to filter candidates just by looking at their names, the movement went so far that surnames and caste names were simply refused at the primary-school level. It became so active that where streets, roads and galis bore names that included a caste name, road tar was applied over the caste name. For instance, on a Ranganatha Mudaliar Street the Mudaliar name was struck off with tar, leaving the street as Ranganathan Street. The same happened with almost all castes, and it is now hard to find a Mudaliar, Nadar, Pillai, Goundar, Iyer, Chettiar, etc. on any public display. Only in arranged marriages do people feel proud to publish their caste names; where people arrange their own marriages (inter-caste or inter-religion), the caste name almost vanishes. Hence the famous "Ethiraja Mudaliar College" in Chennai is simply "Ethiraj College", and "Kamaraja Nadar Road" is simply "Kamaraj Road". This trend has been welcomed by politicians from states such as Uttar Pradesh and Bihar.
Jains generally use Jain, Shah, Firodia, Singhal or Gupta as their last names. Sikhs generally use the words Singh ("lion") and Kaur ("princess") as surnames added to the otherwise unisex first names of men and women, respectively. It is also common to use a different surname after Singh in which case Singh or Kaur are used as middle names (Montek Singh Ahluwalia, Surinder Kaur Badal). The tenth Guru of Sikhism ordered (Hukamnama) that any man who considered himself a Sikh must use Singh in his name and any woman who considered herself a Sikh must use Kaur in her name. Other middle names or honorifics that are sometimes used as surnames include Kumar, Dev, Lal, and Chand.
The modern spellings of names originated when families translated their surnames into English, with no standardization across the country. Variations are regional, based on how the name was translated from the local language into English in the 18th, 19th or 20th centuries during British rule. Therefore, it is understood in local tradition that Agrawal and Aggarwal represent the same name, derived from Uttar Pradesh and Punjab respectively. Similarly, Tagore derives from Bengal while Thakur is from Hindi-speaking areas. The officially recorded spellings tended to become the standard for that family. In modern times, some states have attempted standardization, particularly where surnames were corrupted because of the early British insistence on shortening them for convenience. Thus Bandopadhyay became Banerji, Mukhopadhyay became Mukherji, Chattopadhyay became Chatterji, etc. This, coupled with various other spelling variations, created several surnames based on the original ones. The West Bengal Government now insists on re-converting all the variations to their original form when a child is enrolled in school.
Javanese people are the majority in Indonesia, and most do not have a surname. Many individuals have only one name, such as "Suharto" and "Sukarno". This is common not only among the Javanese but also among other ethnic groups without a tradition of surnames. If they are Muslims, however, they might opt to follow Arabic naming customs.
Many surnames in Ireland of Gaelic origin derive from ancestors' names, nicknames, or descriptive names. In the first group can be placed surnames such as McMurrough and McCarthy, derived from patronymics, or O'Brien and O'Grady, derived from ancestral names.
Gaelic surnames derived from nicknames include Ó Dubhda (from Aedh ua Dubhda - Aedh, the dark one), O'Doherty (from dochartaigh, "destroyer" or "obtrusive"), Garvery (garbh, "rough" or "nasty"), Manton (mantach, "toothless"), Bane (bán, "white", as in "white hair"), Finn (fionn, "fair", as in "fair hair"), and Kennedy (cinnéide, "ugly head").
In contrast to England, very few Gaelic surnames are derived from placenames or venerated people/objects. Among those that are included in this small group, several can be shown to be derivations of Gaelic personal names or surnames. One notable exception is Ó Cuilleáin or O'Collins (from cuileann, "Holly") as in the Holly Tree, considered one of the most sacred objects of pre-Christian Celtic culture. Another is Walsh (Breatnach), meaning Welsh.
In areas where certain family names are extremely common, extra names are added that sometimes follow this archaic pattern. In Ireland, for example, where Murphy is an exceedingly common name, particular Murphy families or extended families are nicknamed, so that Denis Murphy's family were called The Weavers and Denis himself was called Denis "The Weaver" Murphy. (See also O'Hay.)
For much the same reason, nicknames (e.g. the Fada Burkes, "the long/tall Burkes"), father's names (e.g. John Morrissey Ned) or mother's maiden name (Kennedy becoming Kennedy-Lydon) can become colloquial or legal surnames. The Irish family of de Courcy Ireland became so-named to distinguish them from their cousins who moved to France in the 17th and 18th centuries.
In addition to all this, Irish speaking areas still follow the old tradition of naming themselves after their father, grandfather, great-grandfather and so on. Examples include Mike Bartly Pat Reilly ("Mike, son of Bartholomew, son of Pat Reilly"), John Michel John Oge Pat Breanach ("John, son of Michael, son of young John, son of Pat Breanach"), Tom Paddy-Joe Seoige ("Tom, son of Paddy-Joe Seoige"), and Mary Bartly Mike Walsh ("Mary, daughter of Bartly, son of Mike Walsh"). Sometimes, the female line of the family is used, depending on how well the parent is known in the area the person resides, e.g. Paddy Mary John ("Paddy, son of Mary, daughter of John"). A similar tradition continues even in English-speaking areas, especially in rural districts.
Some Irish surnames can be mistaken for non-Irish. Anglicization of many surnames has been so thorough that bona-fide Irish names such as Crockwell and Harrington appear to be English. Other Irish names can appear to be German (Bruder), Italian (Costello), or even Polish (Comiskey).
Common suffixes in Persian surnames include: -i, -ian, -deh, -dust, -fard, -far, -ju, -iya, -nia, -nizhad (or -nejad), -oo, -par, -parast, -pour, -rad, -vand, -vard, -yar, -zadeh, -zad, -zand.
Sometimes the name of a city or town is attached as the last element of the family name, as in: Tehrani, Shirazi, Esfahani, Tabrizi, Zanjani, Angurani, Samani, Farahani.
Some common Persian last names are: Afsar, Agassi, Alivandi, Alizadeh, Amanpour, Ansari, Anvari, Ariani, Arki, Ashtari, Azria, Bahari, Bahrami, Bakhtiari, Bateni, Bozorgi, Dashti, Davoodi, Ebadi, Elmi, Emami, Esfahani, Fakoor, Farahani, Feiz, Firozi, Gharani, Gharibpour, Ghasemi, Golzari, Hosseini, Kalbasi, Karimi, Kashani, Kiani, Kiyanfar, Kiyanpour, Loghmani, Mehranzadeh, Milani, Mirzapour, Motallebzadeh, Najafi, Nakhudeh, Niyazfar, Omidifar, Ovisi, Ovasi, Rabiee, Rahimi, Rastinpour, Rezaei, Rouzrokh, Samani, Sarafpour, Sattari, Shirazi, Soltanzadeh, Souriani, Talebi, Tehrani, Teymourian, Yari, Yazdani, Zahedi, Zandi, and Zandipour.
Most, but not all, last names ending in -ian (sometimes -yan) are traditionally Persian. Armenian last names also commonly end in -ian; this does not mean they are Persian, though the suffix itself is of Persian origin. The same is true of -stan, a Persian noun-forming suffix meaning "land" or "province" (ostan in Persian), used in country names such as Pakistan and Afghanistan.
In traditional Persian culture the wife did not take her husband's surname. Although she kept her own name, her husband's surname was used when she was referred to or addressed directly in a formal setting.
Italy has around 350,000 surnames. Most of them derive from the following sources: patronym or ilk (e.g. Francesco di Marco, "Francis, son of Mark" or Eduardo de Filippo, "Edward belonging to the family of Philip"), occupation (e.g. Enzo Ferrari, "Enzo the Smith"), personal characteristic (e.g. nicknames or pet names like Dario Forte, "Darius the Strong"), geographic origin (e.g. Elisabetta Romano, "Elisabeth from Rome") and objects (e.g. Carlo Sacchi, "Charles Bags"). The two most common Italian family names, Russo and Rossi, mean the same thing, "Red", possibly referring to a hair color that would have been very distinctive in Italy.
Both Western and Eastern orders are used for full names: the given name usually comes first, but the family name may come first in formal or administrative settings; lists are usually indexed according to the last name.
Women usually keep their surname when married but may also be addressed with the surname of the husband, especially when they become widows. Sometimes both surnames are written (the proper first), usually separated by in (e.g. Giuseppina Mauri in Crivelli). A woman using only her birth surname may add a giovane to the name (e.g. Mauri giovane) to indicate clearly that it is not her husband's name.
In a recently proposed law, a child may receive the surname of either the mother or the father.
Sicilian and Italian surnames are common due to the close vicinity of Sicily and Italy to Malta. Examples include Bonello, Camilleri, Cauchi, Chetcuti, Dalli, Darmanin, Farrugia, Giglio, Gauci, Delicata, Licari, Magri, Rizzo, Schembri, Tabone, Troisi, Vassallo, etc.
English surnames exist due to Malta forming a part of the British Empire in the 19th century and most of the 20th. Examples include Bickle, Haidon, Harmsworth, Atkins, Mattocks, Martin, Wallbank, Smith, Jones, Sixsmith, Woods, Turner, Henwood.
Semitic surnames are common, due to the early presence of Eastern and Southern Mediterranean peoples in Malta. Examples include Sammut, Zammit, Said, Borg, Xuereb, Xerri, Grixti, Xriha, although the last three are also written in an Italianized form, i.e. Scerri, Griscti, Sciriha, because Maltese was written in the Italian alphabet in the 19th century.
Spanish surnames exist too. Two common ones are Calleja and Galdes; less common surnames include Enriquez, Herrera, Guzman, Inguanez and Carabez. Galdies, a variant of Galdes, is borne by only one family.
Greek surnames also occur, such as Papagiorcopoulo, Dacoutros, Vasilopoulos, Vasilis and Trakosopoulos.
French surnames also occur, such as Depuis and Montfort.
Surnames of foreign origin dating from the Middle Ages include German ones such as von Brockdorff, Engerer, Hyzler, Schranz, Craus and Fenech.
The Jews have also left a relic of their presence on the island with the surnames Abela, Ellul, Azzopardi and Cohen.
Some Maltese women, in order to preserve a rare surname from becoming extinct after marriage, add their maiden surname to their husband's. Sometimes, it becomes a sign of social status. These include: Spiteri-Gonzi, Fleri Soler, Mifsud-Bonnici, Sammut-Alessi, Sammut-Testaferrata, Cachia-Zammit, Caruana Curran, Vella-Maistre, Zarb Cousin, Fenech-Adami, Borg Olivier, Sant Fournier.
The few original Maltese surnames are those which show places of origin, for example, Chircop (Kirkop), Lia (Lija), Balzan (Balzan), Valletta (Valletta), Sciberras (Xebb ir-Ras Hill, on which Valletta was built) and possibly Curmi from Qormi.
Recently, due to asylum seekers from third world countries, new family names have been created. An example is Nwoko, following the naturalisation of footballer Chucks Nwoko. Others include Okoh, Ohaegbu, Yekoko, Stefanov, Bogdanovic, Giorev, Mohammed, Abu Shala, Abu Shamala.
Women take a man's surname upon marriage, and their name is written as: Maria Borg née Zammit in official documents, but only as Maria Borg in informal scenarios. However some celebrities retain their old name as a stage name. Generally children take the surname of their father, but some are given the name of their mother, either alone or combined to their father's.
The custom when addressing a family is to use the initial and surname of the male and refer to the whole family. For example, a letter sent to a person named David Saliba and his family is addressed to Mr. and Mrs. D. Saliba.
Except for the new surnames from foreign countries, and sometimes the long, combined and rare ones, the Maltese generally do not give much importance to the origins of their surnames, and people of all of them coexist side by side.
Mongolians do not use surnames in the way that most Westerners, Chinese or Japanese do. Since the socialist period, patronymics - then called ovog, now called etsgiin ner - are used instead of a surname. If the father's name is unknown, a matronymic is used. The patro- or matronymic is written before the given name. Therefore, if a man with given name Tsakhia has a son, and gives the son the name Elbegdorj, the son's full name is Tsakhia Elbegdorj. Very frequently, the patronymic is given in genitive case, i.e. Tsakhiagiin Elbegdorj. However, the patronymic is rather insignificant in everyday use and usually just given as initial - Ts. Elbegdorj. People are normally just referred to and addressed by their given name (Elbegdorj guai - Mr. Elbegdorj), and if two people share a common given name, they are usually just kept apart by their initials, not by the full patronymic.
Since 2000, Mongolians have officially used clan names - ovog, the same word that was previously used for patronymics - on their IDs. Many people chose the names of ancient clans and tribes such as Borjigin, Besud, Jalair, etc. Many extended families chose the names of the native places of their ancestors; some chose the name of their most ancient known ancestor. Some simply decided to pass their own given names (or modifications thereof) to their descendants as clan names, and some chose other attributes of their lives as surnames: Gürragchaa chose Sansar (Cosmos). Clan names precede the patronymics and given names, e.g. Besud Tsakhiagiin Elbegdorj. In practice, these clan names seem to have had no really significant effect, and are not even included in Mongolian passports.
People claiming Iranian ancestry include those with family names Agha, Firdausi, Ghazali, Hamadani, Isfahani, Kashani, Kermani, Khorasani, Mir, Montazeri, Nishapuri, Noorani, Kayani, Qizilbash, Saadi, Sabzvari, Shirazi, Sistani, Yazdani, Zahedi, and Zand.
Tribal names include Abro, Afaqi, Afridi, Amini, Ashrafkhel, Awan, Bajwa, Baloch, Barakzai, Baranzai, Bhatti, Bhutto, Ranjha, Bijarani, Bizenjo, Brohi, Bugti, Butt, Detho, Gabol, Ghaznavi, Ghilzai, Gichki, Jakhrani, Jamali, Jamote, Janjua, Jatoi, Jutt, Joyo, Junejo, Karmazkhel, Kayani, Khan, Khar, Khattak, Khuhro, Lakhani, Leghari, Lodhi, Magsi, Malik, Mandokhel, Mayo, Marwat, Mengal, Mughal, Palijo, Paracha, Panhwar, Popalzai, Qureshi, Rabbani, Raisani, Rakhshani, Soomro, Sulaimankhel, Talpur, Talwar, Thebo, Yousafzai, and Zamani.
In Pakistan, the official paperwork format regarding personal identity is as follows:
So and so, son of so and so, of such and such caste and religion and resident of such and such place. For example, Amir Khan s/o Fakeer Khan, caste Mughal Kayani or Chauhan Rajput, Follower of religion Islam, resident of Village Anywhere, Tehsil Anywhere, District.
In 1849, Governor-general Narciso Clavería y Zaldúa decreed an end to these arbitrary practices, the result of which was the Catálogo Alfabético de Apellidos ("Alphabetical Inventory of Surnames"). The book contained many words coming from Spanish and the Philippine languages such as Tagalog and many Basque surnames, such as Zuloaga or Aguirre.
In practice, the application of this decree varied from municipality to municipality. Some municipalities received only surnames starting with a particular letter. For example, the majority of residents of the island of Banton in the province of Romblon have surnames starting with F, such as Fabicon, Fallarme, Fadrilan, and Ferran. Thus, although perhaps a majority of Filipinos have Spanish surnames, such a surname does not always indicate Spanish ancestry.
The vast majority of Filipinos follow a naming system which is the reverse of the Spanish one. Children take the mother's surname as their middle name, followed by their father's as their surname; for example, a son of Juan de la Cruz and his wife Maria Agbayani may be David Agbayani de la Cruz. Women take the surnames of their husband upon marriage; so upon her marriage to David de la Cruz, the full name of Laura Yuchengco Macaraeg would become Laura Yuchengco Macaraeg de la Cruz.
There are other sources for surnames. Many Filipinos also have Chinese-derived surnames, which in some cases could indicate Chinese ancestry. Many Hispanicised Chinese numerals and other Hispanicised Chinese words, however, were also among the surnames in the Catálogo Alfabético de Apellidos. For those whose surname may indicate Chinese ancestry, analysis of the surname may help to pinpoint when those ancestors arrived in the Philippines. A hispanicised Chinese surname such as Cojuangco suggests an 18th-century arrival while a Chinese surname such as Lim suggests a relatively recent immigration. Some Chinese surnames such as Tiu-Laurel are composed of the immigrant Chinese ancestor's surname as well as the name of that ancestor's godparent on receiving Christian baptism.
In the predominantly Muslim areas of the southern Philippines, adoption of surnames was influenced by connexions to that religion, its holy places, and prophets. As a result, surnames among Filipino Muslims are largely Arabic-based, and include such surnames as Hassan and Haradji.
There are also Filipinos who, to this day, have no surnames at all, particularly if they come from indigenous cultural communities.
A common Filipino name consists of the given name (usually two given names are given), the initial letter of the mother's maiden name, and finally the father's surname (e.g. Lucy Anne C. de Guzman). Also, women are allowed to retain their maiden name or to use both their own and their husband's surname, separated by a dash. This is common in feminist circles or when the woman holds a prominent office (e.g. Gloria Macapagal-Arroyo, Miriam Defensor-Santiago). In more traditional circles, especially among the prominent families of the provinces, the custom of the woman being addressed as "Mrs. Husband's Full Name" is still common.
For widows who choose to marry again, two norms exist. For those widowed before the Family Code, the woman's full name remains while the surname of the deceased husband is attached. Thus, Maria Andres, widowed by Ignacio Dimaculangan, will have the name Maria Andres viuda de Dimaculangan. If she chooses to marry again, this name will continue to exist while the surname of the new husband is attached. Thus, if Maria marries Rene de los Santos, her new name will be Maria Andres viuda de Dimaculangan de los Santos.
However, a new norm is also in existence. The woman may choose to use her husband's surname to be one of her middle names. Thus, Maria Andres viuda de Dimaculangan de los Santos may also be called Maria A.D. de los Santos.
Children will, however, automatically inherit their father's surname if they are considered legitimate. If the child is born outside wedlock, the mother automatically passes her surname to the child, unless the father gives a written acknowledgment of paternity. The father may also choose to give the child both his parents' surnames if he wishes (that is, Gustavo Paredes, whose parents are Eulogio Paredes and Juliana Angeles, and whose wife is Maria Solis, may name his child Kevin S. Angeles-Paredes).
In some Tagalog regions, the norm of giving patronyms, or in some cases matronyms, is also accepted. These names are of course not official, since family names in the Philippines are inherited. It is not uncommon to refer to someone as Juan anak ni Pablo (John, the son of Pablo) or Juan apo ni Teofilo (John, the grandson of Theophilus).
Until the 19th century, names were primarily of the form "[given name] [father's name] [grandfather's name]"; the few exceptions were usually famous people or the nobility (boyars). The name reform introduced around 1850 changed names to a Western style, most likely imported from France, consisting of a given name followed by a family name.
As such, the name is called prenume (French prénom), while the family name is called nume or, when otherwise ambiguous, nume de familie ("family name"). Although not mandatory, middle names (Romanian numele mic, literally, "small name") are common.
Historically, when the family name reform was introduced in the mid 19th century, the default was to use a patronym, or a matronym when the father was dead or unknown. The typical derivation was to append the suffix -escu to the father's name, e.g. Anghelescu ("Anghel's child") and Petrescu ("Petre's child"). (The -escu seems to come both from Old Slavonic -ьскъ and/or from Latin -iscum, thus being cognate with Italian -esco and French -esque.) The other common derivation was to append the suffix -eanu to the name of the place of origin, especially when one came from a different region, e.g. Munteanu ("from the mountains") and Moldoveanu ("from Moldova"). These uniquely Romanian suffixes strongly identify ancestral nationality.
There are also descriptive family names derived from occupations, nicknames, and events, e.g. Botezatu ("baptised"), Barbu ("bushy-bearded"), Prodan ("foster"), Bălan ("blond"), Fieraru ("smith"), Croitoru ("tailor").
Romanian family names remain the same regardless of the sex of the person.
Although given names appear before family names in most Romanian contexts, official documents invert the order, ostensibly for filing purposes. Correspondingly, Romanians often introduce themselves with their family names first, especially in official contexts, e.g. a student signing a test paper in school.
Romanians bearing names of non-Romanian origin often adopt Romanianised versions of their ancestral surnames, such as Jurovschi for Polish Żurowski, which preserves the original pronunciation of the surname through transliteration. In other cases, as with Romanians of Hungarian origin, these changes were often mandated by the state, as was the practice during the period of communist rule.
In Chinese, Japanese, Korean, and Vietnamese cultures, the family name is placed before the given names. So the terms "first name" and "last name" are generally not used, as they do not in this case denote the given and family names.
Chinese family names have many types of origins, dating back as early as pre-Qin era:
In history, some people changed their surnames due to a naming taboo (from Zhuang 莊 to Yan 嚴 during the era of Liu Zhuang 劉莊) or received a new surname as an award from the Emperor (Li was often awarded to senior officers during the Tang Dynasty).
In modern days, some Chinese adopt a Western given name in addition to their original given names, e.g. Lee Chu-ming (李柱銘) adopted the Western name Martin, which can often be used as a nickname for Chu-ming. The adopted Western name can be put in front of the Chinese name, e.g. Martin LEE Chu-ming. In addition, many people with Chinese names have non-Chinese first names which are commonly used. Sometimes, the Chinese given name is used as a "middle name", e.g. Martin Chu-ming Lee, or even as a "last name", e.g. Lee Chu-ming Martin. Chinese names used in Western countries may be rearranged when written to avoid misunderstanding, e.g. cellist Yo-Yo Ma. However, some well-known Chinese names remain in the traditional order even in English literature, e.g. Mao Zedong and Yao Ming (note that the name on the back of Yao Ming's NBA jersey is "Yao", rather than "Ming", as the former is his family name). Most people from mainland China stick with their own national standard when presenting their names. For example, in all Olympic events the names of PRC athletes are presented in the Chinese order even when spelled out phonetically in the Latin alphabet, while Chinese athletes from other countries, especially those on the US team, use the Western order. So the non-compliance with the Western order is not a matter of cultural convention but a national standard adopted by the PRC.
Vietnamese and Korean names are generally stated in East Asian order (family name first) even when writing in English.
In English writings originating from non-English cultures (e.g. English newspapers in China), the family name is often written with all capital letters to avoid being mistaken as a middle name, e.g. Laurence Yee-ming KWONG or using small capitals, as Laurence KWONG Yee-ming or with a comma, as AKUTAGAWA, Ryūnosuke to make clear which name is the family name. Such practice is particularly common in mass-media reporting international events like the Olympic Games. The CIA World Factbook stated that "The Factbook capitalizes the surname or family name of individuals for the convenience of [their] users who are faced with a world of different cultures and naming conventions". For example, Leslie Cheung Kwok Wing might be mistaken as Mr. Wing by readers unaware of Chinese naming conventions.
Vietnamese family names present an added complication. Like Chinese family names, they are placed at the beginning of a name, but unlike Chinese names, they are not usually the primary form of address. Rather, people will be referred to by their given name, usually accompanied by an honorific. For example, Phan Van Khai is properly addressed as Mr. Khai, even though Phan is his family name. This pattern contrasts with that of most other East Asian naming conventions.
In Japan, the civil law requires a common surname for every married couple, except in the case of international marriage. In most cases, women surrender their surnames upon marriage and use those of their husbands. However, a convention that a man uses his wife's family name if the wife is an only child is sometimes observed. A similar tradition called ru zhui (入贅) is common among the Chinese when the bride's family is wealthy and has no son but wants the heir to pass on its assets under the same family name. The Chinese character zhui (贅) carries a money radical (貝), which implies that this tradition was originally based on financial reasons. All offspring of such a marriage carry the mother's family name. If the groom is the first-born with an obligation to carry his own ancestors' name, a compromise may be reached in that the first male child carries the mother's family name while subsequent offspring carry the father's family name. The tradition is still in use in many Chinese communities outside mainland China, but is largely disused in China because of the social changes brought by communism. Due to the economic reforms of the past decade, accumulation and inheritance of personal wealth have made a comeback in Chinese society. It is unknown whether this financially motivated tradition will also return to mainland China.
In Chinese, Korean, and Singaporean cultures, women keep their own surnames, while the family as a whole is referred to by the surnames of the husbands.
In Hong Kong, some women are known to the public by their husbands' surnames placed before their own, such as Anson Chan Fang On Sang. Anson is an English given name, On Sang is her Chinese given name, Chan is her husband's surname, and Fang is her own surname. A name change on legal documents is not necessary. In Hong Kong's English-language publications, her family names would be presented in small capitals to resolve ambiguity, e.g. Anson CHAN FANG On Sang in full, or simply Anson Chan for short.
Chinese women in Canada, especially Hongkongers in Toronto, often preserve their maiden names before their husbands' surnames when written in English, for instance Rosa Chan Leung, where Chan is the maiden name and Leung is the husband's surname.
In Chinese, Korean, and Vietnamese, surnames are predominantly monosyllabic (written with one character), though a small number of common disyllabic (or written with two characters) surnames exists (e.g. the Chinese name Ouyang, the Korean name Jegal and the Vietnamese name Phan-Tran).
Many Chinese, Korean, and Vietnamese surnames share the same origin but are simply pronounced differently, and may even be transliterated differently overseas in Western nations. For example, the common Chinese surnames Chen, Chan, Chin, Cheng and Tan, the Korean surname Jin, and the Vietnamese surname Trần are often all the very same character, 陳. The common Korean surname Kim is also the common Chinese surname Jin, written 金. The common Mandarin surname Lin or Lim (林) is one and the same as the common Cantonese or Vietnamese surname Lam and the Korean family name Lim (written/pronounced Im in South Korea); interestingly, the same character appears in Japan as the surname Hayashi (林). The common Chinese surname rendered in English as Lee is, in Chinese, the character 李, transliterated Li according to pinyin convention. Lee is also a common surname of Koreans, written with the identical character.
Before the 19th century, Scandinavia used the same system that Iceland uses today. Noble families, however, as a rule adopted a family name, which could refer to a presumed or real forefather (e.g. Earl Birger Magnusson Folkunge) or to the family's coat of arms (e.g. King Gustav Eriksson Vasa). In many surviving noble family names, such as Silfversparre ("silver sparrow") or Stiernhielm ("star helmet"), the spelling is obsolete, but because it applies to a name, it remains unchanged.
Later on, people from the Scandinavian middle classes, particularly artisans and town dwellers, adopted names in a similar fashion to that of the nobility. Family names such as the Swedish Bergman, Holmberg, Lindgren, Sandström and Åkerlund were quite frequent and remain common today. The same is true for similar Norwegian and Danish names.
An even more important driver of change was the administrative need for each individual to have a "stable" name, one that followed the person from birth to death. In the old days, people would be known by their given name, patronymic, and the farm they lived on; this last element would change whenever a person took a new job, bought a new farm, or otherwise moved. (This is part of the origin, in this part of the world, of the custom of women changing their names upon marriage: originally it indicated, essentially, a change of address, and there are numerous examples of men doing the same thing.) The many patronymic names may derive from the fact that people who moved from the country to the cities also gave up the name of the farm they came from. As a worker, you went by your father's name, and this name passed on to the next generation as a family name. Einar Gerhardsen, the Norwegian prime minister, used a true patronym, as his father was named Gerhard Olsen (Gerhard, the son of Ola). Gerhardsen passed his own patronym on to his children as a family name. This has been common in many working-class families. The tradition of keeping the farm name as a family name grew stronger during the first half of the 20th century in Norway.
These names often indicated the place of residence of the family. For this reason, Denmark and Norway have a very high incidence of names derived from those of farms, many signified by suffixes like -bø, -rud, -stuen, -løkken or, most predominantly, -gaard (the modern spelling is gård in Danish and gard in Norwegian, but, as in Sweden, the archaic spelling persists in surnames). The best-known example of this kind of surname is probably Kierkegaard (originally meaning the farm located by the church, or possibly churchyard in the sense of cemetery, although that is unlikely in this context; with kierke, the name actually contains two archaic spellings), but many others could be cited. Since the names in question are derived from the original owners' domiciles, the possession of such a name is no longer an indicator of kinship with others who bear it.
In many cases, names were taken from the surrounding natural landscape. In Norway, for instance, there is an abundance of surnames based on coastal geography, with suffixes like -strand, -øy, -holm, -vik, -fjord or -nes. A family name such as Dahlgren is derived from "dahl", meaning valley, and "gren", meaning branch; similarly, Upvall means "upper valley". The particulars depend on the Scandinavian country, language, and dialect.
Note: the following list does not take regional spelling variations into account.
If the name has no suffix, it may or may not have a feminine version. Sometimes it has the ending changed (such as the addition of -a). In the Czech Republic and Slovakia, suffixless names, such as those of German origin, are feminized by adding -ová (for example, Schusterová), but this is not done in neighboring Poland, where feminine versions are only used for -ski (-ska) names (this includes -cki and -dzki, which are in fact -ski preceded by a t or d respectively).
The family names are usually nouns (Svoboda, Král, Růžička), adjectives (Novotný, Černý, Veselý), or third-person past-tense verbs (Pospíšil), or they mean nothing in particular (Dvořák, Beneš). There are also a few names with a more complicated origin that are actually complete sentences (Skočdopole, Hrejsemnou or Vítámvás). The most common Czech family name is Novák / Nováková.
Most Russian family names originated from patronymics, that is, from the father's name, usually formed by adding the adjective suffix -ov(a) or -ev(a). Contemporary patronymics, however, have the substantive suffix -ich for masculine and the adjective suffix -na for feminine.
For example, the proverbial triad of most common Russian surnames follows:
Feminine forms of these surnames have the ending -a:
Such a pattern of name formation is not unique to Russia or even to the Eastern and Southern Slavs in general; quite common are also names derived from professions, places of origin, and personal characteristics, with various suffixes (e.g. -in(a) and -sky (-skaia)).
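The suffix pattern just described can be summarized as a simple rule: masculine endings in -ov, -ev and -in take a final -a in the feminine, while adjectival -sky becomes -skaia. A rough sketch follows (the function name and transliteration choices are our own, and real usage has many exceptions, such as the indeclinable names mentioned later):

```python
def feminine_form(surname):
    """Return the feminine form of a common Russian surname,
    per the simplified suffix patterns described above."""
    if surname.endswith(("ov", "ev", "in")):
        return surname + "a"           # Ivanov -> Ivanova
    if surname.endswith("sky"):
        return surname[:-3] + "skaia"  # adjectival names
    return surname  # indeclinable names (e.g. Sedykh) are unchanged

print(feminine_form("Ivanov"))      # Ivanova
print(feminine_form("Dostoevsky"))  # Dostoevskaia
print(feminine_form("Sedykh"))      # Sedykh
```

This is only an illustration of the morphology, not a complete transliteration scheme; -skaia, for instance, is also romanized -skaya.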
Places of origin:
A considerable number of “artificial” names exists, for example, those given to seminary graduates; such names were based on Great Feasts of the Orthodox Church or Christian virtues.
Great Orthodox Feasts:
Many freed serfs were given surnames after those of their former owners. For example, a serf of the Demidov family might be named Demidovsky, which translates roughly as "belonging to Demidov" or "one of Demidov's bunch".
Grammatically, Russian family names follow the same rules as other nouns or adjectives (names ending with -oy, -aya are grammatically adjectives), with exceptions: some names do not change in different cases and have the same form in both genders (for example, Sedykh, Lata).
In Poland and most of the former Polish-Lithuanian Commonwealth, surnames first appeared during the late Middle Ages. They initially denoted the differences between various people living in the same town or village and bearing the same name. The conventions were similar to those of English surnames, using occupations, patronymic descent, geographic origins, or personal characteristics. Thus, early surnames indicating occupation include Karczmarz ("innkeeper"), Kowal ("blacksmith"), and Bednarczyk ("young cooper"), while those indicating patronymic descent include Szczepaniak ("son of Szczepan"), Józefowicz ("son of Józef"), and Kaźmirkiewicz ("son of Kazimierz"). Similarly, early surnames like Mazur ("the one from Mazury") indicated geographic origin, while ones like Nowak ("the new one"), Biały ("the pale one"), and Wielgus ("the big one") indicated personal characteristics.
In the early 16th century (the Polish Renaissance), toponymic names became common, especially among the nobility. Initially, surnames took the form "[first name] de ("z", "of") [location]". Later, most surnames were changed to adjective forms, e.g. Jakub Wiślicki ("James of Wiślica") and Zbigniew Oleśnicki ("Zbigniew of Oleśnica"), with the masculine suffixes -ski, -cki, -dzki and -icz, or the respective feminine suffixes -ska, -cka, -dzka and -icz, in the east of the Polish-Lithuanian Commonwealth. Names formed this way are grammatically adjectives, and therefore change their form depending on gender; for example, Jan Kowalski and Maria Kowalska collectively use the plural Kowalscy.
Names with the masculine suffixes -ski, -cki, and -dzki, and the corresponding feminine suffixes -ska, -cka, and -dzka, became associated with noble origin. Many people from lower classes successively changed their surnames to fit this pattern, producing many Kowalskis, Bednarskis, Kaczmarskis and so on. Today, although most Polish speakers are unaware of the noble associations of the -ski, -cki, -dzki and -icz endings, such names still carry a certain prestige.
A separate class of surnames derive from the names of noble clans. These are used either as separate names or the first part of a double-barrelled name. Thus, persons named Jan Nieczuja and Krzysztof Nieczuja-Machocki might be related. Similarly, after World War I and World War II, many members of Polish underground organizations adopted their war-time pseudonyms as the first part of their surnames. Edward Rydz thus became Marshal of Poland Edward Śmigły-Rydz and Zdzisław Jeziorański became Jan Nowak-Jeziorański.
A noted exception to the patronymic rule was the family name of the prominent 19th-century Serbian family Babadudić, derived from Baba (literally, "granny") Duda.
In some cases the family name was derived from a profession (e.g. blacksmith: "Kovač" → "Kovačević").
In general, family names in all of these countries follow this pattern, with some being typically Serbian, some typically Croat, and others common throughout the whole linguistic region.
Children usually inherit their father's family name. In an older naming convention, common in Serbia until the mid-19th century, a person's name consisted of three distinct parts: the given name, a patronymic derived from the father's personal name, and the family name, as seen, for example, in the name of the language reformer Vuk Stefanović Karadžić.
Official family names do not have distinct male or female forms. A somewhat archaic, unofficial practice of adding suffixes to family names to form a female version exists, with -eva implying "daughter of" or "female descendant of", and -ka implying "wife of" or "married to".
Bosniak Muslim names follow the same formation pattern but are usually derived from proper names of Islamic origin, often combining archaic Islamic or feudal Turkish titles i.e. Mulaomerović, Šabanadžović, Hadžihafisbegović etc.
Also related to Turkish influence is the prefix Hadži-, found in some family names. Regardless of religion, this prefix derives from the honorary title that a distinguished ancestor earned by making a pilgrimage to either Christian or Islamic holy places; Hadžibegić is a Bosniak Muslim example.
In Croatia, where tribal affiliations persisted longer (in Lika, Herzegovina, etc.), the original family name came to signify practically all the people living in one area or on the holdings of the nobles. The Šubić family owned land around the Zrin River in the central Croatian region of Banovina; the surname became Šubić Zrinski, its most famous bearer being Nikola Šubić Zrinski.
Among the Bulgarians, another South Slavic people, the typical surname suffix is "-ov" (Ivanov, Kovachev), although other popular suffixes also exist.
In the Republic of Macedonia, the most popular suffix today is "-ski".
However, some suffixes are more uniquely characteristic of Ukrainian and Belarusian names, especially: -chuk (Western Ukraine) and -enko (all other parts of Ukraine), both meaning "son of"; -ko ("little", masculine); -ka ("little", feminine); -shyn; and -uk. See, for example, the Ukrainian presidents Leonid Kravchuk and Viktor Yushchenko, the Belarusian president Alexander Lukashenko, or the former Soviet diplomat Andrei Gromyko.
In Burundi and Rwanda, most if not all surnames contain God, for example Hakizimana ("God cures"), Nshimirimana ("I thank God") or Havyarimana/Habyarimana ("God gives birth"). But not all surnames end with the suffix -imana; Irakoze is one such exception (roughly meaning "thank God", though it is hard to translate precisely into English or perhaps any other language).
The paternal grandfather's name is often used if there is a requirement to identify a person further, for example, in school registration. Also, different cultures and tribes use as the family's name the father's or grandfather's given name. For example, some Oromos use Warra Ali to mean families of Ali, where Ali, is either the householder, a father or grandfather.
In Ethiopia, the customs surrounding the bestowal and use of family names are as varied and complex as the cultures found there. There are so many cultures, nations and tribes that there is currently no single formula that demonstrates a clear pattern of Ethiopian family names. In general, however, Ethiopians use their father's name as a surname in most instances where identification is necessary, sometimes employing both the father's and the grandfather's names together where exigency dictates.
Jewish names have historically varied, encompassing throughout the centuries several different traditions.
The majority of Kurds do not hold Kurdish names, because such names have been banned in the countries where they primarily live (namely Iran, Turkey and Syria). Kurds in these countries tend to hold Turkish, Persian or Arabic names, in most cases imposed by the ruling governments; others hold Arabic names as a result of the influence of Islam and Arab culture.
Kurds holding authentic Kurdish names are generally found in Diaspora or in Iraqi Kurdistan where Kurds are relatively free. Traditionally, Kurdish family names are inherited from the tribes of which the individual or families are members. However, some families inherit the names of the regions they are from.
Common affixes of authentic Kurdish names are "i" and "zade".
Some common Kurdish last names, which are also the names of their respective tribes, include Baradost, Barzani, Berwari, Berzinji, Chelki, Diri, Doski, Jaf, Mutki, Rami, Rekani, Rozaki, Sindi, Tovi and Zebari. Other names include Akreyi, Alan, Amedi, Botani, Hewrami, Kurdistani (or Kordestani), Mukri, and Serhati.
Traditionally, Kurdish women did not inherit a man's last name. Although the practice is still not followed by many Kurds, it is more commonly found today.
Tibetan people are often named at birth by their parents or by a local Buddhist lama, or the parents may request a name from the Dalai Lama. They are often given two names, but they do not have a family name; therefore, all members of a family will have different names, e.g. Sonam Gyatso, Lhamo Drolma, Tenzin Choden, etc. They may change their name during their lifetime if advised to by a Buddhist lama, for example if a different name removes obstacles. Tibetans who enter monastic life take a name from their ordination lama, which will be a combination of the lama's name and a new name for them.
Most surnames of Adyge origin fall into six types:
"Shogen" comes from the Christian era and "Yefendi" and "Mole" come from the Muslim era.
In Circassian culture, women do not change their surnames even when they marry. By keeping her surname and passing it on to the next generation, a woman enables her children to distinguish relatives on the maternal side and to respect her family as well as their father's.
On the other hand, children cannot marry someone who bears the same surname as they do no matter how distantly related.
In the Circassian tradition, the formula for surnames is patterned to mean “daughter of ...”
Abkhaz families follow similar naming patterns reflecting the common roots of the Abkhazian, Adygean and Wubikh peoples.
Circassian family names cannot be derived from women's names or from the names of female ancestors.
It's not just a simple case of heat stroke. To understand what heat does to a bacterium, we need to know about its structure. A bacterium is a single-celled organism. Think of it like a studio apartment, one room containing all the things a person needs to live: food, water, air. The walls of the apartment enclose the electrical wiring and gas pipes that deliver energy, along with the sewage pipes that get rid of waste products. In contrast to the size of this single-celled organism, even an animal as small as a mouse would be like a huge city with thousands of buildings and extensive infrastructure to keep it "alive."
In more scientific terms, a bacterium is made up of the cell envelope, the cytoplasm and, often, the flagella. Besides holding in the cytoplasm, the cell envelope is where energy-generating functions like photosynthesis and respiration happen. The cytoplasm refers to everything inside the cell envelope, a mixture of water, ribosomes, chromosomes, nutrients and enzymes -- all the things that keep the bacterium alive and kicking. Enzymes are especially important because they cause the chemical reactions that make up the cell's metabolism. The flagella are tiny appendages on the outside of the bacterium that help it move around, attach to surfaces or fend off enemies.
Now that we've set the scene and introduced the characters, here comes the dramatic climax. When the temperature gets hot enough, the enzymes in the bacterium are denatured, meaning they change shape. This change renders them useless, and they're no longer able to do their work. The cell simply ceases to function.
Heat can also damage the bacterium's cell envelope. Proteins and fatty acids making up the envelope lose their shape, weakening it. At the same time, fluid inside the cell expands as the temperature rises, increasing the internal pressure. The expanding fluid pushes against the weakened wall and causes it to burst, spilling out the guts of the bacterium.
Thermoduric bacteria are more heat-resistant and harder to kill. In terms of our apartment analogy, thermoduric bacteria have reinforced walls, double-paned windows, insulated pipes and an emergency supply of water and food. These heat-defying bacteria have to be kept under control by refrigeration, which keeps them from multiplying. [source: Todar] |
A term generally used to describe an adverse reaction by the body to any substance ingested by the affected individual. Strictly, allergy refers to any reactions incited by an abnormal immunological response to an ALLERGEN, and susceptibility has a strong genetic component. Most allergic disorders are linked to ATOPY, the predisposition to generate the allergic antibody immunoglobulin E (IgE) to common environmental agents (see ANTIBODIES; IMMUNOGLOBULINS). Because IgE is able to sensitise MAST CELLS (which play a part in inflammatory and allergic reactions) anywhere in the body, atopic individuals often have disease in more than one organ. Since the allergic disorder HAY FEVER was first described in 1819, allergy has moved from being a rare condition to one afflicting almost one in two people in the developed world, with substances such as grass and tree pollen, house-dust mite, bee and wasp venom, egg and milk proteins, peanuts, antibiotics, and other airborne environmental pollutants among the triggering factors. Increasing prevalence of allergic reactions has been noticeable during the past two decades, especially in young people with western lifestyles.
A severe or life-threatening reaction is often termed ANAPHYLAXIS. Many immune mechanisms also contribute to allergic disorders; however, adverse reactions to drugs, diagnostic materials and other substances often do not involve recognised immunological mechanisms and the term ‘hypersensitivity’ is preferable. (See also IMMUNITY.)
Adverse reactions may manifest themselves as URTICARIA, wheezing or difficulty in breathing owing to spasm of the BRONCHIOLES, swollen joints, nausea, vomiting and headaches. Severe allergic reactions may cause a person to go into SHOCK. Although the symptoms of an allergic reaction can usually be controlled, treatment of the underlying condition is more problematic: hence, the best current approach is for susceptible individuals to find out what they are allergic to and avoid those agents. For some people, such as those sensitive to insect venom, IMMUNOTHERAPY or desensitisation is often effective. If avoidance measures are unsuccessful and desensitisation ineffective, the inflammatory reactions can be controlled with CORTICOSTEROIDS, while the troublesome symptoms can be treated with ANTIHISTAMINE DRUGS and SYMPATHOMIMETICS. All three types of drugs may be needed to treat severe allergic reactions.
One interesting hypothesis is that reduced exposure to infective agents, such as bacteria, in infancy may provoke the development of allergy in later life.
Predicted developments in tackling allergic disorders, according to Professor Stephen Holgate writing in the British Medical Journal (22 January 2000) include:
Identification of the principal environmental factors underlying the increase in incidence, to enable preventive measures to be planned.
Safe and effective immunotherapy to prevent and reverse allergic disease.
Treatments that target the protein reactions activated by antigens.
Identification of how IgE is produced in the body, and thus of possible ways to inhibit this process.
Identification of genes affecting people’s susceptibility to allergic disease. |
Scientists at the Senckenberg Research Institute in Frankfurt have described the world’s oldest fossil sea turtle known to date.
The fossilized reptile is at least 120 million years old – which makes it about 25 million years older than the previously known oldest specimen. The almost completely preserved skeleton from the Cretaceous, with a length of nearly 2 meters, shows all of the characteristic traits of modern marine turtles. The study was published today in the scientific journal “PaleoBios.”
“Santanachelys gaffneyi is the oldest known sea turtle” – this sentence from the online encyclopedia Wikipedia is no longer up to date. “We have described a fossil sea turtle from Colombia that is about 25 million years older,” says a delighted Dr. Edwin Cadena, a scholar of the Alexander von Humboldt Foundation at the Senckenberg Research Institute. Cadena made the unusual discovery together with his American colleague J. Parham of California State University, Fullerton.
“The turtle we describe as Desmatochelys padillai sp. originates from Cretaceous sediments and is at least 120 million years old,” says Cadena. Sea turtles descended from terrestrial and freshwater turtles that arose approximately 230 million years ago; during the Cretaceous period, they split into land and sea dwellers. Fossil evidence from this time period is very sparse, however, and the exact time of the split is difficult to verify. “This lends a special importance to every fossil discovery that can contribute to clarifying the phylogeny of the sea turtles,” explains the turtle expert from Colombia.
The fossilized turtle shells and bones come from two sites near the community of Villa de Leyva in Colombia. The fossilized remains of the ancient reptiles were discovered and collected by hobby paleontologist Mary Luz Parra and her brothers Juan and Freddy Parra in the year 2007. Since then, they have been stored in the collections of the “Centro de Investigaciones Paleontológicas” in Villa Leyva and the “University of California Museum of Paleontology.”
Cadena and his colleague examined the almost complete skeleton, four additional skulls and two partially preserved shells, and they placed the fossils in the turtle group Chelonioidea, based on various morphological characteristics. Turtles in this group dwell in tropical and subtropical oceans; among their representatives are the modern Hawksbill Turtle and the Green Sea Turtle of turtle soup fame.
“Based on the animals‘ morphology and the sediments they were found in, we are certain that we are indeed dealing with the oldest known fossil sea turtle,” adds Cadena in summary. |
Many would claim that the Han dynasty was one of the most powerful of all of China’s dynasties, not only in terms of economic growth and border expansion but also because of its trendsetting technology. The Han dynasty inventions were some of the greatest contributions not only to Chinese society but to the world at large. Some of the lesser-known innovations developed during this period include the wheelbarrow and the seismograph, and stirrups are also believed to have first been used during this time.
There are several major Han dynasty inventions famously credited to this period, and they have in one way or another shaped the way our world works today. The first, and perhaps the most celebrated, is the invention of the papermaking process. Although historians note that the oldest surviving piece of wrapping paper can be traced back to China in the 2nd century BCE, the process of making paper was invented during the Han period; the eunuch Cai Lun is credited with the invention. His process used mulberry bark as the main ingredient.
The invention of cast-iron tools can also be credited to the people of the Han dynasty, during which cast-iron processing was perfected. Furnaces able to convert iron ore into pig iron, and pig iron into cast iron, were operational in China during the Han period. This resulted in vastly improved weapons, tools and domestic wares. More importantly, it paved the way for new agricultural tools, which in turn helped increase the empire’s agricultural tax revenue.
The Han dynasty, also credited with inventing the loom, set the tone for silk weaving during that era. It was because of this invention that silk could be marketed as an expensive article, and the Han science of weaving paved the way for the creation of the Silk Road, which meant increased revenue for the people of the Han dynasty.
True enough, the people of the Han dynasty pioneered some of the most important advancements in human history. The Han dynasty inventions are solid proof of the intellectual prowess of the Han dynasty people; another testimony to the power that was the Han dynasty. |
The essential feature of attention-deficit/hyperactivity disorder (ADD/ADHD) is a persistent pattern of inattention and/or hyperactivity-impulsivity that interferes with functioning or development exhibited by deficits in performance at school, home, and in social relationships. ADHD begins in childhood. The symptoms of inattention and/or hyperactivity need to manifest themselves in a manner and degree which is inconsistent with the child’s current developmental level. That is, the child’s behavior is significantly more inattentive or hyperactive than that of his or her peers of a similar age.
Several symptoms must be present before age 12. This age requirement supports ADHD/ADD as a neurodevelopmental disorder. In the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), symptoms were required before age 7. Now the age of 12 is seen as an acceptable criterion because it is often difficult for adults (e.g., parents) to look retrospectively and establish a precise age of onset for a child. Indeed, adult recall of childhood symptoms tends to be unreliable. Thus, the DSM-5 has added some leeway to the age cut-off.
A person can present with predominant inattention, predominant hyperactivity-impulsivity, or a combination of the two. To meet the criteria for any of these ADHD presentations, a person must exhibit at least 6 symptoms from the appropriate categories below.
Symptoms of Inattention:
- Often fails to give close attention to details or makes careless mistakes in schoolwork, work, or other activities
- Often has difficulty sustaining attention in tasks or play activities
- Often does not seem to listen when spoken to directly
- Often does not follow through on instructions and fails to finish schoolwork, chores, or duties in the workplace (not due to oppositional behavior or failure to understand instructions)
- Often has difficulty organizing tasks and activities
- Often avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort (such as schoolwork or homework)
- Often loses things necessary for tasks or activities (e.g., toys, school assignments, pencils, books, or tools)
- Is often easily distracted by extraneous stimuli
- Is often forgetful in daily activities–even those the person performs regularly (e.g., a routine appointment)
Symptoms of Hyperactivity/Impulsivity:
Hyperactivity
- Often fidgets with hands or feet or squirms in seat
- Often leaves seat in classroom or in other situations in which remaining seated is expected
- Often runs about or climbs excessively in situations in which it is inappropriate (in adolescents or adults, may be limited to subjective feelings of restlessness)
- Often has difficulty playing or engaging in leisure activities quietly
- Is often “on the go” or often acts as if “driven by a motor”
- Often talks excessively
- Often blurts out answers before questions have been completed
- Often has difficulty awaiting turn
- Often interrupts or intrudes on others (e.g., butts into conversations or games)
Symptoms must have persisted for at least 6 months. Some of these symptoms need to have been present as a child, at 12 years old or younger. The symptoms also must exist in at least two separate settings (for example, at school and at home). The symptoms should be creating significant impairment in social, academic or occupational functioning or relationships.
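The threshold logic above (at least 6 symptoms in a category, at least 6 months' duration, symptoms in at least two settings, and onset by age 12) can be sketched as a simple check. This is only a hypothetical illustration of the counting rules as summarized here, with names and parameters of our own choosing; it is in no way a diagnostic tool:

```python
def meets_symptom_criteria(inattention_count, hyperactivity_count,
                           months_persisted, settings, onset_age):
    """Simplified illustration of the DSM-5 threshold logic described
    above: >= 6 symptoms in at least one category, >= 6 months'
    persistence, presence in >= 2 settings, and onset by age 12."""
    has_category = inattention_count >= 6 or hyperactivity_count >= 6
    return (has_category
            and months_persisted >= 6
            and len(settings) >= 2
            and onset_age <= 12)

print(meets_symptom_criteria(7, 2, 8, {"school", "home"}, 9))  # True
print(meets_symptom_criteria(5, 5, 8, {"school", "home"}, 9))  # False: neither category reaches 6
```

Note that the actual DSM-5 criteria include further qualifiers (e.g. clinically significant impairment and exclusion of other disorders) that a count alone cannot capture.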
These criteria have been updated for DSM-5. See the next page for diagnostic codes and related resources for ADHD.
Psych Central. (2014). Attention Deficit Hyperactivity Disorder (ADHD) Symptoms. Psych Central. Retrieved on December 21, 2014, from http://psychcentral.com/disorders/attention-deficit-hyperactivity-disorder-adhd-symptoms/
Symptom criteria summarized from:
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders, fifth edition. Washington, DC: American Psychiatric Association.
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders, fourth edition. Washington, DC: American Psychiatric Association.
Last reviewed: By John M. Grohol, Psy.D. on 18 Jun 2014
Published on PsychCentral.com. All rights reserved. |
An estimated 2.4 billion people worldwide are still living without basic sanitation. What does this mean? It is the devastating fact that for those 2.4 billion people – of which an estimated 300 million are in Africa alone – there are no measures in place to safely dispose of their waste, including human excreta. Poorly enforced waste-disposal measures mean that human faecal matter easily contaminates clean water sources and soil. Municipal sewage (a mix of water and excrement) should go to a safe disposal point, but often does not, leading to water-borne diseases that affect mostly the poor.
One example of these waterborne illnesses is cholera, which remains rife in Africa to this day. Cholera is an extreme diarrhoeal infection that leads to dehydration, and ultimately death if left untreated. If the disease runs its course, it can kill within hours. The truly scary thing about cholera is that up to 80% of those infected show no real symptoms, meaning that they continue to infect other water sources. As if part of some twisted plot, the main treatment for cholera is rehydration – a therapy only possible if there is a clean water supply¹.
Typhoid fever also ravages poorer countries and regions, especially in Northern and Western Africa. It is another disease that springs from contaminated water, and even from food fertilized with human excreta (and, as you may have read above, soil easily becomes contaminated where sanitation is poor). Symptoms include fever, loss of appetite and insomnia. If it goes untreated it can lead to bradycardia (a slow heartbeat) and pneumonia.
Another sanitation problem facing Africa is, if that is possible, even larger than disease. In Uganda, the education system is facing a sanitation crisis: in some schools there is only one toilet per 700 pupils, and there are no separate toilets for girls and boys. Many girls drop out of school as a direct result of this². The loop then becomes endless: poor sanitation leads to illiteracy, illiteracy to poverty, and poverty back to poor sanitation.
So what solutions exist?
Naturally, governments can pump more money into sanitation, but the problem is that conventional options are not very cost effective at all. These methods also tend to exhaust a high amount of energy, which is another crisis all on its own. Therefore, low cost options that can easily be maintained are the best.
Among these are pour-flush toilets, which use less water and whose water seal also prevents odours and flies. There is also the more scientifically advanced method of degrading faecal matter using the larvae of the black soldier fly³.
The real solution should not only concern itself with health and hygiene though, but also with a human being’s right to dignity and privacy. Open defecation is a very real problem in countries like Somalia and Eritrea, and no person should have to be forced into such degrading circumstances, especially when it is the lack of the basic rights that led them to it. |
Each page of this booklet lists a natural resource, gives examples of how it is used in our daily life, and classifies it as renewable or non-renewable. These pages could also be used as informative posters.
A poster explains simple electrical circuits; this is followed by a clear, detailed reading comprehension exercise; practice work for drawing in missing wires to complete the circuits concludes the lesson.
Instructions for making a family tree, including a form; a reading comprehension for reading a family tree; hints for mapping genetic traits and four great suggestions for take-home family tree projects.
Introduce the children to a community helper and the way that person helps the community. Sort a variety of pictures to match each occupation. (Use alone or with other Community Helpers in this series) |
Dalton stated in his atomic theory that atoms can be neither created nor destroyed. From a chemical perspective, all mass is contained in atoms; therefore, if atoms cannot be created or destroyed, neither can mass. This is known as the principle of conservation of mass. With this principle, important information about a chemical reaction can be obtained. For example, consider the combustion of magnesium metal:
In a laboratory we carefully measure 10g of the metal, set it on fire in the presence of air (combust it) and then carefully weigh the ash. We find that the ash weighs 16.6g. Using the principle of conservation of mass we conclude that 16.6g - 10g = 6.6g of oxygen reacted with the magnesium. A similar calculation can be performed on any chemical reaction.
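This bookkeeping can be sketched in a few lines of Python (an illustrative example, not part of the original lesson):

```python
# Conservation of mass: atoms are neither created nor destroyed, so the ash
# (magnesium oxide) must contain all of the original magnesium plus the
# oxygen that combined with it during combustion.

mass_magnesium = 10.0   # g of Mg metal weighed out
mass_ash = 16.6         # g of ash (MgO) weighed after burning

# The mass gained by the solid is exactly the mass of oxygen that reacted.
mass_oxygen = mass_ash - mass_magnesium
print(f"Oxygen that reacted: {mass_oxygen:.1f} g")  # Oxygen that reacted: 6.6 g
```

The same subtraction works for any reaction where the masses of all but one reactant or product are known.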
Lesson Plan: African American Literature in Art
In this lesson plan, students compare art and literature by examining a contemporary painting by Glenn Ligon and the essay by James Baldwin that inspired it. Students then write an essay about a personal experience that relates to the theme of being an “outsider.”
Suggested Grade Level: 9-10
Estimated Time: Two class periods
- Identify the expressive qualities of mood and emotion in pictorial expression
- Draw comparisons between literature and art
- Gain a deeper understanding of social history by comparing it to personal experience
- Stranger in the Village #13
- James Baldwin’s “Stranger in the Village” from Notes of a Native Son, 1955 (or later edition)
- Have students examine and discuss Glenn Ligon’s Stranger in the Village #13. Help them grasp the size of the original work by imagining how much space it would fill on a classroom wall. Ask students to imagine how the size of the work would affect them if they were standing before it. Make a list of words that describe the mood of the painting (dark, somber, scary, angry).
- Explain to students that the text in the painting comes from an essay called “Stranger in the Village” written by James Baldwin in 1953. Introduce Baldwin and have the students read his essay. Concentrate the discussion on the passages in Ligon’s painting by asking the following questions:
- What did Baldwin encounter during his stay in Switzerland?
- Why did he go back there when the villagers made him uncomfortable?
- How did Baldwin’s experience in Switzerland help him to understand the relationship between blacks and whites in America?
- Have students return to Ligon’s painting and discuss it in more detail by asking:
- Why do you think the artist made the text difficult to read?
- How does the mood of the painting compare to that of Baldwin’s account?
- What message about the current situation of African Americans in the United States might Ligon be conveying in this work?
Encourage students to think of a time in their own lives when they felt like outsiders. Ask them to write a three-page journal entry about this experience, how they felt, how they responded to the situation, and whether or not the experience changed them. Encourage students to use descriptive language.
Base students’ evaluation on their written work and participation in discussion.
Ask each student to incorporate elements of their journal entry into a drawing or painting. Encourage them to arrange words and sentences in any manner on the page while considering how color can enhance the meaning of the text.
Illinois Learning Standards
English Language Arts: 1-3
Fine Arts: 25 |
Earth is the third planet from the Sun and the only object in the Universe known to harbor life. According to radiometric dating and other sources of evidence, Earth formed about 4.5 billion years ago. Earth’s gravity interacts with other objects in space, especially the Sun and the Moon, Earth’s only natural satellite. Earth revolves around the Sun in 365.26 days, a period known as an Earth year. During this time, Earth rotates about its axis about 366.26 times.
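The two figures in that paragraph are linked: each orbit of the Sun "uses up" one rotation relative to the Sun, so Earth spins once more relative to the distant stars than the number of solar days in a year. A quick check (an illustrative sketch, not from the source):

```python
# Relative to the distant stars (sidereal rotations), Earth completes one more
# rotation per year than the number of solar days, because one apparent
# rotation is absorbed by the orbit around the Sun itself.

solar_days_per_year = 365.26                     # length of the year in solar days
sidereal_rotations = solar_days_per_year + 1     # rotations relative to the stars
print(sidereal_rotations)                        # 366.26
```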
Earth’s axis of rotation is tilted, producing seasonal variations on the planet’s surface. The gravitational interaction between the Earth and Moon causes ocean tides, stabilizes the Earth’s orientation on its axis, and gradually slows its rotation. Earth is the densest planet in the Solar System and the largest of the four terrestrial planets.
Earth’s lithosphere is divided into several rigid tectonic plates that migrate across the surface over periods of many millions of years. About 71% of Earth’s surface is covered with water, mostly by oceans. The remaining 29% is land consisting of continents and islands that together have many lakes, rivers and other sources of water that contribute to the hydrosphere. The majority of Earth’s polar regions are covered in ice, including the Antarctic ice sheet and the sea ice of the Arctic ice pack. Earth’s interior remains active with a solid iron inner core, a liquid outer core that generates the Earth’s magnetic field, and a convecting mantle that drives plate tectonics.
The First Billion
Within the first billion years of Earth’s history, life appeared in the oceans and began to affect the Earth’s atmosphere and surface, leading to the proliferation of aerobic and anaerobic organisms. Some geological evidence indicates that life may have arisen as much as 4.1 billion years ago. Since then, the combination of Earth’s distance from the Sun, physical properties, and geological history have allowed life to evolve and thrive. In the history of the Earth, biodiversity has gone through long periods of expansion, occasionally punctuated by mass extinction events. Over 99% of all species that ever lived on Earth are extinct. Estimates of the number of species on Earth today vary widely; most species have not been described. Over 7.4 billion humans live on Earth and depend on its biosphere and natural resources for their survival. Humans have developed diverse societies and cultures; politically, the world has about 200 sovereign states. |
1. What is sexual orientation?
Sexual orientation is a term frequently used to describe a person’s emotional, sexual and romantic attraction to another human being. Individuals who are attracted to people of the opposite sex have a sexual orientation described as heterosexual. When an individual is attracted to someone of the same sex, we describe his or her sexual orientation as homosexual. Individuals who have a homosexual orientation are commonly called gay (used of both men and women) or lesbian (women only). Sexual orientation can be understood to fall on a continuum, and individuals who are attracted to people of both the same and the opposite sex have a sexual orientation known as bisexual. Sexual orientation is more than just sexual behaviour, as it also includes a person’s sense of identity and their feelings. This means that someone can identify as gay, lesbian or bisexual without actually engaging in any sexual behaviour with the identified group.
2. What causes Homosexuality or Bisexuality?
At present there is no consensus in the scientific community about what causes an individual to develop a heterosexual, homosexual or bisexual orientation. No scientific findings have yet established that a person’s sexual orientation is the result of any specific factor or factors, yet plenty of research has examined the cultural, social, developmental, genetic and hormonal influences on sexual orientation. Although some researchers believe that both nature and nurture play important and complex roles in the development of sexual orientation, it is widely accepted within the scientific community that most people have little or no choice about their own sexual orientation. Homosexuality was once believed to be the result of damaged psychological development or disturbed family dynamics; however, it is now widely recognised that such assumptions were based on prejudice and fabrication. Some researchers are currently searching for biological aetiologies for homosexuality. One research paper from the Salk Institute reported that, in a series of autopsies, men on average had a larger proportion of nerve cells in the INAH3 region of the hypothalamus than women did, and that in reportedly gay men this region bore a closer resemblance to the average woman’s than to the average man’s. Other studies have illustrated sexual dimorphism in humans and the gender shifting of sexually dimorphic traits in homosexuals and bisexuals. Further research has suggested that hormone levels in foetuses have a large effect on whether those foetuses grow into heterosexual, homosexual or bisexual adults. According to some researchers, female foetuses exposed to higher levels of testosterone in the womb are more likely to be gay.
Researchers have also suggested that male foetuses exposed to less testosterone or more androgen or both are more likely to be gay or gender shifted.
However, at present there are no replicated scientific studies that support a biological aetiology for homosexuality. To date there are also no replicated scientific studies suggesting that family dynamics or psychosocial factors play a role in the development of a sexual orientation.
3. How can we help?
From birth, most of us are raised to think of ourselves as fitting into a certain mould. Our culture and our families may teach us that we are “supposed” to be attracted to people of a different sex, and that boys and girls are supposed to look, act and feel certain ways. Few of us were told we might fall in love with someone of the same sex. That’s why so many people face fear, worry or confusion when facing such truths. Opening up to the possibility that you may be lesbian, gay, bisexual, or even just questioning this possibility, may be difficult and include fear of ridicule, guilt, anxiety, depression or shame. However, it also means opening up to the idea that you’re on a path that’s your own. For many, coming to terms with their own sexual orientation and sharing this with others can be terrifying and may result in withdrawal, isolation, avoidance and other unhelpful behaviours. This is where we come in. We offer a supportive, non-judgmental, and empathic therapeutic approach that allows you to explore and process your own sexual orientation at a pace that feels appropriate to you. We recognise that each person’s experience of “coming out” is unique and that this process can result in anxiety, however, we also believe that this process can provide an opportunity for emotional growth and personal empowerment. |
Teaching the Relevance of Game-Based Learning to Preschool and Primary Teachers
The first quote is what drew me into the article: “Game-based learning has been found to promote a positive attitude towards learning and develop memory skills, along with its potential to connect learners and help them build self-constructed learning”. In higher education we often talk about students’ views regarding learning. I teach a couple of 100-level entry courses, and I see students coming into class at the start of the term as if learning is a struggle and they would rather be anywhere else. Views of what learning is, especially negative views, can and do affect the way students perform in the classroom and on the tasks assigned to them. It is easy to confirm this in our culture: just stand on a street corner and ask people their views on math and how they would feel if they had to take a math class to keep their jobs. Negative views would abound.
Cojocariu and Boghian quote Prensky (2001) when discussing how games meet the needs of learning in part through enjoyment, passionate involvement, structure, motivation, ego gratification, adrenaline, creativity, social interaction, and emotion. At first the ego gratification threw me off, but as I thought about it more, students do receive an ego boost when they do well on an assignment, just as they do from reaching a new level in a game. Unfortunately, the educational view currently held by some teachers is that learning should be individual rather than social. But when you look at the working world students will enter, there are few, if any, truly independent jobs. People are constantly in contact with others, for a variety of reasons, in order to complete their tasks.
This article focused on the importance of helping preschool and primary teachers to understand how to use games as a learning tool within the classroom. Socially these games would encourage students to interact in a positive way to resolve a variety of different challenges. The authors gave 8 different stages that would be completed by the teachers.
- Title and aim of the game
- Presenting materials
- Explaining rules and giving examples
- Demonstrating the game by having a trial game
- Performing the game
- Complicating the game through adding versions (new rules, etc)
- Ending the game and Evaluating it
These steps give teachers a clear guide on how to incorporate a game successfully into a classroom so that students are engaged. The last step is key: through evaluation, the teacher can assess the students’ learning and judge whether the game is worth playing again or needs modification.
By giving constraints to the game in the beginning this allows children who are not familiar with that game genre to catch up to their peers without feeling left behind. The teacher would need to be aware of how each student is doing before complicating the game. When a game becomes too complicated then players tend to tune out or refuse to play.
The authors recognized the disadvantages of game play in the classroom – time, teacher control, classroom interactions – but balanced them against the benefits: students developing several skills at the same time, social connection, self-confidence, learning becoming pleasant and fun, discovery, and more. I agree that one of the challenges is that it is not easy to assess learning through games without adding an extra task for each student. We know from standardized testing that both state and federal governments want specific data supporting the idea that students are learning and retaining information. The authors advocate standardization and regulation of the use of games in teaching, learning and evaluating.
My question, then, is how we as forward-thinking instructors can help our unfortunately test-happy government realize that this is a better way to assess than bubble tests.
Cojocariu, V. & Boghian, I. (2014). Teaching the Relevance of Game-Based Learning to Preschool and Primary Teachers. Procedia – Social and Behavioral Sciences, 142, 640–646. Retrieved on Feb. 25, 2016. http://creativecommons.org/licenses/by-nc-nd/3.0/
Alterations to our diets as a result of climate change will cause more than half a million extra deaths by 2050, according to research published today in the medical journal, The Lancet.
While the number of people dying from malnutrition around the world is expected to fall in coming decades, scientists say the benefits will be partly counteracted by the impacts of climate change on the availability of fruit, vegetables and staple crops, such as wheat and sorghum.
The first-of-a-kind study by scientists at the UK’s Oxford Martin School predicts 529,000 extra deaths among adults by 2050 from health conditions linked to lack of food or poor diet, compared to a world without climate change. This is the strongest evidence yet that climate change could have damaging consequences for food production and health worldwide, say the researchers.
But the effects won’t be the same all over the world. The map below from the study shows how the projected impacts of rising temperatures and changing rainfall patterns on food and diet play out in 155 regions of the world, assuming emissions continue to rise at the pace they are now.
Quantity and quality
To make the map above, the scientists combined the latest understanding of how changes to a person’s diet can affect their risk of a stroke, heart disease or cancer with model projections for how changes in temperature and rainfall are likely to affect yields of groundnuts, maize, potatoes, rice, wheat, sorghum and soybeans, as well as how red meat, fruit and vegetables are produced, consumed and traded around the world.
The novel part of the study is that it takes into account how changes in our diets – as once-staple foods can no longer be produced and are replaced by others – will affect the nutrition we get, on top of looking at how the total amount of food is likely to change in the coming decades.
The light green shading in the map above indicates the handful of regions in which the study predicts climate change will have a positive effect, saving more lives by 2050 than are expected with changes to farming practices and global trade alone.
It’s a complex picture but, overall, it is bad news. In most regions, climate change-induced food shortages and changes to diet lead to more deaths in 2050 (red shading) compared to a world that wasn’t warming.
Low- and middle-income countries in the Western Pacific and Southeast Asia will be hardest hit, according to the study, with more than a hundred avoidable deaths per million people by 2050. The highest death rates are expected to occur in China, Vietnam, Greece, South Korea and India.
On average, the new study suggests climate change will reduce the estimated amount of food available to each person by 3.2% in 2050, or 99 kcal per person per day. People are expected to consume 4% less fruit and vegetables each and 0.7% less red meat than they otherwise would.
Eating fewer fruit and vegetables has the biggest effect on health in the study, far outweighing the benefits from eating less red meat and reducing obesity. Globally, twice as many deaths in 2050 are expected to stem from eating less fruit and vegetables than from malnutrition, say the authors.
Cutting emissions would see fewer deaths from diet-related illnesses by 2050, the paper explains. If emissions are curbed steeply, such that global temperature doesn’t exceed 2C above pre-industrial levels, as laid out in the Paris Agreement, the number of extra deaths would fall by 71%, to around 153,000 more than in a world with no climate change.
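The paper's headline figures are easy to sanity-check: reducing the 529,000 projected extra deaths by 71% leaves roughly 153,000. An illustrative check:

```python
# Verify the study's arithmetic: a 71% reduction in the projected 529,000
# climate-related extra deaths should leave about 153,000.

extra_deaths_baseline = 529_000   # projected extra deaths by 2050, current emissions path
reduction_under_2C = 0.71         # reduction if warming is held below 2C

extra_deaths_2C = extra_deaths_baseline * (1 - reduction_under_2C)
print(round(extra_deaths_2C))     # 153410 — i.e. "around 153,000"
```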
Today’s research is part of a huge effort globally to understand the impacts of climate change on human nutrition, says Prof Andy Challinor, researcher in climate impacts on food security at the University of Leeds, who wasn’t involved in the new study. He tells Carbon Brief:
But there are limitations to how accurately scientists can predict food availability in future, says Challinor. One reason is that as well as changes in the average annual temperature and rainfall, which today’s study looks at, the highs and lows from one year to the next are expected to get more dramatic, too, potentially making global food markets more unpredictable.
The new study also doesn’t include potentially major disruptions to food production caused by heatwaves, droughts, floods and other types of extreme weather. Challinor tells Carbon Brief:
The authors of today’s study also acknowledge that their estimates don’t account for the impacts of climate change on fisheries and aquaculture, changes to the nutritional value of the food itself or direct impacts of heat or water stress on livestock.
How climate change will affect the way we consume food, and the broader consequences for human health, is a multi-faceted problem that no single study can pin a precise number on. But today’s study highlights the potential scale of the problem, which it says is likely to dwarf any other known climate-related impacts on human health.
Main image: Fresh fruit and vegetables. Credit: stocker1970/Shutterstock.
Source: Springmann, M. et al. (2016) Global and regional health effects of future food production under climate change: a modelling study. The Lancet. DOI: 10.1016/S0140-6736(15)01156-3
Main Curriculum Tie:
Background For Teachers:
Ways to Gain/Maintain Attention (Primacy):
Lesson Segment 1: How does the result change when the value of the variable is changed?
Q. What does it mean to substitute something? Tell the story of making punch and substituting salt for sugar. Q. How would the substitution change the outcome? We can substitute values in algebraic expressions. Let’s do some mental substitution. Read each expression and the value to substitute. Have the students stand as soon as they know the value of the expression after the substitution.
Q. In the mental problems we just did, would the value of the expression have been the same if we had changed the substitute?
Evaluating Expressions Bingo
2m + 3 when:
2(5 – X) when:
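The substitution idea in Segment 1 can also be shown in code (a hypothetical teacher aid, not part of the original lesson), using the first Bingo expression:

```python
# Substituting different values for the variable changes the value of the
# expression, just as substituting salt for sugar changes the punch.

def evaluate(m):
    """Evaluate the Bingo expression 2m + 3 for a given value of m."""
    return 2 * m + 3

for m in [1, 4, 10]:
    print(f"2m + 3 when m = {m}: {evaluate(m)}")
# 2m + 3 when m = 1: 5
# 2m + 3 when m = 4: 11
# 2m + 3 when m = 10: 23
```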
Lesson Segment 2: What words indicate operations? How can mathematical symbols represent verbal expressions?
“As I have been writing these algebraic expressions in our Bingo Game for you to copy, I have been saying these expressions with words. We need to be able to read math expressions using words, and we need to be able to write math expressions when we read the words.”
Write the words for and read aloud the math expressions in the Bingo game again, this time read each using a variety of words which indicate the appropriate operation. For example the second expression could be read:
Ask students to write the words and the expression on the back of the Bingo worksheet. Then, have students work together with their team to think of different ways to write and read the expression, 3b.
Do Four-Corners where one person from each team goes to a designated corner to circle up and work with others to generate a list of words that indicate an operation.
Corner 1: person 1, make a list of words that mean “add”.
After five minutes in the corner generating a list of words, each person brings their list back to their team to share the words. Have students list their words on the journal page (attached).
Using the word lists and discussion, help students complete the 12 items below the lists on the journal page.
Lesson Segment 3: Practice using a game
If needed, two players can play against one. Students should write each expression and its matching words from the cards when they get to put them in their “Expert” pile.
Assign any additional practice or application from text as needed.
I am lost when it comes to these math problems. Can you please explain in detail how to find the solutions to these problems?
1) Find the complement of
8 gal 1 qt
- 3 gal 2 qt
3) Label the triangle as equilateral, isosceles, or scalene.
4) Write 30% as a fraction
5) Write as a percent.
6) 11% of what number is 77
7) What is 30% of 200?
8) The electricity costs of a business increased from $10,000 one year to $13,000 the next. To the nearest whole percent, what was the rate of increase?
9) Write the ratio of 9 nickels to 3 quarters in simplest form.
10) Garrett washes 3 cars in 72 minutes. How many minutes does it take Garrett to wash one car?
11) Determine whether the pair of fractions is proportional:
12) Write a proportion that is equivalent to the statement: If 3 gallons of gasoline cost $5.37 then 8 gallons will cost $14.32.
13) Is the object a line or a line segment? Why?
14) Is the object a line or a line segment? Why?
See the attachment.
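The attachment with worked solutions is not reproduced here, but a few of the fully specified problems can be checked directly (an illustrative sketch):

```python
# 7) What is 30% of 200? Multiply the base by the percent as a decimal.
part = 0.30 * 200
print(part)                  # 60.0

# 8) Electricity costs rose from $10,000 to $13,000.
#    Rate of increase = amount of change / original amount.
rate = (13000 - 10000) / 10000
print(f"{rate:.0%}")         # 30%

# 10) Garrett washes 3 cars in 72 minutes; a unit rate is total / count.
minutes_per_car = 72 / 3
print(minutes_per_car)       # 24.0 minutes per car
```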
The solution provides answers to simple math problems. |
Most sharks have an unusual combination of biological characteristics: slow growth and delayed maturation; long reproductive cycles; low fecundity; and long life spans. These factors determine the low reproductive potential of many shark species.
Slow growth and delayed maturation: Some species of sharks, including some of the commercially important species, are extremely slow growing. The picked dogfish (Squalus acanthias) has been estimated by Jones and Geen (1977) to reach maturity at about 25 years. The sandbar shark (Carcharhinus plumbeus), the most economically important species along the southeastern coast of the United States, has been estimated to reach maturity from 15-16 years (Sminkey and Musick 1995) to about 30 years (Casey and Natanson 1992).
Long reproductive cycles: Sharks produce young that hatch or are born fully developed, and that are relatively large at hatching or birth. The energy requirements of producing large, fully developed young result in great energy demands on the female, and in reproductive cycles and gestation periods that are unusually long for fishes. Both the reproductive cycle and the gestation period usually last one or two years in most species of sharks, reflecting the time it takes a female to store enough energy to produce large eggs and to nurture her large young through development (Castro 1996). The reproductive cycle is how often the shark reproduces, and it is usually one or two years long. The gestation period is the time of embryonic development from fertilization to birth, and is frequently one or two years long. The reproductive cycle and the gestation period may run concurrently or consecutively. For example, in the picked dogfish, the reproductive cycle and gestation run concurrently and both last two years. A female carries both developing oocytes in the ovary and developing embryos in the uteri concurrently for two years. Shortly after parturition, it mates and ovulates again, and the process begins anew. In this case, both ovulation and parturition are biennial. In most of the large, commercially important carcharhinid sharks, the reproductive cycle and the gestation period run consecutively. These sharks have biennial reproductive cycles (Clark and von Schmidt 1965) with one-year gestation cycles. They accumulate the energy reserves necessary to produce large eggs for about a year, then mate, ovulate, gestate for one year, and give birth. For example, after giving birth in the spring, a blacktip shark (Carcharhinus limbatus) enters a "resting" stage where it stores energy and nourishes its large oocytes for one year.
After mating and ovulation, it begins a year gestation period, giving birth in the spring of the second year after its previous parturition (Castro 1996). Thus, these sharks also reproduce biennially. Some of the hammerhead sharks (Sphyrna) and the sharpnose sharks (Rhizoprionodon) reproduce annually (Castro 1989, Castro and Wourms 1993). Even longer cycles of three and four years have been proposed for other species without adducing any evidence.
Low fecundity: The small size of their broods, or "litters", is another factor contributing to the low reproductive potential of sharks. The number of young or "pups" per brood usually ranges from two to a dozen, although some species may produce dozens of young per brood. Most of the commercially important carcharhinid sharks usually produce less than a dozen young per brood. For example, the sandbar shark averages 8 young per brood, while the blacktip averages 4 per brood (Castro 1996). An exception, among the targeted species, is the blue shark for which broods of over 30 young have often been reported.
Long life spans: Although many species of sharks are known to be long-lived (Pratt and Casey 1990), the reproductive life span of sharks is unknown. Because of the long time before maturation and the long reproductive cycles, it appears that a given female may produce only a few broods in its lifetime (Sminkey and Musick 1995).
Many of the commercially important species use shallow coastal waters, known as "nurseries", to give birth to their young, and where the young spend their first months or years (Castro 1993b). The mating grounds are often close to the nurseries, and thus adults of both sexes congregate close to shore in large numbers. These areas are highly attractive to fishermen, because of their nearness to shore and the high concentration of sharks. Most of the commercially important species (e.g. the genera Carcharhinus, Sphyrna, Rhizoprionodon, Negaprion) have shallow water nurseries (Castro 1987, 1993b). These sharks are very vulnerable to modern fishing operations, and are easily overfished.
There is no evidence of any compensatory mechanisms by female sharks that would increase brood size or shorten the ovarian and gestation cycles in response to overfishing. It is highly unlikely that such mechanisms could evolve rapidly enough to compensate for the increase in mortality. Even if they could, brood size would be limited by the maximum number of young a female can carry, and ovulatory and gestation cycles are limited by complex metabolic processes. The long ovarian cycles and long gestation periods probably reflect the minimal times required by each species to acquire and transfer the necessary energy to large ova and young.