Summary: Reading to infants and young children is associated with stronger vocabulary skills at age three. The findings reveal that parents who read to children with genetic predispositions to learning and attention disorders help improve those children's language acquisition skills.
Shared reading between parents and very young children, including infants, is associated with stronger vocabulary skills for nearly all children by age 3, say physicians at Rutgers Robert Wood Johnson Medical School. According to research published in The Journal of Pediatrics, this holds true even for children who may be genetically vulnerable to barriers in learning, attention and behavior development.
“In a supportive environment, children who may be genetically at risk do just as well as their peers,” said Manuel Jimenez, a developmental pediatrician and assistant professor of pediatrics and family medicine and community health at the medical school, who is lead author of the study.
The children in the study were tested as part of the Fragile Families and Child Wellbeing Study, which examined the development of children born to unmarried parents who were at greater risk of living in poverty.
Jimenez explained that the study looked at how children respond differently to shared reading based on genetic characteristics. Using data from the Fragile Families and Child Wellbeing Study, which has followed the development of nearly 5,000 children in large U.S. cities born between 1998 and 2000, the team assessed the difference in vocabulary skill development based on genetic differences in two neurotransmitter systems that have implications in learning development, memory and impulse control.
The study found that shared reading with children at 1 year old was associated with higher vocabulary scores on a standardized assessment at age 3, in line with previous published studies. Children with genetic variations that put them at-risk fared just as well as their peers on the assessment when shared reading was conducted at age 1. However, at-risk children who were not exposed to shared reading did poorly on the same vocabulary assessment.
“We found that reading with very young children can be quite powerful and really makes a difference in a child’s development, particularly with children who may be vulnerable to developmental delays,” said Jimenez.
According to Jimenez, scientists are just starting to understand how genes influence complex behaviors and how science can be applied to improving lives through patient care. The research underscores the importance of a positive environment with close parental contact and its direct correlation to favorable child development, even when a child may be at-risk for learning and behavioral challenges.
Daniel Notterman, a pediatrician, professor of molecular biology and co-investigator of the Fragile Families study at Princeton University, clinical professor of pediatrics at Robert Wood Johnson Medical School, and co-author of the study, concurs. “Biological measures give us another way to identify children for which interventions, in this case reading, may have the greatest benefit,” he said. “Although there is already evidence of the positive effects of shared reading, this study provides additional verification and a more quantitative picture of the link between a child’s environment, biological makeup, and development.”
Both researchers emphasized that parents need to spend time reading with their children every day, as findings from the study provide support for literacy promotion at an early age.
“The bottom line is that children respond positively to shared reading at an early age and doing so is one way to improve language skills for all children,” said Jimenez.
Source: Rutgers. Media contact: Jennifer Forbes, Rutgers.
Shared Reading at Age 1 Year and Later Vocabulary: A Gene–Environment Study
Objective To assess the extent to which associations between shared reading at age 1 year and child vocabulary at age 3 years differ based on the presence of sensitizing alleles in the dopaminergic and serotonergic neurotransmitter systems.
Study design We conducted a secondary analysis of data from a national urban birth cohort using mother reports in conjunction with child assessments and salivary genetic data. Child vocabulary was assessed using the Peabody Picture Vocabulary Test. The primary exposure was mother-reported shared reading. We used data on gene variants that may affect the function of the dopaminergic and serotonergic systems. We examined associations between shared reading and Peabody Picture Vocabulary Test score using multiple linear regression. We then included interaction terms between shared reading and the presence of sensitizing alleles for each polymorphism to assess potential moderator effects adjusting for multiple comparisons.
Results Of the 1772 children included (56% black, 52% male), 31% of their mothers reported reading with their child daily. Daily shared reading was strongly associated with child Peabody Picture Vocabulary Test scores in unadjusted (B = 7.9; 95% CI, 4.3-11.4) and adjusted models (B = 5.3; 95% CI, 2.0-8.6). The association differed based on the presence of sensitizing alleles in the dopamine receptor 2 and serotonin transporter genes.
Conclusions Among urban children, shared reading at age 1 year was associated with greater vocabulary at age 3 years. Although children with sensitizing alleles on the dopamine receptor 2 and serotonin transporter genes were at greater risk when not read to, they fared as well as children without these alleles when shared reading occurred.
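The moderation analysis described in the abstract (a regression with a reading-by-genotype interaction term) can be sketched as below. This is purely illustrative: the variable names, the simulated data, and the use of pandas and statsmodels are assumptions, not the study's actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented data standing in for the study's variables (not the Fragile Families data).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "daily_reading": rng.integers(0, 2, n),        # mother-reported daily shared reading at age 1
    "sensitizing_allele": rng.integers(0, 2, n),   # presence of a sensitizing allele (0/1)
})
# Simulate a vocabulary score in which not being read to hurts allele carriers most.
df["ppvt"] = (100 + 5 * df["daily_reading"]
              - 6 * df["sensitizing_allele"] * (1 - df["daily_reading"])
              + rng.normal(0, 10, n))

# Main-effect model: shared reading predicting the age-3 PPVT score.
main = smf.ols("ppvt ~ daily_reading", data=df).fit()

# Moderation model: the interaction term asks whether the reading association
# differs by genotype (the gene-environment question in the abstract).
moderated = smf.ols("ppvt ~ daily_reading * sensitizing_allele", data=df).fit()

print(main.params["daily_reading"])
print(moderated.params["daily_reading:sensitizing_allele"])
```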
A geographic information system (GIS) is a computer system designed to capture, store, manipulate, analyze, manage, and present all types of geographical data. GIS can be thought of as a system that provides spatial data entry, management, retrieval, analysis, and visualization functions. GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data in maps, and present the results of all these operations. GIS is the science underlying geographic concepts, applications, and systems.
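To make the "interactive query" and spatial-analysis functions above concrete, here is a minimal sketch using the shapely library; the boundary, the points, and their names are invented for the example.

```python
from shapely.geometry import Point, Polygon

# A made-up city boundary expressed as a polygon of (x, y) coordinates.
city_boundary = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])

# Two made-up points of interest.
library = Point(3, 4)
airport = Point(15, 2)

# A basic GIS-style query: which features fall inside the boundary?
for name, feature in [("library", library), ("airport", airport)]:
    print(name, "inside city:", city_boundary.contains(feature))

# A basic spatial analysis: distance from the library to the boundary edge.
print("distance to boundary:", library.distance(city_boundary.exterior))
```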
SVM Classification, in GeneLinker™, is the process of learning to separate samples into different classes. For example, a set of samples may be taken from biopsies of two different tumor types, and their gene expression levels measured. GeneLinker™ can use this data to learn to distinguish the two tumor types so that later, GeneLinker™ can diagnose the tumor types of new biopsies. Because making predictions on unknown samples is often used as a means of testing the SVM classifier, we use the terms training samples and test samples to distinguish between the samples whose classes GeneLinker™ already knows (training) and the samples whose classes it will predict (test).
Types of Learning
SVM Classification is an example of Supervised Learning. Known class labels help indicate whether the system is performing correctly or not. This information can indicate a desired response, validate the accuracy of the system, or help the system learn to behave correctly. The known class labels can be thought of as supervising the learning process; the term is not meant to imply that you have some sort of interventionist role.
Clustering is an example of Unsupervised Learning, in which class labels are not presented to the system; instead, the system tries to discover the natural classes in a dataset. Clustering often fails to find known classes because the distinction between the classes can be obscured by the large number of features (genes) that are uncorrelated with the classes. A step in SVM classification involves identifying genes that are intimately connected to the known classes. This is called feature selection or feature extraction. Feature selection and SVM classification together have a use even when prediction of unknown samples is not necessary: they can be used to identify key genes that are involved in whatever processes distinguish the classes.
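GeneLinker™ itself is a GUI application, so the sketch below is not its code; it only illustrates the feature-selection idea above using scikit-learn on a synthetic expression matrix (the data, the number of genes kept, and the use of an F-test are all assumptions made for the example).

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)

# Synthetic expression data: 60 samples x 1000 genes, two classes.
X = rng.normal(size=(60, 1000))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :20] += 1.5          # only the first 20 genes actually differ between classes

# Keep the genes most strongly associated with the known class labels.
selector = SelectKBest(f_classif, k=20).fit(X, y)
key_genes = selector.get_support(indices=True)
print("selected gene indices:", key_genes)
```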
Manual Feature Selection
Manual feature selection is useful if you already have some hypothesis about which genes are key to a process. You can test that hypothesis by the three steps below (a code sketch follows the list):
i. constructing a gene list of those genes,
ii. running an SVM classifier using those genes as features, and
iii. displaying a plot which shows whether the data can be successfully classified.
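A minimal scikit-learn analogue of those three steps follows; again, this is not GeneLinker™'s own code, and the synthetic data and hand-picked gene list are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 1000))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :20] += 1.5          # synthetic data: the first 20 genes carry the class signal

# (i) A hand-picked gene list encoding the hypothesis (column indices into X).
gene_list = [0, 3, 7, 12, 19]

# (ii) An SVM classifier that uses only those genes as features.
svm = SVC(kernel="linear")

# (iii) Check whether the data can be successfully classified with that gene list.
scores = cross_val_score(svm, X[:, gene_list], y, cv=5)
print("cross-validated accuracy per fold:", scores.round(2))
```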
Feature Selection Using the SLAM™ Technology
Genes that are frequently observed in associations are often good features for classification with artificial neural networks or support vector machines. In GeneLinker™, SVM classification is done using a committee of support vector machines (SVMs). SVMs find an optimal separating hyperplane between data points of different classes in a (possibly) high-dimensional space. The actual support vectors are the training points that lie closest to, and thereby define, the decision boundary between the classes. More details on support vector machines are available in Tutorial 9. A committee of SVMs is used because an individual SVM may not be robust. That is, it may not make good predictions on new data (test data) despite excellent performance on the training data. Such a learner is referred to as being overtrained.
Each learner (ANN or SVM) is by default trained on a different 90% of the training data and then validated on the remaining 10%. (These fractions can be set differently in the Create ANN Classifier dialog or in the Create SVM Classifier dialog by varying the number of learners.) This technique mitigates the risk of overtraining at the level of the individual learner.
The committee architecture further enhances robustness by combining the component predictions in a voting scheme. Finally, by examining a chart of the voting results, difficult-to-classify samples can often be identified for re-examination or further study.
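A rough sketch of that committee scheme with scikit-learn: each SVM is fitted on a different random 90% of the training data, checked on its 10% holdout, and the committee prediction is a majority vote. The committee size, the synthetic data, and the linear kernel are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(2)
X_train = rng.normal(size=(60, 50))
y_train = np.array([0] * 30 + [1] * 30)
X_train[y_train == 1, :5] += 1.5            # synthetic training data with a weak signal
X_test = rng.normal(size=(10, 50))
X_test[:5, :5] += 1.5                        # synthetic "unknown" samples to classify

committee = []
splitter = ShuffleSplit(n_splits=10, train_size=0.9, random_state=0)
for train_idx, holdout_idx in splitter.split(X_train):
    svm = SVC(kernel="linear").fit(X_train[train_idx], y_train[train_idx])
    # The 10% holdout can be used to validate the individual learner.
    print("holdout accuracy:", svm.score(X_train[holdout_idx], y_train[holdout_idx]))
    committee.append(svm)

# Majority vote across the committee; close votes flag hard-to-classify samples.
votes = np.array([svm.predict(X_test) for svm in committee])
majority = (votes.mean(axis=0) > 0.5).astype(int)
print("committee predictions:", majority)
print("vote share for class 1:", votes.mean(axis=0).round(2))
```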
An Introduction to Classification: Feature Selection
Association Mining Using SLAM™
Creating an ANN Classifier
Classify New Data
Definition and Examples of Narratives in Writing. The definition of narrative is a piece of writing that tells a story, and it is one of the four classical rhetorical modes, or ways that writers use to present information.
The others include exposition, which explains and analyzes an idea or set of ideas; argument, which attempts to persuade the reader to a particular point of view; and description, a written form of a visual experience. Key Takeaways: Narrative Definition. A narrative is a form of writing that tells a story. Narratives can be essays, fairy tales, movies, and jokes. Narratives have five elements: plot, setting, character, conflict, and theme. Writers use narrator style, chronological order, a point of view, and other strategies to tell a story. Telling stories is an ancient art that began long before humans invented writing.
People tell stories when they gossip, tell jokes, or reminisce about the past. Written forms of narration include most types of writing: personal essays, fairy tales, short stories, novels, plays, screenplays, autobiographies, histories; even news stories have a narrative.
Narratives may be a sequence of events in chronological order or an imagined tale with flashbacks or multiple timelines. Narrative Elements. Every narrative has five elements that define and shape the narrative: plot, setting, character, conflict, and theme. These elements are rarely stated explicitly in a story; they are revealed to the audience in subtle or not-so-subtle ways, but the writer needs to understand the elements to construct her story. Here’s an example from “The Martian,” a novel by Andy Weir that was made into a movie: the plot is the thread of events that occur in a story, and Weir’s plot is about a man who gets accidentally stranded on Mars. Setting Tone and Mood.
Writers define space and time in a descriptive narrative, and how they choose to define these properties can convey a certain mood or tone. For example, chronological choices can affect the reader’s impressions. Past events always take place in strict chronological order, but writers can choose to mix that up, show events out of sequence, or show the same event several times as experienced by different characters or described by different narrators. In Gabriel García Márquez’s novel “Chronicle of a Death Foretold,” the same few hours are experienced in sequence from the viewpoint of several different characters.
García Márquez uses that structure to illustrate the peculiar, almost magical inability of the townspeople to stop a murder they know is going to happen. The choice of a narrator is another way that writers set the tone of a piece. Is the narrator someone who experienced the events as a participant, or one who witnessed the events but wasn’t an active participant? Is that narrator an omniscient, undefined person who knows everything about the plot, including its ending, or is he confused and unsure about the events underway? Is the narrator a reliable witness or lying to themselves or the reader? In the novel “Gone Girl,” by Gillian Flynn, the reader is forced to continually revise her opinion as to the honesty and guilt of the husband Nick and his missing wife. In “Lolita” by Vladimir Nabokov, the narrator is Humbert Humbert, a pedophile who continually justifies his actions despite the damage that Nabokov shows he is doing. Point of View. Establishing a point of view for a narrator allows the writer to filter the events through a particular character.
The most common point of view in fiction is the omniscient (all-knowing) narrator, who has access to all the thoughts and experiences of each of her characters. Omniscient narrators are almost always written in the third person and do not usually have a role in the storyline.
Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects. Appendix A: Research Supporting Key Elements of the Standards; Glossary of Key Terms.
Reading
One of the key requirements of the Common Core State Standards for Reading is that all students must be able to comprehend texts of steadily increasing complexity as they progress through school. By the time they complete the core, students must be able to read and comprehend independently and proficiently the kinds of complex texts commonly found in college and careers. The first part of this section makes a research-based case for why the complexity of what students read matters. In brief, while reading demands in college, workforce training programs, and life in general have held steady or increased over the last half century, K-12 texts have actually declined in sophistication, and relatively little attention has been paid to students' ability to read complex texts independently.
These conditions have left a serious gap between many high school seniors' reading ability and the reading requirements they will face after graduation. The second part of this section addresses how text complexity can be measured and made a regular part of instruction. It introduces a three-part model that blends qualitative and quantitative measures of text complexity with reader and task considerations. The section concludes with three annotated examples showing how the model can be used to assess the complexity of various kinds of texts appropriate for different grade levels.
Why Text Complexity Matters
In 2006, ACT, Inc., released a report called Reading Between the Lines that showed which skills differentiated those students who equaled or exceeded the benchmark score (21 out of 36) in the reading section of the ACT college admissions test from those who did not. Prior ACT research had shown that students achieving the benchmark score or better in reading (which only about half, 51 percent, of the roughly half million test takers in the 2004-2005 academic year had done) had a high probability (75 percent chance) of earning a C or better in an introductory, credit-bearing course in history or psychology (two common reading-intensive courses taken by first-year college students) and a 50 percent chance of earning a B or better in such a course. Surprisingly, what chiefly distinguished the performance of those students who had earned the benchmark score or better from those who had not was not their relative ability in making inferences while reading or answering questions related to particular cognitive processes, such as determining main ideas or determining the meaning of words and phrases in context. Instead, the clearest differentiator was students' ability to answer questions associated with complex texts. Students scoring below benchmark performed no better than chance (25 percent correct) on four-option multiple-choice questions pertaining to passages rated as complex on a three-point qualitative rubric described in the report. These findings held for male and female students, students from all racial/ethnic groups, and students from families with widely varying incomes.
The most important implication of this study was that a pedagogy focused only on higher-order or critical thinking was insufficient to ensure that students were ready for college and careers: what students could read, in terms of its complexity, was at least as important as what they could do with what they read. The ACT report is one part of an extensive body of research attesting to the importance of text complexity in reading achievement. The clear, alarming picture that emerges from the evidence, briefly summarized below,2 is that while the reading demands of college, workforce training programs, and citizenship have held steady or risen over the past fifty years or so, K-12 texts have, if anything, become less demanding. This finding is the impetus behind the Standards' strong emphasis on increasing text complexity as a key requirement in reading.
College, Careers, and Citizenship: Steady or Increasing Complexity of Texts and Tasks
Research indicates that the demands that college, careers, and citizenship place on readers have either held steady or increased over roughly the last fifty years.
The difficulty of college textbooks, as measured by Lexile scores, has not decreased in any block of time since 1962; it has, in fact, increased over that period (Stenner, Koons, & Swartz, in press). The word difficulty of every scientific journal and magazine from 1930 to 1990 examined by Hayes and Ward (1992) had actually increased, which is important in part because, as a 2005 College Board study (Milewski, Johnson, Glazer, & Kubota, 2005) found, college professors assign more readings from periodicals than do high school teachers. Workplace reading, measured in Lexiles, exceeds grade 12 complexity significantly, although there is considerable variation (Stenner, Koons, & Swartz, in press). The vocabulary difficulty of newspapers remained stable over the 1963-1991 period Hayes and his colleagues (Hayes, Wolfer, & Wolfe, 1996) studied. Furthermore, students in college are expected to read complex texts with substantially greater independence (i.e., much less scaffolding) than are students in typical K-12 programs.
College students are held more accountable for what they read on their own than are most students in high school (Erickson & Strommer, 1991; Pritchard, Wilson, & Yamnitz, 2007). College instructors assign readings, not necessarily explicated in class, for which students might be held accountable through exams, papers, presentations, or class discussions. Students in high school, by contrast, are rarely held accountable for what they are able to read independently (Heller & Greenleaf, 2007).
1. In the 2008-2009 academic year, only 53 percent of students achieved the reading benchmark score or higher; the increase from 2004-2005 was not statistically significant. See ACT, Inc. (2009).
2. Much of the summary found in the next two sections is heavily influenced by Marilyn Jager Adams's painstaking review of the relevant literature. See Adams (2009).
This discrepancy in task demand, coupled with, as we see below, a vast gap in text complexity, may help explain why only about half of the students taking the ACT Test in the 2004-2005 academic year could meet the benchmark score in reading (which also was the case in 2008-2009, the most recent year for which data are available) and why so few students in general are prepared for postsecondary reading (ACT, Inc., 2006, 2009).
K-12 Schooling: Declining Complexity of Texts and a Lack of Reading of Complex Texts Independently
Despite steady or growing reading demands from various sources, K-12 reading texts have actually trended downward in difficulty in the last half century. Jeanne Chall and her colleagues (Chall, Conard, & Harris, 1977) found a thirteen-year decrease from 1963 to 1975 in the difficulty of grade 1, grade 6, and (especially) grade 11 texts. Extending the period to 1991, Hayes, Wolfer, and Wolfe (1996) found precipitous declines (relative to the period from 1946 to 1962) in average sentence length and vocabulary level in reading textbooks for a variety of grades.
Hayes also found that while science books were more difficult to read than literature books, only books for Advanced Placement (AP) classes had vocabulary levels equivalent to those of even newspapers of the time (Hayes & Ward, 1992). Carrying the research closer to the present day, Gary L. Williamson (2006) found a 350L (Lexile) gap between the difficulty of end-of-high-school and college texts, a gap equivalent to 1.5 standard deviations and more than the Lexile difference between grade 4 and grade 8 texts on the National Assessment of Educational Progress (NAEP). Although legitimate questions can be raised about the tools used to measure text complexity (e.g., Mesmer, 2008), what is relevant in these numbers is the general, steady decline (over time, across grades, and substantiated by several sources) in the difficulty and likely also the sophistication of content of the texts students have been asked to read in school since 1962.
There is also evidence that current standards, curriculum, and instructional practice have not done enough to foster the independent reading of complex texts so crucial for college and career readiness, particularly in the case of informational texts. K-12 students are, in general, given considerable scaffolding assistance from teachers, class discussions, and the texts themselves (in such forms as summaries, glossaries, and other text features) with reading that is already less complex overall than that typically required of students prior to 1962. What is more, students today are asked to read very little expository text: as little as 7 and 15 percent of elementary and middle school instructional reading, for example, is expository (Hoffman, Sabo, Bliss, & Hoy, 1994; Moss & Newton, 2002; Yopp & Yopp, 2006). Yet much research supports the conclusion that such text is harder for most students to read than is narrative text (Bowen & Roth, 1999; Bowen, Roth, & McGinn, 1999, 2002; Heller & Greenleaf, 2007; Shanahan & Shanahan, 2008), that students need sustained exposure to expository text to develop important reading strategies (Afflerbach, Pearson, & Paris, 2008; Kintsch, 1998, 2009; McNamara, Graesser, & Louwerse, in press; Perfetti, Landi, & Oakhill, 2005; van den Broek, Lorch, Linderholm, & Gustafson, 2001; van den Broek, Risden, & Husebye-Hartmann, 1995), and that expository text makes up the vast majority of the required reading in college and the workplace (Achieve, Inc., 2007). Worse still, what little expository reading students are asked to do is too often of the superficial variety that involves skimming and scanning for particular, discrete pieces of information; such reading is unlikely to prepare students for the cognitive demand of true understanding of complex text.
The Consequences: Too Many Students Reading at Too Low a Level
The impact that low reading achievement has on students' readiness for college, careers, and life in general is significant. To put the matter bluntly, a high school graduate who is a poor reader is a postsecondary student who must struggle mightily to succeed. The National Center for Education Statistics (NCES) (Wirt, Choy, Rooney, Provasnik, Sen, & Tobin, 2004) reports that although needing to take one or more remedial/developmental courses of any sort lowers a student's chance of eventually earning a degree or certificate, the need for remedial reading appears to be the most serious barrier to degree completion (p.
The thin atmosphere of Mars is but a straggly reminder of what it used to be, according to Nasa scientists.
In news that will give you the jitters every time you feel a stiff breeze, Nasa now has proof that most of the gas on Mars has been blown into space and escaped from the planet, leaving behind a toxic and near-vacuous atmosphere.
Mars' atmosphere is only a hundredth as dense as that of Earth, and is composed mainly of carbon dioxide. Needless to say, it would be impossible for a human to breathe on the planet, and the chance of any life still existing on its surface is extremely slim. But it wasn't always like this.
Evidence gathered by the Curiosity Rover has given further weight to the theory that Mars once had a much thicker atmosphere, and that most of it has now escaped from the planet.
The findings confirm theories from earlier missions
To confirm the theory, the rover has been using its Sample Analysis at Mars (SAM) instrument to examine isotopes of argon.
SAM found four times as much of a lighter stable isotope (argon-36) compared to a heavier one (argon-38).
This is much lower than the ratio predicted for the original solar system - and indicates Mars has lost the lighter isotope over the heavier one.
Sushil Atreya, a SAM co-investigator at the University of Michigan, Ann Arbor, said: "We found arguably the clearest and most robust signature of atmospheric loss on Mars."
Curiosity's SAM suite
The latest news was also accompanied by more information on temperature, humidity and winds on Mars.
The first systematic measurements of humidity on Mars have shown it varies greatly depending on location, whereas temperature does not.
Dust in wind patterns has also been examined using the laser-firing Chemistry and Camera (ChemCam) instrument to glean information about the chemical composition of the surface.
ChemCam Deputy Principal Investigator, Sylvestre Maurice, said: "We knew that Mars is red because of iron oxides in the dust.
"ChemCam reveals a complex chemical composition of the dust that includes hydrogen, which could be in the form of hydroxyl groups or water molecules."
Curiosity will spend the rest of April carrying out instructions beamed up in March. After this the Rover will have a break of sorts, as Mars disappears behind the Sun.
I often find that my students confuse oxygenation and ventilation, treating them as if they were the same process. In reality, they are very different. Ventilation exchanges air between the lungs and the atmosphere so that oxygen can be absorbed and carbon dioxide can be eliminated. Oxygenation is simply the addition of oxygen to the body. You must understand the difference to understand how hypoventilation causes hypoxia.
If you hyperventilate with room air, you will lower your arterial carbon dioxide content (PaCO2) significantly, but your oxygen levels won’t change much at all. On the other hand, if you breathe a high concentration of oxygen, but don’t increase or decrease your respiratory rate, your arterial oxygen content (PaO2) will greatly increase, but your PaCO2 won’t change.
Ventilation changes PaCO2. Oxygenation changes PaO2.
Why do we need to understand this? Let’s look at some common examples. Along the way we will painlessly use the Alveolar Gas Equation to explain two common scenarios:
- how hypoventilation causes hypoxia,
- why abruptly taking all supplemental oxygen away from a carbon dioxide retainer will hurt them.
How Does Hypoventilation Cause Hypoxemia?
Hypoventilation is a common cause of too little oxygen in the blood. When breathing room air, CO2 takes up space in the alveoli, leaving less room for oxygen. Let’s see how big an effect this is. The concentration of oxygen in the alveoli can be calculated using the Alveolar Gas Equation:
PAO2 = FiO2 × (PB – PH2O) – PACO2/R
- Where: PAO2 = partial pressure of oxygen in the alveoli
- FiO2 = concentration of inspired oxygen
- PB = the barometric pressure where the patient is breathing
- PH2O = the partial pressure of water in the air (usually 47 mmHg)
- PACO2 = alveolar carbon dioxide tension
- R = respiratory quotient, a constant usually assumed to be 0.8
Let’s say that our emergency room patient with a narcotic overdose, at sea level and breathing room air, has an alveolar PACO2 of 80 mmHg, or twice normal. That carbon dioxide takes up space and leaves less room for oxygen.
Using the Alveolar Gas Equation, that PAO2 calculation is:
PAO2 = 0.21 (760 – 47) – 80/0.8 = 49 mmHg
Normal PAO2 is about 100 mmHg, so this is quite hypoxic, especially since the alveolar PAO2 is always a little higher than the arterial PaO2. If it weren’t, oxygen would not flow out of the alveoli into the blood — it would stay in the alveoli.
Now let’s treat this patient with 50% oxygen and see what happens:
PAO2 = 0.5 (760 – 47) – 80/0.8 = 256 mmHg
That’s a five-fold increase in alveolar oxygen without changing ventilation at all. Putting the patient on oxygen will buy you time for treatment. If this is a quickly reversible process, such as a narcotic overdose, you may not need to intubate. However, if this is not quickly reversible, then oxygen protects brain and heart while you manually ventilate or intubate.
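Both calculations above can be reproduced with a few lines of code; this is just a calculator for the alveolar gas equation as written here, not a clinical tool.

```python
def alveolar_po2(fio2, paco2, pb=760.0, ph2o=47.0, r=0.8):
    """Alveolar gas equation: PAO2 = FiO2 * (PB - PH2O) - PACO2 / R (pressures in mmHg)."""
    return fio2 * (pb - ph2o) - paco2 / r

# Hypoventilating patient (PACO2 = 80 mmHg) on room air at sea level.
print(round(alveolar_po2(0.21, 80)))   # ~50 mmHg, matching the ~49 mmHg worked example

# Same patient, same ventilation, but breathing 50% oxygen.
print(round(alveolar_po2(0.50, 80)))   # ~256 mmHg
```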
This is also a good time to point out that a patient can have a normal oxygen saturation and even a normal arterial oxygen concentration and still be in respiratory distress or failure because ventilation and CO2 elimination are failing. In the above example our treated patient’s O2 saturation would be 100%, but with a PaCO2 of 80 mmHg, the pH would be about 7, a dangerous and potentially life-threatening respiratory acidosis. Don’t be lulled into missing a patient’s tenuous status just because the oxygen saturation looks good. Hypoventilation can eventually cause hypoxia.
The Challenge of the CO2 Retainer
Now let’s look at a CO2-retaining emphysema patient relying on hypoxic drive (which, by the way, describes only a very small minority of patients with end-stage pulmonary disease). This patient was in respiratory distress from pneumonia with an arterial PaO2 of 65 mmHg upon arrival to the hospital. The nurse placed her on 50% oxygen.
After oxygen therapy, her blood gas shows her PaO2 is now 256 (good) and her PaCO2 is now 80 (bad) and she’s getting sleepy, probably from the high CO2. The high oxygen levels have decreased this particular patient’s drive to breathe.
Seeing CO2 retention, the nurse might be tempted to take all the oxygen off this patient in order to stimulate her breathing and get her CO2 down — but that would be the wrong thing to do. Why?
As we saw in the calculation above, we’d expect the alveolar PAO2 to abruptly drop to 49 with this change. A better way to deal with this situation would be to wean the oxygen back slowly, maintaining a good oxygen level while allowing the respiratory drive to improve. Keep reminding the patient to take deep breaths. Intubation might still be needed so watch the patient carefully.
Never let the fear of CO2 retention stop you from treating a COPD patient with oxygen in an emergency. The vast majority of patients with COPD do not retain CO2. And even if the patient you happen to be treating does retain CO2, the worst-case scenario is that you relieve their hypoxia and protect their brain and heart (good) but might have to temporarily assist ventilation.
May The Force Be With You
Christine E. Whitten MD
author of Anyone Can Intubate: A Step By Step Guide, 5th Edition &
Pediatric Airway Management: A Step-by-Step Guide
This is an archive story, published in the September 1943 edition of Geographical.
All facts, figures and statistics were accurate at the time of original publication. The text has been lightly edited solely for house style reasons but otherwise remains unchanged.
In recent years the townspeople of Britain have taken considerably more interest in the moon than they did in times of peace. The power of the moon, our nearest neighbour in space, to diffuse light is, however, not its chief function, for the tides which cleanse our shores and give great ships access to many of our ports depend for their existence upon the attraction of this, our only satellite, and, moreover, observations of the moon provide navigational data for mariners on the seven seas. What sort of an object is then this moon, whose importance is so much enhanced by the black-out? The study of it, which may be compared with the geography of the earth, is usually known as selenography.
The moon’s mean distance from the earth is just over sixty times the earth’s radius, which would make it about 250,000 miles. The sun is about 380 times further from us than is the moon.
The moon has no light of her own, and merely acts as a mirror for the sun’s rays. The phases of the moon, caused by the relative positions of the earth, moon and sun, are well-known, new moon occurring when the moon and sun are both on the same side of the earth, and full moon when the earth is between the sun and the moon. The moon is said to be in syzygy when it is new or full. It follows that the crescent is first seen like a sickle in the eastern sky after sunset, moving further to the east as it gets more full, until, as a full moon, it rises about the same time as the sun sets. At last quarter the moon is high in the heavens in the morning, the crescent becoming smaller and smaller as it draws closer to the sun, finally disappearing in the effulgence of our star. The first appearance of the crescent moon is especially important to the Moslems, particularly for the months of Ramadan and Bairam; from observations made with this in mind it has been found that the new moon may be seen when about twenty-four hours old and twelve degrees from the sun. In 1910, J.K. Fotheringham, the astronomer, dealing with Julius Schmidt’s observations made in Athens, claimed that this was independent of differences in latitude.
The diameter of the moon is some 2160 miles and its mass is 1/81.53 of the earth’s mass. The mean specific gravity of the moon is about 3.4, compared with the earth as a whole 5.5 and the earth's surface 2.65.
THE ORIGIN OF THE MOON
The friction of the tides in the seas of the earth caused by the moon, may be calculated to have the effect of increasing the distance between the earth and the moon by about five feet every hundred years. Sir George Darwin calculated that the initial length of our day would be equivalent to about one-sixth of our present day, and the initial distance of the moon's centre from the earth’s centre about 8,000 miles. If we could go further back would they be united? Most cosmogonists think not, unless an extremely improbable though not impossible thing happened: namely that, during the course of evolution, the tides caused by the sun on the as yet molten earth had a period which exactly coincided with the natural free period of vibration of the mass of the earth, should this molten mass be set pulsating. This would, of course, set up resonance, giving tides sufficiently high to cause rupture.
Why, at the time when something happened to cause our sun to have a planetary system, twin bodies such as the earth and moon came out of chaos into being so close together and of sizes so nearly equal at the same time, is not yet fully understood; but that they did so seems more probable, according to present use of the available evidence, than that at some distant time the moon separated from the earth tidally. The earth may even now not be solid right to the centre. There is a large core which appears to possess the properties of a liquid in that it will not transmit transverse earthquake waves; but it does appear that the moon is now solid throughout its interior. The moon has a bulge towards the earth amounting to about one part in 1,500. This, it has been thought, was caused by tides, for which the earth was responsible, when the moon was still molten; the earth-moon distance being then some 90,000 miles. At that time, the moon became solid and the bulge remained. This bulge on the moon is too great to have been caused since the time when the moon was more than 90,000 miles distant from the earth.
THE MOON’S SURFACE
The surface structure of the moon, comparable to the geology of the earth, may be called selenology. The selenologist must carry out his studies with a telescope, camera or other optical device, whereas the instrument most usually associated with a geologist is his hammer. The results of selenology are large-scale phenomena. We learn, for instance, the relative ages of the lunar formations, such as that Aristarchus is younger than Kepler, and Kepler than Copernicus. Mr H. G. Tomkins has proposed that the dark substratum seen in various places at full moon, especially in the maria, can be considered as a foundation on which all lunar formations are grounded, in order to correlate their ages over wide areas on the moon. He further suggests that the apparent mottling of extensive areas over the lunar surface may be comparable to pumice or volcanic ash, as suggested by Schonberg and Brunn, or that it may be an efflorescence rather than such a crust as is possessed by the earth. Whether or not there are fossils in the lunar strata, or what the mineral formations may be, we have no means of discovering. In 1787 William Herschel was actively engaged in England making observations of the moon with his telescope. We learn from The Herschel Chronicle (Cambridge University Press), edited by Constance A. Lubbock, that on May 20, 1787, he wrote to Mr Ernest, one of King George IV’s Pages, reporting what he took to be volcanic eruptions on the moon.
In view of the generally held opinion that the moon is now quite solid throughout, it appears unlikely that Herschel actually did view an eruption. Even at the time there were sceptics, for Lalande in a letter to Herschel dated May 21, 1788, wrote: ‘Mt Aristarchus which is naturally very brilliant might well reflect the light of the earth in such a manner as to produce this bright appearance across the pale light of the moon’. Maybe if there were radioactive materials near the surface of the moon, and the heat could accumulate, there might be some form of volcanic activity, but this possibility may be remote and it depends on a concatenation of circumstances any one of which may be absent.
HOW, WHEN AND WHAT TO SEE
The physical features of the moon are remarkable and it would be no exaggeration to say that the moon is the most interesting of heavenly bodies for a small telescope. With a fair-sized telescope it is better to use a low power and a dark eye-piece cap rather than reduce the aperture, which affects the sharpness of the definition.
The moon is only three-quarters as bright in apogee (point of orbit most distant from earth) as in perigee (point of orbit nearest to earth), and should we not wish to see a feature which would then be in darkness, best viewing conditions are about the time of first quarter and last quarter, since the features are in greater relief, especially near the terminator (dark-light boundary), on account of their shadows, than nearer to full moon. In the northern hemisphere the most favourable viewing conditions, on account of the moon’s altitude above the horizon, are at vernal equinox for the first quarter and autumnal equinox for the last quarter, and vice versa in the southern hemisphere.
More than a hundred years ago John Russell, R.A., spent some twenty years making a careful drawing of the moon, though since that time photography has played a much more important part in this study. A catalogue of over 6,000 named lunar formations was presented by Mary A. Blagg and K. Muller to the International Astronomical Union in 1932. Since the moon is approximately a quarter of a million miles from us, on looking through a telescope which magnifies, say, 1,000 times, we should still see objects only as they would appear at a naked-eye distance of 250 miles. Thus only the most pronounced features are visible to us. At full moon contrast is lost and prominent objects such as Maginus disappear for two or three days before and after. Craters appear brighter than their surroundings. Linne shows some variation, and in the south-west portion of the moon the rays or streaks may be observed. Altogether about six-tenths of the moon’s surface may be observed from time to time, while the other four-tenths have never been observed. The extra one-tenth, beyond the half always turned towards us, is due to the apparent swaying of the moon, called the moon’s libration, which is due to the inclination of its axis to its orbit. Owing to libration we rarely see a lunar object and its shadow in the same place twice, the maximum variation amounting to over twenty degrees. Objects near the centre of the moon (approximately equidistant from the three craters, Herschel, Schroter and Triesnecker) may be seen in their true shape, but nearer the limb more and more foreshortening occurs. Objects near the limb are in profile.
Early telescopists, using low-powered instruments, imagined they had discovered extensive seas on the moon, but more perfect and higher-powered telescopes have shown these features to be vast plains, by no means level or smooth, and possibly once the beds of lunar oceans. The Sinus Iridum, bounded by great cliffs rising to peaks over 16,000 feet high, is one of the finest objects and is best viewed when the moon is eight or nine days old.
Lunar mountains and mountain ranges are much more pronounced than terrestrial ones and some attain a height of five miles. These lunar mountains may be divided roughly into two classes. The first class consists of ordinary mountain peaks, ridges, hills and chains. Possibly the most conspicuous range is the Apennines in the northern hemisphere of the moon, which rises from the Mare Imbrium. It is about 600 miles long and the highest peaks reach a height of three and a half miles. The shadows from these mountains attain a length of a hundred miles as measured with a micrometer attached to a telescope. This may be verified by measurement of photographs.
The second class is composed of features conventionally called craters. These so-called craters may be walled plains, ring plains or craters proper. Walled plains such as Albategnius, Clavius and Schiller have a diameter, approximately, of between 40 and 150 miles. They are usually surrounded by a complex succession of walls, the floor being comparatively level, usually not much lower than the outside, and the central mountain is often absent. Plato is probably the best example. The ring plains such as Kepler, Archimedes and Tycho, of diameter usually between twenty and sixty miles, form the majority of the so-called lunar craters. They are more uniform and circular than walled plains and are, more often than not, surrounded by a single mountain range. The outer slope is small and the terraced interior often steep. The comparatively level floor of the ‘crater’ is nearly always much lower than the outside; the deepest of this type is Newton with a rim 23,800 feet above the interior. Wargentin, however, which must be included in the group, has a floor which is practically level with the top of the wall. The craters on the moon which most nearly resemble terrestrial volcanic craters usually have a diameter of from four to twelve miles and a small floor with a volcanic cone. They are approximately circular with a steep outer slope. Examples are Messier, Bessel and Linne. These craters proper are usually characteristically bright, which enables them to be recognized at full moon, and is a feature which probably caused Herschel to imagine they were actually in eruption.
Of the valleys, perhaps the most notable is the Great Alpine Valley, though the deep narrow winding rill of Ariadaeus, like the bed of a dried-up stream, can be seen with the aid of a two-inch telescope. The cleft of Hyginus, which may be seen with the aid of a similar telescope, is just east of the rill of Ariadaeus and is more like a crack in the smooth surface than a river valley. In a small telescope it is like a hair, but such markings are often from fifty to a hundred miles long and up to two and a half miles in width.
Faults or closed cracks in the moon’s surface are also sometimes visible because one side is higher than the other.
Lunar rays are features peculiar to the moon. They are bright streaks which are best seen about the time of full moon (unlike other lunar features) and radiate from some of the principal craters. These rays are never above or below the general surface of the moon and traverse without a break all other features such as crater walls, valleys and ‘seas’. No complete explanation of their existence has yet been given. Possibly the finest system of rays radiates from the lunar crater Tycho, in the southern hemisphere of the moon, though some other radiant points for rays are Kepler, Messier, Timocharis, Proclus and Aristarchus. There are others. The craters Euclides and Landsberg A are surrounded by a bright patch sometimes called a nimbus.
In addition to the darkness of the ‘seas’ and the brightness of the rays and patches, and also the depth of lunar shadows, the variation of brightness and colour in different parts of the moon is most interesting to a careful observer with a small telescope. The brightness varies from place to place, Aristarchus being the brightest object on the moon and Grimaldi and Riccioli the darkest. The brightness also varies from time to time as on the floor of Plato, where it has undoubtedly something to do with the altitude of the sun. The floors of the ‘seas’ are also tinted with various colours, such as Mare Crisium grey-green, Lacus Somniorum bright grey, Palus Somnii bright yellow-brown, Mare Frigoris yellowish-green, and so on.
To enable lunar features to be picked out on a map, the map is divided into quarters by the lunar equator, which is drawn very nearly through Rhaeticus and Landsberg, and lunar longitude 0° which is drawn through the centre of Walter and the east side of Aristillus. These lines intersect near the centre of the moon’s disc in mean libration and form axes of coordinates, the unit of reference then being one-thousandth of the semi-diameter of the disc. A less accurate though sometimes convenient method is to divide these quarters numbered I to IV (NW, NE, SE and SW) further into quarters by NS and EW lines, and the 16 approximately equal areas thus formed are lettered A B C D from west to east and a b c d from south to north.
It is usual to draw the map with west to the left and east to the right, south at the top and north at the bottom, since this is the way the moon is viewed in an ordinary inverting telescope.
The principal features have names of their own. Features near (or inside) a larger one, when not separately named, are denoted by the nearest named feature with letters after. Eminences are usually given small Greek letters only after their names, and depressions capital Roman letters only. Double letters are used to indicate small features near larger ones. The larger feature is indicated by the first letter. Rills have Roman numbers followed by the letter r. Landsberg A is a depression to the east of Landsberg.
NO ATMOSPHERE – AND THE CONSEQUENCES
The moon has no atmosphere. When it passes in front of a distant star, the star disappears from view for the whole width of the moon even though only part of the moon’s width may be illuminated. The process is like an eclipse of the star but in this case is called occultation. Occultations of stars are frequent. For example, on March 12, 1943, at 16h. 40.2m. U.T., α Tauri disappeared behind the moon. It reappeared on the same evening at 17h. 51.8m. U.T. Times of occultation are given in the nautical almanac. The disappearance or immersion of the star always takes place on the east side of the moon and the reappearance or emersion always takes place on the west side. The immersion and emersion are always instantaneous and there is no gradual falling-off of brightness as there would be if the moon had an atmosphere. Occasionally a star seems to hang for an instant on the limb as though it may have chanced on an irregularity of the moon’s surface, though this is exceedingly rare.
If this lack of atmosphere on the moon were not at once apparent by direct observation, including the occultations of stars by the moon, we could have deduced it from observations of the moon’s gravitation and the velocities of molecules of the gases in possible atmospheres. For example, having obtained the mass and dimensions of the moon it is possible to calculate, using the known gravitational laws, how fast any object at its surface would have to be moving away from it in order to leave the surface and never return. This is called the velocity of escape and for the moon it is 2.4 kilometres per second. The dimensions of the moving object do not matter. Gravity has no favourites. The mean molecular velocity for hydrogen is 1.84 km./sec. at 0°C. and since, even if the escape velocity is four times the mean velocity of the molecules, the atmosphere would be almost completely lost in 50,000 years, the moon now would have no hydrogen in its atmosphere even if it had any initially. Moreover, the mean molecular velocity of a gas is proportional to the square root of its absolute temperature, and thus, on account of the sun’s rays warming up the atmosphere, the hydrogen would be doubly sure of escaping the moon’s gravitation. Further, as the mean molecular velocity is inversely proportional to the square root of the molecular weight of the gas, the moon would also lose nitrogen, oxygen and water vapour, but would retain carbon dioxide unless at some time in the past it were much hotter to increase the speed of the carbon dioxide molecules. Escape, of course, could be hindered by collisions with other objects and other molecules but would scarcely be prevented from taking place eventually. Thus we deduce by this indirect method the conclusion that the moon has no atmosphere.
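As a present-day check on the figures quoted above, the short calculation below uses modern values for the moon's mass and radius (these values are assumed here, not given in the article); the quoted "mean molecular velocity" of hydrogen corresponds to the root-mean-square speed of H2 at 0°C.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.35e22       # mass of the moon, kg
R_MOON = 1.737e6       # radius of the moon, m

# Escape velocity: v = sqrt(2 G M / R)
v_escape = math.sqrt(2 * G * M_MOON / R_MOON)
print(round(v_escape / 1000, 2), "km/s")   # ~2.38 km/s, the article's 2.4 km/s

# Root-mean-square speed of hydrogen (H2) at 0 deg C: v = sqrt(3 R T / M)
R_GAS = 8.314          # J mol^-1 K^-1
T = 273.15             # kelvin
M_H2 = 2.016e-3        # kg/mol
v_h2 = math.sqrt(3 * R_GAS * T / M_H2)
print(round(v_h2 / 1000, 2), "km/s")       # ~1.84 km/s, the article's figure for hydrogen
```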
Mr H. G. Wells in his phantasy of the moon imagined an underground atmosphere in caves and tunnels but we have no means of observing this. As there is no atmosphere there can be no wind, and with no water vapour there can be no rain, no ice and no snow. Ordinary denudation and the morning ‘stone showers’ known in mountainous districts on the earth would be absent on the moon. There can be no rivers on the moon, no lakes and no real seas. In the mountainous districts there may be evening ‘stone showers’, as these are caused by the variations in temperature. At the time of high noon on the moon the temperature may attain some 120°C. as there is no atmospheric protection from the sun’s rays, and this temperature is also suggested by telescopic bolometric and thermopile measurements. At the time of the moon’s night the temperature may fall well below 0°C. Should particles of rock high on one of the mountains of the moon become dislodged by the differences in temperature working on crystals of different expansibility, the particles would fall to join others on a scree at the foot of the slope, but only gravity would then act to move them further, for there would be no wind, rain or river action. The angle of rest for the scree would be quickly attained. Changes on the moon’s surface might thus be expected to be very, very slow and slight, and certainly would not be noticeable at our distance for perhaps thousands of years.
There being no gaseous envelope on our satellite, there is unlikely to be any plant or animal life at its surface. Professor Turner thought that there might have been life on the moon at some distant time, though what grounds he had for this belief I do not know. Jules Verne wrote an acknowledged fantasy on the moon, but the great lunar hoax of which the New York Sun published 60,000 copies in September 1835 must rank as one of the greatest of all time and is still talked about in America. It is not so well known in Great Britain, though an English edition of the paper was published in 1836. The author is unknown, though it may have been Nicollet, and it was possibly translated from the French by Richard Adams Locke, who may have added parts of his own, since there appear to be passages unlikely to have been written by an able astronomer. The hoax concerns a telescope alleged to have been invented by Sir John Herschel (son of William) and Sir David Brewster and first turned on the moon on January 10, 1835. This instrument is stated to have enabled the two astronomers to see everything on the moon, including the vegetation and animals. Vegetation is fully described, including rose poppies and trees. The animals are also described and include brown quadrupeds like bison. There was stated to have been seen a large amphibious creature rolling on a beach, good large sheep and even Vespertilio-homo or bat-men, four feet tall, who could fly or walk erect. They were alleged to be covered with glossy copper-coloured hair and were seen near the shores of Lake Langrenus. All was described as by an eye-witness had he been with Sir John Herschel on the night of January 10, 1835. Of course it was all false and Sir John Herschel and Sir David Brewster knew nothing about it, but the New York Sun sold the whole edition of 60,000 copies and perhaps Nicollet had a laugh over the supposedly credulous Arago, who was obnoxious to him. Whatever the true story, we believe that there can be no life on the moon.
It is said that the sun gives us 570,000 times more light than the moon – also that the average slope of the lunar mountains is 47º, thus giving much more light to us by reflection at full moon than at other times, even allowing for the area visible. But for little or much moonlight on black-out nights we echo the words of Hippolyta in Shakespeare’s Midsummer-Night’s Dream: ‘Well shone, Moon’.
Utilizing Technology for Learning STEM Subjects: Perceptions of Urban African-American Middle School Students
The results of this study are presented for the demographics of the respondents and separately for each research question.
A total of 150 usable surveys were returned; 124 were answered by African-American students. As presented in Table 3, for African-American students, the number of responses from boys (n = 61) and girls (n = 63) was approximately equal. Regarding grade, the survey respondents were mostly in the sixth grade (boys: n = 25, 41%; girls: n = 25, 39.7%) and the seventh grade (boys: n = 23, 37.7%; girls: n = 28, 44.4%). The ages of the participants were concentrated between 11 and 13 years old for both boys and girls.
Research Question 1
Table 4 shows the results of general information gathered on African-American students’ perspectives on technology. Most of the respondents (73.8%) were interested in technology education class and felt they were good at using technology tools (72.2%). About half of them would like to pursue careers related to the technology field (50%) and participate in technology-related after-school clubs (58%).
Research Question 2
Mitts and Haynie (2010) suggested that differences existed between boys and girls in their preferences for technology education. According to Mitts and Haynie’s study, middle school girls preferred technology activities that focused on design or communication and that had social significance, whereas boys had a preference for “utilizing-type” activities. To examine gender differences in attitudes towards technology education, African-American students’ total scores in the section on interest in technology education classes were collected and compared using the Mann-Whitney U test. The test showed no statistically significant difference between boys’ and girls’ overall attitude scores towards technology education (U = 1750.5; p = .681 > .05). The mean rank of boys’ scores was 62.33, while girls had a mean rank of 59.73. These close mean ranks indicate that African-American boys and girls had roughly equal preferences for technology education.
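For readers who want to run this kind of comparison themselves, the sketch below shows how a two-group Mann-Whitney U test can be computed in Python with SciPy. The scores are randomly generated placeholders rather than the study's data; only the group sizes (61 boys, 63 girls) follow the demographics reported above.

```python
# Illustrative sketch, not the study's actual data or analysis code.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical total attitude scores (placeholder values) for each group.
boys_scores = rng.integers(low=10, high=31, size=61)
girls_scores = rng.integers(low=10, high=31, size=63)

# Two-sided test: do the two groups' score distributions differ?
u_stat, p_value = mannwhitneyu(boys_scores, girls_scores, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Statistically significant difference between boys and girls.")
else:
    print("No statistically significant difference between boys and girls.")
```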
Research Question 3
To examine grade level differences in attitudes towards technology, a Kruskal-Wallis H test was used to analyze whether African-American students’ overall scores in the section on interest in technology education differed among grade 6, grade 7, and grade 8. Although no significant difference was found for the section as a whole, χ2(2) = 2.006, p = .367, further analysis of each survey item revealed that attitudes differed significantly on the item “I think technology education class will be of value to me in the future,” χ2(2) = 12.495, p = .002, with mean rank attitude scores of 67.74 for grade 6, 56.94 for grade 7, and 43.67 for grade 8 (Table 5). Attitudes also differed on the item “I am good in learning technology tools,” χ2(2) = 7.194, p = .027, with mean rank attitude scores of 59.96 for grade 6, 63.98 for grade 7, and 46.52 for grade 8 (Table 5).
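A matching sketch for the three-group comparison is shown below, again with synthetic placeholder scores; only the grade-level structure mirrors the Kruskal-Wallis analysis described above, and the mean ranks are computed the same way the tables report them.

```python
# Illustrative sketch with synthetic data, not the study's responses.
import numpy as np
from scipy.stats import kruskal, rankdata

rng = np.random.default_rng(1)
grade6 = rng.integers(1, 6, size=50)  # placeholder Likert-style item scores
grade7 = rng.integers(1, 6, size=51)
grade8 = rng.integers(1, 6, size=23)

h_stat, p_value = kruskal(grade6, grade7, grade8)
print(f"chi-square(2) = {h_stat:.3f}, p = {p_value:.3f}")

# Mean ranks come from ranking the pooled scores, then averaging within each grade.
pooled = np.concatenate([grade6, grade7, grade8])
ranks = rankdata(pooled)
start = 0
for label, group in (("grade 6", grade6), ("grade 7", grade7), ("grade 8", grade8)):
    n = len(group)
    print(f"{label} mean rank: {ranks[start:start + n].mean():.2f}")
    start += n
```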
Research Question 4
Table 6 presents results for items related to respondents’ views and experiences with mobile devices in comparison to computers. These data show that most of the students owned mobile devices (82.5%) and computers (70.6%). More students have used computers (73.8%) to help with their classwork than have used mobile devices (65.9%). Over 70% of students believed it was important to know how to use technology to help with their class work and therefore wished for more opportunities to use technology for learning.
Research Question 5
To examine gender differences in attitudes towards the use of technology for learning, comparing mobile devices and computers, Mann-Whitney U tests on the total scores of African-American boys and girls in the section on the use of technology for learning showed no statistically significant differences (computer use for learning: U = 1590, p = .204; mobile device use for learning: U = 1518, p = .101).
Although no significant differences were found based on the overall scores in the section on the use of technology for learning, further analysis of each survey item revealed that attitudes differed significantly on the item “would you like to learn ways to use a computer to help with class work,” U = 1223.5, p = .019 < .05, with mean rank attitude scores of 61.54 for boys and 50.16 for girls (Table 7). For the item “if you have used a computer to help study, was it helpful,” the mean rank attitude scores were 55.48 for boys and 58.49 for girls, although this difference was not statistically significant (U = 1511, p = .500) (Table 7).
Research Question 6
To examine grade level differences in attitudes towards the use of technology, comparing mobile devices and computers, no significant differences were identified among grade 6, grade 7, and grade 8 in African-American students’ overall scores for the sections on computer use for learning and mobile device use for learning, considering all six items together. However, further analysis of each survey item indicated that attitudes differed significantly on the item “would you like to learn ways to use a computer to help with class work,” χ2(2) = 10.83, p = .004 < .05, with mean rank scores of 66.65 for grade 6, 50.15 for grade 7, and 49.71 for grade 8 (Table 8). Attitudes also differed on the item “if you have used a computer to help study, was it helpful,” χ2(2) = 7.180, p = .028 < .05, with mean rank scores of 64.20 for grade 6, 54.77 for grade 7, and 48.77 for grade 8 (Table 8).
Research Question 7
Tables 9 and 10 present results concerning students’ beliefs and preferences for technology in different STEM subjects. Most respondents (n = 70) thought that mobile devices were helpful in their math course, and some preferred a computer (n = 51) to help with their computer and technology course. Few respondents believed that mobile devices (n = 16; n = 21) or computers (n = 15; n = 14) would be useful in biology or chemistry. Compared to computers, mobile devices were relatively more popular among these students across STEM subjects.
The purpose of this study was to gain insight into African American middle school students’ level of access to technology, their perceived value of and interest in technology education, and their utilization of technology for learning, particularly in STEM areas. The participants were students enrolled in a STEM-focused, Title I urban middle school in south central Louisiana.
Limitations of the Study
The results of our study must be interpreted within the limitations and delimitations of the inquiry. This research is based on data collected from an urban charter school in one state within the United States. The sample size is small; consequently, the results may not be representative of the African American population in other areas. Although the survey was administered during students’ participation in the Louisiana STEM expo, the response rate was rather low. Though the instrument used in this research is considered both valid and reliable (Mahoney, 2009), the modified instrument provided only the internal consistency reliability of the measure. Within these boundaries, the study nonetheless yielded several interesting findings.
The results for research questions 1 to 3 provide information about African-American students’ interests and attitudes toward technology in the curriculum and toward technology-related clubs and field trips. The percentage of YES responses for these items makes clear that the majority of respondents are interested in technology-related courses and field trips. Although research reveals that the technology gap has widened, particularly between Caucasian and African American children (Jackson et al., 2008), this result suggests African American children are confident in learning technology literacy.
Boys and girls were generally equally enthusiastic about technology-related activities and programs. Younger African American children appear to feel that technology education is more important to them and, according to their survey responses, they value technology-related classes in and out of school. This finding is important: while current research pays considerable attention to the roles of gender and race in technology education (Jackson et al., 2008; Kahveci, 2010; Ritzhaupt et al., 2013), little research has investigated age and its relationship to educational technology and related topics, particularly for the African American population.
The results for research questions 4 to 6 provide information about today’s African American middle school students’ technology-related learning environment (e.g., mobile devices and computers) and their motivation for using technology for learning purposes. In this study, the high percentage of YES responses showed that most African-American students at this school owned or had access to both mobile devices and computers, although they expressed a preference for using computers to help with school assignments. Most African-American students in this urban school have access to technology devices, either at school or at home. This finding is interesting because prior research shows a disparity in access to technology for minority groups, particularly African American children (Fairlie, 2012; Hesseldahl, 2008; Jackson et al., 2008; Ritzhaupt et al., 2013).
The results of the current study also reveal that African-American boys were more willing than African-American girls to learn how to use computers to help with their class work. Consistent with Goldstein and Puntambekar (2004), when given a choice of technologies to assist with classwork, boys were more motivated than girls to learn through computers, and girls were less confident in using computers. However, the results also indicate that African-American girls believe it is more meaningful to use a computer to help study. This corresponds to prior research finding that African American males were less likely to make meaningful use of ICT resources than their African American female counterparts (Jackson et al., 2008).
Another interesting finding is that younger African American children showed a strong tendency to do class work on computers and believed that using computers to help study is beneficial. Previous research showed that students’ motivation to use technology for learning varied by personal characteristics such as gender and grade level at the high school level (Kahveci, 2010), but little research has examined the effect of grade level on how students choose to learn at the middle school level. This study suggests that, among African American students, those in lower grades tended to be more satisfied with using technology than those in higher grades; younger children expressed greater enthusiasm for learning with technology.
The results for research question 7 indicate that these students felt that mobile devices would be most useful in mathematics. Similarly, they felt that computers would be useful in computer science and technology-related courses. In addition, few respondents advocated the use of technology in biology and chemistry. It was not clear from this survey why the respondents felt that mobile devices would be especially useful in math or why they were less likely to use technology in biology and chemistry. |
Amenorrhea is the absence of menstruation. After reaching puberty, women normally have a menstrual period about once a month; when menstruation does not occur, the condition is called amenorrhea. It happens naturally during pregnancy and while breastfeeding.
Until reaching menopause, a woman normally has regular menstrual bleeding, and over the menstrual cycle the ovaries release the hormones estrogen and progesterone. The absence of this bleeding is called amenorrhea.
Types of Amenorrhea:
- Primary amenorrhea.
- Secondary amenorrhea.
Primary amenorrhea occurs when a girl has not begun menstruating by the usual age of puberty. It is very rare.
Secondary amenorrhea occurs when a woman who has had periods goes without one for more than 3 months in the absence of natural causes such as pregnancy or breastfeeding. It is more common, affecting roughly 3 to 5 percent of women.
Symptoms & diagnosis:
The primary symptom of amenorrhea is the absence of periods. However, the other symptoms are listed below:
- Milky nipple discharge
- Hair loss
- Excessive facial hair growth
- Changes in the vision
- Weight gain
- Pelvic pain
The diagnosis depends on the type of amenorrhea. If a girl has not begun menstruating by the age of 16, it is best to consult a doctor, who will perform a pelvic examination and look for possible causes.
For secondary amenorrhea, it is best to consult a doctor if a menstrual period is delayed by more than 3 months. The doctor first orders a pregnancy test; if it is negative, genetic and karyotype tests can help identify the cause.
Amenorrhea has a variety of causes. Primary amenorrhea is usually due to genetic disorders; it may also reflect a family history of delayed menstrual periods.
There are some genetic disorders associated with this primary amenorrhea which includes:
- Turner syndrome
- Androgen insensitivity syndrome,
- Mullerian defects
These genetic disorders prevent the ovaries from working properly, and there may also be malformations of the uterus and fallopian tubes, so reproductive development does not proceed normally, leading to the absence of menstruation. There are also some natural causes of amenorrhea, which include:
- Taking birth control pills
- Pregnancy and Breastfeeding.
Surgery is rarely used to treat amenorrhea. Treatment usually involves lifestyle changes related to diet, physical activity, and stress reduction.
Birth control pills and hormonal tablets can stimulate the period, and hormone therapy may be needed in some cases; such treatments are uncommon and are used only to correct hormonal and genetic defects. |
What does it mean to “Rhyme” or to be “Rhyming”?
The terms “Rhyme” and “Rhyming” (also spelled “Rhymin’”) are used in music and poetry to describe words or sounds that sound the same and work well together musically.
A rhyme is two or more words that sound similar or alike.
Rhyming means matching words whose syllables sound the same or similar.
Rhymes are words, whether at the ends of lines in a song or poem or simply in everyday speech, that sound alike or similar. |
Writing Self, Writing Empire introduces the reader to the cultural world of the Mughal Empire and the pluralistic ways of the Mughal imperial court through the works of Chandar Bhan Brahman, a munshi or state secretary of the empire. The munshis in the north and the karanams in the south were a significant corps of professionals within a wider community of scribes in early modern India. They were the literati of the chancellery who produced systems of knowledge, ideas and information and helped to disseminate them until at least the nineteenth century or the coming of the printing press. The roots of the scribal class during the early modern period lay in the socio-economic changes that led to the rising importance of written communications and record-keeping. Knowledge itself, argued C.A. Bayly (2001), was a social formation, and as people of knowledge the scribes represented a distinct, active social segment that encountered a number of opportunities and challenges. Specific skills and training were mandatory for the scribes, and these were provided within the family structure to ensure recruitment and efficient discharge of duty while in employment. |
When we think of compassionate, intelligent creatures, cows normally don’t come to mind.
However, cows actually communicate how they feel to one another through their moos, according to a new study. The animals have individual vocal characteristics and change their pitch based on the emotion they’re feeling, according to research at the University of Sydney.
Alexandra Green, a Ph.D. student at the university and the study’s lead author, said:
“Cows are gregarious, social animals. In one sense it isn’t surprising they assert their individual identity throughout their life.”
She said it’s the first time they’ve been able to study voices to obtain evidence of this trait.
The Studies on the Communications Between Cows
Studying a herd of 18 Holstein-Friesian heifers over the course of five months, Alexandra found that the cows gave individual voice cues in different positive and negative situations. This behavior helps them communicate with the herd and express excitement, arousal, engagement, or distress.
Talking about the animals she studied, Ms. Green said:
“They have all got very distinct voices. Even without looking at them in the herd, I can tell which one is making a noise just based on her voice.”
She would record and study their “moos” to analyze their moods in various situations within the herd.
“It all relates back to their emotions and what they are feeling at the time,” she said.
Previous research has discovered that cow moms and babies use their voices to communicate individuality.
However, this new study shows how cows keep their individual moos throughout their lives, even if they’re talking to themselves. The study found that the animals would speak to each other during mating periods, while waiting for or being denied food, and when being kept separate from one another.
The research analyzed 333 cow vocalizations and has been published in Scientific Reports.
“Ali’s research is truly inspired. It is like she is building a Google translate for cows,” said Cameron Clark, an associate professor at the university.
Ms. Green said she hoped this study would encourage farmers to “tune into the emotional state of their cattle, improving animal welfare.”
Studies have shown that animals communicate with one another in similar ways to humans, taking turns in conversations. This is beneficial in the animal kingdom to communicate needs, such as where food sources are at or if the herd needs to move locations. It can also help animals communicate about an incoming threat so they can respond accordingly.
Final Thoughts About Cows Communicating
This research shows that animals are intelligent, sentient beings and deserve our respect. Vegetarianism and veganism are on the rise as people wake up to how eliminating meat from our diets can positively impact health as well as show compassion to other living beings. Cows also contribute greatly to greenhouse gas emissions, producing 37% of the methane emissions resulting from human activity. One study showed that a single cow, on average, produces between 70 and 120 kg of methane a year.
This is significant because across the globe, there are around 1.5 billion cattle. Many scientists are coming together to talk about how a plant-based diet could greatly help to slow down climate change. |
The Black Sea is one of the largest regional seas of the Eurasian continent and unique in many of its geographical, geological, biological, hydrographical and socio-political characteristics. With anoxic conditions in the deep, problems with invasive species and high sediment loads delivered to the system, this area has unique problems requiring long-term stations. The Black Sea is located in a geologically complex area where three major tectonic plates (Eurasian, Anatolian, Arabian) interact. Geohazards such as earthquakes, submarine landslides and displacement along active faults are present and are possible triggers of tsunamis, together with extreme meteorological events.
EMSO scientific disciplines: geosciences, physical oceanography |
The Magellanic Clouds are irregular galaxies that share a gaseous envelope and lie about 22° apart in the sky near the south celestial pole. One of them, the Large Magellanic Cloud (LMC), is a luminous patch about 5° in diameter, and the other, the Small Magellanic Cloud (SMC), measures less than 2° across. The Magellanic Clouds are visible to the unaided eye in the Southern Hemisphere, but they cannot be observed from the northern latitudes. The LMC is about 160,000 light-years from Earth, and the SMC lies 190,000 light-years away. The LMC and SMC are 14,000 and 7,000 light-years in diameter, respectively, and are smaller than the Milky Way Galaxy, which is about 140,000 light-years across.
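As a quick consistency check on the figures above, a galaxy's physical diameter is approximately its distance times its angular size in radians (the small-angle approximation). The short sketch below reproduces the quoted diameters from the quoted distances and angular sizes.

```python
# Small-angle estimate: physical size ~= distance * angular size (in radians).
import math

def physical_diameter(distance_ly, angular_size_deg):
    return distance_ly * math.radians(angular_size_deg)

print(f"LMC: about {physical_diameter(160_000, 5):,.0f} light-years across")  # ~14,000
print(f"SMC: about {physical_diameter(190_000, 2):,.0f} light-years across")  # ~6,600
```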
The Magellanic Clouds were formed at about the same time as the Milky Way Galaxy, approximately 13 billion years ago. They are presently captured in orbits around the Milky Way Galaxy and have experienced several tidal encounters with each other and with the Galaxy. They contain numerous young stars and star clusters, as well as some much older stars. One of these star clusters contains R136a1, the most massive star known, with a mass 265 times that of the Sun.
The Magellanic Clouds serve as excellent laboratories for the study of very active stellar formation and evolution. For example, the immense ionized-hydrogen region 30 Doradus (also called the Tarantula Nebula) contains many young, hot stars. The total mass of 30 Doradus is about 1,000,000 solar masses, and its diameter is 550 light-years, making it the largest region of ionized gas in the entire Local Group of galaxies. With the Hubble Space Telescope it is possible for astronomers to study the kinds of stars, star clusters, and nebulae that previously could be observed in great detail only in the Milky Way Galaxy. |
The Style Is Best Known For
- Naked ladies by Pablo Picasso.
- Greyish-brown still lifes by Georges Braque.
- Collages and collage-like paintings.
Who, What, and When?
Cubism was developed by artists living in Paris and was popular between 1907 and the mid-1920s. Cubism is best known for its paintings and collages, but its influence filtered into other forms of visual arts, including architecture.
How to Recognize It
- Angular, faceted shapes.
- It seems like you’re looking at the work through shattered glass.
- Flatness and two-dimensionality.
- Some aspects may be recognizable, but others seem totally abstract
- Collages or collage-like paintings.
- Still lifes, often of man-made objects.
- Human faces and forms.
- Faces often look mask-like.
- Browns, greys, and other dark or neutral colors.
- Space is ambiguous.
The first Cubists, notably Picasso, were inspired by a posthumous exhibition of works by the Post-Impressionist Paul Cezanne. They loved his transformation of natural features into simple geometric forms. Around this same time, modernists were becoming interested in African art, which featured stylized representations of the human face and body. Picasso decided to expand on these ideas in his Les Demoiselles d’Avignon (1907). He and his friend Georges Braque experimented together to take their ideas further and further; other French artists soon took up these ideas as well. Cubism was a short-lived movement that lost steam after World War One, but it had a far-reaching influence on many subsequent movements.
- Showing a subject from more than one viewpoint at the same time.
- Breaking an object into pieces that can fit together in many different configurations.
- The painting is a physical, two-dimensional object that has its own surface. It’s not just a means of representing another object.
- The object and the background are equally important. Traditional European ideas about perspective are not important.
- Cubists wanted to tap into new ideas about the reality we live in.
- Pablo Picasso
- Juan Gris
- Fernand Leger
- Georges Braque
Analytic Cubism: Earlier – paintings that take apart objects, analyze them, and put them back together differently.
Synthetic Cubism: Later – collages that bring new elements (like newspapers) into works of art and explore their possibilities for conveying ideas and emotions.
Both types of Cubism were practiced by the same artists. The jury’s still out on which one was more radical.
Don’t Confuse It With
Other works by these artists: Cubism may have been short-lived, but these artists weren’t. Many of them painted in more than one style during their careers. Just because a work is by Picasso or Leger doesn’t mean it is necessarily Cubist. To differentiate, keep in mind the characteristics mentioned above, and look at dates when in doubt.
Fauvism: Fauvism was another movement developing in France a few years before Cubism. Both styles grew out of Post-Impressionism, but Fauvism focused on innovating with color rather than form.
Abstract art: We’re still not there yet, but we’re getting closer.
|
Whooping Cough Update
In the lead article in the January issue of Infectious Diseases in Children, the authors made these important points about whooping cough, also referred to as pertussis:
- Whooping cough is a lot more common than we think and often goes unrecognized and untreated.
- It is one of the most common vaccine-preventable diseases.
- Children between the ages of seven and ten years are now at much higher risk of getting whooping cough, although the highest incidence is between one and ten years of age.
Infectious disease specialists are concerned about why the number of children with whooping cough (pertussis) has increased in the United States, a country with high rates of vaccination against pertussis. One clue that a child may have whooping cough and not just another viral or bacterial infection of the lungs is that pertussis usually causes no fever, whereas other bacterial and viral infections usually produce a fever along with the cough.
Parents can also suspect the whooping cough (pertussis) if a child keeps coughing so hard that they need to take a quick heavy breath that sounds like a “whoop” on the next deep breath after the coughing jag. It’s sort of like a catch-up breath from the breathing they missed during the episode of coughing.
Whooping cough begins with the same signs and symptoms of a common cold: runny nose and a mild cough. Yet, after two or three weeks when a “cold” should have gone away, with pertussis the cough increases in frequency and severity and can go on for another three or four weeks. During this time, whooping cough should be suspected and discussed with your doctor.
Besides being sure your child is up-to-date with his whooping cough vaccinations, the authors also stress the importance of seeing your doctor as soon as you suspect whooping cough because treatment with a mild antibiotic may at least prevent pneumonia complications and slow down the spread of the germ within families. |
Temporarily disabling a single protein inside our cells might be able to protect us from the common cold and other viral diseases, according to a study led by researchers at Stanford University and University of California-San Francisco.
The findings were made in human cell cultures and in mice.
“Our grandmas have always been asking us, ‘If you’re so smart, why haven’t you come up with a cure for the common cold?’” said Jan Carette, PhD, associate professor of microbiology and immunology. “Now we have a new way to do that.”
The approach of targeting proteins in our own cells also worked to stop viruses associated with asthma, encephalitis and polio.
Colds, or noninfluenza-related upper respiratory infections, are for the most part a weeklong nuisance. They’re also the world’s most common infectious illness, costing the United States economy an estimated $40 billion a year. At least half of all colds are the result of rhinovirus infections. There are roughly 160 known types of rhinovirus, which helps to explain why getting a cold doesn’t stop you from getting another one a month later. Making matters worse, rhinoviruses are highly mutation-prone and, as a result, quick to develop drug resistance, as well as to evade the immune surveillance brought about by previous exposure or a vaccine.
In a study published online Sept. 16 in Nature Microbiology, Carette and his associates found a way to stop a broad range of enteroviruses, including rhinoviruses, from replicating inside human cells in culture, as well as in mice. They accomplished this feat by disabling a protein in mammalian cells that all enteroviruses appear to need in order to replicate.
Carette shares senior authorship with Or Gozani, MD, PhD, professor of biology at Stanford and the Dr. Morris Herzstein Professor of Biology; Raul Andino, PhD, professor of microbiology and immunology at UCSF; and Nevan Krogan, PhD, professor of cellular and molecular pharmacology at UCSF. The lead authors are former Stanford graduate student Jonathan Diep, PhD, and Stanford postdoctoral scholars Yaw Shin Ooi, PhD, and Alex Wilkinson, PhD.
Well-known and feared
One of the most well-known and feared enteroviruses is poliovirus. Until the advent of an effective vaccine in the 1950s, the virus spelled paralysis and death for many thousands of children each year in the United States alone. Since 2014, another type of enterovirus, EV-D68, has been implicated in puzzling biennial bursts of a poliolike disease, acute flaccid myelitis, in the United States and Europe. Other enteroviruses can cause encephalitis and myocarditis — inflammation of the brain and the heart, respectively.
Like all viruses, enteroviruses travel lightly. To replicate, they take advantage of proteins in the cells they infect.
To see what proteins in human cells are crucial to enteroviral fecundity, the investigators used a genomewide screen developed in Carette’s lab. They generated a cultured line of human cells that enteroviruses could infect. The researchers then used gene editing to randomly disable a single gene in each of the cells. The resulting culture contained, in the aggregate, cells lacking one or another of every gene in our genome.
The scientists infected the culture with RV-C15, a rhinovirus known to exacerbate asthma in children, and then with EV-C68, implicated in acute flaccid myelitis. In each case, some cells managed to survive infection and spawn colonies. The scientists were able to determine which gene in each surviving colony had been knocked out of commission. While both RV-C15 and EV-D68 are both enteroviruses, they’re taxonomically distinct and require different host-cell proteins to execute their replication strategies. So, most of the human genes encoding the proteins each viral type needed to thrive were different, too. But there were only a handful of individual genes whose absence stifled both types’ ability to get inside cells, replicate, bust out of their cellular hotel rooms and invade new cells. One of these genes in particular stood out. This gene encodes an enzyme called SETD3. “It was clearly essential to viral success, but not much was known about it,” Carette said.
The scientists generated a culture of human cells lacking SETD3 and tried infecting them with several different kinds of enterovirus — EV-D68, poliovirus, three different types of rhinovirus and two varieties of coxsackievirus, which can cause myocarditis. None of these viruses could replicate in the SETD3-deficient cells, although all proved capable of pillaging cells whose SETD3-producing capability was restored.
The researchers observed a 1,000-fold reduction in a measure of viral replication inside human cells lacking SETD3, compared with controls. Knocking out SETD3 function in human bronchial epithelial cells infected with various rhinoviruses or with EV-D68 cut replication about 100-fold.
Mice bioengineered to completely lack SETD3 grew to apparently healthy adulthood and were fertile, yet they were impervious to infection by two distinct enteroviruses that can cause paralytic and fatal encephalitis, even when these viruses were injected directly into the mice’s brains soon after they were newly born.
“In contrast to normal mice, the SETD3-deficient mice were completely unaffected by the virus,” Carette said. “It was the virus that was dead in the water, not the mouse.”
Enteroviruses, the scientists learned, have no use for the section of SETD3 that cells employ for routine enzymatic activity. Instead, enteroviruses cart around a protein whose interaction with a different part of the SETD3 molecule, in some as yet unknown way, is necessary for their replication.
“This gives us hope that we can develop a drug with broad antiviral activity against not only the common cold but maybe all enteroviruses, without even disturbing SETD3’s regular function in our cells,” Carette said.
Enterovirus pathogenesis requires the host methyltransferase SETD3. Jonathan Diep et al. Nature Microbiology (2019), https://doi.org/10.1038/s41564-019-0551-1.
|
Cyber Robotics 101 Curriculum
“Bring Cyber Robotics into your curriculum – use the excitement of robotics to introduce all your students to coding”
CoderZ is a classroom optimized STEM Education solution that encompasses the learning of coding and virtual robotics together with curriculum, classroom management and teacher resources. The solution is developed to enable schools and teachers to engage students with hands on STEM.
CoderZ enables all students to learn STEM with robotics. With CoderZ your students will learn how to code virtual robots accompanied by a step-by-step curriculum and gamified missions completely online. No need for expensive hardware or specialized training.
CoderZ is classroom-ready, designed for teachers, and school-friendly.
Thousands of students and teachers are already using CoderZ in the classroom and in online competitions.
Cyber Robotics 101
Cyber Robotics 101 is a flexible learning program for educators to introduce students to the core concepts of code development and robotics. Students will learn mechanics, navigation, sensors and more while being introduced to programming components like commands, variables, conditional logic, loops, smart blocks (functions) and more.
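As a rough illustration of those concepts, the sketch below is a plain-Python analogue, not CoderZ's actual block language or API, of a repeat loop, a conditional sensor check, and a reusable "smart block". The SimulatedRobot class is a made-up stand-in so the example runs on its own.

```python
# Plain-Python analogue of the course's core ideas; the robot API here is invented
# for illustration and is not CoderZ's real interface.

class SimulatedRobot:
    def __init__(self):
        self.distance_to_wall = 100.0  # centimetres, made-up starting value

    def ultrasonic_distance(self):
        return self.distance_to_wall

    def drive_forward(self, speed):
        self.distance_to_wall -= speed / 10.0  # crude motion model
        print(f"driving forward at speed {speed}")

    def turn(self, degrees):
        print(f"turning {degrees} degrees (gyro-assisted)")

    def stop(self):
        print("stopped")


def drive_until_obstacle(robot, safe_distance_cm=20):
    """Conditional loop: keep driving while the ultrasonic sensor reads clear."""
    while robot.ultrasonic_distance() > safe_distance_cm:
        robot.drive_forward(speed=50)
    robot.stop()


def square_path(robot, sides=4):
    """A reusable 'smart block' (function): repeat a drive-and-turn sequence."""
    for _ in range(sides):  # repeat loop
        robot.drive_forward(speed=50)
        robot.turn(degrees=90)


robot = SimulatedRobot()
drive_until_obstacle(robot)  # commands + conditional logic + sensor
square_path(robot)           # variables + repeat loop + smart block
```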
Presentations with speaker notes
Pre / post assessments
Course Progress control
Heatmaps and student reports
Course outline and learning objectives
Intro To STEM And Coderz
Overview of STEM and first steps in CoderZ learning environment.
Drive The Robot
Learn about drive systems and how to navigate your robot using computer code.
Drive The Robot
Use Steering and Smart Blocks to carry out complex maneuvers.
The Touch Sensor
Learn how to use the Robot’s touch sensor for autonomous navigation using basic coding blocks.
The Repeat Loop
Learn how to code more efficiently with the Repeat loop.
The Gyro Sensor
Learn how to make accurate turns using data from the Gyroscopic sensor.
Learn how to reset the data from the Gyroscopic sensor.
Design then make the dominoes fall in sequence.
Final Challenge Missions I
Apply all you’ve learned so far and take on an advanced challenge that puts your skills to the test.
The Ultrasonic Sensor
Learn how to avoid obstacles by sensing them from afar using the Ultrasonic sensor.
The Color Sensor
The robot can detect colors on the floor and use them to make better decisions.
Challenge Missions II
More advanced challenges to put students’ skills to the test.
Using Motor blocks and the robot’s arm to grab and move objects.
Coding the robot to make decisions. |
Most of us remember learning about body parts with the song, “Head, Shoulders, Knees, and Toes,” but children grow more aware of their bodies as they develop. Babies discover their hands when they are manipulating toys, notice their feet when they’re able to grab at them and bring them to their mouths, and learn about other body parts during daily self-care (e.g., brushing teeth, washing face, combing hair, etc.). Here are some fun ways of helping your child learn about their body parts.
Create gross motor dice with basic body parts (e.g., arms, legs, head, fingers, etc.) and various movements (e.g., twist, shake, bend, etc.).
Paste parts of the face (e.g., eyes, nose, ears, mouth, etc.) to a blank face template.
Trace your child’s body on a large piece of paper and have him draw in missing body parts.
Play “Simon Says” with your child and ask her to touch, point, wiggle, and or shake body parts.
Cut out body parts in magazines and play the matching game. Have your child match one leg from one picture with a leg from another.
Cut out a potato and body parts from felt. Have your child place body parts onto Mr./Mrs. Potato Head.
Sing and dance to songs including “Head, Shoulders, Knees, and Toes;” “The Hokey Pokey;” and “Rub-a-Dub Song.”
Create a homemade book. Include pictures of your child and family members. Cover body parts using flaps and encourage your child to find specific body parts.
Read books about body parts. Examples include: “Is this my Nose?”; “Here are my Hands;” “My First Body;” “Toes, Ears, & Nose! A Lift-the-Flap Book;” and “Eyes, Nose, Fingers, and Toes: A first Book All About You.” |
Body language is possibly the most convincing and stimulating form of human communication. It can suggest expressions of excitement, anger, confidence and more, without the assistance of any spoken or written language. Because of its unique power, body language is the perfect medium for communication through mannequins in storefronts or advertisements. Mannequin designers have long capitalized on strategic posing in order to fully utilize mannequin forms, not only creating a display for clothes, but a communication to the public simultaneously. Mannequins in window displays everywhere speak to passers by, sending the message they were created to tell.
Depending on the merchandise, mannequins have different stories to tell. Confident stylish poses are common in designer clothing venues, while action poses are perfect for sporting goods stores. This strategic use of body language suggests to a shopper that the clothes are important and stylish, or the athletic clothing and equipment will make you a great athlete. This kind of non-verbal communication is far more effective than people realize. When a shopper perceives something visually rather than in written or spoken form, they are less likely to analyze the information and dismiss the claims. Instead, most people who see a mannequin’s body language will accept this communication as real and satisfying. This is part of what makes mannequins very powerful store fixtures.
Throughout the history of mannequins, they have embodied different interpretive messages depending on the trends or mentality of society. For this reason, the study of mannequin positions throughout history can reveal a great deal about historical societies. Many factors affect mannequin production decisions. What type of model is the mannequin drawn from? What kind of interpretation of human anatomy is present? What level of fashion does the mannequin suggest? What materials were used? All of these questions offer a glimpse of past societal differences. Mannequin forms of the 1800s were dramatically larger in girth than today’s specimens, suggesting the logical explanation that the perception of beauty and the human body was very different then. There are also hundreds of societal conclusions that can be drawn from the body language of mannequins, clues about women’s roles in society or perhaps the hierarchy of class.
Political, artistic, and technological issues are also reflected in mannequin body language, because these silent salesmen display the imaginative needs of society and people groups. Not only are mannequins extremely effective communicators, but they are a record of old messages intended for people of the past. |
The following resources promote effective teaching about antisemitism and the Holocaust.
History of Antisemitism and the Holocaust
This lesson focuses on the history of antisemitism and its role in the Holocaust to better understand how prejudice and hate speech can contribute to violence, mass atrocity, and genocide. Learning about the origins of hatred and prejudice encourages students to think critically about antisemitism today.
Holocaust Encyclopedia Articles
The following related articles contain critical learning questions that can be used when discussing article content with students.
Understanding Nazi Symbols
By focusing on the history and meaning of the swastika, the lesson provides a model for teachers to use when examining the origins of symbols, terms, and ideology from Nazi Germany and Holocaust-era fascist movements that students are seeing in contemporary American culture, promoting critical historical thinking and analysis.
Racism fueled Nazi ideology and politics. To critically analyze actions taken by Nazi Germany and its collaborators requires an understanding of the concept of racism in general and Nazi racial antisemitism in particular.
These guides are designed to facilitate student discussion and learning about fighting prejudice, responding to genocide, religion and identity, and other topics relevant today. The lessons use the Museum’s Voices on Antisemitism podcast to illustrate the existence and broad impact of modern-day antisemitism. |
A large comet that peppered Jupiter two decades ago brought water into the giant planet’s atmosphere, according to new research from the Herschel space observatory.
Shoemaker-Levy 9 astounded astronomers worldwide when its 21 fragments hit Jupiter in July 1994. The event was predicted, and observatories were trained on Jupiter as the impacts occurred. The dark splotches the comet left behind were even visible in small telescopes. But apparently, those weren’t the only effects of the collision.
Herschel’s infrared camera revealed there is two to three times more water in the southern hemisphere of the planet, where the comet slammed into the atmosphere, than in the northern hemisphere. Further, the water is concentrated in high altitudes, around the various sites where Shoemaker-Levy 9 left its mark.
It is possible, researchers acknowledged, that water could have come from interplanetary dust striking Jupiter, almost like a “steady rain.” If this were the case, however, scientists expect the water would be evenly distributed and also would have filtered to lower altitudes. Jupiter’s icy moons were also in the wrong locations, researchers said, to have sent water towards the massive planet.
Internal water rising up was ruled out because it cannot penetrate the “cold trap” between Jupiter’s stratosphere and cloud deck, the researchers added.
“According to our models, as much as 95 percent of the water in the stratosphere is due to the comet impact,” said Thibault Cavalié of the Astrophysical Laboratory of Bordeaux, in France, who led the research.
While researchers have suspected for years that Jupiter’s water came from the comet — ESA’s Infrared Space Observatory saw the water there years ago — these new observations provide more direct evidence of Shoemaker-Levy 9’s effect. The results were published in Astronomy and Astrophysics.
Herschel’s find provides more fodder for two missions that are scheduled for Jupiter observations in the coming few years. The first goal for NASA’s Juno spacecraft, which is en route and will arrive in 2016, is to figure out how much water is in Jupiter’s atmosphere.
Additionally, ESA’s Jupiter Icy moons Explorer (JUICE) mission is expected to launch in 2022. “It will map the distribution of Jupiter’s atmospheric ingredients in even greater detail,” ESA stated.
While ESA did not link the finding to how water came to be on Earth, some researchers believe that it was comets that delivered the liquid on to our planet early in Earth’s history. Others, however, say that it was outgassing from volcanic rocks that added water to the surface.
Conventional theory dictates ice was in our solar system from when it was formed, and today we know that many planets have water in some form. Last year, for example, water ice and organics were spotted at Mercury’s north pole.
Mars appeared to be full of water in the ancient past, as evidenced by a huge, underground trench recently discovered by scientists. There is frozen water at the Martian poles, and both the Curiosity and Spirit/Opportunity rover missions have found evidence of flowing water on the surface in the past.
The outer solar system also has its share of water, including in all four giant planets (Jupiter, Saturn, Uranus and Neptune) and (in ice form) on various moons. Even some exoplanets have water vapor in their atmospheres.
“All four giant planets in the outer solar system have water in their atmospheres, but there may be four different scenarios for how they got it,” added Cavalié. “For Jupiter, it is clear that Shoemaker-Levy 9 is by far the dominant source, even if other external sources may contribute also.”
Source: European Space Agency |
When animals lose something,
be it a tail, finger, limbs, eyes or teeth, usually a vestige is left behind.
When turtles lost their ancestral teeth,
they should have left empty alveoli along their jaw rims. And the place to look for empty alveoli in turtles is in the most primitive turtle in the large reptile tree, the late-surviving Niolamia (Fig. 1), one of the great horned meiolaniid turtles.
Small empty alveoli along the maxilla (Fig. 1) seem to show where tiny teeth once erupted in Niolamia.
Earlier we looked at similar alveoli in the jaw tips of a gray whale where desmostylian tusks once emerged. |
An analysis of trends and practices of teaching social studies in the elementary school. An emphasis will be made on how to transfer theory into practice through preparation of activities and materials appropriate for the elementary classroom and critical reflection on those very materials and approaches. Students will plan instruction considering student-based diversity, instructional demands of the field and the best integration of other tools and disciplines. Teaching candidates will be required to prepare these instructional elements focused on the Virginia Standards of Learning.
For information regarding prerequisites for this course, please refer to the Academic Course Catalog.
The emphasis on cognitive skills evolved from the need to place heavier weight on content developed in the General Studies, while the technique for communicating those skills to elementary-grade children has been designated as the role of the School of Education. In the area of social sciences, the focus must be upon the development of cognitive skills as opposed to the mere retention of facts. Current teachers must stay abreast of the latest approaches to instructional practice in elementary social studies, including general philosophical approaches and specific tools and techniques.
Measurable Learning Outcomes
Upon successful completion of this course, the candidate will be able to:
- Analyze the ten thematic strands of social studies education as espoused by the National Council for the Social Studies (NCSS).
- Evaluate the basic philosophical approaches to social studies instruction in the elementary school.
- Discuss effective planning capabilities while reflecting critically upon choices made in design as applied to social studies instruction in the elementary school.
- Integrate varying disciplines and tools into instructional design for social studies education in the elementary school.
- Create activities and approaches for the instruction of social studies in the elementary school that reflect best practices and instructional considerations.
- Discuss various methods of integrating biblical principles in the social studies classroom.
Textbook readings and lecture presentations/notes (MLO: A, B, D)
Course Requirements Checklist
After reading the Course Syllabus and Student Expectations, the candidate will complete the related checklist found in the Course Overview.
There will be 6 Discussions in this course. The original post (thread) to the Discussion question posed must be 400–500 words. This parameter helps to promote writing that is both thorough and yet concise enough to permit other candidates to read all the posts. Appropriate references must be made in current APA format. The candidate is allowed to use first person voice in his/her posts. There are to be at least 2 replies to other candidates’ original threads. Each interaction reply must be between 200–250 words. (MLO: A, B, C, D).
For this assignment, the candidate will identify forms and applications of technology for use in an elementary school social studies classroom and will describe the general applications of these technologies, specific applied activities in the general social studies arena, and provide an evaluation. A portfolio of 10 technologies that could be used in an elementary school social studies classroom is to be developed, identifying general uses, alignment with appropriate national social studies standards, potential activities, and the strengths and weaknesses of each technology’s use. (MLO: A, C, D, E)
Literature Review: Topic Submission
The candidate will select a topic in the general area of social studies instruction in elementary grade education to research and analyze literature on the topic's latest trends and issues.
Literature Review: Presentation
The candidate will examine accompanying literature related to their selected topic to identify its latest trends and issues. Results will be put into a PowerPoint presentation of 10–20 slides to identify these trends in elementary social studies education associated with a set of identified articles in the literature. (MLO: A, B, C, D, E).
Each candidate is expected to complete a standard lesson plan as part of this course. The standard lesson plan format provided must be used and all materials must be included in the final submission. (MLO: A, D, E, F)
Civics Education Module
The candidate will complete VDOE’s Civics Education Module. The civics module serves as online training that is required for candidates seeking the elementary endorsement or the middle and/or secondary social science endorsement. Upon completion of the module, the candidate will be granted a certificate of completion.
Each candidate will produce a unit of instruction. The elements are provided within the course. The plan will be given as a PowerPoint presentation with a maximum of 20 slides. The intent of this project is to help the candidate see the big picture for instructional design with regard to a specific topic. (MLO: A, C, D, E, F)
Each candidate will select 15 chapters from the Obenchain textbook and write a strategy similar to what the text teaches. (MLO: A, B, C, D, E, F) |
Embedded systems are often mobile or deployed in remote locations off the grid, and must run reliably for years. The small size of these embedded computing systems combined with performance demands creates small, localized processor hot spots. How can designers power mobile electronics and better address the concerns of hot spots? Harness the heat energy for power. Arizona State University School of Computing, Informatics, and Decision Systems Engineering Assistant Professor Carole-Jean Wu is investigating the use of energy harvesting capabilities of thermoelectric modules in processors. This technique harvests waste heat and converts it to electricity, which can be used to enhance system cooling or be stored for future use.
"The heat distribution of modern computing platforms offers an interesting opportunity for waste heat energy harvesting," Wu says. "In particular, the unique heat distribution enables the use of thermoelectric materials in embedded applications."
Thermoelectric coolers (TECs) are often used for active cooling of CPU hot spots, and thermoelectric generators (TEGs) can be used in other areas of the CPU to turn remaining heat waste into useful electricity.
"Thermoelectric modules operate based on the phenomenon where a difference in temperature creates an electric voltage difference and vice versa," Wu says. "When a voltage is applied to a thermoelectric material, the splitting and combination of electron hole pairs results in a temperature difference on the material, called the Peltier effect. Conversely, if the material is subjected to a difference in temperature, a voltage difference is created, called the Seebeck effect."
The energy harvesting technique used in Wu’s research exploits the spatial temperature difference between hot and cold components in a three-step process: perform system temperature and heat distribution characterization; identify thermal points and apply thermoelectric devices to generate electricity from temperature differences; and find native applications that exist in the system to use the harvested energy (Figure 1).
Wu and her team were able to recover 0.3 W to 1 W of power with an Intel Ivy Bridge processor running at 70 °C to 105 °C with a thermoelectric device on the CPU. The recovered energy when three TEG modules were used was at least enough to power a fan, and can be a significant amount of power for mobile and wearable applications.
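As a rough sense of how recovered power scales with the temperature difference across a TEG, the sketch below applies the Seebeck relation (open-circuit voltage equals Seebeck coefficient times temperature difference) and the matched-load power-transfer formula. The module parameters are illustrative assumptions, not values from Wu's measurements.

```python
# Rough TEG output estimate; the Seebeck coefficient and internal resistance below
# are assumed illustrative values, not measured parameters from the study.

def teg_output(seebeck_v_per_k, internal_resistance_ohm, delta_t_k):
    """Return (open-circuit voltage, maximum power delivered to a matched load)."""
    v_oc = seebeck_v_per_k * delta_t_k                   # Seebeck effect: V = S * dT
    p_max = v_oc ** 2 / (4.0 * internal_resistance_ohm)  # matched-load power transfer
    return v_oc, p_max

# Hypothetical module: effective Seebeck coefficient 0.05 V/K, 2-ohm internal
# resistance, evaluated across a range of hot-spot-to-ambient differences.
for dt in (20, 40, 60):
    v, p = teg_output(0.05, 2.0, dt)
    print(f"dT = {dt:3d} K -> V_oc = {v:.2f} V, P_max = {p:.2f} W")
```

With these made-up parameters the estimate lands in the tenths-of-a-watt to roughly one-watt range quoted above, which is the point of the exercise: the recoverable power grows with the square of the temperature difference.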
Though preliminary studies show promise for thermoelectric modules, there are still challenges to overcome, Wu says. Material efficiency and additional thermal resistance introduced to embedded systems by the energy harvesting materials are two critical challenges that must be addressed for energy harvesting to become more widespread in embedded systems.
"We are currently investigating important applications using thermoelectric modules at the processor architecture granularity," Wu says. "Our preliminary results indicated that, if managed intelligently, the temperature increase caused by thermoelectric generators can be tolerated and will not increase the overall temperature of the processor. The harvested energy is then used to lower the operating temperature of processors, which will, in turn, improve the chip reliability and the total cooling cost of the chip. We have filed a provisional invention closure on this work and are working on the first prototype of the design."
Generating power for cooling from the waste heat that already exists is a very promising solution for embedded designers. Users’ battery complaints could be addressed and that bothersome excess heat from the processor could be mitigated and put to use. Designers should definitely keep an eye on where this research is going. |
African elephants are sturdy beasts. The largest of these behemoths tips the scales at over 13,000 pounds of flesh and bone, and this presents a rather hefty challenge when elephants pass away. The moment the beast falls, a diverse deconstruction crew gets to work.
There’s an entire science devoted to the afterlives of organisms. It’s called taphonomy, usually summed up as “what happens to an organism between death and discovery.” Why fossil plants are beautifully preserved in one place but not another or why one dinosaur is a complete skeleton while another is just a pile of bone fragments are the kinds of puzzles investigated by this science. The key to understanding those ancient conundrums, though, is looking to what happens to modern species, including the largest land mammals on the planet.
Paleontologists have been carefully documenting elephant afterlives for decades, but one of the more recent studies focused on a postmortem pachyderm observed by paleontologist P.A. White on a lakeshore in Zambia’s Kafue National Park. What killed the elephant isn’t known, but the body was found on October 10, 2010. From there White watched what happened to the carcass every day for two weeks, then twice weekly for the next month, and then intermittently until September 2011. At the end of each visit, White would brush away the tracks and scats of visiting animals to get a better idea of who was coming by the body when he wasn’t looking.
While the decomposition of an entire elephant might seem like a chaotic process, dictated only by nature itself, there was a defined order to the proceedings. As is their wont, big carnivores were the first to arrive. Lions, soon followed by spotted hyenas, pulled the earliest shifts, and they went for the trunk, viscera, and, apparently of special interest to the hyenas, the feet. After two weeks being exposed, White and study coauthor C.G. Diedrich later wrote, “the elephant carcass was largely desiccated and all easily accessible soft tissue have been removed by small scavengers” including side-striped jackals, civets, genets, and vultures. By then the elephant was little more than dried-out hide suspended on a framework of remaining bones.
A little rain can freshen up even the driest carcass, however. The first rain of the wet season moistened the elephant just enough to reinvigorate the interest of local spotted hyenas who gnawed on the rather spare remains over four days. By the next month, however, the rains jumbled up what was left. Between December and April, during Zambia’s wet season, the elephant’s remains were submerged by a shallow lake that filled the divot where it died. When it was visible again in May, the pachyderm’s bones were scattered through the tall grass, and that’s the way it stayed. By September of 2011, nearly a year after it died, the elephant had been changed into a mess of bones strewn across the ephemeral lakeshore.
So what does this all mean? While watching an elephant fall apart may seem like a rather tedious pastime, studies like this can help paleontologists better understand what happened in the deep past. Big carnivores like lions and hyenas are an important part of carcass breakdown, for example, cutting through the thick hide of the elephant and opening up the juicy bits smaller scavengers later pick from. It’s relevant to our past, too.
Multiple fossil sites through the Old World show the presence of humans and prehistoric carnivores at elephant carcasses. A paper published just last year, for example, places both prehistoric people and the giant hyena Pachycrocuta at the same elephant carcass in Spain, with humans moving in fast to cut off meaty limbs to consume away from the competing jaws of the bestial carnivores. The rate at which modern elephants break down only reinforces the idea that our ancestors would have had to move quickly if they wanted a meaty meal, within days of death if they wanted anything more than desiccated hide and bone. Dining at an elephant carcass is a first-come, first-served affair.
White, P., Diedrich, C. 2012. Taphonomy story of a modern African elephant Loxodonta africana carcass on a lakeshore in Zambia (Africa). Quaternary International. doi: 10.1016/j.quaint.2012.07.025 |
Since its discovery in 1947 [1], Zika virus (ZIKV) transmission has been reported in 76 countries worldwide as of February 2, 2017 [2]. In the general population, infection with ZIKV is often either asymptomatic or mildly symptomatic; development of Guillain-Barré syndrome, hospitalization, and death are unusual3. However, infection during pregnancy may lead to adverse fetal and infant outcomes, including congenital Zika syndrome (CZS)4. Characteristics of CZS include anomalies of the brain and cranial morphology, ocular anomalies, congenital contractures, and neurological sequelae4.
Epidemiology and Pathogenesis
ZIKV is a single-stranded RNA flavivirus (genus Flavivirus, family Flaviviridae) that is closely related to dengue as well as other viruses (e.g., West Nile, Japanese encephalitis, yellow fever)5. Analyses of different isolates of ZIKV indicate at least two major lineages (African and Asian), with further differentiation of the former into west and east African strains6,7. ZIKV infection is transmitted by the bite of infected Aedes mosquitoes, including Aedes aegypti and Aedes albopictus8,9. Other modes of transmission include mother-to-child transmission (MTCT), both in utero10,11 and peripartum12, sexual transmission13,14, laboratory exposure15, and transplantation or transfusion of blood or blood products16. ZIKV RNA17 and infective viral particles12,18 have been detected in breast milk, but transmission of ZIKV through breastfeeding has not been reported.
ZIKV was named for the area where it was discovered, the Zika Forest in Uganda1. The first case of natural ZIKV infection in humans was reported in 1964 [15]. Prior to 2007, only 16 cases of human infections with ZIKV had been reported15,19,20,21,22,23, at least 3 of which were laboratory-acquired. In 2007, however, an outbreak of ZIKV infection was reported in Yap State, Federated States of Micronesia24. Of 49 confirmed cases of ZIKV infection, symptoms reported by most of the 31 individuals who provided information were macular or papular rash (90%), fever (65%), arthritis or arthralgia (65%), and nonpurulent conjunctivitis (55%)24. In 2013–2014, another ZIKV outbreak occurred in French Polynesia25, where cases of Guillain-Barré syndrome also were observed26. Subsequently, in 2015, an outbreak in Brazil occurred27,28.
In a study in Brazil during the ZIKV epidemic, 119 individuals with laboratory-confirmed ZIKV infection presented with macular or papular rash (97%), pruritus (79%), prostration (73%), headache (66%), arthralgia (63%), myalgia (61%), nonpurulent conjunctivitis (56%), and low back pain (51%)29. Based on various studies of ZIKV outbreaks24,29, clinical illness due to ZIKV infection appears to be generally mild and relatively short-lived. Preliminary data regarding the duration of viremia and of detection of virus in urine and other bodily fluids have been reported30. The median (95th percentile) for the length of time until the end of ZIKV RNA detection was 14 days (54 days) in serum, 8 days (39 days) in urine, and 34 days (81 days) in semen30. Occurrences of Guillain-Barré syndrome31 and other neurologic disorders (e.g., encephalitis and myelitis), hospitalization, and death are unusual3. Although many infections with ZIKV may be asymptomatic, the exact proportion of infected individuals who are symptomatic and asymptomatic is unknown.
After the onset of the ZIKV epidemic in Brazil in 2015, an increased number of infants born with microcephaly were reported there32, and retrospective analyses of data from French Polynesia revealed an increased number of infants with abnormalities, including microcephaly33. An early study of the pathogenicity of ZIKV in animals revealed that the virus was neurotropic in immunodeficient mice34. Since ZIKV RNA and live virus have been found in the brain tissue of microcephalic infants born to women with ZIKV infection during pregnancy10,11,35,36,37, damage to the central nervous system (CNS) observed in infants with in utero exposure to ZIKV has been attributed to direct cellular damage by the virus10,11,35,36,37. Neural progenitor cells appear to be the main target of ZIKV, but other cells, including immature neurons, astrocytes, microglia, and endothelial cells, also may be infected35,38,39,40.
Clinical Manifestations in the Infant
The full spectrum of outcomes of MTCT of ZIKV appears to be broad. Infants with congenital ZIKV infection may be severely abnormal at birth32,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67, or they may appear normal at birth and neurologic abnormalities may be detected later65,68,69.
Severe abnormalities in the fetus and infant, as well as fetal demise, have been described. In a case series of 11 fetuses and infants with congenital ZIKV infection, 27% mortality was observed in the perinatal period, and neurological abnormalities were observed in all 11 cases, including lissencephaly with hydrocephalus, cerebellar hypoplasia, ventriculomegaly, a reduction in cerebral volume, and microcephaly41. A prospective cohort study enrolled 134 symptomatic, ZIKV-infected pregnant women (all exhibited a rash that had developed within the previous 5 days) and reported pregnancy outcomes42. Of the 125 ZIKV-infected women with pregnancy outcome data reported, nine pregnancies (7.2%) ended with fetal demise and 117 live infants were born to 116 women, including one set of twins. Among these infants, 42% had abnormal brain imaging results (e.g., cerebral calcifications, cerebral hypoplasia or atrophy, and ventriculomegaly), other clinical findings (e.g., small for gestational age, microcephaly, hypertonicity, hyperreflexia, clonus, spasticity, contractures, seizures, and abnormal movements), or both. Abnormal findings were reported after maternal infection in all trimesters of pregnancy: in 55% of pregnancies with first-trimester infection, 52% with second-trimester infection, and 29% with third-trimester infection.
Overall, abnormal findings were observed among 46% of children of ZIKV-infected women versus 11.5% of 57 children of ZIKV-uninfected women (P < 0.001)42. In a study of 442 women with completed pregnancies, birth defects were reported among fetuses and infants born to women with possible ZIKV infection during pregnancy43. Birth defects potentially related to ZIKV infection were identified in 26 fetuses or infants (6%). Among 395 live births and 47 pregnancy losses, birth defects were reported for 21 infants and 5 fetuses, respectively. Of the 26 fetuses and infants, 14 had microcephaly and brain abnormalities on neuroimaging, 4 had microcephaly without reported neuroimaging having been performed, 4 had brain abnormalities but no microcephaly, and the remaining ones had the following abnormalities: 2 had encephalocele, 1 had eye abnormalities, and 1 had hearing abnormalities. Brain abnormalities reported included hydrocephaly, ventriculomegaly, cerebral atrophy, abnormal cortical formation, corpus callosum abnormalities, and intracranial calcifications.
Birth defects were reported in 9 of 85 (11%) of completed pregnancies with laboratory evidence of maternal ZIKV infection in the first trimester or the periconceptional period, and in 15 of 211 (7%) of completed pregnancies with laboratory evidence of possible ZIKV infection spanning multiple trimesters, including the first trimester. No birth defects were reported among pregnancies with ZIKV infection or ZIKV exposure only in the second (76 cases) or third trimester (31 cases)43. The risk of microcephaly with maternal ZIKV infection during the first trimester was previously estimated to be as high as 13.2% [44].
In a review of clinical reports of congenital ZIKV infection from French Polynesia, Brazil, the United States, and Spain, characteristics of CZS included certain anomalies of the brain and cranial morphology, ocular anomalies, congenital contractures, and neurological sequelae4. First, severe microcephaly (more than three standard deviations below the mean) among infants with in utero exposure to ZIKV may occur with other abnormalities, constituting the fetal brain disruption sequence45,46. The fetal brain disruption sequence comprises a pattern of defects, including moderate to severe microcephaly, overlapping cranial sutures, prominence of the occipital bone, and scalp rugae45,46. This syndrome is believed to result from partial destruction of the brain during the second or third trimester with subsequent fetal skull collapse, as well as severe neurologic impairment45,46. Although fetal brain disruption syndrome is not unique to CZS, it was reported only rarely before the ZIKV epidemic4.
The neuropathology observed with congenital ZIKV infection is similar to that observed with congenital cytomegalovirus infection47, except for the distribution of intracranial calcifications (subcortical with ZIKV, periventricular with cytomegalovirus)47,48. Other brain anomalies observed with CZS include ventriculomegaly and cerebral hypoplasia or atrophy, as well as manifestations such as seizures, spasticity, hypertonicity, hyperreflexia, clonus, and abnormal movements42.
Several anomalies of the eye have been reported in infants with presumed or confirmed in utero exposure to ZIKV, including microphthalmia, intraocular calcifications, cataracts, chorioretinal atrophy, optic nerve atrophy or other anomalies, and posterior ocular findings32,49,50,51,52,53,54,55,56,57,58,59. Fetal central nervous system abnormalities can lead to decreased fetal movements and contractures60,61, and congenital contractures of one or multiple joints have been reported in fetuses and infants with presumed or confirmed congenital ZIKV infection32,54,55,62,63. Finally, neurological sequelae of congenital ZIKV infection include tremors and posturing (consistent with extrapyramidal dysfunction)32,64,65, hypertonia, spasticity, irritability, hypotonia, and dysphagia32,65. Sensorineural hearing loss also has been reported66,67.
MTCT of ZIKV has been reported among infants, born to mothers with ZIKV infection during pregnancy, who appeared normal at birth but in whom abnormalities were detected later. In various reports65,68,69, children with presumed or confirmed congenital ZIKV infection, but without diagnosed microcephaly at birth and with postnatal development of microcephaly, have been described.
Laboratory confirmation of suspected ZIKV infection involves the detection of viral RNA during acute infection with a nucleic acid amplification test (NAAT) or a serological assay to detect anti-ZIKV immunoglobulin M (IgM) within weeks to months after the acute infection70. Anti-ZIKV IgG assays are in development but are not yet widely available. NAATs such as reverse transcription polymerase chain reaction (RT-PCR) assays can confirm ZIKV infection only transiently after acute infection (e.g., when the patient is viremic)70.
Only preliminary data are available regarding the duration of viremia (or the presence of the virus in other bodily fluids), and recommendations as of April 6, 2017, specify NAAT testing within 2 weeks of exposure or symptom onset71,72,73. Serological assays are limited by cross-reactivity; ZIKV antibodies cross-react with other flaviviruses (e.g., dengue, yellow fever, and West Nile viruses), such that current or previous infection with, or vaccination against, another flavivirus often results in false positive or indeterminate ZIKV serology test results70. Although the exact duration of detection of IgM is unknown, ZIKV IgM testing is recommended within 2–12 weeks of exposure or symptom onset71,72,73. (See the “Treatment” section of this chapter for information regarding RT-PCR testing of tissue specimens.)
Diagnostic testing for ZIKV infection in pregnant women with possible ZIKV exposure should be initiated with RT-PCR, IgM assays, or both71. RT-PCR testing of serum or urine should be performed as the first step for two groups of pregnant women: (a) symptomatic women who have had symptoms for less than 2 weeks; and (b) asymptomatic women, not living in an area with active ZIKV transmission, who had possible exposure less than 2 weeks previously71. In these women, a positive RT-PCR assay is interpreted as representing recent ZIKV infection71.
It is recommended that women with a negative ZIKV RT-PCR assay have additional testing (ZIKV, dengue virus IgM assays, or both), and depending upon the results of these assays and whether the individual resides in an area endemic for dengue, neutralizing antibody testing [plaque reduction neutralization test (PRNT)] may be required71. PRNT testing may not discriminate between ZIKV infection and infection with another flavivirus (e.g., in individuals with previous flavivirus exposure, as would be likely in the Commonwealth of Puerto Rico, a U.S. territory, and other dengue-endemic areas)72.
ZIKV and dengue virus IgM assays should be performed as the first step for symptomatic women at 2–12 weeks after symptom onset71. In these women, negative ZIKV and dengue virus IgM assays are interpreted as representing no recent ZIKV infection71. Women with other ZIKV and dengue virus IgM results should undergo further testing (RT-PCR, PRNT, or both)71. Pregnant women are considered to have confirmed recent ZIKV infection based on the following laboratory evidence: (a) ZIKV, ZIKV RNA, or antigens detected in any body fluid or tissue specimen; or (b) ZIKV or dengue virus IgM (positive or equivocal results) in serum or cerebrospinal fluid (CSF) specimens, with a positive PRNT titer against ZIKV and a negative PRNT titer against the dengue virus71.
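As a rough schematic of the testing sequence just described for pregnant women, the sketch below encodes only the first-line choices (RT-PCR within 2 weeks for the two groups noted above, and IgM assays at 2–12 weeks for symptomatic women). The function name and result labels are invented for illustration, equivocal results and dengue-endemic-area details are omitted, and the cited interim guidance remains the source of record.

```python
def zika_first_line_tests(symptomatic, weeks_since_onset_or_exposure, lives_in_active_area):
    """Simplified sketch of the first-line testing choices described above.

    Returns a list of suggested initial tests; interpretation and follow-up
    (e.g., PRNT) depend on results and local dengue endemicity, not modeled here.
    """
    if weeks_since_onset_or_exposure < 2 and (symptomatic or not lives_in_active_area):
        # RT-PCR of serum or urine first; a positive result is read as recent infection,
        # a negative result leads on to IgM testing.
        return ["RT-PCR (serum or urine)", "ZIKV and dengue IgM if RT-PCR is negative"]
    if symptomatic and 2 <= weeks_since_onset_or_exposure <= 12:
        # IgM assays first; anything other than clearly negative results triggers
        # further testing by RT-PCR, PRNT, or both.
        return ["ZIKV IgM and dengue IgM", "RT-PCR or PRNT if IgM results are not negative"]
    return ["consult current interim guidance"]

print(zika_first_line_tests(symptomatic=True, weeks_since_onset_or_exposure=1, lives_in_active_area=True))
```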
Laboratory testing to diagnose congenital ZIKV infection is recommended for (a) infants with findings suggestive of congenital ZIKV infection (regardless of maternal ZIKV laboratory testing results); and (b) infants born to ZIKV-positive mothers (i.e., women with ZIKV RNA detected by RT-PCR in any maternal specimen, ZIKV IgM-positive serum results, or both, with PRNT confirmation of neutralizing antibody titers against ZIKV or a flavivirus not otherwise specified)73. ZIKV RT-PCR testing is performed on infant urine and serum specimens, and ZIKV IgM testing is performed on infant serum73. If CSF is obtained for other reasons, it should undergo ZIKV RT-PCR and ZIKV IgM testing73. Ideally, infant testing for congenital ZIKV infection should be performed within the first 2 days of life to help distinguish prenatal from postnatal infection73. The diagnosis of congenital ZIKV infection can be confirmed by detection of viral RNA by RT-PCR assay73. Congenital ZIKV infection is considered probable in infants with positive ZIKV IgM assays (regardless of PRNT results)73.
If the initial infant IgM assay is ZIKV-positive, testing of an infant specimen by PRNT may be necessary (although results will reflect transplacentally transferred maternal antibodies, generally detectable during the first 18 months of life)73. Recent data suggest that PCR and IgM testing may be negative in infants with clinical features consistent with congenital ZIKV infection74. Theoretically, such a situation may result from incomplete testing (e.g., testing performed on suboptimal specimens), late testing (e.g., infection occurred early in gestation, and testing was performed after ZIKV RNA and ZIKV IgM had waned), or failure of the fetus to mount an IgM antibody response. Visit the CDC website for any changes that may have occurred in diagnostic testing recommendations since this book was published: https://www.cdc.gov/zika/index.html.
Currently, there are no antiviral medications to treat ZIKV infection. Symptomatic ZIKV infection can be treated with interventions to ameliorate symptoms (i.e., rest, hydration, and acetaminophen to reduce fever and pain)76. (To reduce the risk of bleeding, aspirin and other nonsteroidal anti-inflammatory drugs are not recommended until dengue can be ruled out.)76
The recommended clinical management of pregnant women with confirmed or possible ZIKV infection includes serial fetal ultrasounds every 3–4 weeks during pregnancy to monitor growth and to evaluate fetal neuroanatomy71. Several of the findings associated with congenital ZIKV infection (e.g., intracranial calcifications, ventriculomegaly, microcephaly, abnormalities of the cerebrum, cerebellum, corpus callosum, and eyes, and arthrogryposis) may be detected through fetal ultrasound71. (The time period between maternal infection and the development and detection of abnormalities on fetal ultrasound is unknown.)
Amniocentesis could be considered, although only limited information is available regarding the utility of this procedure in terms of diagnosing congenital ZIKV infection; although detection of ZIKV RNA in amniotic fluid can be interpreted as suggestive of fetal infection, its absence does not eliminate the possibility of fetal infection, as the sensitivity of this test has not been established71. Finally, persistent detection of ZIKV RNA in maternal serum during pregnancy has been reported75, but the clinical implications of viral RNA persistence are unknown. Following delivery, pathology evaluation of fetal tissue (for fetal loss or stillbirth) and of the placenta and umbilical cord (for live-born infant) may be useful to establish or confirm maternal ZIKV infection in certain situations71. The correlation of placental and umbilical cord findings with infant outcomes is unknown. Pathologic evaluation may include RT-PCR assay, immunohistochemical staining of the placenta or fixed tissue, or both37,71.
The initial evaluation and recommended outpatient management for infants with possible congenital ZIKV infection is based on maternal and infant laboratory testing and clinical findings in the infant73. For infants whose mothers had laboratory evidence of ZIKV infection but whose initial clinical examination did not reveal any abnormalities, the following is recommended prior to the infant’s hospital discharge: physical examination, including weight, length, and head circumference, and neurologic examination; hearing screen; head ultrasound; and infant ZIKV testing73. If the infant ZIKV testing is negative, follow-up throughout infancy comprises routine care, including repeated measurements of head circumference and assessments of neurodevelopment73. If there is laboratory evidence of ZIKV infection, additional recommended evaluations include an ophthalmology examination and auditory brainstem response testing73. Further testing and evaluations during infancy depend upon initial evaluation results73.
For infants whose mothers had laboratory evidence of ZIKV infection and the initial infant clinical examination revealed abnormalities consistent with congenital ZIKV infection, the following are recommended prior to hospital discharge: laboratory testing (complete blood count, metabolic panel, liver function tests); ophthalmology examination; and auditory brainstem response testing73. In addition, advanced neuroimaging should be considered prior to hospital discharge73. If the infant ZIKV testing is negative, evaluation for other causes of birth defects should be considered, with further management as clinically indicated73. If there is laboratory evidence of ZIKV infection, additional recommended evaluations include thyroid screens, neurologic examinations, an ophthalmology examination, and auditory brainstem response testing73. Routine preventive healthcare, including monitoring of feeding and growth, should be provided, along with routine and congenital infection–specific anticipatory guidance, and referral to specialists (including evaluation for other causes of birth defects as needed)73.
For infants whose mothers were not tested, or who were tested outside of the appropriate window (see earlier in this chapter for the durations of the testing windows for ZIKV NAAT and IgM assays), but whose initial clinical examination did not reveal any abnormalities, the following are recommended prior to hospital discharge: maternal ZIKV testing; routine care (physical examination, weight, length, head circumference, and neurological examination); hearing screening; and head ultrasound73. Consideration should be given to testing the placenta for ZIKV73. If there is laboratory evidence of ZIKV infection in the mother, infant ZIKV testing should be performed73. Subsequent infant management should be based upon the infant’s clinical examination and test results73.
For infants whose mothers were not tested, or who were tested outside of the appropriate window, and the initial infant clinical examination revealed abnormalities consistent with congenital ZIKV infection, the following are recommended prior to hospital discharge: maternal ZIKV testing; routine care (physical examination, weight, length, head circumference, and neurological examination); head ultrasound; laboratory testing (complete blood count, metabolic panel, liver function tests); ophthalmology examination; auditory brainstem response testing; and infant ZIKV testing73. In addition, advanced neuroimaging should be considered prior to hospital discharge73. If the infant ZIKV testing is negative, evaluation for other causes of birth defects should be pursued, and further management would be as clinically indicated73.
If laboratory evidence of ZIKV infection of the infant is obtained, additional recommended evaluations include thyroid screens, neurologic examinations, an ophthalmology examination, and additional hearing evaluation73. Routine preventive healthcare, including monitoring of feeding and growth, should be provided, along with routine and congenital infection–specific anticipatory guidance, and referral to specialists (including evaluation for other causes of birth defects as needed)73.
Although efforts to develop ZIKV vaccines are underway, there is not yet a licensed ZIKV vaccine76. Prevention of ZIKV infection relies primarily upon the prevention of vector-borne transmission through mosquito bites and prevention of sexual transmission76. Methods to protect against mosquito bites include wearing long-sleeved shirts and long pants; staying and sleeping in places with air conditioning and window/door screens to keep mosquitoes outside; sleeping under a mosquito bed net if air conditioning and window/door screens are not available or if sleeping outdoors; taking steps to control mosquitoes inside and outside the home (e.g., cleaning, covering, or discarding items that hold water); and using insect repellent77. Use of a U.S. Environmental Protection Agency (EPA)–registered insect repellent with one of the following active ingredients is recommended: DEET, picaridin, IR3535, oil of lemon eucalyptus, or para-menthane-diol77.
To prevent mosquito transmission of ZIKV to other people, it is especially important that individuals who test positive for ZIKV infection should protect themselves from mosquito bites for at least 3 weeks after symptom onset77. Abstinence and the use of barrier methods (e.g., condoms) are recommended for prevention of the sexual transmission of ZIKV78. Pregnant women should not travel to areas with ZIKV transmission risk, but if travel to such areas cannot be avoided, they should discuss such travel with their doctor first79. Methods of preventing mosquito bites while traveling and sexual transmission (both during and after travel) should be implemented79. Aside from primary prevention (i.e., prevention of acquisition of ZIKV infection by a pregnant woman), it is not known how to prevent MTCT of ZIKV infection. To date, there is no evidence that previous infection with ZIKV will affect the outcome of future pregnancies.
Many questions remain regarding the epidemiology, clinical manifestations, diagnosis, treatment, and prevention of ZIKV infection. Only preliminary data are available regarding the duration of viremia (or the presence of ZIKV in other bodily fluids)30. This, in turn, relates to the duration of infectiousness through sexual or vector-borne transmission. Determining the overall rate of and risk factors for MTCT of ZIKV, as well as for severe manifestations of congenital ZIKV infection, are important. For example, one study of microcephalic infants born to ZIKV-infected mothers in Brazil suggested that the earlier maternal symptomatology (specifically, rash) occurred during pregnancy, the smaller the infant’s mean head circumference at birth69.
Additional research to understand the epidemiology of MTCT of ZIKV more completely is urgently needed. An important area for future research is to determine the full range of clinical presentations of ZIKV infection and the proportion of infected individuals who are symptomatic and asymptomatic. Increased availability and specificity of ZIKV IgG assays, especially for screening women of reproductive age for previous exposure to ZIKV (prior to pregnancy) and neonates for in utero exposure to ZIKV, is essential. Development of such IgG assays has been hampered by the problem of cross-reactivity noted for IgM assays: ZIKV antibodies cross-react with other flaviviruses such that infection with, or vaccination against, another flavivirus often results in false positive or indeterminate ZIKV serology test results. Treatment for ZIKV infection, as well as interventions such as vaccines to prevent infection, especially for women of reproductive age and pregnant women, are needed.
Although generally asymptomatic or mildly symptomatic, ZIKV infection during pregnancy may lead to severe adverse fetal and infant outcomes, including the CZS. The full spectrum of outcomes of congenital ZIKV infection seems to be broad, with the clinical manifestations of congenital ZIKV infection appearing to range from asymptomatic infection at birth, with possible later manifestation of significant abnormalities, to severe abnormalities in the fetus and infant.
Although our understanding of pathogenesis, rates, and manifestations of congenital ZIKV infection has improved rapidly and dramatically, much remains unknown or poorly understood regarding this potentially devastating congenital infection. Because of this, a broad research agenda addressing the pathogenesis, epidemiology, clinical manifestations, diagnosis, treatment, and prevention of ZIKV infection is being implemented.
The findings and conclusions in this report are those of the author and do not necessarily represent the official position of the Centers for Disease Control and Prevention (CDC).
1. Dick GW, Kitchen SF, Haddow AJ. Zika virus. I. Isolations and serological specificity. Trans R Soc Trop Med Hyg 1952;46:509–520.
2. World Health Organization. Zika Situation Report. February 2, 2017. Accessed May 8, 2017, from http://www.who.int/emergencies/zika-virus/situation-report/en/.
3. Puerto Rico Department of Health. Weekly Arboviral Infection Report. March 8, 2017. Accessed May 8, 2017, from http://www.salud.gov.pr/Estadisticas-Registros-y-Publicaciones/Informes%20Arbovirales/Reporte%20ArboV%20semana%208%202017.pdf
4. Moore CA, Staples JE, Dobyns WB, Pessoa A, Ventura CV, Borges da Fonseca et al. Characterizing the pattern of anomalies in congenital Zika syndrome for pediatric clinicians. JAMA Pediatrics 2016 (November 3); DOI:10.1001/jamapediatrics.2016.3982.
5. Simmonds P, Becher P, Collet MS, Gould EA, Heinz FX, Meyers G, et al. Flaviviridae. In AMG King, MJ Adams, EB Cartens, EJ Lefkowitz (eds.), Ninth Report of the International Committee on Taxonomy of Viruses. San Diego: Academic Press Elsevier, 2011, 1008–1020.
6. Lanciotti RS, Kosoy OL, Laven JJ, Velez JO, Lambert AJ, Johnson AL, et al. Genetic and serologic properties of Zika virus associated with an epidemic, Yap State, Micronesia, 2007. Emerg Infect Dis 2008;14:1232–1239.
7. Haddow AD, Schuh AJ, Yasuda CY, Kasper MR, Hearing V, Huy R, et al. Genetic characterization of Zika virus strains: geographic expansion of the Asian lineage. PLoS Negl Trop Dis 2012;6:e1477.
8. Marchette NJ, Garcia R, Rudnick A. Isolation of Zika virus from Aedes aegypti mosquitoes in Malaysia. Am J Trop Med Hyg 1979;18:411–415.
9. Grard G, Caron M, Mombo IM, Nkoufge D, Mboui Ondo S, Jiolle D, et al. Zika virus in Gabon (Central Africa)—2007: a new threat from Aedes albopictus? PLoS Negl Trop Dis 2014;9:e2681.
10. Martines RB, Bhatnagar J, Keating MK, Silva-Flannery L, Muehlenbachs A, Gary J, et al. Evidence of Zika virus infection in brain and placental tissues from two congenitally infected newborns and two fetal losses—Brazil, 2015. MMWR 2016;65:159–160.
11. Martines RB, Bhatnagar J, de Oliveira Ramos AM, Pompeia Freire Davi H, D’Andretta Igleizias S, Kanamura CT, et al. Pathology of congenital Zika syndrome in Brazil: a case series. Lancet 2016;388:898–964.
12. Besnard M, Lastere S, Teissier A, Cao-Lormeau VM, Musso D. Evidence of perinatal transmission of Zika virus, French Polynesia, December 2013 and February 2014. Euro Surveill 2014;19(12):pil-20751.
13. Foy BD, Kobylinski KC, Foy JLC, Blitvich BJ, Travassos de Rosa A, Haddow AD, et al. Probable non-vector-borne transmission of Zika virus, Colorado, USA. Emerg Infect Dis 2011;17:880–882.
14. Hills SL, Russell K, Hennessey M, Williams C, Oster AM, Fischer M, Mead P. Transmission of Zika virus through sexual contact with travelers to areas of ongoing transmission—continental United States, 2016. MMWR 2016;65(8):215–216.
15. Simpson DI. Zika virus infection in man. Trans R Soc Trop Med Hyg 1964;58:335–338.
16. Motta IJF, Spencer BR, Cordeiro da Silva SG, Arruda MB, Dobbin JA, Gonzaga YBM, et al. [letter] Evidence for transmission of Zika virus by platelet transfusion. N Engl J Med 2016;375 (11):1101–1103.
17. Dupont-Rouzeyrol M, Biron A, O’Connor O, Huguon E, Descloux E. Infectious Zika viral particles in breastmilk. Lancet 2016;387:1051.
18. Sotelo JR, Sotelo AB, Sotelo FJ, Doi AM, Pinho JRR, de Cassia Oliveira R, et al. Persistence of Zika virus in breast milk after infection in late stage of pregnancy. Emerg Infect Dis 2017;23(5). doi: 10.3201/eid2305.161538 [Epub ahead of print].
19. Filipe AR, Martins CMV, Rocha H. Laboratory infection with Zika virus after vaccination against yellow fever. Arch fur die gesamte Virusforschung 1973;43:315–319.
20. Berge T. (Ed.) (1975). International Catalog of Arboviruses. 2nd ed. Washington, DC: National Institute of Allergy and Infectious Diseases and the Centers for Disease Control and Prevention.
21. Moore DL, Causey OR, Carey DE, Reddy S, Cooke AR, Akinkugbe FM, et al. Arthropod-borne viral infections of man in Nigeria, 1964–1970. Ann Trop Med Parasitol 1975;69:49–64.
22. Fagbami AH. Zika virus infections in Nigeria: virological and seroepidemiological investigations in Oyo State. J Hyg (Lond) 1979;83:213–219.
23. Olson JG, Ksiazek TG, Suhandiman, Tribibowo. Zika virus, a cause of fever in Central Java, Indonesia. Trans R Soc Trop Med Hyg 1981;75:189–191.
24. Duffy MR, Chen TH, Hancock WT, Powers AM, Kool JL, Lanciotti RS, et al. Zika virus outbreak on Yap Island, Federated States of Micronesia. N Engl J Med 2009;360:2536–2543.
25. Cao-Lormeau VM, Roche C, Teisier A, Robin E, Berry A-L, Mallet H-P, et al. [letter] Zika virus, French Polynesia, South Pacific, 2013. Emerg Infect Dis 2014;20:1085–1086.
26. Oehler E, Watrin L, Larre P, Leparc-Goffart I, Lastère S, Valour F, et al. Zika virus infection complicated by Guillain-Barré syndrome—case report, French Polynesia, December 2013. Euro Surveill 2014;19(9).
27. Campos GS, Bandeira AC, Sardi SI. Zika virus outbreak, Bahia, Brazil. Emerg Infect Dis 2015;21:1885–1886.
28. Zanluca C, Melo VC, Mosimann AL, Satos GI, Santos CN, Luz K. First report of autochthonous transmission of Zika virus in Brazil. Mem Inst Oswaldo Cruz 2015;110:569–572.
29. Brasil P, Calvet GA, Siqueira AM, Wakimoto M, Carvalho de Sequeira P, Nobre A, et al. Zika virus outbreak in Rio de Janeiro, Brazil: clinical characterization, epidemiological and virological aspects. PLoS Negl Trop Dis 2016;10 (4):e0004636.
30. Paz-Bailey G, Rosenberg ES, Doyle K, Munoz-Jordan J, Santiago GA, Klein L, et al. Persistence of Zika virus in body fluids—preliminary report. N Engl J Med 2017;DOI:10.1056/NEJMoa1613108.
31. Dirlikov E, Major CG, Mayshack M, Medina N, Matos D, Ryff KR, et al. Guillain-Barré syndrome during ongoing Zika virus transmission—Puerto Rico, January 1–July 31, 2016. MMWR 2016;65 (34):910–914.
32. Schuler-Faccini L, Ribeiro EM, Feitosa IML, Horovitz DDG, Cavalcanti DP, Pessoa A, et al. Possible association between Zika virus infection and microcephaly—Brazil, 2015. MMWR 2016;65 (3):59–62.
33. Cauchemez S, Besnard M, Bompard P, Dub T, Guillemette-Artur P, Eyrolle-Guignot D, et al. Association between Zika virus and microcephaly in French Polynesia, 2013–2015: a retrospective study. Lancet 2016;387:2125–2132.
34. Dick GW. Zika virus. II. Pathogenicity and physical properties. Trans R Soc Trop Med Hyg 1952;46:521–534.
35. Driggers RW, Ho C-Y, Korhonen EM, Kuivanen S, Jaaskelainen AJ, Smura T, et al. Zika virus infection with prolonged maternal viremia and fetal brain abnormalities. N Engl J Med 2016;374:2142–2151.
36. Mlakar J, Korva M, Tul N, Popovic M, Poljsakprijatelj M, Mraz J, et al. Zika virus associated with microcephaly. N Engl J Med 2016;374:951–958.
37. Bhatnagar J, Rabeneck DB, Martines RB, Reagan-Steiner S, Ermias Y, Estetter LBC, et al. Zika virus RNA replication and persistence in brain and placental tissue. Emerg Infect Dis 2017;23(3):405–414.
38. Garcez PP, Loiola EC, da costa RM, Higa LM, Trindade P, Delvecchio R, et al. Zika virus impairs growth in human neurospheres and brain organoids. Science 2016;352:816–818.
39. Tang H, Hammack C, Ogden SC, Wen Z, Qian X, Li Y, et al. Zika virus infects human cortical neural progenitors and attenuates their growth. Cell Stem Cell 2016;18(5):587–590.
40. Qian X, Nguyen HN, Song MM, Hadiono C, Ogden SC, Hammack C, et al. Brain-region-specific organoids using mini-bioreactors for modeling ZIKV exposure. Cell 2016;165(5):1238–1254.
41. de Oliveira Melo AS, Aguiar RS, Amorim MMR, Arruda MB, de Oliveira Melo F, Tais Clemento Ribeiro S, Gean Medeiros Batista A, et al. Congenital Zika virus infection beyond neonatal microcephaly. JAMA Neurol 2016;73(12):1407–1416.
42. Brasil P, Pereira JP Jr, Moreira ME, Ribeiro Nogueira RM, Damasceno L, Wakimoto M, et al. Zika virus infection in pregnant women in Rio de Janeiro. N Engl J Med 2016;375:2321–2334.
43. Honein MA, Dawson AL, Petersen EE, Jones AM, Lee EH, Yazdy MM, et al. Birth defects among fetuses and infants of US women with evidence of possible Zika virus infection during pregnancy. JAMA 2016: doi:10.1001/jama.2016.19006.
44. Johansson MA, Mier-y-Teran-Romero L, Reefhuis J, Gilboa SM, Hills SL. Zika and the risk of microcephaly. N Engl J Med 2016;375:1–4.
45. Russell LJ, Weaver DD, Bull MJ, Weinbaum M. In utero brain destruction resulting in collapse of the fetal skull, microcephaly, scalp rugae, and neurologic impairment: the fetal brain disruption sequence. Am J Med Genetics 1984;17:509–521.
46. Moore CA, Weaver DD, Bull MJ. Fetal brain disruption sequence. J Pediatr 1990;116 (3):383–386.
47. Parmar H, Ibrahim M. Pediatric intracranial infections. Neuroimaging Clin N Am 2012;23(4):707–725.
48. Averill LW, Kandula VV, Akyol Y, Epelman M. Fetal brain magnetic resonance imaging findings in congenital cytomegalovirus infection with postnatal imaging correlation. Semin Ultrasound CT MR. 2015;36(6):476–486.
49. Ventura CV, Maia M, Bravo-Filho V, Gois AL, Belfort R Jr. [letter] Zika virus in Brazil and macular atrophy in a child with microcephaly. Lancet 2016;387 (10015):228.
50. Calvet G, Aguiar RS, Melo AS, Sampaio SA, de Filippis I, Fabri A, et al. Detection and sequencing of Zika virus from amniotic fluid of fetuses with microcephaly in Brazil: a case study. Lancet Infect Dis 2016:16 (6):653–660.
51. de Paula Freitas B, de Oliveira Dias JR, Prazeres J, Sacramento GA, Icksang Ko A, Maia M, et al. Ocular findings in infants with microcephaly associated with presumed Zika virus congenital infection in Salvador, Brazil. JAMA Ophthalmol 2016;134(5):529–535.
52. Microcephaly Epidemic Research Group. Microcephaly in infants, Pernambuco State, Brazil, 2015. Emerg Infect Dis 2016;22 (6):1090–1093.
53. Oliveira Melo AS, Malinger G, Ximenes R, Szejnfeld PO, Sampaig SA, Bispo de Filippis AM. Zika virus intrauterine infection causes fetal brain abnormality and microcephaly: tip of the iceberg? Ultrasound Obstet Gynecol 2016;47(1):6–7.
54. Sarno M, Aquino M, Pimentel K, Cabral R, Costa G, Bastos F, Brites C, et al. Progressive lesions of central nervous system in microcephalic fetuses with suspected congenital Zika virus syndrome. Ultrasound Obstet Gynecol 2016. doi:10.1002/uog.17303.
55. van der Linden V, Filho EL, Lins OG, van der Linden A, de Fátima Viana Vasco Aragão M, Mertens Brainer-Lima A, et al. Congenital Zika syndrome with arthrogryposis: retrospective case series study. BMJ 2016;354:i3899.
56. Ventura CV, Maia M, Travassos SB, Martins TT, Patriota F, Nunes ME, et al. Risk factors associated with the ophthalmoscopic findings identified in infants with presumed Zika virus congenital infection. JAMA Ophthalmol 2016;134(8):912–918.
57. Ventura CV, Maia M, Ventura BVV, van der Linden V, Araújo EB, Ramos RC, et al. Ophthalmological findings in infants with microcephaly and presumable intra-uterus Zika virus infection. Arq Bras Oftalmol 2016;79 (1):1–3.
58. de Miranda HA II, Costa MC, Monteiro Frazão MA, Simão N, Franchischini S, Moshfeghi DM. Expanded spectrum of congenital ocular findings in microcephaly with presumed Zika infection. Ophthalmol 2016;123(8):1788–1794.
59. Valentine G, Marquez L, Pammi M. Zika virus–associated microcephaly and eye lesions in the newborn. J Pediatric Infect Dis Soc 2016;5(3):323–328.
60. Kowalczyk B, Felus J. Arthrogryposis: an update on clinical aspects, etiology, and treatment strategies. Arch Med Sci 2016;12(1):10–24.
61. Bamshad M, Van Heest AE, Pleasure D. Arthrogryposis: a review and update. J Bone Joint Surg Am 2009;91 (suppl 4):40–46.
62. Perez S, Tato R, Cabrera JJ, Lopez A, Robles O, Paz E, et al. Confirmed case of Zika virus congenital infection, Spain, March 2016. Euro Surveill 2016;21 (24).
63. Besnard M, Eyrolle-Guignot D, Guillemette-Artur P, Lastère S, Bost-Bezeaud F, Marcells L, et al. Congenital cerebral malformations and dysfunction in fetuses and newborns following the 2013 to 2014 Zika virus epidemic in French Polynesia. Euro Surveill 2016;21(13).
64. Culjat M, Darling SE, Nerurkar VR, Ching N, Kumar M, Min SK, et al. Clinical and imaging findings in an infant with Zika embryopathy. Clin Infect Dis 2016;63 (6):805–811.
65. Moura da Silva AA, Ganz JS, Sousa PS, Rodvalho Doriqui MJ, Rodriegues Costa Ribeiro M, dos Remedios Freitas Carvalho Branco, M, et al. Early growth and neurologic outcomes of infants with probable congenital Zika virus syndrome. Emerg Infect Dis. doi:10.3201/eid2211.160956.
66. Leal MC, Muniz LF, Caldas Neto SD, van der Linden V, Ramos RCF. Sensorineural hearing loss in a case of congenital Zika virus. Braz J Otorhinolaryngol. doi:10.1016/j.bjorl.2016.06.001.
67. Leal MC, Muniz LF, Ferreira TSA, Santos CM, Almeida LC, van der Linden V, et al. Hearing loss in infants with microcephaly and evidence of congenital Zika virus infection: Brazil, November 2015–May 2016. MMWR Morb Mortal Wkly Rep 2016;65 (34):917–919.
68. van der Linden V, Pessoa A, Dobyns W, Barkovitch AJ, van der Linden H Jr, Rolim Filho EL, et al. Description of 13 infants born during October 2015–January 2016 with congenital Zika virus infection without microcephaly at birth—Brazil. MMWR 2016;65(47):1343–1348.
69. França GVA, Schuler-Faccini L, Oliveira WK, Henriques CMP, Carmo EH, Pedi VD, et al. Congenital Zika virus syndrome in Brazil: a case series of the first 1501 livebirths with complete investigation. Lancet 2016;388 (10047):891–897.
70. Landry ML, St. George K. Laboratory diagnosis of Zika virus infection. Arch Pathol Lab Med 2017;141:60–67.
71. Oduyebo T, Igbinosa I, Petersen EE, Polen KND, Pillai SK, Ailes EC, et al. Update: Interim guidance for health care providers caring for pregnant women with possible Zika virus exposure—United States, July 2016. MMWR 2016;65(29):739–744.
72. Rabe IB, Staples JE, Villaneuva J, Hummel KB, Johnson JA, Rose L, et al. Interim guidance for interpretation of Zika virus antibody test results. MMWR 2016;65:543–546.
73. Russell K, Oliver SE, Lewis L, Barfield WD, Cragan J, Meaney-Delman D, et al. Update: Interim guidance for the evaluation and management of infants with possible congenital Zika virus infection—United States, August 2016. MMWR 2016;65 (33):870–878.
74. de Araujo TVB, Rodrigues LC, de Alencar Ximenes RA, de Barros Miranda-Filho D, Ramos Montarroyos U, Lopes de Melo AP, et al. Association between Zika virus infection and microcephaly in Brazil, January to May, 2016: preliminary report of a case-control study. Lancet Infect Dis 2016;16:1356–1363.
75. Meaney-Delman D, Oduyebo T, Polen KND, White JL, Bingham AM, Slavinski SA, et al. Prolonged detection of Zika virus RNA in pregnant women. Obstet Gynecol 2016;128:724–730.
76. Centers for Disease Control and Prevention (CDC). Zika Virus—Prevention. Accessed on May 8, 2017, from https://www.cdc.gov/zika/prevention/index.html.
77. Centers for Disease Control and Prevention (CDC). Zika Virus—Prevention—Prevent Mosquito Bites. Accessed on May 8, 2017, from https://www.cdc.gov/zika/prevention/prevent-mosquito-bites.html.
78. Centers for Disease Control and Prevention (CDC). Zika Virus—Prevention—Protect Yourself During Sex. Accessed on May 8, 2017, from https://www.cdc.gov/zika/prevention/protect-yourself-during-sex.html.
79. Centers for Disease Control and Prevention (CDC). Zika Virus—Plan for Travel. Accessed on May 8, 2017, from https://www.cdc.gov/zika/plan-for-travel.html. |
Thinking as You Play focuses on how to teach, not what to teach. Sylvia Coats gives piano teachers tools to help students develop creativity and critical thinking, and guidelines for organizing the music taught into a comprehensive curriculum. She suggests effective strategies for questioning and listening to students to help them think independently and improve their practice and performance. She also discusses practical means to develop an awareness of learning modalities and personality types. A unique top-down approach assists with presentations of musical concepts and principles, rather than a bottom-up approach of identifying facts before the reasons are known.
Thinking as You Play is one of the few available resources for the teacher of group piano lessons. Ranging from children’s small groups to larger university piano classes, Coats discusses auditioning and grouping students, strategies for maximizing student productivity, and suggestions for involving each student in the learning process. |
General Polynomials
Terminology and Notation · Factoring Large Polynomials · Fundamental Theorem of Algebra · Rational Zeros Theorem · Example · Irreducible Expressions · Numerical Methods · Summary · Recommended Books
Terminology and Notation
First, we present some notation and definitions. A general polynomial has the form
f(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = Σ_{k=0}^{n} a_k x^k
This function is really a mathematical expression rather than an equation, since the f(x) to the left of the equals sign is just a label or abbreviation for the long expression to the right of the first equals sign. The large symbol to the right of the second equals sign is called the sigma notation, and reads, "sum the product of the kth a and the kth power of x from k=0 up to k=n". This notation comes in handy when we are adding up a large number of terms that look alike.
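To make the sigma form concrete, here is a short Python sketch (the coefficient list and sample points are invented for the illustration) that evaluates f(x) = Σ a_k x^k term by term and, for comparison, with Horner's rule, which reaches the same value without computing explicit powers.

```python
def poly_eval(coeffs, x):
    """Evaluate f(x) = sum(a_k * x**k) where coeffs = [a_0, a_1, ..., a_n]."""
    return sum(a_k * x**k for k, a_k in enumerate(coeffs))

def poly_eval_horner(coeffs, x):
    """Same polynomial evaluated by Horner's rule: fewer multiplications, no powers."""
    result = 0
    for a_k in reversed(coeffs):      # start from the leading coefficient a_n
        result = result * x + a_k
    return result

# Hypothetical example: f(x) = 2 - 3x + x^3, stored as coeffs = [a_0, a_1, a_2, a_3]
coeffs = [2, -3, 0, 1]
for x in (0, 1, 2):
    assert poly_eval(coeffs, x) == poly_eval_horner(coeffs, x)
    print(x, poly_eval(coeffs, x))    # f(0)=2, f(1)=0, f(2)=4
```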
We are really interested in the xs which satisfy the equation
f(x) = 0
These xs are called zeros of f(x) or roots of the equation f(x) = 0. The distinction between these terms is small (albeit precise) and the terms are often used interchangeably. Suppose we find the n numbers
x_1, x_2, ..., x_n, or {x_1, x_2, ..., x_n} = {x in C : f(x) = 0}
(read this last expression as "the set of all complex x which make f(x) = 0"; the first two expressions are two different ways of listing the individual xs) that are all the possible roots of the equation. Then, we can express the polynomial in a much simpler form:
f(x) = a_n (x - x_1)(x - x_2) ... (x - x_n) = a_n Π_{k=1}^{n} (x - x_k)
The pi notation is similar to the sigma notation described above, except that it describes a product of like terms. There are several advantages of knowing all the roots of an equation. First, we know exactly where the function becomes zero. Second, we can examine the factors (x - x_k) and find repeated roots, complex roots, irrational roots, etc. In short, the inner workings of the function are more exposed with this notation.
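The product form can also be turned around: given the roots, the coefficients can be rebuilt by multiplying out the factors one at a time. The sketch below uses made-up roots and a plain list-of-coefficients representation; it is only an illustration of the pi-notation idea, not a library routine.

```python
def poly_from_roots(roots, leading=1):
    """Expand a_n*(x - x_1)...(x - x_n); returns coefficients [a_0, a_1, ..., a_n]."""
    coeffs = [leading]                              # start from the constant polynomial a_n
    for r in roots:
        shifted = [0] + coeffs                      # coefficients of x * p(x)
        scaled = [-r * c for c in coeffs] + [0]     # coefficients of -r * p(x), padded
        coeffs = [s + t for s, t in zip(shifted, scaled)]
    return coeffs

# Hypothetical roots 1 and 2 with leading coefficient 1: (x - 1)(x - 2) = x^2 - 3x + 2
print(poly_from_roots([1, 2]))   # [2, -3, 1], i.e., a_0 = 2, a_1 = -3, a_2 = 1
```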
back to top
Factoring Large Polynomials
Large polynomials (larger than quadratics, equations involving powers of x larger than x^2) get harder to factor the bigger they get. While there are advanced techniques to directly calculate the roots of a cubic (x^3) and (in some cases) a quartic (x^4), these methods are quite complicated and require an advanced sophistication in algebra to be comprehensible. The reader is welcome to take a look at both of these cases to verify our opinion. We will concentrate on some theorems that offer factoring help on a less advanced basis.
To be sure, use of these theorems amounts to educated guessing, but such guessing is actually more likely to get an answer faster than the advanced solution techniques. At the least, the techniques we offer will show whether an elementary answer (like an integer or a rational number) can be expected. Failing that, we will explore a scheme for finding an answer numerically (a refinement of trial and error) using a calculator or computer. If the numerical technique is done carefully, we can sometimes use the decimal expansion calculated to guess a familiar irrational number. If all else fails, we can resort to the "big guns" and use one of the advanced techniques.
back to top
Fundamental Theorem of Algebra
The nth degree polynomial
f(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 (with a_n ≠ 0)
has exactly n roots. The roots may be repeated (i.e., not all distinct), complex (i.e., not real) or irrational, but need not be any of these (i.e., they might be integers or rational numbers). We won't bother to prove this theorem, since the proof is very involved and really does not contribute much to our problem solving techniques.
Put very simply, an nth degree polynomial has n roots.
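A quick numerical illustration of the theorem, using numpy's root finder on a made-up cubic; the three returned roots include a complex-conjugate pair, which is exactly the "complex (i.e., not real)" case mentioned above.

```python
import numpy as np

# Hypothetical cubic x^3 - 1; numpy expects coefficients in descending order of power.
roots = np.roots([1, 0, 0, -1])
print(len(roots))   # 3 roots, as the theorem promises for a degree-3 polynomial
print(roots)        # one real root (1.0) and a complex-conjugate pair
```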
back to top
Rational Zeros Theorem
Suppose the coefficients
a_0, a_1, a_2, ..., a_n
in the polynomial equation
f(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = 0
are all integers. If
x = p/q
is a rational fraction in lowest terms (i.e., p and q are both integers and have no common divisors other than 1) which satisfies the equation (i.e., is a root of f(x) = 0), then p divides a_0 (i.e., a_0 / p is an integer) and q divides a_n.
Since p/q is a root of the equation, we have
a_n (p/q)^n + a_(n-1) (p/q)^(n-1) + ... + a_1 (p/q) + a_0 = 0
Multiplying through by q^n produces
a_n p^n + a_(n-1) p^(n-1) q + ... + a_1 p q^(n-1) + a_0 q^n = 0
Subtracting a_0 q^n from both sides gives
a_n p^n + a_(n-1) p^(n-1) q + ... + a_1 p q^(n-1) = -a_0 q^n
Since each term on the left contains at least one p, we can factor it out:
p (a_n p^(n-1) + a_(n-1) p^(n-2) q + ... + a_1 q^(n-1)) = -a_0 q^n
The term in parentheses on the left is the sum of many products of integers and so is an integer. Call this integer I and we have
p I = -a_0 q^n
We already knew the a's are integers, so a_0 q^n must be an integer, too. The equation is true by assumption, so p must divide a_0 q^n. Since p and q have no common divisors other than 1, the same must be true of p and q^n, which leaves p dividing a_0.
We could have subtracted a_n p^n from the equation after multiplying through by q^n, giving us
a_(n-1) p^(n-1) q + ... + a_1 p q^(n-1) + a_0 q^n = -a_n p^n
Notice that q is a common term for the left side,
q (a_(n-1) p^(n-1) + ... + a_1 p q^(n-2) + a_0 q^(n-1)) = -a_n p^n
From here the proof is similar, and is left as an exercise for the reader. The result is that q is proved to divide a_n.
back to top

Example
Use the rational zeros theorem to guess the possible rational roots of
Then use synthetic division to find which of the possible roots is actually a root.
According to the theorem, we are looking for numerators that divide 2 (1 and 2) and denominators that divide 3 (1 and 3). Thus, the possible roots are ±1, ±2, ±1/3, and ±2/3 (eight candidates in all).
We'll skip trials of each root here; synthetic division quickly singles out the one candidate that actually is a root.
We should point out that even if the seven wrong answers had to be tried first, synthetic division is fast enough to go through all the potential roots in a matter of minutes.
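Since the example polynomial itself is not reproduced above, the sketch below uses a hypothetical cubic with the same constant term (2) and leading coefficient (3), namely f(x) = 3x^3 + 2x^2 + 3x + 2. The code lists the eight candidate roots given by the rational zeros theorem and runs synthetic division on each; for this particular stand-in, only x = -2/3 survives, and the quotient is an irreducible quadratic.

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_candidates(coeffs):
    """coeffs = [a_0, a_1, ..., a_n]; candidates p/q with p dividing a_0, q dividing a_n."""
    cands = set()
    for p in divisors(coeffs[0]):
        for q in divisors(coeffs[-1]):
            cands.update({Fraction(p, q), Fraction(-p, q)})
    return sorted(cands)

def synthetic_division(coeffs, r):
    """Divide the polynomial by (x - r); returns (quotient coefficients, remainder)."""
    desc = list(reversed(coeffs))          # work from the leading coefficient down
    out = [Fraction(desc[0])]
    for a in desc[1:]:
        out.append(a + r * out[-1])
    return list(reversed(out[:-1])), out[-1]

# Hypothetical stand-in for the example cubic (constant term 2, leading coefficient 3):
# f(x) = 3x^3 + 2x^2 + 3x + 2, stored as ascending coefficients [a_0, a_1, a_2, a_3].
coeffs = [2, 3, 2, 3]
print("candidates:", rational_candidates(coeffs))      # ±1, ±2, ±1/3, ±2/3
for r in rational_candidates(coeffs):
    quotient, remainder = synthetic_division(coeffs, r)
    if remainder == 0:
        print("root found:", r, "| quotient:", quotient)   # x = -2/3, quotient 3x^2 + 3
```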
back to top

Irreducible Expressions
In the last example, the quotient polynomial is
The primitive polynomial equation
has no real solutions and is considered irreducible. It is the polynomial analogue of a prime number. If we allow complex number solutions, then the above equation has the solutions
We will normally stop factoring a polynomial when we encounter an irreducible quotient, since exceptions are generally reserved for more advanced subject matter than what we cover here.
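For a quadratic with real coefficients, irreducibility over the reals amounts to a negative discriminant. The snippet below uses a made-up quadratic (x^2 + x + 1) to show the check and the resulting complex-conjugate solutions; it is an illustration only, since the actual quotient from the example above is not shown.

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0; the roots are complex when the discriminant is negative."""
    disc = b * b - 4 * a * c
    irreducible_over_reals = disc < 0
    sqrt_disc = cmath.sqrt(disc)
    roots = ((-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a))
    return irreducible_over_reals, roots

# Hypothetical irreducible quadratic x^2 + x + 1 (discriminant = -3 < 0)
print(quadratic_roots(1, 1, 1))
```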
back to top

Numerical Methods
For readers who can use a programmable calculator, the following method can be used to get a decimal expansion of a root. If one knows the decimal expansions of a few irrational numbers, guesses can be made based on the computed decimal expansion and checked. This method is a bit more cumbersome than the above guessing scheme for rational roots and should be tried only if one fails to find a rational root. The reader is advised that algebra and arithmetic errors are extremely common when learning to handle polynomials, and should be eliminated first before trying a numeric solution. Practical problems often yield messy answers, so numerical solutions are more attractive when doing mathematics for its own sake is beside the point, i.e., when solving scientific, engineering or financial problems.
We will briefly outline a numerical method for solving the cubic
First, we need to rewrite the equation so that we have a single x on one side of the equal sign and a function of x on the other side that is weaker than x (i.e., has a smaller power of x).
This last equation
is the one we will use. The basic idea is to guess an x, put it in the right hand side, calculate the function. This generates a new x on the left, which is then in turn put back into the right hand side, until the difference between the input x and the output x is "small enough".
We'll do the example calculation and tabulate the results.

[Iteration table: N = iteration number, the current value of x, and the change in x between successive steps]

We could at this point presume that this is really a root, try synthetic division, and factor the polynomial. This method can be used with many, but not all, polynomials. Numerical methods can work very well, like the example above. In a wide variety of cases, this method is very frustrating because the right-hand side of the equation bounces around and does not converge nicely. Numerical analysis is a big topic, and gets technical very fast.
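Because the specific cubic used above is not reproduced, the sketch below applies the same fixed-point idea to a hypothetical equation, x^3 - x - 2 = 0, rewritten as x = (x + 2)^(1/3) so that the right-hand side carries a smaller power of x. The stopping tolerance and starting guess are arbitrary choices for the illustration.

```python
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = g(x_k) until the change in x is smaller than tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        change = abs(x_new - x)
        print(f"N={n:2d}  x={x_new:.10f}  change={change:.2e}")
        if change < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge; try a different rearrangement or starting guess")

# Hypothetical cubic x^3 - x - 2 = 0, rewritten as x = (x + 2)**(1/3),
# which puts a weaker (fractional) power of x on the right-hand side.
root = fixed_point(lambda x: (x + 2) ** (1.0 / 3.0), x0=1.0)
print("approximate root:", root)             # about 1.52138
print("check f(root):", root**3 - root - 2)  # close to zero
```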
See the other examples on numerical methods for further tips.

back to top

Summary
To solve a general polynomial:
1. Use the rational zeros theorem to list the possible rational roots.
2. Test each candidate with synthetic division; every root found yields a factor and a lower-degree quotient.
3. Repeat on the quotient until only an irreducible factor remains.
4. If no rational root turns up, try a numerical method to approximate a root.
5. If all else fails, resort to the advanced techniques for cubics and quartics.
The reader is reminded that the study of general polynomials is a very complicated field. We have provided guidelines that work in a fairly large number of cases the mathematics student is likely to see, but will prove inadequate for an even larger class of problems. There are more advanced, specialized methods appropriate for different fields of study, in particular science and engineering. These advanced methods are beyond the current scope for our purposes, so we will content ourselves for now with what we have presented above.
back to top

Recommended Books
College Algebra (Schaum's Outlines)
The classic algebra problem book - very light on theory, plenty of problems with full solutions, more problems with answers
Schaum's Easy Outline: College Algebra
A simplified and updated version of the classic Schaum's Outline. Not as complete as the previous book, but enough for most students
back to top |
New heat wave formula can help public health agencies prepare for extreme temperatures
COLUMBIA, Mo. (Feb. 25, 2016) — Extreme heat can pose several health risks, such as dehydration, hyperthermia and even death, especially during sustained periods of high temperatures. However, a uniform definition of a heat wave doesn't exist. As a result, public health agencies may be unsure of when to activate heat alerts, cooling centers and other protective measures. A University of Missouri School of Medicine researcher has developed a uniform definition of a heat wave that may help public health agencies prepare for extreme temperatures.
"According to climate models, temperatures in Florida are predicted to increase over the next 100 years, yet there can be confusion regarding what constitutes a heat wave," said Emily Leary, Ph.D., assistant research professor in the Biostatistics and Research Design Unit at the MU School of Medicine and lead author of the study. "As temperatures rise, it's important to have a uniform definition that best allows public health agencies to prepare for heat waves, whether that means issuing more frequent heat advisories or opening more cooling stations. Using Florida as our model — a state known for its heat — we set out to develop a data-driven definition of a heat wave that can be used for public health preparation. This formula can be adapted and applied to other parts of the country as well."
The U.S. National Weather Service currently initiates heat alert procedures when the heat index — the perceived temperature in relation to humidity — is expected to exceed 105 to 110 degrees Fahrenheit, depending on the area. However, the United Nations' Intergovernmental Panel on Climate Change defines a heat wave as five or more consecutive days with maximum temperatures approximately 9 degrees Fahrenheit higher than normal. These definitions become confusing when different sources use differing methods to define climatology norms, Leary said.
Additionally, the definitions may not be suited for certain regions, such as Florida, because the area may have consistently high temperatures and fewer true seasons, which do not account for extreme temperatures or resident acclimation. Previous research also has shown that using local or region-specific meteorological thresholds better reflects a temperature extreme for a certain area.
Leary's definition, which is informed by previous research, factors in relative and absolute heat index thresholds for a given region and time. The heat index must exceed the 80 percent relative heat index threshold, meaning it must be higher than 80 percent of the heat index values recorded for that region over a given period. In addition, the region should also have at least three non-consecutive days with a heat index above an absolute regional heat index threshold, a predetermined temperature based on regional climates.
For example, in Pensacola, Florida, a heat index higher than 100.6 degrees Fahrenheit for three days means that the area has the potential to experience a heat wave. A heat index higher than 110 degrees Fahrenheit for three days would be considered a heat wave.
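To make the two-part rule concrete, here is a minimal sketch of how such a check might be coded. The percentile calculation, the example heat-index values, and the exact counting convention are illustrative assumptions, not the published operational procedure.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Relative threshold: the 80th-percentile heat index of a (hypothetical)
// climatological record for the region.
double percentile80(std::vector<double> values) {
    std::sort(values.begin(), values.end());
    // Index 80% of the way through the sorted record (integer arithmetic floors the position).
    return values[(values.size() - 1) * 8 / 10];
}

// Two-part check sketched from the description above: at least three days
// (not necessarily consecutive) above the relative threshold AND at least
// three days above the absolute regional threshold.
bool possibleHeatWave(const std::vector<double>& recentDays,
                      double relativeThreshold, double absoluteThreshold) {
    int aboveRelative = 0, aboveAbsolute = 0;
    for (double hi : recentDays) {
        if (hi > relativeThreshold) ++aboveRelative;
        if (hi > absoluteThreshold) ++aboveAbsolute;
    }
    return aboveRelative >= 3 && aboveAbsolute >= 3;
}

int main() {
    // Hypothetical data: a short climatological record and one recent week of
    // daily heat-index values (degrees Fahrenheit). The 100.6 F absolute
    // threshold echoes the Pensacola example above.
    std::vector<double> historical = {88, 90, 92, 94, 95, 96, 97, 98, 99, 101};
    std::vector<double> recentWeek = {101.2, 102.4, 99.5, 103.1, 100.9, 98.7, 96.0};

    double relativeThreshold = percentile80(historical);
    bool flagged = possibleHeatWave(recentWeek, relativeThreshold, 100.6);
    std::cout << (flagged ? "potential heat wave" : "no heat wave") << "\n";
}
```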
"This formula better explains when a heat wave is occurring because it accounts for missing weather data and better captures what extreme heat means for a region," Leary said. "Because this formula uses National Weather Service regions, there also is an existing infrastructure to communicate alerts."
The study, "Identifying Heat Waves in Florida: Considerations of Missing Weather Data," recently was published in PLOS ONE, an international, peer-reviewed and open-access publication. Research reported in this publication was supported by the Centers for Disease Control and Prevention under grant number U38-EH000941 awarded to the Florida Environmental Public Health Tracking Network Implementation. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agency. |
In what may be a critical breakthrough for creating artificial organs, Harvard researchers say they have created tissue interlaced with blood vessels.
Using a custom-built four-head 3-D printer and a “disappearing” ink, materials scientist Jennifer Lewis and her team created a patch of tissue containing skin cells and biological structural material interwoven with blood-vessel-like structures. Reported by the team in Advanced Materials, the tissue is the first made through 3-D printing to include potentially functional blood vessels embedded among multiple, patterned cell types.
In recent years, researchers have made impressive progress in building tissues and organ-like structures in the lab. Thin artificial tissues, such as a trachea grown from a patient’s own cells, are already being used to treat patients (see “Manufacturing Organs”). In other more preliminary examples, scientists have shown that specific culture conditions can push stem cells to grow into self-organized structures resembling a developing brain, a bit of a liver, or part of an eye (see “Researchers Grow 3-D Human Brain Tissues,” “A Rudimentary Liver Is Grown from Stem Cells,” and “Growing Eyeballs”). But no matter the method of construction, all regenerative projects have run up against the same wall when trying to build thicker and more complex tissues: a lack of blood vessels.
Lewis’s group solved the problem by creating hollow, tube-like structures within a mesh of printed cells using an “ink” that liquefies as it cools. The tissue is built by the 3-D printer in layers. A gelatin-based ink acts as extracellular matrix—the structural mix of proteins and other biological molecules that surrounds cells in the body. Two other inks contained the gelatin material and either mouse or human skin cells. All these inks are viscous enough to maintain their structure after being laid down by the printer.
A fourth ink with counterintuitive behavior helped the team create the hollow tubes. This ink has a Jell-O-like consistency at room temperature, but when cooled it liquefies. The team printed tracks of this ink amongst the others. After chilling the patch of printed tissue, the researchers applied a light vacuum to remove the special ink, leaving behind empty channels within the structure. Then cells that normally line blood vessels in the body can be infused into the channels.
Building actual replacement tissues or organs for patients is a distant goal, but one the team is already weighing. “We think it’s a very foundational step, and we think it’s going to be essential toward organ printing or regeneration,” says Lewis, who is a member of the Wyss Institute for Biologically Inspired Engineering at Harvard University.
The smallest channels printed were about 75 micrometers in diameter, which is much larger than the tiny capillaries that exchange nutrients and waste throughout the body. The hope is that the 3-D printing method will set the overall architecture of blood vessels within artificial tissue and then smaller blood vessels will develop along with the rest of the tissue. “We view this as a method to print the larger vessels; then we want to harness biology to do the rest of the work,” says Lewis. |
The uses of organic polymers known as PPVs include sensors, LEDs, displays and solar cells. They are characterized by high electrical conductivity and their interaction with light. After four years of hard work at Vienna University of Technology, these characteristics were improved significantly by replacing the oxygen atom that links the side groups to the rest of the polymer with sulfur: the O-PPV (O for oxygen) has become the new S-PPV (S for sulfur).
The researchers have also discovered a simple and cheap method of synthesis for the use of S-PPV on an industrial scale. Monomers are first produced using microwave radiation and are then polymerized and modified in the side groups. The method is scalable for industrial quantities and the process is easy to reproduce, according to Florian Glöcklhofer from Vienna University of Technology. In addition, the new class of polymers has greater stability, is “comparatively non-toxic”, and is biologically compatible.
Electroluminescence of polymers was first observed in 1989 in the study of the dielectric properties of a thin PPV film; “Spektrum der Wissenschaft” reported on this in 1995. At that time, organic electroluminescence was regarded as “a respected area of research” and the “first small-scale products” were anticipated over the following years. Nowadays many branches of industry could not do without it. |
Technology leads to speedy developments and changes. But at the same time technology gobbles up resources very fast, which means that the depletion of raw materials is also very fast. If all the cotton produced in one year is consumed by a textile mill in six months, what would happen? The workers would have nothing to do for the rest of the year. If cotton is grown in larger areas where important crops such as wheat and rice are grown, there will be increased production of cotton, which fulfils the needs of the textile mill. But there will be a decreased production of food crops.
Can one afford to go hungry in order to wear more clothes? Sometimes it so happens that a farmer becomes tempted to grow a particular cash crop such as cotton in fields used for growing wheat and rice. But this is a short-sighted approach. If every farmer starts doing this, there would be a shortfall in overall grain production. This is where national planning of priorities becomes important. At the national level, a decision has to be taken regarding the scale of cultivation of different varieties of crops so that the country may become self-sufficient and maintain an adequate buffer stock for emergency situations.
Let us now take the case of forests. In olden days, cutting of trees was done manually. These days machines are used for this purpose. Hundreds of trees can be brought down in a day. Unless there is a balance between reforestation and felling of trees, there will be no forests left on this earth, and it will spell doom for the environment. This will play havoc with the lives of animals, including human beings. In the past decade, the tropical rain forest was reduced from 4.7 to 4.2 billion acres. In the past two decades, one million species vanished from the world’s tropical forests.
Let us do a simple case study. Assume that there is a forest with 100 trees and each tree takes 10 years to grow. People cut down 10 trees per year and also replant 10 trees. If we plot a graph of the number of fully grown trees in the forest every year, it will be like the one shown in Figure 7.1. After 10 years, we find that only 10 trees are left (remember, each tree takes 10 years to grow). This goes to show that the resources available to us have taken a long time to become as we see them today. It is very easy to use up these resources, but very difficult to get them back.
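The sketch below runs this case study as a small simulation. The year-counting convention (a batch planted in year p is counted as fully grown from year p + 9 onward) is an assumption chosen so that the output matches the figure of 10 trees after 10 years.

```cpp
#include <iostream>
#include <vector>

// Case study above: 100 fully grown trees, 10 cut and 10 replanted each year,
// and each sapling needs 10 years to grow. Prints the number of fully grown
// trees at the end of each year.
int main() {
    const int maturityYears = 10;
    int fullyGrown = 100;
    std::vector<int> plantingYears;   // year in which each batch of 10 saplings was planted

    for (int year = 1; year <= 20; ++year) {
        fullyGrown -= 10;             // 10 trees cut this year
        plantingYears.push_back(year);

        // A batch planted in year p grows through years p..p+9, so it is
        // counted as fully grown from year p + maturityYears - 1 onward.
        if (!plantingYears.empty() &&
            year >= plantingYears.front() + maturityYears - 1) {
            fullyGrown += 10;
            plantingYears.erase(plantingYears.begin());
        }
        std::cout << "Year " << year << ": " << fullyGrown << " fully grown trees\n";
    }
}
```

From year 10 onward the simulated forest holds only 10 fully grown trees at a time, which is the point of the example.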
Coal and petroleum reserves are the products of millions of years of natural processing of the remains of dead plants and animals. Given the manner and the proportion in which these resources are being used up, we shall be left with nothing of these in a couple of hundred years.
Whenever we adopt a new technology for our advantage, we have to look at both sides of the coin, i.e., we also have to find out whether it can indirectly create a condition or a situation in which man may find himself trapped. One of the basic questions that we have to ask is: how fast are we converting resources into non-resources, and what will happen if all the resources (for example, coal and petroleum) are exhausted? Let us take another example: should we grow more cotton than cereals? Should we use fertilizers indiscriminately just because their application increases the yield? Growing more cotton may mean less grain production, and excessive use of fertilizer may render the soil infertile. It is just possible that for our immediate and short-term gains we are causing irreparable damage to our environment.
Development of science and technology has no doubt improved living conditions and saved man from many diseases and calamities. These days people are not afraid of epidemics like plague, cholera and smallpox. Their causes have been determined and control measures have been worked out. Infant mortality has gone down because of greater health care measures adopted before and after the birth of a child. Many life-saving drugs are available. The science of nutrition has helped in reducing the incidence of ailments. All these things have resulted in the decline of unnatural and premature death rates, and have increased life expectancy. To understand how fast the population of the world is increasing, try this: plot y = 2^x, taking x = 0, 1, 2, 3, 4, … You will get rapidly increasing values of y, and we call this increase an exponential increase. Figure 7.2 shows the growth of world population in the past two hundred years or so, and you will notice that the nature of the plot is similar to the one for y = 2^x. The population of the world was about 1 billion in 1830. It doubled to 2 billion in 100 years and was expected to cross the 6 billion mark by the year AD 2000.
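As a quick illustration of the exponential growth described above, the short program below tabulates y = 2^x for x = 0 to 10; the range of x values is an arbitrary choice for the example.

```cpp
#include <iostream>

// Tabulate y = 2^x to see how quickly an exponential quantity grows.
int main() {
    long long y = 1;                       // 2^0
    for (int x = 0; x <= 10; ++x) {
        std::cout << "x = " << x << "  y = 2^x = " << y << "\n";
        y *= 2;                            // each step doubles the previous value
    }
}
```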
If the human population were to remain constant on this earth, the birth rate and death rate should be equal. Earlier, when technology was not developed, a natural law used to be effective in maintaining the death rate in proportion to the birth rate. Developments in medicine and surgery have ensured greater chances of survival and a prolonged life span. But there has been no corresponding reduction in the birth rate. As a result, there has been a steady increase in human population, leading to a corresponding increase in the consumption of global resources and greater exploitation of the environment. This is the other side of the coin. Does it then mean that we should forsake technology? No. All we have to do is to bring down the birth rate in the same proportion in which the death rate has been reduced. Again, technology offers us birth control methods to bring down the birth rate. |
(Image caption: Every student finished the lesson with a bibliography that looked like this one.)
Last week, I described how we began the research process with 6th graders by learning how to create a bibliography. I was happy with that lesson, but I felt concerned that the students did not really understand the elements that went into making each citation. I taught them how to create a citation using Easybib. This meant that they did not have to find the author, title, publisher, etc. because it was done automatically for them. I had an idea to go one step further with bibliographies to make sure that the students understood the process that we had done the week before.
I took resources that the students had used to create their own bibliography with four citations and created a sample with eight citations. The teachers posted the document in Word on their online agendas so the students could each open a document that they could edit. The first thing I did was talk about MLA format and how the beginning of the paper should be formatted. We edited the heading, and the students inserted their name where my name had been. See the picture below.
The next step was to actually dissect the bibliography. All of the students were in Word, and I showed them how we were going to use the highlighter. We began by highlighting all the titles in yellow, then the databases in lime green, the publisher in magenta and so on. We did this step by step so that every student would end up with a matching document that they could save and submit for a grade. You can see my colored document at the beginning of this blog entry. Who knew that coloring a page on the computer could be so engaging. The students loved the activity. I know that they might not remember all the rules for MLA, but I am pretty sure that they now know that there are rules and that there are tools out there to help them follow those rules.
We will continue the research process next by finding sources to read and to begin learning how to take notes on what we read. |
Weather, by Laurie Guerra
Hurricanes are severe tropical storms that form in the ocean. They rotate in a counterclockwise direction around an eye. They can cause severe damage with heavy rain, waves and winds.
Landslides take place when dirt, pebbles, rocks and boulders slide down a slope together. Sometimes these landslides are small, and hardly noticeable. Other times, however, they can be substantial, involving the entire side of a mountain.
Tornadoes come from powerful thunderstorms and appear as rotating, funnel-shaped clouds. Tornado winds can reach 300 miles per hour. They cause damage when they touch down on the ground.
A tsunami (pronounced soo-nahm-ee) is a series of huge waves that happen after an undersea disturbance, such as an earthquake or volcano eruption. (Image caption: This picture is of Sumatra before and after a tsunami struck the area. Notice the mass devastation.)
Earthquakes are the shaking, rolling or sudden shock of the earth’s surface. Earthquakes happen along "fault lines" in the earth’s crust. Although they only last a couple of minutes, they can cause major destruction. |
-Edgar Degas (1834-1917)-
HOW IS IT MADE?
Many artists consider drawing the art form closest to thinking. Drawings express how we think and what we think about. For some artists drawing is just one part of the creative process. A sculptor might sketch out several angles of the envisioned sculpture before putting chisel to stone. For other artists, like Edgar Degas, a pastel drawing of a ballerina is considered suitable for his expression.
Artists draw using a variety of tools, such as graphite, pen and ink, markers, pastels, conté crayons, inked brushes, charcoal, and colored pencils. Paper is a common drawing support. Artists also draw on canvas, leather, cardboard, and plastic. Drawing is one of the most familiar, yet intimidating, art forms. Children often express themselves through crayons and paper, yet when the same children are asked to draw using a pencil in art class they often feel uncomfortable at first. These students see what they want to draw in their heads, but they have a difficult time translating their thoughts onto paper.
Similar to how a pianist will practice scales to both train and loosen his fingers, student artists often enhance their draftsmanship by completing quick studies called gesture drawings and blind contour drawings. An artist does more than simply copy life. An artist sees, contemplates, and emotes. |
If we create two or more members having the same name but differing in the number or type of parameters, it is known as overloading in C++. In C++, we can overload:
- functions (including constructors), and
- operators
This is because these members take parameters.
Types of overloading in C++ are:
- Function overloading
- Operator overloading
C++ Function Overloading
Having two or more functions with the same name but different parameters is known as function overloading in C++.
The advantage of function overloading is that it increases the readability of the program because you don't need to use different names for the same action.
C++ Function Overloading Example
Let's see a simple example of function overloading where we change the number of arguments of the add() function.
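The example referred to here is not reproduced in this text, so the following is a minimal sketch matching the description: an add() function overloaded on the number of arguments. The specific values are illustrative.

```cpp
#include <iostream>

// add() is overloaded on the number of parameters.
int add(int a, int b)        { return a + b; }
int add(int a, int b, int c) { return a + b + c; }

int main() {
    std::cout << add(10, 20) << "\n";      // calls the two-argument version: 30
    std::cout << add(10, 20, 30) << "\n";  // calls the three-argument version: 60
    return 0;
}
```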
C++ Operator Overloading
Operator overloading is used to overload or redefine most of the operators available in C++. It is used to perform operations on user-defined data types.
The advantage of operator overloading is that it allows different operations to be performed on the same operand.
C++ Operator Overloading Example
Let's see a simple example of operator overloading in C++. In this example, a void operator ++ () function is defined inside the Test class.
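The original example is not reproduced in this text; the sketch below follows the description (a void operator ++ () defined inside a Test class), with the member variable and printed values chosen purely for illustration.

```cpp
#include <iostream>

class Test {
    int num;
public:
    Test() : num(10) {}
    // Redefine the prefix ++ operator for Test objects; here it adds 2 (an illustrative choice).
    void operator++() { num = num + 2; }
    void print() { std::cout << "Count: " << num << "\n"; }
};

int main() {
    Test t;
    ++t;        // invokes t.operator++()
    t.print();  // prints "Count: 12"
    return 0;
}
```
|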
Barbara McClintock
- For the illustrator of the same name, see Barbara McClintock (illustrator).
Barbara McClintock (June 16 1902 – September 2 1992) was an American scientist who pioneered the use of cytogenetics to understand the structure of chromosomes and mechanisms of genetic recombination. Her later work began the science of gene regulation. Her accomplishments are particularly remarkable because she made them at a time when women were formally discriminated against in academic science. McClintock received her PhD in botany from Cornell University in 1927, where she was a leader in the development of maize cytogenetics. That field remained the focus of her research for the rest of her career. From the late 1920s, McClintock studied chromosomes and how they change during reproduction. Her work was groundbreaking: she advanced techniques to visualize chromosomes using light microscopy and used microscopic analysis to demonstrate many fundamental genetic ideas, including genetic recombination by crossing-over during meiosis—a mechanism by which chromosomes exchange information. She produced a genetic map for maize, linking regions of the maize chromosomes with physical traits, and she demonstrated the role of the telomere and centromere, regions of the chromosome that are important in the conservation of genetic information. She was recognized as amongst the best in the field, awarded prestigious fellowships and elected a member of the National Academy of Sciences in 1944.
During the 1940s and 1950s, McClintock discovered transposition and used it to show how genes are responsible for turning physical characteristics on or off. She developed theories to explain the repression or expression of genetic information from one generation to the next. Encountering skepticism of her research and its implications, she stopped publishing her data in 1953. Nonetheless, she continued in science, and later made an extensive study of the cytogenetics and ethnobotany of maize races from South America. McClintock's research became generally appreciated by the scientific community in the 1970s and 1980s, after other researchers confirmed the mechanisms of genetic change in other model systems. Awards and recognition of her contributions to the field followed, including the Nobel Prize in Physiology or Medicine awarded to her in 1983 for the discovery of genetic transposition; to date, she has been the first and only woman to receive an unshared Nobel Prize in that category.
Barbara McClintock was born in Hartford, Connecticut, the third of four children of physician Thomas Henry McClintock and Sara Handy McClintock. She was independent from a very young age, a trait McClintock described as her "capacity to be alone." From about the age of three until the time she started school, McClintock lived with an aunt and uncle in Massachusetts in order to reduce the financial burden on her parents while her father established his medical practice. The McClintocks moved to semi-rural Brooklyn, New York in 1908. She was described as a solitary and independent child, and a tomboy. She was close to her father, but had a difficult relationship with her mother.
McClintock completed her secondary education at Erasmus Hall High School in Brooklyn. She discovered science at high school, and wanted to attend Cornell University to continue her studies. Her mother resisted the idea of higher education for her daughters on the theory that it would make them unmarriageable. Family financial problems also worked against her admission, and Barbara was almost prevented from starting college. It was her father's intervention that allowed her to enter Cornell in 1919.
Education and research at Cornell
McClintock began her studies at Cornell's College of Agriculture. She studied botany, receiving a BSc four years later, in 1923. Her interest in genetics had been sparked when she took her first course in that field in 1921. Taught by C. B. Hutchison, a plant breeder and geneticist, this course was the only one of its type offered to undergraduates in the United States at the time. Hutchison was impressed by McClintock's interest, and telephoned to invite her to participate in the graduate genetics course at Cornell in 1922, despite her status as an undergraduate. McClintock pointed to Hutchison's invitation as the reason she continued in genetics: "Obviously, this telephone call cast the die for my future. I remained with genetics thereafter."
Women could not major in genetics at Cornell, and therefore her MA and PhD — earned in 1925 and 1927, respectively — were officially awarded in botany. During her graduate studies and her postgraduate appointment as a botany instructor, McClintock was instrumental in assembling a group that studied the new field of cytogenetics, choosing maize as their species for experimental research. This group brought together plant breeders and cytologists, and included Rollins Emerson, Charles R. Burnham, Marcus Rhoades, and George Beadle (who became a Nobel laureate in 1958 for showing that genes control metabolism). McClintock's cytogenetic research focused on developing ways to characterize the chromosomes in cells. She developed a technique using carmine staining to visualize chromosomes, and showed for the first time that maize had 10 chromosomes. This particular part of her work influenced a generation of students, as it was included in most textbooks. By studying the banding patterns of the chromosomes, McClintock was able to link to a specific chromosome groups of traits that were inherited together. Marcus Rhoades noted that McClintock's 1929 Genetics paper on the characterization of triploid maize chromosomes triggered scientific interest in maize cytogenetics, and attributed to his female colleague 10 of the 17 significant advances in the field that were made by Cornell scientists between 1929 and 1935.
In 1930, McClintock was the first person to describe cross-shaped interaction of homologous chromosomes during meiosis. During 1931, McClintock and a graduate student, Harriet Creighton, proved the link between chromosomal crossover during meiosis and the recombination of genetic traits. They observed by microscopy that the regions of paired chromosomes that are physically crossing-over during meiosis are concurrently involved in exchange of genes. Until this point, it had only been hypothesized that genetic recombination could occur during meiosis. McClintock published the first genetic map for maize in 1931, showing the order of three genes on maize chromosome 9. In 1932, she produced a cytogenetic analysis of the centromere, describing the organization and function of that chromosomal structure.
McClintock's breakthrough publications, and support from her colleagues, led to her being awarded several postdoctoral fellowships from the National Research Council. This funding allowed her to continue to study genetics at Cornell, the University of Missouri - Columbia, and the California Institute of Technology, where she worked with Thomas Hunt Morgan. During the summers of 1931 and 1932, she worked with geneticist Lewis Stadler at Missouri, who introduced her to the use of X-rays as a mutagen. (Exposure to X-rays can increase the rate of mutation above the natural background level, making it a powerful research tool for genetics.) Through her work with X-ray-mutagenized maize, she identified ring chromosomes, which form when the ends of a single chromosome fuse together after radiation damage. From this evidence, McClintock hypothesized that there must be a structure on the chromosome tip that would normally ensure stability, which she called the telomere. She showed that the loss of ring-chromosomes at meiosis caused variegation in maize foliage in generations subsequent to irradiation resulting from chromosomal deletion. During this period, she demonstrated the presence of what she called the nucleolar organizers on a region on maize chromosome 6, which is required for the assembly of the nucleolus during DNA replication.
McClintock received a fellowship from the Guggenheim Foundation that made possible six months of training in Germany during 1933 and 1934. She had planned to work with Curt Stern, who had demonstrated crossover in Drosophila just weeks after McClintock and Creighton had done so; however, in the meantime, Stern emigrated to the United States. Instead, she worked in Germany with geneticist Richard B. Goldschmidt. She left Germany early, amid mounting political tension in Europe, and returned to Cornell, remaining there until 1936, when she accepted an Assistant Professorship offered to her by Lewis Stadler in the Department of Botany at the University of Missouri - Columbia.
University of Missouri - Columbia
During her time at Missouri, McClintock expanded her research on the effect of X-rays on maize cytogenetics. McClintock reported the breakage and fusion of chromosomes in irradiated cells. She also showed that, in some plants, spontaneous chromosome breakage occurred in the endosperm. Over the course of mitosis, she observed that the ends of broken chromatids were rejoined after the chromosome replication. In the anaphase of mitosis, the broken chromosomes formed a chromatid bridge, which was broken when the chromatids moved towards the cell poles. The broken ends were rejoined in the interphase of the next mitosis, and the cycle was repeated, causing massive mutation, which she could detect as variegation in the endosperm. This cycle of breakage, fusion, and bridge, also described as the breakage–rejoining–bridge cycle, was a key cytogenetic discovery for two reasons. First, it showed that the rejoining of chromosomes was not a random event, and second, it demonstrated a source of large-scale mutation. As a cause of major mutation, it remains an area of interest in cancer research today.
Although her research was progressing well at Missouri, McClintock was not satisfied with her position at the University. She was excluded from faculty meetings, and was not made aware of positions available at other institutions. In 1940 she wrote to Charles Burnham, "I have decided that I must look for another job. As far as I can make out, there is nothing more for me here. I am an assistant professor at $3,000 and I feel sure that that is the limit for me." She was also aware that her position had been especially created for her by Stadler and may have depended on his presence. McClintock believed she would not gain tenure at Missouri, although according to some accounts she knew she would be offered a promotion by Missouri in the Spring of 1942. In the summer of 1941 she took a leave of absence from Missouri to visit Columbia University, where her Cornell colleague Marcus Rhoades was a professor. He offered to share his research field at Cold Spring Harbor on Long Island. In December 1941 she was offered a research position by Milislav Demerec, and she joined the staff of the Carnegie Institution of Washington's Department of Genetics Cold Spring Harbor Laboratory.
Cold Spring Harbor
After her year-long appointment, McClintock accepted a full-time research position at Cold Spring Harbor. Here, she was highly productive and continued her work with the breakage-fusion-bridge cycle, using it to substitute for X-rays as a tool for mapping new genes. In 1944, in recognition of her prominence in the field of genetics during this period, McClintock was elected to the National Academy of Sciences — only the third woman to be so elected. In 1945, she became the first woman president of the Genetics Society of America. In 1944 she undertook a cytogenetic analysis of Neurospora crassa at the suggestion of George Beadle, who had used the fungus to demonstrate the one gene–one enzyme relationship. He invited her to Stanford to undertake the study. She successfully described the number of chromosomes, or karyotype, of N. crassa and described the entire life cycle of the species. N. crassa has since become a model species for classical genetic analysis.
Discovery of controlling elements
In the summer of 1944 at Cold Spring Harbor, McClintock began systematic studies on the mechanisms of the mosaic color patterns of maize seed and the unstable inheritance of this mosaicism. She identified two new dominant and interacting genetic loci that she named Dissociator (Ds) and Activator (Ac). She found that the Dissociator did not just dissociate or cause the chromosome to break; it also had other specific effects on neighboring genes - as long as the Activator locus was also present. In early 1948, she made the surprising discovery that the Dissociator and Activator loci could both transpose, or change position, on the chromosome.
She observed the effects of the transposition of Ac and Ds by the changing patterns of coloration in maize kernels over generations of controlled crosses, and described the relationship between the two loci through intricate microscopic analysis. She concluded that Ac controls the transposition of Ds from chromosome 9, and that the movement of Ds is accompanied by the breakage of the chromosome. When Ds moves, the aleurone-color gene is released from the suppressing effect of the Ds and transformed into the active form, which initiates pigment synthesis in the cells. The transposition of Ds in different cells is random: it may move in some but not others, and that random variation causes color mosaicism in kernels (the seeds of maize). The size of the colored spot on the seed is determined by the stage of seed development at which dissociation occurs. McClintock also found that the transposition of Ds is determined by the number of Ac copies in the cell.
Between 1948 and 1950, she developed a theory by which these mobile elements regulated the genes by inhibiting or modulating their action. She referred to Dissociator and Activator as "controlling units"—later, as "controlling elements"—to distinguish them from genes. She hypothesized that gene regulation could explain how complex multicellular organisms made of cells with identical genomes have cells of different function. McClintock's discovery challenged the concept of the genome as a static set of instructions passed between generations. In 1950, she reported her work on Ac/Ds and her ideas about gene regulation in a paper entitled "The origin and behavior of mutable loci in maize", published in the journal Proceedings of the National Academy of Sciences. In summer 1951, she reported on her work on gene mutability in maize at the annual symposium at Cold Spring Harbor; the paper she presented was called "Chromosome organization and genic expression".
Her work on controlling elements and gene regulation was conceptually difficult and was not immediately understood or accepted by her contemporaries; she described the reception of her research as "puzzlement, even hostility". Nevertheless, McClintock continued to develop her ideas on controlling elements. She published a paper in Genetics in 1953 where she presented all her statistical data and undertook lecture tours to universities throughout the 1950s to speak about her work. She continued to investigate the problem and identified a new element that she called Suppressor-mutator (Spm), which, although similar to Ac/Ds, displays more complex behavior. Based on the reactions of other scientists to her work, McClintock felt she risked alienating the scientific mainstream, and from 1953 stopped publishing accounts of her research on controlling elements.
The origins of maize
In 1957, McClintock received funding from the National Science Foundation, and the Rockefeller Foundation sponsored her to start research on maize in South America, an area that is rich in varieties of this species. She was interested in studying the evolution of maize, and being in South America, where maize agriculture had originated, would allow her to work on a larger scale. McClintock explored the chromosomal, morphological, and evolutionary characteristics of various races of maize. From 1962, she supervised four scientists working on South American maize at the North Carolina State University in Raleigh. Two of these Rockefeller fellows, Almeiro Blumenschein and T. Angel Kato, continued their research on South American races of maize well into the 1970s. In 1981, Blumenschein, Kato, and McClintock published Chromosome constitution of races of maize, which is considered a landmark study of maize that has contributed significantly to the fields of evolutionary botany, ethnobotany, and paleobotany.
Rediscovery of McClintock's controlling elements
McClintock officially retired from her position at the Carnegie Institution in 1967, and was awarded the Cold Spring Harbor Distinguished Service Award; however, she continued to work with graduate students and colleagues in the Cold Spring Harbor Laboratory as scientist emerita. Referring to her decision 20 years earlier no longer to publish detailed accounts of her work on controlling elements, she wrote in 1973:
Over the years I have found that it is difficult if not impossible to bring to consciousness of another person the nature of his tacit assumptions when, by some special experiences, I have been made aware of them. This became painfully evident to me in my attempts during the 1950s to convince geneticists that the action of genes had to be and was controlled. It is now equally painful to recognize the fixity of assumptions that many persons hold on the nature of controlling elements in maize and the manners of their operation. One must await the right time for conceptual change.
The importance of McClintock's contributions only came to light in the 1960s, when the work of French geneticists Francois Jacob and Jacques Monod described the genetic regulation of the lac operon, a concept she had demonstrated with Ac/Ds in 1951. Following Jacob and Monod's 1961 Nature paper "Genetic regulatory mechanisms in the synthesis of proteins", McClintock wrote an article for American Naturalist comparing the lac operon and her work on controlling elements in maize. McClintock's contribution to biology is still not widely acknowledged as amounting to the discovery of genetic regulation.
McClintock was widely credited for discovering transposition following the discovery of the process in bacteria and yeast in the late 1960s and early 1970s. During this period, molecular biology had developed significant new technology, and scientists were able to show the molecular basis for transposition. In the 1970s, Ac and Ds were cloned and were shown to be Class II transposons. Ac is a complete transposon that can produce a functional transposase, which is required for the element to move within the genome. Ds has a mutation in its transposase gene, which means that it cannot move without another source of transposase. Thus, as McClintock observed, Ds cannot move in the absence of Ac. Spm has also been characterized as a transposon. Subsequent research has shown that transposons typically do not move unless the cell is placed under stress, such as by irradiation or the breakage, fusion, and bridge cycle, and thus their activation during stress can serve as a source of genetic variation for evolution. McClintock understood the role of transposons in evolution and genome change well before other researchers grasped the concept. Nowadays, Ac/Ds is used as a tool in plant biology to generate mutant plants used for the characterization of gene function.
Honors and recognition
McClintock was awarded the National Medal of Science by Richard Nixon in 1971. Cold Spring Harbor named a building in her honor in 1973. In 1981 she became the first recipient of the MacArthur Foundation Grant, and was awarded the Albert Lasker Award for Basic Medical Research, the Wolf Prize in Medicine and the Thomas Hunt Morgan Medal by the Genetics Society of America. In 1982 she was awarded the Louisa Gross Horwitz Prize for her research in the "evolution of genetic information and the control of its expression." Most notably, she received the Nobel Prize for Physiology or Medicine in 1983, credited by the Nobel Foundation for discovering 'mobile genetic elements', over thirty years after she initially described the phenomenon of controlling elements.
She was awarded 14 Honorary Doctor of Science degrees and an Honorary Doctor of Humane Letters. In 1986 she was inducted into the National Women's Hall of Fame. During her final years, McClintock led a more public life, especially after Evelyn Fox Keller's 1983 book A feeling for the organism brought McClintock's story to the public. She remained a regular presence in the Cold Spring Harbor community, and gave talks on mobile genetic elements and the history of genetics research for the benefit of junior scientists. An anthology of her 43 publications The discovery and characterization of transposable elements: the collected papers of Barbara McClintock was published in 1987. McClintock died near Cold Spring Harbor in Huntington, New York, on September 2, 1992 at the age of 90; she never married or had children.
Since her death, McClintock has been the subject of a biography by science historian Nathaniel C. Comfort, The tangled field: Barbara McClintock's search for the patterns of genetic control. Comfort contests some claims about McClintock, described as the 'McClintock Myth', which he claims was perpetuated by the earlier biography by Keller. Keller's thesis was that McClintock was long ignored because she was a woman working in the sciences, while Comfort notes that McClintock was actually well regarded by her professional peers, even in the early years of her career. The initial lack of interest towards McClintock's discoveries in transposition and her ideas on gene regulation may well have had little or nothing to do with sex discrimination, but may instead have been generated by the very forces that she herself had blamed: the inability of the scientific community to accept a revolutionary concept 'before its time'.
She has been widely written about in the context of women's studies, and most recent biographical works on women in science feature accounts of her experience. She is held up as a role model for girls in such works of children's literature as Edith Hope Fine's Barbara McClintock, Nobel Prize geneticist, Deborah Heiligman's Barbara McClintock: alone in her field and Mary Kittredge's Barbara McClintock.
On May 4 2005 the United States Postal Service issued the American Scientists commemorative postage stamp series, a set of four 37-cent self-adhesive stamps in several configurations. The scientists depicted were Barbara McClintock, John von Neumann, Josiah Willard Gibbs, and Richard Feynman. McClintock was also featured in a 1989 four stamp issue from Sweden which illustrated the work of eight Nobel Prize winning geneticists. A small building at Cornell University bears her name to this day.
- McClintock B (1983) A short biographical note: Barbara McClintock. Nobel Foundation.
- Rhoades MM (undated) The golden age of corn genetics at Cornell as seen through the eyes of M. M. Rhoades.
- McClintock B (1940) Letter from Barbara McClintock to Charles R. Burnham, 16 September 1940.
- Comfort NC (2002) Barbara McClintock's long postdoc years. Science 295:440.
- Gross L. Transposon silencing keeps jumping genes in their place. PLoS Biology 4(10):e353. doi:10.1371/journal.pbio.0040353
- Fedoroff N (2004) The discovery of transposition. Jones and Bartlett Virtual Text, Great Experiments. Last revised 20 October 2004.
- McClintock B. Introduction. In: The discovery and characterization of transposable elements: the collected papers of Barbara McClintock.
- McClintock B (1973) Letter from Barbara McClintock to J. R. S. Fincham.
- Kleckner NJ, Roth J, Botstein D (1977) Genetic engineering in vivo using translocatable drug-resistance elements. J Mol Biol 116:125-159.
- Berg DE, Howe MM (1989) Mobile DNA. ASM Press, Washington, DC.
- Beckwith J, Silhavy TJ (1992) Session 9: Transposition, pages 555-614. In: The Power of Bacterial Genetics: A Literature-Based Course. Cold Spring Harbor Laboratory Press, NY. ISBN 0-87969-379-7. |
Flora and fauna
High fir forests dominate the altitudes between 2900 and 3500m, especially in the transition zone between the main Himalayas and the dry cold deserts. At higher elevations the trees become stunted. Some broad leaved species also accompany the conifers in the lower altitudes. Average temperatures in summers range from 20 to 22 degrees Celsius. Winter temperatures are usually well below the freezing point accompanied by lots of snow. Birch forests join the fir forests at an elevation of above 3000m.
Low rhododendron evergreen forests can also be found alongside the birch forests. The forests are open, with occasional grasslands in between. The winters are so severe in the region that vegetative growth virtually stops.
In the inner dry valleys and parts of the transhimalayas, dwarf rhododendrons grow along with patches of grasslands. This vegetation succeeds the sub-alpine forests and merges with the snowline at a higher elevation.
Just below the snowline is a growth of dry alpine scrub. Trees are absent; only shrubs grow, along with patches of pasture. The scrub thrives in shady depressions and along streams formed by snow-melt waters. Dwarfed junipers also occur sporadically. The soil is very poor in nutrients. Dry arctic conditions are experienced, and snow covers the area for 5 to 6 months every year. In the summers, migratory cattle graze on the shrubs.
|
Free app for iOS and Android
The eighth grade, in Language Arts, is often seen as the bridge to more complex Common Core Standard expectations in high school. Therefore, in this iTooch 8th Grade Language Arts app, students are challenged to fine-tune their understanding of the conventions of the English language (e.g. complex grammatical structures), further develop concepts that they will eventually encounter on the SAT (editing, inferring the meaning of vocabulary in context, critical reading etc.), analyze the language in written form (e.g. identifying the author’s purpose, interpreting opinions, nuances, and literary texts etc.), and use reference materials (e.g. working with informational graphs). Through 49 chapters and over 1,500 exercises this iTooch app tests students’ knowledge in a fun and interactive way.
|
Post provided by Delphine Chabanne
Wildlife isn’t usually uniformly or randomly distributed across land- or sea-scapes. It’s typically distributed across a series of subpopulations (or communities). The subpopulations combined constitute a metapopulation. Identifying the size, demography and connectivity between the subpopulations gives us information that is vital to local-species conservation efforts.
What is a Metapopulation?
Richard Levins developed the concept of a metapopulation to describe “a population of populations”. More specifically, the term metapopulation has been used to describe a spatially structured population that persists over time as a set of local populations (or subpopulations; or communities). Emigration and immigration between subpopulations can happen permanently (through additions or subtractions) or temporarily (through the short-term presence or absence of individuals).
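As a concrete, if highly simplified, illustration of the idea, Levins' classic model tracks only the fraction p of habitat patches occupied by subpopulations, with a colonisation rate c and a local-extinction rate e: dp/dt = c p(1 - p) - e p. The sketch below integrates this with a simple Euler step; the parameter values are illustrative assumptions, and the model is a textbook abstraction rather than anything taken from the post itself.

```cpp
#include <iostream>

// Levins metapopulation model: p is the fraction of occupied patches,
// c the colonisation rate and e the local-extinction rate.
//   dp/dt = c * p * (1 - p) - e * p
// Parameter values below are illustrative; the equilibrium is 1 - e/c.
int main() {
    double p = 0.10;        // 10% of patches occupied initially
    const double c = 0.30;  // colonisation rate
    const double e = 0.10;  // local-extinction rate
    const double dt = 0.1;  // Euler time step

    for (int step = 0; step <= 1000; ++step) {
        if (step % 100 == 0)
            std::cout << "t=" << step * dt << "  occupied fraction p=" << p << "\n";
        p += dt * (c * p * (1.0 - p) - e * p);
    }
    // With c=0.30 and e=0.10 the occupied fraction approaches 1 - e/c, about 0.67.
}
```
|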
Our 7th Grade Religion program teaches the Scriptures, Beatitudes, Sacraments, Parables, Miracles, and the Message of Jesus. Our students have many opportunities to engage in community service actions. Part of our faith formation journey includes personal reflection in response to answering the Gospel call of serving others. Students pray the Rosary and attend Mass weekly.
We use a hands-on approach to learning the scientific method. In 7th Grade, our students become familiar with applying the scientific method to class projects and beyond. We focus on matter, atoms and molecules, mass, volume, and density. These concepts help students understand the layering and structure of Earth’s atmosphere, water, crust, and interior.
Our 7th and 8th Grade students have the opportunity to attend the Teton Science School every other year in Jackson, Wyoming. Our students attend the school with four faculty members in September for four science-filled days. The Teton School provides a hands-on science learning program. The school fosters leadership skills and builds awareness of the unique natural history of the Greater Yellowstone Geo-ecosystem. Our students experience lessons in field ecology, geology, animal tracking, field journaling, sketching and much more.
Language Arts at Blessed Sacrament Catholic School includes Literature and English. We follow Diocesan standards and curriculum to teach students in the following areas: comprehension strategies before, during, and after reading; vocabulary building; independence in reading; reading across genres; using writing process strategies and the 6+1 traits writing standards; and the use of process strategies during public speaking. The text, Voyages in English, is a rigorous grammar program with ongoing assessment. There will be in-depth study of unique writing genres, with writing skills lessons. Our students work on speaking and writing practice enabling them to communicate with clarity, accuracy, and ease. There will be many opportunities to apply skills using technology. Grade 7 students are required to submit a research paper using the MLA format.
We require 900 pages of independent reading per quarter. As part of our study of literature, students complete oral reports, book projects, participate in reading groups, and complete reading logs each quarter. Students reflect orally and in writing on their reading as they develop critical reading skills. Students are assessed through observation, conferencing, classwork, and written tests.
A Prentice Hall literature text is used. This text provides extensive instruction in reading strategies including predicting, questioning, re-reading, scanning, drawing inferences, determining word meaning from context, using prior knowledge, identifying main ideas, and summarizing. Assessment of skills occurs in post-selection materials. The text, Vocabulary In Action, is used. Weekly lessons that increase students’ literacy are assigned. Weekly assessments are required.
Our 7th Grade class novel is The Outsiders, by S. E. Hinton.
Our 7th Grade program is a general math class designed to prepare students for the 8th Grade emphasis on algebra. Students study:
- Number and Operations and Algebra and Geometry: Developing an understanding of and applying proportionality, including similarity.
- Measurement and Geometry and Algebra: Developing an understanding of and using formulas to determine surface areas and volumes of three-dimensional shapes.
- Number and Operations and Algebra: Developing an understanding of operations on all rational numbers and solving linear equations.
Using a five themed approach to geography, we investigate five areas using location, place, human and environment interaction, movement, and region. Students learn how civilizations developed in Africa, the Americas, Asia, and The Middle East. We include principles of Catholic social teaching and care for God’s creation. Students learn about current issues facing the world as well as discover potential opportunities for solutions.
In Social Studies, we incorporate the Reality Town program, which provides an excellent hands-on opportunity to teach our 6th and 7th Grade students about fiscal responsibility. As the students try to work within their income and experience the challenge of paying one month’s worth of bills on their income, they gain a greater appreciation for what they will experience as adults. Reality Town teaches students about the importance of living within their income and differentiating between a need and a want, as well as the value of education. We start the students off on a fact-finding mission on their computers. Students investigate career choices including fields of study, work environment, salary, and educational requirements.
Developing intermediate research skill characterizes 7th Grade library classes. More abstract concepts in the thinking and research processes are taught.
Students continue building the foundations for Spanish as a second language. Students write, read and have fun with Spanish related activities.
In 7th Grade music classes, we review and apply note reading, play instruments, practice singing techniques, and prepare for performances. All students perform in our annual Christmas Program and Spring Concert.
More complex art work is seen in 7th Grade art class. Students learn to incorporate contour and gesture lines, study color theory, study art history and the different styles of painting. Linoleum block printing is introduced along with sculpting techniques.
Improving presentation skills using technology is fun. Students continue honing their skills in writing and internet literacy through collaborative group research and presentation projects.
The Middle School Exploratory program gives 6th through 8th Grade students the unique opportunity to “explore” a wide variety of subjects. Based on student interests, we learn about everything from interior design and origami to game design and photography. The second semester of Exploratory is dedicated to learning about different aspects of filmmaking and film history. We end the year with a special Middle School Film Festival to showcase our student-made movies.
Through physical activity and discussion, we promote healthy growth, development and maintenance through physical education. Students identify the components of good health and skills related to activities for developing lifetime respect of exercise and sports. |
In June, researchers from the University of Rochester announced they had located a potential planet around another star so young that it defied theorists' explanations. Now a new team of Rochester planet-formation specialists are backing up the original conclusions, saying they've confirmed that the hole formed in the star's dusty disk could very well have been formed by a new planet. The findings have implications for gaining insight into how our own solar system came to be, as well as finding other possibly habitable planetary systems throughout our galaxy.
"The data suggests there's a young planet out there, but until now none of our theories made sense with the data for a planet so young," says Adam Frank, professor of physics and astronomy at the University of Rochester. "On the one hand, it's frustrating; but on the other, it's very cool because Mother Nature has just handed us the planet and we've got to figure out how it must have been created."
Intriguingly, working from the original team's data, Frank, Alice Quillen, Eric Blackman, and Peggy Varniere revealed that the planet was likely smaller than most extra-solar planets discovered thus far—about the size of Neptune. The data also suggested that this planet is about the same distance from its parent star as our own Neptune is from the Sun. Most extra-solar planets discovered to date are much larger and orbit extremely close to their parent star.
The original Rochester team, led by Dan Watson, professor of physics and astronomy, used NASA's new Spitzer Space Telescope to detect a gap in the dust surrounding a fledgling star. The critical infrared "eyes" of the infrared telescope were designed in part by physics and astronomy professors Judith Pipher, William Forrest, and Watson, a team that has been among the world leaders in opening the infrared window to the universe. It was Forrest and Pipher who were the first U.S. astronomers to turn an infrared array toward the skies: In 1983, they mounted a prototype infrared detector onto the University telescope in the small observatory on top of the Wilmot Building on campus, taking the first-ever telescopic pictures of the moon in the infrared, a wavelength range of light that is invisible to the naked eye as well as to most telescopes.
The discovered gap strongly signaled the presence of a planet. The dust in the disk is hotter in the center near the star and so radiates most of its light at shorter wavelengths than the cooler outer reaches of the disk. The research team found that there was an abrupt dearth of light radiating at all short infrared wavelengths, strongly suggesting that the central part of the disk was absent. Scientists know of only one phenomenon that can tunnel such a distinct "hole" in the disk during the short lifetime of the star—a planet at least 100,000 years old.
This possibility of a planet on the order of only 100,000 to half a million years old was met with skepticism by many astronomers because neither of the leading planetary formation models seemed to allow for a planet of this age. Two models represent the leading theories of planetary formation: core accretion and gravitational instability. Core accretion suggests that the dust from which the star and system form begins to clump together into granules, and those granules clump into rocks, asteroids, and planetoids until whole planets are formed. But the theory says it should take about 10 million years for a planet to evolve this way—far too long to account for the half-million-year-old planet found by Watson.
Conversely, the other leading theory of planetary formation, gravitational instability, suggests that whole planets could form essentially in one swoop as the original cloud of gas is pulled together by its own gravity and becomes a planet. But while this model suggests that planetary formation could happen much faster—on the order of centuries—the density of the dust disk surrounding the star seems to be too sparse to support this model either.
"Even though it doesn't fit either model, we've crunched the numbers and shown that yes, in fact, that hole in that dust disk could have been formed by a planet," says Frank. "Now we have to look at our models and figure out how that planet got there. At the end of it all, we hope we have a new model, and a new understanding of how planets come to be."
This research was funded by the National Science Foundation. |
Grades: PreK - 1
Page Count: 32
Children will develop important early childhood skills by tracing, writing, and learning numbers and counting from 1-20. Kids will use objects to help them understand numerals and quantities, count in sequence, and the concept of “more than” or “less than.” The full-color pages provide plenty of practice to help build learning confidence and increase school success.
|
Bartonella are bacteria that infect humans and other animals, living inside the lining of the blood vessels. Bartonella was discovered in 1905 by Alberto Barton in Peru, when he noticed an outbreak of an unknown sickness among the foreign workers who worked for the railway. Bartonella was carried by fleas and lice, and the illness was known as “cat scratch fever”. It was proven that people could still develop the disease from tick bites, with no exposure to cats. Many patients from the railway were transferred to Guadalupe Hospital; fourteen of them were studied by Barton. Barton found that these patients had bacilli within their red blood cells, which would change to a cocci shape if the patient survived the severe acute phase. Barton found that if the patients developed lesions, the bacteria would disappear from the peripheral blood system.
Alberto Barton was born in Buenos Aires, Argentina, in 1870 and was the fourth of nine brothers. His father was a Uruguayan chemist who, along with his wife, traveled to Peru in 1874, exposing Alberto to Peru for the first time. Alberto Barton received a grant for training in tropical diseases and bacteriology in Edinburgh and at the London School of Tropical Medicine. He returned to Peru and began working at the Guadalupe Hospital. This is when he began his first research activities. |
Lake Water Quality
Monitoring water quality in lakes and reservoirs is key to maintaining safe water for drinking, bathing, fishing, and agriculture and aquaculture activities. Long-term trends and short-term changes are indicators of environmental health and of changes in the water catchment area. Directives such as the EU's Water Framework Directive or the US EPA Clean Water Act request information about the ecological status of all lakes larger than 50 ha. Satellite monitoring helps to systematically cover a large number of lakes and reservoirs, reducing the need for monitoring infrastructure (e.g., vessels) and effort.
The Lake Water Products (lake water quality, lake surface water temperature) provide a semi-continuous observation record for a large number (nominally 4,200) of medium and large-sized lakes, selected according to the Global Lakes and Wetlands Database (GLWD) or otherwise of specific environmental monitoring interest. In addition to the lake surface water temperature, which is provided separately, this record consists of three water quality parameters:
- The turbidity of a lake describes water clarity, or whether sunlight can penetrate deeper parts of the lake. Turbidity often varies seasonally, both with the discharge of rivers and growth of phytoplankton (algae and cyanobacteria).
- The trophic state index is an indicator of the productivity of a lake in terms of phytoplankton, and indirectly (over longer time scales) reflects the eutrophication status of a water body.
- Finally, the lake surface reflectances describe the apparent colour of the water body, intended for scientific users interested in further development of algorithms. The reflectance bands can also be used to produce true-colour images by combining the visual wavebands. |
It is rightly said that a child learns more by doing things than by memorizing them. Science allows students to learn about the things around them by doing experiments, and performing activities helps them retain information easily. Keeping this in mind, an intriguing activity was organized for students of Grade 3 to learn the properties of water. Through this activity, they got a chance to learn through experiential learning. Concepts related to science cannot be explained to students using the rote-learning method; they need to be integrated with fun activities so that students feel connected with the subject and grasp the concepts easily. |
The Research Brief is a short take about interesting academic work.
The big idea
When computer science courses are delivered through career and technical education in high school, the courses can help students with learning disabilities feel better about their ability to succeed in STEM. The classes also help the students see the usefulness of computer science.
We used national survey data from more than 20,000 students across the country to dig into this connection between computer science and science, technology, engineering or mathematics, a group of subjects generally known as STEM.
In our work, we found that – compared with other students with learning disabilities – those who took computer science courses in a career and technical education program were more likely to believe they could succeed in STEM. They were also more likely to believe STEM was useful for future employment or college options.
We also found that – within career and technical education programs – students with learning disabilities were just as likely to take computer science courses as students without learning disabilities. All our findings were still evident even after we took into account key student characteristics, such as family income, first language, gender and racial or ethnic identity.
Students with learning disabilities in our study are those who have a disability that affects their learning to write, read, spell or perform mathematical calculations.
Why it matters
Computer science is one of the fastest-growing fields in the current economy. Employment experts predict a 13% increase – about 667,000 new jobs – in these computer occupations from 2020 to 2030. That’s more than three times the rate of anticipated overall job growth.
However, there have not been enough computer science graduates in recent years to fill these jobs.
Based on our work, computer science courses appear to help students with learning disabilities develop positive attitudes toward STEM. These attitudes are linked to persistence in both computer science and STEM more generally. This makes it important for educators to encourage students to study, and stick with, computer science and STEM and make sure these students have access to these courses.
At the moment, students with learning disabilities are underrepresented in computer science fields in college and the labor market. Specifically, fewer than 8% of students in undergraduate computer science programs have any disability. This is compared with about 19% of all undergraduates.
What still isn’t known
A big question that remains is why students with learning disabilities don’t persist in computer science fields in college and, ultimately, pursue careers in the field. Even though computer science courses in high school help develop confidence and a sense of purpose, that may not be enough to encourage them to stick with it longer term.
One possible explanation might be that students with learning disabilities don’t see themselves as part of the STEM community. In our research, we looked to see if there was a link between computer science coursework and a feeling of STEM community membership. We found this connection for general education students but not for students with learning disabilities.
Another possible explanation may be that students with learning disabilities start high school with lower levels of STEM confidence and less of a sense that computer science will be useful to them in the future. Just participating in computer science courses may not be enough to make up the difference in this regard.
One important next step will be to look at the factors that help students with learning disabilities keep studying computer science and STEM. For example, does a positive attitude toward STEM actually lead students with learning disabilities to study computer science or pursue careers in the field? We plan to explore such a question in future work.
Jay S. Plasman is an assistant professor of workforce development and education in the Career and Technical Education program, College of Education and Human Ecology at Ohio State. He receives funding from the National Science Foundation and the US Department of Education, Institute of Education Sciences.
Shaun M. Dougherty is an associate professor of public policy and education at Vanderbilt University. |
Government control measures all over the world keep business cycles in check. What has gone nearly uncontrolled over time is the almost continuous increase in the general price level (the problem of inflation). The problem of inflation became more acute from the early 1970s onward and emerged as the most intractable economic problem for both theoreticians and policy-makers all over the world. Inflation has been a common problem of developed and developing economies alike.
“Inflation means generally a considerable and persistent rise in the general level of prices or the cost of living.”
A decline in the value of money.
The general trend of the prices of goods and services over time is called the price level. A sustained rise in the general price level is called inflation.
During the period of inflation, purchasing power of money declines.
When the general price level rises, each unit of currency buys fewer goods and services.
Inflation reflects reduction in purchasing power per unit of money
But falling inflation does not mean falling prices, and a slowdown in inflation does not mean deflation: for deflation to occur, inflation has to be negative.
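As a simple worked illustration (the index numbers below are hypothetical), the annual inflation rate can be computed from a price index P as
\[ \pi_t = \frac{P_t - P_{t-1}}{P_{t-1}} \times 100\% , \qquad \text{e.g.} \quad \frac{105-100}{100}\times 100\% = 5\%, \qquad \frac{107.1-105}{105}\times 100\% = 2\%. \]
Here inflation slows from 5% to 2% between the two years, yet the price level still rises from 105 to 107.1; only if the rate turned negative would there be deflation.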
A moderate rate of inflation is considered desirable for the economy. The limit of desirable inflation varies from country to country and from time to time.
Based on past experience, it is sometimes suggested that 1-2% inflation in developed countries and 4-6% inflation in less developed countries is an appropriate and desirable limit of moderate inflation.
So, as long as:
- the general price level rises at an annual average rate of 2-3% in developed countries and 4-5% in less developed countries, and
- macro-variables are not adversely affected by the price rise,
the price rise is not considered inflationary and policy measures to control inflation are not required, because controlling inflation under these conditions may distort the price system and disturb employment and the growth process. |
Hyperactivity means an increase in movement, impulsive actions, being easily distracted, and shorter attention span. Some people believe that children are more likely to be hyperactive if they eat sugar, artificial sweeteners, or certain food colorings. Other experts disagree with this.
Some people claim that eating sugar (such as sucrose), aspartame, and artificial flavors and colors lead to hyperactivity and other behavior problems in children. They argue that children should follow a diet that limits these substances.
Activity levels in children vary with their age. A 2-year old is most often more active, and has a shorter attention span, than a 10-year old.
A child's attention level also will vary depending on his or her interest in an activity. Adults may view the child's level of activity differently depending on the situation. For example, an active child at the playground may be OK. However, a lot of activity late at night may be viewed as a problem.
In some cases, a special diet of foods without artificial flavors or colors works for a child, because the family and the child interact in a different way when the child eliminates these foods. These changes, not the diet itself, may improve the behavior and activity level.
Refined (processed) sugars may have some effect on children's activity. Refined sugars and carbohydrates enter the bloodstream quickly. Therefore, they cause rapid changes in blood sugar levels. This may make a child become more active.
Several studies have shown a link between artificial colorings and hyperactivity. On the other hand, other studies do not show any effect. This issue is yet to be decided.
There are many reasons to limit the sugar a child has other than the effect on activity level.
- A diet high in sugar is a major cause of tooth decay.
- High-sugar foods tend to have fewer vitamins and minerals. These foods may replace foods with more nutrition. High-sugar foods also have extra calories that can lead to obesity.
- Some people have allergies to dyes and flavors. If a child has a diagnosed allergy, talk to a dietitian.
- Add fiber to your child's diet to keep blood sugar levels more even. For breakfast, fiber is found in oatmeal, shredded wheat, berries, bananas, and whole-grain pancakes. For lunch, fiber is found in whole-grain breads, peaches, grapes, and other fresh fruits.
- Provide "quiet time" so that children can learn to calm themselves at home.
- Talk to your health care provider if your child cannot sit still when other children of his or her age can, or cannot control impulses.
Diet - hyperactivity
Ditmar MF. Behavior and development. In: Polin RA, Ditmar MF, eds. Pediatric Secrets. 7th ed. Philadelphia, PA: Elsevier; 2021:chap 2.
Katzinger J, Murray MT, Lyon MR. Attention deficit hyperactivity disorder. In: Pizzorno JE, Murray MT, eds. Textbook of Natural Medicine. 5th ed. St Louis, MO: Elsevier; 2021:chap 151.
Sawni A, Kemper KJ. Attention deficit disorder. In: Rakel D, ed. Integrative Medicine. 4th ed. Philadelphia, PA: Elsevier; 2018:chap 7.
Review Date 5/24/2021
Updated by: Neil K. Kaneshiro, MD, MHA, Clinical Professor of Pediatrics, University of Washington School of Medicine, Seattle, WA. Also reviewed by David Zieve, MD, MHA, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team. |
Influenza, commonly known as “the flu”, is an infectious disease caused by the influenza virus. Symptoms can be mild to severe. The most common symptoms include: a high fever, runny nose, sore throat, muscle pains, headache, coughing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children there may be nausea and vomiting but these are not common in adults. Nausea and vomiting occur more commonly in the unrelated infection gastroenteritis, which is sometimes inaccurately referred to as “stomach flu” or “24-hour flu”. Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.
Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are sick. The infection may be confirmed by testing the throat, sputum, or nose for the virus.
Influenza spreads around the world in a yearly outbreak, resulting in about three to five million cases of severe illness and about 250,000 to 500,000 deaths. In the Northern and Southern parts of the world outbreaks occur mainly in winter, while in areas around the equator outbreaks may occur at any time of the year. Death occurs mostly in the young, the old and those with other health problems. Larger outbreaks known as pandemics are less frequent. In the 20th century three influenza pandemics occurred: Spanish influenza in 1918, Asian influenza in 1957, and Hong Kong influenza in 1968, each resulting in more than a million deaths. The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June of 2009. Influenza may also affect other animals, including pigs, horses and birds.
Frequent hand washing reduces the risk of infection because the virus is inactivated by soap. Wearing a surgical mask is also useful. Yearly vaccination against influenza is recommended by the World Health Organization for those at high risk. The vaccine is usually effective against three or four types of influenza. It is usually well tolerated. A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly. Antiviral drugs such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza. Their benefits in those who are otherwise healthy do not appear to be greater than their risks. No benefit has been found in those with other health problems. |
What is Periodontal (Gum) Disease?
The term “periodontal” means “around the tooth.” Periodontal disease (also known as periodontitis and gum disease) is a common inflammatory condition that affects the supporting and surrounding soft tissues of the tooth, eventually affecting the jawbone itself in the disease’s most advanced stages.
Periodontal disease is most often preceded by gingivitis which is a bacterial infection of the gum tissue. A bacterial infection affects the gums when the toxins contained in plaque begin to irritate and inflame the gum tissues. Once this bacterial infection colonizes in the gum pockets between the teeth, it becomes much more difficult to remove and treat. Periodontal disease is a progressive condition that eventually leads to the destruction of the connective tissue and jawbone. If left untreated, it can cause shifting teeth, loose teeth, and eventually tooth loss.
Periodontal disease is the leading cause of tooth loss among adults in the developed world and should always be promptly treated.
Types of Periodontal Disease
When left untreated, gingivitis (mild gum inflammation) can spread to below the gum line. When the gums become irritated by the toxins contained in plaque, a chronic inflammatory response causes the body to break down and destroy its own bone and soft tissue. There may be few or no symptoms as periodontal disease causes the teeth to separate from the infected gum tissue. Deepening pockets between the gums and teeth are generally indicative that soft tissue and bone are being destroyed by periodontal disease.
Here are some of the most common types of periodontal disease:
Chronic periodontitis – Inflammation within the supporting tissues causes deep pockets and gum recession. It may appear that the teeth are lengthening, but in actuality, the gums (gingiva) are receding. This is the most common form of periodontal disease and is characterized by progressive loss of attachment, interspersed with periods of rapid progression.
Aggressive periodontitis – This form of gum disease occurs in an otherwise clinically healthy individual. It is characterized by rapid loss of gum attachment, chronic bone destruction and familial aggregation.
Necrotizing periodontitis – This form of periodontal disease most often occurs in individuals suffering from systemic conditions such as HIV, immunosuppression and malnutrition. Necrosis (tissue death) occurs in the periodontal ligament, alveolar bone and gingival tissues.
Periodontitis caused by systemic disease – This form of gum disease often begins at an early age. Medical conditions such as respiratory disease, diabetes and heart disease are common cofactors.
Treatment for Periodontal Disease
There are many surgical and nonsurgical treatments the periodontist may choose to perform, depending upon the exact condition of the teeth, gums and jawbone. A complete periodontal exam of the mouth will be done before any treatment is performed or recommended.
Here are some of the more common treatments for periodontal disease:
Scaling and root planing – In order to preserve the health of the gum tissue, the bacteria and calculus (tartar) which initially caused the infection, must be removed. The gum pockets will be cleaned and treated with antibiotics as necessary to help alleviate the infection. A prescription mouthwash may be incorporated into daily cleaning routines.
Tissue regeneration – When the bone and gum tissues have been destroyed, regrowth can be actively encouraged using grafting procedures. A membrane may be inserted into the affected areas to assist in the regeneration process.
Pocket elimination surgery – Pocket elimination surgery (also known as flap surgery) is a surgical treatment which can be performed to reduce the pocket size between the teeth and gums. Surgery on the jawbone is another option which serves to eliminate indentations in the bone which foster the colonization of bacteria.
Dental implants – When teeth have been lost due to periodontal disease, the aesthetics and functionality of the mouth can be restored by implanting prosthetic teeth into the jawbone. Tissue regeneration procedures may be required prior to the placement of a dental implant in order to strengthen the bone.
Please contact our office if you have questions or concerns about periodontal disease, periodontal treatment, or dental implants. |
How reducing the digital divide can enhance inclusive education
Since the beginning of the COVID pandemic, inclusive education has become more important than ever. Today, European governments are confronted with the challenge of securing high-quality education and learning opportunities for all.
On 7 December, European Schoolnet brought together European ministers of education, policymakers, and industry leaders to discuss how digital education can make the world more inclusive, at its annual flagship event, EMINENT 2021.
Inclusive education in the post-COVID world
The current global situation and the increasing switch to digital technologies revealed both innovation and inequality.
In the current context, new technologies such as artificial intelligence (AI) are driving innovation and offering great opportunities for distance teaching and learning to schools, teachers and students. However, the crisis has also highlighted inequalities and revealed important gaps in skills and competences.
"Continuous learning – life-long and life-wide – has never been as urgent as it is today. Digitalisation has an important role to play, especially when opening up new learning paths, regardless of the learners' life situation", - said Li Andersson, Finnish Minister of Education, during the Eminent 2021 opening session.
Before the crisis, the digital divide was usually perceived as a lack of skills and general readiness for the digital world. However, the crisis exposed the problem of outdated infrastructure, lack of connectivity or insufficient bandwidth at home, and a shortage of devices for digital access in households. Primarily, it affects vulnerable and underprivileged families who cannot afford to purchase digital devices or a stable and fast Internet connection.
Reducing the digital divide for a more inclusive society is especially crucial for people with special needs. Modern technologies and methodologies can provide universal and personal learning design for each pupil or student. "It is my strong belief that ICT can make the biggest difference for pupils with special needs. For those students, technology can make the difference between a social life or isolation, between the ability to learn or a lack of opportunity," said Jan De Craemer, Chair of European Schoolnet.
Artificial intelligence (AI) and other emerging technologies are becoming increasingly embedded in our daily lives, and it is essential to help people understand AI and its importance in order to develop teaching and learning opportunities.
Professor Rose Luckin, Professor of Learner Centred Design at the UCL Knowledge Lab in London, presented at the conference different examples of how emerging technologies can be used in current learning scenarios, such as language learning.
She explained that two key features of AI in the form of machine learning are adaptivity and autonomy. The ability to adapt means that a machine-learning system can learn how a student interacts with the technology, and this learning enables the system to adapt very effectively to the individual needs of that student, based on real-time data.
But Prof. Luckin, in line with other keynote speakers, stressed that technology should be seen as a helping tool and never as a replacement for teachers or real human interaction: "AI can provide quick analysis of where students need help and support. So, it can help teachers be the most effective they can be. But after all, the human teacher is certainly still the most important resource in any education system," she added.
At the same time, we should not forget to adapt digital technologies to each national context, looking at legislation, ethics, equal opportunities, personal data protection, and privacy.
Ulf Matysiak, CEO of Teach First Deutschland, noted that there is a visible lack of learning materials related to digital education in languages other than English. Translating and releasing such materials in more languages could positively affect inclusive education and help narrow the digital divide.
Inclusion at the center of European policy
As highlighted during EMINENT 2021 by the Slovenian Minister of Education, Science and Sport, Dr Simona Kustec, the Council of the European Union has adopted a recommendation on blended learning approaches to achieve the goal of high-quality and inclusive primary and secondary education. Blended learning can be defined as taking more than one approach to the learning process, combining the school site and other physical environments away from the school site, as well as digital (including online learning) and non-digital learning tools.
"In the past one and a half year, home schooling and distance learning became a new reality for many pupils, teachers and parents. While we all hope we can overcome this pandemic as quickly as possible we should keep some of the learnings of this period in mind when looking at the future of education. I encourage us all to explore how blending different teaching environments as well as learning tools such as face-to-face and digital learning can make our education better equipped for the future."
As the Ministers of Education speaking at the conference explained, EU member states are already designing new initiatives and ambitious reforms to find new models of learning, and have identified digital education as a strategic priority in their post-pandemic Recovery and Resilience Facility plans. But to implement those plans, European cooperation, and in particular the support of the EU Recovery and Resilience Facility instrument, will be crucial.
According to Georgi Dimitrov, Head of Unit "Digital Education" at European Commission at DG EAC: "Member States will need more dedicated support, guidance and leadership from the EC to continue investing smartly, while undertaking the necessary policy reforms in enabling effective and inclusive digital education."
To discover more about the new national reforms and innovative approaches proposed by the Ministers of Education from Croatia, Finland, Greece, Hungary, Italy, Malta, Portugal, Slovenia and Spain during the EMINENT conference, click here and watch our conference.
About EMINENT: every year, European Schoolnet organises its annual EMINENT conference, an expert meeting in education networking that brings stakeholders from across Europe together to discuss the next challenges and potential solutions to ensure an innovative education for all. This year, the event was hosted online on 7 December 2021, generating great interest among a diverse and wide audience, with more than 780 registrations from 35 countries and almost 1,200 visitors to the virtual exhibition who followed the conference.
|
Digital Safety builds a strong sense of digital responsibility and clarifies healthy choices in the digital world. Using previously learned personal safety skills, students will learn the 4 Rules for Online Safety and apply them to their own use of the internet.
Digital Dangers introduces the students to various challenges and dangerous situations they might encounter while online. These include cyberbullying, online predators, and inappropriate content. Students will learn how to spot Red Flags - warning signs that something might not be safe online. |
Periodontal (gum) disease is an infection caused by bacterial plaque, a thin, sticky layer of microorganisms (called a biofilm) that collects at the gum line in the absence of effective daily oral hygiene. Left for long periods of time, plaque will cause inflammation that can gradually separate the gums from the teeth — forming little spaces that are referred to as “periodontal pockets.” The pockets offer a sheltered environment for the disease-causing (pathogenic) bacteria to reproduce. If the infection remains untreated, it can spread from the gum tissues into the bone that supports the teeth. Should this happen, your teeth may loosen and eventually be lost.
When treating gum disease, it is often best to begin with a non-surgical approach consisting of one or more of the following:
- Scaling and Root Planing. An important goal in the treatment of gum disease is to rid the teeth and gums of pathogenic bacteria and the toxins they produce, which may become incorporated into the root surface of the teeth. This is done with a deep-cleaning procedure called scaling and root planing (or root debridement). Scaling involves removing plaque and hard deposits (calculus or tartar) from the surface of the teeth, both above and below the gum line. Root planing is the smoothing of the tooth-root surfaces, making them more difficult for bacteria to adhere to.
- Antibiotics/Antimicrobials. As gum disease progresses, periodontal pockets and bone loss can result in the formation of tiny, hard to reach areas that are difficult to clean with handheld instruments. Sometimes it's best to try to disinfect these relatively inaccessible places with a prescription antimicrobial rinse (usually containing chlorhexidine), or even a topical antibiotic (such as tetracycline or doxycyline) applied directly to the affected areas. These are used only on a short-term basis, because it isn't desirable to suppress beneficial types of oral bacteria.
- Bite Adjustment. If some of your teeth are loose, they may need to be protected from the stresses of biting and chewing — particularly if you have teeth-grinding or clenching habits. For example, it is possible to carefully reshape minute amounts of tooth surface enamel to change the way upper and lower teeth contact each other, thus lessening the force and reducing their mobility. It's also possible to join your teeth together with a small metal or plastic brace so that they can support each other, and/or to provide you with a bite guard to wear when you are most likely to grind or clench your teeth.
- Oral Hygiene. Since dental plaque is the main cause of periodontal disease, it's essential to remove it on a daily basis. That means you will play a large role in keeping your mouth disease-free. You will be instructed in the most effective brushing and flossing techniques, and given recommendations for products that you should use at home. Then you'll be encouraged to keep up the routine daily. Becoming an active participant in your own care is the best way to ensure your periodontal treatment succeeds. And while you're focusing on your oral health, remember that giving up smoking helps not just your mouth, but your whole body.
Often, nonsurgical treatment is enough to control a periodontal infection, restore oral tissues to good health, and tighten loose teeth. At that point, keeping up your oral hygiene routine at home and having regular checkups and cleanings at the dental office will give you the best chance to remain disease-free.
|
Although girls and boys had different roles in Sioux society, their parents were not disappointed if they had a daughter. Parents in this Native American tribe doted on all of their children. Jonathan Carver, an explorer who visited the Sioux on the Great Plains in the mid-eighteenth century, observed that “Nothing can exceed the tenderness shown to them by their offspring.” Sioux children rarely got spanked and their parents allowed them to make decisions. Unlike American families today, which usually have only two parents and their children in one home, in a Sioux family, children lived with their parents as well as aunts, uncles, and other extended family members. In this way, Sioux children received extra attention and advice because they had many adults to look after them.
Like American kids, Sioux boys and girls played with toys. Their toys would prepare them for their roles in the community. Girls played with dolls and small tepees to prepare them for motherhood and domestic tasks. Boys played with bows and arrows, which would be sharpened when they were older so they could practice the skills they needed to become braves. By age eight, boys and girls spent more time with their elders.
Girls learned to plant, harvest, sew, and cook alongside their mothers. Cooking must have been a challenge based on the variety of meat Carver saw the women preparing. He wrote, “All their victuals are either roasted or boiled…their food usually consists of the flesh of the bear, the buffalo, the elk, the deer, the beaver, and the raccoon.” The Sioux did not forget to eat their vegetables, either. They ate corn, which the women harvested, as well as the inside bark of a shrub that Carver was not familiar with, but he said it tasted good. Women were also responsible for cleaning and decorating the family home—the tepee. By the time a girl became a teenager, she looked forward to marrying a Sioux brave and using her new skills as a wife.
Boys spent their preteen years learning to ride horses and shoot moving targets. They also learned to shoot on horseback. These skills were important because men were expected to hunt the food and bring it to the women. Also, Native American tribes rarely got along with each other so boys needed to know the skills of warfare. By the age of fifteen, young men could join the other warriors.
Prior to becoming a warrior, however, boys were initiated into manhood through their “vision quest.” The young man entered a hut called a sweat lodge with his elders. Heated rocks were brought into the hut and cold water was poured over them. The steam that was created purified the boy’s soul. Then he spent four days alone on a hilltop without eating. The quest would prove the boy’s bravery as well as his willpower since he would hear strange noises outside at night. During this time, the boy prayed that he would have dreams that would help him decide what he would do when he grew up. After the four days, an elder brought the boy home and interpreted his dreams. Grown men and occasionally women would participate in more than one vision quest if they felt the need for guidance; however, the first vision quest for a boy was the most important. |
Learn problem-solving and analytical thinking through coding
Book your free coding class now
Check out the benefits of learning to code
Coding helps develop problem-solving skills
Through coding, a child learns how to identify a problem statement and then develop an app, website or a code to solve these problems.
Coding helps develop analytical thinking
In programming, kids learn how to "handle errors" - that is, to anticipate problems that are likely to emerge and then write code that prevents those problems from happening, or to correct the code when an error does happen, as in the simple sketch below.
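For example, a first error-handling check might look something like this small C++ sketch (a generic illustration, not taken from Codingal's curriculum):
#include <iostream>
int main()
{
    int age = 0;
    std::cout << "How old are you? ";
    if (!(std::cin >> age) || age < 0) {   // anticipate bad input instead of letting the program misbehave
        std::cout << "That doesn't look like an age. Please enter a whole number.\n";
        return 1;                          // report the problem and stop gracefully
    }
    std::cout << "Next year you will be " << age + 1 << "!\n";
    return 0;
}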
Coding helps develop structural thinking
Coding helps a child learn how to build small pieces of code and then get these different blocks of code to work together, teaching them how to think structurally.
Coding simplifies complex math concepts
It's a misconception that only someone who's good at math can code. In fact, coding shows the practical application of math concepts, and kids learn how to use mathematical concepts to move a problem forward.
Coding helps develop resilience
The act of debugging a problem in coding teaches kids resilience. They learn how to test multiple ways of solving a problem if one approach does not work.
Want to drive overall development for your child?
Two weeks ago, I could not write basic code. But, just after 7 classes at Codingal, I can now build an app
Through Codingal I realised that coding is not something very difficult and complex but actually interesting and fun!
Codingal is on a mission to inspire school kids to fall in love with coding. Coding is proven to develop creativity, logical thinking and problem solving skills in kids. Coding is an essential skill of the future and more than 60% of all jobs in STEM-related fields require knowledge of code. Kids who start learning to code at a young age are guaranteed to become leaders, creators and entrepreneurs of the future. |
The Essentials of Human Eye Anatomy
The human eye is made up of three main structures. The iris, lens, and posterior chamber form the anterior segment, while the remaining structures make up the posterior segment. At the front of the eye, the transparent cornea is surrounded by the white sclera; the two meet at a border called the limbus. The pupil is the black circular opening in the center of the iris that regulates the amount of light entering the eye: in bright light the pupil contracts, and in dim light it dilates. The iris, the colored part of the eye, adjusts the size of the pupil by means of the muscles it contains. The ciliary body is another important part of the eye.

The retina is the light-sensitive layer, and it generates the impulses that are sent to the brain. The sclera also holds the eyeball in shape, and the muscles that move the eyeball attach to it. The lens is connected to the ciliary body by the suspensory ligament. The upper eyelid covers the top portion of the eye when it is closed. Overall, the eye consists of three layers: the outer fibrous layer (the cornea and sclera), the middle vascular layer (which includes the choroid), and the innermost layer (the retina). The cornea and the sclera give the eye its shape and protect it from external forces. The conjunctiva, a thin membrane, covers the front of the sclera and the inner surface of the eyelids.

Several diseases can affect the eye, involving the iris, cornea, or uvea (the pigmented part of the eye). Diabetes, injury, and infection are among the most common causes of retinal detachment, which requires immediate medical repair. The eye can also develop long-term conditions, including strabismus and glaucoma.

The retina is composed of two kinds of photoreceptor cells, called rods and cones. The rods are sensitive to low light, while the cones require more light and are responsible for acute vision; the retina contains around six million cones. The cones are thought to be sensitive to different primary colours, and other colours are perceived as combinations of these. The cornea, the transparent circular part at the front of the eyeball, has no blood vessels but is extremely sensitive to pain. The retina also includes a specialized layer of cells called the retinal pigment epithelium. Nerve fibres carry visual images, along with signals about brightness and colour, to the brain through the optic nerve. Where the optic nerve leaves the retina at the optic disc there are no photoreceptors, so the eye cannot detect light falling on that point; this is why it is referred to as the blind spot. Understanding this basic anatomy is the first step toward understanding eye problems and how they are corrected. |
- 1 Introduction
- 2 Educational Implications
- 2.1 To Understand the Child Problems
- 2.2 To Understand the Individual Difference
- 2.3 To Choose the Appropriate Teaching Methods
- 2.4 Assists in Classroom Environment
- 2.5 Useful in the Organization of Various School Activities
- 2.6 To Keep a Track on the Student’s Academic Performance
- 2.7 To Assess the Overall Development Process
- 2.8 To Presume the Students’ Behavior
- 3 Conclusion
It is quite important for teachers as well as parents to become acquainted with the principles of growth and development, which can guide them in their work with children. These principles, and their implications for education, can be described as follows.
The principles of growth and development are important in education, and their educational implications include the following:
To Understand the Child Problems
The knowledge of these principles of growth and development gives the teacher a chance to understand the kinds of problems a student may have, which can assist in the academic as well as the personality development of the child.
To Understand the Individual Difference
If a teacher knows these principles of growth and development, he or she can take individual differences into account while assessing a child's performance in the class.
To Choose the Appropriate Teaching Methods
The knowledge of these principles of growth and development can help the teacher choose appropriate teaching methods, as well as the teaching aids to be used in the classroom, for the betterment of children's development.
Assists in Classroom Environment
A teacher who knows the principles of growth and development can provide an open environment in the classroom, which helps students feel free to raise any doubt and take part in discussions.
Useful in the Organization of Various School Activities
The school can also take advantage of these principles of growth and development by organizing various activities and events in accordance with the students' requirements.
To Keep a Track on the Student’s Academic Performance
The principles of growth and development also help the teacher keep track of students' academic performance: the marks they have obtained along with their areas for improvement.
To Assess the Overall Development Process
Specifically, the principles of growth and development assist the teacher as well as the school authorities in assessing a student's overall development process, which includes physical, mental, personality, and emotional development over time.
To Presume the Students’ Behavior
The principles of growth and development can also help the teacher predict students' behavior in the classroom and the prior knowledge a student may have about a topic, in line with his or her cognitive abilities.
Therefore, in conclusion, it can be said that these principles of growth and development can be applied in the field of education, with certain important educational implications that are beneficial for teachers to understand before teaching learners in the classroom, in order to enhance the teaching-learning process. |
As we’ve seen, a pointer is an object that can point to a different object. As a result, we can talk independently about whether a pointer is const and whether the objects to which it can point are const. We use the term top-level const to indicate that the pointer itself is a const. When a pointer can point to a const object, we refer to that const as a low-level const.
More generally, top-level const indicates that an object itself is const. Top-level const can appear in any object type, i.e., one of the built-in arithmetic types, a class type, or a pointer type. Low-level const appears in the base type of compound types such as pointers or references. Note that pointer types, unlike most other types, can have both top-level and low-level const independently:
int i = 0;
int *const p1 = &i;       // we can't change the value of p1; const is top-level
const int ci = 42;        // we can't change ci; const is top-level
const int *p2 = &ci;      // we can change p2; const is low-level
const int *const p3 = p2; // right-most const is top-level, left-most is not
const int &r = ci;        // const in reference types is always low-level
The auto Type Specifier
It is not uncommon to want to store the value of an expression in a variable. To declare the variable, we have to know the type of that expression. When we write a program, it can be surprisingly difficult–and sometimes even impossible–to determine the type of an expression. Under the new standard, we can let the compiler figure out the type for us by using the auto type specifier. Unlike type specifiers, such as double, that name a specific type, auto tells the compiler to deduce the type from the initializer. By implication, a variable that uses auto as its type specifier must have an initializer:
// the type of item is deduced from the type of the result of adding val1 and val2
auto item = val1 + val2; // item initialized to the result of val1 + val2
Here the compiler will deduce the type of item from the type returned by applying + to val1 and val2.
First, as we’ve seen, when we use a reference, we are really using the object to which the reference refers. In particular, when we use a reference as an initializer, the initializer is the corresponding object. The compiler uses that object’s type for auto’s type deduction:
int i = 0, &r = i;
auto a = r; // a is an int (r is an alias for i, which has type int)
Second, auto ordinarily ignores top-level consts. As usual in initializations, low-level consts, such as when an initializer is a pointer to const, are kept:
const int ci = i, &cr = ci;
auto b = ci;  // b is an int (top-level const in ci is dropped)
auto c = cr;  // c is an int (cr is an alias for ci whose const is top-level)
auto d = &i;  // d is an int* (& of an int object is int*)
auto e = &ci; // e is const int* (& of a const object is low-level const)
If we want the deduced type to have a top-level const, we must say so explicitly:
const auto f= ci; // deduced type of ci is int; f has type const int
We can also specify that we want a reference to the auto-deduced type. Normal initialization rules still apply:
auto &g = ci;       // g is a const int& that is bound to ci
auto &h = 42;       // error: we can't bind a plain reference to a literal
const auto &j = 42; // ok: we can bind a const reference to a literal
When we ask for a reference to an auto-deduced type, top-level consts in the initializer are not ignored. As usual, consts are not top-level when we bind a reference to an initializer.
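As a quick check of these deduction rules, the following sketch fails to compile if any of the claims above were wrong. It assumes a C++11 compiler and uses std::is_same from <type_traits> together with decltype, which is introduced in the next section:
#include <type_traits>
int main()
{
    int i = 0;
    const int ci = i;
    auto b = ci;        // top-level const dropped: b is int
    auto e = &ci;       // low-level const kept: e is const int*
    const auto f = ci;  // const stated explicitly: f is const int
    auto &g = ci;       // reference form keeps the const: g is const int&
    static_assert(std::is_same<decltype(b), int>::value, "b is int");
    static_assert(std::is_same<decltype(e), const int*>::value, "e is const int*");
    static_assert(std::is_same<decltype(f), const int>::value, "f is const int");
    static_assert(std::is_same<decltype(g), const int&>::value, "g is const int&");
    return 0;
}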
When we define several variables in the same statement, it is important to remember that a reference or pointer is part of a particular declarator and not part of the base type for the declaration. As usual, the initializers must provide consistent auto-deduced types:
auto k = ci, &l = i;    // k is int; l is int&
auto &m = ci, *p = &ci; // m is a const int&; p is a pointer to const int
// error: type deduced from i is int; type deduced from &ci is const int
auto &n = i, *p2 = &ci;
The decltype Type Specifier
Sometimes we want to define a variable with a type that the compiler deduces from an expression but do not want to use that expression to initialize the variable. For such cases, the new standard introduced a second type specifier, decltype, which returns the type of its operand. The compiler analyzes the expression to determine its type but does not evaluate the expression:
decltype(f()) sum= x; // sum has whatever type f returns
Here, the compiler does not call f, but it uses the type that such a call would return as the type for sum. That is, the compiler gives sum the same type as the type that would be returned if we were to call f.
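For instance, in the following minimal sketch (f is a hypothetical function declared only for illustration), sum gets the declared return type of f even though f is never defined or called:
#include <type_traits>
double f();                // declaration only; decltype does not call f
decltype(f()) sum = 0.0;   // sum has type double, the return type of f
static_assert(std::is_same<decltype(sum), double>::value, "sum is double");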
The way decltype handles top-level const and references differs subtly from the way auto does. When the expression to which we apply decltype is a variable, decltype returns the type of that variable, including top-level const and references:
const int ci = 0, &cj = ci;
decltype(ci) x = 0; // x has type const int
decltype(cj) y = x; // y has type const int& and is bound to x
decltype(cj) z;     // error: z is a reference and must be initialized
Because cj is a reference, decltype(cj) is a reference type. Like any other reference, z must be initialized.
It is worth noting that decltype is the only context in which a variable defined as a reference is not treated as a synonym for the object to which it refers.
decltype and References
When we apply decltype to an expression that is not a variable, we get the type that expression yields.
// decltype of an expression can be a reference type
int i = 42, *p = &i, &r = i;
decltype(r + 0) b; // ok: addition yields an int; b is an (uninitialized) int
decltype(*p) c;    // error: c is int& and must be initialized
Here r is a reference, so decltype(r) is a reference type. If we want the type to which r refers, we can use r in an expression, such as r+0, which is an expression that yields a value that has a nonreference type.
On the other hand, the dereference operator is an example of an expression for which decltype returns a reference. As we’ve seen, when we dereference a pointer, we get the object to which the pointer points. Moreover, we can assign to that object. Thus, the type deduced by decltype(*p) is int&, not plain int.
Another important difference between decltype and auto is that the deduction done by decltype depends on the form of its given expression. What can be confusing is that enclosing the name of a variable in parentheses affects the type returned by decltype. When we apply decltype to a variable without any parentheses, we get the type of that variable. If we wrap the variable’s name in one or more sets of parentheses, the compiler will evaluate the operand as an expression. A variable is an expression that can be the left-hand side of an assignment. As a result, decltype on such an expression yields a reference:
// decltype of a parenthesized variable is always a reference
decltype((i)) d; // error: d is int& and must be initialized
decltype(i) e;   // ok: e is an (uninitialized) int
Remember that decltype((variable)) (note, double parentheses) is always a reference type, but decltype(variable) is a reference type only if variable is a reference.
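A compact way to confirm both rules, again using std::is_same from <type_traits>, is the following sketch:
#include <type_traits>
int i = 42, *p = &i;
static_assert(std::is_same<decltype(i), int>::value, "unparenthesized variable: plain int");
static_assert(std::is_same<decltype((i)), int&>::value, "parenthesized variable is an lvalue expression, so int&");
static_assert(std::is_same<decltype(*p), int&>::value, "dereference yields an lvalue, so int&");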
Copyright notice: This article is an original work by NoMasp (柯于旺). Reproduction without permission is strictly prohibited. You are welcome to visit my blog: http://blog.csdn.net/nomasp |
In 1885, Charles Lallemand, director general of the geodetic measurement of altitudes throughout France, published a graphical calculator for determining compass course corrections for the ship, Le Triomphe. It is a stunning piece of work, combining measured values of magnetic variation around the world with eight magnetic parameters of the ship also measured experimentally, all into a very complicated formula for magnetic deviation calculable with a single diagram plus a transparent overlay. This chart has appeared in a number of works as an archetype of graphic design (e.g., The Handbook of Data Visualization) or as the quintessential example of a little-known graphical technique that preceded and influenced d’Ocagne’s invention of nomograms—the hexagonal chart invented by Lallemand himself. Here we will have a look at the use and design of this interesting piece of mathematics history, as well as its natural extension to graphical calculators based on triangular coordinate systems. Part I of this essay covers Lallemand’s L’Abaque Triomphe, while Part II covers the general theory of hexagonal charts and triangular coordinate systems. A printer-friendly Word/PDF version with more detailed images is linked at the end of the essay.
As an engineer Lallemand (1857-1928) [1, 2] created a number of ingenious devices to assist in determining altitudes, water and tides in France involving water levels, mercury baths, air bubbles, and other gauge techniques. Slow changes in these measurements led him to theoretical investigations of lunar tides in the Earth’s crust. He also created the modified polyconic form of map projection. Maurice d’Ocagne was his deputy from 1891 to 1901; his indebtedness to Lallemand is evident in his detailed treatment of hexagonal charts and his brief description of L’Abaque Triomphe in his masterpiece Traité de Nomographie and other works on nomography and its foundations [d’Ocagne 1891/1889/1921, Soreau 1902/1921].
What is Magnetic Deviation?
A well-constructed compass on a ship will fail to point to true (geographic) north due of two factors:
Magnetic variation (or magnetic declination): the angle between magnetic north and true north based on local direction of the Earth’s magnetic field, and
Magnetic deviation: the angle between the ship compass needle and magnetic north due to iron within the ship itself.
Magnetic variation has been mapped over most of the world since the year 1700, although it changes over time due to drifting of the magnetic poles of the Earth. The compass correction for magnetic variation can be made based on published magnetic variation tables.
Magnetic deviation arises from the magnetic effects of both hard and soft iron in the ship. Hard iron possesses permanent magnetism as well as semi-permanent magnetism imprinted by the Earth’s magnetic field under the pounding of the iron during the ship’s construction, or from traveling long distances in the same direction under the influence of this field. Collisions, lightning strikes and time will cause significant changes in this magnetism. External fields such as the Earth’s magnetic field induce magnetism in soft iron in the ship on a near real-time basis, an effect that varies with location as the Earth’s magnetic field varies in strength and direction. The combination of magnetic fields from the iron of a particular ship produces a magnetic field that affects the accuracy of compasses onboard that ship, sometimes dramatically. A detailed account of the origins and history of magnetic deviation can be found in another essay of mine.
The Equations of Magnetic Deviation
Lallemand’s L’Abaque Triomphe is shown below (a high-resolution version is also available). It provides a graphical means (an abaque) for calculating the magnetic deviation of the ship Le Triomphe for a given compass course and location on Earth using equations developed by Archibald Smith in 1843. The magnetic deviation essay mentioned above provides the background and analysis of these equations (where the mathematical derivation is given by a hyperlink in the online version of the essay and in the Appendix of the PDF version hyperlinked at the end of the webpage).
The magnetic deviation equations use both non-bold and bold variables A, B, C, D and E, as well as measured magnetic parameters of the ship. Here the angle ξ′ is the compass course, or the angle from north indicated by the compass needle, and δ is the magnetic deviation, or the angle correction to be applied to the compass course to counteract the effects of magnetic deviation.
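Assuming the chart follows the standard form of Smith's analysis for small deviations, the deviation is expressed as a series in the compass course:
\[ \delta = A + B\sin\xi' + C\cos\xi' + D\sin 2\xi' + E\cos 2\xi' \]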
where, at a given location of the ship, the coefficients B and C depend on the local horizontal intensity H and dip θ of the Earth's magnetic field, and A, D, E, λ, c, P, f and Q are parameters deduced for a particular ship. These formulas assume a magnetic deviation of less than about 20° in order that B and C can be expressed as simple arcsine functions, and so a certain amount of correction for magnetic deviation may be needed in the binnacle holding the compass. Also, the heeling of the ship, i.e., the leaning of the ship due to wind as well as transient rolling and pitching of the ship, is not taken into account in these equations.
Now the equations for magnetic deviation are provided along the top of Lallemand’s chart, along with the measured values of the ship magnetic parameters. The coefficients in bold in the equations are represented on the chart in their more traditional German Blackletter (Fraktur) font. Also, the term “ctg θ” on the chart should be understood as “c tan(θ)” and “ftg θ” should be understood as “f tan(θ)”.
You can see that there is a mistake in the printed formulas on the chart—the terms in the inner parentheses in B and C should be divided, not multiplied. With D given as 6°45′, the value of ½ sin D is relatively small (about 0.06), and the error has a quite small effect on the overall result. It is not clear whether these incorrect formulas were used in designing the chart or whether they are due to an error on the part of the letterer or printer. As I will discuss a bit later, I performed quite a few tests of the accuracy of this chart based on a model of the Earth’s magnetic field at that time; from those tests it appears that the chart design itself was based on the incorrect formulas, but the differences in the results are small and the inherent inaccuracies in the chart and model make the distinction difficult.
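For readers who want to reproduce the arithmetic behind the comparison spreadsheets later in the essay, the series itself is easy to evaluate once the coefficients are known. The short C++ sketch below uses placeholder values for A, B, C and E (only D = 6°45′ is quoted above for Le Triomphe), since B and C must first be computed from the ship parameters and the local values of H and θ:
#include <cmath>
#include <cstdio>

// Evaluate delta = A + B sin(xi) + C cos(xi) + D sin(2 xi) + E cos(2 xi),
// with the coefficients and the compass course xi given in degrees.
double deviation_deg(double A, double B, double C, double D, double E, double xi_deg)
{
    const double pi = 3.14159265358979323846;
    const double xi = xi_deg * pi / 180.0;
    return A + B * std::sin(xi) + C * std::cos(xi)
             + D * std::sin(2.0 * xi) + E * std::cos(2.0 * xi);
}

int main()
{
    // Placeholder coefficients except D; these are not Le Triomphe's measured values.
    std::printf("deviation = %.1f degrees\n", deviation_deg(0.5, 8.0, 6.0, 6.75, 0.2, 41.5));
    return 0;
}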
Using Lallemand’s Chart
Directions for the use of the L’Abaque Triomphe are provided along the bottom of the chart, and there is even an example in dashed lines worked out on the chart itself.
Let’s follow the dashed line example marked on the chart highlighted in the figure below. The ship Le Triomphe is located at latitude 42°N and longitude 20°W and has a compass heading (or compass course) of 41.5° (read clockwise from North).
Step 1: The navigator locates the lat/long point on the map along the left side, moves from this position horizontally to the radial line pointing to the 41.5° course along the top, and marks this point.
Step 2: The same lat/long position is found in the upper map on the right side and followed along the guide lines to the line pointing again to the 41.5° course along the edge, marking that point.
Step 3: A transparent or translucent overlay about the size of the paper and marked with a hexagon as shown in lower right of the figure is aligned square to the page with two of its radial arms crossing the two marked points from Steps 1 and 2. The Appendix of this essay contains a printable hexagonal overlay for use with the charts in this essay.
Step 4: The course correction is read from the intersection of the next hexagon arm and the deviation scale (11.8°).
The compass course therefore has an 11.8° easterly deviation from North, so it has to be adjusted to obtain a true course of 41.5°. It surprises me that the correction for magnetic variation is not included in the result, as we will see that it was used in the calculation of the magnetic deviation.
If the compass course were southerly (90° to 270°), step 2 would be performed based on the lower map on the right side rather than the upper map.
The Accuracy of Lallemand’s Chart
So how accurate is it? The U.S. Geological Survey has modeled the magnetic variation around the world over the last few centuries. The figures below show the horizontal component and inclination (dip) of the Earth’s magnetic field in 1884, the year prior to the creation of the abaque. One microTesla is equivalent to 1/100 Gauss, so for example the horizontal intensity of the magnetic variation in Paris in 1884 was 19µT or 0.19G.
We can insert values from these figures at different locations on Earth into Lallemand’s equations and compare the result to that obtained graphically from the abaque. It is important to note that the prime meridian (0° longitude) is located at Paris in Lallemand’s chart; the French did not accept Greenwich as the prime meridian until 1911.
Also, there is no indication of the units of the “magnetic force” used in Lallemand’s chart, and any units could be used since the constants would scale any units appropriately; unfortunately, there are no units listed with these constants. Initially I presumed that H would be magnetic flux density in units of Gauss, since Maxwell and Thomson extended the cgs system of units with such electromagnetic units in 1874, but these units do not produce consistent results in the abaque. I later bought a copy of the Admiralty Manual for the Deviations of the Compass from 1893, in which Archibald Smith and F.J. Evans lay out the rationale for the equations used in Lallemand’s chart, and discovered that they normalized H to 1.0 at its value at Greenwich. We can assume that Lallemand normalized H to 1.0 at Paris instead (the difference is not large), so a horizontal intensity H from the 1884 USGS figure has to be multiplied by 1.0/0.19 = 5.26 before using that value in Lallemand’s formulas.
The results of my tests for various locations and courses are found below. The first row compares the computed value with the graphical value at the canonical location of 42°N 20°W. The rest of the rows are for different locations and/or different compass courses. The top spreadsheet compares the graphical results with computations based on the formulas listed on the chart, while the second spreadsheet uses the mathematically correct formulas for B and C.
A lower average absolute error over the tested locations and compass courses is found in the top spreadsheet, suggesting that the chart was drawn using the incorrect formulas found on the chart, although the uncertainties in the graphical readings make this less than certain. In any event, the small difference between the two formulas is apparent.
The results are not bad at all given that we are estimating values off a model, certainly much, much better than not correcting for magnetic deviation at all. In addition, once you start taking measurements off the chart, you begin to notice that the abaque is a bit sketchy at places (look closely at the spacing of the vertical longitude lines in the map along the left side) and was most likely a proof-of-principle graphic.
How Does Lallemand’s Chart Work?
So how does it all work? Hexagonal charts in general are the subject of the next section of this essay, but at this point we state the conclusion: the hexagon arms point to three scales oriented 120° to each other, and the value (offset) of the magnetic deviation scale δ is the sum of the values (offsets) of the other two scales. In the figure below the green lines cut the three scales at their zero points, and since these lines nearly intersect at the same point, within some small error the hexagon will connect the zero values on the three scales (0 + 0 = 0). The values Y1, Y2 and Y3 are the values (offsets) of the scales for the example of 42°N 20°W. With a ruler you can verify that Y3 = Y1 + Y2 in length except for a small error (~1mm at the scale of the full-page version shown earlier) due to the inaccuracy in the chart as manifested by the inexact intersection of the green lines.
Let’s look at the construction of the three scales. Each represents an offset from an axis, and one of the advantages of this type of chart is that it doesn’t make any difference where along this axis this offset occurs. So in the first scale the offset Y1 can occur anywhere along the vertical green x-axis of the scale, and this is true for Y2 on the second scale. This allows the scales to be shifted anywhere along the green axes for optimum placement of the scales, and in fact it allows the Deviation (δ) scale to be tucked in the narrow space between the leftmost map and the central cone.
Now the leftmost map is drawn in such a way that the value of B for any latitude and longitude position provides a vertical offset B from the axis passing through the center of the cone. Extending this offset to the right provides a Y1 value for the first term in the formula for magnetic deviation:
Y1 = B sin ξ′
All of the other terms are combined into an offset generated from the maps on the right side of the chart:
Y2 = A + C cos ξ′ + D sin 2ξ′ + E cos 2ξ′
To demonstrate the construction of the twisted cylindrical plot on the right side of the chart, I’ve plotted a graph here that shows Y2 as a function of ξ′ for values of
- Latitude = 42°
- Longitude = 20° West (-20°)
- Inclination θ = +70° downward
- Horizontal H = 16 microTesla = .16 Gauss
- B = (1/.84)[0.106*tan θ + (-.033)/H] = 0.101
- B = arcsin[B[1 + (sin(6.75°) / 2)]] = 6.15°
- C = (1/.84)[-.013*tan θ + (-.020)/H] = 0.106
- C = arcsin[C[1 – (sin(6.75°) / 2)]] = 5.73°
Rotating this plot counterclockwise by 30° yields a y-axis that is 60° clockwise from the vertical axis of the chart, lying along the next arm of the hexagon. Note that the angles shown in this plot vary from 0° to 90° and 270° to 360°, which correspond to compass courses ξ′ in the northern half of the compass rose. The curves for the range ξ′ = 90° to 270° cross in the opposite direction, which is why a separate map is used for compass courses ξ′ in the southern half of the compass rose. The offsets for the various latitude and longitude locations, using H and θ for the local magnetic variation, provide the curved lines on the lower and upper maps. This is where the enormous manual effort by Lallemand to create this chart is most apparent.
In the end the third arm of the hexagon overlay provides the sum of these two offsets, or
Y3 = Y1 + Y2 = A + B sin ξ′ + C cos ξ′ + D sin 2ξ′ + E cos 2ξ′
which is the required equation for magnetic deviation.
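To make the arithmetic behind the overlay concrete, here is a minimal Python sketch of Smith's formula and of the Y1/Y2 split performed by the two sets of maps. The coefficient values are illustrative only: B, C and D are borrowed from the worked example above, while A and E are invented placeholders rather than the measured parameters of Le Triomphe.

```python
import math

def smith_deviation(xi_deg, A, B, C, D, E):
    """Smith's deviation formula, which the abaque evaluates graphically:
    delta = A + B sin(xi') + C cos(xi') + D sin(2 xi') + E cos(2 xi')."""
    xi = math.radians(xi_deg)
    return (A + B * math.sin(xi) + C * math.cos(xi)
              + D * math.sin(2 * xi) + E * math.cos(2 * xi))

# Illustrative coefficients in degrees: B, C and D follow the worked example
# above; A and E are placeholders, not measured values for Le Triomphe.
A, B, C, D, E = 0.5, 6.15, 5.73, 6.75, 0.2
xi = 41.5  # compass course in degrees

Y1 = B * math.sin(math.radians(xi))        # offset read off the left-hand map
Y2 = smith_deviation(xi, A, 0.0, C, D, E)  # offset read off the right-hand maps
Y3 = Y1 + Y2                               # what the third hexagon arm reports

print(round(Y3, 2), round(smith_deviation(xi, A, B, C, D, E), 2))  # identical
```

The chart performs exactly this addition geometrically, which is why the third arm of the hexagon lands on the deviation scale at δ.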
Lallemand’s L’Abaque Triomphe is a uniquely interesting graphic because of the sophistication inherent in this first published hexagonal chart. No other chart of this type exists to my knowledge, although the use of hexagonal charts continued for some time until nomograms finally displaced them for good. The principles and history of hexagonal charts and their relatives, triangular coordinate systems, are the subject of Part II of this essay. |
What are fruits and vegetables?
Fruits and vegetables are distinct groups of plant foods that vary greatly in their content of fiber, nutrients and energy. They are universally promoted as part of a healthy diet. Fruits and vegetables supply dietary fiber, and fiber intake is linked to lower incidence of cardiovascular disease and obesity. Fruits and vegetables also supply vitamins and minerals to the diet and are sources of phytochemicals that function as antioxidants, phytoestrogens, and anti-inflammatory agents. The importance of fiber for the normal functioning of the digestive system has long been appreciated.
Importance of fruits in weight management
With the increasing prevalence of obesity, it has become important to find ways to prevent it, and fruits and vegetables play a vital role here. Regular consumption of fruit helps us avoid calorie-dense foods at snack time and leads to a feeling of fullness. That feeling encourages us to eat smaller amounts of carbohydrate-rich foods, helping us to lose weight.
Importance of fruits and vegetables in prevention of chronic diseases
Both fruits and vegetables are rich sources of fiber, both soluble and insoluble. People who eat a diet rich in fiber are less prone to cardiovascular disease and have a lower risk of diabetes. Fiber also reduces the likelihood of hypertension and constipation, and is thought to decrease the risk of certain cancers. It is recommended that we eat at least 14 g of fiber per 1,000 kcal consumed (for example, about 28 g on a 2,000 kcal diet), and this can only be achieved by eating a good amount of fruits and vegetables.
Beyond fiber, fruits and vegetables are also rich sources of vitamins, minerals and antioxidants. These components help us fight the day-to-day stress and inflammation occurring in our bodies and keep us healthy by preventing chronic diseases.
Effect of fruits and vegetables on immune system
A weakened immune system increases the incidence of viral and bacterial invasion, leading to issues such as skin disorders, delayed wound healing, upper respiratory infections, premature aging and chronic illness. Eating more fruits and vegetables as part of a healthy diet is an important way to help boost immune health.
How much should be taken daily?
At least five kinds of vegetables and two kinds of fruit should be eaten every day.
Things to remember
- Fruits and vegetables contain important vitamins, minerals and plant chemicals. They also contain fiber.
- There are many varieties of fruits and vegetables available and many ways to prepare, cook and serve them.
- A diet high in fruit and vegetables can help protect you against cancer, diabetes and heart disease.
- Eat five kinds of vegetables and two kinds of fruits every day for good health.
To know more email us at [email protected] |
Hemochromatosis is a condition in which excessive iron accumulates in the organs and body, resulting in organ toxicity. It is most commonly caused by an autosomal recessive hereditary condition, and hereditary hemochromatosis is also the commonest cause of severe iron overload. Approximately 75 percent of those with hereditary hemochromatosis are asymptomatic. The diagnosis of hemochromatosis can be made on the basis of clinical symptoms, but since most patients have mild or no symptoms, most are diagnosed incidentally when their serum iron levels are found to be noticeably elevated during routine screening.
To confirm the diagnosis of hemochromatosis, some of the tests that should be included are genetic testing for HFE mutations, hepatic iron concentration, transferrin saturation levels, and serum ferritin studies. Echocardiography and chest radiography may also be performed as part of imaging studies in the evaluation of cardiac diseases among those with hemochromatosis. Early diagnosis is important among those with hemochromatosis. The main treatment goal is the removal of iron before it results in irreversible parenchymal damage. Once confirmed, hemochromatosis can be treated through phlebotomy to rid the excess iron and maintain normal iron stores.
Chelation agents such as deferiprone and deferoxamine can also be used. In severe arthropathy or end-stage liver disease, surgery may be required. In the United States, approximately 1 in 200 to 500 individuals suffer from hereditary hemochromatosis. Most patients with hemochromatosis are of northern European origin, and the highest prevalence is seen among those of Celtic origin; prevalence is approximately similar in western countries, Europe, and Australia. Patients with hemochromatosis can also make dietary changes that may benefit them. While normal individuals absorb about 1 milligram of iron every day, those with hemochromatosis can absorb about four times that amount, so diet modifications can help to reduce the level of body iron. Dietary iron comes in two forms: heme iron and nonheme iron.
Hemochromatosis Diet Food #1: Vegetables
Vegetables are an important source of iron. Even so, the intake of vegetables should not be restricted in hemochromatosis, as they provide many other important nutrients and minerals. Vegetables can also be beneficial for hemochromatosis patients because they may contain substances that help inhibit iron uptake.
Patients are always advised to consume a variety of vegetables with a minimum of 200 grams a day. However, patients should try to limit the intake of vegetables that are rich in iron. These include vegetables that are dark green and leafy such as spinach, chard, fennel, and green beans. Vegetables rich in iron should not be eaten with meat. |
- Individual instruction with proprietary curriculum
- Students are motivated by playing with others in a band
- A band builds teamwork, develops social skills, fosters self-esteem and leads to lasting friendships.
- Everyone learns faster playing music they like
- Students are motivated to learn when they see early results
- Introducing very young children to music supports cognitive development and motor skills.
The Benefits of Creating Music
Learning to play a musical instrument challenges the brain in new ways. In addition to discerning different tonal patterns and groupings, the player must learn and coordinate new motor skills in order to play the instrument. This new learning causes profound and seemingly permanent changes in brain structure. For example, the auditory cortex, the motor cortex, the cerebellum, and the corpus callosum are larger in musicians than in non-musicians.
(How the Brain Learns by David A Sousa, pg 225)
Music and Mathematics
Of all the academic subjects, mathematics seems to be most closely connected to music. Music relies on fractions for tempo and on time divisions for pacing, octaves, and chord intervals. Here are some mathematical concepts that are fundamental to music.
- Patterns - Music is full of patterns of chords, notes and key changes. Musicians learn to recognize these patterns and use them to vary melodies. Inverting patterns, called counterpoint, helps form different kinds of harmonies.
- Counting - Counting is fundamental to music because one must count beats, count rests, and count how long to hold notes.
- Geometry - Music students use geometry to remember the correct finger positions for notes or chords. Guitar players' fingers, for example, form triangular shapes on the neck of the guitar.
- Sequences - Music and mathematics are related through sequences called intervals. A mathematical interval is the difference between two numbers; a musical interval is the ratio of their frequencies. Here is another connection: equal steps of musical pitch (an arithmetic progression of semitones) correspond to a geometric progression of frequencies (see the sketch below).
- Ratios and Proportions, and Equivalent Fractions - Reading music requires an understanding of ratios and proportions: a whole note needs to be held twice as long as a half note, and four times as long as a quarter note. Because the amount of time allotted to one beat is a mathematical constant, the durations of all the notes in a musical piece are relative to one another on the basis of that constant. It is also important to understand the rhythmic difference between 3/4 and 4/4 time signatures.
(How the Brain Learns by David A Sousa, pg 227)
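To make the interval idea concrete, here is a small Python sketch using the standard equal-temperament relation f = 440 · 2^(n/12). It is a generic illustration rather than an example taken from the book cited above.

```python
# Adding a constant number of semitones (an arithmetic progression of
# intervals) multiplies the frequency by a constant factor (a geometric
# progression). Here the step is a perfect fifth: 7 semitones, ratio 2**(7/12).
A4 = 440.0  # Hz, standard tuning reference

def frequency(semitones_from_a4):
    return A4 * 2 ** (semitones_from_a4 / 12)

steps = [0, 7, 14, 21]                      # A4 -> E5 -> B5 -> F#6
freqs = [frequency(n) for n in steps]

for n, f in zip(steps, freqs):
    print(f"{n:2d} semitones above A4: {f:7.2f} Hz")

# The ratio between successive frequencies is constant (about 1.498),
# which is the geometric progression matching the arithmetic one in semitones.
print([round(b / a, 3) for a, b in zip(freqs, freqs[1:])])
```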
Music and Reading
Several studies confirm a strong association between music instruction and standardized tests of reading ability. Although we cannot say that this is a causal association (that taking music instruction caused improvement in reading ability), this consistent finding in large groups of students builds confidence that there is a strong relationship (Butzlaff, 2000). Researchers suggest that this strong relationship may result from positive transfer occurring between language and reading.
Studies done with 4 and 5 year old children revealed that the more music skills children had, the greater their degree of phonological awareness and reading development. Apparently, music perception taps and enhances auditory areas that are relevant to reading (Anvari, Trainor, Woodside, & Levy, 2002).
(How the Brain Learns by David A Sousa, pg 229) |
Airspeeds – Get to Know the Operation and Manufacturers
How it works
Pitot pressure is forced into the diaphragm, causing it to expand like a balloon. Static pressure is contained within the indicator case and surrounds the diaphragm. As the static pressure changes, it will either cause the diaphragm to compress (as the aircraft loses altitude) or allow it to expand (as the aircraft gains altitude). This expansion and contraction of the diaphragm is mechanically linked to the pointer, causing it to move around the dial and thereby display the speed of the aircraft as a function of the difference between the pitot and static pressures.
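As a rough numerical illustration of that pressure difference (and not a description of the mechanical calibration inside any particular instrument), the sketch below uses the common incompressible approximation in which indicated airspeed is proportional to the square root of the dynamic pressure, i.e. the pitot pressure minus the static pressure.

```python
import math

RHO_SL = 1.225  # kg/m^3, standard sea-level air density (ISA)

def indicated_airspeed(pitot_pressure_pa, static_pressure_pa):
    """Incompressible approximation: the indicator responds to the dynamic
    pressure q = pitot - static, and IAS is roughly sqrt(2*q/rho_sea_level)."""
    q = pitot_pressure_pa - static_pressure_pa
    if q <= 0:
        return 0.0
    return math.sqrt(2 * q / RHO_SL)  # metres per second

# Example: a 2,000 Pa difference between pitot and static pressure
v_ms = indicated_airspeed(102325.0, 100325.0)
print(f"{v_ms:.1f} m/s  ({v_ms * 1.94384:.1f} knots)")
```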
Range marks are a reminder to the pilot of the aircraft's basic operating envelope as it pertains to airspeed. Typical range marks found on an airspeed dial are:
White Arc – VFE – This is the maximum speed at which the aircraft can operate safely with the flaps extended.
Green Arc – This is the normal operating range
Yellow (Orange) Arc – Caution
Red Radial – VNE – Never exceed speed
Blue Radial – VYSE – This is the best single-engine rate-of-climb speed on a twin-engine aircraft.
Note: Maneuvering speed is not marked on the dial; it is normally shown on a placard located on the instrument panel.
The following companies have all manufactured airspeed indicators and are the most common that you will see: |
Lake Water Quality
Monitoring water quality in lakes and reservoirs is key to maintaining safe water for drinking, bathing, fishing, and agriculture and aquaculture activities. Long-term trends and short-term changes are indicators of environmental health and of changes in the water catchment area. Directives such as the EU's Water Framework Directive or the US EPA Clean Water Act request information about the ecological status of all lakes larger than 50 ha. Satellite monitoring helps to systematically cover a large number of lakes and reservoirs, reducing the need for monitoring infrastructure (e.g. vessels) and effort.
The Lake Water Products (lake water quality, lake surface water temperature) provide a semi-continuous observation record for a large number (nominally 4,200) of medium- and large-sized lakes, selected according to the Global Lakes and Wetlands Database (GLWD) or otherwise because they are of specific environmental monitoring interest. In addition to the lake surface water temperature, which is provided separately, this record consists of three water quality parameters:
- The turbidity of a lake describes water clarity, that is, whether sunlight can penetrate to the deeper parts of the lake. Turbidity often varies seasonally, both with the discharge of rivers and with the growth of phytoplankton (algae and cyanobacteria).
- The trophic state index is an indicator of the productivity of a lake in terms of phytoplankton, and indirectly (over longer time scales) reflects the eutrophication status of a water body (a rough numerical illustration follows below).
- Finally, the lake surface reflectances describe the apparent colour of the water body, intended for scientific users interested in further development of algorithms. The reflectance bands can also be used to produce true-colour images by combining the visual wavebands. |
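The exact retrieval algorithms behind these products are not described here. Purely as an illustration of what a trophic state index expresses, the sketch below uses Carlson's (1977) chlorophyll-a formulation, one common index, assumed here only for demonstration; the input values are made up.

```python
import math

def carlson_tsi_chlorophyll(chl_ug_per_l):
    """Carlson's (1977) trophic state index from chlorophyll-a concentration.
    Higher values indicate a more productive (more eutrophic) lake."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

# Illustrative chlorophyll-a values in micrograms per litre, not real products.
for chl in (1.0, 5.0, 20.0, 80.0):
    print(f"chl-a {chl:5.1f} ug/L -> TSI {carlson_tsi_chlorophyll(chl):5.1f}")
```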
Introducing the Assyrians
Where and when was ‘Assyria’?
The mighty Assyrian empire began as the small city-state of Ashur in what is now the north-eastern region of Iraq. It first asserted control over a large area in the 14th century BC, but by the 12th century BC it had collapsed.
During the 10th and 9th centuries BC, Assyria gradually recovered, reclaiming lost lands, and campaigning in new ones. By the 7th century BC, the last great Assyrian king, Ashurbanipal, ruled over a geographically and culturally diverse empire, shaping the lives of peoples from the eastern Mediterranean to western Iran. When people talk of ‘Assyria’, it generally means the time of its great flourishing between the 9th and 7th centuries BC, sometimes referred to today as the ‘Neo-Assyrian empire’.
Living in luxury
The kings of the Neo-Assyrian empire built on a lavish scale. They ruled from their capital cities at Ashur, Nimrud, Khorsabad and Nineveh. When a king decided to move his capital or simply to rebuild it, he made sure it was bigger and better than what came before.
Nineveh was transformed by King Sennacherib (reigned 705–681 BC) into a metropolis whose size and splendour would astonish the ancient world. It covered 7 square kilometres and its palaces and temples were adorned with colossal sculptures and brilliantly carved reliefs. An intricate system of canals and aqueducts watered the king’s pleasure gardens and game parks. Sennacherib’s grand residence, the ‘Palace Without Rival’, was built ‘to be an object of wonder for all the people’. Visitors entered the palace through massive gateways flanked by colossal human-headed winged bulls (called lamassu) that protected the king from dangerous supernatural forces.
Sennacherib’s grandson Ashurbanipal ruled from this palace for most of his reign (669–631 BC), before moving to a new royal residence at Nineveh. His ‘North Palace’ was decorated with reliefs painted in vivid colours that glorified his rule and achievements.
The king’s power was absolute, assigned through the divine will of the Assyrian deity Ashur. For the Assyrians, the heartland of the empire, with its magnificent cities, was the perfect vision of civilised order. Foreign lands were thought to be full of chaos and disorder. As the earthly representative of the gods, it was the king’s duty to create order in the world by conquering foreign lands and absorbing them into Assyria.
As the shepherd of his people, the king also protected Assyria from foreign enemies or wild animals. The most dangerous animal in Assyria was the lion, which came to symbolise all that was wild and chaotic in the world. Assyrian kings proved they were worthy by hunting these fearsome beasts. The royal hunt was a drama-filled public spectacle staged at game parks near the cities. Lion hunting was represented in Assyrian art, most famously in the reliefs from king Ashurbanipal’s palace.
Assyria’s rapid expansion was achieved through force. By the mid-8th century BC, Assyrian kings commanded a professional standing army with chariots, cavalry and infantry. This massive army was supplemented by the king’s personal bodyguard of elite troops. The army grew as it absorbed members of defeated enemy armies, which gave rise to a multicultural military force drawn from all corners of the empire.
The Assyrians preferred to mount surprise attacks against an inferior force to guarantee an easy victory. Large fortified cities with multiple moats, walls and towers could take years to capture. A city’s fortifications could be breached using siege engines, battering rams and sappers. To avoid heavy casualties, the Assyrian army would blockade a city with siege forts to cut off its supplies, reinforcements, and any means of escape.
Military conquest was followed by the extraction of wealth through plunder, tribute payments, taxation – and even people. Entire populations from defeated kingdoms were forcibly deported and resettled elsewhere within the empire. Deportees could be exploited – conscripted into the army, made to populate newly established cities, and resettled in underdeveloped provinces to work the land. The most valued – elite families, specialist craftsmen and scholars – were settled and put to work in the major cities. Here they could support public works, produce luxury goods, and generate knowledge for the benefit of the empire.
As the earthly representative of the gods, it was the king’s duty to punish Assyria’s enemies. Captured enemy leaders and rebels were displayed alongside the spoils of war and publicly humiliated in triumphal parades. Some were forced to wear the heads of their accomplices around their necks, others were chained to the gates of the city like dogs, or hitched up to the king’s chariot like horses. The message was simple – mess with Assyria and you will face the consequences. This kind of violence was thought to be a form of divine justice against those who had opposed the king and the gods.
Assyrian kings liked to present themselves as the sole protectors of the empire. In reality, the empire was organised into a patchwork of provinces, each supervised by a governor appointed by the king. The governors made up a group of officials called the ‘Great Ones’, which formed the king’s cabinet. The ‘Great Ones’ held considerable power, so much so that they could even threaten the king’s rule.
Initially, these state positions were inherited, but their considerable wealth and influence posed a threat to the king. To counter this, the Assyrians devised an innovative scheme to ensure that positions of power were awarded on merit and not through family ties. They appointed eunuchs to positions of power because they could not father children and therefore could not build dynasties of their own. Only the king could pass power down family lines.
A library fit for a king
Much of what we know about Assyrian history and culture is from written records. Assyrians used the ancient writing technique of cuneiform, which was first developed by the Sumerians around 3000 BC. Texts were written by pressing a reed pen into soft clay. The characteristic wedge-shaped strokes give the writing its modern name (cuneiform means simply ‘wedge-shaped’). Cuneiform tablets were used to record everything from day-to-day administration to science and literature.
King Ashurbanipal seems to have wanted a copy of every book worth having. His interest in books was not for entertainment. They helped him communicate with the gods and learn what the future held. There were books about omens from sacrifices, the heavens, and the earthly world. Alongside them were rituals and calendars, hymns and prayers, and magic and medicine.
The empire’s unity depended on a reliable and efficient communications network. To speed up the transfer of information, the empire was connected by an innovative system of royal roads, along which express mail could travel. It only took a few days for news to travel between the capital and the furthest reaches of the empire. If a message was particularly sensitive, state letters would travel with a trusted envoy across the entire distance to hand deliver the message.
Access to the royal mail service was only granted to the king’s most trusted officials. Each wore a golden signet ring engraved with an image of the royal seal – the Assyrian king slaying a ferocious lion. Letters sealed by these rings carried royal authority and any instructions had to be obeyed. By delegating royal power, the king could be in many places at once.
The end of an empire
Following the death of Ashurbanipal around 631 BC, it took just under 20 years for the empire to crumble. The Babylonians, under their leader Nabopolassar, rebelled against Assyrian rule, causing chaos throughout the land. Conditions under siege were dire, the populace stricken by disease and famine. Parents were forced to sell their children to buy food. Gradually, Nabopolassar won the upper hand and advanced into Assyria. A war of independence became a fight for Assyria’s survival.
Assyria was doomed when the Medes from western Iran, led by Cyaxares, joined the assault by sacking the holy city of Ashur. Nabopolassar and Cyaxares swore an alliance that was to seal Assyria’s fate. In 612 BC the two armies converged on Nineveh. The greatest city in existence fell, its palaces and temples burnt to the ground, and the last Assyrian king to reign from Nineveh, Sin-shar-ishkun, perished in the flames.
Assyria into legend
The fall of Assyria was an iconic event that was recorded in passages of the Bible and by Greek and Roman writers. Accounts describe how Assyria was punished for the moral depravity of its rulers, who surrounded themselves with great riches and luxury. According to classical sources, Assyria’s last king was so debauched that he caused the empire’s complete destruction. Realising that Nineveh was lost, he erected an enormous pyre in the palace and consigned himself to the flames along with his vast wealth, concubines and eunuchs.
The negative image of Assyria was challenged by archaeological discoveries of the mid-nineteenth century, which established Assyria as one of the great civilisations of the ancient world.
Although a number of travellers and explorers had visited the Assyrian sites of Nimrud and Nineveh, they weren’t excavated until the mid-19th century, when a young British diplomat, Austen Henry Layard, started work at Nimrud. Layard’s remarkable discoveries at Nimrud included colossal winged bulls and carved stone reliefs from the Assyrian palaces, which attracted sponsorship from the British Museum. Layard moved his team to the main mound at Nineveh in 1847, where he discovered the ‘Palace Without Rival’, king Sennacherib’s great royal residence.
Arrangements were made with the Ottoman government to have the Assyrian sculptures shipped to Britain. Due to the size of the sculptures, this proved to be some task. Firstly, the sculptures were transported to the river Tigris, where they were loaded on rafts that sailed to the city of Basra in southern Iraq. From here they were placed on a steamship and taken to Bombay in India, before sailing around Africa to England, where they were finally transported to the British Museum.
Layard’s discoveries caused a media sensation and captured the public imagination. This had a major impact on painting and applied arts, in the UK and beyond, during the second half of the nineteenth century, which led to a brief phase of ‘Assyrian revival’. The Assyrian sculptures at the British Museum largely remain today where they were first installed over 160 years ago.
Discover more about Assyria and its last great king in the BP exhibition I am Ashurbanipal: king of the world, king of Assyria (8 November 2018 – 24 February 2019).
Scientists from the University of Cambridge and the Australian National University have discovered numerous ancient stars in the center of our galaxy. These scientists believe that the stars represent some of the oldest stars in our galaxy, and quite possibly the universe as a whole.
Such ancient discoveries are important because they allow astronomers to peer into the past, seeing what the universe was once like, and how it evolved into what we know today.
For centuries now researchers have been peering into the sky, trying to unlock the universe’s mysteries. This time around scientists were searching for stars low in metals. Wondering why that’s so significant?
Most scientists believe that when the universe came into creation some 13.7 billion years ago, it consisted only of hydrogen, helium, and trace amounts of lithium.
Metals and all the other elements we know were formed through the fusion processes that power stars. As atoms were smashed together by immense amounts of gravity, they fused, creating heavier and heavier atoms, like iron.
Scientists believe that many of the oldest stars would be all but devoid of metal and should be located somewhere in the middle of the galaxy.
To home in on the stars, researchers used telescopes in both Australia and Chile and selected a pool of about 14,000 candidate stars. Then, they used spectroscopy to narrow the selection to 23 stars with low metal content.
Next, the researchers looked more closely at the remaining stars and their trajectories. They found that seven of these stars were sitting right in the middle of the galaxy, far from other stars, and were most likely born at the beginning of the universe.
These ancient stars probably weren’t the first stars ever born; most of those are likely dead. Among stars still burning, however, they may well be the oldest still shining.
The stars are detailed in a new paper, published this week in the journal Nature. |
Science Fair Project Encyclopedia
The horn is a brass instrument consisting of tubing wrapped into a coiled form. Many people call this instrument the French horn, although this usage is uncommon among players of the instrument. In other languages, the instrument is named Horn, corno (plural corni), cor, etc.
Compared to the other brass instruments commonly found in the orchestra, the typical range of the French horn is set an octave higher in its harmonic series, facilitated by its small, deep mouthpiece, giving it its characteristic "mellow" tone. The typical playing range of a French horn goes from the written F at the bottom of the staff in bass clef to the C above the staff in treble clef.
Early French horns were much simpler than current horns, which consist of complicated tubing and a set of three to five valves (depending on the type of horn). These early horns were simply brass tubing wound a few times and flared into a larger opening at the end (called the bell of the horn). They evolved from the early hunting horns and, as such, were meant to be played while riding on a horse. The hornist would grip the horn on the piping near the mouthpiece and rest the body of the horn across his arm so that only one hand was needed to play and the other could be free to guide his steed. The only way to change the pitch was to use the natural harmonics of that particular length of tubing by changing the speed at which the lips vibrated against the mouthpiece.
Later, horns became interesting to composers, and were used to invoke an out-of-doors feeling and the idea of the chase. Even in the time of Wolfgang Amadeus Mozart, however, the horn player (now a part of the early orchestra) still had a much simpler version of the horn; he carried with him a set of crooks, which were curved pieces of tube of different length which could be used to change the length of the horn by removing part of the tubing and inserting a different length piece. The player now held the horn with both hands, holding the tubing near the mouthpiece with one, and putting the other into the bell, which was either rested upon the right knee of the player or the entire horn was lifted into the air. Now the pitch played could be changed in several ways. First the player could change the harmonic series which the instrument as a whole had by removing and inserting different sized crooks into the instrument, changing the length of the horn itself. Less globally, given a particular crook, the vibration of the lips could be varied in speed, thus moving to a different pitch on the given harmonic series. Finally, now that the player had his hand in the bell, the hand became an extension on the length of the horn, and by closing and opening the space available for air to leave the bell, he could bend the pitch to interpolate between the elements of a harmonic series. This interpolation finally made the horn a true melodic instrument, not simply limited to a harmonic series, and some of the great composers started to write concerti for this new instrument. The Mozart Horn Concerti , for example, were written for this type of horn, called the natural horn in the modern literature.
Around 1815, the horn took on new form, as valves were introduced, which allowed the player to switch between crooks without the effort of manually removing one from the horn and inserting a new one. At this same time, the standard horn came to be the horn on the F harmonic series, and there were then three valves added to it. Using these three valves, the player could play all the notes reachable in the horn's range.
Types of horns
The single F horn, despite this improvement, had a rather irksome flaw. As the player played higher and higher notes, the distinctions a player had to make with his or her embouchure from note to note became increasingly precise. An early solution was simply to use a horn of higher pitch -- usually B-flat. The relative merits of F versus B-flat were a hotbed of debate between horn players of the late nineteenth century, until the German horn maker Kruspe produced a prototype of the "double horn" in 1897.
The double horn combines two instruments into one frame: the original horn in F, and a second, higher horn keyed in B-flat. By using a fourth valve operated by the thumb, the horn player can quickly switch from the deep, warm tones of the F horn to the higher, brighter tones of the B-flat horn (commonly called "sides"). In the words of Reginald Morley-Pegge , the invention of the double horn "revolutionized horn playing technique almost as much as did the invention of the valve." [Morley-Pegge, "Orchestral," 195]
While most modern instruments are of the F/B-flat double horn variety, various special-purpose instruments are available (usually at a very high price).
The most common is the descant horn, which is a single horn pitched in F alto, one octave higher than the traditional F horn. The descant is used largely for extended playing in the high register, such as in Bach's Brandenburg Concerti. Double horns in B-flat/High F (or High E-flat) are increasingly popular for works that only use the upper and upper-middle registers of the instrument.
Single horns in F or B-flat still see use, notably in operatic settings. Their lighter weight renders them much more suitable for the extended and strenuous playing required of Wagnerian operas.
The triple horn is the result of merging an F/B-flat double horn with an F-alto descant, adding a fifth valve to an already complex instrument. While the horn is suitable for work in nearly every register of horn literature, the added weight makes it tiresome to play, and for this reason it is not widely used.
The Viennese Horn is a horn traditionally played in the Vienna Philharmonic. It is a standard single horn with a dual-piston mechanism for each valve.
The Wagner tuba is an instrument generally played by the horn players of the orchestra which resembles a mix of a horn and a tuba.
The mellophone is, in appearance, very different from any of the above types of horn, but it is nevertheless used in place of the horn in marching bands. In fact, marching band is the only connection between the horn and the mellophone; harmonically, the instrument is much more similar to an elongated trumpet.
Some of these techniques are not unique to the horn, but are applicable to most or all wind instruments.
Normal tonguing consists of interrupting the air stream by tapping the back of the front teeth with the tongue, as in the syllable 'da' or 'ta'. Double tonguing is alternating between the 'ta' sound and the 'ka' sound; try saying the word 'kitty' repeatedly to get the idea. Triple tonguing is most used for patterns of three and is made with the syllables 'ta-ka-ta' said repeatedly.
Stopped horn is the act of fully closing off the bell with the right hand or a special stopping mute. This results in a somewhat nasal sound. The usual notation is a '+' above the note, followed by an 'o' above notes that are open. For longer stopped passages the word is simply written out. Below is a list of the terms in different languages:
- English: stopped ... open
- German: gestopft ... offen
- Italian: chiuso ... aperto
- French: bouché ... ouvert (not to be confused with cuivré, which means brassy.)
Stopping a note does not raise the pitch by a half-step. Any time the hand is placed in the bell, the pitch lowers gradually, reaching 1/2 step above the next lower partial (harmonic) when the bell is completely covered/closed (stopped). Hand horn technique developed in the classical period (Mozart's four Horn Concerti and Concert Rondo were written with this technique in mind, as was all the music Beethoven and Brahms wrote for the horn) makes use of covering the bell to various degrees, thereby lowering the pitch accordingly.
For example: if one plays a middle C (F horn, open) and then slowly covers the bell into stopped horn, the pitch bends down a major third to A♭ (or 1/2 step above G, the next lower partial). However, if one plays a third-space C (F horn, open) and repeats the process, the pitch only bends down a half-step to a B natural (or 1/2 step above B♭, the next lower partial).
Practically, it is too cumbersome to keep track of which partial you are playing and what "1/2 step above the next lower partial" would be. As such, horn players overblow by one partial while playing stopped: they play the partial above the note intended, cover the bell completely (thereby arriving 1/2 step above the intended pitch), and then compensate by fingering a half step below the written pitch. Thus most horn players are taught that stopped horn "raises the pitch 1/2 step".
It is crucial to understand this difference between the player's practical application and the theory behind it, because several modern composers have incorrectly notated that the horn is to bend an open pitch upward to a stopped pitch. This is impossible; the horn pitch can only be bent downward into a stopped pitch.
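As a rough illustration of the rule just described (written pitch, F horn side), the sketch below encodes the open partials as MIDI-style note numbers and applies the "half step above the next lower partial" rule. The note spellings and the approximation of the 7th partial as B-flat are simplifications made for this sketch.

```python
# Written-pitch harmonic series of the open F horn, as MIDI note numbers
# (the 7th partial is only approximated by the nearest named note, B-flat).
OPEN_PARTIALS = {
    2: 48,   # C3
    3: 55,   # G3
    4: 60,   # C4 (middle C)
    5: 64,   # E4
    6: 67,   # G4
    7: 70,   # ~B-flat 4
    8: 72,   # C5 (third-space C)
    9: 74,   # D5
}

NOTE_NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def note_name(midi):
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def fully_stopped(partial):
    """Closing the bell completely bends the open note down to a half step
    above the next lower partial, per the rule above."""
    return OPEN_PARTIALS[partial - 1] + 1

for partial in (4, 8):  # the middle C and third-space C examples from the text
    open_note = OPEN_PARTIALS[partial]
    stopped = fully_stopped(partial)
    print(f"partial {partial}: open {note_name(open_note)} -> "
          f"stopped {note_name(stopped)} ({open_note - stopped} semitones lower)")
```

Run as is, this reproduces the two examples above: middle C drops a major third to A♭, while third-space C drops only a half step to B.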
In the middle register, try F-horn fingerings when playing stopped horn. In the upper register, however, experimentation with traditionally flat fingerings on the B♭ horn (for example, first-valve G) can yield more secure notes without sacrificing good intonation. Some B♭ horns have a stopping valve that compensates for this, allowing the player to use normal fingerings with the stopping valve.
There is also an effect that is occasionally called for, usually in French music, called "echo horn", "hand mute" or "sons d'écho" (see Dukas' The Sorcerer's Apprentice), which is like stopped horn but differs in that the bell is not closed as tightly. The player closes the hand enough that the pitch drops 1/2 step but, especially in the middle register, does not close it as tightly as for stopped horn. Consequently, when playing echo horn, the player fingers one half step higher.
The difference between stopping and "echo horn" is a source of much confusion to younger players, especially ones whose hands are not big enough to close the bell all the way for stopped horn. Instead of stopping properly, they erroneously close the bell insufficiently and finger 1/2 step higher.
For more information on stopped horn, see "Extended Techniques for the Horn" by Douglas Hill (ASIN: B00072T6B0), Professor of Horn at the University of Wisconsin-Madison; http://www.geocities.com/Vienna/3941/stopping.html also has more information about stopped horn and the physics behind it. For more information on hand horn, see "A Modern Valve Horn Player's Guide to the Natural Horn" by Paul Austin (ASIN: B0006PCD4A).
Some confusion arises when a composer marks a passage muted but also puts '+'s above the notes. This is usually a typographical error or reflects a lack of understanding on the composer's part of the difference between stopped horn and muted horn. Muted horn is simply the use of a mute in the horn; it is therefore impossible for a note to be stopped and muted simultaneously. For marking this in music the following are used:
- English: muted ... open (or remove mute)
- German: gedämpft ... Dämpfer weg
- Italian: con sordino ... senza sordino
- French: avec sourdine ... enlevez la sourdine
Before the advent of the valve horn, a player would increase the number of playable notes beyond the normal harmonic series by changing the position of his hand in the bell. It is possible to use a combination of stopping, hand-muting (3/4 stopping), and half-stopping (to correct notes that would otherwise be out of tune) to play almost every note of a mid-range chromatic scale on one fingering. Most modern pieces for hand-horn tend to spend more time in the higher ranges, as there are more notes that can be played naturally (without altering hand position and maintaining pure tone) above the 8th note of any harmonic series.
Many older pieces for horn were written for a horn not keyed in F, as is standard today. As a result, a requirement for modern orchestral hornists is to be able to read music directly in these keys, most commonly by transposing the music on the fly into F. A reliable way to transpose is to liken the written notes (which rarely deviate from written C, D, E, and G) to their counterparts in the scale the F horn will be playing in; a code sketch after the lists below restates the corresponding shifts. Commonly seen transpositions include:
- B♭ alto – up a perfect fourth 1
- A alto – up a major third
- G – up a major second
- E – down a minor second
- E♭ – down a major second 2
- D – down a minor third
- C – down a perfect fourth
- B♭ basso – down a perfect fifth 1
Some less common transpositions include:
- A♭ alto – up a minor third
- G♭ – up a minor second
- D♭ – down a major third (used in some works by Berlioz, Verdi and Strauss (Der Rosenkavalier))
- B – down a tritone (used by Brahms) 3
- A basso – down a minor sixth (used in some works by Verdi)
- A♭ basso – down a major sixth (used in some works by Verdi)
- G basso – down a minor seventh (used in some works by Verdi)
It has been speculated that one of the reasons Brahms wrote for horn in the awkward key of B was to encourage the players to use the natural horn; he did not like the sound of the new valved horns. One example supporting this is that Brahms picked the second horn player in the Vienna Opera, Wilhelm Kleinecke, over the first horn, Richard Lewy, for a performance of the Horn Trio in E flat, Op. 40, because Lewy only played the valved horn.
Sometimes it is ambiguous whether a piece should be transposed up or down (i.e. B♭ alto versus B♭ basso when only B♭ is written). It is usually safe to assume that the most common and reasonable transposition (i.e. the one that stays in the normal horn range) is the intended one. The composer's history can help decide further: Verdi and other opera composers, for example, used many low and unusual transpositions, and for Haydn symphonies that include trumpets the lower transposition for the horns is usually correct, otherwise the higher one. Much experience is needed to decide this in the end.
1 In older scores (many times German), B♭ alto and basso are written simply as "B."
2 E♭ horns were used extensively in military bands in the early 20th century, therefore band parts written for chromatic E♭ horns are common.
3 Brahms indicated the key of B♮ as "H."
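The common transpositions listed above can also be summarised as semitone shifts applied to the written note. The sketch below is simply a coded restatement of those lists; the key names and MIDI-style note numbering are conventions chosen for the sketch, not standard horn notation.

```python
# Semitone shift an F-horn player applies to a part written for a horn in
# another key (positive = transpose up), following the interval lists above.
TRANSPOSITION_SEMITONES = {
    "Bb alto": +5, "A alto": +4, "Ab alto": +3, "G": +2, "Gb": +1,
    "F": 0, "E": -1, "Eb": -2, "D": -3, "Db": -4, "C": -5, "B": -6,
    "Bb basso": -7, "A basso": -8, "Ab basso": -9, "G basso": -10,
}

NOTE_NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def transpose_for_f_horn(written_midi, horn_key):
    """Return the note the F-horn player actually reads/fingers."""
    return written_midi + TRANSPOSITION_SEMITONES[horn_key]

written_c5 = 72  # a written third-space C
for key in ("Bb alto", "E", "D", "Bb basso"):
    midi = transpose_for_f_horn(written_c5, key)
    print(f"horn in {key:8s}: play {NOTE_NAMES[midi % 12]}{midi // 12 - 1}")
```

For example, a written third-space C in a part for horn in E is played as B natural on the F horn, one semitone lower, exactly as the list indicates.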
Multiphonics is the act of producing more than one pitch simultaneously on the horn. To do this, one note is produced as normal while another is sung. It is quite difficult to produce an aesthetically pleasing sound this way, but it can nonetheless be done. Like other wind instrument techniques, it is not unique to the horn; one of its earliest uses occurs in the Concertino for Horn and Orchestra by Carl Maria von Weber. Another kind of multiphonics can be achieved by simultaneously playing two neighbouring notes of the harmonic series. A practical way of doing this is to place the lower lip under and outside the mouthpiece, play one note, and then gently, by increasing air pressure and adjusting the lip position, half-slur upwards to the next harmonic. This may be frustrating at first, and the technique is quite unstable in real-time performance, especially compared with similar practices on other brass instruments, notably the trombone. It is occasionally called for in contemporary music, where, successfully performed, it can evoke an interesting effect.
Information on circular breathing can be found in the article on that subject.
Tips and tricks
- Quick valve water emptying
- Every horn is different and every hornist must learn how to get the water out of their own instrument. This trick, however, is nearly universal across standard double horns: hold the horn so the bell is up in the air, press down the third valve, and flip the first and second valves while rotating the horn back to the normal position. All the water in the valves is now in the third-valve tubing.
- Fake high C
- On some horns a high C can pop out while pressing the first valve down halfway. This is not recommended for performance, as the tone quality of this note suffers. A way to try it is to play a normal third-space C on the F side and slowly press down the first valve.
Well-known horn players
Use of the French horn in jazz
The horn is used only rarely in jazz, but there have been a few notable players:
- Gunther Schuller
- Mark Taylor
- Tom Varner
- Julius Watkins
- Arkady Shilkloper
- Claudio Pontiggia
- Rick Todd
Pieces for horn
- Johann Georg Albrechtsberger: Concerto in F minor for horn and orchestra
- Hermann Baumann: Elegia, for solo natural horn
- Ludwig van Beethoven: Sonata for Piano and Horn, Op. 17
- Vincenzo Bellini: Concerto in F major for horn and orchestra
- Johannes Brahms:Horn Trio in E flat, op. 40
- Benjamin Britten: Serenade for Tenor, Horn and Strings
- Emmanuel Chabrier: Larghetto for horn and orchestra
- Reinhold Gliere: Concerto op. 91 in B minor, for horn and orchestra
- Michael Haydn: Concertino for 2 horns and orchestra
- Joseph Haydn: Concerto in E flat major for 2 horns and orchestra
- Joseph Haydn: Horn Concerto No. 1
- Paul Hindemith: Concerto for horn and orchestra
- Paul Hindemith: Sonata for Horn
- Franz Anton Hoffmeister: Romance for 3 horns and orchestra
- Heinrich Hübler: Concerto for 4 horns and orchestra
- Leopold Mozart: Sinfonia da Caccia in G major "Jagdsinfonie" ("Hunting Symphony"), for 4 horns, ammunition boxes, and string orchestra
- Wolfgang Amadeus Mozart: Horn Concerto No. 1 in D major K. 412 (in two movements)
- Wolfgang Amadeus Mozart: Horn Concerto No. 2 in E flat major K. 417 (in three movements)
- Wolfgang Amadeus Mozart: Horn Concerto No. 3 in E flat major K. 447 (in three movements)
- Wolfgang Amadeus Mozart: Horn Concerto No. 4 in E flat major K. 495 (in three movements)
- Wolfgang Amadeus Mozart: Concert Rondo for horn and orchestra in E flat major, K. 371
- Wolfgang Amadeus Mozart: Divertimento for two horns and strings, A Musical Joke, (Ein Musikalischer Spaß,) K. 522
- Francis Poulenc: Elegy for horn and piano
- Joseph Reicha: Concerto Op. 5, for 2 horns and orchestra
- Antonio Rosetti: Concerto No. 6 in E for horn and orchestra
- Camille Saint-Saëns: Morceau de Concert for horn and orchestra
- Camille Saint-Saëns: Romance Op. 36, for horn and orchestra
- Robert Schumann: Konzertstück ("concert piece") in F major op. 86 for 4 solo horns, piccolo, 2 flutes, 2 oboes, 2 clarinets, 2 bassoons, 2 horns (ad lib.), 2 trumpets, 3 trombones, timpani and strings
- Carl Stamitz: Concerto in E flat for solo horn, 2 flutes, 2 horns, and strings
- Franz Strauss: Fantasie op. 6 for horn and orchestra
- Richard Strauss: Horn Concerto No. 1 in E flat major, Op. 11 (1883)
- Richard Strauss: Horn Concerto No. 2 in E flat major (1942)
- Richard Strauss: Andante for horn and piano, Op. posthumous
- Richard Strauss: Introduction, Theme and Variations for horn and piano, Opus 17
- Georg Philipp Telemann: Concerto in D major for horn and orchestra
- Georg Philipp Telemann: Concerto in E flat major; Tafelmusik ("table music") for 2 horns, strings, and continuo
- Carl Maria von Weber: Concertino for horn and orchestra, Op. 45
- The International Horn Society
- Professor John Q. Ericson's Horn Links
- Some online horn articles
- An online collection of horn orchestral excerpts
- How the valved horn emerged from the early Industrial Revolution
- HornRoller.com, News from the Hornosphere
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. |
Revised June 2018
Many people don't understand why or how other people become addicted to drugs. They may mistakenly think that those who use drugs lack moral principles or willpower and that they could stop their drug use simply by choosing to. In reality, drug addiction is a complex disease, and quitting usually takes more than good intentions or a strong will. Drugs change the brain in ways that make quitting hard, even for those who want to. Fortunately, researchers know more than ever about how drugs affect the brain and have found treatments that can help people recover from drug addiction and lead productive lives.
What is drug addiction?
Addiction is a chronic disease characterized by drug seeking and use that is compulsive, or difficult to control, despite harmful consequences. The initial decision to take drugs is voluntary for most people, but repeated drug use can lead to brain changes that challenge an addicted person’s self-control and interfere with their ability to resist intense urges to take drugs. These brain changes can be persistent, which is why drug addiction is considered a "relapsing" disease—people in recovery from drug use disorders are at increased risk for returning to drug use even after years of not taking the drug.
It's common for a person to relapse, but relapse doesn't mean that treatment doesn’t work. As with other chronic health conditions, treatment should be ongoing and should be adjusted based on how the patient responds. Treatment plans need to be reviewed often and modified to fit the patient’s changing needs.
What happens to the brain when a person takes drugs?
Most drugs affect the brain's "reward circuit," causing euphoria as well as flooding it with the chemical messenger dopamine. A properly functioning reward system motivates a person to repeat behaviors needed to thrive, such as eating and spending time with loved ones. Surges of dopamine in the reward circuit cause the reinforcement of pleasurable but unhealthy behaviors like taking drugs, leading people to repeat the behavior again and again.
As a person continues to use drugs, the brain adapts by reducing the ability of cells in the reward circuit to respond to it. This reduces the high that the person feels compared to the high they felt when first taking the drug—an effect known as tolerance. They might take more of the drug to try and achieve the same high. These brain adaptations often lead to the person becoming less and less able to derive pleasure from other things they once enjoyed, like food, sex, or social activities.
Long-term use also causes changes in other brain chemical systems and circuits, affecting functions such as learning, judgment, decision-making, stress, memory, and behavior.
Despite being aware of these harmful outcomes, many people who use drugs continue to take them, which is the nature of addiction.
Why do some people become addicted to drugs while others don't?
No one factor can predict if a person will become addicted to drugs. A combination of factors influences risk for addiction. The more risk factors a person has, the greater the chance that taking drugs can lead to addiction. For example:
- Biology. The genes that people are born with account for about half of a person's risk for addiction. Gender, ethnicity, and the presence of other mental disorders may also influence risk for drug use and addiction.
- Environment. A person’s environment includes many different influences, from family and friends to economic status and general quality of life. Factors such as peer pressure, physical and sexual abuse, early exposure to drugs, stress, and parental guidance can greatly affect a person’s likelihood of drug use and addiction.
- Development. Genetic and environmental factors interact with critical developmental stages in a person’s life to affect addiction risk. Although taking drugs at any age can lead to addiction, the earlier that drug use begins, the more likely it will progress to addiction. This is particularly problematic for teens. Because areas in their brains that control decision-making, judgment, and self-control are still developing, teens may be especially prone to risky behaviors, including trying drugs.
Can drug addiction be cured or prevented?
As with most other chronic diseases, such as diabetes, asthma, or heart disease, treatment for drug addiction generally isn’t a cure. However, addiction is treatable and can be successfully managed. People who are recovering from an addiction will be at risk for relapse for years and possibly for their whole lives. Research shows that combining addiction treatment medicines with behavioral therapy ensures the best chance of success for most patients. Treatment approaches tailored to each patient’s drug use patterns and any co-occurring medical, mental, and social problems can lead to continued recovery.
More good news is that drug use and addiction are preventable. Results from NIDA-funded research have shown that prevention programs involving families, schools, communities, and the media are effective for preventing or reducing drug use and addiction. Although personal events and cultural factors affect drug use trends, when young people view drug use as harmful, they tend to decrease their drug taking. Therefore, education and outreach are key in helping people understand the possible risks of drug use. Teachers, parents, and health care providers have crucial roles in educating young people and preventing drug use and addiction.
Points to Remember
- Drug addiction is a chronic disease characterized by drug seeking and use that is compulsive, or difficult to control, despite harmful consequences.
- Brain changes that occur over time with drug use challenge an addicted person’s self-control and interfere with their ability to resist intense urges to take drugs. This is why drug addiction is also a relapsing disease.
- Relapse is the return to drug use after an attempt to stop. Relapse indicates the need for more or different treatment.
- Most drugs affect the brain's reward circuit by flooding it with the chemical messenger dopamine. Surges of dopamine in the reward circuit cause the reinforcement of pleasurable but unhealthy activities, leading people to repeat the behavior again and again.
- Over time, the brain adjusts to the excess dopamine, which reduces the high that the person feels compared to the high they felt when first taking the drug—an effect known as tolerance. They might take more of the drug, trying to achieve the same dopamine high.
- No single factor can predict whether a person will become addicted to drugs. A combination of genetic, environmental, and developmental factors influences risk for addiction. The more risk factors a person has, the greater the chance that taking drugs can lead to addiction.
- Drug addiction is treatable and can be successfully managed.
- More good news is that drug use and addiction are preventable. Teachers, parents, and health care providers have crucial roles in educating young people and preventing drug use and addiction.
To find a publicly funded treatment center in your state, call 1-800-662-HELP.
NIDA. (2018, June 6). Understanding Drug Use and Addiction. Retrieved from https://www.drugabuse.gov/publications/drugfacts/understanding-drug-use-addiction |
In English grammar and morphology, a prefix is a letter or set of letters added to the beginning of a root word to create a new word with a different meaning. For example, when the prefix ‘dis’ is added to the word appear, it forms the new word ‘disappear’.
Interestingly enough, the word prefix itself contains the prefix “pre-”, which means ‘before’, and the root word ‘fix’, which means ‘to place’; thus the word itself means “to place before.” Prefixes are bound morphemes, which means they can’t stand alone. Generally, if a cluster of letters is a prefix, it can’t also be a word. However, adding a prefix to a word, a process called prefixation, is a common way of forming new words in English.
Prefixes can be either derivational, creating a new word with a new semantic meaning, or inflectional, creating a new form of the word with the same basic meaning. Particularly in the study of languages, a prefix is also called a preformative, because it alters the form of the word to which it is affixed.
Take a look at some common prefixes: un- (not), re- (again), dis- (not, opposite of), pre- (before), mis- (wrongly), inter- (between), and ex- (out of).
To make you understand this in a fun way, Vocab Tunes, an accelerated vocabulary building platform, has come up with a brilliant idea to introduce your young ones to the world of prefixes with the help of its educative song ‘Prefixes’.
So, what are you waiting for? Let’s hear this song and discover multiple words using prefixes such as Disapart, Ingest, Intervene, Dismantle, Extend, and many more.
Apart from this song, the program also lists 20 more songs pertaining to different root words and suffixes that will help you to enrich your vocabulary. |
Arctic amplification refers to the fact that temperatures in polar regions are rising faster than temperatures at lower latitudes, and it may further accelerate climate warming well beyond the Arctic. The warming trend in the Arctic has been almost twice as large as the global average in recent decades. Changes in cloud cover, increases in atmospheric water vapour, more atmospheric heat transport from lower latitudes, and declining sea ice have all been suggested as contributing factors.
The loss of sea ice is one of the most frequently cited reasons. When reflective ice melts, darker ocean is exposed; this amplifies the warming trend because the ocean surface absorbs more of the sun's heat than the surface of snow and ice does. In other words, a decrease in sea ice reduces Earth's albedo.
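A rough back-of-the-envelope sketch (my own illustration, not from the article) shows why the albedo change matters; the albedo values and sunlight figure are approximate, representative numbers, not measurements from any particular study.

```python
# A rough illustration of the albedo feedback: with the same incoming
# sunlight, open ocean absorbs far more energy than sea ice.
# All values are approximate, representative figures.
INCOMING_SUNLIGHT = 300.0   # watts per square metre, an arbitrary example value

ALBEDO_SEA_ICE = 0.6   # sea ice reflects roughly 60% of sunlight
ALBEDO_OCEAN = 0.06    # open ocean reflects only about 6%

absorbed_ice = INCOMING_SUNLIGHT * (1 - ALBEDO_SEA_ICE)
absorbed_ocean = INCOMING_SUNLIGHT * (1 - ALBEDO_OCEAN)

print(f"absorbed over sea ice:    {absorbed_ice:.0f} W/m^2")
print(f"absorbed over open ocean: {absorbed_ocean:.0f} W/m^2")
```

The gap between the two numbers is the extra energy that goes into warming the newly exposed ocean, which in turn melts more ice.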
Another possible contributor is thunderstorm activity, which is much more common in tropical regions than in the Arctic. These storms transport heat from the surface to higher levels of the atmosphere, where global wind patterns sweep it toward higher latitudes. The abundance of tropical thunderstorms creates a near-constant flow of heat away from the tropics, a process that dampens warming near the equator and contributes to Arctic amplification. |
The simplest way to send data to multiple users simultaneously is to transmit individual copies of the data to each user. However, this is highly inefficient, since multiple copies of the same data are sent from the source through one or more networks. Multicasting enables a single transmission to be split up among multiple users, significantly reducing the required bandwidth.
Multicasts that take place over the Internet are known as IP multicasts, since they use the Internet protocol (IP) to transmit data. IP multicasts create "multicast trees," which allow a single transmission to branch out to individual users. These branches are created at Internet routers wherever necessary. For example, if five users from five different countries requested access to the same stream, branches would be created close to the original source. If five users from the same city requested access to the same stream, the branches would be created close to those users.
IP multicasting works by combining two other protocols with the Internet protocol. One is the Internet Group Management Protocol (IGMP), which client systems use to request access to a stream. The other is Protocol Independent Multicast (PIM), which network routers use to create multicast trees. When a router receives a request to join a stream via IGMP, it uses PIM to route the data stream to the appropriate system.
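As an illustration (not from the original article), the sketch below shows how a receiver might join an IP multicast group using the standard sockets API in Python; the group address and port are arbitrary placeholders. Setting the IP_ADD_MEMBERSHIP option is what prompts the host to send an IGMP membership report, which routers running PIM can use to extend the multicast tree toward this receiver.

```python
import socket
import struct

MCAST_GRP = "239.1.1.1"   # hypothetical multicast group address
MCAST_PORT = 5004          # hypothetical port

# Create a UDP socket and allow several receivers on the same host/port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Ask the local network stack to join the group. This triggers an IGMP
# membership report toward the nearest router.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Receive packets sent to the group by any source.
while True:
    data, addr = sock.recvfrom(65535)
    print(f"received {len(data)} bytes from {addr}")
```

Note that a sender does not need to join the group; it simply transmits UDP datagrams addressed to the group.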
Multicasting has several different applications. It is commonly used for streaming media over the Internet, such as live TV and Internet radio. It also supports video conferencing and webcasts. Multicasting can also be used to send other types of data over the Internet, such as news, stock quotes, and even digital copies of software. Whatever the application, multicasting helps reduce Internet bandwidth usage by providing an efficient way of sending data to multiple users.
Updated: April 21, 2011 |
Physics is the study of extremes – the fastest, the smallest, the most energetic, the coldest and the hottest. Because physics researchers are pushing the limits of what can be measured, they often must build their own instruments from scratch to do the measuring. What physicists actually study—the fundamental particles and forces that form and govern our universe—may not be as familiar to you as the concerns of other fields, like medical research. However, the ideas behind these experiments and the tools built to accomplish them have had real, lasting impacts on daily life.
For example, by developing a way to study some of these fundamental particles, physicists laid the foundation for the widespread use of a technology doctors and patients now rely on—magnetic resonance imaging, or MRI. It started in the late 1970s, when researchers began constructing the Tevatron, a high-energy particle accelerator now used to give scientists a glimpse at the smallest particles that make up our universe. The Tevatron consists of hundreds of very powerful "superconducting" magnets arranged in a four-mile-long tunnel under the prairie at Fermilab, just 40 miles west of Northwestern. Beams of particles such as protons are shot along this tunnel, where they collide with one another at speeds approaching the speed of light, releasing huge amounts of energy that ultimately produce new particles. The superconducting magnets are needed to store and guide these beams along the correct path.
One goal researchers had for the Tevatron was to reveal new particles, like the top quark, that were predicted to exist in the early history of the universe but no longer occur in nature. At the time the Tevatron was built, superconducting magnets were laboratory tools, and each was custom-built for a specific purpose. However, Fermilab needed even stronger and more efficient magnets to control the high-energy beams—approaching a trillion electric volts—necessary to create collisions powerful enough to produce a top quark. So, in order to find this new particle, Fermilab had to develop completely new magnet technologies.
A prototype magnet was developed, but, to fill the tunnel, Fermilab needed 135,000 pounds of niobium-titanium wire to create 500 ten-ton magnets. At the time, no company in the world had ever produced more than a few hundred pounds of wire for special orders. Fermilab scientists worked with industry to create the processes that allowed large-scale production of superconducting wire.
This development had a major impact on the scientific community—the Tevatron has run many successful experiments in its 20+ years of existence. However, the superconducting wire and cable produced to complete the accelerator have also had a major impact on the healthcare community, as superconducting magnets form the backbone of MRI technology. Now, millions of people a year receive MRI scans that would not have been possible without elementary particle physicists. Of course, particle physicists did not invent MRI, but their successful push to discover the top quark helped launch the superconducting cable industry as an added benefit to society.
Inspired by the success of the Tevatron, scientists at CERN, the European Organization for Nuclear Research, launched the effort to build an even more powerful particle accelerator, the Large Hadron Collider (LHC), in the early 1980s. Constructing the LHC—a collider with 14 times the energy of the Tevatron, performing up to 600 million collisions per second—and developing the experiments it would perform proved to be an enormously complex project. By its completion in the fall of 2008, more than 8,000 researchers from nearly sixty countries had become partners in the project, including the United States.
With so many people involved from different parts of the world, a better way to communicate was needed. Early in the LHC project, Tim Berners-Lee, a CERN computer scientist, invented a tool he called the World Wide Web. The Web provides a virtual platform on which users can post pages of information and media for others to access via the Internet (the Internet, by contrast, is essentially a network of connected computers and other devices). Now, billions of people across the world exchange information almost instantly because of it—in fact, you are using it to read this article right now.
Physicists continue to imagine and develop new tools that may eventually touch your life. For example, LHC experiments will generate an extraordinary amount of data, more than any other science project in the world, and this data will be used by thousands of scientists around the globe. Sharing that much information quickly and efficiently will require the next generation of computer network technology.
Toward this end, Northwestern's International Center for Advanced Internet Research (iCAIR) has been developing new techniques and technologies to support data-intensive sciences such as high energy physics. One example is the StarLight International Communications Exchange, funded by the National Science Foundation and located on the Chicago campus, which connects advanced research and education networks around the world. These advanced networks enable universities and laboratories, such as Fermilab and Northwestern, to efficiently access the LHC data at speeds many times higher than the standard Internet. These activities are creating new communications services that will quickly migrate to wider communities, possibly someday making your own web-browsing experience a faster one.
As physicists have progressed towards the very small and the highly energetic, they have left a trail of tools by the way—X-rays, nuclear energy, superconducting cables, medical accelerators, techniques for distributed high performance computing, and the World Wide Web. In addition to revealing astounding information about how our universe was formed and the forces that now govern it, physicists have also paved the way for technology that enhances medical care and even our everyday lives. High energy physicists are often asked, “How does the top quark affect me?” The top quark itself has little practical use. Rather, it is the tools created to find the top quark, and accomplish other seemingly impossible tasks, that touch people in a more tangible way. |
Trees are beneficial to the landscape because they create oxygen and provide shade, fruits and flowers. Animals use trees for both food and shelter. For those reasons, trees usually are cut down only under certain circumstances. After a tree is cut down, a stump is left behind, and the tree's roots often stop growing unless suckers sprout from the roots or the stump. Taking a few simple measures prevents suckers and roots from continuing to grow.
A tree's roots cannot grow after the tree is cut down because roots need nutrients supplied by the tree's leaves. Roots do not receive the fuel necessary for their proper growth if the tree has no leaves to undergo photosynthesis. Photosynthesis is the process of absorbing sunlight and combining it with water and carbon dioxide to create oxygen and carbohydrates.
A tree can resprout after it is cut down, however. When that occurs, the new growth can develop leaves and carry out photosynthesis, providing roots with the fuel to continue growing. Sucker sprouts can grow from the roots or from the stump.
New Growth Control Methods
Sucker growth from a stump or roots can be controlled by spraying the new growth with a brush killer. Treating the stump with herbicide after the tree is cut down is an effective method to prevent sucker growth. Apply the herbicide to the stump immediately after cutting down the tree. Doing so allows the herbicide to make contact with the stump's vascular system before it seals itself. Common herbicide treatments include glyphosate mixed with water or triclopyr mixed in crop oil. Combine 1 part glyphosate with 1 part water, or 1 part triclopyr with 4 parts crop oil, and apply the mixture to the stump.
Tree Removal Reasons
Because trees are valuable, arborists recommend removing a tree only for certain reasons, such as when a tree is dead or dying, is planted too close to a power line, is overcrowded, or blocks a view. Before planting a tree, know its growth rate, mature size and whether its roots can damage paved areas. Knowing that information can prevent planting a tree that will have to be removed. Many times, a tree grows taller than expected or its roots cause damage, making it necessary to cut down the tree.
May also be called: Acute Dentoalveolar Abscess, Acute Apical Dental Abscess, Acute Dental Abscess, Apical Abscess, Tooth Abscess, Dental Abscess, Periapical Infection, Tooth Infection, Abscessed Tooth
A periapical (per-ee-AP-ih-kul) abscess is a collection of infected material (pus) that forms at the tip of the root of a tooth.
Periapical abscesses form after bacteria enter the tooth and cause an infection in the pulp — the innermost portion of the tooth that consists of connective tissue, nerves, and blood vessels. This is usually the result of tooth decay or an injury that causes the tooth to chip or crack. When the pulp becomes infected, the body's immune system sends white blood cells to fight the infection. It's these white blood cells, along with other debris, that can form a collection of pus near the tiny hole (apical foramen) that sits at the tip of the root of the tooth.
Periapical abscesses can cause severe tooth pain and sensitivity to temperature; a fever; pain while chewing; and swelling in the gum, glands of the neck, and upper or lower jaw. Treatment for a periapical abscess can involve antibiotic medications, draining the abscess, or performing root canal surgery to save the tooth. In rare cases, the tooth may have to be pulled.
If left untreated, periapical abscesses can get worse and cause serious complications. In many cases, however, prompt treatment can cure the infection and save the affected tooth. Practicing good dental hygiene can reduce the risk of a periapical abscess.
The Late Jurassic dinosaur Archaeopteryx was capable of powered flight, suggests a study published in Nature Communications this week. Previous studies have left open the question of whether Archaeopteryx used its feathered wings for active flight or passive gliding.
Dennis Voeten and colleagues analyzed the bone architecture of the wings of Archaeopteryx, using a technique called phase-contrast synchrotron microtomography to visualize the interior of the bones without damaging the fossils. By comparing a wide range of species, from extinct pterosaurs to modern birds, they found that flight style could be predicted reliably from bone architecture - and that Archaeopteryx matched modern birds that flap their wings to fly short distances or in bursts.
Despite the similarities in internal bone structure, Archaeopteryx anatomy is not compatible with the flight strokes of modern birds. Therefore, the authors suggest that Archaeopteryx would have used a different flapping motion and aerial posture than that of modern birds. |
By 2013 it was believed that one in five of the millions of invertebrate species on Earth was at risk of extinction, and some of the most cherished species of all—butterflies—showed signs of a significant decline in population, if not outright disappearance. Whereas slugs, mites, flies, or squid might not garner much attention from the public, butterflies are emblematic, and they can serve as flagship species for a world at risk of losing much of its biodiversity.
Butterflies have always held a certain fascination for mankind and particularly for collectors of the species. They played a central role in numerous studies as well, notably in the scientific work of British naturalist Alfred Russel Wallace and in the artistry of Spanish painter Salvador Dalí. These scaly-winged insects, however, are not only a pleasure for the scientific or creative mind but also an important part of ecosystems throughout the world. They and their moth relatives continue to be important as pollinators for flowers and, in their larval stage, as food sources for birds and as herbivores that keep plant populations in balance.
Most important, scientists in recent decades have successfully used butterflies as tools for conservation research and public education. The popularity of butterflies makes them useful motivators to get citizen scientists—nonexperts who dedicate time to science projects that would otherwise lack the manpower—involved in preservation efforts. Programs in the U.K. and the U.S. have thousands of volunteers, who provide data critical to analyzing populations of hundreds of species. Beyond public involvement, these programs provide crucial lessons that help convey how humans are negatively affecting the wilderness around them.
In June 2013 two butterflies known only from South Florida were officially declared likely to be extinct. Extensive searches conducted for over a decade indicated that the Zestos skipper (Epargyreus zestos oberon) and the rockland grass skipper (Hesperia meskei pinocayo) had disappeared. There had been only four known extinct butterflies native to the U.S., and the last one had been declared extinct 50 years earlier. The loss of two species from one area represented a 50% increase in extinctions for the entire country and raised an alarm.
South Florida, symbolic of many at-risk areas of the world, hosts multiple endangered butterfly species. The area boasts unique ecosystems—hammocks, islands, and river drainages—within small areas. These ecosystems contain endemic species—species that are found only in a particular region—a factor that can make them more vulnerable to extinction. Although human development in Florida has largely destroyed the habitat of some species, others have had their habitats fragmented. To thrive, many organisms require one large area rather than several smaller separate regions. Thus, human development and the resultant habitat fragmentation can be sufficient to cause the extinction of a species. Whereas residential and commercial development in Florida has been responsible for fragmenting habitats there, other regions of the world might experience similar fragmentation from activities such as agriculture and mining.
Invasive Species and Pesticides
Florida’s butterfly species are not at risk only from habitat destruction. The threats of invasive species and poorly managed pesticide use can be sufficient to eliminate a species from an area, even without overt habitat destruction. Florida harbours several severe human diseases of which mosquitoes are a vector, and pesticide use for mosquito control previously demonstrated a negative effect on butterfly populations. It is vital to closely monitor the use of pesticides. Invasive species too have become an increasing problem. Wildlife managers are worried that nonnative fire ants might eat butterfly eggs and caterpillars and that the African snail might destroy host-plant vegetation. The combination of these factors with the pressures of human development is not unique to Florida, however; it can be sufficient to push a butterfly species into extinction in other parts of the world as well.
Global Climate Change
As has been well documented, the effects of future global warming are not always warmer temperatures: climates can become colder, wetter, or drier for various regions, all factors that can affect plants and animals in ways that are difficult to predict. This global climate change is just that—change—from the natural state of global ecosystems. Some species, such as arctic land mammals, could appear to benefit from predictions that their range might be extended. Such extensions would not be natural, however, and it is hard to predict the ways in which these animals might affect the ecosystems into which they would be encroaching.
Using years of butterfly survey data and global climate change models, scientists have created predictions of warmer climate’s impact on butterfly populations in the U.K. These studies suggest that the populations of more than half the butterfly species in the U.K. are expected to expand in range, a circumstance that typically translates to a higher, more diverse population and a lower risk of extinction. Though this range shift might appear to be good news for butterflies, it comes with a dark side. It is unclear how these populations of butterflies would influence the other species in the areas into which they expanded, and it could bring a range extension for any parasites or viruses that could accompany the butterflies. What happens in the U.K., however, does not happen to all butterflies. Also, not all butterflies in the world have ranges that could extend or shift well in warmer weather, as many species of butterflies have adapted for cooler high-altitude climates found in mountain ranges.
With global climate change, extreme weather is expected to occur more frequently. For example, in 2012 in the U.K. (which experienced its wettest summer in 100 years), enormous population losses occurred for two species. The numbers of the high brown fritillary (Argynnis adippe) reportedly fell 46%, and the black hairstreak (Satyrium pruni) dropped an astounding 98%.
High-altitude species of butterflies are facing considerable threats owing to a warming climate. Their ranges are restricted to certain zones on mountains, where climate, oxygen levels, host plants, community composition, and other factors contribute to the unique ecosystem in which they thrive. Temperature increases have caused some butterfly populations to shift upward in elevation to tolerate changing temperature; these areas might not be optimal for their survival, however, considering the multitude of factors involved. Some regions of the Sierra de Guadarrama, in central Spain, for example, suffered a 90% decline in species richness in butterfly communities.
The monarch butterfly is considered the most recognized backyard butterfly in North America, and it is known for its annual migrations of more than 3,219 km (2,000 mi) over several generations. Most of the monarchs of North America east of the Rocky Mountains migrate south to overwinter in a small pine forest area in Mexico known as the Monarch Butterfly Biosphere Reserve. The closely monitored numbers found at that site provide an indication of the health of the North American monarch butterfly population. The winter of 2012–13 showed a worrisome 59% decrease in monarch populations from the previous year; it was the lowest count recorded in at least two decades.
For years monarch butterfly conservation efforts were concentrated on preserving the overwintering site in Mexico, but the focus has gradually turned northward. The loss of milkweed habitat—milkweed being the primary host plant upon which monarch caterpillars feed—has been attributed to an increase in the use of Roundup in agriculture. The herbicide can be liberally applied to genetically modified (GM) crops without risk to them, but species such as milkweed (normally found in fields) have been suffering, and monarchs might be paying the price for the decimation of the plants.
Butterfly numbers continue to decline in many areas of the world owing to human activities. The impact from anthropogenic habitat destruction and pollution can be obvious. Alternatively, the impact can be cloudy and difficult to assess owing to limited resources, including the number of researchers available to interpret the data. Alarmingly, current trends pertaining to human development, agriculture, and pollution have caused several butterfly species to go extinct and have placed many others under considerable ecological pressure. |
Using rockets as a vehicle for learning is quite engaging, as you can imagine; however, it is more than just blowing stuff up. We are teaching kids some basic scientific capabilities, such as:
Gather and interpret data.
Engage with science.
Thanks to the House of Science we have been able to get our hands on this amazing resource. There are three main ideas that we are exploring, along with their learning outcomes.
Seek and describe simple patterns in physical phenomena. Students discuss what causes things to fly or move and identify some of the simple forces involved.
Physical inquiry and physics concepts: Students will explore, describe and represent patterns and trends for everyday examples of physical phenomena, such as movement and forces.
We started by investigating the two most common forces we encounter every day: push and pull. We then looked at what acts upon an object to create these forces, what Sir Isaac Newton discovered when an apple fell on his head, and what his laws of motion have to do with launching a rocket.
Watch our Rocket video here
I thought it was fun watching the rockets explode. I learnt that push and pull are forces and that Isaac Newton has three laws. Emma |
Students in elementary statistics traditionally see experiments and data merely as words and numbers in a text. They plug numbers into formulas and make conclusions about briefly described experiments. They receive little or no exposure to the important statistical activities of sample selection, data collection, experimental design, randomization, development of statistical models, etc. In short, they leave the first course without a firm understanding of the role of applied statistics in scientific investigations. It is proposed to establish a prototype elementary statistics lab and to create a one semester hour lab course to be taken with or after the completion of the traditional freshman-sophomore level elementary statistics course. The lab will guide the student through simple, but meaningful experiments which illustrate important points of applied statistics. In each session the student will discuss and perform an experiment, collect and analyze data, and write a report. The lab will differ from traditional science labs in that the emphasis will be on statistical concepts. Students in the lab will be compared with a control group with regard to performance in elementary statistics and propensity to enroll in additional statistics courses. Student and teacher's manuals will be prepared so that the course can be used at other colleges and universities.
1. Introduction to Macintosh and Minitab
2. Measurement of Pulse Rate
(Descriptive Statistics and Variability)
3. Parking Lot Sampling
(Construction of Frame and Random Sampling)
Begin Plant Experiment
(Randomization and Designed Experiments)
4. Real and Perceived Distances
(Scatterplots and Variation)
5. Traffic Counts
6. Coke Versus Ritz Taste Test
(Proportion, Binomial Distribution, Paired Comparison)
7. Variation in Carpet Tacks
(Variation, Quality Improvement)
8. Sampling Distribution of Sample Mean and Median
(Simulation, Estimation, Central Limit Theorem; see the sketch following this list)
9. Absorbency of Paper Towels
(Sampling, Confidence Interval for Mean)
10. Breaking Strength of String and Fishing Line
(Confidence Intervals, Hypothesis Testing)
11. Airplane Flight Distance
(2 Factor Design, Selection of Factors)
12. Normal Walking Versus Exaggerated Arm Movement
(Dependent Sample Comparison of Means)
13. Conclusion of Plant Experiment
(2 Factor Experiment, Model Building)
14. Prediction of Hickory Nut Weight
(Regression, Correlation, Plotting)
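The course as proposed used Minitab on Macintosh computers (session 1). Purely as a present-day illustration, and not part of the original proposal, here is a minimal Python sketch of the kind of simulation session 8 describes: comparing the sampling distributions of the sample mean and the sample median. All parameter values are arbitrary.

```python
# Simulate many samples from a normal population and compare the spread of
# the sample mean with that of the sample median.
import random
import statistics

random.seed(42)

POPULATION_MEAN = 50
POPULATION_SD = 10
SAMPLE_SIZE = 25
N_SAMPLES = 2000

means, medians = [], []
for _ in range(N_SAMPLES):
    sample = [random.gauss(POPULATION_MEAN, POPULATION_SD) for _ in range(SAMPLE_SIZE)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

# Both estimators center on the population mean, but the mean has the
# smaller spread, which is the usual lesson of this exercise.
print("mean of sample means:   ", round(statistics.mean(means), 2))
print("SD of sample means:     ", round(statistics.stdev(means), 2))
print("mean of sample medians: ", round(statistics.mean(medians), 2))
print("SD of sample medians:   ", round(statistics.stdev(medians), 2))
```

Re-running the simulation with different sample sizes mirrors the session's discussion of the Central Limit Theorem.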
Practical issues in running the lab include:
Generating enrollment among the target audience.
Knowing how much writing to require.
Giving the students guidance about what a written report should look like.
Consistent grading of written reports.
Dealing with late arrivals to lab.
Dealing with missed lab. |
1.) The tests of falling bodies in a frame under constant acceleration prove all masses accelerate to the floor at the same rate, the rate at which the frame is accelerating. 2.) The same tests in a frame under the constant "force" of acceleration prove the mass of the body dropped determines the rate it accelerates to the floor. Can a frame be accelerated to show the first results without applying force to the frame? |
Fruit trees require a minimum of 4 hours of sun daily and a well-drained, slightly acidic soil that is high in organic matter. A heavy soil should be amended with gypsum, manures and peat moss to lighten the soil so that the tree's roots can breathe and penetrate through the soil. A sandy soil needs to be amended with peat moss and manures to retain even moisture so that the trees do not dry out so quickly. A soil test is recommended so that the proper pH can be obtained to aid in your tree's growth and production of fruit. Fruit trees will not grow in wet soils.
The following is a list of fruit trees and their pollination requirements:
Apples: Two different varieties are required for cross pollination. Yellow Delicious will not pollinate McIntosh
Apricots: When planting apricots in colder regions, two trees are recommended for best results.
Cherry: Sour cherries are self-fruitful. Sweet cherries need a pollinator. Stella (a sweet cherry) is self-fruitful.
Nut Trees: Two or more trees of the same variety are required for good pollination, as they are wind pollinated.
Peach: Peach trees are self-fruitful.
Pear: Plant two different varieties for cross-pollination. Bartlett is a poor pollinator for Seckel.
Plum: Plant two different varieties.
European and oriental types do not cross-pollinate.
A general purpose complete fertilizer with the analysis of 5-10-5 or 5-10-10 is appropriate. These three numbers represent Nitrogen, Phosphorus and Potassium.
Nitrogen: Gives the plant green growth.
Phosphorus: Aids the plants in root development.
Potassium: Helps the plant build cells, aids in root development and enables the plant to stay vigorous and fight off diseases. Greensand is an excellent source of potassium. Greensand comes from the ocean floor and has small quantities of many nutrients in addition to potassium.
Sul-Po-Mag: This is an organic fertilizer that provides the plants with sulfur, potassium and magnesium. These three elements are often depleted from our soils and need to be added to aid in fruit production and to help the tree maintain its fruit while ripening.
Dehydrated Manure: An excellent source of organic fertilizer that can be applied to the trees. We recommend you apply a complete fertilizer in addition to manure to ensure a balanced diet for your trees. |
When I last talked to my neighbor's kids, I was surprised to learn that they did not know what an audio cassette is. With the era of mix tapes behind us, the medium has faded into obscurity. Yet one older music medium refuses to disappear into the void: the round, flat vinyl surface known as the record.
Record factories are operating at maximum capacity, pressing hundreds of thousands of new records every day. Although records have always been the medium of choice for collectors, the general population is showing renewed interest. Retailers believe in this revival, with retail giants like Amazon opening dedicated sections for them. But why would a medium as old as vinyl still be popular? Understanding why requires a bit of sound theory.
Sound is a vibration in the air that translates to a signal when it hits our ear. When music is recorded on a DVD or as an MP3, that analog signal is transformed into a digital format. That digital recording is actually an approximation of the original analog signal. The faithfulness of this approximation depends on two factors: the precision of the conversion and the quality of the conversion.
The precision of the conversion is akin to precision when measuring. For example, when measuring a table, one can use meters, decimetres, centimetres or millimetres. The smaller the unit, the better the precision. However, smaller units require more digits to represent the same value (1 m vs 1000 mm). When encoding an analog signal to digital, the more precise the encoding is, the more faithful the recording, but the larger the file.
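To make the precision idea concrete, here is a small sketch (my own illustration, not from the original post) that stores a single analog value at progressively finer step sizes; the value and the step sizes are arbitrary.

```python
# A toy illustration of how precision affects a digital approximation:
# the same analog value quantized with coarse and fine step sizes.
def quantize(value, step):
    """Round an analog value to the nearest multiple of `step`."""
    return round(value / step) * step

analog_value = 0.73619  # pretend this is an instantaneous air-pressure reading

for step in (0.1, 0.01, 0.001):
    approx = quantize(analog_value, step)
    error = abs(analog_value - approx)
    print(f"step={step}: stored {approx:.4f}, error {error:.5f}")
```

A finer step gives a smaller error but needs more digits (or bits) to store. In real digital audio the step size is set by the bit depth: 16-bit CD audio, for example, allows 65,536 possible levels per sample.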
The second factor, the quality of the conversion, depends on the software used to convert the signal and the skill of the person doing the conversion. Higher quality software will make better decisions about how the signal is approximated. Furthermore, a skilled person can correct the encoding, further improving the conversion.
A high quality digital conversion creates very large files, which can be impractical to transmit and store. Luckily, the size of the file can be reduced by compressing it. Audio compression can be either lossless or lossy.
Compression can be illustrated by taking the sentence "The ball is red." and rewriting it as "TheBallIsRed.". By using capital letters to represent spaces, the number of characters required to store the sentence is decreased by three. The sentence can easily be returned to its original state (decompressed) by adding a space in front of each capital letter. As no information is lost, this compression is considered lossless.
The sentence can be further compressed by removing vowels: "tHBlLsRd". In this case, a capital letter is used to indicate that a vowel was removed. Recreating the message requires the vowels to be guessed. This compression is considered lossy; it can cause a significant loss of precision, but it can also provide much greater savings in size.
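The toy lossless scheme above can be written out in a few lines of code. This sketch (mine, not from the original post) implements the space-to-capital trick and shows that the round trip restores the sentence exactly, which is what makes it lossless; it assumes the original sentence has no interior capital letters.

```python
def compress(sentence):
    """Drop spaces and capitalize the letter that followed each one."""
    words = sentence.split(" ")
    return words[0] + "".join(w[:1].upper() + w[1:] for w in words[1:])

def decompress(packed):
    """Re-insert a space before every capital letter after the first character."""
    out = [packed[0]]
    for ch in packed[1:]:
        if ch.isupper():
            out.append(" ")
            out.append(ch.lower())
        else:
            out.append(ch)
    return "".join(out)

original = "The ball is red."
packed = compress(original)       # "TheBallIsRed."
restored = decompress(packed)     # "The ball is red."
print(packed, "->", restored, "| lossless:", restored == original)
```

Real lossless codecs such as FLAC use far more sophisticated schemes, but the principle is the same: the original data can be reconstructed exactly.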
Popular media formats like MP3 and MPEG use lossy compression. Compact Discs, however, are lossless: no quality is lost beyond the initial conversion to digital.
In the last decade, a trend has been noticed in the digital conversion of music. Dubbed the Loudness War, the audio level of digital recordings has been continuously increased. Given that digital sound formats can only store a limited range of amplitudes, critics argue that this increase clips off peak sounds and introduces distortion into the recording.
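A small sketch (my own illustration, not from the original post) makes the clipping argument concrete: boosting a waveform beyond the format's fixed amplitude ceiling flattens its peaks. The gain value and waveform here are arbitrary.

```python
# Boost a simple waveform and count how many samples hit the ceiling.
import math

CEILING = 1.0   # the maximum amplitude the format can represent
GAIN = 1.8      # hypothetical loudness boost applied during mastering

samples = [math.sin(2 * math.pi * t / 20) for t in range(20)]
boosted = [max(-CEILING, min(CEILING, s * GAIN)) for s in samples]

clipped = sum(1 for b in boosted if abs(b) >= CEILING)
print(f"{clipped} of {len(boosted)} samples were flattened against the ceiling")
```

Those flattened peaks are the distortion that critics of the Loudness War describe.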
A waveform comparison image on Wikipedia illustrates how the Guitar Hero downloadable version of Death Magnetic is far less loudness-enhanced than the CD release.
As mentioned, music is a signal created by vibration in the air. On a record, that signal is physically etched into the spiral grooves of the record. It is then read by a needle that will travel the grooves and measure the variations.
Given that the signal has not been converted, compressed or amplified, it represents the most faithful reproduction of the music as recorded by the artist.
Unfortunately, records are far from being the perfect medium. Even though a record might store an incredible recording, the ability to play it back faithfully depends on the quality of your sound system. An entry-level turntable starts at around 300 USD, and anything below that entry-level cost is not even worth buying.
Being a physical analog medium also has a significant drawback. Given that the needle must touch the record to read it, the sound quality of the record will degrade over time. Although laser systems for reading a record exist, they are prohibitively expensive.
Finally, a typical record will only hold 3 or 4 songs per side. If you are planning to listen to a full album, be prepared to do some flipping and swapping every ten minutes or so.
Can I Hear the Difference?
Compared with its CD counterpart, the experience of listening to a record can best be described as richer. The difference between a low note and a high note will be more pronounced. However, for many records, the difference is very mild. As such, I believe there are three good reasons to favour vinyl over traditional digital music formats:
- You demand the utmost in audio quality.
- You like things retro.
- The digital conversions of your favorite albums were butchered.
If you don't know much about the medium, I'd suggest you visit a vinyl shop to learn more.
Want more working memory? Then you need to expand your brain. Credit: Flickr/Elena Gatti, CC BY
By Joel Pearson
Before we had mobile phones, people had to use their own memory to store long phone numbers (or write them down). But getting those numbers into long-term memory could be a real pain.
People had to write the number down or say it over and over again to themselves. With each verbal repetition, something annoying would happen: the number would start to fade out of memory. To get the number into long-term memory, you had to keep repeating it fast enough to beat the fade-away.
This short-term, fast-fading memory is called working memory. It’s like the RAM in a computer: it holds everything in your mind ready for action, simulation or a decision.
Working memory capacity has been linked to IQ and even to some mental disorders, but we don’t know why some people can fit a lot more information into their working memory than others.
In a recent paper published in the journal Cerebral Cortex, we showed that the size of the part of the brain responsible for processing vision is linked to working memory capacity.
How much can you remember?
Some people can hold huge amounts of information in their mind and even manipulate it, trying out different ideas, while other people can only hold small amounts.
Why do people have the particular capacity they have? How can we investigate these differences between people? It turns out the key to answering these questions is to get people to remember information in only one of their five senses, for example, vision.
By doing this we narrow down the field of things to investigate. We can look at the precise brain anatomy related to just that one sense in different people and figure out which parts of their brain allow for greater information capacity.
This is exactly what we did in our Cerebral Cortex paper. We found that people with a physically larger visual cortex – the part at the back of the brain that deals with what we see – could hold more temporary information in their memory.
This is interesting because it suggests that the physical parameters of our brains set the limits on what we can do with our minds.
An easy way to think about this is to picture the visual cortex as a bucket: the larger the bucket the more water it can hold.
The larger your visual cortex the more visual information it can hold. But the “visual cortex bucket” has to actively hold on to the information. It takes voluntary effort on your behalf to continually hold this information and then use it.
This was one of the surprising things from our study. Most research suggests that voluntarily holding information in your mind like this is done by high-level brain areas such as the frontal cortex, not the earliest stages of sensory processing such as the primary visual cortex.
Until recently it was thought that these early stages of processing ran on autopilot, “mindlessly” processing incoming information from the senses. But our study and other recent papers now suggest that high-level processing such as voluntarily holding things in mind crucially depend on the earliest levels of processing.
The memory test
We used a computer-based visual memory test in which our participants were shown between two and four small visual patterns for just one second before they were removed.
They had to try to remember all the patterns for nine seconds before a single "test" pattern was shown again, this time slightly rotated. The test was to see whether they could remember the patterns well enough to know in which direction that single pattern had changed.
This allowed us to measure each participant’s capacity for visual working memory for the nine seconds. To make sure this test was reliable we did this again in each person two weeks later.
After these behavioural tests, we put each person in a brain scanner (functional Magnetic Resonance Imaging: fMRI) and mapped out the visual parts of their brain. This allowed us to directly compare how much each person could hold in memory with the size or volume of his or her visual cortex.
It is worth noting that size is not everything. Many other brain factors can and will influence your mental life and indeed your working memory capacity.
These factors include the degree of internal connections between different brain areas, the level of neural transmitters, the hormones in your body and brain, and of course the amount of stress you are under.
But all this doesn’t make it any fairer for those that can’t hold much “in mind”.
How big is your brain?
The next logical question is: why do I have a large or small visual brain? When it comes to the visual cortex the data suggests that our genes play a role.
The cortex, the outer layer of the brain, is like a gooey grey sheet that is all wrinkled up on itself. In fact, there are two different components to the size or volume of the primary visual cortex: thickness and surface area.
These two different measures seem unrelated to each other – they are not correlated – but both have a heritable component.
In our study, we found that both the thickness and the surface size of the visual cortex independently predicted how much people could hold in visual working memory. So indirectly at least, it seems that your parents or ancestors might have passed their visual cortex down to you, or at least its size.
So does all this come down to luck? Well, as with most things, yes and no. Some promising research is now looking at how training or practice can literally change the architecture of your brain.
A few studies have demonstrated that learning and practicing juggling can induce anatomical changes in some parts of the visual cortex. But exactly how such changes might affect visual working memory is still unknown.
For now, things might have to remain a little unfair, although many people do subscribe to the notion of use it or lose it when it comes to the brain.
A simple hack to get around any capacity limits in short-term memory is to use physical props, such as a smartphone, whiteboard or even pen and paper as a form of mind-extension.
For more on our research check out our lab site.
Joel Pearson, Academic Scientist at UNSW Australia, receives funding from the Australian Research Council and the National Health and Medical Research Council. |
examples of bases and their uses
In chemistry, the term base metal is used informally to refer to a metal that oxidizes or corrodes relatively easily, and reacts variably with diluted hydrochloric acid (HCl) to form hydrogen. Examples include iron, nickel, lead and zinc. Copper is considered a base metal as it oxidizes relatively easily, although it does not react with HCl.
Base is used in the sense of low-born, in opposition to noble or precious metal. In alchemy, a base metal was a common and inexpensive metal, as opposed to precious metals, mainly gold and silver. A long-time goal of the alchemists was the transmutation of base metal into precious metal.
In mining and economics, base metals refers to industrial non-ferrous metals excluding precious metals. These include copper, lead, nickel and zinc. The U.S. Customs and Border Protection is more inclusive in its definition. It includes, in addition to the four above, iron and steel, aluminium, tin, tungsten, molybdenum, tantalum, magnesium, cobalt, bismuth, cadmium, titanium, zirconium, antimony, manganese, beryllium, chromium, germanium, vanadium, gallium, hafnium, indium, niobium, rhenium and thallium.
Bases are considered the chemical opposite of acids because of their ability to neutralize acids. In 1887 the Swedish physicist and chemist Svante Arrhenius defined a base as a chemical substance that produces hydroxide ions (OH−) and cations. A typical base, according to the Arrhenius definition, is sodium hydroxide (NaOH). The neutralization of an acid with a base to yield salt and water may be represented as HCl (aq) + KOH (aq) ⇆ H2O (l) + KCl (aq) (1). A major problem with Arrhenius's definition of bases is that several chemical compounds, such as NaHCO3, Na2CO3, and Na3PO4, which produce basic solutions when dissolved in water, do not contain hydroxide ions. The Brønsted-Lowry theory, which was proposed independently by Danish chemist Johannes Brønsted and English chemist Thomas Lowry in 1923, states that a base accepts hydrogen ions and an acid donates hydrogen ions. This theory not only includes all bases containing hydroxide ions, but also covers any chemical species that are able to accept hydrogen ions in aqueous solution. For example, when sodium carbonate is dissolved in solution, the carbonate ion accepts a hydrogen ion from water to form the bicarbonate ion and hydroxide ion: CO32− (aq) + H2O (l) ⇆ HCO3− (aq) + OH− (aq) (2). The Brønsted-Lowry theory includes water as a reactant and considers its acidity or basicity. In reaction (2) a new acid and base are formed, which are called the conjugate acid and conjugate base, respectively. The strength of a base is determined by the extent of its ionization in aqueous solution. Strong bases, such as NaOH, are 100 percent ionized in aqueous solution, while weak bases, such as ammonia, are only partially ionized: NH3 (aq) + H2O (l) ⇆ NH4+ (aq) + OH− (aq) (3). The partial ionization is a dynamic equilibrium, as indicated by the double arrow in equation (3). The strength of acids and bases also determines the strength of their conjugate bases and conjugate acids, respectively. Weak acids and bases have strong conjugate bases and acids. For example, when ammonium chloride is dissolved in water, it gives an acidic solution because the ammonium ion is a strong conjugate acid of the weak base ammonia, while the chloride ion is a weak conjugate base of the strong acid hydrochloric acid: NH4+ (aq) + H2O (l) → NH3 (aq) + H3O+ (aq) (4). The carbonate ion in equation (2) yields a basic solution because it is the strong conjugate base of the weak acid HCO3−. When NaHCO3 is dissolved in water, it gives a basic solution, even though a hydrogen ion is available. Predicting this requires one to consider the strength of carbonic acid, H2CO3, which is a very weak acid: H2CO3 (aq) + H2O (l) ⇆ HCO3− (aq) + H3O+ (aq) (5). However, HCO3− will act as an acid if a strong base is added: HCO3− (aq) + OH− (aq) → H2O (l) + CO32− (aq) (6). This ability to act as a base or an acid is called amphoterism. Anions of polyprotic acids, such as HCO3−, H2PO4−, and HPO42−, which contain replaceable hydrogen ions, are amphoteric. Some hydroxides, such as Al(OH)3 and Zn(OH)2, are also amphoteric, reacting with a base or acid, as illustrated by the following equations: Al(OH)3 (s) + OH− (aq) → Al(OH)4− (aq) (7) and Al(OH)3 (s) + 3 H3O+ (aq) → Al3+ (aq) + 6 H2O (l) (8). Equations (7) and (8) can also be explained by American chemist Gilbert Lewis's acid-base theory. A Lewis acid is a substance that can accept a pair of electrons to form a new bond, and a Lewis base is a substance that can donate a pair of electrons to form a new bond. All Arrhenius and Brønsted-Lowry bases are also Lewis bases.
All metal cations are potential Lewis acids. Complexes of metal ions with water, ammonia, and hydroxide ion are examples of Lewis acid-base reactions. For example, [Al(H2O)6]3+ may be regarded as a combination of the Lewis acid Al3+ with six electron pairs from six H2O molecules. Buffer solutions contain a base and an acid that can react with an added acid or base, respectively, and they maintain a pH very close to the original value. Buffers usually consist of approximately equal quantities of a weak acid and its conjugate base, or a weak base and its conjugate acid. For example, one of the buffers used to keep the pH of the blood near 7.45 is the H2PO4−/HPO42− acid/conjugate base system. Small amounts of an acid or base react with one of the components of the buffer mixture to produce the other component as follows: H2PO4− (aq) + OH− (aq) → H2O (l) + HPO42− (aq) (10) and HPO42− (aq) + H3O+ (aq) → H2O (l) + H2PO4− (aq) (11).
acids and bases two related classes of chemicals; the members of each class have a number of common properties when dissolved in a solvent, usually water. Properties Acids in water solutions exhibit the following common properties: they taste sour; turn litmus paper red; and react with certain metals, such as zinc, to yield hydrogen gas. Bases in water solutions exhibit these common properties: they taste bitter; turn litmus paper blue; and feel slippery. When a water solution of acid is mixed with a water solution of base, water and a salt are formed; this process, called neutralization , is complete only if the resulting solution has neither acidic nor basic properties. Classification Acids and bases can be classified as organic or inorganic. Some of the more common organic acids are: citric acid , carbonic acid , hydrogen cyanide , salicylic acid, lactic acid , and tartaric acid . Some examples of organic bases are: pyridine and ethylamine. Some of the common inorganic acids are: hydrogen sulfide , phosphoric acid , hydrogen chloride , and sulfuric acid . Some common inorganic bases are: sodium hydroxide , sodium carbonate , sodium bicarbonate , calcium hydroxide , and calcium carbonate . Acids, such as hydrochloric acid, and bases, such as potassium hydroxide, that have a great tendency to dissociate in water are completely ionized in solution; they are called strong acids or strong bases. Acids, such as acetic acid, and bases, such as ammonia, that are reluctant to dissociate in water are only partially ionized in solution; they are called weak acids or weak bases. Strong acids in solution produce a high concentration of hydrogen ions, and strong bases in solution produce a high concentration of hydroxide ions and a correspondingly low concentration of hydrogen ions. The hydrogen ion concentration is often expressed in terms of its negative logarithm, or p H (see separate article). Strong acids and strong bases make very good electrolytes (see electrolysis ), i.e., their solutions readily conduct electricity. Weak acids and weak bases make poor electrolytes. See buffer ; catalyst ; indicators, acid-base ; titration . Acid-Base Theories There are three theories that identify a singular characteristic which defines an acid and a base: the Arrhenius theory, for which the Swedish chemist Svante Arrhenius was awarded the 1903 Nobel Prize in chemistry; the Brönsted-Lowry, or proton donor, theory, advanced in 1923; and the Lewis, or electron-pair, theory, which was also presented in 1923. Each of the three theories has its own advantages and disadvantages; each is useful under certain conditions. The Arrhenius Theory When an acid or base dissolves in water, a certain percentage of the acid or base particles will break up, or dissociate (see dissociation ), into oppositely charged ions. The Arrhenius theory defines an acid as a compound that can dissociate in water to yield hydrogen ions, H + , and a base as a compound that can dissociate in water to yield hydroxide ions, OH -  . For example, hydrochloric acid, HCl, dissociates in water to yield the required hydrogen ions, H + , and also chloride ions, Cl -  . The base sodium hydroxide, NaOH, dissociates in water to yield the required hydroxide ions, OH - , and also sodium ions, Na + . The Brönsted-Lowry Theory Some substances act as acids or bases when they are dissolved in solvents other than water, such as liquid ammonia. 
The Brønsted-Lowry theory, named for the Danish chemist Johannes Brønsted and the British chemist Thomas Lowry, provides a more general definition of acids and bases that can be used to deal both with solutions that contain no water and solutions that contain water. It defines an acid as a proton donor and a base as a proton acceptor. In the Brønsted-Lowry theory, water, H2O, can be considered an acid or a base since it can lose a proton to form a hydroxide ion, OH−, or accept a proton to form a hydronium ion, H3O+ (see amphoterism). When an acid loses a proton, the remaining species can be a proton acceptor and is called the conjugate base of the acid. Similarly, when a base accepts a proton, the resulting species can be a proton donor and is called the conjugate acid of that base. For example, when a water molecule loses a proton to form a hydroxide ion, the hydroxide ion can be considered the conjugate base of the acid, water. When a water molecule accepts a proton to form a hydronium ion, the hydronium ion can be considered the conjugate acid of the base, water.

The Lewis Theory

Another theory that provides a very broad definition of acids and bases has been put forth by the American chemist Gilbert Lewis. The Lewis theory defines an acid as a compound that can accept a pair of electrons and a base as a compound that can donate a pair of electrons. Boron trifluoride, BF3, can be considered a Lewis acid, and ethyl alcohol can be considered a Lewis base.
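Returning to the pH mentioned under Properties: since pH is just the negative base-10 logarithm of the hydrogen-ion concentration, the definition is easy to check numerically. The snippet below is a minimal illustration with made-up concentrations, not values from the entry.

```python
import math

def ph_from_h_concentration(h_molar: float) -> float:
    """pH is the negative base-10 logarithm of [H+] in mol/L."""
    return -math.log10(h_molar)

# A strong acid such as HCl dissociates essentially completely, so a
# 0.01 M solution gives [H+] of about 0.01 M and a pH of about 2.
print(ph_from_h_concentration(0.01))   # 2.0
# Pure water at 25 degrees C has [H+] of about 1e-7 M, i.e. pH about 7.
print(ph_from_h_concentration(1e-7))   # 7.0
```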
From Yahoo Answers
Answers:
- Aluminum hydroxide (Amphojel, AlternaGEL)
- Magnesium hydroxide (Phillips Milk of Magnesia)
- Aluminum hydroxide and magnesium hydroxide (Maalox, Mylanta)
- Aluminum carbonate gel (Basaljel)
- Calcium carbonate (Alcalak, Calcium Rich Rolaids, Quick-Eze, Rennie, Titralac, Tums)
- Sodium bicarbonate (bicarbonate of soda, Alka-Seltzer)
- Hydrotalcite (Mg6Al2(CO3)(OH)16·4(H2O); Talcid)
- Bismuth subsalicylate (Pepto-Bismol)
- Magaldrate + simethicone (Pepsil)
Answers: Acids: HNO3, HCl, H2SO4, and NH4+ (H2O can also act as an acid, since it is amphoteric). Bases: NaOH and NH3.
Answers: (sources: http://forum.purseblog.com/general-discussion/anyone-good-in-chemistry-330196.html, http://www.justanswer.com/questions/30vvx-trying-to-understand-acids-bases-and-salts-i-understand, http://chestofbooks.com/health/materia-medica-drugs/Treatise-Therapeutics-Pharmacology-Materia-Medica-Vol2/3-Effects-And-Uses-Of-Alkalies-As-Dynamic-Agents.html, http://www.lef.org/protocols/prtcl-027.shtml)
Acids: ascorbic acid, gamma-aminobutyric acid, and alpha-lipoic acid, a powerful antioxidant that regulates gene expression and has been reported to preserve hearing during cisplatin therapy.
Bases: magnesium hydroxide (milk of magnesia), sodium bicarbonate, and calcium carbonate (Rolaids), a Brønsted base.
Salts: sodium chloride, potassium chloride, sodium fluoride.
Ascorbic acid (vitamin C) can be taken internally to neutralize free radicals and is used to treat scurvy. Sodium bicarbonate reduces stomach acids and can make the urine less acidic. It is used as an antacid to treat heartburn, indigestion, and other stomach disorders. It is also used to treat various kidney disorders and to increase the effectiveness of sulfonamides. Sodium fluoride is used in dental products to strengthen the hydroxyapatite of tooth enamel and help prevent cavities.
Answers: I'll answer your biological question first. Your stomach contains HCl, that's right, hydrochloric acid. Gastric pH is typically quite low, roughly 1.5 to 3.5, so the stomach contents are strongly acidic. One of the main reasons we have it is to kill bacteria in our food. Bacteria are very sensitive to pH changes, and since the mouth is close to neutral, the acidic environment of the stomach kills most of the bacteria that arrive with food. It's a line of defense. Acids and bases are also used quite a lot elsewhere in nature. Your blood contains what's called a buffer, the carbonic acid-bicarbonate system, which is continually broken down and re-formed to keep blood pH around 7.4. Your muscles also release lactic acid when they're strained. I hope those are enough examples
Endowed with gold and oil palms and situated between the trans- Saharan trade routes and the African coastline visited by successive European traders, the area known today as Ghana has been involved in all phases of Africa’s economic development during the last thousand years. As the economic fortunes of African societies have waxed and waned, so, too, have Ghana’s, leaving that country in the early 1990s in a state of arrested development, unable to make the “leap” to Africa’s next, as yet uncertain, phase of economic evolution.
As early as the thirteenth century, present-day Ghana was drawn into long-distance trade, in large part because of its gold reserves. The trans-Saharan trade, one of the most wide-ranging trading networks of pre-modern times, involved an exchange of European, North African, and Saharan commodities southward in exchange for the products of the African savannas and forests, including gold, kola nuts, and slaves. Present-day Ghana, named the Gold Coast by European traders, was an important source of the gold traded across the Sahara. Centralized states such as Asante controlled prices by regulating production and marketing of this precious commodity. As European navigational techniques improved in the fifteenth century, Portuguese and later Dutch and English traders tried to circumvent the Saharan trade by sailing directly to its southernmost source on the West African coast. In 1482 the Portuguese built a fortified trading post at Elmina and began purchasing gold, ivory, and pepper from African coastal merchants.
Although Africans for centuries had exported their raw materials—ivory, gold, kola nuts—in exchange for imports ranging from salt to foreign metals, the introduction of the Atlantic slave trade in the early sixteenth century changed the nature of African export production in fundamental ways. An increasing number of Ghanaians sought to enrich themselves by capturing fellow Africans in warfare and selling them to slave dealers from North America and South America. The slaves were transported to the coast and sold through African merchants using the same routes and connections through which gold and ivory had formerly flowed. In return, Africans often received guns as payment, which could be used to capture more slaves and, more importantly, to gain and preserve political power.
An estimated ten million Africans, at least half a million from the Gold Coast, left the continent in this manner. Some economists have argued that the slave trade increased African economic resources and therefore did not necessarily impede development, but others, notably historian Walter Rodney, have argued that by removing the continent’s most valuable resource—humans—the slave trade robbed Africa of unknown invention, innovation, and production. Rodney further argues that the slave trade fueled a process of underdevelopment, whereby African societies came to rely on the export of resources crucial to their own economic growth, thereby precluding local development of those resources. Although some scholars maintain that the subsequent economic history of this region supports Rodney’s interpretation, no consensus exists on this point. Indeed, in recent years, some historians not only have rejected Rodney’s interpretation but also have advanced the notion that it is the Africans themselves rather than an array of external forces that are to blame for the continent’s economic plight.
When the slave trade ended in the early years of the nineteenth century, the local economy became the focus of the so-called legitimate trade, which the emerging industrial powers of Europe encouraged as a source of materials and markets to aid their own production and sales. The British, in particular, gained increasing control over the region throughout the nineteenth century and promoted the production of palm oil and timber as well as the continuation of gold production. In return, Africans were inundated with imports of consumer goods that, unlike the luxuries or locally unavailable imports of the trans-Saharan trade, quickly displaced African products, especially textiles.
In 1878 cacao trees were introduced from the Americas. Cocoa quickly became the colony’s major export; Ghana produced more than half the global yield by the 1920s. African farmers used kinship networks like business corporations to spread cocoa cultivation throughout large areas of southern Ghana. Legitimate trade restored the overall productivity of Ghana’s economy; however, the influx of European goods began to displace indigenous industries, and farmers focused more on cash crops than on essential food crops for local consumption.
When Ghana gained its independence from Britain in 1957, the economy appeared stable and prosperous. Ghana was the world’s leading producer of cocoa, boasted a well-developed infrastructure to service trade, and enjoyed a relatively advanced education system. At independence, President Kwame Nkrumah sought to use the apparent stability of the Ghanaian economy as a springboard for economic diversification and expansion. He began the process of moving Ghana from a primarily agricultural economy to a mixed agricultural-industrial one. Using cocoa revenues as security, Nkrumah took out loans to establish industries that would produce import substitutes as well as process many of Ghana’s exports. Nkrumah’s plans were ambitious and grounded in the desire to reduce Ghana’s vulnerability to world trade. Unfortunately, the price of cocoa collapsed in the mid-1960s, destroying the fundamental stability of the economy and making it nearly impossible for Nkrumah to continue his plans. Pervasive corruption exacerbated these problems. In 1966 a group of military officers overthrew Nkrumah and inherited a nearly bankrupt country.
Since then, Ghana has been caught in a cycle of debt, weak commodity demand, and currency overvaluation, which has resulted in the decay of productive capacities and a crippling foreign debt. Once the price of cocoa fell in the mid-1960s, Ghana obtained less of the foreign currency necessary to repay loans, the value of which jumped almost ten times between 1960 and 1966. Some economists recommended that Ghana devalue its currency—the cedi—to make its cocoa price more attractive on the world market, but devaluation of the cedi would also have rendered loan repayment in United States dollars much more difficult. Moreover, such a devaluation would have increased the costs of imports, both for consumers and nascent industries.
Until the early 1980s, successive governments refused to devalue the currency (with the exception of the government of Kofi A. Busia, which devalued the cedi in 1971 and was promptly overthrown). Cocoa prices languished, discouraging cocoa production altogether and leading to smuggling of existing cocoa crops to neighboring countries, where francs rather than cedis could be obtained in payment. As production and official exports collapsed, revenue necessary for the survival of the economy was obtained through the procurement of further loans, thereby intensifying a self-destructive cycle driven by debt and reliance on vulnerable world commodity markets.
By the early 1980s, Ghana’s economy was in an advanced state of collapse. Per capita gross domestic product ( GDP) showed negative growth throughout the 1960s and fell by 3.2 percent per year from 1970 to 1981. Most important was the decline in cocoa production, which fell by half between the mid-1960s and the late 1970s, drastically reducing Ghana’s share of the world market from about one-third in the early 1970s to only one-eighth in 1982-83. At the same time, mineral production fell by 32 percent; gold production declined by 47 percent, diamonds by 67 percent, manganese by 43 percent, and bauxite by 46 percent. Inflation averaged more than 50 percent a year between 1976 and 1981, hitting 116.5 percent in 1981. Real minimum wages dropped from an index of 75 in 1975 to one of 15.4 in 1981. Tax revenue fell from 17 percent of GDP in 1973 to only 5 percent in 1983, and actual imports by volume in 1982 were only 43 percent of average 1975-76 levels. Productivity, the standard of living, and the government’s resources had plummeted dramatically.
In 1981 a military government under the leadership of Flight Lieutenant Jerry John Rawlings came to power. Calling itself the Provisional National Defence Council (PNDC), the Rawlings regime initially blamed the nation’s economic problems on the corruption of previous governments. Rawlings soon discovered, however, that Ghana’s problems were the result of forces more complicated than economic abuse. Following a severe drought in 1983, the government accepted stringent International Monetary Fund ( IMF) and World Bank loan conditions and instituted the Economic Recovery Program (ERP).
Signaling a dramatic shift in policies, the ERP fundamentally changed the government’s social, political, and economic orientation. Aimed primarily at enabling Ghana to repay its foreign debts, the ERP exemplified the structural adjustment policies formulated by international banking and donor institutions in the 1980s. The program emphasized the promotion of the export sector and an enforced fiscal stringency, which together aimed to eradicate budget deficits. The PNDC followed the ERP faithfully and gained the support of the international financial community. The effects of the ERP on the domestic economy, however, led to a lowered standard of living for most Ghanaians.
To be able to communicate effectively in a foreign language, a fundamental insight into the grammar of the foreign language in question is required.
Basic English Grammar is a fundamental introduction to the grammar of the English language, both basic sentence analysis and the fundamental grammatical aspects of the major word classes.
The grammar is basic in the sense that it assumes only the most fundamental knowledge of English grammar. It is thus suited as a textbook for introductory courses in English grammar at business colleges, colleges of education, business schools and universities, but it can also be used for self-tuition.
Although Basic English Grammar is an introduction to English grammar, it does not conceal the fact that there are disagreements among grammarians about the analysis of specific grammatical phenomena. In such instances, the book points out disagreements in an attempt to make readers think about grammatical phenomena.
Viruses And Bacteria Falling From The Skies.
An astonishing number of viruses are circulating the Earth’s atmosphere and falling from it, according to new research from scientists in Canada, Spain and the U.S. The study marks the first time scientists have quantified the viruses being swept up from the Earth’s surface into the free troposphere, that layer of atmosphere beyond Earth’s weather systems but below the stratosphere where jet airplanes fly. The viruses can be carried thousands of kilometres there before being deposited back onto the Earth’s surface. “Every day, more than 800 million viruses are deposited per square metre above the planetary boundary layer – that’s 25 viruses for each person in Canada,” said University of British Columbia virologist Curtis Suttle, one of the senior authors of a paper in the International Society for Microbial Ecology Journal that outlines the findings. “Roughly 20 years ago, we began finding genetically similar viruses occurring in very different environments around the globe,” says Suttle. “This preponderance of long-residence viruses travelling the atmosphere likely explains why it’s quite conceivable to have a virus swept up into the atmosphere on one continent and deposited on another.” Bacteria and viruses are swept up in the atmosphere in small particles from soil-dust and sea spray.
Suttle and colleagues at the University of Granada and San Diego State University wanted to know how much of that material is carried up above the atmospheric boundary layer above 2,500 to 3,000 metres. At that altitude, particles are subject to long-range transport unlike particles lower in the atmosphere. Using platform sites high in Spain’s Sierra Nevada Mountains, the researchers found billions of viruses and tens of millions of bacteria are being deposited per square metre per day. The deposition rates for viruses were nine to 461 times greater than the rates for bacteria.
“Bacteria and viruses are typically deposited back to Earth via rain events and Saharan dust intrusions. However, the rain was less efficient removing viruses from the atmosphere,” said author and microbial ecologist Isabel Reche from the University of Granada. The researchers also found the majority of the viruses carried signatures indicating they had been swept up into the air from sea spray. The viruses tend to hitch rides on smaller, lighter, organic particles suspended in air and gas, meaning they can stay aloft in the atmosphere longer.
Credit: Science Daily for the University of British Columbia, 6 February 2018.
As long as there have been kids and schools, there have been “schoolyard bullies”, but today, the act of bullying can take place beyond the school grounds.
To be considered a bullying situation, there must be an imbalance of power between the two parties, whether physical, social, or psychological. Bullying is intentional, threatening behavior toward another (physical, written, verbal, social) with the purpose of causing the victim harm or fear of harm. It is typically a recurring problem and happens one-on-one, in groups, in schools, and in cyberspace.
The bully often perceives the victim as an easy target in some way. Therefore, students with learning differences or disabilities, those who are socially awkward or who have emotional or psychological problems, LGBTQ teens and many more are at risk for being bullied. Compounding the issue is that some middle school or high school students have yet to develop the social skills, emotional maturity, or self-esteem to know how to handle those situations. Some young people will resort to bullying in the form of social exclusion when they see their victim as a threat to their own social standing. In Odd Girl Out (2011), Rachel Simmons documents how this frequently occurs among middle school girls, with a bully encouraging her peers to engage in organized but very subtle forms of exclusion to socially isolate the girl perceived to be a threat.
Bullying acts include:
- Destruction or theft of another’s personal property.
- Picking verbal or physical fights with the other (girls tend to be verbal bullies, boys more physical).
- Doing physical harm or physically attacking in other ways such as tripping, spitting, shoving.
- Public taunting, insults, or ostracizing (in person or online).
- Humiliating or threatening messages or pictures via email, social media, texts.
- Online gossip that goes unchecked or involves a group.
- Sustained, organized efforts to exclude or socially isolate an individual.
School Bullying Facts
The National Center for Education Statistics (NCES) reports that:
- There is noticeably more bullying in middle school (grades 6, 7, and 8) than in senior high school.
- Emotional bullying is the most prevalent type; pushing/shoving/tripping/spitting on someone is second.
- Cyberbullying occurs with greater frequency in the last three years of high school than in grades 6 – 9.
- Most school bullying occurs inside the school; for the middle schooler, it’s the school bus.
- Middle schoolers are more likely to be injured from bullying than high school students; the highest prevalence is among sixth graders and the injury percentage goes down every grade.
Cyberbullying: a 21st-Century Problem
Technology-related bullying via digital devices and platforms is a very damaging and dangerous trend. The threat is real and unfortunately, we’ve all read the harrowing accounts of teens being bullied in cyberspace and in school, with many of those stories ending tragically.
i-SAFE, a leading Internet safety education organization, cites these cyberbullying statistics:
- Over half of adolescents and teens have been bullied online, and about the same number have engaged in cyber bullying.
- More than 1 in 3 young people have experienced cyberthreats online.
- Over 25 percent of adolescents and teens have been bullied repeatedly through their cell phones or the Internet.
- Well over half of young people do not tell their parents when cyber bullying occurs.
A Youth Risk Behavior Surveillance Survey (Centers for Disease Control and Prevention) found that 16 percent of high school students (grades 9 –12) were electronically bullied in the past year and that, nationwide, 20 percent of high school students experienced some form of bullying.
Is Your Child Being Bullied?
Parents may notice obvious physical signs such as cuts, bruises, or worse that don't have any explanation, or notice that possessions are damaged or missing. Some adolescents will openly discuss their problems or verbalize bullying with complaints that "no one at school likes me" or "I hate that girl, she's so mean." These situations open the door to gentle questioning and discussion.
However, bullying has many emotional and psychological effects that might not be readily associated with it; these can lead to anxiety, depression, self-harm, substance abuse, and even suicide. For many teens, the effects of bullying may show up as:
- Low self-esteem
- Difficulty in trusting others
- Changes in behavior, mood swings
- Lack of assertiveness or feelings of helplessness
- Aggression or anger management problems
- Isolation, withdrawing socially
- School phobia or reluctance to go to school
- A drop in school performance
- Changes in eating or sleeping patterns
- Depression, talk of suicide or self-injury
- Avoiding certain places
What to Do
Any one of the symptoms cited is troubling and, in combination, can be alarming; all should be addressed in a calm, supportive manner. It is becoming increasingly important for school administrators and parents to be aware of the signs a student is being bullied, and provide a safe environment for the child to come for help.
At Sage Day schools, we have no tolerance for harassment, intimidation, or bullying (HIB); each school has an anti-HIB team and a comprehensive policy (https://www.sageday.com/hib-policy/) on how to handle these issues in the school. Our students know the door is always open for them to speak to administrators if they feel they are being bullied, and we can help provide the necessary social skills to support them through this upsetting time.
We urge all our Sage Day families to watch for any signs their children might be bullied and talk to them about it. Together, we can help our students rise above the threat and become stronger along the way.
How do scientists identify cancer stem cells?
Scientists use several techniques to identify cancer stem cells
Even under a microscope, there's no way to distinguish cancer stem cells from other malignant cells just by looking at them. To identify stem cells, scientists use specialized equipment to detect specific proteins on the cell's surface. These proteins are not found on regular cancer cells. A biochemical assay developed at the U-M Cancer Center can identify breast cancer stem cells.
The ultimate test to prove that cells are true cancer stem cells is to inject cells from a human tumor into mice that are genetically engineered to lack a cancer-fighting immune system. If the mouse does not get cancer, scientists know the injected cells were not stem cells, because ordinary tumor cells will divide a few times and then die. But if the mouse develops a tumor with the same types of cells as the human tumor, scientists know that the injected cells were true cancer stem cells.
The water of life can be viewed and further extracted to reflect this unit in a more abstract sense. We viewed this sense of life meets water through the war, the Industrial Revolution and this rebirth of antiquity. If one thinks of life and the qualities of water, then our abstraction makes perfect sense. Water is fluid and spreads quickly, touching everything in its path. Water is easily maneuvered and can sometimes soak up and ruin the things it interacts with. For example, the news, advancements in travel and so forth can be viewed as the living water of America in the 18th-19th century. Faster travel led to more news about life and the news spread FAST.
Another example of this transportation and travel reflecting water was the east-meets-west idea of the times. The Silk Road and other such trade routes began blooming from all angles of the world. These routes led to the desire for “exotic” artifacts to be a part of European design. The World’s Fair of 1851 in London showcased many cultures and allowed them to learn from each other, to intermingle, and to influence and inspire each other.
To switch gears within this unit, we discussed the French middle and upper classes and their relationship to one another. These classes were completely different; however, in the 18th-19th century, the middle class began to construct homes that resembled the palaces of the wealthy upper class. They created meager ponds in the centers of their clustered middle-class homes to try to reflect the large fountains and lakes of upper-class royalty.
Crystal Palace, 1851 Cultural mixing
Credits for images: www.crystalpalace51.org/
Did you know READING and WRITING start with IMAGINATION?
It’s watching a child holding a wooden block to their ear as a toddler and pretending to talk on the phone or using a big lego as a camera. It’s taking a box and making it into a house, monster, castle or dragon. It’s making a cake from mud or popcorn from pompoms or mulch. It's building a tower, a car a rocket or a stage out of pieces of wood. It's gathering sticks to create a camp fire and camp out beside it. It's climbing on a tree and pretending it will take flight to another world.
It’s CREATING something NEW from something OLD that builds IMAGINATION!
READING is just that, taking LETTERS to create SOUNDS and then joining them together to form a WORD that represents a REAL object. In order for your child to read and write they need an active imagination and life experiences. The best way to create a great reader and writer is to BUILD their IMAGINATION from the very beginning of your CHILD'S life.
One really fun way to build your child’s imagination is by creating a Natural Playscape Playground in your Back Yard. You can also spend time at local parks and plan to encourage imaginary play while you're there. Here are some fun ideas that you can do outdoors for hours and hours of imaginary play.

Ideas for Creating a Natural Playscape Playground
- Create a stage from an old tree that needed to be cut down.
- Use the rest of the tree to create circles to be used for a variety of purposes and just the right size for little hands to use.
- Add some bamboo sticks to build all sorts of creations with.
- Hang a few bells in the bushes for some magical musical fun
- Add some reused containers and water for an outdoor kitchen
- Visit a Local Park and bring props for play, especially if you're limited on space
Here's part of Amanda's Natural Playscape that she shared on The Educators' Spin On It. Here's a quick video of her Music Bush. We hope to share more about our backyards this summer, so check back for more details. Until then, check out where we get our inspiration from: our Pinterest NATURAL PLAYSCAPES PLAYGROUND BOARD.
As your child progresses through school, the ability to work with larger numbers becomes an important requirement. Luckily, bigger numbers do not necessarily mean the problems themselves are more difficult. These equations simply build upon concepts your child has already learned and challenges them to apply these concepts in different situations. One such example is multiplying using 3-digit numbers. Drawing upon your child’s knowledge of basic multiplication, place value, and addition, multiplying 3-digit numbers can be a fairly simple concept when the fundamentals are understood.
Multiplying by 3-digit numbers can be broken down into a few steps. We will use numbers A and B to refer to our two 3-digit numbers:
- Multiply A by the digit in the ones place of B.
- Multiply A by the digit in the tens place of B, first writing a zero in the ones place of this partial product.
- Multiply A by the digit in the hundreds place of B, first writing zeros in the ones and tens places of this partial product.
- Add the partial products together to get the final answer.
Let’s take a look at an example to better understand this concept:
First, we must use one of the two numbers (either 502 or 336) to multiply using the values in the ones, tens, and hundreds place. For this example, we will use 336. Thus, we multiply 502 by 6, the number in the ones place of 336.
After multiplying 502 by 6, add a zero beneath the two in 3,012. The reason for doing this is because we will now be multiplying 502 by 30 since the 3 is in the tens place of 336.
After completing this step, you may proceed to multiply 502 by 3 on the second line after the zero.
Next, multiply 502 by the 3 in the hundreds place of 336. Similar to the previous step, add two zeros on the next line below 15,060 (in the ones and tens place values). We add two zeros in this step since we are multiplying 502 by 300.
Now, multiply 502 by 3, adding those values after the two zeros we just placed.
Finally, add up the values found from multiplying the different numbers in the ones, tens, and hundreds place values.
After adding up all the values, we reach our final answer: 168,672.
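For readers who like to see the same place-value idea in code, here is a small, illustrative Python sketch of the written method (the function name is made up for the example):

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply a by b one digit of b at a time, mimicking the written method."""
    total = 0
    place = 1                        # 1 for ones, 10 for tens, 100 for hundreds, ...
    while b > 0:
        digit = b % 10               # current digit of b (ones first, then tens, ...)
        partial = a * digit * place  # partial product with its trailing zeros built in
        print(f"{a} x {digit * place} = {partial}")
        total += partial
        b //= 10
        place *= 10
    return total

print(long_multiply(502, 336))       # prints the partials 3012, 15060, 150600, then 168672
```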
As seen in the example, multiplying numbers with at least 3 digits is not a difficult task. While these problems have numerous steps and may take more time, they are simple in nature. With practice on these types of problems, your kids will be able to solve them with ease and gain a better grasp of the concepts used to complete them.
Overview of Amblyopia
Amblyopia, also known as "lazy eye," is a condition characterized by diminished vision in one eye. It is not correctable by eyeglasses or contact lenses and is not usually triggered by an eye disease. Instead, amblyopia can develop when:
- the extraocular muscles fail to align the eyes properly and the part of the brain that controls vision "favors" one eye over the other;
- a significant refractive error in one eye goes uncorrected for a period of time; or
- there is a large difference in refractive power between the two eyes, and one eye is favored.
All babies are born with poor eyesight that normally improves as they grow. In amblyopia, one eye becomes stronger than the other. If the weaker eye is left untreated, its eyesight will progressively worsen.
Incidence & Prevalence of Amblyopia
Amblyopia is the most common cause of visual impairment restricted to one eye in children and young to middle-aged adults. About 5% of children in the United States have amblyopia.
Types of Amblyopia
The two most common types of amblyopia are strabismic and anisometropic. In strabismic amblyopia, strabismus is present and the eyes are not aligned properly resulting in one eye being used less than the other. The nonpreferred eye is not adequately stimulated and the visual brain cells do not develop normally. With anisometropic amblyopia, the eyes have different refractive powers. For example, one eye may be nearsighted and the other farsighted. It may be difficult for the brain to balance the difference and it favors the stronger eye.
Risk Factors for Amblyopia
Anything that interferes with equal development of vision in both eyes between birth and about 6 years can result in amblyopia. Strabismus and anisometropia are the most common causes of amblyopia. Other risk factors include congenital cataracts, anything that blocks light from passing through the cornea or lens, and a droopy eyelid that obstructs the field of vision in one eye.
Signs and Symptoms of Amblyopia
Amblyopia may not produce symptoms that are obvious to a parent or the affected child. Amblyopia caused by an undetected refractive error may go unnoticed for years, due to the fact that one of the eyes is functioning normally. As a result, many children remain unaware of vision problems, especially before they begin school. The condition is often diagnosed during the first eye examination at a later age, when improvement in vision to its fullest potential may no longer be possible.
Sometimes, though, a child may squint or close one eye, which indicates a visual problem. A child old enough to verbalize may complain of headaches or eyestrain. In strabismic amblyopia, the crossed eye is an obvious sign.
Following the discovery of the fundamental laws of chemistry, units called, for example, "gram-atom" and "gram-molecule", were used to specify amounts of chemical elements or compounds. These units had a direct connection with "atomic weights" and "molecular weights", which are in fact relative masses. "Atomic weights" were originally referred to the atomic weight of oxygen, by general agreement taken as 16. But whereas physicists separated the isotopes in a mass spectrometer and attributed the value 16 to one of the isotopes of oxygen, chemists attributed the same value to the (slightly variable) mixture of isotopes 16, 17 and 18, which was for them the naturally occurring element oxygen. Finally an agreement between the International Union of Pure and Applied Physics (IUPAP) and the International Union of Pure and Applied Chemistry (IUPAC) brought this duality to an end in 1959/60. Physicists and chemists have ever since agreed to assign the value 12, exactly, to the so-called atomic weight of the isotope of carbon with mass number 12 (carbon 12, 12C), correctly called the relative atomic mass Ar(12C). The unified scale thus obtained gives the relative atomic and molecular masses, also known as the atomic and molecular weights, respectively.
The quantity used by chemists to specify the amount of chemical elements or compounds is now called "amount of substance". Amount of substance is defined to be proportional to the number of specified elementary entities in a sample, the proportionality constant being a universal constant which is the same for all samples. The unit of amount of substance is called the mole, symbol mol, and the mole is defined by specifying the mass of carbon 12 that constitutes one mole of carbon 12 atoms. By international agreement this was fixed at 0.012 kg, i.e. 12 g.
Following proposals by the IUPAP, the IUPAC, and the ISO, the CIPM gave a definition of the mole in 1967 and confirmed it in 1969. This was adopted by the 14th CGPM (1971, Resolution 3):
- The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12; its symbol is "mol".
- When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.
It follows that the molar mass of carbon 12 is exactly 12 grams per mole, M(12C) = 12 g/mol.
In 1980 the CIPM approved the report of the CCU (1980) which specified that
- In this definition, it is understood that unbound atoms of carbon 12, at rest and in their ground state, are referred to.
The definition of the mole also determines the value of the universal constant that relates the number of entities to amount of substance for any sample. This constant is called the Avogadro constant, symbol NA or L. If N(X) denotes the number of entities X in a specified sample, and if n(X) denotes the amount of substance of entities X in the same sample, the relation is
n(X) = N(X)/NA.
Note that since N(X) is dimensionless, and n(X) has the SI unit mole, the Avogadro constant has the coherent SI unit reciprocal mole.
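As a quick numerical illustration (not part of the original text), the relation n(X) = N(X)/NA, together with a molar mass, lets one convert between the mass of a sample, the amount of substance, and the number of entities; the water values below are rounded, assumed figures chosen for the example.

```python
AVOGADRO = 6.022e23  # entities per mole (approximate value of the Avogadro constant)

def amount_of_substance(n_entities: float) -> float:
    """n(X) = N(X) / N_A, returned in moles."""
    return n_entities / AVOGADRO

# Assumed example: 18 g of water, with molar mass M(H2O) of roughly 18 g/mol,
# is about 1 mol, i.e. roughly 6.022e23 molecules.
mass_g = 18.0
molar_mass_g_per_mol = 18.0
moles = mass_g / molar_mass_g_per_mol
molecules = moles * AVOGADRO
print(moles, molecules)                 # 1.0 mol, ~6.022e23 molecules
print(amount_of_substance(molecules))   # back to ~1.0 mol
```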
In the name "amount of substance", the words "of substance" could for simplicity be replaced by words to specify the substance concerned in any particular application, so that one may, for example, talk of "amount of hydrogen chloride, HCl", or "amount of benzene, C6H6". It is important to always give a precise specification of the entity involved (as emphasized in the second sentence of the definition of the mole); this should preferably be done by giving the empirical chemical formula of the material involved. Although the word "amount" has a more general dictionary definition, this abbreviation of the full name "amount of substance" may be used for brevity. This also applies to derived quantities such as "amount of substance concentration", which may simply be called "amount concentration". However, in the field of clinical chemistry the name "amount of substance concentration" is generally abbreviated to "substance concentration".
The recommended symbol for relative atomic mass (atomic weight) is Ar(X), where the atomic entity X should be specified, and for relative molecular mass of a molecule (molecular weight) it is Mr(X), where the molecular entity X should be specified.
The molar mass of an atom or molecule X is denoted M(X) or MX, and is the mass per mole of X.
When the definition of the mole is quoted, it is conventional also to include this remark.
Each year, the number of children diagnosed with autism, ADHD, and other learning disabilities increases. Public and private school systems must keep up with the influx of students with special needs. However, it is difficult for all educational programs to have the tools necessary to effectively continue teaching students with ADHD in a way that can integrate them into a core curriculum with success.
How to Improve Teaching Students with ADHD
ACTIVATE™, created by neuroscientists at Yale University, is a system that tests cognitive skills in order to increase cognitive capacity for a better educational and overall life experience for children with autism, ADHD, and other learning disabilities. Teaching students with ADHD and other learning disabilities is not a simple process; there is no formula that works for every child. The spectrum of disabilities that children with learning disabilities face varies greatly from child to child. C8 Sciences, with ACTIVATE™ NIH Toolbox tests, pinpoints the specific cognitive disabilities and needs of each child in a way that is powerful, accessible, and feasible for schools and kids alike. C8 Sciences works to improve student attention, enhance academic skills, and put IEP students on a faster track to inclusion.
The National Institute of Health has developed a “Toolbox” to assess cognitive growth. With the use of the NIH Toolbox, children’s cognitive capacity can be tested and improved. At a relatively low cost, students can be tested with the NIH Toolbox and put on a track to academic success. C8 Sciences, with ACTIVATE™ mental cross-training, uses the NIH Toolbox tests for the cognitive capacity of students in repeated case studies. The studies have shown that students make drastic improvements in sustained attention, working memory, and processing speed when using ACTIVATE™. The ACTIVATE™ system is extremely helpful in teaching students with ADHD & autism. Academic success depends on cognition of the students, which is just as important for the school as it is for the student.
C8 Sciences uses video games that increase students’ abilities to sustain their attention. With visual and auditory feedback, these games help the student make progress with sustaining attention. As the child progresses, the elements of the video game fade away until he or she can focus on longer sessions of work. Physical exercise is also a part of ACTIVATE™. Mind and body are interconnected; while physical exercise can be especially important in children with autism and other learning disabilities, it is an aspect of a child’s learning that is often overlooked. The ACTIVATE™ Physical Exercise Program exercises the same cognitive skills as the computer program does. In addition, ACTIVATE™ Education can enhance academic skills to get IEP students on track. Reading level, comprehension, and mathematic skills consistently increase with the number of ACTIVATE™ sessions in repeated studies. The effective, beneficial aspects of C8 Sciences’ ACTIVATE™ system for teaching students with ADHD are too great not to take advantage of.
It is extremely important that all Special Education Directors consider implementing the C8 Sciences system into their programs. With the most recent of neuroscience-based research, C8 Sciences can improve the lives of children that must learn with Individual Education Programs. With early testing and inclusion, these children can strive to reach their full academic potential most effectively. Their unique qualities can enhance their education rather than keep them from the benefits of it. Teachers can better educate and support children through the system, with the increased knowledge they have of their students’ strengths and weaknesses, and strategies to teach them accordingly. It is imperative that teachers sign up for a free webinar to learn more about C8 sciences for the ease and effectiveness that this system offers in the teaching of special education.
Barred Sulphur (Eurema daira)
The barred sulphur is a common butterfly found in weedy areas throughout much of the Deep South. It is particularly abundant in late summer and early fall. The barred sulphur comes in distinctly different looking seasonal forms that are determined by the environmental conditions under which the larvae develop. Long days and warm temperatures result in summer form adults that have almost pure white wings beneath. Males have light yellow colored wings with black borders above and a black bar along the lower edge of the forewing (its namesake). Females are pale yellow to white above with muted black markings. Winter form adults have brown to brick red colored wings with numerous markings beneath, and are darker yellow above. Winter forms can be found from November to March; they survive through the coldest weather as adults in a state of reproductive dormancy.
The small, white, spindle-shaped eggs are deposited singly on the leaves or flowers of the host plant. The plain green caterpillars feed exposed on the leaves and develop rapidly. The pupae may be green, green with black markings, or black. Numerous generations are produced each year.
Directional transmitting antenna arrays consist of two antennas broadcasting the same signal in phase. Suppose two identical radio antennas (S1 and S2) are placed 10.0 m apart, as shown in the figure. The frequency of the radiated waves is 60 MHz. The intensity at a distance of 700 m in the +x-direction is I = 0.020 W/m2.
A. What is the intensity in the direction θ = 4.0 degrees?
B. In what direction near θ = 0 degrees is the intensity I/2?
C. In what direction is the intensity zero?
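One way to work the problem, assuming the standard far-field result for two in-phase sources, I(θ) = I0 cos²(π d sin θ / λ), with θ measured from the +x direction and I0 the given on-axis intensity, is sketched in the Python snippet below; this is an illustrative calculation, not an official solution.

```python
import math

# Assumed model: two in-phase antennas separated by d, theta measured from the
# broadside (+x) direction, far-field intensity I(theta) = I0 * cos^2(pi*d*sin(theta)/lambda).
c = 3.0e8                  # speed of light, m/s
f = 60e6                   # 60 MHz
wavelength = c / f         # 5.0 m
d = 10.0                   # antenna separation, m
I0 = 0.020                 # intensity at theta = 0 (W/m^2), given at 700 m

def intensity(theta_deg: float) -> float:
    half_phase = math.pi * d * math.sin(math.radians(theta_deg)) / wavelength
    return I0 * math.cos(half_phase) ** 2

# A. Intensity at theta = 4.0 degrees
print(intensity(4.0))                                   # ~0.016 W/m^2

# B. Direction where I = I0/2: cos^2 term = 1/2  ->  sin(theta) = lambda / (4 d)
print(math.degrees(math.asin(wavelength / (4 * d))))    # ~7.2 degrees

# C. First direction of zero intensity: sin(theta) = lambda / (2 d)
print(math.degrees(math.asin(wavelength / (2 * d))))    # ~14.5 degrees
```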
The Solar System is a really big place, and it takes forever to travel from world to world with traditional chemical rockets. But one technique, developed back in the 1960s might provide a way to dramatically shorten our travel times: nuclear rockets.
Of course, launching a rocket powered by radioactive material has its own risks as well. Should we attempt it?
Let’s say that you wanted to visit Mars using a chemical rocket. You would blast off from Earth and go into low Earth orbit. Then, at the right moment, you’d fire your rocket, raising your orbit around the Sun. The new elliptical trajectory you’re following intersects with Mars after eight months of flight.
This is known as a Hohmann transfer, and it’s the most efficient way we know how to travel in space, using the least amount of propellant and the largest amount of payload. The problem, of course, is the time it takes. Throughout the journey, astronauts will be consuming food, water, and air, and be exposed to the long-term radiation of deep space. A return mission then doubles the need for resources and doubles the radiation load.
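For a rough sense of where that eight-month figure comes from: the transfer time of a Hohmann ellipse is half its orbital period, which Kepler's third law gives from the semi-major axis of the ellipse. The Python sketch below is a back-of-the-envelope illustration with assumed circular orbits, not a figure taken from the article.

```python
import math

MU_SUN = 1.327e20          # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11              # astronomical unit, m

r_earth = 1.0 * AU         # Earth's orbit, approximated as circular
r_mars = 1.524 * AU        # Mars' orbit, approximated as circular

# The transfer ellipse touches both orbits, so its semi-major axis is their average.
a_transfer = (r_earth + r_mars) / 2

# Transfer time is half the period of that ellipse (Kepler's third law).
t_transfer = math.pi * math.sqrt(a_transfer ** 3 / MU_SUN)
print(t_transfer / 86400)  # ~259 days, roughly eight and a half months
```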
We need to go faster.
It turns out NASA has been thinking about what comes next after chemical rockets for almost 50 years.
Nuclear thermal rockets. They definitely speed up the journey, but they’re not without their own risks, which is why you haven’t seen them. But maybe their time is here.
In 1961, NASA and the Atomic Energy Commission worked together on the idea of nuclear thermal propulsion, or NTP. This was pioneered by Wernher von Braun, who hoped that human missions would be flying to Mars in the 1980s, on the wings of nuclear rockets.
Well that didn’t happen. But they did perform some successful tests of nuclear thermal propulsion and demonstrated that it does work.
A chemical rocket works by igniting some kind of flammable chemical and then forcing the exhaust gases out of a nozzle. Thanks to good old Newton’s third law (you know, for every action there’s an equal and opposite reaction), the rocket receives a thrust in the opposite direction from the expelled gases.
A nuclear rocket works in a similar way. A marble-sized ball of uranium fuel undergoes fission, releasing a tremendous amount of heat. This heats hydrogen propellant to almost 2,500 °C, which is then expelled out the back of the rocket at very high velocity, giving the rocket two to three times the propulsion efficiency of a chemical rocket.
Remember the 8 months I mentioned for a chemical rocket? A nuclear thermal rocket could cut the transit time in half, maybe even 100 day trips to Mars. Which means less resources consumed by the astronauts, and a lower radiation load.
And there’s another big benefit. The thrust of a nuclear rocket could allow missions to go when Earth and Mars aren’t perfectly aligned. Right now if you miss your window, you have to wait another 2 years, but a nuclear rocket could give you the thrust to deal with flight delays.
The first tests of nuclear rockets started in 1955 with Project Rover at the Los Alamos Scientific Laboratory. The key development was miniaturizing the reactors enough to be able to put them on a rocket. Over the next few years, engineers built and tested more than a dozen reactors of different sizes and power outputs.
With the success of Project Rover, NASA set its sights on the human missions to Mars that would follow the Apollo landers on the Moon. Because of the distance and flight time, they decided nuclear rockets would be the key to making the missions more capable.
Nuclear rockets aren’t without their risks, of course. A reactor on board would be a small source of radiation for the astronaut crew, but this would be outweighed by the decreased flight time. Deep space itself is an enormous radiation hazard, with the constant galactic cosmic radiation damaging astronaut DNA.
In the late 1960s, NASA set up the Nuclear Engine for Rocket Vehicle Application program, or NERVA, developing the technologies that would become the nuclear rockets that would take humans to Mars.
They tested larger, more powerful nuclear rockets, in the Nevada desert, venting the high velocity hydrogen gas right into the atmosphere. Environmental laws were much less strict back then.
The first NERVA NRX was eventually tested for nearly two hours, with 28 minutes at full power. And a second engine was started up 28 times and ran for 115 minutes.
By the end, they had tested the most powerful nuclear reactor ever built, the Phoebus-2A reactor, capable of generating 4,000 megawatts of power and thrusting for 12 minutes.
Although the various components were never actually assembled into a flight-ready rocket, engineers were satisfied that a nuclear rocket would meet the needs of a flight to Mars.
But then, the US decided it didn’t want to go to Mars any more. They wanted the space shuttle instead.
The program was shut down in 1973, and nobody has tested nuclear rockets since then.
But recent advances in technology have made nuclear thermal propulsion more appealing. Back in the 1960s, the only fuel source they could use was highly enriched uranium. But now engineers think they can get by with low-enriched uranium.
This would be safer to work with, and would allow more rocket facilities to run tests. It would also be easier to capture the radioactive particles in the exhaust and properly dispose of them. That would bring the overall costs of working with the technology down.
On May 22, 2019, US Congress approved $125 million dollars in funding for the development of nuclear thermal propulsion rockets. Although this program doesn’t have any role to play in NASA’s Artemis 2024 return to the Moon, it – quote – “calls upon NASA to develop a multi-year plan that enables a nuclear thermal propulsion demonstration including the timeline associated with the space demonstration and a description of future missions and propulsion and power systems enabled by this capability.”
Nuclear fission is one way to harness the power of the atom. Of course, it requires enriched uranium and generates toxic radioactive waste. What about fusion? Where atoms of hydrogen are squeezed into helium, releasing energy?
The Sun has fusion worked out, thanks to its enormous mass and core temperature, but sustainable, energy-positive fusion has so far eluded us puny humans.
Huge experiments like ITER in Europe are hoping to sustain fusion energy within the next decade or so. After that, you can imagine fusion reactors getting miniaturized to the point that they can serve the same role as a fission reactor in a nuclear rocket. But even if you can’t get fusion reactors to the point that they’re net energy positive, they can still provide tremendous acceleration for the amount of mass.
And maybe we don’t need to wait decades. A research group at the Princeton Plasma Physics Laboratory is working on a concept called the Direct Fusion Drive, which they think could be ready much sooner.
It’s based on the Princeton Field-Reversed Configuration fusion reactor developed in 2002 by Samuel Cohen. A hot plasma of helium-3 and deuterium is contained in a magnetic container. Helium-3 is rare on Earth, and valuable because fusion reactions with it won’t generate the same amount of dangerous radiation or nuclear waste as other fusion or fission reactors.
As with the fission rocket, a fusion rocket heats up a propellant to high temperatures and then blasts it out the back, producing thrust.
It works by lining up a bunch of linear magnets that contain and spin very hot plasma. Antennae around the plasma are tuned to the specific frequency of the ions, and create a current in the plasma. Their energy gets pumped up to the point that the atoms fuse, releasing new particles. These particles wander through the containment field until they’re captured by the magnetic field lines and they get accelerated out the back of the rocket.
In theory, a fusion rocket would be capable of providing 2.5 to 5 Newtons of thrust per megawatt, with a specific impulse of 10,000 seconds (compare roughly 850 seconds for nuclear thermal fission rockets and 450 seconds for the best chemical rockets). It would also generate the electricity needed by the spacecraft far from the Sun, where solar panels aren’t very efficient.
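To get a feel for what those specific impulse numbers mean, the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0/mf), relates specific impulse to how much of a ship's mass must be propellant for a given maneuver. The sketch below is illustrative only; the 5 km/s delta-v is an assumed round number for comparison, not a figure from the article.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_fraction(isp_seconds: float, delta_v: float) -> float:
    """Fraction of initial mass that must be propellant for a given delta-v."""
    mass_ratio = math.exp(delta_v / (isp_seconds * G0))  # m0 / mf from the rocket equation
    return 1.0 - 1.0 / mass_ratio

delta_v = 5000.0  # assumed 5 km/s maneuver, purely for comparison
for label, isp in [("chemical ~450 s", 450), ("fission thermal ~850 s", 850), ("fusion ~10,000 s", 10000)]:
    print(label, round(propellant_fraction(isp, delta_v), 3))
# Roughly 0.68 of the ship must be propellant with a chemical rocket,
# ~0.45 with a nuclear thermal rocket, and only ~0.05 with a fusion drive.
```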
A Direct Fusion Drive would be capable of carrying a 10 tonne mission to Saturn in just 2 years, or a 1-tonne spacecraft from Earth to Pluto in about 4 years. New Horizons needed almost 10.
Since it’s also a 1 megawatt fusion reactor, it would also provide power for all the spacecraft’s instruments when it arrives. Much much more than the nuclear batteries currently carried by deep space missions like Voyager and New Horizons.
Imagine what kinds of interstellar missions might be on the table with this technology too.
And Princeton Satellite Systems isn’t the only group working on systems like this. Applied Fusion Systems have applied for a patent for a nuclear fusion engine that could provide thrust to spacecraft.
I know it’s been decades since NASA seriously tested nuclear rockets as a way to shorten flight times, but it looks like the technology is back. Over the next few years I expect to see new hardware, and new tests of nuclear thermal propulsion systems. And I am incredibly excited at the possibility of actual fusion drives taking us to other worlds. As always, stay tuned, I’ll let you know when one actually flies.