By Tim Radford
Trees have become a source of continuous surprise. Only weeks after researchers demonstrated that old forest giants actually accumulate more carbon than younger, fast-growing trees, British scientists have discovered that the great arbiters of long-term global temperatures may not be the leaves of an oak, pine or eucalyptus, but the roots.
The argument, put forward by a team from Oxford and Sheffield Universities in the journal Geophysical Research Letters, begins with temperature. Warmer climates mean more vigorous tree growth, more leaf litter and more organic content in the soil. So the tree’s roots grow more vigorously, said Dr. Christopher Doughty of Oxford and colleagues.
They get into the bedrock, and break up the rock into its constituent minerals. Once that happens, the rock starts to weather, combining with carbon dioxide. This weathering draws carbon dioxide out of the atmosphere, and in the process cools the planet down a little. So mountain ecosystems—mountain forests are usually wet and on conspicuous layers of rock—are in effect part of the global thermostat, preventing catastrophic overheating.
The tree is more than just a sink for carbon: it is an agent of chemical weathering that removes carbon from the air and locks it up in carbonate rock.
That mountain weathering and forest growth are part of the climate system has never been in much doubt: the questions have always been about how big a forest’s role might be, and how to calculate its contribution.
Keeping climate stable
U.S. scientists recently studied the rainy slopes of New Zealand’s Southern Alps to begin to put a value on mountain ecosystem processes. Dr. Doughty and his colleagues measured tree roots at varying altitudes in the tropical rain forests of Peru, from the Amazon lowlands to 3,000 meters of altitude in the higher Andes.
They measured root growth to 30 cm below the surface every three months over a period of years. They recorded the thickness of the soil’s organic layer, matched their observations with local temperatures, and began to calculate the rate at which tree roots might turn Andean granite into soil.
Then they scaled up the process, and extended it through long periods of time. Their conclusion: that forests served to moderate temperatures in a much hotter world 65 million years ago.
“This is a simple process driven by tree root growth and the decomposition of organic material. Yet it may contribute to the Earth’s long-term climate stability,” Dr. Doughty said. “It seems to act like a thermostat, drawing more carbon dioxide out of the atmosphere when it is warm and less when it is cooler.”
If forests cool the Earth, however, they might also warm it up. A team from Yale University in the U.S. has reported in Geophysical Research Letters that forest fires might have had an even greater impact on global warming during the Pliocene epoch about three million years ago than carbon dioxide.
Rapid rise expected
Dr. Nadine Unger, an atmospheric chemist, and a colleague have calculated that the release of volatile organic compounds, ozone and other products from blazing trees could have altered the planet’s radiation balance, by dumping enough aerosols into the atmosphere to outperform carbon dioxide as a planet-warmer.
In fact, the Pliocene was at least two or three degrees Celsius warmer than the pre-industrial world. The Pliocene is of intense interest to climate scientists: they expect planetary temperatures to return to Pliocene levels before the end of the century, precisely because humans have cleared and burned the forests, and pumped colossal quantities of carbon dioxide into the atmosphere. The greater puzzle is why a rainy, forested and conspicuously human-free world should have been so much warmer.
“This discovery is important for better understanding climate change through Earth’s history, and has enormous implications for the impacts of deforestation and the role of forests in climate protection strategies,” Dr. Unger said.
All this scholarship is concerned with the natural machinery of ancient climate change, and the Yale research was based on powerful computer simulations of long-vanished conditions that could not be replicated in a laboratory.
Meanwhile, ironically, forest scientists have had a chance to test the levels of volatile organic discharges from blazing forests, because freakish weather conditions in Norway have produced unexpected wildfires in tracts of mountain forest. December was one of Norway’s warmest winter months ever. In one blaze, 430 residents were forced to evacuate.
This is a diagram of the antiviral response to the flu. Courtesy of Vasilijevic J, et al (2017)
Flu viruses contain defective genetic material that may activate the immune system in infected patients, and new research published in PLOS Pathogens suggests that lower levels of these molecules could increase flu severity.
Influenza is particularly dangerous for infants, the elderly, and people with underlying medical issues, but otherwise-healthy people sometimes experience severe infection, too. This suggests that, among the multiple strains that circulate yearly, some are more virulent than others. Markers of severity have been found for specific strains, but a general marker that applies to multiple strains would be more useful to inform treatment and policy.
To identify such a marker, Ana Falcón of the Spanish National Center for Biotechnology, Madrid, and colleagues focused on defective viral genomes (DVGs). These molecules, which consist of pieces of viral RNA with missing genetic information, are found in multiple strains of flu virus. Previous research suggests that DVGs activate the immune system in infected animals, and thus might restrict severity.
To test whether DVGs could serve as a general marker of flu severity, the researchers infected both mice and human tissue cell cultures with different strains of influenza A H1N1 virus, the subtype responsible for flu pandemics. They found that strains resulting in lower levels of DVG accumulation in the cell cultures also produced more severe infection in the mice.
The research team also analyzed the genomes of viruses isolated from respiratory samples taken from people who experienced severe infection or death during the 2009 "swine flu" pandemic or later "swine flu-like" seasons. They found that the H1N1 strain that caused severe symptoms had significantly less DVG accumulation than influenza A strains sampled from people who experienced only mild symptoms.
Together, these results suggest that low levels of DVGs may indicate greater risk of severe disease in patients infected with influenza A virus. With further research, these findings could help predict flu severity, guide patient treatment, and inform flu prevention strategies.
Reference: Vasilijevic J, Zamarreño N, Oliveros JC, Rodriguez-Frandsen A, Gómez G, Rodriguez G, et al. (2017) Reduced accumulation of defective viral genomes contributes to severe outcome in influenza virus infected patients. PLoS Pathog 13(10): e1006650.
Bubble helps spider live a life aquatic
An enduring bubble of air inside an underwater silk sack allows one species of spider to remain underwater for hours at a time, according to a new study.
In fact, the bubble is so efficient it allows the spider to live virtually its whole life under water.
Professor Roger Seymour of the University of Adelaide says that the diving bell spider (Argyroneta aquatica) creates a submerged oxygen store, five to ten centimetres below the surface and can "stay down for more than a day while resting."
The study, which is published in The Journal of Experimental Biology, overturns previous research suggesting that the spiders need to return to the surface every 20 to 40 minutes.
Air-breathing aquatic insects, as well as some species of spider, carry a bubble of air from the surface down with them, but this usually contains only enough oxygen to last several minutes. By virtue of its 'diving bell', however, the diving bell spider is capable of remaining under water for most of the winter.
A. aquatica makes its diving bell by spinning an open-bottomed dome-shaped silk cocoon between the fronds of pond plants. It then fills it with a single air bubble that, according to Seymour, can be "as big as your ring fingernail."
The size of the bell is highly variable, sometimes admitting only the spider's abdomen. Female diving bell spiders tend to make larger diving bells that can be further enlarged according to need, such as to accommodate eggs or prey. They also enlarge the bell when oxygen levels in the water drop.
The never-bursting bubble
Seymour, along with German colleague Dr Stefan Hetz of Humboldt University, set out to determine the effectiveness of the spider's diving bell by measuring oxygen levels within the bell and surrounding water.
"The bubble inside actually protrudes out between the fibres of the web, so it's a naked air-water interface," says Seymour. "This also gives the spider its name (Argyroneta), which means silvery net. The bubbles look like silver balls that are held under the water."
The scientists used oxygen-sensitive fibre optic probes to calculate the gas volume of the diving bell and the level of gas exchange occurring between the bell and the surrounding water. They also measured the spider's oxygen consumption.
"[We found] up to eight times the amount of oxygen can go from the water into the bubble from what was initially present," says Seymour.
Indeed, the diving bell functions as a very effective physical gill, as opposed to an anatomical gill. And because the diving bell spider lives a quiet, sedentary life, its oxygen requirements are easily met, even in extreme conditions of warm, stagnant water.
But, without supplements of fresh air, the bubble eventually shrinks as the nitrogen gas diffuses back into the water. Thus, the spiders need to make a daily dash to the surface to collect a replenishing bubble.
"They need to come up to the surface to get this little bubble to reintroduce into the bell," says Seymour.
It's carried down on the spider's abdomen and rear legs. Once inside the bell, the new bubble fuses with the underwater air supply.
Staying still and safe
"Being able to stay still for so long, without having to go to the surface to renew the air bubble, protects the spiders from predators and also keeps them hidden from potential prey that comes near," says Seymour.
He speculates the spiders may build their diving bells at night for this reason.
But Seymour adds that the Eurasian-dwelling spiders are becoming increasingly scarce.
"In Germany, they are popular with aquaticists because they are so interesting in their behaviour. This may be one of the reasons they are becoming harder to find." |
Environmental Justice is the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income with respect to the development, implementation, and enforcement of environmental laws, regulations, and policies.
What is meant by fair treatment and meaningful involvement?
- Fair treatment means that no group of people should bear a disproportionate share of the negative environmental consequences resulting from industrial, governmental and commercial operations or policies
- Meaningful Involvement means that:
- people have an opportunity to participate in decisions about activities that may affect their environment and/or health;
- the public’s contribution can influence the regulatory agency’s decision;
- their concerns will be considered in the decision making process; and
- the decision makers seek out and facilitate the involvement of those potentially affected.
EPA and Environmental Justice
EPA's goal is to provide an environment where all people enjoy the same degree of protection from environmental and health hazards and equal access to the decision-making process to maintain a healthy environment in which to live, learn, and work.
EPA's environmental justice mandate extends to all of the Agency's work, including setting standards, permitting facilities, awarding grants, issuing licenses and regulations, and reviewing proposed actions by other federal agencies. EPA works with all stakeholders to constructively and collaboratively address environmental and public health issues and concerns. The Office of Environmental Justice (OEJ) coordinates the Agency's efforts to integrate environmental justice into all policies, programs, and activities. OEJ's mission is to facilitate Agency efforts to protect the environment and public health in minority, low-income, tribal and other vulnerable communities by integrating environmental justice into all programs, policies, and activities.
You cannot see or smell carbon monoxide (CO), but at high levels it can kill a person in minutes. It is the leading cause of poisoning death, with over 500 victims in the United States each year.
Carbon monoxide is produced whenever a fuel such as gas, oil, kerosene, wood or charcoal is burned. The amount of CO produced depends mainly on the quality or efficiency of combustion. A properly functioning burner, whether natural gas or liquefied petroleum gas (LPG), has efficient combustion and produces little CO. An out-of-adjustment burner can produce life-threatening amounts of CO without any visible warning signs.
When appliances that burn fuel are maintained and used properly, the amount of CO produced usually is not hazardous. But if appliances are not working properly or are used incorrectly, dangerous levels of CO can collect in an enclosed space. Hundreds of Americans die accidentally every year from CO poisoning caused by malfunctioning or improperly used fuel-burning appliances.
Common sources of CO
Accumulation of combustion gases can occur when a blocked chimney, rusted heat exchanger or broken chimney connector pipe (flue) prevents combustion gases from being exhausted from the home. CO also can enter the home from an idling car or from a lawnmower or generator engine operating in the garage.
Another source for CO is backdrafting. When ventilation equipment, such as a range-top vent fan, is used in a tightly sealed home, reverse air flow can occur in chimneys and flues. An operating fireplace also can interact with the flue dynamics of other heating appliances. Again, backdrafting may result.
Other common sources of CO include unvented, fuel-burning space heaters (especially if malfunctioning) and indoor use of a charcoal barbeque grill. CO is produced by gas stoves and ranges and can become a problem with prolonged, improper operation — for example, if these appliances are used to heat the home. Flame color does not necessarily indicate CO production. However, a change in the gas flame’s color can indicate a CO problem. If a blue flame becomes yellow, CO often is increased.
While larger combustion appliances are designed to be connected to a flue or chimney to exhaust combustion byproducts, some smaller appliances are designed to be operated indoors without a flue. Appliances designed as supplemental or decorative heaters (including most unvented gas fireplaces) are not designed for continuous use. To avoid excessive exposure to pollutants, never use these appliances for more than four hours at a time.
When operating unvented combustion appliances, such as portable space heaters and stoves, follow safe practices. Besides observing fire safety rules, make sure the burner is properly adjusted and there is good ventilation. Never use these items in a closed room. Keep doors open throughout the house, and open a window for fresh air. Never use outdoor appliances such as barbeque grills or construction heaters indoors. Do not use appliances such as ovens and clothes dryers to heat the house.
Inspect heating equipment. To reduce the chances of backdrafting in furnaces, fireplaces and similar equipment, make sure flues and chimneys are not blocked. Inspect metal flues for rust. In furnaces, check the heat exchanger for rust and cracks. Soot also is a sign of combustion leakage. When using exhaust fans, open a nearby window or door to provide replacement air.
CO poisoning symptoms
The initial symptoms of CO poisoning are similar to the flu but without the fever. They include headache, fatigue, shortness of breath, nausea, dizziness, vomiting, disorientation, and loss of consciousness.
In more technical terms, CO bonds tightly to the hemoglobin in red blood cells, preventing them from carrying oxygen throughout the body. If you have any of these symptoms and if you feel better when you go outside your home and the symptoms reappear when you go back inside, you may have CO poisoning.
If you experience symptoms that you think could be from CO poisoning, get fresh air immediately. Open doors and windows, turn off combustion appliances, and leave the house. Go to an emergency room and tell the physician you suspect CO poisoning.
If CO poisoning has occurred, it often can be diagnosed by a blood test done soon after exposure. Be prepared to answer the following questions for the doctor:
• Do your symptoms occur only in the house?
• Is anyone else in your household complaining of similar symptoms?
• Did everyone’s symptoms appear about the same time?
• Are you using any fuel-burning appliances in the home?
• Has anyone inspected your appliances lately?
• Are you certain these appliances are properly working?
Because CO is a colorless, tasteless, and odorless gas that is quickly absorbed by the body and the symptoms often resemble other illnesses, it is often known as the “silent killer.”
Prevention is the key
At the beginning of every heating season, have a trained professional check all your fuel-burning appliances: oil and gas furnaces, gas water heaters, gas ranges and ovens, gas dryers, gas or kerosene space heaters, fireplaces and wood stoves. Make certain that the flues and chimneys are connected, in good condition and not blocked.
Whenever possible, choose appliances that vent fumes to the outside. Have them properly installed, and maintain them according to manufacturers’ instructions. Read and follow all instructions that accompany any fuel-burning device. If you cannot avoid using an unvented gas or kerosene space heater, carefully follow the cautions that come with the device. Use the proper fuel and keep doors to the rest of the house open. Crack a window to ensure enough air for ventilation and proper fuel burning.
Proper installation, operation and maintenance of combustion appliances in the home are most important in reducing the risk of CO poisoning.
In recent years, CO alarms have become widely available. When selecting a CO alarm, make sure it meets the stringent requirements of Underwriters Laboratories (UL) or International Approval Service (IAS). Modern CO alarms can provide warnings for even nonlethal levels of this dangerous pollutant. However, do not think of the alarm as the “be all, end all” to alert you to dangerous CO levels. The U.S. Consumer Product Safety Commission recommends having at least one CO alarm in every home, placed outside of the sleeping area. Homes with several sleeping areas require multiple alarms.
Look for an alarm with a long-term warranty and one that easily can be self-tested and reset to ensure proper functioning.
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects).
Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test:
- meets the requirements that guided its design and development,
- responds correctly to all kinds of inputs,
- performs its functions within an acceptable time,
- is sufficiently usable,
- can be installed and run in its intended environments, and
- achieves the general result its stakeholders desire.
As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources. As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). Testing is an iterative process: when one bug is fixed, the fix can illuminate other, deeper bugs or even create new ones.
Software testing can provide objective, independent information about the quality of software and risk of its failure to users and/or sponsors.
Software testing can be conducted as soon as executable software (even if partially complete) exists. The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an Agile approach, requirements, programming, and testing are often done concurrently.
- 1 Overview
- 2 History
- 3 Testing methods
- 4 Testing levels
- 5 Testing Types
- 5.1 Installation testing
- 5.2 Compatibility testing
- 5.3 Smoke and sanity testing
- 5.4 Regression testing
- 5.5 Acceptance testing
- 5.6 Alpha testing
- 5.7 Beta testing
- 5.8 Functional vs non-functional testing
- 5.9 Continuous testing
- 5.10 Destructive testing
- 5.11 Software performance testing
- 5.12 Usability testing
- 5.13 Accessibility testing
- 5.14 Security testing
- 5.15 Internationalization and localization
- 5.16 Development testing
- 5.17 A/B testing
- 5.18 Concurrent testing
- 5.19 Conformance testing or type testing
- 6 Testing process
- 7 Automated testing
- 8 Testing artifacts
- 9 Certifications
- 10 Controversy
- 11 Related processes
- 12 See also
- 13 References
- 14 Further reading
- 15 External links
Although testing can determine the correctness of software under the assumption of some specific hypotheses (see hierarchy of testing difficulty below), testing cannot identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions. The scope of software testing often includes both examination of the code and execution of that code in various environments and conditions, asking whether the code does what it is supposed to do and what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing is the process of attempting to make this assessment.
Defects and failures
Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements which result in errors of omission by the program designer. Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.
Software faults occur through the following processes. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software. A single defect may result in a wide range of failure symptoms.
Input combinations and preconditions
A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do)—usability, scalability, performance, compatibility, reliability—can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases. Note that "coverage", as used here, is referring to combinatorial coverage, not requirements coverage.
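As a rough illustration of the idea, the sketch below (assumed parameters and a simple greedy heuristic, not any specific tool) builds a pairwise, or all-pairs, test set: every pair of parameter values appears in at least one test, with far fewer tests than the exhaustive cross-product.

```python
from itertools import combinations, product

def all_pairs(params):
    # every (parameter-index, value) pair that must be covered at least once
    keys = list(params)
    pairs = set()
    for (i, a), (j, b) in combinations(enumerate(keys), 2):
        for va, vb in product(params[a], params[b]):
            pairs.add((i, va, j, vb))
    return pairs

def greedy_pairwise(params):
    keys = list(params)
    remaining = all_pairs(params)
    candidates = list(product(*params.values()))
    tests = []
    while remaining:
        # pick the candidate test covering the most still-uncovered pairs
        best = max(candidates, key=lambda t: sum(
            (i, t[i], j, t[j]) in remaining
            for i, j in combinations(range(len(keys)), 2)))
        remaining -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(len(keys)), 2)}
        tests.append(dict(zip(keys, best)))
    return tests

params = {"os": ["linux", "windows", "macos"],
          "browser": ["firefox", "chrome"],
          "locale": ["en", "de", "ja"]}
print(len(list(product(*params.values()))), "exhaustive combinations")
suite = greedy_pairwise(params)
print(len(suite), "pairwise tests cover every value pair")
```

Here "coverage" is combinatorial coverage of value pairs, which is exactly the trade-off the paragraph above describes: structured variation with far fewer executions than the full cross-product.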
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
[Table: cost to fix a defect, by the time at which the defect is detected]
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims which seem to contradict Boehm's graph, and no numerical results which clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.
Software testing can be done by software testers. Until the 1980s, the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing, different roles have been established: manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("a successful test is one that finds a bug") it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:
- Until 1956 – Debugging oriented
- 1957–1978 – Demonstration oriented
- 1979–1982 – Destruction oriented
- 1983–1987 – Evaluation oriented
- 1988–2000 – Prevention oriented
Static vs. dynamic testing
There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, like proofreading; it also takes place when programming tools or text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.
Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality. Among the techniques for static analysis, mutation testing can be used to ensure the test-cases will detect errors which are introduced by mutating the source code.
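A minimal sketch of the mutation idea mentioned above, using only the standard library (the function, the tiny "test suite", and the single mutation operator are assumptions for illustration): the source is mutated, the tests are re-run, and a good suite "kills" the mutant.

```python
import ast, types

SOURCE = """
def is_adult(age):
    return age >= 18
"""

def run_tests(module):
    # the "test suite": a boundary-value check at 17/18
    return module.is_adult(18) is True and module.is_adult(17) is False

class SwapGtE(ast.NodeTransformer):
    # mutation operator: turn >= into > (a typical relational-operator mutant)
    def visit_Compare(self, node):
        node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op for op in node.ops]
        return node

def load(tree):
    mod = types.ModuleType("target")
    exec(compile(tree, "<target>", "exec"), mod.__dict__)
    return mod

original = ast.parse(SOURCE)
mutant = SwapGtE().visit(ast.parse(SOURCE))
ast.fix_missing_locations(mutant)

print("original passes:", run_tests(load(original)))    # True
print("mutant killed:  ", not run_tests(load(mutant)))  # True: the suite detects the mutation
```

If the mutant had survived (the tests still passed), that would signal a gap in the test cases rather than in the code.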
The box approach
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.
White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing, by seeing the source code) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
- API testing – testing of the application using public and private APIs (application programming interfaces)
- Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
- Mutation testing methods
- Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test
- Decision coverage, which reports on whether both the True and the False branch of a given test have been executed
100% statement coverage ensures that every statement in the program is executed at least once, but it does not guarantee that every branch or path through the control flow is taken. Even full coverage is not sufficient to ensure correct functionality, since the same code may process different inputs correctly or incorrectly.
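The following toy example (the function and values are assumed) shows why full statement coverage can still miss a branch: one test executes every statement, yet the no-discount path is never exercised until a second, decision-oriented case is added.

```python
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9
    return round(price, 2)

# One test executes every statement (the if-body included) ...
assert apply_discount(100, True) == 90.0
# ... yet the False branch (no discount applied) was never taken.
# Decision coverage requires a second case:
assert apply_discount(100, False) == 100
print("both branches exercised")
```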
Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
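A small pytest-style sketch of specification-based, black-box test design (the grade() specification is assumed for illustration): equivalence partitions and their boundary values are checked purely against the stated input/output behaviour, with no reference to the implementation.

```python
import pytest

def grade(score):
    """Spec: 0-49 -> 'fail', 50-100 -> 'pass', anything else -> ValueError."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

@pytest.mark.parametrize("score,expected", [
    (0, "fail"), (49, "fail"),    # boundaries of the 'fail' partition
    (50, "pass"), (100, "pass"),  # boundaries of the 'pass' partition
    (25, "fail"), (75, "pass"),   # representative interior values
])
def test_partitions_and_boundaries(score, expected):
    assert grade(score) == expected

@pytest.mark.parametrize("score", [-1, 101])
def test_invalid_partition(score):
    with pytest.raises(ValueError):
        grade(score)
```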
One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement and important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important in order to document the steps taken to uncover the bug.
Visual testing is gathering recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developers.
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.
However, tests that require modifying a back-end data repository, such as a database or a log file, do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.
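A minimal grey-box sketch along these lines (the schema and feature are assumed): the tester seeds a database to a known state, drives the feature through its public function, and then queries the repository directly to confirm the expected change.

```python
import sqlite3

def deactivate_user(conn, user_id):
    # the feature under test (assumed application code)
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))
    conn.commit()

def test_deactivate_user():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, active INTEGER)")
    conn.execute("INSERT INTO users VALUES (1, 1)")       # seed a known state
    deactivate_user(conn, 1)                              # exercise the feature
    active, = conn.execute("SELECT active FROM users WHERE id = 1").fetchone()
    assert active == 0                                    # inspect internal state directly

test_deactivate_user()
print("grey-box check passed")
```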
There are generally four recognized levels of tests: unit testing, integration testing, component interface testing, and system testing. Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit-, integration-, and system testing that are distinguished by the test target without implying a specific process model. Other test levels are classified by the testing objective.
Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other.
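A short unit-test sketch in this spirit (the function under test is an assumed example), using the standard library's unittest module to pin down normal behaviour and a corner case.

```python
import unittest

def normalize_whitespace(text):
    """Collapse runs of whitespace to single spaces and trim the ends."""
    return " ".join(text.split())

class NormalizeWhitespaceTest(unittest.TestCase):
    def test_collapses_internal_runs(self):
        self.assertEqual(normalize_whitespace("a   b\t c"), "a b c")

    def test_trims_ends(self):
        self.assertEqual(normalize_whitespace("  hello  "), "hello")

    def test_empty_string_corner_case(self):
        self.assertEqual(normalize_whitespace(""), "")

if __name__ == "__main__":
    unittest.main()
```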
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
Component interface testing
The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component.
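The sketch below illustrates the idea with assumed producer/consumer units and packet fields: each "message packet" is range- and type-checked, and logged with a timestamp, before it crosses the interface.

```python
import datetime, json

def producer():
    # assumed upstream unit: emits a reading packet
    return {"sensor_id": 7, "reading": 42.5, "unit": "C"}

def validate_packet(packet, log_path="interface_log.jsonl"):
    # interface checks on the data passed between units
    assert isinstance(packet["sensor_id"], int) and packet["sensor_id"] > 0
    assert isinstance(packet["reading"], float)
    assert -80.0 <= packet["reading"] <= 120.0            # extreme-value bounds
    assert packet["unit"] in {"C", "F"}
    with open(log_path, "a") as log:                      # timestamped interface log
        log.write(json.dumps({"t": datetime.datetime.now().isoformat(), **packet}) + "\n")
    return packet

def consumer(packet):
    # assumed downstream unit
    return f"sensor {packet['sensor_id']}: {packet['reading']}{packet['unit']}"

print(consumer(validate_packet(producer())))
```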
System testing, or end-to-end testing, tests a completely integrated system to verify that it meets its requirements. For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.
Operational Acceptance testing
Operational Acceptance is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or Operations Readiness and Assurance (OR&A) testing. Functional testing within OAT is limited to those tests which are required to verify the non-functional aspects of the system.
In addition, software testing should ensure that the system is portable and that, as well as working as expected, it does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.
An installation test assures that the system is installed correctly and works on the actual customer's hardware.
A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
Smoke and sanity testing
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as build verification test.
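A minimal build-verification-style smoke test might look like the sketch below; the tiny in-file "application" is an assumed stand-in for the real build.

```python
class App:
    """Assumed stand-in for the application under test."""
    def __init__(self):
        self.store = {}
    def put(self, key, value):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)
    def health_check(self):
        return "ok"

def smoke_test():
    app = App()                        # does the build even start?
    assert app.health_check() == "ok"  # does the most basic call respond?
    app.put("answer", 42)              # does a trivial round trip work?
    assert app.get("answer") == 42

if __name__ == "__main__":
    smoke_test()
    print("smoke test passed - proceed with further testing")
```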
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly, stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previous sets of test-cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test-cases to test parts of the new design to ensure prior functionality is still supported.
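As a small illustration (the bug number and parser are assumed), a regression test pins a previously fixed fault so that re-running the suite detects it if it ever returns:

```python
def parse_price(text):
    # Bug #123 (assumed): an earlier version crashed on inputs with a
    # thousands separator such as "1,299.00". The fix strips separators.
    return float(text.replace(",", ""))

def test_regression_bug_123_thousands_separator():
    assert parse_price("1,299.00") == 1299.00

def test_existing_behaviour_still_supported():
    assert parse_price("19.99") == 19.99

test_regression_bug_123_thousands_separator()
test_existing_behaviour_still_supported()
print("regression checks passed")
```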
Acceptance testing can mean one of two things:
- A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
- Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even infinite period of time (perpetual beta).
Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.
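A bare-bones sketch of such a pipeline gate (the pytest commands and test paths are assumptions): the automated suites run against every release candidate, and a non-zero exit code stops the delivery pipeline, giving immediate feedback.

```python
import subprocess, sys

SUITES = [
    ["pytest", "tests/unit", "-q"],          # bottom-up, code-facing checks
    ["pytest", "tests/integration", "-q"],   # system-level, business-facing checks
]

def main():
    for cmd in SUITES:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("continuous testing gate failed")
            return 1
    print("release candidate passed the continuous testing gate")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```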
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
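A small fuzzing sketch of this kind of failure testing (the parser and its documented ValueError contract are assumed): random, often invalid inputs are thrown at the function, and anything other than the documented failure mode counts as a robustness defect.

```python
import random, string

def parse_csv_line(line):
    # assumed function under test; ValueError is its documented failure mode
    if not isinstance(line, str):
        raise ValueError("expected a string")
    return [field.strip() for field in line.split(",")]

def random_input(rng):
    choices = [
        lambda: "".join(rng.choices(string.printable, k=rng.randint(0, 200))),
        lambda: None,
        lambda: rng.randint(-10**6, 10**6),
        lambda: b"\x00\xff" * rng.randint(0, 50),
    ]
    return rng.choice(choices)()

rng = random.Random(0)
for _ in range(10_000):
    data = random_input(rng)
    try:
        parse_csv_line(data)
    except ValueError:
        pass  # documented failure mode: acceptable
    # any other exception propagates and flags a robustness defect
print("fuzzing run completed without unexpected crashes")
```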
Software performance testing
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period.
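As a toy illustration only (the workload and request handler are assumed, and a real performance test would use a dedicated tool and environment), the sketch below times a function under increasing concurrency to observe how throughput changes.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # assumed operation under test
    return sum(i * i for i in range(n))

def measure(concurrency, requests=200, work=20_000):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handle_request, [work] * requests))
    elapsed = time.perf_counter() - start
    return requests / elapsed  # throughput in requests per second

for c in (1, 4, 16):
    print(f"concurrency {c:2d}: {measure(c):7.1f} req/s")
```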
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application.
Accessibility testing may include compliance with standards such as:
- Americans with Disabilities Act of 1990
- Section 508 Amendment to the Rehabilitation Act of 1973
- Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
The International Organization for Standardization (ISO) defines security testing as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."
Internationalization and localization
The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).
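A tiny pseudolocalization sketch (the transformation rules are an assumed convention, not a standard): UI strings are accented, padded and bracketed so that hard-coded strings, encoding problems and layouts that cannot absorb longer translations become visible without real translation.

```python
# map vowels to accented equivalents to exercise non-ASCII handling
ACCENTS = str.maketrans("aeiouAEIOU", "áéíóúÁÉÍÓÚ")

def pseudolocalize(message):
    padded = message.translate(ACCENTS)
    # roughly 30% length expansion approximates longer target languages
    padding = "~" * max(1, len(message) // 3)
    return f"[!!{padded}{padding}!!]"

for msg in ("Save", "File not found", "Printing 3 of 10 pages"):
    print(pseudolocalize(msg))
# Strings that appear untouched in the running UI are likely hard coded;
# clipped brackets reveal layouts that cannot absorb longer translations.
```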
Actual translation to human languages must be tested, too. Possible localization failures include:
- Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
- Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
- Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
- Untranslated messages in the original language may be left hard coded in the source code.
- Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
- Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
- Software may lack support for the character encoding of the target language.
- Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
- A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
- Software may lack proper support for reading or writing bi-directional text.
- Software may display images with text that was not localized.
- Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
Development Testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replace traditional QA focuses, it augments it. Development Testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.
Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices.
A/B testing is basically a comparison of two outputs, generally when only one variable has changed: run a test, change one thing, run the test again, compare the results. This is most useful in small-scale situations, but it is very useful in fine-tuning any program. With more complex projects, multivariate testing can be done.
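A bare-bones sketch with assumed numbers: conversion counts for variants A and B are compared with a two-proportion z-test to judge whether the changed variable, rather than chance, explains the difference.

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    # two-proportion z-test with a pooled proportion
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

# assumed example data: 120/2400 conversions for A, 150/2400 for B
p_a, p_b, z, p = z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"A: {p_a:.3%}  B: {p_b:.3%}  z={z:.2f}  p={p:.3f}")
# A small p-value suggests the changed variable, not chance, explains the gap.
```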
In concurrent testing, the focus is on performance while continuously running with normal input and under normal operational conditions, as opposed to stress testing or fuzz testing. Memory leaks, as well as more basic faults, are more easily found and resolved using this method.
Conformance testing or type testing
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Traditional waterfall development model
A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
Another practice is to start software testing at the same moment the project starts, and to continue it as a continuous process until the project finishes.
Agile or Extreme development model
In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). These tests are expected to fail initially; then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration, where software updates can be published to the public frequently.
This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
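A minimal test-first sketch of the cycle described above (the slugify function and its behaviour are illustrative assumptions, not taken from the source): the tests are written first, fail, and then pass once the implementation is filled in.

```python
# test_slugify.py: a tiny test-driven-development sketch using the standard unittest module.
import re
import unittest

def slugify(title):
    """Implementation added after the tests below were written (and initially failing)."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("  Agile, or Extreme? "), "agile-or-extreme")

if __name__ == "__main__":
    unittest.main()
```

In a continuous integration setup, a runner would execute a file like this on every check-in.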
Bottom Up Testing is an approach to integrated testing where the lowest level components (modules, procedures, and functions) are tested first, then integrated and used to facilitate the testing of higher level components. After the integration testing of lower level integrated modules, the next level of modules will be formed and can be used for integration testing. The process is repeated until the components at the top of the hierarchy are tested. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.
Top Down Testing is an approach to integration testing in which the top-level integrated modules are tested first, and each branch of a module is then tested step by step until the end of the related module is reached.
In both approaches, method stubs and drivers are used to stand in for missing components and are replaced as the levels are completed.
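A minimal sketch of the stub idea (all names are illustrative): a higher-level reporting function is tested before the real data-access component exists, with a mock object standing in for the missing piece.

```python
# Integration-testing sketch: a stub replaces a lower-level component that is not ready yet.
import unittest
from unittest import mock

def monthly_report(fetch_orders):
    """Higher-level component under test; depends on a lower-level fetch_orders()."""
    orders = fetch_orders()
    return sum(order["total"] for order in orders)

class MonthlyReportTest(unittest.TestCase):
    def test_report_sums_order_totals(self):
        # The stub stands in for the missing data-access module and is
        # replaced by the real component once that level is completed.
        stub = mock.Mock(return_value=[{"total": 10.0}, {"total": 5.5}])
        self.assertEqual(monthly_report(stub), 15.5)

if __name__ == "__main__":
    unittest.main()
```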
A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.
- Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
- Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
- Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
- Test execution: Testers execute the software based on the plans and test documents then report any errors found to the development team.
- Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
- Test result analysis: Also called defect analysis, this is done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e., the software is found to be working properly) or deferred to be dealt with later.
- Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. This is also known as resolution testing.
- Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly (a minimal sketch follows this list).
- Test Closure: Once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.
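As a sketch of the regression-testing step above (assuming pytest; the marker name and tests are illustrative), a small subset of the suite can be tagged and re-run on every delivery:

```python
# test_regression.py: tag a small regression subset and run it with: pytest -m regression
# (Register the marker in pytest.ini under "markers =" to silence the unknown-marker warning.)
import pytest

def add(a, b):                      # stand-in for previously shipped functionality
    return a + b

@pytest.mark.regression
def test_addition_still_correct():  # part of the regression subset
    assert add(2, 3) == 5

def test_new_feature_rounding():    # ordinary test, not selected by "-m regression"
    assert add(-1, 1) == 0
```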
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:
- Program monitors, permitting full or partial monitoring of program code including:
- Instruction set simulator, permitting complete instruction level monitoring and trace facilities
- Hypervisor, permitting complete control of the execution of program code including:
- Program animation, permitting step-by-step execution and conditional breakpoint at source level or in machine code
- Code coverage reports
- Formatted dump or symbolic debugging tools, allowing inspection of program variables on error or at chosen points
- Automated functional GUI testing tools are used to repeat system-level tests through the GUI
- Benchmarks, allowing run-time performance comparisons to be made
- Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage (a brief sketch follows below)
Some of these features may be incorporated into a single composite tool or an Integrated Development Environment (IDE).
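As an example of the profiling tools mentioned above, here is a minimal sketch using Python's standard cProfile and pstats modules (the hot_spot function is an illustrative stand-in):

```python
# Profiling sketch: find hot spots with the standard cProfile/pstats modules.
import cProfile
import pstats

def hot_spot():
    return sum(i * i for i in range(100_000))

cProfile.run("hot_spot()", "profile.stats")     # record timing data to a file
stats = pstats.Stats("profile.stats")
stats.sort_stats("cumulative").print_stats(5)   # show the five most expensive calls
```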
Measurement in software testing
Usually, quality is confined to such topics as correctness, completeness, and security, but it can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
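Two frequently quoted measures, sketched here with made-up numbers (the figures are illustrative, not from the source):

```python
# Simple testing metrics: defect density and statement coverage.
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000.0)

def statement_coverage(statements_executed, statements_total):
    """Fraction of statements exercised by the test suite."""
    return statements_executed / statements_total

print(defect_density(42, 120_000))              # 0.35 defects per KLOC
print(f"{statement_coverage(950, 1_000):.0%}")  # 95%
```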
Hierarchy of testing difficulty
Based on the number of test cases required to construct a complete test suite in each context (i.e., a test suite such that, if it is applied to the implementation under test, we collect enough information to determine precisely whether the system is correct or incorrect according to some specification), a hierarchy of testing difficulty has been proposed. It includes the following testability classes:
- Class I: there exists a finite complete test suite.
- Class II: any partial distinguishing rate (i.e. any incomplete capability to distinguish correct systems from incorrect systems) can be reached with a finite test suite.
- Class III: there exists a countable complete test suite.
- Class IV: there exists a complete test suite.
- Class V: all cases.
It has been proved that each class is strictly included in the next. For instance, testing when we assume that the behavior of the implementation under test can be denoted by a deterministic finite-state machine for some known finite sets of inputs and outputs and with some known number of states belongs to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs to classes from Class III on. Testing temporal machines, where transitions are triggered if inputs are produced within some bounded real interval, only belongs to classes from Class IV on, whereas testing many non-deterministic systems only belongs to Class V (but not all, and some even belong to Class I). Inclusion in Class I does not require the simplicity of the assumed computation model, as some testing cases involving implementations written in any programming language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and temporal machines with rational timeouts, belong to Class II.
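In compact form, the strict inclusions described above can be written as:

```latex
\text{Class I} \subsetneq \text{Class II} \subsetneq \text{Class III} \subsetneq \text{Class IV} \subsetneq \text{Class V}
```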
The software testing process can produce several artifacts.
- Test plan
- A test plan is a document detailing the objectives, target market, internal beta team, and processes for a specific beta test. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.
- Traceability matrix
- A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when related source documents are changed, to select test cases for execution when planning for regression tests by considering requirement coverage.
- Test case
- A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
- Test script
- A test script is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case serves as the baseline for creating test scripts using a tool or a program (a minimal sketch follows this list).
- Test suite
- The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
- Test fixture or test data
- In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
- Test harness
- The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
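A minimal sketch tying several of these artifacts together (the discount function, the TC-17 identifier and the data values are illustrative assumptions): a test script that exercises one test case and uses a small fixture.

```python
# test_discount.py: one test case (hypothetical identifier TC-17) written as a test script.
import unittest

def apply_discount(price, percent):            # unit under test (assumed for the sketch)
    return round(price * (1 - percent / 100), 2)

class DiscountTestCase(unittest.TestCase):
    def setUp(self):
        # Test fixture: fixed input data shared by the test(s) below.
        self.base_price = 200.00

    def test_tc17_ten_percent_discount(self):
        """TC-17: precondition is that the catalogue price is known; expected result is 180.00."""
        actual = apply_discount(self.base_price, 10)
        self.assertEqual(actual, 180.00)       # expected vs. actual result

if __name__ == "__main__":
    unittest.main()
```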
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence, or professionalism as a tester.
- Software testing certification types
- Exam-based: Formalized exams, which need to be passed; can also be learned by self-study [e.g., for ISTQB or QAI]
- Education-based: Instructor-led sessions, where each course has to be passed [e.g., International Institute for Software Testing (IIST)]
- Testing certifications
- ISEB offered by the Information Systems Examinations Board
- ISTQB Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualification Board
- ISTQB Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualification Board
- Quality assurance certifications
- CSQE offered by the American Society for Quality (ASQ)
- CQIA offered by the American Society for Quality (ASQ)
Some of the major software testing controversies include:
- What constitutes responsible software testing?
- Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.
- Agile vs. traditional
- Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has grown in popularity since 2006, mainly in commercial circles, whereas government and military software providers use this methodology as well as traditional test-last models (e.g., the Waterfall model).
- Exploratory test vs. scripted
- Should tests be designed at the same time as they are executed or should they be designed beforehand?
- Manual testing vs. automated
- Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. In particular, test-driven development states that developers should write unit tests, as in the XUnit frameworks, before coding the functionality. The tests can then be considered a way to capture and implement the requirements. As a general rule, the larger the system and the greater the complexity, the greater the ROI in test automation. Also, the investment in tools and expertise can be amortized over multiple projects with the right level of knowledge sharing within an organization.
- Software design vs. software implementation
- Should testing be carried out only at the end or throughout the whole process?
- Who watches the watchmen?
- The idea is that any form of observation is also an interaction — the act of testing can also affect that which is being tested.
- Is the existence of the ISO 29119 software testing standard justified?
- Significant opposition has formed out of the ranks of the context-driven school of software testing about the ISO 29119 standard. Professional testing associations, such as The International Society for Software Testing, are driving the efforts to have the standard withdrawn.
Software verification and validation
- Verification: Have we built the software right? (i.e., does it implement the requirements).
- Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:
- Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
- Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
According to the ISO 9000 standard:
- Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
- Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Software quality assurance (SQA)
Software testing is a part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.
- Dynamic program analysis
- Formal verification
- Independent test organization
- Manual testing
- Orthogonal array testing
- Pair testing
- Reverse semantic traceability
- Software testability
- Orthogonal Defect Classification
- Test Environment Management
- Test management tools
- Web testing
May: May Flowers
Spring is a great time for flowers. At Tyler you can visit our magnolias, azaleas, rhododendrons and lilacs that all bloom in May. You will find flowers in our Native Woodland Walk and along the forested pathways.
So you may be asking yourself, “Why are flowers different colors? Why are some sweet smelling and some smelly? Why do I see more bees on one flower and more butterflies on another?” Flowers are the part of the plant that makes seeds to make more plants. Flowers are shaped the way they are to attract pollinators or to be able to collect pollen from the wind. Bees don’t see red, but they do see blue, yellow and ultraviolet. Thus, bee-pollinated flowers are mostly yellow (some blue) with ultraviolet nectar guides, or “landing patterns.” Bees can smell, so they like flowers that have a delicate floral scent. Butterfly-pollinated flowers are brightly colored (even red) but usually have little scent, since butterflies rely far more on sight than on smell. These flowers are often in clusters and/or are designed to provide a landing platform.
- Carnation: Obtain a white carnation (celery works too) and put it in a vase with water and blue food coloring (about 10 drops in 1/4 cup of water should do). Wait a day or two, and see what happens.
- Dissect a flower: Different flowers have different numbers of petals. Have your child dissect different flowers and count the petals. Next use them for the paint with flowers in the craft section.
- Paint with Flowers – color a flower coloring page with real flower petals. Buy bright colored flowers from a store or use flowers in your yard and rub them against the paper to color!
- Grow some flowers of your own! Grow flowers from seed or bulbs. Grow herbs to use in your kitchen.
When exploring Tyler, you may want to go on a scavenger hunt. You can get a hunt at the Visitor Center. If you want to go on a hunt by yourself, these are things you may want to look for:
- different color flowers such as green or orange
- a flower the size of your pinky nail
- a flower that likes water
- a flower that is visited by a bee
- a flower growing under a tree
- a flower in the vegetable garden
- a flower that a butterfly would like
- http://www.theteachersguide.com/plantsflowers.htm – tons of resources about plants and flowers
- http://biology.clc.uc.edu/courses/bio106/pollinat.htm – good background information on pollination
More Activities and Photos:
Visit us on Pinterest for more great learning opportunities!
Why Do We Celebrate Memorial Day? is an informational reading literacy packet designed for first and second grades. Learn all about the history of this holiday and why it is important to celebrate.
Also includes a paragraph about Armed Forces Day (celebrated 3rd Saturday of May) and an opportunity for readers to compare the two holidays.
Use in Guided Reading groups, literacy centers, literacy circles, whole class.
Appropriate for first graders (with support) and on-level second graders. Also perfect to use for reading intervention for 3rd-5th graders.
-(7) Tier 3 vocabulary words- specific to Memorial Day. Vocabulary cards and definitions with graphics
-Memorial Day printable book- original informational text written by me that simply explains the history of this holiday and why we celebrate it.
-(2) different formats of book are included: booklet form (available in color and black/white) and article form
-(2) different cover pages for printable booklet- you can print the book on full sheets of paper or make smaller books by cutting the pages in half and stapling together. Use the full size cover page for the article.
-Calendar- mark important dates on the calendar
-Task Cards-(12) cards with recording sheet for reader response
-(7) stationery sheets- color and black and white
**Please note: the booklet form and the article form of the text contain the SAME information. It is provided to offer you and the students choice in the form that they would like to read.
Common Core State Standards Alignment
Reading Informational Text - 1.1, 1.2, 1.4, 1.5, 1.6
2.1, 2.4, 2.5, 2.6
You may also be interested in my Flag Day Literacy Packet
Memorial Day Packet for Kindergarten
Teacher Mom of 3
In 468 CE, Rabbi Amemar, Rabbi Mesharsheya and Rabbi Huna, the heads of Babylonian Jewry, were arrested and, 11 days later, executed. The Jewish community of Babylon had existed for 900 years, ever since Nebuchadnezzar had conquered Israel, destroyed the Holy Temple, and exiled the Jews to Babylon. Seventy years later, when the Jews were permitted to return to Israel, a large percentage remained in Babylon, and this community eventually became the center of Jewish rabbinic authority. Things began to worsen in the 5th century, when the Persian priests, fighting against encroaching Christian missionaries, unleashed anti-Christian persecutions that caught the Jews of Babylonia in their wake. Eventually the situation improved, and Babylon remained the center of Jewish life for another 500 years.
Why Do Tridacnids Look the Way They Look?Author: James W. Fatherree, MSc
The tridacnid clams are usually quite beautiful, often covered in unique patterns and a broad range of colors, and I’ve wondered for a long time why they look as fancy as they do. After all, they’re just clams, albeit very unusual ones. So, after a whole lot of digging around for answers, I found some pretty interesting information about tridacnid coloration. I’ll cover some basics first, in case you don’t know much (or anything) about tridacnids, and then we’ll get to the other stuff, some of which will be news to just about everybody.
First of all, tridacnids contain populations of single-celled algae, called zooxanthellae, within some of their tissues. These are the same things that reef-dwelling corals harbor, and when a coral or tridacnid is exposed to bright light, the zooxanthellae can make more food than they need for themselves and actually “feed” their host. Excess nutrients made by the zooxanthellae via photosynthesis are donated to the coral or clam host, helping them meet their daily needs.
In the case of tridacnids, the bulk of the zooxanthellae are kept in a thin sheet of soft tissue, called the mantle, which can be extended well beyond the edge of the shell by most species. A tridacnid shell is made to sit up facing the sun when it’s open, rather than lying flat on its side. So during the day, a tridacnid simply has to open its shell and extend its zooxanthellae-packed mantle tissue toward the sun in order to feed, and it’s this mantle tissue that we notice when we look at a tridacnid clam.
Tridacnids can produce numerous pigments, and the zooxanthellae make their own pigments, as well, making the appearance of tridacnid mantles a collaboration of sorts between the host and the hosted. The zooxanthellae make pigments used in the process of photosynthesis, like chlorophyll and peridinin, as well as a few others, and the pigments created by the clams are used primarily as sunscreens that protect both of them from excessive light (Yonge 1975). That’s not all, though, as in different situations some pigments may also serve other functions.
The primary photosynthetic pigments in the zooxanthellae absorb visible light in order to perform photosynthesis, turning solar energy into the chemical energy trapped in simple sugars. However, these pigments can absorb and use some parts of the spectrum of visible light better than others, and they don’t absorb much green or red light. That’s why zooxanthellae tend to be rather brownish, or reddish-brown in color. Their color, as we see it, is a mixture of the green and red light that is reflected by their photosynthetic pigments.
On top of those, there can be some other pigments that also help to convert some unusable (or just less useable) colors of light coming from the sun into readily usable colors like blue. For example, a specific pigment may absorb less-suitable violet light and then give off (reflect or fluoresce) blue light, which can then be used by the zooxanthellae to produce more food. Likewise, one pigment may actually absorb unusable near-UV light and emit violet light, which is then absorbed by another pigment that emits usable blue light, and so on. Thus, the net effect is that the zooxanthellae and the tridacnid host may be able to get more food out of the same quantity of light in some situations, due to the presence of these “accessory” pigments.
Conversely, some non-photosynthetic pigments are produced in order to block damaging ultraviolet light. Excessive amounts of UV light isn’t just bad for people, it’s also bad for tridacnids and zooxanthellae, and when tridacnids are living in shallow tropical waters, they obviously need something to help block out the great amounts of UV light that reaches them there. So some pigments can be produced to function as natural sunscreens, which act as UV shields by reflecting and/or absorbing light in the UV part of the spectrum.
It’s likely that some pigments can also help when there is simply too much visible light, rather than only too much UV light. Although you’d think more is always better, it’s possible to have too much usable light, which can actually decrease photosynthetic rates. This occurs when there is so much light that the photosynthetic process is overloaded, and the rate of food production can actually start to go down rather than up. Thus, some pigments may actually help on bright days in shallow waters when things get out of hand.
Unfortunately, I have yet to find any book, article, or scientific paper that covers all of the pigments that can be made, but Yonge (1975) wrote that they are mainly in the color range from blue to green and brown to yellow, which can apparently mix to create many of the other colors that we see. But there are also other colors that simply can’t be produced by combining these, so there must be some number of other pigments, as well. Of course this means I haven’t been able to find the exact functions of all the tridacnid and zooxanthellal pigments and how they work, either, but you should have the basic idea now.
On another note, it’s also interesting that not all tridacnids living in really shallow water are brightly colored, while some deeper-living ones are. And there are other colorful sorts of non-tridacnid clams that live deep enough that they certainly don’t need photo-protective pigments, and don’t carry zooxanthellae either. Thus, it would seem that the roles of various clam pigments aren’t quite as straightforward as I’ve made it sound; again, these are just the basics.
In addition to pigments, the mantle also contains some little structures called iridophores, which are made of small groups of cells (called iridocytes). These cells contain stacks of tiny reflective platelets, and they can act as sunscreens, too (Griffiths et al. 1992). It’s these little reflectors that are also responsible for the iridescent appearance of some tridacnids and some of the colors of the mantle.
I’ll add that tridacnid mantles also contain some other things called mycosporine-like amino acids (MAAs), which are also known to be very effective, naturally produced UV sunscreens. These are also used by both tridacnids and reef corals, and a few other creatures (ex. Ishikura et al. 1997 and Dunlap & Shick 1998). Thus, the job of UV screening can fall at least partially upon the MAAs rather than just the iridophores and/or the pigments. MAAs are clear, though, and don’t have anything to do with how a clam looks.
Regardless of exactly what all of the pigments and iridophores are doing, as best as I can tell, nobody really knows why tridacnid mantles look the way they do when it comes to putting all these colors into mantle patterns. Tridacnids can be covered by a seemingly endless array of not just colors, but patterns, and sometimes it seems like every tridacnid of a given species that I see on a dive looks different. Likewise, many tridacnids are aquacultured on “clam farms,” and if you ever visit one you’ll see that hundreds of juveniles, which you know came from the same parents and were raised in the same tanks, can all look at least a little different. Many may look similar overall, but some will look nothing like the others. They can all have different patterns, different colors, or both.
Thus, at least to some degree, the colors and the patterns seem to be rather random, as each species has a range of colors/pigments it seems to be able to make, and then uses them in many different quantities and arrangements of spots, stripes, blotches, borders, etc., with no obvious reason why. Rosewater (1965) thought that these differences were somehow related to the zooxanthellae, but McMichael (1974) suggested it was due to some genetic variability in clams. However, apparently neither of these ideas has been thoroughly investigated. Ellis (1998) did report that extracting zooxanthellae from colorful clams and giving them to other clams did not cause the recipients to develop the same coloration, though, and Laurent et al. (2002) looked for a genetic link to color in tridacnids, but didn’t find anything at all. Burton (1992) also reported the same, as he found no apparent genetic link to color, either.
Still, it’s obvious that there’s at least some genetic control over appearance, as each tridacnid species can have some mantle patterns that are unique to that species. For example, many specimens of Tridacna maxima have a mantle pattern that is composed of small, rather teardrop-shaped spots (these are known as “teardrop maximas”), but no other species of tridacnid has this same look. There’s something about some maximas that produces this species-specific pattern at times. Additionally, a particular pattern seen on a particular species may be more or less prevalent, or even absent, in various geographic localities. Species X from location Y may have a lot of different mantle patterns, but in a large enough population sample of them, a few will look essentially alike, and those may look unlike specimens of species X found in other areas.
It’s also worth noting that tridacnids may or may not get darker and/or change color as a response to long-term changes in lighting. Here’s a good example of what I mean: In a series of mantle color experiments performed by ICLARM (2002), several predominantly brown-colored aquacultured specimens of Tridacna maxima known to have brightly colored parents were collected and used in a series of tests. Some specimens were kept in tanks under different light intensities, and some of them were kept at three different depths in protective aquaculturing cages in the sea, along with some colorful specimens of the same species, to see what effect the changes in lighting would have. Some were given zooxanthellae collected from other brightly colored specimens to see what that would do, and some were provided with extra nutrients to see what that might do.
It was found that darkness and/or color were indeed affected by the different lighting regimens in the tanks, but not consistently. Those kept under reduced lighting experienced an “intensification of baseline colors,” while the brown clams kept in the ocean cages stayed brown, the initially blue clams kept with them turned green, and the initially purple clams turned blue! Providing the brown specimens with the zooxanthellae from brightly colored specimens had no effect, and adding the extra nutrients had no effect, either. It’s all pretty hard to figure out.
Oh, and there are eyes to talk about, too. The mantles of the Tridacna species also have a number of dark-colored simple eyes on their surface. In fact, a single clam may have several thousand eyes. They aren’t very fancy, and they can’t produce a real picture like our eyes do, but they do allow tridacnids to sense in which direction the sun is. To some degree tridacnids can also detect shadows and different visible colors and UV light, and movement too. So mantles often have dark spots on them in addition to everything else, and the distribution of the eyes can be quite random at times. In some cases the eyes are found relatively widely spaced around the edge of the mantle, but in other cases they may be very tightly spaced, actually touching each other. Then again, they may also be scattered randomly all over the mantle in high numbers or low. Again, I found no real answers as to why there’s so much variability.
Aside from all this, it has also been suggested that not so much the colors, but rather the patterns on the mantle have something of a camouflage effect, as they can break up the outline of the exposed fleshy mantle (Knop 1996). This supposedly makes the mantle less obvious and more difficult for predators to see.
While diving I’ve found numerous relatively drably colored tridacnids with mottled mantle patterns that were pretty well camouflaged. However, I could obviously still see them anyway. Likewise, if you move too close to a tridacnid it can definitely see you coming and will usually retract its mantle into its shell as you approach, consistently giving away their own position. I’d miss many of them if they just sat still, but my eyes regularly pick up the jerking motion of the mantle much more easily than their actual mantle color or pattern.
Thus, some tridacnids are rather well hidden at a distance and may not be seen by something quite far away, but others are either very obvious and/or give their positions away themselves. So, for at least some of them, I can’t see that the color/pattern has anything to do with camouflage. Besides, it would seem to me that one pattern, or at least a few, would work better than the others and those patterns would eventually dominate the rest as the less-camouflaged clams got eaten by predators. Instead, it seems that particular colors and/or patterns may not provide any advantage or disadvantage at all.
All right, one more bit of info before we’re done. In addition to everything covered above, you should also note that the perceived color/pattern of many tridacnids’ mantle can change drastically depending on the viewing and/or lighting angle, particularly in the case of Tridacna crocea and T. maxima. In other words, if you change the angle that you view a clam from, or change the angle of the light source, or both, you will often see different colors being reflected off the mantle. Sometimes this isn’t very pronounced or doesn’t occur at all, but in some cases a specimen can look completely different when viewed from the front instead of from above. Some specimens can also have a brilliant reflective sheen when looked at from certain angles, as well. This isn’t caused by the pigments, but is instead due to the reflective iridophores in the mantle.
Well, that’s about all I can say on the matter. As you can see, there are still a number of unanswered questions when it comes to why tridacnids look the way they do, and I can assure you that I’ve looked in a lot of places for those answers. Some information is better than none, though, and we’ll just have to keep wondering about, and researching, the rest.
Burton, C. 1992. Mantle color variation and genetic diversity in Lizard Island giant clams (Tridacna gigas): http://www.uoguelph.ca/zoology/courses/ZOO4600/Copy92.pdf
Dunlap, W. C. and J. M. Shick. 1998. “Ultraviolet radiation-absorbing mycosporine-like amino acids in coral reef organisms: a biochemical and environmental perspective.” Journal of Phycology 34:418-430.
Ellis, S. 1998. Spawning and Early Larval Rearing of Giant Clams. Center for Tropical and Subtropical Aquaculture Publication 130. 55 pp.
Griffiths, D. J., H. Winsor, and T. Luongvan. 1992. “Iridophores in the mantle of giant clams.” Australian Journal of Zoology 40(3):319-326.
International Center for Living Aquatic Resources Management (ICLARM). 2002. Coastal and Marine Resources Research Program operational plan: http://www.worldfishcenter.org/operational%20plan/op2001/pdf/15-42.pdf (pg.16)
Ishikura, M., C. Kato, and T. Maruyama. 1997. “UV-absorbing substances in zooxanthellate and azooxanthellate clams.” Marine Biology 128:649-655.
Knop, D. 1996. Giant Clams: A Comprehensive Guide to the Identification and Care of Tridacnid Clams. Dahne Verlag, Ettlingen, Germany. 255 pp.
Laurent, V., S. Planes, and B. Salvat. 2002. “High variability of generic pattern in giant clam (Tridacna maxima) populations within French Polynesia.” Biological Journal of the Linnean Society 77:221-231.
McMichael, D. F. 1974. “Growth rate, population size and mantle coloration of the small giant clam Tridacna maxima (Roding), at One Tree Island, Capricorn Group, Queensland.” Proceedings of the Second International Coral Reef Symposium 1:241-254.
Rosewater, J. 1965. “The family Tridacnidae in the Indo-Pacific.” Indo-Pacific Mollusca 1:347-396.
Yonge, C. M. 1975. “Giant clams.” Scientific American 232(4):96-105.
Identifying flowers and other plants is an important skill for every gardener to learn. Not only will it help you tell the difference between a weed and a marigold seedling, but it will also allow you to identify plants in public gardens or in the wild so that you can incorporate coveted specimens into your own garden. Learning the relationships between various plants will help you create a more healthy and thriving garden, too.
All plants are given a scientific name (also sometimes called their Latin name), which consists of a capitalized genus name and a lower-case species name. So, for example, the large-flowering raspberry has the scientific name Rubus parviflorus and the dewberry has the scientific name Rubus flagellaris. The fact that they are both in the Rubus genus tells us that they are related and may share a similar plant structure and growth habit. Scientific names are important since the common names of plants may vary from region to region; for example, in parts of North America, the large-flowering raspberry is known as the "thimbleberry."
To identify plants and flowers, you'll need some good field guides and other books. Some plant ID books are arranged by flower color, with all the red flowers grouped together and all the yellow flowers together, and so on. However, this can be confusing, since blossom color can vary between individual plants, and what one person might call "lavender" another person might think of as "blue." Better choices are books organized by family, habitat or bloom time.
Some field guides and other plant identification books will have a "key" in the front or the back which will help you narrow down an ID without flipping through the entire book. A key will ask you a series of questions, such as how many petals the flower has or how the leaves are arranged, and will offer some suggestions for identification. The book should also have a glossary to help you determine what "basal" leaves are or what it means for a flower to be "irregular."
Structure and Habitat
To identify plants, you must learn to look not only at the flower but also to the structure of the leaves, stems, fruit and other parts of the plant. A plant won't always be in bloom when you find it, or the bloom might be damaged or deformed. Being able to recognize a plant by the leaves or fruit will allow you to identify plants at any time of year. Paying attention to where a plant or flower is growing will also be a big help in identifying it, since many plants are very particular about their environment, and may only grow in moist conditions or in shady spots. For wildflowers, you will also check the plant's known range.
A loupe, hand lens or magnifying glass will make it easier to examine the tiny details of a flower, and can also offer a unique perspective on an otherwise familiar plant. You may also want a plant press (a heavy hardcover book will work in a pinch) to preserve specimens for later study. Take care in gathering plants from the wild, however. Do not take more than one or two specimens, and never pick plants that are rare, threatened or endangered.
Manta rays are the largest rays in the world. The genus Manta was long considered monotypic, but has since been split into two species: Manta birostris and Manta alfredi. M. alfredi is the most abundant species in Zavora Bay, but both species can be seen all year round, with a peak from July to November. Despite the attention these charismatic animals attract, little is known about manta population structure and dynamics.
Research at the Zavora Marine Lab aims to generate knowledge on the region’s manta populations and assist with their conservation.
Manta ray photo identification
Manta rays have a unique spot pattern on their belly and between their gills. These markings make it possible to identify individuals. Interns use photo-identification to gather data on population dynamics, migration patterns, and environmental variables that influence manta rays’ distribution and abundance.
Zavora Lab founder Yara Tibirica, in collaboration with the oceanographer Carlos E. J. de A. Tibirica, developed software that catalogues the number and location of the spots to identify individual manta rays.
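Purely as an illustration of the general idea (this is not the Zavora Lab software, and the coordinates are made up), spot patterns can be compared by measuring how far each spot in a new sighting lies from the nearest spot of each catalogued individual:

```python
# Illustrative spot-pattern matching sketch (not the actual Zavora Lab software).
import math

catalogue = {
    "M-001": [(0.20, 0.35), (0.42, 0.50), (0.61, 0.30)],
    "M-002": [(0.15, 0.70), (0.55, 0.65), (0.80, 0.40)],
}

def pattern_distance(new_spots, known_spots):
    """Sum of distances from each new spot to its nearest catalogued spot."""
    return sum(min(math.dist(a, b) for b in known_spots) for a in new_spots)

def identify(new_spots):
    """Return the catalogued individual whose spot pattern is closest to the sighting."""
    return min(catalogue, key=lambda name: pattern_distance(new_spots, catalogue[name]))

print(identify([(0.21, 0.34), (0.43, 0.52), (0.60, 0.31)]))   # "M-001"
```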
Photo (c) Maya Santangelo (Zavora Marine Lab, 2016)
The different ways in which students are motivated can affect their learning experience. An NIE research team is looking into how teachers can help students turn potential negatives into positives.
Academic motivation has always been of interest to Assistant Professor Serena Luo because she knows how it can influence student learning.
“Motivation is the force that drives behaviour,” she says. “It’s the students’ own internal reasons or purposes that drive them to choose and engage in various learning activities.”
But different students can hold very different motivational beliefs. According to Serena, there are four types of beliefs:
- Self-efficacy: This refers to how confident students are in their learning.
- Value: Anything that students inherently enjoy doing will hold an intrinsic value for them. But if students perform a task for reward or because their parents or teachers want them to, the value is extrinsic.
- Goals: Students who set performance goals may study hard because they want to outperform others, but they may also withdraw effort when there is a risk of failure. Those who set mastery goals are keen to learn new things and seek new challenges.
- Attribution: When students succeed or fail in a task, do they attribute it to their innate ability, their hard work or other reasons?
Students’ motivational beliefs shape the emotions they experience while learning. “Both of these, in turn, affect students’ learning strategies, and ultimately, achievement.”
Maladaptive Learning Strategies
Students who tend to have negative learning experiences may end up adopting maladaptive learning strategies. Serena explains that maladaptive learning strategies are those that students use to avoid challenges they face in school:
- Avoiding challenges: Such students avoid challenging situations and have difficulty in planning their studies effectively.
- Copying answers from others: When they encounter difficulties, these students would rather copy the work of other students than do the work themselves.
- Self-handicapping: When facing a challenge, these students feel they will not do well and look for reasons to justify their expected poor performance. This expectation may be based on previous experiences or on low self-esteem.
- Avoiding help-seeking: These students refrain from seeking help from peers or teachers even though they may need it. Even if they do seek help, it may be for the sake of finishing their work quickly with less effort.
- Emotional and avoidance coping: When such students encounter failures, they tend to feel despondent, or to feel that it is not their fault and shift the blame to others. They may also avoid the problem by shifting their attention to something else.
Turning Negatives into Positives
Students’ motivational beliefs constitute and are constituted by the context and situations in which they study.
To help students adopt positive motivational beliefs, emotions and learning strategies, Serena suggests that teachers can work on three aspects: (1) establish good teacher–student relationships; (2) make learning the classroom goal; and (3) consider students’ needs, opinions and difficulties.
It is important for teachers to show that they have high expectations of their students. In her study, Serena found that if teachers do this, students will tend to feel that they can do better.
Along with this, teachers can explicitly tell students that they can improve if they put in the effort. “We need to let students have that incremental belief about their ability, that their ability can be changed with effort and good learning strategies,” says Serena.
Teachers can also emphasize that the goal in the classroom is to learn, rather than to compare grades. This serves to build a classroom where students are not afraid to make mistakes. All answers and opinions are valued.
These methods will help create a positive classroom climate where students will have positive emotional experiences and learning behaviours.
But there is another way to motivate students that many of us may not think of: using assessment to engage students.
We need to let students have that incremental belief about their ability, that their ability can be changed with effort and good learning strategies.
– Serena Luo, Policy and Leadership Studies Academic Group
Assessment for Self-regulated Learning
For Serena, assessment is more than just about tests and exams.
“Assessment is part of the daily learning routine,” she says. “But it isn’t just about the students’ grades, and can also be used to promote self-regulated learning (SRL).”
Self-regulated learners take ownership of their learning and are highly motivated to learn, even when faced with challenging tasks.
While many focus on grades which may reflect how much students have learned, we must remember that assessment is also the time when teachers can give feedback that promotes SRL behaviour.
“Don’t just give a tick or cross,” Serena suggests. “Feedback does not need to be long, but it should be constructive and tell students how to do better.” Students will then know what is expected of them and how they can improve their own learning.
Serena also advises that feedback can be used to motivate if it links students’ improvement to the effort they have put in. A simple “Well done!” or “Good effort!” followed by why and what to do next will get students to see this link.
“Students will see their learning progress and it will enhance their self-efficacy. As long as they put in the effort, they can do better. Over time, students will tend to internalize the value of learning.”
Helping students to learn in such a positive manner is what Serena hopes to achieve and it motivates her to continue her work in this exciting area of education research.
Kansas Geological Survey, Bulletin 211, Part 4, p. 23-27
Stoneware potters often complain about two problems occurring with their ware; it is cracked when removed from the kiln or the glaze pops off the surface after the glaze is fired. A body used for years may suddenly develop these problems. The potter assumes that nothing has changed in the method of handling or firing the body. This is probably true, but some small change in final fired-body composition has occurred. This problem is not new; it has been going on as long as potters have been firing stoneware. It's just that often one forgets how the old problems were solved. Most pottery books mention the cracking problem and imply that if you are careful and don't anger the "pottery gods" too often, the problem can be lived with.
It is the purpose of this note to give some insight into the source of the trouble and some steps that may be taken to correct it. No specific body compositions are referred to, nor is a typical composition listed, because there are so many variations of stoneware compositions in use. Only ideas for problem solutions are presented. How the ideas presented are used to solve your problem will depend upon how much control you have over the stoneware body formulation. Sometimes the simplest solution is to change to another clay body.
X-ray diffraction analysis and thermal expansion measurements of the broken ware always show an excessive amount of the high temperature phase of quartz known as cristobalite to be present. Vitrified stoneware expands in size when heated and shrinks when cooled. This change in length is called thermal expansion.
The troublesome thermal expansion curve for the cristobalite form of quartz is shown in Figure 1 along with the thermal expansion of other phases normally present in vitrified stoneware. Note the very large percent linear change for cristobalite at about 200°C (392°F). This is the cause of the trouble as it changes from the β to α form during cooling. The α and β refer to different arrangements of the atoms within the crystal structure, β always referring to the high-temperature structure in American literature. (It is common in European literature to reverse the meaning of the symbols.) This reaction happens upon heating and cooling each time the ware is thermally cycled. If the ware comes out of the kiln broken, the trouble is caused by the β form changing to the α form causing it to break in the first cooling period. The quartz expansion curve shows a similar behavior, but the transformation occurs at a higher temperature and is of a lesser magnitude than that of the cristobalite so it is less troublesome in most bodies. The glass expansion is characteristic of a glass in that the expansion is linear up to a region just below its softening temperature. Mullite has the ideal thermal expansion curve for a stoneware body that is to be used as ovenware. Note the lack of any irregularities in the curve. It is smooth and continually increasing with increasing temperature.
Figure 1--Thermal expansion curves for the phases present in fired stoneware.
Cristobalite is the high-temperature form of quartz with its region of equilibrium stability above 1470°C (2678°F). How can this cause trouble if the body has never been fired higher than cone 9 or 10, a maximum of 1285°C (2345°F)? The 1470°C temperature applies only if you are starting with pure quartz and converting it to cristobalite. A stoneware body is not pure quartz but is mostly made up of clays which contain some free quartz. Note that in the expansion curves for stoneware bodies in Figure 2 the α and β transformation for quartz is still visible, showing that much of the original quartz was not transformed to cristobalite or dissolved in the glass phase.
Figure 2--Thermal expansion curves for a cracking and non-cracking fired stoneware body.
Most of the stoneware bodies are formulated around clays containing the clay mineral kaolinite, which when heated undergoes the following set of reactions:
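The sequence commonly cited for kaolinite runs roughly as follows (temperatures approximate; the spinel-type phase is marked with *):

Al2O3 • 2SiO2 • 2H2O (kaolinite) → Al2O3 • 2SiO2 (metakaolin) + 2H2O (about 550°C)
2(Al2O3 • 2SiO2) → 2Al2O3 • 3SiO2 (*) + SiO2 (amorphous) (about 925-950°C)
3(2Al2O3 • 3SiO2) → 2(3Al2O3 • 2SiO2) (mullite) + 5SiO2 (amorphous) (about 1050-1100°C)
SiO2 (amorphous) → SiO2 (cristobalite) (above about 1200°C)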
[Note: In the above reactions, the asterisk (*) refers to a spinel-type crystal structure. It does not mean that the compound MgO • Al2O3, spinel, is formed.]
It is the last reaction that causes the problem. If it can be prevented, or retarded, the amount of cristobalite will be held to a minimum. Why the cristobalite forms at temperatures below its temperature of stability is one of the interesting quirks of the crystallization phenomena. Amorphous silica has a random arrangement of silicon and oxygen ions similar to that of silica glass. However, this is a highly unstable state for the 1285°C temperature and the ions will arrange in the most loosely packed crystalline arrangement possible, which is β-cristobalite. Once in this form, the change to another crystalline arrangement is normally slow. Thus, there is a good chance of having some cristobalite present in the fired ware.
At the same time the transformation of the amorphous silica into cristobalite is occurring, another reaction is going on that is removing some of the amorphous quartz which is reacting with the melting fluxes to form the liquid phase that is vitrifying the body. Since the atomic arrangement of amorphous silica is closer to that of the liquid phase than is that of the crystalline quartz, the amorphous silica will be dissolved at a faster rate than the other crystalline forms of quartz. The liquid phase can only dissolve a certain amount of silica, depending upon its overall chemistry. The remaining silica is available for crystalline phase formation. If the liquid phase becomes supersaturated with silica as it cools, there is a high probability that the excess will precipitate out as crystalline cristobalite. A small amount of cristobalite is beneficial towards developing the proper compressive stress in the glaze to prevent crazing and increase the strength of the fired ware; only an excess causes trouble.
Before suggesting remedies for the problem, let's look at how the thermal expansion behavior of the body can cause both the body cracking and the glaze popping problem. Thermal expansion is the change of the length or volume of the solid as the temperature changes. Normally a solid expands when heated and shrinks when cooled. A small thermal expansion change means good thermal shock resistance while a large thermal expansion change means poor thermal shock resistance of the body. Why?
A thermal gradient is established in the ware when it is cooled. The slower the cooling rate, the smaller this gradient is between the hotter inside and the cooler outside of the wall. The cooler outside wall has contracted more than the hotter interior, creating a tension or pulling apart in the outer wall because the interior is trying to keep it from contracting. Tension forces build in the outer surface, and when this force exceeds the tensile strength of the body, which is quite low, a crack begins and the wall breaks. The problem is dealt with by reducing the rate of cooling or by reducing the thermal expansion value so that it is difficult to establish a large differential of length change during cooling. What makes the α to β transformation so troublesome is that no ions have to move and rearrange to produce the change in volume. The change occurs almost instantaneously by changing a bond angle between the silicon and oxygen ions when the transformation temperature is reached. It occurs over a small temperature range in the expansion curve because it takes time for the interior of the ware wall to reach the transformation temperature. The transformations occur upon both heating and cooling as long as the crystalline phase is present. If cristobalite is present in fairly large amounts, the cooling rate must be quite slow between 250°C (482°F) and room temperature to prevent the buildup of excessive tensile forces. Close the kiln door during this range, or at least don't let cooling air blow directly onto the ware. How fast the stoneware body can be cooled depends largely upon the thermal expansion behavior below 250°C (Figure 2 shows a fairly linear rate of expansion above this temperature). This does not mean that the body can be cooled from maturing temperature to 250°C in one hour. Find out the limitations of your ware and also the thermal shock limitations of the refractories in your kiln.
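As a rough, order-of-magnitude illustration of why the amount of cristobalite matters, the tensile stress produced in the cooler outer surface scales with the elastic modulus of the body and the mismatch strain between the transformed surface and the untransformed interior. The short sketch below uses assumed, illustrative numbers only; the modulus, Poisson's ratio, inversion strain, and tensile strength are not measured values from this note:

# Estimate the surface tensile stress when the cristobalite in the outer wall
# has inverted (beta to alpha) while the hotter interior has not.
E = 60e9                  # Young's modulus of a vitrified body, Pa (assumed)
nu = 0.20                 # Poisson's ratio (assumed)
inversion_strain = 0.01   # roughly 1% linear change across the inversion (approximate)
tensile_strength = 30e6   # order-of-magnitude tensile strength of fired stoneware, Pa (assumed)

def surface_stress(cristobalite_fraction):
    """Tensile stress (Pa) in a biaxially constrained surface layer, assuming the
    mismatch strain scales with the volume fraction of cristobalite present."""
    mismatch = cristobalite_fraction * inversion_strain
    return E * mismatch / (1.0 - nu)

for frac in (0.01, 0.03, 0.05, 0.10):
    sigma = surface_stress(frac)
    verdict = "cracking likely" if sigma > tensile_strength else "probably survives"
    print(f"{frac:4.0%} cristobalite -> about {sigma / 1e6:5.1f} MPa ({verdict})")

With these assumed numbers, a few percent of cristobalite already brings the surface stress up to the order of the body's tensile strength, which is consistent with the statement that only an excess of cristobalite causes trouble.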
According to Lawrence (page 126) cracks originating because of the quartz inversion can be distinguished from cracks caused by the cristobalite inversion. Both cracks will be sharp, but because the cristobalite crack occurred at a much lower temperature, its surface will be dull while the surface of the higher temperature crack will be smooth and glossy.
Porous bodies may have a considerable amount of cristobalite present and still have reasonably good thermal shock resistance because the pore structure retards the spread of the crack. The cracks are still there, but they don't grow and cause immediate failure. After many thermal cycles, pieces of the ware will develop cracks of major size. Vitreous ovenware should have a minimum of cristobalite material present as the ware has no porous structure to block crack development and growth.
Glaze popping (dunting) also occurs below 255°C because the body is shrinking much faster than the glaze is shrinking. This difference in the two rates of length change causes a large amount of compressive stress to develop in the thin layer of glaze. The glaze buckles and pops off the surface. The cure is to lower the thermal expansion of the body in the last few hundred degrees of cooling temperature. Raising the thermal expansion of the glaze is another solution but it may create crazing problems that the method of lowering body expansion avoids. A good glaze fit occurs when the thermal expansion of the glaze is just slightly less than the thermal expansion of the body in all temperature regions where the glaze is solidly attached to the body. If given a choice, most potters would rather have the glaze craze than pop off the surface.
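The glaze-fit rule in the preceding paragraph can be stated numerically: the glaze should end up in mild compression, which happens when its mean expansion coefficient is a little lower than the body's. The comparison below is only a sketch, with assumed, illustrative coefficients and an arbitrary margin:

def glaze_fit(alpha_glaze, alpha_body, margin=0.5e-6):
    """Compare mean thermal expansion coefficients (per K) of glaze and body."""
    if alpha_glaze > alpha_body:
        return "crazing risk: glaze contracts more than the body on cooling"
    if alpha_body - alpha_glaze > 3 * margin:
        return "shivering/popping risk: glaze is under too much compression"
    return "good fit: glaze held in mild compression"

print(glaze_fit(6.5e-6, 6.0e-6))  # glaze expansion higher than body -> crazing risk
print(glaze_fit(5.8e-6, 6.0e-6))  # slightly lower -> good fit
print(glaze_fit(3.0e-6, 6.0e-6))  # far lower -> popping risk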
Now that we know that cristobalite is causing the problem, here are some steps that can be taken to minimize or control the amount formed.
The maximum amount of cristobalite formed from the decomposition of the kaolinite appears to be in the temperature range of 1250-1300°C (cone 7-10). Lowering the firing temperature without changing the flux content of the body may increase the porosity and may not properly mature the currently used glaze.
Most stoneware bodies contain brick grog which often has well-developed particles of cristobalite present. Such particles are difficult to dissolve in the liquid phase and will act as growth sites for the transformation of the amorphous quartz into cristobalite and for the precipitation of excess silica in the glass phase as cristobalite during cooling. It is best to use grogs low in cristobalite content, such as calcined kyanite or synthetic mullite (3Al2O3 • 2SiO2). Some commercial suppliers are using such grogs in their stoneware. Brick grog is often made by crushing scrap bricks; hence, the quality may vary widely from source to source.
Another method of reducing the cristobalite content is to dissolve the amorphous silica material in the high temperature liquid phase. If the body has five percent potash feldspar present, increase it to ten percent. This will increase the glass content and probably allow a lowering of the firing temperature. It may remove some of the iron color. If iron color is important, try adding three to five percent calcium fluoride (minus 200 mesh) in addition to the flux already present. This flux tends to retain the iron spots and there is a small amount of iron in natural calcium fluoride.
When the flux content is increased, the amount of liquid phase also increases at the maturing temperature, which may cause slumping. Always test fire small quantities of the body before preparing large amounts.
Beware of using fluxes high in sodium oxide content. Sodium oxide may accelerate the rate of conversion of amorphous quartz into cristobalite. Some claim that there is no difference between the use of soda spar and potash spar in stoneware bodies. It is best to use only the potash spar.
Why does reduction firing cause more cristobalite to be formed in some bodies than others? The amount of iron oxide (Fe2O3) is often the reason. Ferric oxide (Fe2O3) is reduced to ferrous oxide (FeO) in the reduction firing. The ferrous oxide melts and penetrates into the amorphous silica structure and promotes the formation of cristobalite. Such a compound is called a mineralizer. Thermal expansion measurements of all iron-containing stoneware bodies showed that reduction firing increased the amount of cristobalite present.
Keep the iron content as low as possible if you fully reduce the ware. If iron spots are important in the glaze, try to formulate the glaze so that the spot material is in the glaze and not derived from the contact of the glaze and the body.
Fire under mild oxidation atmosphere and reduce only as necessary to produce the glaze effect needed. It is not necessary to have the body saturated with black carbon into the center of the wall. This does nothing beneficial for the body. A reduction firing should not belch black smoke, as only the carbon monoxide present is affecting the reduction process.
The longer the body is held at peak temperature the more time is available for rearrangement of the silicon and oxygen atoms into the cristobalite structure. This is somewhat balanced by the glass-forming reaction which dissolves more quartz over a longer time period. However, the latter may saturate the glass composition and cause cristobalite to precipitate during cooling. Experiment and find out the best soaking time for the body.
This is a difficult question to answer as the average potter has no way to measure the quantity present. The best suggestion is to give the fired stoneware a mild thermal shock by removing it from the kiln at 250°C (482°F) and allowing it to rapidly cool to room temperature in a static air condition (no wind blowing across the piece). If it doesn't break, the amount of cristobalite present is not excessive. Figure 2 shows the thermal expansion for two stoneware bodies. The cracking stoneware body shows a rapid increase in thermal expansion between 100 and 200°C. It is this rapid expansion that is cracking the body. The non-cracking body is a similar composition that has been modified by replacing the brick grog with calcined kyanite grog and increasing the potash spar from 5 to 10 parts by weight. Its thermal expansion curve increases at a much slower rate. Both curves show a small inversion just below 600°C due to the α-β quartz transformation, showing that some of the quartz has not been dissolved in the liquid phase nor has it been transformed to cristobalite.
A stoneware body that can tolerate moderate thermal shock will have an expansion curve similar to the non-cracking curve. For non-ovenware items the amount of cristobalite can be increased. Remember that small amounts of cristobalite are not bad as they provide some compression in the glaze and reduce the tendency for glazes to craze.
For a good summary of applied science and the potter I suggest reading CERAMIC SCIENCE FOR THE POTTER by W. G. Lawrence, published in 1972 by the Chilton Book Company, Philadelphia, Pa.
The pros of democracy
Democracy advocates for equality
Whether it be equal access to resources such as education or medical care, or equal representation and voting rights, or equal opportunities to improve oneself and raise one's social and economic standing, democracy advocates for equality.
Democracy was first conceived in ancient Greece by the leader of Athens, Cleisthenes, as a way in which the people could rule instead of being ruled. Equality was at the heart of the philosophy of democracy, and a new form of government took shape which would forever change the world. It was seen as the fairest and most ethical way to govern a nation and its people. Over the centuries, democracy has become widespread among developed nations, promoted both peacefully and by force. But today the debate rages as to whether or not democracy is really the golden form of government that it has long been made out to be. Does democracy really work? Do the pros outweigh the cons, or is democracy an inefficient and fallible system? With the tumultuous 2020 U.S. presidential election nearing the boiling point, this question is being asked now more than ever: What are the pros and cons of democracy?
Democracy embraces the idea that people of all races, genders, creeds, socioeconomic standing, etc. should be equal. This means equal access to resources such as health care and education, equal representation and voting rights, and equal opportunity for upwards mobility. Equal representation is a must in a democratic society. Every person has the same voting rights, and every person's vote counts the same way. That means every person has a voice, which gives them the power to elect politicians who will make positive changes in their neighborhoods, schools, and even the country as a whole. People who are otherwise oppressed can vote a person out of office who does not have their best interest at heart, and can vote for policies that will improve their quality of life and end their oppression. Equal education also means equal opportunity. Under a democratic government, all students would be given access to the same quality of instruction. This ensures that every person is well-equipped to survive and thrive in an adult world by the time they graduate from school. It paves the way for upwards mobility for people who were born into poverty; they can get high-paying jobs and not have to live from paycheck to paycheck. In a democracy, no one is doomed to live their entire lives under the status they were born into.
Democracy might advocate that every vote is equal, but the reality is that every vote is not equal. People's opinions are shaped by their differing environments; how educated they are, whether or not they are on the right side of the law, etc. If, for example, a person is uneducated or does not follow the news, then they would be very hard-pressed to make an objective, rational, and informed decision, because they simply do not have all the facts. Should their vote count just as much as that of a person who has been well-educated on current issues and on how voting works? Wouldn't it just skew the outcome of an election and not be a true representation of what the population really wants? Couldn't it even allow for uneducated voters to be taken advantage of and manipulated into voting for someone who doesn't truly represent what they want? And in regards to equal education and opportunities, these simply do not exist. Schools are notoriously unequally funded, because they rely on taxpayers' money--and poor neighborhoods generate less revenue to support their schools, while rich neighborhoods have most everything they could ever need. There is no opportunity for equal mobility, because the under-funded education system in poor neighborhoods fails to deliver. Kids from poor neighborhoods start out at a disadvantage in achieving their goals, starting careers, and moving up and out of poverty. If democracy actually worked, this gross inequality would never be allowed. And by allowing inequality to persist in the quality of education the population receives, we are guaranteeing that votes will remain unequal, and that election results will continue to be skewed and voters abused.
- [P1] In a democracy, every vote counts equally.
- [P2] In a democracy, everyone receives equal access to education.
- [P3] Democracy works because everyone is given equal opportunities.
Rejecting the premises
- [Rejecting P1] Every vote does not count equally.
- [Rejecting P2] Everyone is not guaranteed the same quality of education.
- [Rejecting P3] Democracy doesn't work because not every citizen is given equal opportunities.
NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) spacecraft has spotted a never-before-seen comet — its first such discovery since coming out of hibernation late last year.
“We are so pleased to have discovered this frozen visitor from the outermost reaches of our solar system,” said Amy Mainzer, the mission’s principal investigator from NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “This comet is a weirdo – it is in a retrograde orbit, meaning that it orbits the sun in the opposite sense from Earth and the other planets.”
Officially named “C/2014 C3 (NEOWISE)”, the first comet discovery of the renewed mission came on Feb. 14 when the comet was about 143 million miles (230 million kilometers) from Earth. Although the comet’s orbit is still a bit uncertain, it appears to have arrived from its most distant point in the region of the outer planets. The mission’s sophisticated software picked out the moving object against a background of stationary stars. As NEOWISE circled Earth, scanning the sky, it observed the comet six times over half a day before the object moved out of its view. The discovery was confirmed by the Minor Planet Center, Cambridge, Mass., when follow-up observations were received three days later from the Near Earth Object Observation project Spacewatch, Tucson, Ariz. Other follow-up observations were then quickly received. While this is the first comet NEOWISE has discovered since coming out of hibernation, the spacecraft is credited with the discovery of 21 other comets during its primary mission.
Originally called the Wide-field Infrared Survey Explorer (WISE), the spacecraft was shut down in 2011 after its primary mission was completed. In September 2013, it was reactivated, renamed NEOWISE and assigned a new mission to assist NASA’s efforts to identify the population of potentially hazardous near-Earth objects. NEOWISE will also characterize previously known asteroids and comets to better understand their sizes and compositions.
September is Cholesterol Education month. While most people know that high cholesterol is a problem, they may not know much about the condition and how it affects the body. Knowing more about high cholesterol can help family caregivers to take better care of their aging relatives.
Cholesterol exists in every cell of the body. It is usually described as “waxy” and “fat-like.” It’s a necessary substance that the body uses in hormone and vitamin D production. It is also involved in digestion.
There are three kinds of cholesterol:
-High-Density Lipoprotein (HDL): HDL cholesterol is the good kind. It picks up excess cholesterol and delivers it back to the liver. The liver then eliminates the cholesterol from the body.
-Low-Density Lipoprotein (LDL): LDL is the bad kind of cholesterol. It combines with other substances in the body to create plaque. Plaque builds up in the arteries, causing atherosclerosis, which can lead to blockages in the arteries.
-Very Low-Density Lipoprotein (VLDL): VLDL is another kind of bad cholesterol. It is also involved in creating plaque. However, it differs from LDL because it transports triglycerides, a kind of fat, rather than cholesterol.
The body produces as much cholesterol as it needs. However, cholesterol is also present in many foods, like eggs, meat, and dairy products.
Causes of High Cholesterol
Most cases of high cholesterol are caused by leading an unhealthy lifestyle. Causes of high cholesterol include:
-Unhealthy Diet: Eating a diet that contains too much of certain kinds of fats contributes to high cholesterol. Saturated fats should be limited. They are found in some kinds of meat, dairy, processed foods, baked goods, and some other foods. Trans fats, which are found in processed and fried foods, should be avoided as much as possible.
-Not Exercising: Being inactive decreases the amount of good cholesterol in the blood.
-Smoking: Cigarette smoking both lowers good cholesterol and raises bad cholesterol.
-For some people, genetics can also play a role. Familial hypercholesterolemia is a condition that is passed down through generations. There are also some medications that can increase blood cholesterol levels.
If your aging relative has high cholesterol, senior care can help them to manage the condition. Some older adults consume an unhealthy diet because they have difficulty cooking for themselves or because they don’t enjoy cooking. Senior care providers can plan and cook meals that contain lots of healthy fruits, vegetables, whole grains, and lean proteins.
Senior care providers can also encourage more physical activity by inviting older adults to go for walks with them or to be involved in household tasks. Or, if your family member would enjoy a group fitness class, a senior care provider can drive them to the class.
STUDENTS’ LISTENING SKILL AT THE ELEVENTH GRADE: ART ANALYSIS STUDY
Keywords: Listening, Listening skill, Moral value, Song
This study aims to describe students' listening skills. The research was conducted at SMK Maitreyawira Tanjungpinang in the 2020/2021 academic year. Two 11th-grade classes with a total of 35 students were selected as research subjects. The study uses a descriptive qualitative method in which the researcher describes the results of the data that has been obtained, arranging it into a clear explanation. A song was used as the test, together with a single question. The results indicate that the 11th-grade students of SMK Maitreyawira have good listening skills, as shown by the test results and the data analysis. Based on these results, the researcher suggests that teachers use songs as a teaching technique to better build students' listening skills.
Stereochemistry is concerned with the depiction of molecules in three dimensions. In biological systems, this has broader implications. Many medications, for instance, consist of a single stereoisomer of a molecule. One stereoisomer may have beneficial effects on the body while the other may have none, or may even be poisonous.
Stereochemistry, sometimes known as the "chemistry of space," is the study of how atoms and groups are arranged spatially within molecules. Stereochemistry primarily concentrates on stereoisomers. This area of chemistry is frequently referred to as 3-D chemistry. “The isomerism that is generated by the non-similar configurations of atoms or functional groups bonded to an atom in space" is known as stereoisomerism.
Types of Representations of Molecules in 3d
Different types of representations can be used to depict the three-dimensional (3-D) structure of organic molecules on paper. There are mainly four types of representations of organic molecules in three dimensions:
Wedge Dash Representation
The wedge-dash projection is the most popular way of depicting a three-dimensional molecule on a two-dimensional surface (paper). This type of representation is typically used for molecules with chiral centres and employs three different types of lines.
Solid wedges or thick lines signify bond projections towards the observer or above the paper's surface. A continuous or regular line denotes a bond in the plane of the paper. A bond projection away from the observer is indicated by a dashed or broken line.
Wedge dash representation
In a Fischer projection, the compound's carbon chain is drawn vertically, with the most oxidised carbon at the top (as defined by the nomenclature rule). The chiral carbon atom is usually not drawn explicitly because it lies in the plane of the paper; the asymmetric carbon is represented by the intersection of the cross lines. The chiral carbon's horizontal bonds are taken to be above the plane of the paper, pointing towards the observer. The chiral carbon's vertical bonds are taken to be below the plane of the paper, pointing away from the observer.
Keeping the preceding points in mind can assist the observer in converting the wedge dash structure to the Fischer projection and determining the similarity between the Fischer and the wedge dash.
Wedge dash to Fischer
The sawhorse formula shows how all the atoms or groups are arranged on two adjacent carbon atoms. For the sake of clarity, the bond connecting the two carbon atoms is drawn diagonally and somewhat elongated. As in ethane, the lower left-hand carbon is taken to be the front carbon, towards the observer, and the upper right-hand carbon is taken to be the back carbon, away from the observer. In the sawhorse formula, all parallel bonds are eclipsed while all antiparallel bonds are staggered. In a gauche representation, the bulky groups are closer to one another, at an angle of 60°.
The Newman projection, devised by Newman, is a very straightforward technique for displaying three-dimensional structures on two-dimensional paper. In this projection, the molecule is viewed from the front, along the axis of a carbon-carbon bond. A dot denotes the carbon atom closer to the eye, and a circle denotes the carbon atom farther away. The three atoms or groups on each carbon are drawn at 120° to one another, attached to the dot (front carbon) or the circle (back carbon). In the Newman projection, all parallel bonds are eclipsed and antiparallel (opposite) bonds are staggered.
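Since the sawhorse and Newman drawings above are distinguished mainly by whether bonds are eclipsed, gauche or anti, a small helper can make that vocabulary concrete. This is an illustrative sketch only; the angle thresholds are rough conventions, not values taken from the text:

def classify_conformation(dihedral_deg):
    """Rough conformation label for the torsion angle (degrees) between two
    reference groups on the front and back carbons of a Newman projection."""
    angle = abs(dihedral_deg) % 360
    angle = min(angle, 360 - angle)   # fold into the 0-180 degree range
    if angle < 30:
        return "eclipsed (about 0 degrees)"
    if angle < 90:
        return "gauche, staggered (about 60 degrees)"
    if angle < 150:
        return "eclipsed (about 120 degrees)"
    return "anti, staggered (about 180 degrees)"

for a in (0, 60, 120, 180):
    print(a, "->", classify_conformation(a))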
Wedge Dash to Newman Projection
The image shows that the front carbon is represented by a dot in the Newman projection. By aligning the eye and specifying the view, one can easily convert wedge dash into Newman projection.
Wedge dash to Newman projection
R S Configuration in Wedge Dash
The priority is determined by the atomic number of each of the four atoms directly connected to the chiral carbon in the event that they are all different. Priority is given to the atom with the highest atomic number. If there are two or more isotopes of the same element, the isotope with the higher mass is given priority.
If two or more of the atoms directly bound to the chiral carbon are identical, the atomic number of the next atom along each chain is used to determine priority. If these atoms likewise have identical atoms linked to them, priority is established at the first point of difference along the chain. If a double or triple bond is involved, the doubly or triply bonded atom is counted twice or three times, respectively. Hence, the first step is to assign priorities to the groups.
When the priority order (1 →2→ 3) is clockwise, and the group with the lowest priority is away from the observer (i.e., connected by a dashed line), the arrangement is designated as R. The configuration is S if the priority order is anticlockwise.
The picture shows R configuration as H is away from the observer and 1 →2→ 3 is clockwise.
If the group with the lowest priority is not away from the observer, exchange two groups so that the dashed line joins the lowest-priority group. The configuration is then designated as S if the sequence is clockwise and R if it is anticlockwise, because by switching two groups you have drawn, and are now assigning, the enantiomer of the original molecule.
Finding RS configuration
In the above image, the first structure has an S configuration due to the anticlockwise arrangement of the groups taken in priority order. However, in the third structure, where two groups have been interchanged, the drawn configuration changes from S to R, which corresponds to the enantiomer of the original.
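The two rules just described - read the 1 → 2 → 3 sense with the lowest-priority group pointing away, and invert the answer once for every pair of groups you swapped to get it there - can be written out as a short sketch. The function below is illustrative only; it simply encodes those rules and does not derive priorities from a structure:

def assign_configuration(order_is_clockwise, lowest_priority_away, swaps_done=0):
    """Return the R/S descriptor from a wedge-dash drawing.

    order_is_clockwise   -- True if priorities 1 -> 2 -> 3 run clockwise as drawn
    lowest_priority_away -- True if the lowest-priority group sits on the dashed bond
    swaps_done           -- pairwise group swaps used to push the lowest-priority
                            group onto the dashed bond; each swap gives the enantiomer
    """
    if not lowest_priority_away:
        raise ValueError("redraw (swap two groups) so the lowest priority points away")
    descriptor = "R" if order_is_clockwise else "S"
    if swaps_done % 2 == 1:   # odd number of swaps -> undo the inversion
        descriptor = "S" if descriptor == "R" else "R"
    return descriptor

# Matching the example above: after one swap the drawing reads clockwise (R),
# so the original molecule is S.
print(assign_configuration(True, True, swaps_done=1))   # prints "S"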
Stereochemistry is concerned with the depiction of molecules in three dimensions. In biological systems, this has broader implications. Different representations can be used to depict the three-dimensional (3-D) structure of organic molecules on paper. There are mainly four types of representations of organic molecules in three dimensions: Fischer Projection, Sawhorse, Newman Projection, and Wedge Dash. We discussed what each conversion looks like. We have also seen how to convert wedge dash to Fischer and Fischer projection to wedge dash. Assigning priority is important before determining the configurations of molecules in wedge dash representation. Furthermore, an R or S configuration can be found by applying the given rules.
Children and Addiction
Unfortunately, young people can be especially vulnerable to addiction, as they are less experienced and more prone to societal pressures. Whether they get access to alcohol or drugs in the home or fall in with the wrong group of friends, the opportunities for children to be offered substances or be in an environment where substance abuse occurs are significant.
Battling addiction and ready for treatment?
Children Affected by Addiction Issues
Addiction is a chronic disease affected by genetics, environment, life experiences, and brain circuits. People suffering from addiction will engage in compulsive behaviors or use substances and persist despite damaging consequences.
While addiction is treatable, many addicts successfully hide their substance abuse, making it difficult to catch early if loved ones don’t know the signs. In addition, addiction directly affects behavior, so it’s common for addicts to become defensive and argumentative when confronted about their substance abuse.
These adverse reactions are sometimes not intentional and are knee-jerk reactions to protect themselves against judgment and the loss of freedom to continue using. Such behavior may be even more severe in children and teens in the early stages of emotional development.
Risk Factors for the Development of Addictions in Children
From birth to the late 20s, human brains and bodies continue to grow and develop. Internally, significant changes occur on a physical and mental level through adolescence. Therefore, it’s not surprising that the introduction of and dependence on substances can seriously disrupt this growth and cause damage.
The risk factors present for children with addiction can be dire, so it’s vital to be aware of these hazards. This section will discuss the risks and factors to consider if you suspect your child is abusing substances.
According to the National Institute of Mental Health, about half of individuals who experience a substance use disorder during their lives will also experience a co-occurring mental disorder and vice versa. As mental illness can present itself in adolescence, it’s crucial to be aware of co-occurring disorders to treat each individually.
In addition, according to data gathered by the National Institute of Drug Abuse (NIDA), drug use typically starts in adolescence, a period when the first signs of mental illness commonly appear.
However, as mentioned previously, essential functions in the brain continue developing until the mid to late 20s. Because of this ongoing development, adolescents don’t yet have control over executive functions like decision-making and impulse control, which can worsen their substance abuse.
Co-occurring disorders are mental illnesses that coincide with substance use disorders. Comorbidity is another term often used, although it typically describes neurological or physical illnesses present with addiction.
The National Institute of Mental Health found that the most common co-occurring disorders include:
- Anxiety disorders
- Post Traumatic Stress Disorder (PTSD)
- Attention-deficit hyperactivity disorder (ADHD)
- Bipolar disorder
- Personality disorders
Based on research from the National Institute on Drug Abuse (NIDA), over 60% of adolescents in community-based treatment programs for substance use disorder also meet diagnostic criteria for another mental illness.
This surprising percentage highlights the importance of early diagnosis, as research has shown that catching mental illness earlier may decrease the likelihood of future substance use disorder co-occurring.
Early education is critical if there is a history of addiction in the family. While having a genetic predisposition to addiction can put someone at higher risk of becoming addicted, this doesn’t guarantee they will develop an alcohol or drug addiction.
Parental addiction can also majorly impact whether or not children end up abusing alcohol or drugs. According to NIDA, children of addicts are at a greater risk of developing a substance abuse issue, especially when they live with the addicted parent or caregiver.
According to the Substance Abuse and Mental Health Services Administration (SAMHSA), 1 in 8 children lives in a home with at least one adult with a substance use disorder.
Many parents worry about their children falling in with the “wrong crowd” or giving in to peer pressure. Because their impulse control and judgment are not fully developed, you should be aware of the types of people your child is listening to or even admiring.
According to NIDA, children often give in to peer pressure due to a desire to impress their peers. If the child is exposed to drugs and alcohol through a parent or admired peer, they may feel tempted to try the substance to feel older or cooler.
Being aware of who your children are spending time with and reinforcing the dangers of substances when exposed to them through outside influences is the first and best line of defense.
Drug Use and Addiction Statistics in Children
Many addiction studies center on adult substance abuse and child neglect. Fortunately, more research is identifying what ages children begin taking substances to understand the reality of alcohol and drug use in minors.
Adolescents (Children 12 and Under)
While alcohol and drug addiction in children under age 12 are relatively rare, it still happens.
- According to the National Survey on Drug Use and Health, some children are already abusing drugs by age 12 or 13, which indicates some may begin even earlier.
- Based on data from the National Center for Drug Abuse Statistics, 21.3% of 8th graders have tried illicit drugs at least once.
- Early abuse includes drugs like tobacco, alcohol, inhalants, marijuana, and psychotherapeutic medications like benzodiazepines, stimulants, and sedatives.
Underage Teens (Children 13 to 17)
As children develop into teenagers, societal pressures increase, and they are more likely to be exposed to substances. Some teens may be tempted to act older than they are, whether through peers or media, or may even use substances to cope with trauma or undiagnosed issues.
- In 2011, the WHO (World Health Organization), UNESCO, and UNICEF reported a clear rise in the number of high school students (ages 14 to 18) who had used drugs; 9.5 percent of youths reported having used drugs in 2007.
- Meanwhile, according to the National Council on Alcoholism and Drug Dependence, an estimated 20 million Americans aged 12 or older (8% of the population) have used an illegal drug in the past 30 days.
- While young people don’t drink as often as adults, they consume over 90% of their alcoholic beverages through binge drinking.
Talking to Your Kids About Drugs and Alcohol
The good news is that talking to your children about drugs and alcohol in an honest, loving way can help. The National Council on Alcoholism and Drug Dependence shows that kids who learn about the risks of alcohol and drugs from their parents are up to 50% less likely to use.
Many parents struggle to know where to begin and avoid the topic altogether out of awkwardness. Unfortunately, the lack of clarity and education can leave children woefully misinformed and put them in real peril later. However, some ways to open the discussion feel more comfortable and natural.
Prevention Starts With a Conversation
While some parents may feel their child is too young to learn about drugs and alcohol, waiting until a certain age can do more harm. Many prevention programs begin as early as preschool, with age-appropriate conversations about drugs and alcohol. It’s never too early to start talking to your child about substance abuse.
Small children can quickly learn about substances from frequent advertisements or overhearing conversations. Learning the truth about drugs and alcohol from a responsible parent is always preferable to an unreliable, potentially predatory source like a commercial.
Starting the conversation early also destigmatizes these discussions in your household. If your child understands that it’s okay to have honest discussions about substances early on, they’re more likely to come to you in the future if they’re being pressured by friends or have begun using and need help.
It’s a Discussion, Not a Lecture
Many parents today will remember the anti-drug commercials and ad campaigns that filled media and anti-drug speakers who gave school talks. These lectures aren’t as effective in the home, as they often come across as preachy and are more likely to earn an eye roll.
Instead, turn these teaching moments into open discussions where you welcome questions in a safe, shame-free environment. Ensuring you have a non-emotional, informative approach will increase your kid’s likelihood of actually listening rather than tuning you out.
Create a Zone of Trust
Because young brains are still developing, the fear of getting in trouble or losing privileges can impact their judgment. If your child feels they can’t come to you with complex subjects, it will be much harder to address substance abuse in the future.
As mentioned before, creating an open environment that destigmatizes discussions about drugs and alcohol is essential. If your child knows they can talk to you about substance abuse without getting yelled at, they are more likely to come to you if they make a mistake.
Handle Problems With Love and Grace
Discovering your child is using drugs or alcohol can be incredibly upsetting, especially if they’ve been hiding it for some time. While you have every right to be upset, it’s important to remember that flying off the handle seldom resolves issues. Blowing up at them is more likely to start an argument and unintentionally teach them that their struggles are only deserving of punishment.
That isn’t to say you shouldn’t discipline your child, but remember that your child is not trying to hurt you intentionally. Instead, they struggle and need your help, love, and support.
Find ways to set boundaries and discipline their behavior in a fair and supportive way. The bottom line is that you love and want the best for them, and if you lead with that love, you can set your child up for success from the beginning.
Is My Child Abusing Drugs?
Adolescence and puberty can be turbulent for children, making it challenging to notice changes that could indicate substance abuse.
Many symptoms overlap—mood swings are common during puberty but can also indicate addiction. This distinction is necessary, as accusing your child of abuse that isn’t taking place can be just as damaging. You should also consider other elements (e.g., puberty, changing schools, etc.) when taking note of behavioral changes.
The following section will outline the most common warning signs when children abuse drugs and alcohol, but this is not a definitive list.
Physical Warning Signs of Drug Abuse
Common physical warning signs of substance abuse or addiction may include:
- Lack of hygiene—not taking showers, changing clothes, or brushing their teeth
- Memory lapses and poor concentration
- Lack of coordination or slurred speech
- Unexplained weight loss
- Red or watery eyes, pupils larger or smaller than normal, blank stare
- Sniffing constantly
- Excessive sweating, tremors, or shakes
- Cold, sweaty palms or shaking hands
- Unexplained nausea or vomiting
Behavioral Warning Signs
Common behavioral warning signs of substance abuse or addiction may include:
- Sudden and sustained emotional changes
- Chronic lying
- Missing important appointments
- Losing interest in favorite things
- Being tired and sad
- Extreme changes in eating habits
- Being very energetic, talking fast, or saying things that don’t make sense
Other Warning Signs of Addiction
Additional warning signs of substance abuse or addiction may include:
- Dramatic changes in friendships or having a sudden new friend group
- Having problems at school—missing class, getting bad grades
- Having issues in personal or family relationships
- Asking for money or stealing money
Drugs and Paraphernalia
Locating and identifying drug paraphernalia is critical in determining if your child is abusing substances. Some items may look like everyday objects, and others may be obvious.
Some common examples of drug paraphernalia include:
- Loose pills
- Pipes (metal, wooden, acrylic, glass, stone, plastic, or ceramic)
- Water pipes
- Roach clips
- Miniature spoons
- Chillums (cone-shaped marijuana/hash pipes)
- Cigarette papers
Behavioral Addiction in Children
While the main focus here has been on drugs and alcohol, these are not the only addictions children face. Behavioral addictions function similarly to substance addictions by activating specific brain transmitters.
Common behavioral addictions in children include:
- Gaming addiction
- Internet addiction
- Social media addiction
- Food addiction
These issues can also co-occur with other substance issues and mental illnesses. Although they don’t have the same risk of physical dependence as drugs and alcohol, the same adverse mental effects can occur.
Addiction Treatment for Children
Treatment options available for children and teens struggling with addiction are similar to those for adults—medical detox, inpatient rehab, outpatient rehab, and psychotherapy. There also may be programs in your area specifically targeted at younger addicts, so don’t hesitate to ask a medical professional if those options are available.
Medical detox is crucial if your child has been taking certain substances that need to be flushed from their system before treatment can begin. During detox, a licensed medical professional will monitor your child’s vitals over a period of time or wean them off the drugs through a process known as tapering.
Some substances can include uncomfortable or dangerous side effects when stopped, so detoxing in a medically controlled environment can ensure your child’s safety.
Inpatient programs involve the child living at a treatment facility for a certain amount of time, receiving one-on-one and group therapy, and participating in controlled activities. This option may be ideal because it puts the child in a controlled environment without access to substances.
Outpatient programs typically come in two approaches—Intensive Outpatient Programs (IOP) and Partial Hospitalization Programs (PHP). In an IOP, your child still lives at home but comes to the treatment center for a certain amount of hours a week, where they will receive individual and group therapy, as well as life skills practice and medication management if needed.
On the other hand, a PHP gives your child access to the more intensive offerings of an inpatient program but without a considerable time commitment. PHPs are generally considered a step up from IOPs regarding the level of care. Your child will have access to the same treatments and more involved services like medical detox but can still go home at the end of the day.
Your primary healthcare provider or an addiction counselor can help you determine which level of rehab is the best fit for your child and their situation.
Additional Approaches to Addiction Treatment in Children
Aside from the treatment options previously discussed, other measures may help treat your child’s substance abuse. Removing your child from that friend group and even changing schools may be essential if their friends are also engaging in substance abuse. Monitoring and limiting their internet and phone usage may also be necessary.
Other options may include:
- A sober companion: This trained professional listens to your child’s concerns, provides healthy companionship, formulates sober-minded plans, and provides feedback on their progress.
- Ongoing therapy: Therapy can help your child improve their mental and behavioral health. A therapist can allow your child to build important skills, such as improving confidence, increasing self-control, and learning healthy ways to practice self-care to enhance their overall well-being.
Getting Help for Your Child’s Drug Use
Finding treatment for your child’s substance abuse is never easy, but thankfully there are tools available to help you find the best options for a life free of substance abuse. SAMHSA’s online treatment locator is a fantastic tool at https://findtreatment.gov or by calling 1-800-662-4357.
You can also speak with your child’s pediatrician or community addiction center to see the available treatment options. You are not alone in this fight for your child’s sobriety, so don’t hesitate to reach out to these resources. They can help you locate the best treatment centers for your child and also help with insurance questions.
FAQs About Children and Addiction
What causes addiction in children?
Often children will engage in substances to fit in and impress their peers. They may also use drugs and alcohol to cope with stress or trauma. Others may pursue substances to experiment or thrill-seek.
Understanding these motivations is essential when considering the best treatment, as the underlying cause should always be considered and treated.
How do you know if your child has an addiction?
Look for physical and behavioral warning signs, and consider the people in your child’s friend group. Some indicators of substance abuse include:
- Slurred speech
- Impaired coordination
- Poor hygiene
- Extreme mood swings
- Sudden changes in their friend group
- Poor school performance
How is addiction defined?
Standard indicators of addiction include physical dependence and compulsive substance use regardless of negative consequences.
Substance abuse is hazardous for children because their brains are still in critical stages of development and can experience severe damage from substance abuse.
What are the consequences of addiction in children?
Drug and alcohol use can interfere with developmental processes occurring in the brain, primarily affecting their decision-making. Substance use makes them more susceptible to risk-taking behavior, such as unsafe sex and dangerous driving.
In addition, using substances from a young age can contribute to adult health problems, such as heart disease, high blood pressure, and sleep disorders.
How do you treat addiction in children?
Several treatment options are available to children, including medical detox, inpatient rehab, outpatient rehab, and psychotherapy. A licensed physician can evaluate your child’s situation and give you the best recommendation of treatment options.
How can you help your child overcome addiction?
Support and love are key to addressing your child’s addiction.
While putting them in treatment can be difficult and sometimes even damaging to your relationship, operating from a place of love and reminding them of this is essential through every stage of recovery. Showing your child that you’re not the bad guy and genuinely want the best for them is a healthy approach to have with them.
|This article is about an older version of DF.|
Dwarf Fortress features some pretty complex behavior in an attempt to simulate fluid mechanics. One aspect of this behavior is seen in the form of pressure. The basic idea here is quite simple - certain forms of fluids movement exert pressure, causing them to potentially move upwards into other areas.
Contrary to what many people may believe, pressure is not a property of a body of liquid. Pressure is simply one of 3 rules by which liquids can be moved - the others are "falling" (when the tile beneath contains less than 7/7 of liquid) and "spreading out" (when the liquid levels of two adjacent tiles are averaged, possibly pushing items around).
The following types of liquid movement follow the rules of pressure:
- Water falling downward into more water
- River/brook source tiles (whether the map edge or the "delta" where the river itself begins) generating water
- Screw pumps moving water or magma
When a liquid is moved (or created) with pressure, it attempts to locate the nearest tile on the same Z-level as its destination tile (for falling water, this is 1 Z-level beneath its original location) by moving north, south, east, west, down, or up. As it tries to locate an appropriate destination, the liquid will first only try to move sideways and downward - only when this fails will it attempt to move upward. Pressure will not propagate through diagonal gaps.
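One way to picture this rule is as a breadth-first search over orthogonally adjacent open tiles. The sketch below is illustrative only (it is not the game's actual code): it first searches using sideways and downward moves, and only repeats the search with upward moves allowed when that fails, which also makes it clear why a diagonal-only gap stops pressure.

from collections import deque

def find_pressure_destination(grid, start, target_z):
    """grid maps (x, y, z) -> liquid level 0-7 for open tiles; missing keys are solid.
    Returns the nearest non-full tile on target_z reachable by orthogonal moves."""
    def search(allow_up):
        moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, -1)]
        if allow_up:
            moves.append((0, 0, 1))
        seen, queue = {start}, deque([start])
        while queue:
            x, y, z = queue.popleft()
            if z == target_z and grid.get((x, y, z), 7) < 7:
                return (x, y, z)                      # nearest tile that can take liquid
            for dx, dy, dz in moves:
                nxt = (x + dx, y + dy, z + dz)
                if nxt in grid and nxt not in seen:   # solid tiles block the search
                    seen.add(nxt)
                    queue.append(nxt)
        return None

    # Sideways/down first; allow upward movement only if that search fails.
    return search(allow_up=False) or search(allow_up=True)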
A demonstration of pressure using U-Bends
A U-Bend is a channel that digs down, and curves back up. With pressure a fluid will be pushed up the other side of the u-bend. By understanding how pressure works in a u-bend you should be able to adapt this knowledge to use fluids in any configuration you desire without any unexpected surprises that could make life in your fortress more fun than anticipated. Water and magma both behave very differently with regards to pressure, so read carefully.
Water in a U-Bend
The following three diagrams demonstrate different ways water might behave in a u-bend. In all three cases, the water source is on the left side of the diagram and water is filling the area to the right. In the first example (Diagram A), we have water taken directly from a (flat) river used to fill a u-bend. In this case, the river is free to flow off the edge of the map, so the only pressure comes from the water tile on the top of the left side (highlighted in green) falling downward (into the tile highlighted in red), so the water on the right side stops one level below the river itself.
In the next example (Diagram B), a dam has been placed, preventing the river from flowing off the edge of the map. In this case, the pressure exerted by the river source (highlighted in red) allows the water to fill up the remaining level of the u-bend. Use caution when placing a dam on your river.
The final example (Diagram C), demonstrates how a screw pump exerts pressure - in this case, the water fills up to the same level as the pump's output tile (highlighted in red).
With these three simple examples, you should be ready to go build your enormous plumbing masterpiece, and be relatively safe from any unanticipated flooding. If you plan to work with magma as well however, you should read further.
[Diagrams: A (Undammed River), B (Dammed River), C (Screw Pump)]
Magma in a U-bend
Magma does not exert pressure when it falls downward. In our first magma example (Diagram A) we show how this works by creating a short u-bend and connecting it up to a magma pipe - it simply fills the lowest point and makes no further attempt to go back up.
In the second diagram (Diagram B) we see how with the addition of a single screw pump, the entire situation changes dramatically - when the screw pump moves magma to the right side, it does so using the rules of pressure and allows the area to fill up to the level of the pump. Accidentally flooding your fortress with magma is considerably more fun than a flood of water.
[Side-view diagrams: A (magma pipe), B (screw pump). Legend: %% = pump, ≈ = magma, ▒ = solid ground]
Pressure is a lazy model, but it will always behave as described above. For example, consider a system on z0 that receives water from a cistern on z3 in amounts of ~3/tick. This system consists of a tree of passages, one tile wide, and contains 'underpasses' on z-1. Water will flow into the system to a depth of 7 before coming up on the other side of the first underpass, as expected. However, if faced with two underpasses, it will choose the nearest one and fill the entire system on the other side of that underpass to a depth of 7 before filling the system on the other side of the far underpass. Similarly, if faced with multiple exits from the system, the whole flow will leave through one exit, the nearest lowest one.[Verify]
Waterfalls are of special concern. When drawing water from a waterfall it is important to understand that, since the water is falling on top of the river's surface, the pressure exerted when it falls down into the river will permit it to pass through U-bends that would normally not be filled when using a flat undammed river - if you tap into a river below a waterfall just as you would above it, you could very easily flood your fortress.
There are two methods for neutralizing fluid pressure: diagonal connections and screw pumps. Knowing how to manipulate pressure as needed lets you move fluids quickly wherever you wish in your fortress, allowing you to build things a dwarf can be proud of.
Liquids moving via pressure can only move to orthogonally adjacent tiles. When faced with a diagonal gap, pressure will fail to move the liquid, forcing the liquid to instead spread out. By forcing fluids through a diagonal connection you can prevent pressure from propagating past a certain point.
This does not work on a vertical basis - water only travels straight up and down to different Z-levels, never diagonally.
If you wish to maintain the rate of flow after de-pressurizing, it's recommended that you have more diagonals than water tiles - that is, if the source is 3 tiles wide, you may want 4 or more diagonal passages.
[Top view: a 4-Z-deep reservoir connected to a 1-Z-deep channel only through diagonal gaps (shown as >).]
[Side view: a water channel with the regulator (RRR), the diagonal design from the top view above, inserted mid-passage.]
Since water pressure does not propagate through pumps, it is possible to fill a pool from a "pressurized" source using a screw pump without it overflowing. Of course, there is a downside: you still have to run the pump, and due to the source water's pressure, the pump must be powered rather than operated by a dwarf, as the tile the dwarf would need to stand on is filled with water. Furthermore, the pump will likely need to be powered from above or below (as water would simply flow around a gear or axle placed next to the pump), though creative setups are still possible by using additional screw pumps to transmit power.
Your vertical axles or gear assemblies need to be placed above the solid tile of the pump, and there must not be a channel over the walkable pump tile. (Water can only flow straight upward, not up and to the side at the same time.) Multiple adjacent pumps will also transfer power between themselves automatically.
[Side view: a pump (%%) driven from above by a vertical axle (║) lifts pressurised water into a pool. Legend: ▒ = wall (the original diagram distinguishes walls that pressurised water would flow into if dug out), ≈ = water (regular and pressurised), %% = pump, ║ = axle, _ = floor]
Do note that the screw pump will still exert pressure when filling the pool, but said pressure will be independent of the source and can be subsequently blocked by diagonal gaps.
What is stormwater runoff?
Impervious surfaces are areas covered by buildings, asphalt, concrete, or other materials that prevent water from seeping into the ground. When it rains, much of the water flows to streams and lakes, becoming stormwater runoff.
The more buildings, streets, and parking lots we build, the more surface stormwater runoff is generated during each rainstorm.
What is stormwater pollution?
As the stormwater runs off, it picks up pollutants such as oil, trash, lawn debris, fertilizers and pesticides. Runoff carrying these pollutants is called stormwater pollution.
Why is stormwater pollution a problem that we should be concerned about?
The replacement of vegetation with concrete and asphalt reduces the ability of the land to cleanse or remove pollutants from the water as it travels through the areas that drain to our lakes.
Without the benefit of treatment that comes with the natural sheetflow of water over native soils and vegetation, the pollutants contained in the stormwater runoff are discharged into our lakes, streams, wetlands, and underground aquifer.
Stormwater runoff is a significant source of pollution that can harm aquatic life and even contaminate the groundwater drinking supply. Depending on the type of pollutant and the land use, pollutant concentrations in the runoff from developed areas are often 10 to 100 (or more) times higher than in runoff from undeveloped land.
Stormwater runoff is now considered the greatest source of pollutant loading to Florida’s lakes, rivers and estuaries.
Where do the storm drains flow to?
Many of the storm drains within the City of Lake Wales drain to Lake Wailes lake. All stormwater runoff flows to lakes, streams, and wetlands where the water percolates back into the aquifer.
What problems are caused by stormwater pollution?
1. Pollution - As stormwater runs off of 'impervious' surfaces (e.g. building/house rooftops, streets, parking lots, and driveways) it picks up contaminants and debris such as oil, grease, fertilizers, pesticides, litter, leaves and grass clippings. This stormwater runs through the miles of ditches and underground pipes that all lead to our lakes, streams and wetlands. There it deposits the pollution that will eventually make its way to the underground aquifer, where our drinking water comes from. We need better pollution control and stormwater treatment to reduce this contamination.
2. Runoff - The combination of flat terrain and limited natural or man-made stormwater drainage systems results in flooded streets, yards, and occasionally a home or business. The City has worked hard to solve this problem, but there are areas yet to be addressed.
The City of Lake Wales will establish a Stormwater Utility fee to provide funding necessary for a long-term commitment to improving water quality in Lake Wales lakes. The fee will establish a funding source for all activities related to the collection, storage, treatment and conveyance of stormwater within the city of Lake Wales.
What is a watershed?
A watershed is an area of land that water flows across as it moves toward a common body of water, such as a stream, river, lake or coast.
How can you help protect our watershed?
By following these five simple steps, you can help improve the health of Florida’s watersheds now and for future generations:
1. Use Fertilizers and Pesticides Sparingly: A Florida-friendly landscape minimizes the need for fertilizer and pesticides. Applying more fertilizer than your yard can use allows excess nutrients to be transported by runoff. This may cause algal blooms and lower the oxygen levels in water bodies, disrupting the natural balance within the watershed. Toxins from pesticides may kill beneficial organisms within the watershed.
2. Conserve Water: Use Florida-friendly landscaping to save water. Over-watering can damage lawns and plants. In addition, excess water use stresses our water supply.
3. Have Septic Systems Inspected Regularly. Leaking septic systems may contaminate the water, making it harmful to plants, animals, and people. Septic tanks should be inspected every two to three years and pumped as needed.
4. Never Dump Anything Down a Storm Drain. Storm drains help prevent flooding of streets and highways by quickly and efficiently transferring rainwater into nearby water bodies. Chemicals and other toxins dumped in storm drains find their way into lakes, rivers and streams, polluting the watershed.
5. Pick Up After Pets. In high “pet traffic” areas near water bodies, bacteria from pet waste can be carried into water bodies, harming fish and other animals.
Irony is a rhetorical device, literary technique, or situation in which there is an incongruity between the literal and the implied meaning. Put simply, irony is when something happens that is the opposite of what is expected. It can often be funny, but it is also used in tragedies. There are many types of irony, including those listed below:
- Dramatic irony, when the audience knows something is going to happen on stage that the characters on stage do not.
- Socratic irony, when someone (usually a teacher) pretends to be ignorant in order to expose the ignorance of his pupils (while at the same time the reader or audience understands the situation).
- Cosmic irony, when something that everyone thinks will happen actually happens very differently.
- Situational irony e.g. Mr. Smith gets a parking ticket. This is ironic because Mr. Smith is a traffic warden.
- Verbal irony, when what is said is the opposite of what is meant. Sarcasm may sometimes involve verbal irony.
- Irony of fate, when misfortune is the result of fate or chance.
- The difference between how things seem to be and reality.
- In Shakespeare's play Romeo and Juliet, Juliet takes a potion that will put her to sleep, making her look dead. She does this in the hopes of being reunited with Romeo. He incorrectly learns of her death, and kills himself. This is an example of dramatic irony, as the reader/viewer knows she is not dead, but Romeo does not.
- A common example of cosmic irony could be that a child wants some kind of pudding, and misbehaves to try to get it. The parent withholds it because of the child's behavior.
- Verbal irony can be found in sarcasm, but not only there.
- In Sophocles' play Oedipus Rex, Oedipus acts on the knowledge of his fate, which in turn leads to that fate being fulfilled. This is an example of irony of fate.
- "Irony" at Rhetoric.byu.edu; retrieved 2012-1-14. |
Mankind's Explanation: Gravity
“Gravitation. The force that causes objects to drop and water to run downhill is the same force that holds the Earth, the Sun, and the stars together and keeps the Moon and artificial satellites in their orbits. Gravitation, the attraction of all matter for all other matter, is both the most familiar of the natural forces and the least understood.
Gravity is the weakest of the four forces that are currently known to govern the way physical objects behave. The other three forces are electromagnetism, which governs such familiar phenomena as electricity and magnetism; the “strong force” which is responsible for the events in nuclear reactors and hydrogen bombs; and the “weak force,” which is involved with radioactivity. Because of its weakness, gravity is difficult to study in the laboratory.
Despite its weakness, gravitation is important because, unlike the three other forces, it is universally attractive and also acts over an infinite distance. Electromagnetic forces are both attractive and repulsive and as a result generally cancel out over long distances. The strong and weak forces operate only over extremely small distances inside the nuclei of atoms. Thus, over distances ranging from those measurable on Earth to those in the farthest parts of the universe, gravitational attraction is a significant force and, in many cases, the dominant one.
Both Sir Isaac Newton in the 17th century and Albert Einstein in the 20th century initiated revolutions in the study and observation of the universe through new theories of gravity. The subject is today at the forefront of theoretical physics and astronomy.
Late in the 17th century Newton put forward the fundamental hypothesis that the gravity that makes objects fall to Earth and the force that keeps the planets in their orbits are the same. In the early 1600s the German astronomer Johannes Kepler described three laws: first, all planets move in ellipses with the Sun at one focus; second, a line between the Sun and a planet sweeps out equal areas of the ellipse during equal times; third, the square of the period of any planet (the time it takes to go around the Sun multiplied by itself) is proportional to the cube of its average distance to the Sun--that is, the average distance multiplied by itself twice.
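[A quick worked illustration of the third law, added here and not part of the quoted text: Mars orbits at about 1.52 times Earth's average distance from the Sun, so its period should be the square root of 1.52 x 1.52 x 1.52, roughly 1.87 Earth years (about 684 days), close to the observed 687 days.]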
In his book ‘Principia Mathematica’, published in 1687, Newton showed that both Kepler’s laws and Galileo’s observations of Earth’s gravity could be explained by a simple law of universal gravitation. Every celestial body in the universe attracts every other celestial body with a force described by F = G m1 m2 / R^2, in which F is the force, m1 and m2 are the masses of the two gravitating objects, R is the distance between them, and G is the gravitational constant (6.67 x 10^-8 dyne cm^2/g^2).
Also in the ‘Principia Mathematica’ Newton mathematically defined the concept of “force” to be equal to the mass of an object on which the force is applied, multiplied by the acceleration that results from the force, or F = MA, in which A is acceleration. Because the gravitational force increases proportionately to the mass of the object that is accelerated, any object, no matter what its mass, accelerates equally if placed at the same distance from another mass. Galileo observed that all objects on the Earth are accelerated by the planet’s gravity to the same extent.
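[A worked illustration, added here and not part of the quoted text, combining the two formulas: for an object of mass m near Earth's surface, m x A = G m M(Earth) / R(Earth)^2, so A = G M(Earth) / R(Earth)^2. Taking Earth's mass as about 5.97 x 10^27 g and its radius as about 6.37 x 10^8 cm gives A = (6.67 x 10^-8)(5.97 x 10^27) / (6.37 x 10^8)^2, or roughly 980 cm/s^2 (about 9.8 m/s^2), the familiar acceleration of falling bodies, independent of their mass.]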
Newton demonstrated mathematically that the law of gravitation he proposed predicts that the planets follow Kepler’s three laws. Newton’s vision of a world governed by simple, unalterable laws exerted a powerful influence for more than a century.
If the planets are attracted to the Sun by gravity, why do they not fall in? Newton showed that if the velocity is high enough, a planet will always be accelerating toward the Sun without ever leaving its orbit. This is because an object’s motion is the result of both its previous velocity and the acceleration applied to it. Just as a rock whirling at the end of a string stays on its circular path as long as it is whirled fast enough, so objects in a gravitational field remain in their orbits if they are moving fast enough.
Gravitation not only keeps planets and moons in their orbits, but holds them together. It also played a dominant part in their creation. The Sun, for example, produces the heat and light needed for life on Earth through nuclear reactions deep in its interior. These same reactions would blow the Sun apart if it were not for the immense force of its self-gravitation holding it together. Some 5 billion years ago the Sun and planets contracted out of a diffuse cloud of dust and gas, compressing themselves under the influence of their own increasing gravitational fields. In the same way the huge galaxies and clusters of galaxies, consisting of trillions of stars, are bound by gravity and were formed primarily by gravitational contraction, though other forces--such as pervasive magnetic fields in space--probably played a role as well.
Newton’s laws do not explain why all objects attract all others. By 1916 the theoretical physicist Albert Einstein had formulated a new theory of gravity that attempted to explain its actual nature. In his theory, called general relativity, gravity does not exist as a real force. Instead, each mass in the universe bends the very structure of space and time around it, somewhat as a marble sitting on a very thin piece of rubber does. This distortion of the space surrounding each object in turn bends the path of all objects, even those possessing no mass at all, such as photons.
Despite the success of Einstein’s theory, much remains unknown about gravity. Still unanswered are questions about its relation to the other three forces of nature, why it is so much weaker, and why matter creates the curvature of space around it. These and other fundamental questions about gravity continue to be the subject of theoretical work by scientists.”
Above information in quotes acquired from: Compton’s Encyclopedia Online v3.0 (c) The Learning Company, Inc. http://comptonsv3.web.aol.com/search/fastweb?getdoc+viewcomptons+A+3409+6++gravity
Mental health and mental illness – Are they the same? We often use these terms as if they mean the same thing, but they are not!
It’s important to understand that everyone has mental health just like everyone has physical health. A person can have good physical health and at the same time have a physical illness such as diabetes. Similarly, people can experience good mental health and also have a mental illness at the same time. As the World Health Organization famously says, “There is no health without mental health!”
To understand mental health, it is necessary to understand the three related components of mental health: mental distress, mental health problems and mental illness.
Mental distress is the inner signal of anxiety that a person experiences when something in their environment demands that they adapt to a challenge (for example: work or school responsibilities, not making a sports team, etc.). These situations affect our emotions, our thoughts and feelings, our ability to problem solve, and our interactions with others. Mental distress is normal and a very necessary part of learning to adapt and of developing our resiliency. Most people experience mental distress every single day. We often learn to manage distressing events by trial and error, by getting advice from others, by modelling the behaviour of others, and so on.
Sometimes we experience more distressing events that can lead to developing a mental health problem. Emotions such as grief, anger and anxiety are normal reactions to unexpected events of life. These emotions may also be accompanied by difficulties with a person’s thinking and behavior that interfere with day-to-day functioning. When this occurs we may need additional supports to help us with these problems in functioning. First and foremost, our natural supports help us when we experience mental health problems. Assistance from therapists, your primary care provider, and other helping professionals may also be beneficial while experiencing a mental health problem.
A mental illness, on the other hand, is a biologically based, diagnosable medical condition that results in disturbances of a person’s emotions (panic attacks, depression, overwhelming anxiety); thinking (hopelessness, delusions, suicidal thoughts); physical symptoms (fatigue, excessive movement); and behavior (refusing to go to school, withdrawal, neglect of self-care). There are many different mental illnesses, each characterized by a different set of symptoms and range of intensity, and each affecting a person’s day-to-day functioning differently.
Think of health as being on a continuum, ranging from exemplary health to illness, with varying degrees of healthy states in between. So for example some people have good health and have no problems going about their lives.
One of the most important parts of good mental health is the ability to look at our problems or concerns realistically. Good mental health isn’t about feeling happy, content and confident 100% of the time; it’s about understanding that life has its ups and downs and knowing how to cope and work through problems with our natural supports and those around us. Mental wellness is characterized by our ability to bounce back from life’s obstacles. With the right supports and tools, anyone can live well, find meaning, contribute to their communities, and work towards their goals.
A pointer variable is a variable that holds the address of a memory location.
Every variable is assigned a memory location whose address can be retrieved using the address operator &. The address of a memory location is called a pointer. 1
The pointer data type allows us to designate a variable to hold an address, or a pointer. The concepts of an address and a pointer are one and the same. A pointer points to a location in memory because the value of a pointer is the address where the data item resides in memory. Given an integer variable named age:
int age = 47;
We can create a pointer variable and establish its value, which is done using the address operator [the ampersand, &], like this:
int *intPointer = &age;
The asterisk designates that the variable intPointer is an integer pointer [int *]. This means that whenever we use the variable intPointer, the compiler knows that it is a pointer that points to an integer.
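Putting the pieces together, a minimal, compilable sketch of this example looks like the following (the variable name intPointer and the printout are purely illustrative):

#include <stdio.h>

int main(void)
{
    int age = 47;            /* an ordinary integer variable         */
    int *intPointer = &age;  /* a pointer holding the address of age */

    printf("value of age: %d\n", age);
    printf("address stored in intPointer: %p\n", (void *)intPointer);
    return 0;
}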
In order to use pointers you will also need to understand the indirection operator, which is covered in a supplemental link.
The US Department of Defense’s famous agency DARPA (Defense Advanced Research Projects Agency) has produced a sonic fire extinguisher. It is used to extinguish light fires, for example flames in contained areas such as cockpits. The whole concept around which this device revolves is sound. In an experiment conducted by the DARPA team, two speakers were installed on either side of a liquid fuel flame. This successfully demonstrated that fire can be controlled by amping up the acoustic field. The sound increased the air velocity, which in turn thinned the area of the flame. In other words, the “boundary of flame”, the area available to the fire for combustion, shrank. From there, the thinned flame was relatively easier to extinguish.
At the same time, the acoustics also helped to disturb the pool of fuel, which caused higher fuel vaporization. This all worked towards reducing the fire boundary, which gradually became smaller and smaller.
Here is a video of the experiment.
It is well known that Asians have recycled humanure for centuries, possibly millennia. How did they do it? Historical information concerning the composting of humanure in Asia seems difficult to find. Rybczynski et al. state that composting was only introduced to China in a systematic way in the 1930s and that it wasn't until 1956 that composting toilets were used on a wide scale in Vietnam.1 On the other hand, Franceys et al. tell us that composting "has been practiced by farmers and gardeners throughout the world for many centuries." They add that, "In China, the practice of composting [humanure] with crop residues has enabled the soil to support high population densities without loss of fertility for more than 4000 years." 2
However, a book published in 1978 and translated directly from the original Chinese indicates that composting has not been a cultural practice in China until only recently. An agricultural report from the Province of Hopei, for example, states that the standardized management and hygienic disposal (i.e., composting) of excreta and urine was only initiated there in 1964. The composting techniques being developed at that time included the segregation of feces and urine, which were later "poured into a mixing tank and mixed well to form a dense fecal liquid" before piling on a compost heap. The compost was made of 25% human feces and urine, 25% livestock manure, 25% miscellaneous organic refuse, and 25% soil.3
Two aerobic methods of composting were reported to be in widespread use in China, according to the 1978 report. The two methods are described as: 1) surface aerobic continuous composting; and 2) pit aerobic continuous composting. The surface method involves constructing a compost pile around an internal framework of bamboo, approximately nine feet by nine feet by three feet high (3m x 3m x 1m). Compost ingredients include fecal material (both human and non-human), organic refuse, and soil. The bamboo is removed from the constructed pile and the resultant holes allow for the penetration of air into this rather large pile of refuse. The pile is then covered with earth or an earth/horse manure mix, and left to decompose for 20 to 30 days, after which the composted material is used in agriculture.
The pit method involves constructing compost pits five feet wide and four feet deep by various lengths, and digging channels in the floor of the pits. The channels (one lengthwise and two widthwise) are covered with coarse organic material such as millet stalks, and a bamboo pole is placed vertically along the walls of the pit at the end of each channel. The pit is then filled with organic refuse and covered with earth, and the bamboo poles are removed to allow for air circulation.4
A report from a hygienic committee of the Province of Shantung provides us with additional information on Chinese composting.5 The report lists three traditional methods used in that province for the recycling of humanure:
1) Drying it - "Drying has been the most common method of treating human excrement and urine for years." It is a method that causes a significant loss of nitrogen;
2) Using it raw, a method that is well known for pathogen transmission; and
3) "Connecting the household pit privy to the pigpen . . . a method that has been used for centuries." An unsanitary method in which the excrement was simply eaten by a pig.
No mention is made whatsoever of composting being a traditional method used by the Chinese for recycling humanure. On the contrary, all indications were that the Chinese government in the 1960s was, at that time, attempting to establish composting as preferable to the three traditional recycling methods listed above, mainly because the three methods were hygienically unsafe, while composting, when properly managed, would destroy pathogens in humanure while preserving agriculturally valuable nutrients. This report also indicated that soil was being used as an ingredient in the compost, or, to quote directly, "Generally, it is adequate to combine 40-50% of excreta and urine with 50-60% of polluted soil and weeds."
For further information on Asian composting, I must defer to Rybczynski et al., whose World Bank research on low-cost options for sanitation considered over 20,000 references and reviewed approximately 1200 documents. Their review of Asian composting is brief, but includes the following information, which I have condensed:
There are no reports of composting privys (toilets) being used on a wide scale until the 1950s, when the Democratic Republic of Vietnam initiated a five-year plan of rural hygiene and a large number of anaerobic composting toilets were built. These toilets, known as the Vietnamese Double Vault, consisted of two above ground water-tight tanks, or vaults, for the collection of humanure (see Figure 6.3). For a family of five to ten people, each vault was required to be 1.2 m wide, 0.7 m high, and 1.7 m long (approximately 4 feet wide by 28 inches high and 5 feet 7 inches long). One tank is used until full and left to decompose while the other tank is used. The use of this sort of composting toilet requires the segregation of urine, which is diverted to a separate receptacle through a groove on the floor of the toilet. Fecal material is collected in the tank and covered with soil, where it anaerobically decomposes. Kitchen ashes are added to the fecal material for the purpose of reducing odor.
Eighty-five percent of intestinal worm eggs, one of the most persistently viable forms of human pathogens, were found to be destroyed after a two month composting period in this system. However, according to Vietnamese health authorities, forty-five days in a sealed vault is adequate for the complete destruction of all bacteria and intestinal parasites (presumably they mean pathogenic bacteria). Compost from such latrines is reported to increase crop yields by 10-25% in comparison to the use of raw humanure. The success of the Vietnamese Double Vault required "long and persistent health education programs." 6
When the Vietnamese Double Vault composting toilet system was exported to Mexico and Central America, the result was "overwhelming positive," according to one source, who adds, "Properly managed there is no smell and no fly breeding in these toilets. They seem to work particularly well in the dry climate of the Mexican highlands. Where the system has failed (wetness in the processing chamber, odours, fly breeding) it was usually due to non-existent, weak, or bungled information, training and follow-up." 7 A lack of training and a poor understanding of the composting processes can cause any humanure composting system to become problematic. Conversely, complete information and an educated interest will ensure the success of humanure composting systems.
Another anaerobic double-vault composting toilet used in Vietnam includes using both fecal material and urine. In this system, the bottom of the vaults are perforated to allow drainage, and urine is filtered through limestone to neutralize acidity. Other organic refuse is also added to the vaults, and ventilation is provided via a pipe.
In India, the composting of organic refuse and humanure is advocated by the government. A study of such compost prepared in pits in the 1950s showed that intestinal worm parasites and pathogenic bacteria were completely eliminated in three months. The destruction of pathogens in the compost was attributed to the maintenance of a temperature of about 40°C (104°F) for a period of 10-15 days. However, it was also concluded that the compost pits had to be properly constructed and managed, and the compost not removed until fully "ripe," in order to achieve the total destruction of human pathogens. If done properly, it is reported that "there is very little hygienic risk involved in the use and handling of [humanure] compost for agricultural purposes." 8
In short, it doesn't look like the Asians have a lot to offer us with regard to composting toilet designs. Perhaps we should instead look to the Scandinavians, who have developed many commercial composting toilets.
Source: The Humanure Handbook. Jenkins Publishing,
PO Box 607, Grove City, PA 16127. To order, phone: 1-800-639-4099.
Valley and Ridge Province
The erosional characteristics of the sedimentary rock formations exposed along great anticlines and synclines of the Appalachian Mountains are responsible for the characteristic Valley and Ridge topography. Durable layers of sandstone and conglomerate form ridges, whereas less resistant limestone and shale underlie the valleys in the region. Along the eastern margin of the Valley and Ridge is the Great Valley, a broad valley underlain by Cambrian and Ordovician shale and carbonate rocks that weather and erode faster than the more durable sandstone and conglomerate that crop out in ridges and plateaus to the west (see Figure 52). It extends southward from the Adirondack Mountains region, encompassing the upper Hudson River Valley between the Taconic Mountains (to the east) and the Catskills (to the west). It gradually bends westward into northern New Jersey, forming a broad, low valley broken by long, low ridges. It is bordered by the Highlands of the Reading Prong on the south and east, and the high ridge of Kittatinny Mountain to the west. In New Jersey and western Pennsylvania, Kittatinny Mountain represents the eastern-most hogback ridge of Middle Paleozoic rocks of the Valley and Ridge. North of New Jersey the characteristic folds of the Valley and Ridge fade into the nearly flat-lying strata of the Catskills region and the Allegheny Plateau region of western New York and Pennsylvania. In the New York Bight region, the Allegheny Plateau and the Catskill Mountains of Pennsylvania and New York are the northern extension of the greater Appalachian Plateau.
Precambrian age (Grenvillian) crystalline igneous and metamorphic rocks form the basement beneath the sedimentary rocks of the plateau regions and the Valley and Ridge Province. In general, the Paleozoic sedimentary cover above the Precambrian basement increases in thickness from several kilometers in the midcontinent region to nearly a dozen kilometers in portions of the Appalachian Basin region. Throughout Paleozoic time, the Appalachian Basin region was the site of accumulation of vast quantities of sediment derived from uplifts created by the Taconic Orogeny (Late Ordovician), the Acadian Orogeny (Late Devonian), and Alleghenian Orogeny (Late Mississippian to Permian). These three mountain building intervals each left a progressive tectonic impression on the rocks of the New York Bight region and beyond (generalized illustration of these events is shown in Figure 53). Between and following these mountain building episodes were extensive quiescent periods when weathering and erosion stripped away most topographic relief, allowing shallow marine seaways to episodically invade portions of the landscape in the New York Bight region. This is demonstrated by the thick sequence of sedimentary rock formations which crop out through the region extending from the Hudson Valley into the Appalachian Basin (including the Catskills, the Green Pond Outlier, and the Valley and Ridge regions [Figure 54]).
Aftermath of the Taconic Orogeny
As the Taconic Orogeny subsided in early Silurian time, uplifts and folds in the Hudson Valley region were beveled by erosion. Upon this surface sediments began to accumulate, derived from remaining uplifts in the New England region. The evidence for this is the Silurian Shawangunk Conglomerate, a massive, ridge-forming quartz sandstone and conglomerate formation, which rests unconformably on a surface of older gently- to steeply-dipping pre-Silurian age strata throughout the region. This ridge of Shawangunk Conglomerate extends southward from the Hudson Valley along the eastern front of the Catskills. It forms the impressive caprock ridge of the Shawangunk Mountains west of New Paltz, New York. To the south and west it becomes the prominent ridge-forming unit that crops out along the crest of Kittatinny Mountain in New Jersey.
Through Silurian time, the deposition of coarse alluvial sediments gave way to shallow marine fine-grained muds, and eventually to clear-water carbonate sediment accumulation with reefs formed from the accumulation of calcareous algae and the skeletal remains of coral, stromatoporoids, brachiopods, and other ancient marine fauna. The episodic eustatic rise and fall of sea level caused depositional environments to change or to shift laterally. As a result, the preserved faunal remains, and the character and composition of the sedimentary layers deposited in any particular location varied through time. The textural or compositional variations of the strata, as well as the changing fossil fauna preserved, are used to define the numerous sedimentary formations of Silurian through Devonian age preserved throughout the region.
The Acadian Orogeny
Uplifts and volcanic centers formed during the Acadian Orogeny in the New England region shed fine-grained clastic material into an expansive inland seaway that covered most of the southern and central Appalachian region and much of the midcontinent during Middle Devonian time. As the Acadian Orogeny progressed, greater quantities of coarser clastic sediments migrated into the shallow sea, building an extensive alluvial plain along the eastern margin of the seaway. The Catskills region was proximal to the Acadian Highlands, and therefore was the site of the greatest accumulation of sediment in the region. (The boundary between the two geologic regions is a line approximating the location of the modern Hudson River; the Acadian Highlands was to the east.) To the west, the marine strand line migrated back and forth through time as the supply of sediments fluctuated and as sea level rose and fell. Sediments of Late Devonian age accumulated as a sedimentary wedge as much as 7,000 feet thick in the Catskills region; these sedimentary deposits are thickest in the east and grow progressively thinner westward and southward into the central Appalachian Basin region. Massive accumulations of conglomerate and sandstone exposed along the eastern edge of the Catskills plateau led to an early interpretation that the Catskills formed as a great delta-type deposit, similar to the modern greater Mississippi Delta. However, complexities in the sequence of the sedimentary formations throughout the greater Catskills have been revealed by more recent geological investigations. A new perspective of the Catskills sedimentary sequence is a model of fluctuating shorelines and prograding alluvial environments along the western margin of the Acadian upland. Farther to the west, massive quantities of organic-rich mud accumulated in a deeper restricted seaway basin. These organic-rich mud deposits represent the oil and gas shales that are abundant throughout the Appalachian Basin and Ohio Valley regions.
The pattern and extent of Devonian age outcrops that exist in the New York Bight region provide information about even more extensive Devonian age deposits that existed in the past. The eastern edge of the outcrop belt of Late Devonian rock shown in Figure 52 roughly outlines the extent of the Catskills. The southern extent of the Devonian outcrop belt is part of the folded strata along the western Delaware River valley along the New Jersey/Pennsylvania border. Devonian sedimentary rocks also crop out closer to New York City in the Green Pond Outlier, a complex synclinal trough that trends northeastward through the heart of the Highlands region in northern New Jersey and southern New York. Based on the occurrence of marine sedimentary units in the Green Pond Outlier it can be assumed that Devonian sedimentary units were continuous across much of the New York Bight region prior to the Acadian Orogeny. Devonian sedimentary rocks are also preserved in a complex synclinal area in northeastern Connecticut, extending northward into central Massachusetts. Igneous intrusions of Late Devonian age occur in small portions of Westchester County, New York (the Peekskill Granite just east of Peekskill, and the Bedford Augen Gneiss which crops out along the New York/Connecticut border near the Mianus River Gorge). Several massive intrusions of Devonian age occur in the central Western Connecticut Uplands. Some of these intrusions may have contributed to episodes of volcanism in the region.
Regional metamorphism during the Acadian Orogeny affected the rocks throughout New England, including the bedrock of the New York City area. Heating and annealing during metamorphism "reset" the geologic ages of most older rocks in the eastern Highlands Province (including the rocks throughout Manhattan and the Bronx) to Late Devonian age. The influence of regional metamorphism associated with the Acadian Orogeny diminishes significantly west of the Hudson River.
The Acadian Orogeny lasted from Late Devonian into early Mississippian time. This is inferred, in part, from the abundance of igneous intrusions of these ages throughout the Appalachian region. By Late Mississippian time, mountain building throughout the Appalachian region had drastically subsided. This can be inferred from the extensive sequence of marine limestones formed from clear water marine sedimentation preserved as strata of late Early Mississippian age (Meramecian, around 350 million years ago) throughout the western Appalachian Basin region and the midcontinent. By the end of Mississippian time, mountain building was once again proceeding. This is represented in the sedimentary record as the flood of clastic material preserved in association with the Pennsylvanian coal measures throughout the Appalachian Basin region. These coal measures formed in association with alluvial flood plains and inland coastal swamplands that developed along the western margin of the Appalachian Mountains and in Late Paleozoic sedimentary basins throughout the midcontinent.
The Alleghenian Orogeny
During late Paleozoic time the ancient Iapetus Ocean (also called the Proto-Atlantic Ocean) continued to vanish as the North American continent (Laurentia) collided with Africa (which was part of a larger collection of continents called Gondwanaland). During this time all of the Earth's continents were coalescing to form a single, great supercontinent, Pangaea (beginning roughly 320 million years ago during the Pennsylvanian Period [see Figures 8, 53C, and 83]). In eastern North America the formation of Pangaea corresponded to the Alleghenian Orogeny, the mountain-building episode associated with the formation of great folds and thrust faults throughout the central Appalachian Mountains region.
As the continents collided, the rock material trapped in between was crushed and forced upward into a great mountain range, probably similar in size and character to the modern Alps. With nowhere to go, rocks along the eastern margin of the North American continent were shoved far inland (the same occurred in the opposite direction along the margin of the African continent, forming the Atlas Mountains of Morocco and the western Sahara). The sedimentary rock in the eastern Appalachian Basin region was squeezed into great folds that ran perpendicular to the direction of forces. The greatest amount of deformation associated with the Alleghenian orogeny occurred in the Southern Appalachians (North Carolina, Tennessee, Virginia, and West Virginia). In that region a series of great faults developed in addition to the folds. As the two continents collided, large belts of rock bounded by thrust faults piled one on top of another, shortening the crust along the eastern edge of North America in the North Carolina and Tennessee region by as much as 200 miles. The relative amount of deformation gradually diminishes northward. The fold belt extends northward through Pennsylvania and gradually peters out in the vicinity of the New York border. The Kittatinny Mountains in northwestern New Jersey mark the northeastern-most extension of the high ridges of the Valley and Ridge Province. The influence of Alleghenian deformation on the regions east of the Valley and Ridge Province must have been even more intense; however, little evidence is preserved. Rocks of Mississippian, Pennsylvanian, and Permian age are missing in the New York Bight region.
A great unconformity beneath the Triassic sedimentary rocks of the Newark Basin series represents an extensive period of erosion of uplifted rocks and sediments during and after the Alleghenian Orogeny. In the New York Bight region, this unconformable surface is flooded beneath the lower Hudson River below the Palisades, and in New Jersey it is covered by younger sediments of the Coastal Plain.
Field Trips Destinations in New York, New Jersey, and eastern Pennsylvania:
Kittatinny Mountain, New Jersey and Pennsylvania
U.S. Department of the Interior, U.S. Geological Survey
Comfort women, also called military comfort women, Japanese jūgun ianfu, a euphemism for women who provided sexual services to Imperial Japanese Army troops during Japan’s militaristic period that ended with World War II and who generally lived under conditions of sexual slavery. Estimates of the number of women involved typically range up to 200,000, but the actual number may have been even higher. The great majority of them were from Korea (then a Japanese protectorate), though women from China, Taiwan, and other parts of Asia—including Japan and Dutch nationals in Indonesia—were also involved. From 1932 until the end of the war in 1945, comfort women were held in brothels called “comfort stations” that were established to enhance the morale of Japanese soldiers and ostensibly to reduce random sexual assaults. Some of the women were lured by false promises of employment, falling victim to what amounted to a massive human trafficking scheme operated by the Japanese military. Many others were simply abducted and sent against their will to comfort stations, which existed in all Japanese-occupied areas, including China and Burma (Myanmar). Comfort stations were also maintained within Japan and Korea. The women typically lived in harsh conditions, where they were subjected to continual rapes and were beaten or murdered if they resisted. The Japanese government had an interest in keeping soldiers healthy and wanted sexual services under controlled conditions, and the women were regularly tested for sexually transmitted diseases and infections. According to several reports—notably, a study sponsored by the United Nations that was published in 1996—many of the comfort women were executed at the end of World War II. The women who survived often suffered physical maladies (including sterility), psychological illnesses, and rejection from their families and communities. Many survivors in foreign countries were simply abandoned by the Japanese at the end of the war and lacked the income and means of communication to return to their homes. |
One of the most prominent ruling houses in the history of Europe, the Hohenzollern Dynasty played a major role in the history of Germany from the late Middle Ages until the end of World War I. The first known ancestor of the family was Burchard I, who was count of Zollern in the 11th century. By the third and fourth generations after Burchard, two branches of the family had formed. One, the Zollern-Hohenberg, became extinct by 1486. The other, originally the counts of Nuremberg, survived into the 20th century. The Nuremberg branch was further divided about 1200 into the Franconian and Swabian lines.
The Franconian branch moved into prominence when Frederick VI (1371–1440) was appointed elector of Brandenburg in 1415 as Frederick I. This territory in the northeastern lowlands of Germany was the nucleus on which the kingdom of Prussia was built. It was in 1701 that Frederick III of Brandenburg was given the title “king in Prussia.” The title was changed to “king of Prussia” in 1772, when Frederick the Great obtained it (see Frederick the Great). The Prussian kings retained their title as electors of Brandenburg until the Holy Roman Empire was dissolved by Napoleon I in 1806 (see Germany, “History”; Holy Roman Empire).
Subsequent rulers of Prussia were Frederick William II (ruled 1786–97), Frederick William III (ruled 1797–1840), Frederick William IV (ruled 1840–61), William I (ruled 1861–88), Frederick III (ruled 1888), and William II (ruled 1888–1918). During the reign of William I, Germany was united by the military might of Prussia, and the Franco-Prussian War was fought. Under William II (more commonly known as Kaiser Wilhelm II) Germany fought and lost World War I. William II abdicated in the last days of the war and went into exile in The Netherlands. This ended the German sovereignty of the Hohenzollerns. The Swabian branch of the family remained in power longer. Ferdinand became king of Romania in 1914, and his descendants ruled there until 1947. |
ORAL HEALTH IS GATEWAY TO OVERALL HEALTH
Taking good care of your mouth, teeth and gums is a worthy goal in and of itself. Our mouths are full of bacteria, and these germs use sugar as food to make acids. Over time, the acids can attack the teeth, creating decay that leads to cavities and gum disease.
Untreated gum disease can advance to periodontitis, in which gums pull away from the teeth and form spaces (pockets) that become infected. The body's immune system fights the bacteria as the infection spreads and grows below the gum line.
Oral health problems are also related to general health conditions, which include:
• POORLY CONTROLLED DIABETES: Chronic gum disease may, in fact, make diabetes more difficult to control. Infection may cause insulin resistance, which disrupts blood sugar control.
• PRETERM BIRTH: Severe gum disease may increase the risk of preterm delivery and of giving birth to a low birth weight baby. Oral bacteria release toxins, which reach the placenta through the mother's bloodstream and interfere with the growth and development of the fetus; they can also cause the mother to produce labor-triggering substances too quickly, potentially triggering premature labour and birth.
• CARDIOVASCULAR DISEASE: Bacteria from inflamed gums and periodontal disease can enter your bloodstream, travel to the arteries of the heart and contribute to atherosclerosis (hardening of the arteries). This can cause an increased risk of heart failure or stroke.
• DEMENTIA: Bacteria from gum disease can reach the brain through nerve channels or the bloodstream, and they have been linked to Alzheimer's disease.
HOW CAN I PROTECT MY ORAL HEALTH?
• Brush your teeth at least twice a day with fluoride toothpaste.
• Floss daily
• Eat a healthy diet and limit between-meal snacks.
• Avoid tobacco use
• Regular dental checkup and cleanings. |
In this section you'll need to know:
Watch this revision clip to revise Podsol formation. Take special care to fully expand on points relating to leaching and waterlogging in this soil type.
Listen to the 2 minute RevisionBlast and then bullet point the main features from memory.
This video clip will demonstrate how to explain the formation of a brown earth soil using a diagram. Keep your diagram simple; draw the 'ladder' structure, add a tree with long roots, some worms in horizon A, blur the horizon between A and B and add broken up bedrock in horizon C. Add an iron pan if you can explain the conditions that might lead to it being there.
Listen to the 2 minute RevisionBlast and then try to mind map the main features from memory.
Watch this clip to revise how Gley soils form. Refer to your notes to get a better understanding of the process of 'gleying'. Consider how you are going lay out your answer, the diagram takes up space so be prepared to use a whole new page in your answer booklet.
Listen to the 2 minute RevisionBlast and then try to rewrite a model answer from memory.
What’s in a Name and How are Tropical Storms and Hurricane Names Chosen?
Even in the olden days, naming tropical storms and hurricanes was important, and storm locations were already being tracked, albeit by way of crude and unsophisticated methods. Today, developing nations still face the same dilemma due to a lack of facilities. However, through the workings of the United Nations (UN), a system to determine how tropical cyclone and hurricane names are chosen was established, in order to provide warnings to nations all across the globe. In order to appreciate the current systems and find out how tropical storms are named, let us first look at the methods used in the past.
Actually, the history of the name selection process for these catastrophic weather disturbances is varied, and did not stem from just one particular methodology. Early Spanish seafarers and explorers used the names of saints as their basis for naming storms. The patron saint assigned to the date nearest the day when the storm arrived was their way of identifying the storm that visited them. For succeeding storms occurring around the same date and related to a patron saint whose name had been previously used, a Roman numeral was annexed to distinguish one from the other; e.g. Felipe (Philippe), Felipe II, Felipe III, Felipe IV and so forth. However, this system could not be universally adopted, since not everyone was Catholic.
In another part of the world, around 1887, an Australian weather forecaster by the name of Clement L. Wragge kept a list of political figures whom he disliked. He used their names to identify the storms being tracked. A politician's name could thus be attached to descriptions of a weather disturbance such as “wandering aimlessly about (the Pacific)” or “is causing great distress”. Wragge's list, however, was not long enough to cover all the weather disturbances that came around.
Another interesting method of naming tropical storms and hurricanes was the use of female names to identify and track storms during World War II. US military weather forecasters used their wives’ or girlfriends’ names, perhaps out of affection or otherwise. In later years, however, the women’s liberation movement added this method of name selection to its protest agenda as a sexist concept. Women’s liberation advocates wanted to eradicate the general idea that certain storm traits or characteristics were exclusively attributable to females. These protests came years after the UN had established the World Meteorological Organization (WMO), and in the late 1970s the WMO Hurricane Committee subsequently added male names to the pre-determined lists assigned to hurricanes.
Since 1951, the WMO has acted as a specialized agency handling international concerns in meteorology, the geophysical sciences, and operational hydrology. Through the WMO, 189 member countries and territories are able to exchange data and information in real time or near-real time, particularly data related to weather, climate, or water hazards. Maintaining a universal nomenclature as a reference for tropical storms and hurricanes thus became necessary.
WMO Process of Selecting Names for Tropical Storms and Hurricanes
In a separate article, Hurricanes, Cyclones, Tornadoes and Definition and Differences, you’ll learn that a weather disturbance starts as a tropical cyclone. The classification of the tropical cyclone as a hurricane or typhoon will come into focus, once the weather disturbance takes on a definite direction or storm path. Hence, the tropical cyclone becomes a hurricane, typhoon, severe tropical cyclone, severe cyclonic storm or simply, a tropical cyclone when it reaches the respective world climate region’s area of responsibility.
Each region comprises several affected countries and involves a different tropical cyclone classification, so the WMO established five regional tropical cyclone working committees, one per region. These committees are responsible for assigning the international name of a tropical cyclone entering their region. In viewing the corresponding lists currently used by each region, note that each committee groups its set of pre-determined lists differently. This is because some world climate regions observe different methods for how the list of international names is applied or used.
The sections below describe the lists used by each region and how the lists are utilized.
The 5 WMO- World Climate Region Working Committees
WMO-Hurricane Committee for the Atlantic and Eastern North Pacific Region
This region maintains a set of lists where hurricane names chosen for a six-year period are selected; the region reuses or recycles each list every six years. Hence, if the list is from 2010 to 2015, this means that the current list was the list from where hurricane names were chosen in the years 2004 up to 2009, in the same order.
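To illustrate the six-year rotation in code (a rough sketch only: the base year, the list numbering and the function name are assumptions for the example, and retired names mean the real lists are not perfectly periodic), the selection is simply modular arithmetic:

#include <stdio.h>

/* Which of the six rotating lists (0-5) supplies the names for a given
 * season, assuming the 2010 list is taken as list 0. */
int list_index_for_year(int year)
{
    const int base_year = 2010;               /* assumed start of a cycle */
    return ((year - base_year) % 6 + 6) % 6;  /* stays in 0..5 either way */
}

int main(void)
{
    printf("2004 -> list %d\n", list_index_for_year(2004)); /* 0          */
    printf("2010 -> list %d\n", list_index_for_year(2010)); /* 0 (reused) */
    printf("2013 -> list %d\n", list_index_for_year(2013)); /* 3          */
    return 0;
}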
WMO-Typhoon Committee for the Western North Pacific Region
The current list being used in this region was prepared in 2008 and contains five sets of alphabetically listed names. The Typhoon Committee uses these international names according to its listing sequence. If a list has been exhausted, the next list is used. As an example, if the name Bebinca was assigned to a tropical cyclone established as the last typhoon for the year, it follows that the next international name assigned to the first typhoon of the succeeding year will be Rumbia.
WMO-Tropical Cyclone Committee RA I for the Southwest Indian Ocean Region
This regional committee uses a single list and assigns the listed names only to new cyclones that originated within its region. If a tropical cyclone entered from another region’s area of responsibility, the committee simply retains the name already assigned to it when referring to the weather disturbance.
If this set of existing lists is exhausted, and no new list has replaced it, the regional committee simply re-uses the first name appearing on the list.
WMO-Tropical Cyclone Committee RA IV for the North Atlantic Ocean, Caribbean Sea, Gulf of Mexico and the Eastern North Pacific
The committee for this region observes the same method as the Hurricane Committee, reusing or recycling the set of six annual lists every six years. Similarly, the 2010 list is the register that was used in 2004, and so forth up to 2009, in the same order.
WMO-Tropical Cyclone Committee RA V for the South Pacific Ocean and Southeast Indian Ocean Region
This region’s set of lists is used sequentially by name: the name assigned to the first cyclone of the year is the name immediately after the one used for the last cyclone of the previous year. This is the same method used by the Typhoon Committee.
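For readers who want to see the bookkeeping, the sketch below illustrates the sequential method in Python: naming simply continues across season boundaries, moving to the next list when one is exhausted. The name lists and function names are placeholders for illustration, not the WMO’s official registers.

```python
# Illustrative sketch of the "sequential" naming method used by the Typhoon
# Committee and the RA V committee: naming continues across year boundaries
# instead of restarting, and moves to the next list when one is exhausted.
# The names below are placeholders, not the official WMO registers.
from itertools import cycle

NAME_LISTS = [
    ["Amang", "Bebinca", "Rumbia", "Soulik"],    # list 1 (placeholder names)
    ["Cimaron", "Jebi", "Mangkhut", "Barijat"],  # list 2 (placeholder names)
]

def name_generator(lists):
    """Yield names list by list, in order, wrapping around once every list
    has been used (in practice the committees periodically replace lists;
    wrapping just keeps the sketch simple)."""
    return cycle(name for lst in lists for name in lst)

# If "Bebinca" closed out one season, the first storm of the next season
# simply takes the next name in sequence ("Rumbia").
gen = name_generator(NAME_LISTS)
season_one = [next(gen) for _ in range(2)]   # -> ['Amang', 'Bebinca']
season_two = [next(gen) for _ in range(3)]   # -> ['Rumbia', 'Soulik', 'Cimaron']
print(season_one, season_two)
```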
Considerations Made by Member Countries when Submitting or Using Names in their Predetermined Lists
– These names are for international references; a member country may have their own register of local names for domestic purposes.
– The tropical cyclone’s name should be easy to recall, so that warnings can be recognized and referenced quickly.
– If a name on the list was borne by a tropical cyclone that had catastrophic consequences, with high mortality and financial losses, that name is stricken off the list and considered no longer appropriate for use. One such example is Katrina, assigned to the infamous hurricane that devastated parts of the United States in 2005. This avoids possible confusion and is done in deference to those whose lives were affected by the natural calamity.
– The name should not refer to any one person in particular but should be one that is common and familiar to each region.
You can find more lists of tropical cyclone names used in previous years for region-specific areas such as the Arabian Sea and the Bay of Bengal at the WMO website under Storm Facts - TC Names. The descriptions above were presented to show how tropical storms and hurricanes are named.
Trace the letter Z! But first, warm up your kid’s hands by having them draw a zig-zag. Or a zebra, zipper, zeppelin, zucchini or anything else that starts with the letter Z.
Two worksheets are included in this set: lower case z and upper case Z.
Encourage your kid’s imagination and have them practice writing the letter Z at the same time.
Drawing is an amazing activity that warms up the hands, which will make writing the letters less stressful.
After they are done drawing, kids trace the three rows of the letter Z and, if they feel confident enough, move on to the fourth row, where they can write the letter Z independently.
A crucial fingerprint of an extremely distant quasar, captured with the Gemini Observatory, will now enable astronomers to sample light produced at the dawn of time.
A number of large telescopes were used to observe quasar J0439+1634 in optical and infrared light. The 6.5 m MMT Telescope was used to discover this distant quasar; it and the 10 m Keck-I Telescope obtained a sensitive spectrum of the quasar in optical light. The 8.1 m Gemini Telescope obtained an infrared spectrum that accurately determined the quasar’s distance and the mass of its powerful black hole. The 2x8.4 m Large Binocular Telescope captured an adaptive-optics-corrected image suggesting the quasar is lensed, later confirmed by the sharper Hubble image. (Image credit: Feige Wang (UCSB), Xiaohui Fan (University of Arizona))
Astronomers gained this deep glimpse into time and space thanks to an inconspicuous foreground galaxy acting as a gravitational lens, which magnified the quasar’s ancient light. The observations from Gemini Observatory provide crucial pieces of the puzzle in establishing this object as the brightest-appearing quasar that emerged so early in the Universe’s history, and the discovery raises hopes that more similar sources will be identified.
Some of the very first cosmic light started a long voyage through the expanding Universe well before the cosmos reached its billionth birthday. One particular beam, from an energetic source known as a quasar, serendipitously passed close to an intervening galaxy, whose gravity bent and magnified the quasar’s light and refocused it in our direction, enabling telescopes like Gemini North to explore the quasar in greater detail.
“If it weren’t for this makeshift cosmic telescope, the quasar’s light would appear about 50 times dimmer,” stated Xiaohui Fan of the University of Arizona, who headed the research. “This discovery demonstrates that strongly gravitationally lensed quasars do exist despite the fact that we’ve been looking for over 20 years and not found any others this far back in time.”
By filling a crucial hole in the data, the Gemini observations supplied important pieces of the puzzle. The Gemini Near-InfraRed Spectrograph (GNIRS) on the Gemini North telescope on Maunakea, Hawaii was used to dissect a considerable swath of the infrared portion of the light’s spectrum. The tell-tale signature of magnesium in the Gemini data is important for establishing how far back in time astronomers are looking. In addition, the Gemini observations made it possible to determine the mass of the black hole that powers the quasar.
When we combined the Gemini data with observations from multiple observatories on Maunakea, the Hubble Space Telescope, and other observatories around the world, we were able to paint a complete picture of the quasar and the intervening galaxy.
Feige Wang, University of California, Santa Barbara
Wang is also a member of the discovery team.
That picture showed that the quasar is situated quite far back in space and time—shortly after the so-called Epoch of Reionization—when the very first light emerged from the Big Bang.
This is one of the first sources to shine as the Universe emerged from the cosmic dark ages. Prior to this, no stars, quasars, or galaxies had been formed, until objects like this appeared like candles in the dark.
Jinyi Yang, University of Arizona
Yang is another member of the discovery team.
The foreground galaxy that improves our view of the quasar is particularly dim, which is rather unexpected.
“If this galaxy were much brighter, we wouldn’t have been able to differentiate it from the quasar,” explained Fan, adding that this discovery will transform the way astronomers search for lensed quasars and may considerably boost the number of lensed quasar discoveries. Conversely, as Fan noted, “We don’t expect to find many quasars brighter than this one in the whole observable Universe.”
Moreover, the strong brilliance of the quasar, known as J0439+1634, indicates that it is driven by a supermassive black hole at the core of a young forming galaxy. From the width of the crucial magnesium fingerprint captured by Gemini, astronomers determined the mass of the quasar’s supermassive black hole to be 700 million times that of the Sun. Most probably, a sizable flattened disk of gas and dust surrounds the supermassive black hole. This disk of matter, called an accretion disk, is believed to feed the black hole powerhouse by continually spiraling inward. Observations at submillimeter wavelengths with the James Clerk Maxwell Telescope on Maunakea indicate that the black hole is accreting gas and may also be triggering the birth of stars at an extraordinary rate of around 10,000 stars per year; in contrast, the Milky Way Galaxy forms about one star per year. The exact star formation rate could be considerably lower, however, because of the boosting effect of the gravitational lensing.
Quasars are highly energetic sources fueled by massive black holes, believed to have existed in the very first galaxies to form in the Universe. Because of their distance and brightness, quasars offer an extraordinary view into the conditions of the early Universe. This quasar’s redshift of 6.51 translates to a distance of 12.8 billion light years, and it appears to shine with an integrated light of approximately 600 trillion Suns, boosted by the gravitational lensing magnification. The foreground galaxy that bent the quasar’s light is roughly half that distance away, at just 6 billion light years from Earth.
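The effect of the 50-fold lensing boost on the apparent brightness is simple arithmetic, sketched below with the approximate figures quoted above (a magnification of about 50 and an apparent output of roughly 600 trillion Suns); the numbers are rounded and purely illustrative.

```python
import math

# Approximate values quoted in the article.
magnification = 50                    # boost from the foreground lensing galaxy
apparent_luminosity_suns = 600e12     # ~600 trillion Suns, lensing included

# Without the lens the quasar would appear about 50 times fainter...
intrinsic_luminosity_suns = apparent_luminosity_suns / magnification

# ...which corresponds to this many magnitudes of dimming in astronomers' units.
dimming_mag = 2.5 * math.log10(magnification)

print(f"Intrinsic luminosity ~ {intrinsic_luminosity_suns:.1e} Suns")  # ~1.2e13
print(f"Equivalent dimming   ~ {dimming_mag:.2f} magnitudes")          # ~4.25
```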
Fan’s group selected J0439+1634 as a candidate extremely distant quasar based on photometric data from a number of sources: the United Kingdom Infra-Red Telescope Hemisphere Survey (performed on Maunakea, Hawaii), the Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS1; operated by the University of Hawaii’s Institute for Astronomy), and NASA’s Wide-field Infrared Survey Explorer (WISE) space telescope archive.
The initial follow-up spectroscopic observations, performed at the Multi-Mirror Telescope in Arizona, identified the object as a high-redshift quasar. Subsequent observations with the Gemini North and Keck I telescopes in Hawaii confirmed the MMT’s finding and led to Gemini’s detection of the vital magnesium fingerprint, the key to pinning down the quasar’s remarkable distance. However, the quasar and the foreground lensing galaxy appear so close together that they cannot be separated in images captured from the ground, owing to the blurring caused by Earth’s atmosphere. It took the exquisitely sharp images of the Hubble Space Telescope to reveal that a faint lensing galaxy splits the quasar image into three parts.
The quasar is now ready for further inspection. Astronomers intend to use the Atacama Large Millimeter/submillimeter Array, and later NASA’s James Webb Space Telescope, to peer within 150 light-years of the black hole and directly detect the influence of its gravity on gas motion and star formation in its vicinity. Future discoveries of very distant quasars like J0439+1634 will continue to teach astronomers about the growth of massive black holes and their chemical environments in the early Universe.
Bronchitis is a medical condition characterized by an acute inflammation of the airways of the lungs. In bronchitis, the trachea, bronchi, and bronchioles become inflamed, either due to infections or due to foreign objects that damage the inner lining of the airways. The inflammation usually occurs when the thin lining of mucus in the airways becomes irritated. When such an inflammation occurs, the lining produces even more mucus. Over time these secretions build up, leading to excessive coughing, which is the body’s natural mechanism for clearing out the mucus. In many cases, the cough and the discomfort it causes are very severe.
Anyone can get bronchitis. When severe, the symptoms are usually similar to asthma. There are no routine tests for bronchitis. To diagnose this medical condition, the doctor first takes note of your medical history. There is a detailed discussion about your symptoms, followed by a physical examination. The test for bronchitis also includes eliminating the possibility of pneumonia. When testing for bronchitis, the doctors will also try to make sure that you do not have risk factors for any other serious diseases. Diseases like chronic obstructive pulmonary disease can affect treatment and therefore need to be assessed before treatments are given.
If in the preliminary tests, the presence of viruses is detected, treatment is immediately started and another test for bronchitis is not given. However, if in the test, bacterial presence is seen, further tests may be required before prescribing antibiotics.
Further bronchitis testing is done only if the symptoms do not clear within two to three weeks. More complicated tests for chronic bronchitis may be recommended if you have recently been diagnosed with heart failure, pneumonia, or tuberculosis. For people whose immune system is not working well, complications may develop from bronchitis, so more tests are conducted immediately to reach a diagnosis quickly and start treatment.
Possible tests for bronchitis are chest x-rays, gram stains, culture and sensitivity tests, and blood, urine, and stool tests to check for viruses and bacteria. Usually, the x-rays of people who have acute bronchitis are normal and do not show any abnormalities. The mucus of the lungs is sent for a culture in order to check for bacterial presence. This helps the doctor to prescribe the correct antibiotic to the patient. |
Biology is among the most practical subjects and is both fascinating and important. Biology is not only important for school exams but it is also an integral part of competitive exams like NEET. Students who wish to pursue a medical career in the future need to have a strong foundation of the subject from an early stage.
In this article, some of the important tips are given that can help the students to learn biology more effectively and excel in the exams.
- Be Well Acquainted With The Basics
It is important to have a thorough understanding of the basics to be able to comprehend higher level concepts easily. This also helps the students to stay interested and motivated in learning the concepts. For example, it is important to know the basic concepts of nucleus, cells, structure of DNA, etc. to be able to understand the higher concepts in cell theory later.
- Visualize the Concepts

Visualization helps to understand the topics better and retain the concepts for longer. With visualization, engagement increases and students develop a deeper understanding of the subject.
- Relate Practically
Biology is an extremely practical subject, and almost all of its concepts can be related to the practical world. Topics like the human digestive system and the human brain can be understood better if the concepts are related to the actual human body.
- Be Thorough with the Diagrams
Since biology is a very conceptual subject, students need to be well-versed with various important terminologies and diagrams related to the subject. By knowing the diagrams, it becomes easy to understand the topics and visualize them. Diagrams are also an integral part of most exams and often direct questions like draw a labeled diagram of human liver, plant cell, animal cells, nucleus, etc. are asked in the exams.
- Revise Periodically

Biology is vast and includes several concepts and theories. To retain the concepts, students need to revise periodically and assess their current knowledge. It is suggested to revise the fundamental topics periodically, as the higher concepts are generally a continuation of the fundamental ones.
These were a few tips that can help students learn biology more effectively and score well in exams. Students can also subscribe to BYJU’S YouTube channel to learn different biology topics in a more engaging and efficient way.
Without flying insects like butterflies, wasps and bees, flowering plants would have a hard time surviving. To reproduce, flowers create seeds, which eventually grow into new plants. Seeds don't develop spontaneously; they develop after pollen, the sticky grains found on the stamen of a plant, comes in contact with the pistil. This process is called pollination.
Unlike humans and other animals, flowering plants can self-pollinate, since they have both the male (stamen) and female (pistil) reproductive parts. Self-pollination happens when pollen from a plant comes in contact with its own pistil. Seeds are produced but usually make for weaker plants. Cross-pollination occurs when the pollen from one plant is carried to the pistil of another plant. This type of pollination can produce the hardiest offspring, but it's difficult for most flowering plants to pull off [source: Missouri Botanical Garden]. Some flowering plants, like dandelions, have adapted to produce pollen that is easily carried off by the wind (or the strong breath of a child). Others get by with a little help from their friends in the insect world.
When winged insects look for nectar (a sugar water-like substance found in flowers), they generally climb around the reproductive organs of flowers to get it. Since there's only so much nectar to be found in a flower, insects will travel from flower to flower to get their fill. As they do this, the sticky pollen grains that attach to the insects' limbs are transferred to the pistils of other plants they visit. The miracle of cross-pollination has occurred.
Not all of the pollen is transferred to the pistils, however. Some of it remains on the insect. When honeybees return to their hives with a stomach full of nectar, pollen grains can also be found in and on the bee. Bees make honey by regurgitating the nectar (and pollen) into their mouths. Inside, enzymes break down the nectar into simple sugars. Bees spit the ensuing mixture into individual honeycombs and evaporate much of the water found in it by flapping their wings over it. Then they cover the honeycomb with wax until they're ready to use it for food or until a beekeeper breaks into the hive to remove the honey-filled combs found within.
Exactly what does this have to do with your runny nose, watery eyes and scratchy throat? Read the next page to find out how honey may cure what ails you. |
Introduction to alkanes
Alkanes are hydrocarbons - compounds that contain only carbon atoms and hydrogen atoms. They can be obtained by the fractional distillation of crude oil and their uses include:
- Fuelling cars, boats and planes
- The production of alkenes from long-chain alkanes
- The production of candle wax
- Creating new road surfaces
Structure of Alkanes
The only bonds between atoms of an alkane are single covalent bonds. Therefore, alkanes are saturated hydrocarbons: they contain no double or triple bonds.
The general formula of alkanes is CnH2n+2 where n is the number of carbon atoms in the longest carbon chain of the alkane.
Alkanes contain a chain of carbon atoms, all bonded to each other by single bonds. Shown below is the carbon chain of the alkane that contains six carbon atoms. In its organic compounds, carbon must form four bonds with other atoms, so any "unused" bonds are also shown.

  |   |   |   |   |   |
- C - C - C - C - C - C -
  |   |   |   |   |   |
As described above, an alkane contains only atoms of carbon and hydrogen. Therefore, hydrogen atoms are drawn where there is an "unused" bond on the above diagram.

    H   H   H   H   H   H
    |   |   |   |   |   |
H - C - C - C - C - C - C - H
    |   |   |   |   |   |
    H   H   H   H   H   H

In general, to draw the structural formula of an alkane:
- Draw a carbon chain of the correct length. Each carbon atom should be joined to its neighbour or neighbours by single bonds.
- Remembering that carbon forms four bonds in its organic compounds, add the correct number of hydrogens to any 'unused' bonds making sure to only attach hydrogen atoms to carbon atoms using single bonds.
An alkane's name is formed using the following rule: The letters ane are added to a prefix which depends on the number of carbon atoms in the longest carbon chain of the alkane.
The prefixes used are the same for all organic molecules. The list below shows the names of the first ten alkanes; the number before each name is the number of carbon atoms in the chain.

1 methane
2 ethane
3 propane
4 butane
5 pentane
6 hexane
7 heptane
8 octane
9 nonane
10 decane
Therefore, the alkane with six carbon atoms is hexane (shown above) and the alkane with ten carbon atoms is decane.
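As a quick illustration of the CnH2n+2 rule and the prefix-based naming just described, here is a small Python sketch; the prefix table covers only the first ten unbranched alkanes.

```python
# Name and molecular formula of an unbranched alkane from its carbon count,
# using the general formula CnH2n+2 and the standard prefixes.
PREFIXES = {1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent",
            6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def alkane(n_carbons):
    if n_carbons not in PREFIXES:
        raise ValueError("only the first ten alkanes are tabulated here")
    formula = f"C{n_carbons}H{2 * n_carbons + 2}"
    name = PREFIXES[n_carbons] + "ane"
    return name, formula

print(alkane(6))   # ('hexane', 'C6H14')
print(alkane(10))  # ('decane', 'C10H22')
```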
Reactions of Alkanes
Alkanes are highly flammable. Carbon dioxide and water are the only products of the reaction when alkanes are burned in excess oxygen. Carbon monoxide is also produced when alkanes are burned in insufficient oxygen. For example, when methane is burned in excess oxygen, carbon dioxide and water are produced.
CH4 (g) + 2O2 (g) → CO2 (g) + 2H2O (l)

The strong single covalent bonds are difficult to break, so alkanes are otherwise rather unreactive. Aside from being flammable, two other notable reactions are:
Cracking

This is performed on an industrial scale to produce very useful chemicals called alkenes. Alkanes with long carbon chains are not very useful, so they are heated in the presence of a catalyst (for example aluminium oxide) to produce an alkane with a shorter carbon chain and an alkene. For example, when decane is heated in the presence of a catalyst, octane and ethene are produced.
C10H22 (l) → C8H18 (l) + C2H4 (g)
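A useful sanity check on equations like these is that both cracking and combustion must conserve atoms; for complete combustion, an alkane CnH2n+2 needs (3n+1)/2 molecules of O2 and yields n CO2 and (n+1) H2O. The short sketch below, written for this article, verifies the atom balance for the decane cracking example and for methane combustion.

```python
# Check that C10H22 -> C8H18 + C2H4 (and CH4 + 2O2 -> CO2 + 2H2O) conserve atoms,
# counting elements directly from simple formulas such as 'C10H22'.
import re

def count_atoms(formula):
    """Return a dict of element counts for a formula like 'C10H22'."""
    counts = {}
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + int(number or 1)
    return counts

def balanced(reactants, products):
    def totals(side):
        out = {}
        for formula in side:
            for element, n in count_atoms(formula).items():
                out[element] = out.get(element, 0) + n
        return out
    return totals(reactants) == totals(products)

print(balanced(["C10H22"], ["C8H18", "C2H4"]))               # True: cracking of decane
print(balanced(["CH4", "O2", "O2"], ["CO2", "H2O", "H2O"]))  # True: methane combustion
```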
Reaction with Bromine
Alkanes undergo a substitution reaction with bromine (Br2 (l)) in strong ultraviolet light. For example, when pentane reacts with bromine, 1-bromopentane and hydrogen bromide gas are produced: a bromine atom is swapped with a hydrogen atom.
C5H12 (l) + Br2 (l) → C5H11Br (l) + HBr (g)
Because we rarely encounter problems with our horses' ears, we often take them for granted. The equine ear, however, is an indispensable communication tool. A horse's acute sense of hearing allows him to detect danger, communicate with other horses, and respond to his handler's vocal cues. Even the direction of a horse's ears imparts a world of information. If you watch carefully, they will reveal the animal's temperament and will even let you know where his attention is focused. Because the equine ear can convey so much information, learning how your horse's ears work and how his hearing differs from yours will help you to better understand and predict his behavior.
Structure of the Ear
Horses' ears, like yours, are finely tuned instruments designed to convert sound waves in the environment into action potentials in the auditory nerve. This nerve, which is located at the base of the skull, then sends the information to the brain to be translated and interpreted.
To collect sound waves from the environment, a horse uses his pinna, the large, cup-like part of the ear that you can see. Made of cartilage, the pinna can rotate to capture sound waves from all directions. This useful ability is due to the fact that horses have 16 auricular muscles controlling their pinna. Humans, in contrast, only have three such muscles, all of which are vestigial (almost useless).
After being trapped by the pinna, the collected sound waves are funneled through the external ear canal (commonly referred to as the auditory canal) to the middle ear, where they cause the eardrum, a thin membrane, to vibrate. These vibrations are then sent through the ossicles, a series of three tiny bones called the malleus, incus, and stapes. Finally, they reach the inner ear, where they cause vibrations in a snail-shaped structure called the cochlea.
Running up and down the cochlea are extremely sensitive hair cells that act as transducers. When these hair cells bend, they generate electrical signals that stimulate the auditory nerve. This nerve then passes the impulses on to the brain.
Ears in Communication
When attempting to hear something, horses will automatically flick their ears toward the source of the sound. Most horse owners are familiar with this phenomenon; we often see horses prick their ears forward when they are concentrating on something directly in front of them. This easily observable honing in on a noise is called the Preyer reflex, and it allows a horse to instinctively focus his attention on sound sources in the environment.
Knowing that a horse intuitively directs his ears toward whatever he's focusing on can come in very handy, especially when riding. The Preyer reflex can help you anticipate a spook or check where your horse's attention is directed.
In addition to showing us what a horse is concentrating on, the direction of the ears can tell us a lot about that particular animal's temperament. Although lop ears that flop either forward or out to each side are not correct from a conformational standpoint, they are said to be signs of a kind and generous horse. Similarly, a horse whose ears are frequently laid back against his head is considered to have a bad temper.
The reason horses often make this threatening gesture when they are angry or aggressive is thought to date back to prehistoric times, when horses adopted this posture to prevent their ears from being damaged when fighting.
As they flick back and forth, a horse's sensitive ears pick up a large range of sounds. According to Rickye Heffner, PhD, professor of psychology at the University of Toledo and a specialist in mammal hearing, horses can hear moderately loud sounds between 55 Hz and 33.5 kHz. This is in sharp contrast to a human's hearing; we cannot hear sounds higher than 20 kHz.
Although horses have very sensitive hearing even at such high frequencies, their ability to locate the source of a sound is not very precise. Spooks very often arise from the fact that horses can only locate the general direction of a noise, not its exact origin.
Because horses are prey animals, one might think this limitation would make wild horses more susceptible to predators. A horse's eyes, however, have evolved to make up for his lack of accurate sound localization. Horses have a wide, nearly panoramic field of vision, allowing them to easily spot oncoming predators from nearly any direction or distance.
So what makes some horses spook more often than others? Because sounds can be associated with negative experiences, a spooky horse might connect a particular noise with the threat of danger.
Some horses seem to jump at every little sound, regardless of its origin. While such animals might appear to have ultra-sensitive hearing, Sandra Edgar-Sargent, DVM, believes those horses are probably just more responsive to the sounds in their environment.
No horse, however, reacts to every sound it hears. Although horses instinctively pay attention to the vocalizations of other equines, as well as sounds that are not part of their normal repertoire, they filter out much of what they hear. This helps them make sure that only relevant sounds are acted upon.
"Not all horses who fail to respond to a sound do so because they can't hear it," says Heffner. "They may not be attending to the sound or they may have learned that the sound is not informative or important. We ignore most of the sounds around us even though we can certainly hear them."
Horses can, however, lose their hearing. "Like any other animal," says Heffner, "horses can have hearing loss due to age, some antibiotics, ear mites, and genetic disorders."
According to Heffner, age-related hearing loss generally begins to be noticeable in a horse's middle age (around 15 years), but can occur much earlier if a horse is exposed to loud sounds.
"Hearing loss almost always affects high frequencies the most," she says, "and these are particularly useful for sound localization. However, horses are unusual in that they don't use high frequencies to locate sound sources along the horizon, so they probably would not be affected in this ability by a high-frequency hearing loss. They are very poor at localizing sound in the horizontal plane and can't get much worse. High frequencies are also useful, however, for using the pinna to tell whether a sound is in front of or behind the horse, and for localizing sounds in the vertical plane (up-down distinctions), and these abilities could be reduced."
Horses that do lose their hearing generally compensate very well. However, if you suspect your horse is hard of hearing, Heffner suggests that you try making a very soft hissing sound with your back to the horse every time you give him grain for a few days. Then make the sound and see if the horse responds by associating the noise with getting grain. If he does, he probably isn't hard of hearing. If he does not react, Heffner stresses that this does not necessarily mean that your horse has a hearing loss. He just might not be fooled by your trick.
According to Sargent, the best way to truly evaluate a horse's hearing is through a BAER (brainstem auditory evoked response) test. This hearing test uses small electrodes placed under the skin of the scalp to detect electrical activity in the cochlea and auditory pathways.
"If your horse does have a hearing loss, you should make sure your commands are loud and clear and given when there are not other competing noises around," cautions Heffner. "You should not depend on the horse to know where sounds are coming from when out riding, and if the loss is severe, the horse may not detect oncoming vehicles or other animals. It might, therefore, be startled more than usual when something suddenly comes into view."
Hearing loss can also occur because of conditions that do not involve the ear itself. Parotitis, for example, is the swelling and inflammation of the parotid salivary gland, just below the ear. That can cause hearing loss.
Diseases of the guttural pouches can also cause swelling in this area and loss of hearing. The function of the guttural pouches--sacs that open into the Eustachian tubes leading to the middle ears--is not known. However, because the sacs are located so close to the pharynx, bacterial upper respiratory infections can sometimes spread into them and cause the accumulation of pus. Fungi can also invade the guttural pouches, causing pain in the parotid area, nasal discharge, neck stiffness, and abnormal head posture.
Potential Ear Problems
Problems in the guttural pouches can unfortunately also migrate to the middle ear. "A middle ear infection can be the result of bacterial or fungal infections that come from the bloodstream or from the guttural pouch," says Sargent. "Unlike other species, when a horse gets a middle ear infection, it often migrates ventrally (downward). Then, instead of rupturing the eardrum and draining into the external ear canal, it causes inflammation of the tympanic bulla, which houses the middle ear, as well as the stylohyoid bone (part of the skull). Because this inflammation causes excessive bone formation, it results in fusion of the temporohyoid joint where the stylohyoid bone and temporal bone meet."
Fusion of this joint can lead to stress fractures in the petrous temporal or stylohyoid bones of the skull, which can in turn cause neurological problems. If this occurs, Sargent says the horse might exhibit ear rubbing, head tossing, and chomping movements, and he might have pain when the base of the ear is palpated. A severely affected horse will be depressed, keep his head tilted, walk in circles, and appear dizzy. The nerves of the face might also be paralyzed, resulting in drooping ears and lips, drooling, and an inability to blink.
Fortunately, Sargent says, ear problems are rare in the horse. Middle ear infections are uncommon, and horses rarely get infections of the external ear canal like dogs and cats do.
Probably the most frequently encountered problem with a horse's ears is external parasites. "Ticks, chiggers, and Psoroptes mites can all sometimes get down in there," says Sargent. "In the southwestern and western parts of the United States, there is a certain kind of tick called the spinose ear tick that can get in a horse's ear, but it occurs uncommonly."
According to Heffner, ear mites can be the cause of head rubbing, head shaking, and irritability. "Some ear mites produce a waxy plug in the ear canal between them and the outside world, which prevents medication from reaching them and also affects hearing," she says. "You can't always see the mites because they live in the ear canal near the eardrum, but if the horse is restrained, they might be visible using an otoscope."
If you suspect that your horse has ear mites, even if you can't see them, it is best to consult your veterinarian so that the horse can be treated promptly. Your vet will likely need to use heavy sedation in order to examine the deep ear canal and might recommend a dewormer or special drops to get rid of the mites.
Like mites, blackflies are also quite irritating to horses. Although these tiny, blood-feeding gnats are only 1-6 mm long, their minuscule, serrated mouthparts can inflict painful bites.
In addition to being extremely aggravating to horses, blackfly bites are thought to spread the virus that can cause aural plaques--flat, gray-white papillomas that are sometimes found on horses' ears.
"Aural plaques are scaly lesions that form on the inside of the pinna," says Sargent. "They are caused by a virus and are typically asymptomatic, meaning that they really cause the horse no problems, so we typically recommend leaving them alone."
General Ear Care
Although ear problems arise infrequently in horses, Sargent suggests keeping an eye out for excessive head shaking and discharge from your horse's ears. If your horse is shaking his head repeatedly and rubbing his ears on anything he can find to do the job, or if there is blood or fluid coming from his ears, you should call your veterinarian.
"Normally, however, if you are not seeing any of those things, I would not recommend doing anything to your horse's ears," comments Sargent. "Horses often resent their ears being handled, so if you're not having any problems, I would leave them alone."
Because the hair in horses' ears prevents dirt and insects from getting inside, Sargent also recommends that you refrain from clipping your horse's ears. "If they're not having any problems, I wouldn't stress a horse out by clipping his ears," she says. "The only time I would trim the ears is when the horse is real sensitive to gnats, and the bites are making his ears bleed. If his ears are scabby and the hair is matted up, then you might want to shave the ear hair, clean the area out, and put some kind of fly repellent on the ear. I would only clean a horse's ears, however, upon the recommendation of a veterinarian, because you can do more harm than good if you get fluid down in the ear."
Unfortunately, not all horses will let you clip the hair from their ears. "Some horses will require sedation for you to be able to trim the hair out of their ears or even examine them very closely," says Sargent. If your horse is not head shy, however, Sargent recommends using small, quiet clippers like those used on dogs.
If you have a horse who is excessively bothered by gnats, Sargent also suggests that you wipe the ears with fly repellent or use a fly mask that includes ear covers. "There are also lotions and creams that have fly repellent in them," she says. "If your horse will let you apply them, they have more of a residue action and last longer than just using fly spray." She recommends choosing the method that your horse will tolerate.
Taking precautions will help keep your horse's ears comfortable and allow him to hear properly. Along with keeping him healthy, paying attention to your horse's ears will give you valuable insight into his behavior.
About the Author
Erika Street is a writer and filmmaker with a BA in animal physiology. |
The Guitar, The Oud and The Lute Instrument
Lutes are generally thought to have originated in Mesopotamia around 2000 BC, from where they traveled both west to Europe and east to Asia. Many different designs and variations on the basic design have existed through the ages. The long lute, with a neck longer than its body, dates back to around 2000 BC and has modern descendants in several countries (e.g., the tar of Turkey and Iran, the sitar and vina of India, the bouzouki of Greece, the tambura of India and Yugoslavia, and the ruan of China). The short lute, which dates from about 800 BC, is the ancestor of the European lute as well as many other plucked string instruments around the world.
The European lute first appeared in the thirteenth century, deriving its name from the Arabic phrase “al-oud,” which means “made of wood.” The lute is one of the most attractive and delicate of all Renaissance musical instruments. Its principal characteristics are an exceptional lightness of construction, a rounded back constructed from a number of ribs, and a peg-box set at an angle to the fingerboard.
Instruments of the sixteenth century generally had eleven strings in six courses (all but the uppermost consisting of two unison strings), which might be tuned to A2, D3, G3, B3, E4, and A4, although the tuning was often changed to fit the music being played. Sometimes the lower three courses were tuned in octaves.
In the seventeenth century, an increasing number of bass courses were added. These usually ran alongside the fingerboard, so that they were unalterable in pitch during playing. Lundberg (1987) describes a family of Italian sixteenth/seventeenth-century lutes as follows:
Small octave: four courses, string length 30 cm;
Descant: seven courses, string length 44 cm;
Alto: seven courses, string length 58 cm;
Tenor: seven courses, string length 67 cm;
Bass: seven courses, string length 78 cm;
Octave bass: seven courses, string length 95 cm.

The oud, by comparison, is commonly considered to be tuned (from the lowest course upward) F2, A2, D3, G3, C4, F4, covering the mid-range octaves of the piano and the main singing range of the human voice.
The pear-shaped body of the lute is fabricated by gluing together a number (from 9 up to as many as 37) of thin wooden ribs. The table or sound board is usually fabricated from spruce, 2.5–3.0 mm thick, although other woods, such as cedar and cypress, have also been used.
Acoustics of the European Short Lute
Only a few studies on the acoustical behavior of lutes have been reported. Firth (1977) measured the input admittance (driving point mobility) at the treble end of the bridge and the radiated sound level 1 m away.
Firth associates the peak at 132 Hz with the Helmholtz air mode and the peaks at 304, 395, and 602 Hz with resonances in the top plate. Figure 3.27 illustrates five such resonances and also shows how the positions of the nodal lines are related to the locations of the bars. The resonances at 515 and 652 Hz are not excited to any extent by a force applied to the bridge because they have nodes very close to the bridge.

Fig. 3.27 (a) Barring pattern and nodal patterns in the top plate of a lute at five resonances; (b) locations of nodes compared to the bridge and the bars (Firth 1977)
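The Helmholtz air resonance mentioned above can be estimated with the standard resonator formula f = (c/2π)·sqrt(A/(V·L_eff)). The sketch below uses round, assumed numbers for a lute-sized cavity simply to show the order of magnitude; it is not fitted to Firth's instrument, and the measured peak is also shifted by coupling to the flexible top plate, which this rigid-cavity formula ignores.

```python
import math

def helmholtz_frequency(cavity_volume_m3, hole_radius_m, c=343.0):
    """Estimate the Helmholtz air resonance of a cavity with one circular
    opening, using an end-corrected effective neck length of ~1.7*r for a
    hole in a thin plate (a common approximation)."""
    area = math.pi * hole_radius_m ** 2
    effective_length = 1.7 * hole_radius_m            # approximate end correction
    return (c / (2 * math.pi)) * math.sqrt(area / (cavity_volume_m3 * effective_length))

# Assumed round numbers for a lute-sized body (not measured values):
# about 10 litres of enclosed air and a 4 cm radius sound hole.
print(f"{helmholtz_frequency(0.010, 0.04):.0f} Hz")   # roughly 150 Hz
```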
Acoustics of the Turkish Long-Necked Lute
The Turkish tanbur is a long-necked lute with a quasi-hemispherical body shell made of 17, 21, or 23 thin slices of thickness 2.5–3.0 mm. The slices are usually cut from ebony, rosewood, pearwood, walnut, or cherry. The sound board is made of a thin (1.5–2 mm) spruce panel and has neither a sound hole nor braces. The strings are stretched between a raised nut and a violin-like bridge. The long neck (73.5–84 cm), which is typically made of ebony or juniper, hosts 52–58 movable frets of gut or nylon. The tanbur has seven strings, six of them grouped in pairs; the lowest string, tuned to A1, is single. The pairs are tuned to A2, D2, and again A2 (or alternatively A2, E2, and A2).
The impulse responses of the tanbur body for three orthogonal force impulses applied to the bridge are shown in Fig. 3.29. These responses include the effects of the driving-point admittance of the bridge, the vibration of the body and neck, and the directivity of the radiation pattern. They were recorded in an anechoic room (Erkut et al. 1999).
A modal shape represents the motion of the guitar in a normal mode of vibration. Optical methods give the best spatial resolution of a given operational deflection shape (ODS), which in many cases closely resembles a normal mode. Optical methods include holographic interferometry, speckle-pattern interferometry, and scanning laser vibrometry.
Fig. 3.12 Holographic interferograms of a classical guitar top plate at several resonances. Resonance frequencies and Q-values (a measure of the sharpness of the resonance) are given (Richardson and Roberts 1985)

Another technique for obtaining modal shapes, called experimental modal testing, excites the guitar body with a force hammer and uses an accelerometer to sense its motion. The force hammer is moved from point to point in a grid, and a frequency response function (FRF) is determined for each point of excitation. The resulting FRFs are processed by a computer and the modal shape is determined by use of a curve-fitting program.
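As a hedged numerical sketch of the procedure just described: a common way to estimate an FRF from the hammer-force and accelerometer signals is the H1 estimator, the cross-spectrum of input and output divided by the input auto-spectrum. The signals below are synthetic stand-ins (a single resonance near 200 Hz), not data from any of the cited studies.

```python
# Estimate a frequency response function (FRF) from a force-hammer input and an
# accelerometer output with the H1 estimator, H1(f) = S_fx(f) / S_ff(f).
import numpy as np
from scipy import signal

fs = 8000                                                # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)

force = np.random.default_rng(0).normal(size=t.size)    # stand-in for hammer force
b, a = signal.iirpeak(200, Q=30, fs=fs)                  # stand-in structure: one resonance at 200 Hz
accel = signal.lfilter(b, a, force)                      # stand-in accelerometer signal

f, S_ff = signal.welch(force, fs=fs, nperseg=2048)       # input auto-spectrum
_, S_fx = signal.csd(force, accel, fs=fs, nperseg=2048)  # input-output cross-spectrum

H1 = S_fx / S_ff                                         # FRF estimate at this point
peak_hz = f[np.argmax(np.abs(H1))]
print(f"Estimated resonance near {peak_hz:.0f} Hz")      # ~200 Hz for this stand-in
```

Repeating the same estimate for every hammer position in the grid gives the set of FRFs that the curve-fitting step turns into modal shapes.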
3.3 String Forces
A player can alter the tone of a guitar by adjusting the angle through which the string is plucked. Not only do forces parallel and perpendicular to the bridge excite different sets of resonances, but they result in tones that have different decay rates, as shown in Fig. 3.13. When the string is plucked perpendicular to the top plate, a strong but rapidly decaying tone is obtained. When the string is plucked parallel to the plate, on the other hand, a weaker but longer tone results. Thus, a guitar tone can be regarded as having a compound decay rate, as shown in Fig. 3.13 (bottom). The spectra of the initial and final parts of the tone vary substantially, as do the decay rates.
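A hedged toy model of this compound decay is simply the sum of two exponentially decaying components: a strong, fast-decaying one from motion perpendicular to the top plate and a weaker, slow-decaying one from motion parallel to it. The amplitudes and time constants below are invented round numbers for illustration, not Jansson's measured values.

```python
# Toy model of a compound guitar decay: a loud, fast-decaying component plus a
# quieter, slow-decaying one. Early in the tone the fast component dominates;
# later the slow component takes over, giving two distinct decay slopes.
import numpy as np

t = np.linspace(0, 2.0, 9)               # seconds
fast = 1.00 * np.exp(-t / 0.10)          # "perpendicular" part: strong, dies quickly
slow = 0.05 * np.exp(-t / 1.00)          # "parallel" part: weak, lingers
level_db = 20 * np.log10(fast + slow)

for ti, li in zip(t, level_db):
    print(f"t = {ti:4.2f} s   level = {li:6.1f} dB")
# The printed levels fall steeply at first (fast component) and then settle
# onto a much gentler slope (slow component): the compound decay.
```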
Fig. 3.13 Decay rates of guitar tone for different plucking directions (Jansson 1983)

Classical guitarists use primarily two strokes, called apoyando and tirando (sometimes called the rest and free strokes). The fingernail acts as sort of a ramp, converting some of the horizontal motion of the finger into vertical motion of the string, as shown in Fig. 3.14. Although the apoyando stroke tends to induce slightly more vertical string motion, there is little difference between the two strokes in this regard. However, the player can change the balance between horizontal and vertical string motion by varying the angle of the fingertip (Taylor 1978).

Fig. 3.14 Finger motion and resulting string motion of apoyando and tirando strokes. In the apoyando stroke, the finger comes to rest on an adjacent string; in the tirando stroke, it rises enough to clear it (Taylor 1978)
Guitar Sound Radiation
Sound radiation from a guitar, like most musical instruments, varies with direction and frequency. Even with sinusoidal excitation at a single point (such as the bridge), the radiated sound field is complicated because several different modes of vibration with different patterns of radiation may be excited at the same time. Figure 3.15 shows the sound spectrum one meter in front of a Martin D-28 folk guitar in an anechoic room when a sinusoidal force of 0.15 N is applied to the treble side of the bridge. Also shown is the mechanical frequency response curve (acceleration level versus frequency). Note that most of the mechanical resonances result in peaks in the radiated sound, but that the strong resonances around 376 and 436 Hz (which represent “seesaw” motion; see Fig. 3.11) do not radiate strongly in this direction. The mode at 102 Hz radiates efficiently through the sound hole.

Fig. 3.15 Mechanical frequency response and sound spectrum one meter in front of a Martin D-28 steel-string guitar driven by a sinusoidal force of 0.15 N applied to the treble side of the bridge. The solid curve is the sound spectrum; the dashed curve is acceleration at the driving point

Fig. 3.16 Sound radiation patterns at four resonance frequencies in a Martin D-28 folk guitar (compare with Fig. 3.7, which shows the corresponding modal shapes) (Popp and Rossing 1986)
Figure 3.16 shows polar sound radiation patterns in an anechoic room for the modes at 102, 204, 376, and 436 Hz. The modes at 102 and 204 Hz radiate quite efficiently in all directions, as would be expected in view of the mode shapes (see Fig. 3.7). Radiation at 376 Hz, however, shows a dipole character, and at 436 Hz a strong quadrupole character is apparent, as expected from Fig. 3.7 (Popp and Rossing 1986).
Fig. 3.17 Comparison of the sound level of the fundamentals of played notes (bars) to the guitar frequency response function (solid curve) with its level adjusted for a good fit. A graph of the rate of sound decay (dB/s) versus frequency similarly follows the frequency response curve (Caldersmith and Jansson 1980)
The output spectrum of a guitar may be calculated by multiplying the bridge force spectrum by the frequency response function of the guitar body. This is greatly complicated, however, by the rapid change in the force spectrum with time after the pluck (see Fig. 3.13). Caldersmith and Jansson (1980) measured the initial sound level and the rate of sound decay for played notes on guitars of high and medium quality. They found that both the initial sound level and the rate of decay replicate the frequency response curve of a guitar, as shown in Fig. 3.17. At strong resonances, however, the initial levels are slightly lower, and the levels decay faster than predicted by the frequency response curves.
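A minimal numerical sketch of that relationship, for a fixed moment in time: given a bridge force spectrum and a body frequency response function on a common frequency grid, the radiated spectrum is, to first order, their product. Both spectra below are synthetic placeholders, and the time dependence of the real force spectrum is deliberately ignored.

```python
# Output spectrum of a plucked note approximated as the product of a bridge
# force spectrum and a body frequency response function (both synthetic here).
import numpy as np

f = np.linspace(20, 2000, 4000)          # frequency grid, Hz

# Idealised pluck on a 196 Hz (G3) string: harmonics with a 1/n roll-off,
# each drawn as a narrow Gaussian line.
force = np.zeros_like(f)
for n in range(1, 11):
    force += (1.0 / n) * np.exp(-0.5 * ((f - 196 * n) / 2.0) ** 2)

# Body FRF as a sum of a few resonant peaks (placeholder resonances).
def peak(f, f0, width):
    return 1.0 / np.sqrt(1.0 + ((f - f0) / width) ** 2)

frf = peak(f, 100, 15) + 0.8 * peak(f, 200, 20) + 0.5 * peak(f, 400, 30)

radiated = force * frf                   # first-order output spectrum
print(f"Strongest radiated component near {f[np.argmax(radiated)]:.0f} Hz")
```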
Rating the sound quality of classical guitars and how the quality depends on design and construction details have been studied by several investigators. According to Jansson (2002), most guitar players feel that tonal strength or carrying power is the most important single quality criterion, with tone length and timbre being the second most important. In the previous section, we mentioned how the initial sound level and rate of sound decay depends upon the resonances of a guitar body.
Tones from recorded music were analyzed in the form of long time average spectra (LTAS), and it was found that better guitars have a higher level up to 3,000 Hz. In a comparison of two guitars, the less good guitar tended to have a lower level between 400 and 2,000 Hz (Jansson 2002).
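Long time average spectra of this kind can be approximated by averaging short-time power spectra over a whole recording, which is what Welch's method does; the sketch below uses a synthetic decaying tone as a placeholder for a real guitar recording, and the 3,000 Hz comparison mirrors the criterion quoted above.

```python
# Long time average spectrum (LTAS) of a recording, approximated here by
# Welch-averaged power spectra expressed in dB.
import numpy as np
from scipy import signal

fs = 44100
t = np.arange(0, 5.0, 1 / fs)
rng = np.random.default_rng(1)
audio = np.exp(-t) * np.sin(2 * np.pi * 196 * t) + 0.01 * rng.normal(size=t.size)

f, pxx = signal.welch(audio, fs=fs, nperseg=4096)   # average of short-time spectra
ltas_db = 10 * np.log10(pxx + 1e-20)

# Compare the average level below and above 3 kHz, as in the quality comparisons.
low = ltas_db[f <= 3000].mean()
high = ltas_db[f > 3000].mean()
print(f"Mean level up to 3 kHz: {low:.1f} dB, above 3 kHz: {high:.1f} dB")
```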
Some extensive listening tests were conducted at the Physikalisch-Technische Bundesanstalt in Germany to try to correlate quality in guitars to their measured frequency response (Meyer 1983). Some of the features that correlated best with high quality were:
1. The peak level of the third resonance (around 400 Hz);
2. The amount by which this resonance stands above the resonance curve level;
3. The sharpness (Q value) of this resonance;
4. The average level of one-third-octave bands in the range 80–125 Hz;
5. The average level of one-third-octave bands in the range 250–400 Hz;
6. The average level of one-third-octave bands in the range 315–5,005 Hz;
7. The average level of one-third-octave bands in the range 80–1,000 Hz;
8. The peak level of the second resonance (around 200 Hz).
3.5.1 Influence of Design and Construction
Meyer found that using fewer struts, varying their spacing, adding transverse bracing, and reducing the size of the bridge all had desirable effects (Meyer 1983). He experimented with several different bridge shapes and found that a bridge without “wings” gave the best result.
Jansson (2002) found the following order of importance for different parts in determining quality:
2. Top plate thickness
3. Cross bars or struts.
So-called “frame” guitar designs have a rigid waist bar to inhibit leakage of vibrational energy from the lower bout to the upper bout and other parts of the guitar.
The bridge has a marked stiffening effect on the top plate, and thus affects the vibrations. For a heavy bridge, the frequency of the first top plate resonance may decrease, the mass giving a larger contribution than the stiffness increase. Handmade Spanish bridges tend to be considerably lighter and less rigid than factory-made bridges. At low frequencies the mass increase may dominate, but at higher frequencies the stiffening effect dominates (Jansson 2002).
3.5.3 Thickness of the Top Plate and Braces
Richardson and Roberts (1985) studied the influence of top plate and strut thickness with finite-element modeling using a computer. At the start, the plate thickness was 2.9 mm, and the struts were 14 mm high and 5 mm wide. Their calculations showed that the cross struts have a large influence, at least on the low resonances. A reduction in strut height also has a large influence on the resonance frequencies. Reduction in top plate thickness, especially thinning along the edge, has the greatest effect of all.

Fig. 3.18 Lattice bracing of a guitar top plate used by Australian luthier Greg Smallman. Struts are typically of carbon-fiber-epoxy, thickest at the bridge and tapering away from the bridge in all directions (Caldersmith and Williams 1986)
Richardson and his students have also found that reducing the effective mass has a great effect on radiation of high-frequency sound, even more than tuning the mode frequencies (Richardson 1998). The effective mass is difficult to control, however, after the choice of materials and general design has been made. Of primary importance is the effective mass of the fundamental sound board mode.
Australian luthier Greg Smallman, who builds guitars for John Williams, has enjoyed considerable success by using lightweight top plates supported by a lattice of braces, the heights of which are tapered away from the bridge in all directions, as shown in Fig. 3.18. Smallman generally uses struts of carbon-fiber-epoxy epoxied to balsa wood (typically 3 mm wide and 8 mm high at their tallest point) in order to achieve a high stiffness-to-mass ratio and hence high resonance frequencies or “lightness” (Caldersmith and Williams 1986).
3.5.4 Asymmetrical and Radial Bracing
Although many classical guitars are symmetrical around their center plane, a number of luthiers (e.g., Hauser in Germany and Ramirez in Spain, Schneider and Eban in the United States) have had considerable success by introducing varying degrees of asymmetry into their designs. Most asymmetric guitars have shorter but thicker struts on the treble side, thus making the plate stiffer. Three such top plate designs are shown in Fig. 3.19.
Fig. 3.19 Examples of asymmetric top plates: (a) Ramirez (Spain); (b) Fleta (Spain); (c) Eban (United States)
Fig. 3.20 Holographic interferograms showing modal shapes of two low-frequency modes at 101 and 304 Hz in a radially braced classical guitar (Rossing and Eban 1999)
The very asymmetric design in Fig. 3.19c was proposed by Kasha (1974) and developed by luthiers Richard Schneider, Gila Eban, and others. It has a split asymmetric bridge (outlined by the dashed line) and closely spaced struts of varying length. A waist bar (WB) bridges the two long struts and the sound hole liner.
Despite its asymmetry the vibrational modal shapes, at least at low frequency, are quite similar to other good classical guitars, as shown in the holographic interferograms in Fig. 3.20. The particular guitar in this modal study had a one-piece bridge and radial bracing in the back plate as well as the top plate. Other luthiers have had considerable success with radial bracing. Australian luthier Simon Marty uses a radial bracing of balsa or cedar reinforced with carbon fiber. Trevor Gore has had success using falcate bracing with curved braces of balsa and carbon fiber.
3.6 A Family of Scaled Guitars
Members of guitar ensembles (trios, quartets) generally play instruments of similar design, but Australian physicist/luthier Graham Caldersmith has created a new family of guitars especially designed for ensemble performance. (Actually, he has created two such families: one of classical guitars and one of steel-string folk guitars.) His classical guitar family, including a treble guitar, a baritone guitar, and a bass guitar in addition to the conventional guitar – which becomes the tenor of the family – has been played and recorded extensively by the Australian quartet Guitar Trek (Caldersmith 1995).
Caldersmith’s guitar families include carefully scaled instruments, the tunings and resonances of which are translated up and down by musical fourths and fifths, in much the same way as the Hutchins–Schelleng violin octet (see Chap. 18). Caldersmith’s bass guitar is a four-string instrument tuned the same as the string bass and the electric bass (E1, A1, D2, G2), an octave below the four lowest strings of the standard guitar. The baritone is a six-string instrument tuned a musical fifth below the standard, while the treble is tuned a musical fourth above the standard, being then an octave above the baritone. Caldersmith uses an internal frame, but a graded rectangular lattice instead of the diagonal lattice (see Fig. 3.21). The Australian Guitar Quartet is shown in Fig. 3.22.
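The transposition scheme is easy to tabulate: in equal temperament, a perfect fifth down is a factor of 2^(-7/12) in frequency and a perfect fourth up is 2^(5/12), so the treble ends up exactly an octave above the baritone. The sketch below derives illustrative open-string frequencies from the standard tuning; it shows the scaling arithmetic only and is not Caldersmith's design data (the dictionary keys label the standard guitar's strings, even though the transposed pitches carry different note names).

```python
# Open-string frequencies for a scaled guitar family derived from the standard
# tuning by equal-tempered intervals: baritone a fifth below, treble a fourth above.
STANDARD = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
            "G3": 196.00, "B3": 246.94, "E4": 329.63}   # Hz

FIFTH_DOWN = 2 ** (-7 / 12)   # perfect fifth below
FOURTH_UP = 2 ** (5 / 12)     # perfect fourth above

baritone = {name: f * FIFTH_DOWN for name, f in STANDARD.items()}
treble = {name: f * FOURTH_UP for name, f in STANDARD.items()}

for name, f_std in STANDARD.items():
    print(f"{name}: standard {f_std:7.2f} Hz   "
          f"baritone {baritone[name]:7.2f} Hz   treble {treble[name]:7.2f} Hz")

# Check the octave relation quoted in the text: treble is an octave above baritone.
assert all(abs(treble[n] / baritone[n] - 2.0) < 1e-9 for n in STANDARD)
```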
Fig. 3.21 Caldersmith guitar with internal frame and rectangular lattice
Fig. 3.22 The Australian Guitar Quartet play on scaled guitars: bass and baritone by Graham Caldersmith, standard and treble by Greg Smallman and Eugene Philp
3.7 Synthetic Materials
Traditionally guitars have top plates of spruce or redwood with backs and ribs of rosewood or some comparable hardwood. Partly because traditional woods are sometimes in short supply, luthiers have experimented with a variety of other woods, such as cedar, pine, mahogany, ash, elder, and maple. Bowls of fiberglass, used to replace the wooden back and sides of guitars, were developed by the Kaman company in 1966; their Ovation guitars have become popular, partly because of their great durability.
One of the first successful attempts to build a guitar mostly of synthetic materials was described by Haines et al. (1975). The body of this instrument, built to the dimensions of a Martin folk guitar, used composite sandwich plates with graphite-epoxy facings around a cardboard core. In listening tests, the guitar of synthetic material was judged equal to the wood standard for playing scales, but inferior for playing chords. In France, Charles Besnainou and his colleagues have constructed lutes, violins, violas, cellos, double basses, and harpsichords, as well as guitars, using synthetic materials (Besnainou 1995).
3.8 Other Families of Guitars
Most of our discussion has been centered on classical guitars, with occasional comparison to the steel-string American folk (flat top) guitar. There are several other types of acoustic guitars in use throughout the world, including flamenco, archtop, 12-string, jazz, resonator, etc. Portuguese guitars will be discussed in Chap. 4. Some Asian plucked string instruments of the lute family will be discussed in Chap. 11.
The gypsy guitar, known in France as the manouche guitar, gained popularity in the late 1920s. Played by Django Reinhardt throughout his career, the instrument has seen a revival in interest. The community of gypsy jazz players today is a small, but growing one, and the original Selmer–Maccaferri guitars are highly valued and widely copied. Its low-gauge strings offer its player a brighter, more metallic tone, with an ease for creating a very distinct vibrato (Lee et al. 2007).
References

C. Besnainou (1995). “From wood mechanical measurements to composite materials for musical instruments: New technology for instrument makers.” MRS Bull. 20(3), 34–36.
R. R. Boullosa (1981). “The use of transient excitation for guitar frequency response testing.” Catgut Acoust. Soc. Newsl. 36, 17.
G. Caldersmith (1995). “Designing a guitar family.” Appl. Acoust. 46, 3–17.
G. W. Caldersmith and E. V. Jansson (1980). “Frequency response and played tones of guitars.” Quarterly Report STL-QPSR 4/1980, Department of Speech Technology and Music Acoustics, Royal Institute of Technology (KTH), Stockholm, pp. 50–61.
G. Caldersmith and J. Williams (1986). “Meet Greg Smallman.” Am. Lutherie 8, 30–34.
O. Christensen and R. B. Vistisen (1980). “Simple model for low-frequency guitar function.” J. Acoust. Soc. Am. 68, 758–766.
C. Erkut, T. Tolonen, M. Karjalainen, and V. Välimäki (1999). “Acoustical analysis of tanbur, a Turkish long-necked lute.” Proceedings 6th International Congress on Sound and Vibration.
This explanation of the lute and guitar draws on Thomas Rossing (ed.), The Science of String Instruments.
New Jersey Endangered and Threatened Species Field Guide
Species Group: Reptile
The wood turtle’s carapace, or upper shell, has a sculpted or grooved appearance. Each season a new annulus, or ridge, is formed, giving each scute (a scale-like horny layer) a distinctive pyramid-shaped appearance. As the turtle ages, natural wear smoothes the surface of the shell. While the scutes of the carapace are brown, the plastron, or underneath shell, consists of yellow scutes with brown or black blotches on each outer edge. The legs and throat are reddish-orange. The male wood turtle has a concave plastron while that of the female is flat or convex. The male also has a thicker tail than the female. Adult wood turtles measure 5.5 to 8.0 inches in length.
Distribution and Habitat
The wood turtle is found in eastern North America, ranging from Nova Scotia in the north to Virginia in the south, and from the easternmost US states west to Minnesota. In New Jersey, the range is primarily limited to the northern and central portions of the state, although there have been a few occurrences within the Pine Barrens.
Wood turtles historically inhabited nearly all counties in northern and central New Jersey. However, habitat loss has restricted their distribution to disjunct populations associated with particular drainages. The largest populations in the state currently exist in Hunterdon, Morris, Sussex, Passaic, and Warren counties.
Unlike other turtle species that favor either land or water, the wood turtle resides in both aquatic and terrestrial environments. Aquatic habitats are required for mating, feeding, and hibernation, while terrestrial habitats are used for egg laying and foraging.
Freshwater streams, brooks, creeks, or rivers that are relatively remote provide the habitat needed by these turtles. Wood turtles are often found within streams containing native brook trout. These tributaries are characteristically clean, free of litter and pollutants, and occur within undisturbed uplands such as fields, meadows, or forests. Open fields and thickets of alder, greenbrier, or multiflora rose are favored basking habitats. Lowland, mid-successional forests dominated by oaks, black birch, and red maple may also be used. Wood turtles may also be found on abandoned railroad beds or agricultural fields and pastures.
Wood turtle habitats typically contain few roads and are often more than one-half mile away from developed or populated areas. Individuals from relict or declining populations are also sighted in areas of formerly good habitat that have been fragmented by roads and development.
An omnivorous species, the wood turtle consumes a variety of animal and plant matter. Insects and their larvae, worms, slugs, snails, fish, frogs, tadpoles, crayfish, and carrion are included in their diet. In addition, these turtles forage on leaves, algae, moss, mushrooms, fruit, and berries.
During early March, wood turtles emerge from hibernation and bask along stream banks. Breeding activity begins as water temperatures warm to about 59°F. The turtles mate within streams during April and, by mid-May, move to dry land, where they will spend the next several months. Females seek elevated, well-drained, open areas where they can dig their nests. Nests are often excavated at depths of 3.5 – 4.5 inches.
The female wood turtle lays a clutch of eight to nine smooth, white eggs which hatch in about 70 to 71 days. The eggs, hatchlings, and adults may fall prey to raccoons or skunks. Hatchling wood turtles are 1.5 inches in length and are distinguished by their long tails, which can be nearly equal in length to the carapace. If the young turtles survive, they can live 20 to 30 years, reaching sexual maturity in their fourteenth year.
During the summer months, adult wood turtles wander along stream corridors while foraging in open fields and woodlands. In New Jersey, marked wood turtles have been observed at locations up to 0.56 miles from their wintering streams. One turtle traversed nearly 1 mile within two months. The turtles are especially vulnerable to being struck and killed by automobiles while crossing roads during this nomadic period.
Wood turtles return to streams and creeks and begin hibernating by late November. They winter in muddy stream bottoms, within creek banks, or in abandoned muskrat holes. Individuals may overwinter in the same stream or embankment during successive years.
Current Threats, Status, and Conservation
Historically, the wood turtle was a fairly common species within suitable habitat in New Jersey. By the 1970s, however, declines were noted as wood turtles were absent from many historic sites due to habitat loss and stream degradation. Because of this, the wood turtle was listed as a threatened species in New Jersey in 1979.
Since the late 1970s, biologists have monitored and surveyed wood turtle sites in New Jersey, providing valuable data regarding the life history, reproduction, and habitat use of these turtles in the state. There is, however, a continuing need to examine the productivity and juvenile survival of wood turtles, which may be threatened by disturbance or predation. The New Jersey Endangered Species Act prohibits the collection or possession of wood turtles. However, collecting of wood turtles as pets continues to be a problem which may result in reduced population size or the loss of local populations.
Text derived from the book Endangered and Threatened Wildlife of New Jersey (2003), originally edited by Bruce E. Beans and Larry Niles. Edited and updated by Michael J. Davenport in 2010.
Species: G. insculpta
Inclusive Design Tips: Presenting Information in Multiple Ways
For those of us who have good vision and hearing, websites and digital media can provide very colorful, stimulating, and impactful experiences. But what if you couldn’t see color? What if you couldn’t hear, or needed to keep your computer or phone muted? What if you couldn’t see at all? If you’re a web professional, how can you make sure that a person who is colorblind, fully blind, or deaf (or all of the above!) is able to have the same impactful experience with your website as everyone else?
Most people use multiple senses to perceive and interact with websites and apps: primarily sight, but also hearing and touch. A person who can’t see will rely on a screen reader, which allows them to hear the content of a webpage. If that person also couldn’t hear, they would rely on the screen reader to translate the text from a webpage into braille through a refreshable braille display.
Designing a site for people of all abilities means that the content of the site needs to be perceivable and understandable in multiple ways. That often means being redundant. It also often means making sure that everything has a text-based equivalent. Text is by far the easiest medium for assistive technology to translate.
This blog post addresses three related topics in the area of accessible design:
- Providing text or visual alternatives when using color as a form of information
- Making sure that audio cues don’t rely entirely on the user’s ability to hear
- Ensuring that references to visual information like color, shape, and position are presented so that users without sight can still understand
These three topics are all part of the Web Content Accessibility Guidelines (WCAG), and were designed to help prevent web authors and designers from accidentally excluding users who are unable to use one or more of their senses when using the web.
Color as Information
Color is often used as a pointer in digital media to draw the user’s attention to some specific element. Unfortunately, somewhere between 5% and 10% of people on Earth are at least partially colorblind. A red outline around a form field to indicate an error may be useful for many people, but if a person looking at it can only perceive red as a shade of gray, the information conveyed by the red outline (i.e. the fact that there’s an error) will be lost on them.
If, however, the red outline is accompanied by a text-based error message and an exclamation point icon, the error will be obvious regardless of whether the person looking at it can see color or not.
Representations of more complex information, like charts and graphs, are where color particularly comes into play. In many cases, color is just one visual aspect that helps to communicate information – along with shape, size, proximity, texture, labels, and a whole host of other indicators. Most charts and graphs are relatively simple, and it’s easy enough to use attributes like contrast and text labels to make sure that the information is color-agnostic.
As more information is represented, divorcing meaning from color becomes more difficult. The trick to designing charts and graphs well is to convert them to grayscale and see how they look. Better yet, show the grayscale version to a friend or coworker. If they can understand the information in the graph, then you can be reasonably sure that the graph doesn’t rely on color alone to convey information.
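If you want to script this check into your workflow, a minimal sketch using the Pillow imaging library might look like the following; the file name chart.png is a hypothetical placeholder, and Pillow is assumed to be installed.

```python
# Minimal sketch: save a grayscale copy of a chart so you can check that it
# doesn't rely on color alone. Assumes Pillow is installed and that
# "chart.png" (a hypothetical file) exists alongside this script.
from PIL import Image

def grayscale_preview(path: str, out_path: str) -> None:
    """Write a grayscale version of an image for a quick color-independence check."""
    img = Image.open(path)
    gray = img.convert("L")  # "L" mode = 8-bit grayscale
    gray.save(out_path)

if __name__ == "__main__":
    grayscale_preview("chart.png", "chart_gray.png")
```

Open the grayscale copy (or share it with a colleague) and see whether the chart still makes sense without color.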
As a designer, or as a content creator, your job is to make sure that your designs and content aren’t totally reliant on color to be understood. There are a number of desktop and browser-based tools available to help you test your work, including:
- Color blindness filters and proofing in Photoshop and Illustrator
- Color Oracle (desktop-based color blindness simulator for Mac, Windows, and Linux)
- NoCoffee (a Chrome plugin: use for functional websites and web-based prototypes)
If you test out your design and notice some problems with how you're using color, here are a few techniques you can use to improve it:
- Use text labels or icons to label elements that are not obvious in black and white
- If using multiple solid shapes to convey different types of information (like in a bar graph, for example), augment the color with a pattern or texture.
- You can also increase or decrease the value of each color (its darkness or lightness) so that the colors have sufficient contrast with one another, as in the contrast-ratio sketch below.
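To check whether two colors differ enough in value, you can compute the contrast ratio between them. The sketch below follows the standard WCAG relative-luminance and contrast-ratio formulas; the two example colors are arbitrary.

```python
# Sketch: WCAG contrast ratio between two sRGB colors given as 0-255 channel tuples.
def _linearize(channel):
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    l1, l2 = relative_luminance(rgb1), relative_luminance(rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Example: a mid red vs. a mid green. A low ratio means the two colors are
# close in value and may be hard to tell apart without color vision.
print(round(contrast_ratio((200, 60, 60), (60, 160, 60)), 2))
```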
Keep in mind that not only users with genetic color blindness are affected. People with low vision and elderly people also sometimes have a harder time perceiving color. The differences in colors will also be less noticeable in a low contrast environment, such as on a screen viewed in bright sunlight. Many people also still print digital materials like emails and visual aids (such as graphs and charts) on paper, and may not be able to print in color.
Color isn’t the only visual characteristic that can be used to describe objects on a web page. Using shape or size to call attention to certain objects – in other words, as a visual cue – is actually a very useful technique for conveying information. Using more than just text can be particularly helpful for people with learning or cognitive disabilities.
However, if a person has very low vision or is completely blind, they may not be able to perceive this type of information at all. As with the use of color, the key is to make sure that the same information is represented in multiple ways so that it can be perceived by multiple senses. Representing the same information using text (either as offscreen text or as a text label) is generally the best way, as text can be translated to a variety of devices, including screen readers and braille displays.
One of the most common types of visual information used in sites and apps today is graphical symbols, like icons. Icon font sets like Font Awesome have made it easier than ever for designers and developers to incorporate icons with a variety of meanings into all sorts of digital and print media.
In the example above, multiple sets of icons are used to describe the status, priority, and completion amount of tasks. For each set, shape is used in addition to color to differentiate the icons from one another. The meanings of the icons also duplicate textual information that is present elsewhere in the table. A sighted user would be able to glance at the table quickly and understand certain metadata about each task by looking at the icons. Because the information conveyed by the icons is also represented in text, a non-sighted user would also be able to understand the information within the table fairly easily.
But what if there was no textual information accompanying the icons? Take this example:
The Priority column in this tasks table uses colored squares in red, yellow, and green to represent high, medium, and low task priority. Ideally, each icon would have alternative text associated with it so that non-sighted users would be able to understand it. In this example, color is also the sole identifying feature of each icon, so for the sake of colorblind and low vision users, it would be best to either use a different shape for each priority level, or to replace or augment the icons with a visible text label.
When designing with icons and symbols, always do a quick reality check to make sure your design is informative to users of all abilities. One trick is to temporarily remove all icons and non-text symbols from your design. Can you understand everything in the design without them? If important information is now missing because the icons are missing, make sure that you’ve written alternative text for those icons. You can also consider whether it would harm your design too much to add visible text labels to supplement the icons.
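If your icons live in HTML, you can automate part of this reality check. The sketch below assumes the BeautifulSoup library and a hypothetical page.html, and simply flags img icons without alt text and font-icon elements without an aria-label; your real markup and class names will differ.

```python
# Sketch: flag icons that appear to have no text alternative.
# Assumes beautifulsoup4 is installed and "page.html" is a hypothetical file.
from bs4 import BeautifulSoup

def icons_missing_text(html):
    soup = BeautifulSoup(html, "html.parser")
    problems = []
    # <img> elements used as icons should carry meaningful alt text.
    for img in soup.find_all("img"):
        if not (img.get("alt") or "").strip():
            problems.append(f"<img src={img.get('src')!r}> has no alt text")
    # Font-icon elements (e.g. <i class="fa fa-warning">) should have an
    # aria-label or be explicitly hidden from assistive technology.
    for tag in soup.find_all(["i", "span"]):
        classes = " ".join(tag.get("class") or [])
        if ("fa-" in classes or "icon" in classes) and not tag.get("aria-label") \
                and tag.get("aria-hidden") != "true" and not tag.get_text(strip=True):
            problems.append(f"<{tag.name} class={classes!r}> has no aria-label and is not aria-hidden")
    return problems

if __name__ == "__main__":
    with open("page.html", encoding="utf-8") as fh:
        for issue in icons_missing_text(fh.read()):
            print(issue)
```

A script like this only catches missing text, not poorly written text, so it supplements rather than replaces the manual check described above.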
Visual Information in Text
The WCAG success criterion for Sensory Characteristics (1.3.3) also specifically references the use of visual information in instructions. For example, text on your site may instruct users to find additional information in the upper right. If you can’t see the page and are using a screen reader to understand it, you won’t be able to tell which elements are in the upper right portion of the page because locational information isn’t conveyed to screen readers.
If, however, your text instructs the user to find additional information using the “help” button in the upper right portion of the header, a screen reader user will know to navigate to the header (which is easy if the site uses landmarks) and to listen for a button labeled “help.” Sighted users will still be able to use the locational information (“in the upper right”). Adding additional description will help you make sure that non-sighted users aren’t left out.
Another type of sensory information is information conveyed by sound. Sound is rarely used on websites nowadays, but audio cues that call the user's attention to a notification or to a particular action are common in mobile apps and desktop applications, such as video games. For example, an email app on a phone may play a sound to notify you that a new email has arrived. A video game in which you control a character may play a warning sound if your character is standing in something that causes damage.
As with color and other visual information, the key to making auditory information accessible is to make sure that the same information is conveyed visually and/or textually. In the case of the email app notification, having a textual notification in the phone’s notification area will often do the job. Allowing the option for haptic feedback (such as a vibration) or visual feedback (a blinking light on the phone) can also work. Haptic feedback on a video game control or a pulsing red border around your character or the edges of the screen can convey the same warning as a sound.
Keep in mind that these techniques don’t just help users who are deaf or hard of hearing. They also help users who have their devices muted or on low volume. Maybe you’d like to be notified of new emails during a meeting but don’t want your phone to distract others around you. Or you may want to play a video game while your spouse or children are sleeping in the next room.
Tips and Takeaways
Making sure that information can be perceived by more than one sense is key to creating a design that will be usable for everyone. While the use of color, shapes, and sounds is effective in drawing attention to design elements and in helping people understand different types of information, designers need to keep in mind that not everyone can see, hear, or understand those cues.
As you’re evaluating your design, ask yourself these questions:
- If I convert everything to grayscale, am I still able to understand everything?
- If I remove all icons and graphic symbols and replace them with alt text I’ve written, is all of the same information represented?
- Does any text on the page reference elements by color, shape, or size only?
- If I turn the sound off, are all notifications and errors still conveyed to me?
If you are not the person who implements designs for sites and apps into code, make sure you annotate your designs with non-obvious things like alt text for icons. You will want to make sure those accessibility features won’t be missed when the design goes to development.
Above all, remember that when it comes to design, redundancy isn’t a bad thing. It’s all right – and is often preferable – to give users multiple ways to find information, perform actions, and perceive content. Remember that accessibility has its roots in usability, and creating a truly usable website or application means taking all types of users and their many different learning styles and abilities into account.
- Understanding WCAG success criteria 1.3.3 – Sensory Characteristics
- Understanding WCAG success criteria 1.4.1 – Use of Color
- BBC Mobile Accessibility Guidelines – on incorporating audio alerts
- Examples of using visual cues from Oregon State University
Caitlin Geier is a UX designer on Deque’s product team. As a UX designer, Caitlin’s work with accessible design flourished once she began working for Deque. She is passionate about understanding the users she’s designing for, and she continually strives to incorporate accessibility elements into her work in order to ensure that all users can benefit from inclusive design.
[Worksheet gallery: printable practice pages on rounding 2-, 3-, and 4-digit whole numbers to the nearest 10, 100, and 1,000, along with related place-value, estimation, comparison, and decimal-rounding exercises for grades 3–4.]
Eighth Grade is a year of rumbling revolution and the earnest seeking of independence; it is a year of incredible change and incredible growth. As eighth-grade adolescents make their journey through this often turbulent year, they are met by a curriculum that offers up pictures of historical individuals who sought the same freedom and independence. We hold these truths to be self-evident, that all people are created equal! In these unforgettable words, Jefferson's unwavering passion for freedom is plain, but it was his perception of the human condition and the eloquence of his communication that made all the difference in the character of the independence that followed. History in the eighth grade follows the lives of individuals who helped shape, or were shaped by, the revolutions going on around them. The Industrial, American, and French Revolutions are visited, as are the American Civil War and the American Civil Rights Movement.
Students are divided into two streams. Algebra focuses on linear equations and applications in one and two variables including systems of equations. They apply graphical and algebraic methods to find solutions to many relevant problems. Pre-Algebra focuses on solving one-step, two-step, and multi-step linear equations and inequalities. Khan Academy is used as an online tool to help practice and master these topics.
Literature read includes short stories, poetry, and Shakespearean drama. In the language arts there is an increasing emphasis on nuances of style and grammar in the student’s expository and creative writing. Students read and study modern literature and works from across the curriculum, and produce a class play.
Physiology, Physics and Chemistry are all continued from 7th Science. The students will study the structure and function of the human skeleton, the eye and the ear, while touching on muscles, tendons, ligaments and the brain. The Physics studied covers motion, forces, pressure, density, buoyancy and leads directly to the study of Meteorology. Finally the students will study the Chemistry of living things, with a focus on carbohydrates, lipids, and proteins.
Students continue to develop their ability to play violin, viola, cello, bass, and alto recorder in an orchestra. They are taught more advanced music reading concepts and more advanced playing techniques. They will also continue to develop their ability to play soprano, alto, tenor, and bass recorder in an ensemble and they will continue to develop their choral singing skills. Students will learn to identify stylistic characteristics of music periods and composers and learn the key periods and dates of music history. The 8th grade music class will perform in two string orchestra concerts.
In addition to the scientific drawings done throughout the year, the students learn how to draw and paint the human being. Realism in proportion, light and shadow is emphasized. The students also design, construct and hand-decorate the five platonic solids.
FOREIGN LANGUAGE (Spanish)
Emphasis is placed on the reinforcement and acquisition of grammar and other vocabulary study that will prepare students for high school-level Spanish. Students continue to formally study various aspects of grammar, including, for example, deeper work into a range of regular and irregular verbs and tenses, possessive adjectives and superlatives. Students apply their burgeoning fluency in Spanish to reading two short novels, short stories, and passages in Spanish.
HISTORY and SOCIAL STUDIES
The forward-looking impulse is best addressed in the main lesson, and in particular, the history curriculum. Whereas the seventh grade took as its theme the intellectual and aesthetic flowering of the Renaissance, the eighth grade is fully present in modern times. Its aim is to bring the accumulated image of world civilization up to the present day. Nothing characterizes the modern period better than the great revolutions—the industrial, political, and scientific revolutions that pulled down the old monarchical orders, and in turn, gave rise to the struggles for individual freedoms and human rights. All these have had far-reaching cultural consequences, and it is important that students consciously realize and appreciate this as they themselves are carried into the turmoil of adolescence.
Students continue to use computers and the computer lab to write essays and other Main Lesson content. The students use TeacherWeb to communicate with their teachers, gather details of assignments, and find copies of Main Lesson chalkboard drawings. Students also build Keynote presentations for their year-long 8th grade projects, combining content slides, photos, videos, and music. Finally, digital literacy is introduced to prepare students to do research online, find reliable sites, and understand digital footprints and plagiarism.
Students have generally mastered all developmental motor skills. Games this year provide practice for fun, fitness, and continued growth. Games become a vehicle for communication. Each movement class begins and ends in a circle. Each member of the circle has an active role in a game, and roles change frequently throughout the year. We discuss democratic process within the group. Group and individual goals are discussed, keeping in mind that mistakes made in a game are also lessons. Cooperative games require planning within the group and critical thought. Students work together with clear, concise communication. These games have the added benefit of working through self-esteem and instilling leadership qualities. We play games with basketballs, bouncy balls, Frisbees, tennis balls, and a variety of other equipment. Sport games are played with slight variations to target group cooperation. For example, four-corners basketball requires the ball to be passed to each member of the team before a basket can be attempted. The ultimate goal is that students will leave Novato Charter School with a lifelong love of games and play.
In 2010, some 14.5% of American households — 17.2 million — could not always furnish enough food for family members, according to a USDA report. This figure has remained at elevated levels following the Great Recession in 2007-08, and it is even higher for households with children. The consequences of such food insecurity can be devastating for children’s health and development. While previous studies on this issue have largely focused on individuals and households, food insecurity can strike certain communities and neighborhoods more than others.
A 2012 study in the Journal of Applied Research on Children, “Individual, Family, and Neighborhood Characteristics and Children’s Food Insecurity,” examines individual, family and neighborhood characteristics of food-insecure children. The researchers, from Rice University, based their work on the Early Childhood Longitudinal Study (ECLS), a nationally representative dataset of 20,000 kindergarteners from 1998 to 1999.
Findings of the report include that:
- In both kindergarten and third grade, 8% of the children were classified as food insecure. Only 5% of white children were food insecure, while 12% and 15% of black and Hispanic children were food insecure, respectively. In third grade, 13% of black and 11% of Hispanic children were food insecure, compared to 5% of white children.
- Over 20% of children whose mothers held less than a high school education were food insecure at kindergarten. This figure is significantly higher than for children of mothers with a high school degree (8%) and children whose mothers attained a college degree (1%).
- A typical food-insecure neighborhood is approximately 25% Hispanic and 16% black. The average food-insecure child lives in a neighborhood where more than a quarter of households are headed by women.
- Children in the Hispanic/foreign-born neighborhoods are also far more likely to be food insecure: 16% in kindergarten and 13% at third grade.
“Policies that focus on levels of food insecurity within neighborhoods or communities, rather than a strictly individual or household-level focus, may have more far-reaching effects on curbing food insecurity,” the researchers conclude. “For example, a focus on improving access to affordable and healthy foods in poor neighborhoods could reap dividends for decreasing household food insecurity.”
Because the data analyzed were slightly older, the researchers note the following about the study’s relevance to contemporary America: “From 2000 to 2007, household food insecurity rates were closer to 11%, undergoing a spike from 2007 to 2008 to roughly 14% in 2009 and 2010, which was the highest level since the USDA surveys began in 1995. Thus, it is likely that our estimates, from data before the increase in food insecurity began, are conservative and reflect a better food environment for households with children in the U.S. than can be expected today.”
Tags: children, nutrition, food, race, Hispanic, African-American |
Well before the United States' formal entry into World War II on Dec. 8, 1941, the political, military and industrial leaders of the nation started what became a massive national mobilization effort. The nation's scientific organizations made vital contributions in several fields. In the area of aeronautical research, the National Advisory Committee for Aeronautics made many important contributions to the development and production of military aircraft that saw service during the war. The story of military aviation and the American aircraft industry during the war is well-known. The story of research conducted by the NACA has received less attention. This fact sheet highlights some aspects of the NACA's wartime work.
The NACA was established in 1915 "to supervise and direct the scientific study of the problems of flight with a view to their practical solution." During the next 25 years, the NACA became one of the world's premier aeronautical research organizations. Still, in 1939 (the year Germany invaded Poland), there were only 500 employees and the organization had a modest budget of a little more than $4 million. Like almost every other government agency, the war transformed the NACA. It grew from one research facility--the Langley Memorial Aeronautical Laboratory in Hampton, Va.--to three. The new facilities were the Ames Aeronautical Laboratory in Mountain View, Calif., and the Aircraft Engine Research Laboratory in Cleveland, Ohio. Employment peaked at 6,077 employees in 1945 and the budget that same year was almost $41 million.
Equally significant as the changes created by the expansion of personnel and facilities was the change in the NACA's approach to research. Before the war, the NACA had conducted more basic research, mainly aerodynamics and flight research. Despite its charter to find "practical solutions," the NACA had held industry at arm's length. The urgent demands of war necessitated a different relationship. A unique partnership was formed by NACA researchers with industry designers and military planners. One journalist characterized the NACA's role in this partnership as "the research midwife at the birth of ... better American planes."
What the journalist was trying to capture was the NACA's new role in conducting applied development work for industry and the military. The fundamental research agenda of the 1920s and 30s proved a solid foundation for its new wartime responsibilities. The NACA mandate to find "practical solutions" assumed paramount importance in guiding the organization during the war. NACA engineers began using the laboratories (especially the growing wind tunnel complex) and technical expertise to develop new testing methods which resulted in improvements in aircraft speed, range and maneuverability that ultimately helped turn the tide of the war in favor of the Allies.
In January 1944, a leading aviation publication, Aviation, editorialized that the NACA was the "force behind our Air Supremacy" and that "the story of the NACA [was] the story of American aviation. Neither could well exist without the other. If either fails, the other cannot live." This was recognized later as exaggerated praise. In fact, the NACA was far behind Great Britain and Germany in recognizing the significance of jet propulsion, the single most important development in aviation during the war period.
Nonetheless, the NACA did have many important accomplishments during the war and its employees were justifiably proud of their work. Air power was a critical factor in the nation's military success. Those who worked closely with the application of technology to the development and production of aircraft had a keen appreciation for the often "invisible" contributions made by the NACA. Further, the NACA organization served as a model for structuring other government scientific research agencies both during and after the war.
What follows is a description of a few of the major research projects including drag cleanup, deicing, engine development, low-drag wing, stability and control, compressibility, ditching and seaplane studies. To learn more about the NACA's wartime work, see the suggested reading list at the end of this fact sheet.
The most important work done by the NACA during the war was "drag cleanup." Drag is the resistance to airflow. Every engineer or experimentalist since the beginnings of heavier-than-air flight has struggled to minimize it. Between 1938 and 1940, researchers at Langley pioneered a method using its cavernous Full-Scale Wind Tunnel to measure drag and make recommendations to the manufacturer as to how best to correct the problems. The military was so enthusiastic about the results of the "drag cleanup" process -- which helped solve technical problems and was quick and inexpensive -- that it had the NACA test virtually every new prototype.
Drag cleanup work continued throughout the war at Langley as well as at the Ames laboratory which opened a still larger full-scale tunnel of its own. The experimental work began by putting a full-size aircraft into one of the full-scale wind tunnels, taking off all antennas and other items sticking out from the aircraft body and, finally, covering the entire airplane surface with tape. Measurements were made of this "aerodynamically smooth" airplane. Gradually, the engineers would remove the tape strips and determine the drag created by every part of the airplane. The resulting report not only identified the problems but also made recommendations on how to correct them. The NACA also conducted an extensive program of flight research to confirm its drag cleanup recommendations and further assist industry and the military in the quest for maximum performance.
This Douglas XSBD-2 model was the first aircraft to be tested in the NACA Ames 40 x 80-ft. Wind Tunnel, the largest wind tunnel in the world at the time. Drag reduction studies were performed on the model.
A Lockheed YP-38 Lightning (the second prototype of the Lightning series) undergoes drag clean-up in the NACA Langley 30 x 60-ft. Full-Scale Tunnel.
A good example of the impressive results produced by drag cleanup was the Bell P-39 Airacobra. The aircraft originally had a top speed of 340 mph. After undergoing two months of drag cleanup work, the plane emerged with a new maximum speed of 392 mph. Instead of an expensive and time-consuming complete redesign of the aircraft, the NACA's drag cleanup research showed that minor modifications would enable the P-39 to meet the Army's specifications.
Ice is the bane of every pilot. It coats wings and propellers, reducing lift and increasing drag, often resulting in fatal crashes. Right from the start, the entire aviation community was unanimous in its desire to develop a system that would make flying safer. The NACA began its studies in the late 1920s at Langley; however, the work was transferred to Ames as soon as the California lab was opened. The icing project was considered unique among NACA wartime efforts because it consisted of both research and extensive design of actual hardware used on airplanes.
The NACA developed a heat deicing system which piped air heated by hot engine exhaust along the leading edge of the wing. The Army asked the NACA to install a prototype of this system on two bombers, the Boeing B-17 Flying Fortress and the Consolidated B-24 Liberator; the Navy swiftly followed suit having the NACA put a prototype on the Consolidated PBY Catalina. Studying the results of these prototype systems as well as further test results from its own Curtiss C-46 "flying laboratory," the NACA perfected a deicing system used to save the lives of countless airmen flying in dangerous weather conditions. In 1946, Langley/Ames researcher Lewis Rodert and the NACA were awarded the Collier Trophy, aviation's highest award, for their deicing work.
Ice formation on the loop antenna and copilot's airspeed mast of a Curtiss C-46 Commando transport at Ames. The aircraft participated in icing flight research studies in 1944.
The NACA pioneered new methods of "trouble-shooting" defects in new, higher powered piston engines. These efforts were conducted at the new Aircraft Engine Research Laboratory in Cleveland (now known as the Lewis Research Center) beginning in 1942. Engineers closely examined engines slated for rapid production and use in military aircraft. They developed new ways to solve complex combustion, heat exchange and supercharger problems. Engine manufacturers considered the supercharger their most pressing technical problem. The NACA began by studying existing corporate research programs; it swiftly introduced a single standard test procedure. Then researchers began developing the centrifugal supercharger. This work was extremely useful to manufacturers.
Engine research did not receive very much public attention. One project NACA engineers often highlighted was their work on the engines for the Boeing B-17 Flying Fortress. While testing the early B-17 prototypes, the Army had discovered that adding a turbo-supercharger would greatly improve the altitude and speed of the bomber. The Army ordered future B-17s be equipped with turbo-superchargers. Supercharger technology was not very well developed and Wright Aeronautical, makers of the R-1820 Cyclone engines used on the B-17, struggled with the requirements. This was precisely the kind of problem the engine lab was intended to work on. Eventually, the turbo-supercharger problems were resolved and the B-17, a true high-altitude, high-speed bomber, went on to become one of the military's most successful bombers. The turbosupercharger was also used with great success in the Boeing B-29 Superfortress. The Wright R-3350 Duplex Cyclone that powered the B-29 also underwent extensive testing in the NACA's new Altitude Wind Tunnel at the engine lab.
On May 8, 1942, the Wright R-2600 Cyclone 14-cylinder engine became the first test subject to be evaluated in the Engine Propeller Research Building at the NACA Aircraft Engine Research Laboratory.
It should be noted that while the NACA engine research facility was used by General Electric to test the I-16 turbojet engine, NACA researchers had been virtually excluded from some aspects of wartime research on jet propulsion. Early in the war, the NACA had pursued some jet engine technologies, including an axial-flow compressor, that had a bright future but had too many problems to be overcome in the short term. In any event, by that time the United States was already behind Great Britain and Germany in developing a workable jet aircraft. Nonetheless, there was a dedicated group at the engine lab which seized every opportunity to work on the General Electric project thereby building the foundation for the NACA's postwar work on jet engine technology.
Airfoil research was a well-established hallmark of the NACA. A new series of airfoils announced in 1940 was the basis of the NACA's low-drag wing research which had a profound effect on the outcome of World War II. An airfoil is a typical cross-sectional shape of a wing. Airplane designers chose from hundreds of airfoils to get the maximum amount of lift-to-drag ratio. The aerodynamicist's dream was the laminar flow airfoil because it meant the layers of air moved completely smoothly over the surface of the wing. The new series developed by the NACA produced predominantly laminar flow when the airplane was at cruising speed.
Low-drag wings resulted in high speed at cruise conditions and longer range which is why the British (the original purchasers of the aircraft) requested that North American engineers use the NACA low-drag wing on the P-51. A prototype of the fighter tested at Langley showed tremendous capabilities and ignited enormous enthusiasm among engineers and test pilots alike. The Mustang went on to become a highly effective long-range escort support for heavy strategic bombers and outclassed most enemy fighter opposition in aerial combat. Back in the lab, low-drag wing work continued as NACA engineers developed a second laminar flow airfoil series which would be incorporated into other aircraft such as the Bell P-63 Kingcobra, Douglas A-26 Invader and America's first jets, the Bell P-59 Airacomet and the Lockheed P-80 Shooting Star.
This P-51B was used at NACA Langley to conduct in-flight investigations of wing sections, including the revolutionary near laminar-flow airfoil. The device located behind the white (test) section of the wing is an air pressure rake which registered details of airflow over the section.
Wartime research on stability, control, and aircraft handling qualities included several projects. Three are described here. Stability means the tendency of an airplane to return to steady flight after a disturbance (such as a wind gust). The best way to determine an aircraft's stability was through flight testing. One important contribution made by the NACA in this area was its famous technical report, No. 755, "Requirements for Satisfactory Flying Qualities of Airplanes." Representing a decade of work, the NACA introduced to the industry a new set of quantitative measures to characterize the stability, control and handling qualities of an airplane. The military readily adopted the NACA findings and for the first time issued specific design standards to its aircraft manufacturers. It is a classic example of the partnership between the military, aircraft industry and the NACA.
Another important area of work was spinning -- the dangerous, uncontrolled downward spiral of an airplane. Both the Army and the Navy required that every fighter, light bomber, attack plane and trainer be tested in the NACA spin tunnels, using accurately scaled and dynamic models. More than 300 models were tested and aircraft designers used the results to help minimize spinning tendencies. This work also contributed to changes in airplane tail design, a factor instrumental in helping pilots recover from high-speed dives.
While flying various combat missions, pilots of high-speed aircraft had been terrified by the unexplained loss of control which occurred when air flow over various portions of their aircraft exceeded the speed of sound (the airplane did not actually fly faster than the speed of sound). Suddenly, without warning, the airplane would plunge into a steep dive and the pilot's controls would be completely useless. This phenomenon surprised engineers both in industry and the NACA. The NACA quickly initiated studies of the problem. Air, researchers learned, is a compressible fluid and when airplanes approached the speed of sound it became so dense and the pressure so great that shock waves formed, changing the airflow over the surfaces of the wings.
Extensive testing of compressibility was conducted first on the Lockheed P-38 Lightning at Langley. Lockheed chose not to follow the recommendations because they would have required extensive design changes. In theory the NACA proposals would have solved the problem but Lockheed wanted a quick and inexpensive solution. A second research investigation was then undertaken at Ames. Ames researchers came up with three possible solutions. While they did some wind tunnel work, most of the Ames work involved flight tests duplicating the conditions encountered in combat. Although never crediting the NACA, Lockheed adopted the dive flaps proposal which added flaps on the wing's lower surface. While this was only a "quick fix" the dive recovery flaps did enable the pilot to overcome the effects of compressibility and retain control over the airplane if it went into a dive.
By 1943 military planners agonized over the mounting losses of aircrews experienced over vast ocean areas, particularly in the Pacific. They asked the NACA to assist in finding ways in which aircrews and aircraft might better withstand water impacts. An intensive program was initiated at Langley using its hydrodynamic and structures facilities plus full-scale aircraft. Most studies involved models but one of the more dramatic efforts was a joint project with the Army. Using an actual Consolidated B-24 Liberator, the NACA ditched the plane in the nearby James River. The force of the impact on the aircraft's bomb bay doors and other structural components was measured. The information obtained from the studies was sent to aircraft manufacturers as well as to air units in both the European and Pacific Theaters. The research helped to save the lives of countless aircrews.
Sequential photos showing the ditching of a Consolidated B-24D Liberator in the James River.
Before and during the war, NACA researchers studied problems involving seaplane hulls and floats using two unique tow tank facilities and an impact basin located at Langley. During takeoff and landing, the large seaplanes with their heavy hulls tended to bob up and down (the engineers called it "porpoising") or, even worse, skip along the surface, making the aircraft dangerously uncontrollable. The Navy, which used seaplanes for maritime patrol, was keen on finding solutions to the problems of porpoising and skipping. Careful study showed that adding a "step" -- really a notch -- that broke the smooth surface of the hull would eliminate both problems. The step provided two separate surfaces -- one for when the seaplane was plowing through the water (early in takeoff and during the final stages of landing) and one for when it was skimming along the surface when the plane was nearly airborne. The NACA also solved the problem of water being sprayed onto the propeller and wings by adding metal strips to the hull which deflected the spray.
A model of the Consolidated PBY Catalina flying-boat is prepared for tests in the NACA Langley hydrodynamic facilities.
* To order NASA History Series books , contact the NASA Information Center, Code JOB-19, NASA Headquarters, Washington, DC 20546, or by telephone at 202-358-0000. Order by SP number. All orders should be prepaid. |
The Art of Synthesizing: Grade 5 Style!
Grade 5 students have recently finished their Social Studies unit on Ancient Civilizations. The unit began with students engaged in a group brainstorm, answering the question: “What do we already know about Ancient Civilizations?” Students came up with a myriad of ideas, ranging from the creation of Stonehenge to the use of the abacus. Not only did students realize that they had a wealth of knowledge to share, but they also had many questions.
From this introduction, students moved forward to examine and discuss several questions, such as:
“What is a civilization?”
“What is the sequence of events that took place after nomads established a permanent location?”
“What geographical features are important when creating a permanent settlement?”
In order to direct student research and help students understand where they are going, teachers introduced the final project: Creative Consultancy Agency. Students needed to pretend that they were part of a consultancy agency that had been approached by a group of nomads. These nomads were interested in settling and establishing themselves in a permanent location. Students needed to design a presentation or a research paper that made recommendations to the audience (the nomads) for the perfect geographical location, the steps needed for settling, and the elements needed for their successful civilization. In order to support their recommendations, students would need to provide concrete examples from history.
Let the information gathering begin! As students read through their resources, teachers modeled effective note-taking and non-fiction reading strategies. Together they examined the Sumerian and the Mayan cultures. Students had to discern what information was relevant in order to complete their project.
After working through several different resources, with the support of their teacher, students went off in partner groups to gather more information on China, Egypt, Greece, Inca, Aztecs, and Rome. Not only were a variety of non-fiction texts used, but students also accessed information from the Internet and short informational videos. Students used a graphic organizer to focus their research into specific aspects of a civilization. Some of these included job specialization, technological advancements, government, belief systems, education, and economy.
Using the wealth of information they gathered, students began the challenging task of interpreting. They compared and contrasted the different settlements. They used graphic organizers, such as Venn Diagrams, to draw out and highlight key elements of ancient civilizations.
Once students had finished interpreting their information, they used these elements as a basis for their recommendations and they began to organize their presentation or research paper.
The day of presentations came quickly and students were eager to share their learning successes with one another. Overall, it was an engaging learning experience for students.
To read more stories about Shekou International School visit their website: www.sis.org.cn/why-choose-sis |
Today’s post will provide an overview look at Alzheimer’s Disease. As I’ve stated before, Alzheimer’s Disease is a specific type of brain deterioration disease (dementia) that differs from other dementias.
While Alzheimer’s Disease is a type of dementia, not all dementias are Alzheimer’s Disease. “Alzheimer’s Disease” has become the catch-phrase for all neurological degeneration among the general population and that imprecision leads to a lack of understanding of the complexities of these diseases, especially when several types of dementia are present concurrently.
Dementias affect specific areas of the internal structure of the brain and are caused by specific abnormal occurrences within those areas. We’ve looked at vascular (multi-infarct) dementia, which is a result of small vessel ischemia within the blood vessels in the brain, and Lewy Body dementia, which occurs when abnormal proteins are deposited in the cortex of the brain.
Alzheimer’s Disease affects the whole brain, essentially eroding and diminishing, through the resulting atrophy, the whole structure of the brain. The two crucial components in Alzheimer’s Disease are the overabundant presence of plaques (beta-amyloid protein deposit fragments that accumulate in the spaces between neurons) and tangles (twisted fibers of disintegrating tau proteins that accumulate within neurons). Watch this short video to see how these plaques and tangles form and how they lead to neuron death.
While plaques and tangles, which lead to neuron death (the nerve cells get deprived of what they need to survive and be healthy), are part of the aging process, in our loved ones with Alzheimer’s Disease, there are so many of them that the brain slowly dies from the inside out.
It is clear from the picture above exactly why Alzheimer’s Disease is a systemic disease, because all areas of the brain are eventually impacted.
However, as Alzheimer’s Disease begins, the first area of the brain affected is the temporal lobe, which is, in part, responsible for long and short-term memory, and persistent short-term memory loss is usually one of the first symptoms of Alzheimer’s Disease to appear.
The second area of the brain to be affected is generally the frontal lobe, which handles information processing and decision-making. The last part of the brain to be affected is usually the parietal lobe, which is the area of the brain responsible for language and speech.
Alzheimer’s Disease has distinct stages in which symptoms materialize. The stages are (this lists the three main stages, but there is also a more comprehensive seven-stage breakdown, known as the Global Deterioration Scale or the Reisberg Scale):
- Stage 1 – Mild – Recurring short-term memory loss, especially of recent conversations and events. Repetitive questions and some trouble with expressing and understanding language. Possible mild coordination problems with writing and using objects. May have mood swings. Need reminders for some daily activities, and may begin to have difficulty driving.
- Stage 2 – Moderate/Middle – Problems are evident. Continual memory loss, which may include forgetting personal history and the inability to recognize friends and family. Rambling speech. Unusual reasoning. More confusion about current events, time, and place. Tends to get lost in familiar settings. Experiences sleep issues (including sundowning). More pervasive changes in mood and behavior, especially when experiencing stress and change. May experience delusions, aggression, and uninhibited behavior. Mobility and coordination may be affected. Need set structure, reminders, and assistance with daily living.
- Stage 3 – Severe/Late – Confused about past and present. Loses all ability to remember, communicate, or process information. Generally incapacitated with severe to total loss of verbal skills. Unable to care for self. Often features urinary and bowel incontinence. Can exhibit extreme mood disturbances, extreme behavior, and delirium. Problems with swallowing occur in this stage as well.
It’s important to remember that not all our loved ones with Alzheimer’s Disease – especially if there are other dementias present – will go through every aspect of each stage nor through all the stages before they die. That’s one of the real difficulties with “mixed-dementia” diagnoses, as these are called, because it’s difficult to tell which brain disease is causing which problems and that makes them more difficult to manage symptom-wise.
The medications generally prescribed for Alzheimer’s Disease are Aricept (mild to moderate stages), Namenda (moderate to severe stages), and Exelon (mild to moderate stages). All three of these medications are cognitive enhancers. It’s not unusual to have more than one of these medications prescribed at a time.
I will talk specifically about sleep disturbances in dementias and Alzheimer’s Disease, including sundowning, in another post, but I will caution all caregivers to stay away from both non-prescription sleep medications like Tylenol PM, Advil PM, and ZzzQuil and prescription sleep medications like Lunesta and Ambien (all of these can actually make the symptoms worse and definitely make injury and/or death from a fall more likely).
Melatonin is a naturally occurring sleep hormone in humans. As people age, less melatonin is produced. That’s why, in general, most older people who have never had sleep disorders eventually and gradually sleep less than their younger counterparts. However, the brain damage that dementias and Alzheimer’s Disease cause exacerbates this lack of melatonin.
So, it’s worth it to try a therapeutic dose (up to 20 mg per night is considered to be safe) of Melatonin. It is available over-the-counter at both brick-and-mortar and online drug stores.
Start with a 3 mg dose and add slowly. With my mom, a 5 mg dose provided enough for her to sleep as best as she could through the night. Do not overdose because this will disrupt the circadian rhythm further by producing late sleeping and grogginess during the day.
Usually our loved ones with dementia and/or Alzheimer’s Disease, even though these diseases are fatal (when the brain’s dead, you’re dead), don’t die from them specifically.
They die either from a concurrent health problem (in my mom’s case, it was congestive heart failure which led to a major heart attack, a minimal recovery, and then her death twelve days later) or from complications that arise from the brain degeneration caused by the dementias and/or Alzheimer’s Disease.
The two most common causes of death in Alzheimer’s Disease are pneumonia (the brain controls swallowing, and once that becomes compromised, aspiration of food into the lungs is likely and leads to an infection) and fatal trauma to the head from falls. |
The first idea is based on the behaviour of a system of two populations in a predator-prey relationship. If the size of the population of prey increases then there is more food for the predators. If rabbits multiply, there is more food for foxes and so foxes become numerous. Now the rabbits get eaten up, the foxes become hungry and gradually diminish again. This type of periodic change in population size had been observed in nature, and is modelled by mathematical equations, called the Volterra-Lotka equations, named after the two people who, independently, discovered them. Given the size of the two populations, the equations predict the future population sizes. For most initial conditions the populations exhibit the cyclic behaviour described above, but there is a balance point at which the populations remain stable.
The second idea in our recipe for the fractals is an iterative method for estimating the solution of an equation. These methods provide rules that, given a guess at the answer, will find an even better guess. This better guess can then be plugged back into the rule to get an even better answer. We can continue this way until we reach the required level of accuracy.
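To make the idea concrete, here is a tiny Python illustration of one well-known iterative rule, Newton's method, used to estimate the square root of 2. It is only an illustration of "guess, improve, repeat"; it is not the composite scheme the fractal explorer itself uses, which is described next.

```python
# A minimal "guess, improve, repeat" example (illustration only): Newton's
# method for solving x*x - 2 = 0, i.e. estimating the square root of 2.
def improve(guess):
    # Newton's rule for f(x) = x*x - 2: next = guess - f(guess) / f'(guess)
    return guess - (guess * guess - 2.0) / (2.0 * guess)

guess = 1.0                 # any positive starting guess works
for _ in range(6):          # each pass produces a better guess
    guess = improve(guess)
print(guess)                # ~1.4142135623730951
```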
In their book "The Beauty of Fractals", Peitgen and Richter suggest an iterative method that is a combination of two discrete one-step methods, Euler's method and Heun's method. They provide a composite formula with two constants, p and h. If p=0 we have Euler's method, while if p=h then we have Heun's method. The interesting things happen when other values of p and h are chosen.
The default settings in the fractal explorer are h=0.739 and p=0.739. This gives a fractal with a "wiggly line" attractor, a nine-point cyclic attractor and a single unstable point at (1,1).
The calculate button will calculate the fractal plot using the values from the table of settings. For each point on the graph
the x and y coordinates are plugged into the iterative equation, which gives a sequence of new points called an orbit.
If x or y become negative, or become greater than the bailout value, the orbit escapes and the starting point is coloured yellow.
If the orbit does not escape the starting point is coloured dark red, and if the final point of the orbit is on the graph this will be coloured blue.
The Redraw button will re-display the previously calculated fractal. It can be used to clear any orbit traces or to change the display settings, "End Points" and "Escape Speed" without the need to re-calculate the underlying plot.
A checkbox allows you to view the end points of every non-escaping orbit. These points will approximate to the attractors. The greater the number of iterations the nearer to the attractors the end points are likely to be.
If the checkbox, "Show escape speed", is checked then the brightness of the yellow colour depends on the number of iterations before the orbit of the point escapes. The lighter the colour the more iterations are needed before the orbit escapes.
The Orbit button will draw lines showing the orbit of the point (x,y) entered in the edit boxes.
If "Show last orbit points" is checked, then lines will not be drawn. Instead the last points will be coloured green.
The orbit can be cleared using the Redraw button.
An orbit can also be generated by clicking a point on the graph. This point will be used as the start of an orbit.
|Number of iterations||The number of iterations to be carried out before deciding that the orbit from a point remains non-negative and less than the bailout figure.|
|Minimum x||The value at the left-hand end of the x-axis. It must not be negative. By changing the minimum and maximum values, it is possible to zoom in on any part of the graph. After changing the ends of the axes the fractal must be calculated again.|
|Maximum x||The value at the right hand end of the x-axis. Must be larger than the minimum x value.|
|Minimum y||The value at the bottom of the y-axis, which must not be negative.|
|Maximum y||The value at the top of the y-axis. Must be larger than the minimum y value.|
|Bailout||The iteration for the current point on the graph is stopped if either x or y exceed this value. The iteration is also stopped if either x or y become negative.|
|p-factor||The Newton factor. See the Mathematics section for details.|
|h-factor||The Heun factor. See the Mathematics section.|
The equations to model the population dynamics of a predator-prey situation are dx/dt = f(x,y) and dy/dt = g(x,y), where f and g are defined below.
The iterative method proposed by Peitgen and Richter in their book "The Beauty of Fractals"
newX = x + h/2( f(x,y) + f(x+pf(x,y), y+pg(x,y)))
newY = y + h/2( g(x,y) + g(x+pf(x,y), y+pg(x,y)))
h and p are the two constants which define the iterative method to be used, as described in the Background section.
f(x,y) = αx – βxy
g(x,y) = –γy + δxy
For simplicity we can choose α=β=γ=δ=1 |
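As a worked illustration of the formulas above, the Python sketch below applies the composite step repeatedly and classifies a starting point the way the help text describes (escape when a coordinate goes negative or exceeds the bailout, otherwise non-escaping). The default h = p = 0.739 matches the settings quoted earlier; the bailout value and iteration count used here are arbitrary example choices, not values taken from the program.

```python
# Sketch of the Peitgen-Richter iteration with alpha = beta = gamma = delta = 1.
def f(x, y):
    return x - x * y          # f(x,y) = x - xy

def g(x, y):
    return -y + x * y         # g(x,y) = -y + xy

def step(x, y, h=0.739, p=0.739):
    # Composite Euler/Heun step: newX = x + h/2 (f(x,y) + f(x+pf, y+pg)), likewise for y
    fx, gy = f(x, y), g(x, y)
    new_x = x + h / 2 * (fx + f(x + p * fx, y + p * gy))
    new_y = y + h / 2 * (gy + g(x + p * fx, y + p * gy))
    return new_x, new_y

def classify(x, y, iterations=1000, bailout=20.0):
    """Return 'escaped' (yellow in the explorer) or 'trapped' (dark red)."""
    for _ in range(iterations):
        x, y = step(x, y)
        if x < 0 or y < 0 or x > bailout or y > bailout:
            return "escaped"
    return "trapped"

print(classify(1.0, 1.0))   # (1,1) is the unstable fixed point, so it never escapes
```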
Akimel O’odham women of southern Arizona (also known as the Pima) use techniques passed down through generations to create fine baskets. Though baskets are now treated as art objects, they were originally created for storing, carrying, serving, drinking, and protecting food items. Beginning in the 1880s, more and more tourists, scientists, and collectors traveled by the new railroad lines to the southwestern United States, resulting in the creation of increased numbers of baskets, such as this one, for the tourist trade. The pattern shown on this basket is known as coyote track.
This basket is part of the Seton Hall Museum of Anthropology and Archaeology Collection. |
Animal and bird species found only on a single island should still be common within that island.
The natural history of islands is littered with examples of unusual species found only in one place, such as the Hawaiian Goose, Galápagos Tortoises and Dodo that may once have been common on their islands, but since human contact have become rare or even extinct. Now this new modelling approach shows that in general, most unique island species should be common on their island. If they are not, then the researchers believe human activity is most likely to be the cause.
"Models of island ecology have tended to focus on the total number of different species that you might expect to find on an island, rather than on how common or rare those species are and whether or not they are unique to the island," says Dr James Rosindell, of Leeds' Faculty of Biological Sciences. "Our model is able to predict the way that new species develop in isolation from the mainland as well as how many individuals of each species we could expect to see in their natural habitat. However, there is little data on population sizes and this highlights a real gap in knowledge that we need to fill."
To develop the model, the researchers collated data on bird species found across 35 islands and archipelagos. Modern genetics makes it possible to identify which species have diverged to create new species - so the team were able to test their model against actual data.
The model and data both show that whilst islands close to the mainland have no unique species, more distant islands tend to have unique species that are closely related to mainland species. Only the islands and archipelagos furthest from the mainland are expected to contain large numbers of unique species closely related to each other, such as Darwin's finches on the Galápagos and the Hawaiian honeycreepers.
"This model is still in its early stages of development, but we hope it will help to prompt more study of population sizes on islands," says Dr Albert Phillimore, from Imperial's Department of Life Sciences. "Comparing the predictions of different models to actual data can help us to identify where other factors are coming into play - such as additional ecological processes and human intervention. In the future, we plan to look at how the model could also help make predictions relevant to conservation strategy."
The work has been funded through an EPSRC research fellowship and an Imperial Junior Research Fellowship.
Dr James Rosindell and Dr Albert Phillimore are available for interview.
For more information:
Jo Kelly, Campus PR, tel 0113 258 9880, mob 07980 267756, email [email protected]
Notes to editors:
Since its foundation in 1907, Imperial's contributions to society have included the discovery of penicillin, the development of holography and the foundations of fibre optics. This commitment to the application of research for the benefit of all continues today, with current focuses including interdisciplinary collaborations to improve global health, tackle climate change, develop sustainable sources of energy and address security challenges.
In 2007, Imperial College London and Imperial College Healthcare NHS Trust formed the UK's first Academic Health Science Centre. This unique partnership aims to improve the quality of life of patients and populations by taking new discoveries and translating them into new therapies as quickly as possible. |
and the Water Cycle
suggested grade levels: 2- 4
view Idaho achievement standards for this lesson
|Small containers with hot water||Food coloring|
1. Expose students to the Digital Atlas of Idaho. The section on Climatology might be very useful when explaining the water cycle. To get there: Click on Atlas Home, Climatology, then on cloud imaging.
2. Scroll down to the links on the types of clouds and click on them. Point out that evaporation had to occur in order for the moisture to get into the atmosphere. Point out that evaporated water can condense back to a liquid state.
3. Explain that water evaporates from the oceans, freshwater sources, plants, and from the land. It then goes back into the atmosphere and condenses as clouds and eventually falls back to earth as rain, sleet, hail, or snow.
4. Obtain several containers with hot water, and then dissolve as much salt as you can into these containers. Add a different color of food coloring to each of these containers; add enough so the solution is dark.
5. Give students paintbrushes and paper and have them paint a landscape picture with clouds using the colored saltwater solutions. Allow the paintings to dry overnight.
6. Examine paintings the next day and ask students what they think happened. Notice the salt is on the paper but the water is gone.
7. Set the coloring containers out to dry over the next week and observe the disappearance of the water.
Questions for class discussion:
1. What happened to the water?
2. Why did the salt stay behind?
3. What will happen to the water that evaporated?
4. Where else does evaporation occur?
These are links to access the handouts and printable materials. |
Elementary Homophones 3
In this vocabulary practice worksheet, students examine the 15 homophones listed in the word bank and match them to the appropriate 15 words.
Choose the Correct Homophone
It's no secret that English can be a difficult language to learn, and homophones don't make it any easier. Help your young readers tackle these tricky words with this simple fill-in-the-blank exercise in which they identify the word...
4th - 8th English Language Arts CCSS: Designed
Greek and Latin Roots, Prefixes, and Suffixes
How can adding a prefix or suffix to a root word create an entirely new word? Study a packet of resources that focuses on Greek and Latin roots, as well as different prefixes and suffixes that learners can use for easy reference
3rd - 8th English Language Arts CCSS: Adaptable
Language Skill Boosters Grade 6
In this language and vocabulary activity, students fill in the missing letters in 20 words, using the meaning in the brackets. All words are adjectives that end with "-ful". Students then complete 40 varied exercises that include: verbs,...
6th - 7th English Language Arts |
What is Rabies?
Rabies is a disease affecting all mammals, including man, caused by a virus that attacks the central nervous system, including the brain. Symptoms may include unexplained aggression, impaired locomotion, varying degrees of paralysis, and extreme depression or viciousness. After the onset of symptoms, terminal paralysis and death are imminent.
Strains of Rabies
There are several strains of the virus that are carried by different species of animals. A "strain" of rabies is a form of the virus that is primarily carried by a specific species of animal, known as the dominant reservoir species. Although a strain is specific to a particular species, other mammals are susceptible to that strain as well. When an animal other than the normal host species contracts the virus, it is called a spillover. In the case of the raccoon strain, which has been affecting the New England area since September of 1992, the most common spillover animals have included skunks, cats, woodchucks, and foxes. The fact that spillover occurs is cause for some concern.
How rabies is transmitted
Most commonly, rabies is transmitted by means of a bite wound. The virus is present in the saliva of the infected animal and is transmitted to the victim that is bitten. Occasionally rabies is transmitted by other forms of exposure such as contact between saliva of an infected animal and broken skin, open wounds or contact between infected saliva and mucous membranes (such as mouth or eyes).
Once the virus has been introduced under the skin, it replicates at the site and spreads to the brain via the nerves and spinal cord. The time the virus takes to reach the brain is called the incubation period. This period is determined by how far the bite wound is from the head.
During the incubation period the animal is NOT infectious. After the incubation period has ended -- with the virus reaching the brain and proceeding to the salivary glands of the animal -- that animal becomes infectious and IS capable of transmitting the virus through a bite.
For dogs and cats there is a period of about three days in which an animal will shed (be able to transmit) rabies virus in its saliva, but will not be showing any neurological signs. After this, the infected animal will begin to exhibit signs of the disease and its health will deteriorate rapidly. Most likely, a dog or cat will be dead within 4 or 5 days of showing clinical signs of the disease. |
WATERSIM is an integrated hydrologic and economic model designed to:
- better understand the key linkages between water, food security, and environment.
- develop scenarios for exploring key questions of water, food, and environmental security at the global, national and basin scales
Broadly speaking the model consists of two integrated modules: the ‘food demand and supply’ module, which is adapted from the IMPACT model (Rosegrant, Cai and Cline 2002); and the ‘water supply and demand’ module, which uses a water balance based on the Water Accounting framework (Molden 1997, Fraiture 2007) underlying PODIUM, combined with elements from the IMPACT-WATER model (Cai and Rosegrant 2003). The schematic model structure is given in figure 1.
The model estimates food demand as a function of population, income and food prices. Crop production depends on economic variables such as crop prices, inputs and subsidies on one hand and climate, crop technology, production mode (rain fed versus irrigated) and water availability on the other. Irrigation water demand is a function of the food production requirement and management practices, but constrained by the amount of available water.
Water demand for irrigation, domestic purposes, industrial sectors, livestock and the environment are estimated at basin scale. Water supply for each basin is expressed as a function of climate, hydrology and infrastructure. At basin level, hydrologic components (water supply, usage and outflow) must balance. At the global level food demand and supply are leveled out by international trade and changes in commodity stocks. The model iterates between basin, region and globe until the conditions of economic equilibrium and hydrologic water balance are met.
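The nested balancing described above can be sketched, very loosely, as the following runnable Python skeleton. Every function, data structure and number in it is an invented placeholder used only to illustrate the loop structure; the actual model is written in GAMS and resolves hydrology at FPU and basin level and economics at region and global level in far more detail.

```python
# Toy skeleton of the iterate-until-balanced structure (placeholders only).
def balance_water(basin):
    basin["imbalance"] *= 0.5          # pretend each pass halves the basin's water gap
    return basin["imbalance"]

def clear_world_markets(world):
    world["price_change"] *= 0.5       # pretend each pass halves the world price adjustment
    return world["price_change"]

def run_model(basins, world, tolerance=1e-3, max_iters=100):
    for iteration in range(max_iters):
        water_gap = max(balance_water(b) for b in basins)   # basin-level hydrologic balance
        price_gap = abs(clear_world_markets(world))         # global food-market clearing
        if water_gap < tolerance and price_gap < tolerance: # both conditions met -> stop
            return iteration
    return max_iters

basins = [{"imbalance": 1.0}, {"imbalance": 2.0}]
world = {"price_change": 1.0}
print("converged after", run_model(basins, world), "iterations")
```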
Model spatial disaggregation:
In order to adequately model hydrology, it makes most sense to use river basin as basic spatial unit. When it comes to food policy analysis, administrative boundaries should be used (trade and policy making happens at national level, not at river basins scale). Therefore, WATERSIM takes a hybrid approach to its spatial unit of modeling. Firstly the world is divided into 125 major river basins of various sizes with the goal of achieving accuracy with regard to the basins most important to irrigated agriculture. Next the world is divided into 115 economic regions including mostly single nations with a few regional groupings of nations. Finally the river basins were intersected with the economic regions to produce 282 Food Producing Units or FPU’s (figure 2). The hydrological processes are modeled at basin scale by summing up relevant parameters and variables over the FPU’s that belong to one basin. Similarly economic processes are modeled at regional scale by summing up the variables over the FPU’s belonging to one region.
Economic processes are modeled at an annual time step, while hydrological and climate variables are modeled at a monthly time-step. Crop related variables are either determined by month (crop ET) or by season (yield, area). The food supply and demand module runs at region level on a yearly time-step. Water supply and demand runs at FPU level at a monthly time-step. For the area and yield computations the relevant parameters and variables are summed over the months of the growing season.
The model, written in GAMS, is developed by IWMI with input from IFPRI and the University of Illinois.
- Scenario analysis for the Comprehensive Assessment of Water Management in Agriculture (CA)
- Scenario contribution to the International Assessment of Agricultural Science and Technology (IAASTD)
- Water implication of biofuels
- Impact of trade and agricultural policies on water use
- Sub-Saharan Africa investment study
- Scenarios at basin level for the benchmark basins in the Challenge Program on Water and Food
The model is designed for research purposes. For more information on model design, applications and availability please contact Charlotte de Fraiture, [email protected]
Cai, X.; Rosegrant, M. 2002. Global water demand and supply projections. Part 1: A modeling approach. Water International 27(3):159–169.
Fraiture, C. de. 2007. Integrated water and food analysis at the global and basin level. An application of WATERSIM. Water Resources Management 21: 185-198
Molden, D. 1997. Accounting for water use and productivity. System-wide Initiative on Water Management (SWIM) Paper No.1. Colombo, Sri Lanka: International Water Management Institute.
Rosegrant, M., X. Cai, and S. Cline. 2002. World Water and Food to 2025. Dealing with Scarcity. Washington, D.C.: International Food Policy Research Institute.
|Figure 1: Schematic Diagram of the WATERSIM Model
Figure 2: WATERSIM Spatial units |
Visit the beach on a hot afternoon and you may not realize it, but someone — or rather something — is watching from above. If you stand in the right place, the silent watcher’s invisible spotlight will pass right over you, like the spotlight of a police helicopter flitting overhead.
That aerial observer zooming over your head is the Jason-2 satellite. It flies 1,340 kilometers (832 miles) high — as far above the ground as New York City is from Chicago. It travels 25,000 kilometers per hour, 27 times as fast as a commercial jet. And it circles Earth a little over 12 times a day.
Two thousand times per second, Jason-2’s spotlight — pointed down at Earth — flashes on for an instant. It isn’t a flash that you could see even if you were looking. The spotlight is throwing off radio waves, which are invisible to the eyes of humans and other animals. Those waves ripple down to Earth and bounce off of its surface, back into space. A computer aboard the satellite times exactly how long those reflected radio waves take to return — usually, about nine-thousandths of a second.
By measuring how long the signal takes to bounce back, Jason-2 can measure the distance between itself and Earth’s surface. The satellite was launched into space to measure sea-surface heights. Or, more to the point, Jason-2 is measuring how quickly the planet’s seas are rising.
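The arithmetic behind that timing trick is simple enough to check: distance is the speed of light times the travel time, halved because the pulse goes down and back up. The snippet below uses the round-trip time quoted above; it is only a rough check, since the real instrument applies many corrections.

```python
SPEED_OF_LIGHT = 299_792_458      # meters per second
round_trip_time = 0.009           # seconds ("about nine-thousandths of a second")
distance_m = SPEED_OF_LIGHT * round_trip_time / 2
print(distance_m / 1000, "km")    # roughly 1,350 km, close to Jason-2's altitude
```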
Scientists these days are worried about sea level. As Earth warms, the surface of the ocean is creeping upward. This creep is happening partly because saltwater expands a tiny bit as it warms. “Warmer water literally is taller,” explains Josh Willis. He’s a climate scientist at the NASA Jet Propulsion Laboratory in Pasadena, Calif.
Sea level also is rising because warm temperatures have prompted glaciers in Antarctica, Greenland and other usually cold places to melt more quickly. Glaciers are essentially rivers of ice, and their melting adds freshwater to the ocean. Antarctica and Greenland are together losing about 350 cubic kilometers of ice per year — enough meltwater to fill up 80,000 Yankee baseball stadiums. Spread over the world’s oceans, that meltwater alone raises sea level about 1 millimeter (1/25th of an inch) or so each year.
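You can roughly check that "about 1 millimeter" figure with the numbers in this article, ignoring the small density difference between ice and meltwater:

```python
melt_volume_km3 = 350            # cubic kilometers of ice lost per year (article figure)
ocean_area_km2 = 360_000_000     # square kilometers of world ocean (article figure)
rise_mm = melt_volume_km3 / ocean_area_km2 * 1_000_000   # kilometers converted to millimeters
print(round(rise_mm, 2), "mm per year")                  # ~0.97, i.e. about 1 millimeter
```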
Jason-2 has shown that overall, sea level is currently rising about 2.4 millimeters per year — a little more than the thickness of a quarter.
That may not sound like much — but those quarters stack up year after year. This slow rise is expected to cause flooding in many of the world’s coastal cities in the next 50 to 100 years. Worse yet, the speed of sea level rise is also expected to grow. Seas may eventually rise four to eight times faster than they are today.
High and dry
Scientists have long known that sea level changes over time. Paul Hearty, a geologist at the University of North Carolina at Wilmington, has found boulders covered in barnacle shells some 30 meters (100 feet) above sea level. Those high and dry barnacles are several million years old. They serve as evidence that sea level was once much higher. Scientists have also found dead coral reefs buried 150 meters beneath the sea. When that coral was alive, it grew just below the water’s surface. Today, those coral skeletons provide evidence that sea level was once much lower, too.
Sea level has risen and fallen in sync with the ice ages, over hundreds of thousands of years. During past ice ages, oceans were lower because more water was tied up in glaciers on land. But between ice ages, sea level sometimes rose higher than it is today, as melting glaciers sweated their water into the ocean. Some 125,000 years ago, just before the last ice age began, sea level was a whopping 5 to 8 meters (16 to 26 feet) higher than it is today.
The big challenge for scientists has been how to measure changes to sea level throughout the past 50 to 100 years. Bruce Douglas, a retired scientist who worked for 20 years at the National Oceanic and Atmospheric Administration, or NOAA, in Rockville, Md., spent years working on this. During the 1980s and 1990s he measured sea level rise by studying records from tide gauges. Harbor operators have relied on these devices for more than 200 years to monitor the water level in coastal areas in order to alert ships at risk of running aground. But the gauges gave a limited picture: They measured the level of the world’s oceans, which cover 360 million square kilometers (139 million square miles), in only 20 or 30 places!
The bumpy ocean
Scientists have gradually solved that problem as satellites have become accurate enough to monitor sea level. Several satellites, including Jason-2, have now been shot into orbit for this purpose. These satellites let scientists do something that they could never do before: measure sea level not just in a few places along the coast, but also across the entire length and breadth of the world’s oceans.
These fuller measurements revealed something amazing: Unlike the water in your bathtub, the sea is actually bumpy!
Sea level varies from one part of the ocean to another by a meter or more. And how quickly sea level is rising also varies from place to place. For example, sea level in the western Pacific Ocean, near the island nations of Indonesia and the Philippines, is rising three times faster than the global average.
The ocean’s warming causes much of this bumpiness. During so-called El Niño years, when waters of the eastern Pacific Ocean warm to about 1°C (1.8 °F) higher than usual, the sea level off of California’s coast can rise by 10 or 20 centimeters (4 to 8 inches). This is caused by the expansion of the warming seawater. But the world’s oceans aren’t warmer overall during an El Niño period. Heat has just shifted from one place to another: As the eastern Pacific warms, the western Pacific cools down, causing sea level there to temporarily drop.
So sea level operates a bit like a water bed: Push down on it in one place and it will rise in another. By averaging out all of those bumps detected by Jason-2 and other satellites, scientists have calculated that the world’s oceans are rising by about 2.4 millimeters per year.
The big question for scientists like NASA’s Willis is: Has sea level been rising that fast for millennia? Or did this rate of sea level rise begin much more recently? In the past century or two, humans began spewing more carbon dioxide into the air through the burning of fossil fuels such as coal, oil and natural gas. And as a type of greenhouse gas, carbon dioxide helps warm Earth’s atmosphere — and its seas. So it would make sense that the seas started rising more quickly in the last few hundred years.
Archeologists studying ancient fishponds built by the Romans 2,000 years ago have now helped to answer this important question.
The Romans built dozens of saltwater ponds in southern Europe, along the edge of the Mediterranean Sea. Their goal was to farm fish for dinner. Channels connected the sea to the ponds, which were replenished during high tides. So the ponds had to be built right at sea level. But today, you have to go snorkeling to see the stone walls that the Romans built to contain their ponds: They sit 1 meter under water. They were submerged by rising seas.
Those ponds show that sea level has risen by no more than 1 meter in the last 2,000 years — less than one-tenth of a millimeter per year. If sea level is currently rising 2.4 millimeters per year, then it is rising at least 20 times as fast as it has, on average, since the fishponds were built!
Concludes Willis, “The last 150 years have seen much more rapid rise than at any other time in the last 2,000 years.”
That escalation started just as humans began producing large amounts of carbon dioxide.
And scientists expect sea level rise to speed up even more. As ice melts around the edges of Antarctica and Greenland, some glaciers there have sped up to two, four, and in some cases, even eight times their former speed. As a result, they are dumping ice into the ocean much more quickly than before. Each year, both Greenland and Antarctica are losing a little more ice than the year before. So if sea level rise is speeding up, how fast could it get?
Drowned coral reefs
History provides some worrisome hints. These warnings come from dead coral reefs that scientists have found in deep water off the coasts of Hawaii, Tahiti and Australia. Because corals depend on sunlight that filters through the water, they can survive only a few meters deep. The dead reefs “drowned” in darkness when the sea level jumped, suddenly putting them too far underwater to survive.
“There were huge changes in sea level from 20,000 years ago to about 10,000 years ago” as ice sheets in Antarctica, Greenland, North America and Europe were melting, says Douglas, the retired NOAA scientist. The biggest of those changes, a disaster that scientists call Meltwater Pulse 1A, drowned those coral reefs. Meltwater Pulse 1A was an episode of runaway melting that lasted for 200 to 500 years. During that time, sea level rose by at least 4 meters per century — 16 times faster than today.
Sea level rise probably won’t skyrocket that quickly in the near future. There simply isn’t as much ice in the world to melt all at once as there was 14,000 years ago. But sea level rise could still speed up quite a bit.
Two separate research teams, including one led by Martin Vermeer at Helsinki University of Technology in Finland, predict that sea level will still rise anywhere from 0.75 meters to 1.9 meters by the year 2100. The top end of that range is taller than most grownups (a 6 foot man is only 1.8 meters tall). In order for sea level to reach these heights so soon, it would need to be rising 15 to 30 millimeters per year by the year 2100 — 6 to 12 times faster than it is today.
The year 2100 may seem pretty far off. But if you’re a teen today, then there’s a very good chance that you’ll live to see it. And that kind of sea level rise will cause some huge problems.
A rise in sea level of just 1 meter would directly affect 3.7 million people in the United States alone. That’s according to predictions by Jeremy Weiss and Jonathan Overpeck. These climate scientists work at the University of Arizona in Tucson. Earlier this year, they published a study indicating that such a rise would put the homes of those millions of people in contact with seawater during ordinary high tides that happen every month.
Some towns already sit on the edge. Every now and then in downtown Miami, unusually high tides cause the streets to flood as ocean water pushes through the storm sewers and gurgles up from the manholes like a sea monster. These tides are caused by minor changes in the wind, or by storms passing by hundreds of miles away. “We are getting sea levels 1.5 to 2 feet [45 to 60 centimeters] above the normal projected level almost every year now,” says Harold Wanless, a geologist at the University of Miami. “It’s just starting to show how vulnerable we are.”
But the worst effects will happen during large storms called hurricanes that form over the ocean. As a hurricane comes ashore, its howling winds push a pile of seawater forward. These water piles, called storm surges, can tower as much as 8 meters (26 feet) above normal sea level. Two-meter (6.5-foot) storm surges are common for weaker hurricanes. Storm surges can spill into coastal towns, washing away buildings and cars. (In one part of New York City, the storm surge associated with Superstorm Sandy on October 30 broke a record. The coastal water briefly rose by 13.88 feet — more than 2 feet higher than the previous record, set 191 years earlier. Flowing over seawalls, this surge flooded parts of lower Manhattan.)
Sea level rise could cause future storm surges to start out a meter higher, says Robert Deyle, an environmental planning professor at Florida State University in Tallahassee. “Storms are going to flood further inland,” he concludes.
In the United States, regions from Texas to North Carolina would suffer the most from hurricanes. Preventing flooding there will require that hundreds of kilometers of seawalls and dikes be built. In some places, though, sea level rise will be made even worse by other environmental problems that humans have caused.
Some 2.2 million people live just north of the city of Manila in the Philippines. This area is known as KAMANAVA (an abbreviation combining the names of four towns). Already, high tides flood some streets in KAMANAVA several times each year — and these floods can last weeks.
The sea around the Philippines is rising 7 to 8 millimeters per year, three times the worldwide average. But this is only part of the problem. Even as the sea around Manila and KAMANAVA rises, the land they sit on is sinking. This sinking is called subsidence.
The land is sinking because people have drilled so many wells and pumped so much water out of the ground for drinking and for watering crops. They have pumped out 7 cubic kilometers of water — almost enough water to build a mountain the size of Pikes Peak in Colorado! The depleted land is deflating, like an apple shrinking as it dries. It is sinking 90 millimeters (3.5 inches) per year in some spots.
“In those places we have a lot of road-raising projects,” explains Fernando Siringan. He’s a coastal geologist at the University of the Philippines in Diliman. To keep those roads above water, the government has to raise roads by almost a meter — “and then do it again after two or three years,” he notes. “It’s not cheap.”
Humans have caused coastal land to sink in many other places, too — including China, the Netherlands, southeast England and parts of the United States. Some areas of New Orleans have sunk 5 meters (16 feet) in the last 150 years. And that partly explains why the city was devastated in 2005 by storm surges from Hurricane Katrina.
Many people have referred to sea level rise as a future problem. But KAMANAVA and New Orleans offer two examples of how in some parts of the world sea level rise is already a problem. A problem that scientists need to watch closely.
The Jason-2 satellite that is tracking sea level will have to be replaced in several years. Heavy radiation from the sun is slowly cooking the satellite’s computers. And in order to make exact measurements of the sea’s height, Jason-2 has to travel a precise orbit around Earth. But as it zooms around Earth 4,562 times each year, this satellite is slowly wandering out of that orbital path.
Willis, the NASA scientist, is part of the team that is getting Jason-2’s replacement ready. “Jason-3 is almost a carbon copy of Jason-2,” he says. When Jason-3 blasts into space in 2014, Willis will be one of the people eagerly watching the information that it beams back to Earth.
barnacle A small sea animal that grows in colonies and filters the water for food. The hard shells of barnacles stick to rocks, the sides of ships and other objects at the water’s surface.
coral Small ocean animals that form colonies. Many build large, rocklike reefs out of a mineral known as calcium carbonate.
El Niño A change in ocean circulation that happens about once every five years, causing the surface waters of the eastern Pacific Ocean to warm by about half a degree Celsius.
glacier A river of slowly flowing ice that forms when snow falls, piles up and then compresses into ice over time.
hurricane A type of large storm, also known as a typhoon, with winds of at least 104 kilometers (64 miles) per hour, and sometimes exceeding 250 kilometers (155 miles) per hour. Hurricanes over the Atlantic Ocean form during summer or early fall.
ice age A cooling of the Earth, happening every hundred thousand years or so, and lasting tens of thousands of years. During these periods, glaciers and ice sheets form over many regions. Past ice ages led to large regions of present-day Canada and northern Europe being topped with ice more than a kilometer (0.6 mile) thick.
Meltwater Pulse 1A A sudden warming and melting of glacial ice during the close of the last ice age, 14,600 years ago. The episode caused global sea level to jump by 10 meters (32 feet) or more.
NASA, or National Aeronautics and Space Administration. This government agency runs the U.S. space program and many orbiting research satellites that study Earth.
Pikes Peak A famous peak in the Rocky Mountains of central Colorado. It rises 1,600 meters above the surrounding land, to a total height of 4,302 meters (14,115 feet).
radio waves A type of electromagnetic radiation. It is similar to visible light, but at a lower frequency. It is used to transmit radio and television signals, and is also used in radar.
sea level The overall level of the ocean over the entire globe when all tides and other short-term changes are averaged out.
storm surge A bulge in the surface of the ocean, as much as 8 meters (26 feet) above the surrounding sea level. It is caused by the winds of a hurricane or other tropical storm pushing and piling up the water
subsidence To sink or fall. In geology, the term refers to a drop in surface level as the ground compacts, due to the loss of water (usually from groundwater extraction) that had helped support tiny rocks and sediment.
tide Rises and falls in sea level that happen typically twice per day. These changes are caused by the gravity of the moon (and to a lesser degree, the sun) pulling on water in the oceans.
tide gauge A device used to measure the water level and tides in harbors. It is used to predict tide patterns. This allows harbormasters to alert ships before they risk running aground in shallow water.
to visualize how places could be affected by sea level rise:
S. Ornes. “The sinking city.” Science News for Students. April 9, 2012.
D. Fox. “A ghost lake.” Science News for Students. Feb. 1, 2012.
A. Biskup. “Polar ice feels the heat.” Science News for Students. April 23, 2008.
E. Sohn. “Shrinking glaciers.” Science News for Students. Sept. 5, 2005.
E. Sohn. “Ice age melting and rising seas.” Science News for Students. Aug. 27, 2004.
Teacher’s questions: Questions you can use in your classroom related to this article. |
America fought World War II to preserve freedom and democracy, yet that same war featured the greatest suppression of civil liberties in the nation’s history. In an atmosphere of hysteria, President Roosevelt, encouraged by officials at all levels of the federal government, authorized the internment of tens of thousands of American citizens of Japanese ancestry and resident aliens from Japan. On March 18, 1942, Roosevelt authorized the establishment of the War Relocation Authority (WRA) to govern these detention camps. He chose as its first head Milton Eisenhower, a New Deal bureaucrat in the Department of Agriculture and brother of General Dwight D. Eisenhower. In a 1942 film entitled Japanese Relocation, produced by the Office of War Information, Eisenhower offered the U.S. government’s rationale for the relocation of Japanese-American citizens. He claimed that the Japanese “cheerfully” participated in the relocation process, a statement belied by all contemporary and subsequent accounts of the 1942 events.
Milton Eisenhower: When the Japanese attacked Pearl Harbor, our West Coast became a potential combat zone. Living in that zone were more than 100,000 persons of Japanese ancestry: two thirds of them American citizens; one third aliens. We knew that some among them were potentially dangerous. But no one knew what would happen among this concentrated population if Japanese forces should try to invade our shores. Military authorities therefore determined that all of them, citizens and aliens alike, would have to move. This picture tells how the mass migration was accomplished.
Neither the Army nor the War Relocation Authority relished the idea of taking men, women, and children from their homes, their shops, and their farms. So the military and civilian agencies alike determined to do the job as a democracy should: with real consideration for the people involved.
First attention was given to the problems of sabotage and espionage. Now, here at San Francisco, for example, convoys were being made up within sight of possible Axis agents. There were more Japanese in Los Angeles than in any other area. In nearby San Pedro, houses and hotels, occupied almost exclusively by Japanese, were within a stone’s throw of a naval air base, shipyards, oil wells. Japanese fishermen had every opportunity to watch the movement of our ships. Japanese farmers were living close to vital aircraft plants. So, as a first step, all Japanese were required to move from critical areas such as these.
But, of course, this limited evacuation was a solution to only part of the problem. The larger problem, the uncertainty of what would happen among these people in case of a Japanese invasion, still remained. That is why the commanding General of the Western Defense Command determined that all Japanese within the coastal areas should move inland.
Source: Japanese Relocation, produced by the Office of War Information, 1942.National Archives and Records Administration, Motion Picture Division. |
Presentation on theme: "13-3: The Gas Laws. What are the Gas Laws? The Gas Laws: are mathematical representations of the interaction between the four variables of gas: 1)Pressure."— Presentation transcript:
What are the Gas Laws? The Gas Laws are mathematical representations of the interactions among the four variables of a gas: 1) Pressure 2) Volume 3) Temperature 4) Quantity of a Gas
The First Law: Boyle's Law Robert Boyle (1627-1691): English chemist/physicist, one of the first scientists to note that gas particles are spread out from one another. His most famous experiment involved trapping air in a J-shaped tube while changing its pressure and measuring its volume. The experiment included mercury inside the tube, which moved and balanced out between the two sides when Boyle increased the atmospheric pressure. But how did this help Boyle learn anything?
The Pressure-Volume Relationship Boyle wanted to know: Is there a relationship between pressure and the volume of a gas? The answer is YES!
What exactly is Boyle's Law? Boyle's Law states: if the temperature of a given gas remains unchanged, the product of the pressure times the volume has a constant value.
How Can Boyle's Law Help You? Boyle's Law can help you determine different quantities without having to measure them. P1V1 = P2V2 What is the mathematical relationship between pressure and volume? Inverse.
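As an illustration of using Boyle's Law this way (the numbers below are made up, not taken from the slides):

```python
p1, v1 = 1.0, 6.0      # starting pressure (atm) and volume (L)
p2 = 2.0               # new pressure (atm); temperature and amount of gas unchanged
v2 = p1 * v1 / p2      # Boyle's Law: P1V1 = P2V2
print(v2)              # 3.0 L -- doubling the pressure halves the volume (inverse)
```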
The Second Law: Charles's Law Jacques Charles also determined a relationship between gas variables. He did this by performing an experiment in which he kept the amount of the gas and its pressure constant. He discovered that the results of his experiment, when graphed, made a straight line with a positive slope when volume was plotted against temperature.
The Temperature-Volume Relationship Charles proved that the volume of a gas is directly proportional to its temperature. Scientists determined that the lowest temperature you can reach is absolute zero, which is equal to -273.15 °C. To match absolute zero, you must use an absolute temperature scale. Since the Kelvin scale is measured only in positive units, and its zero coincides with absolute zero, you use it when relating temperature to volume.
But, What Is Charles's Law? Charles's Law states: at constant pressure, the volume of a fixed amount of gas is directly proportional to its absolute temperature. V1T2 = V2T1
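A quick illustration of Charles's Law with made-up numbers; note the conversion to kelvins before using the formula:

```python
v1 = 2.0                    # liters at the starting temperature
t1 = 27.0 + 273.15          # 27 °C converted to kelvins
t2 = 127.0 + 273.15         # 127 °C converted to kelvins
v2 = v1 * t2 / t1           # Charles's Law at constant pressure: V1T2 = V2T1
print(round(v2, 2))         # ~2.67 L -- volume grows in direct proportion to absolute T
```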
The Third Law: Avogadro's Law We all know who the Italian chemist Amedeo Avogadro is, and what he discovered. However, he also created a hypothesis that puzzled scientists in his time, and wasn't accepted until decades later!
The Amount-Volume Relationship Avogadro's Law states: equal volumes of gases at the same temperature and pressure contain an equal number of particles. Avogadro's Law also makes two main points: 1) all gases show the same physical behavior, 2) the larger the volume of a gas, the greater the number of particles within the gas.
The Fourth Law: Dalton's Law John Dalton (1766-1844): English chemist who was one of the first scientists to consider mixtures of gases. After numerous experiments he was able to determine that each gas in a mixture exerts the same pressure that it would if it were alone (not in a mixture). Each gas contributes a pressure to the mixture; each gives a Partial Pressure.
Dalton's Law of Partial Pressures Dalton's Law of Partial Pressures states: the sum of the partial pressures of all the components in a gas mixture is equal to the total pressure of the gas mixture. To sum it up in an equation, it is written like this: Ptotal = Pa + Pb + Pc + ... where Ptotal equals the total of all pressures, and the subscript letters each mark a different gas in the mixture.
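A small illustration of Dalton's Law, with made-up partial pressures:

```python
partial_pressures = {"N2": 0.60, "O2": 0.25, "CO2": 0.15}   # atm, example values
total = sum(partial_pressures.values())   # Dalton's Law: Ptotal = Pa + Pb + Pc + ...
print(total)                              # 1.0 atm
```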
Nuclear physics is often thought of as nuclear power, but nuclear physics is really the investigation and understanding of (atomic) nuclei, and nuclear power is just one of the applications of nuclear physics.
The nucleus (that we study) is the heart <3 of the atom, and it's where almost all mass of matter resides. Nuclei consist of neutrons and protons; ranging from the smallest one ("normal" hydrogen) with just one proton (and zero neutrons), to the biggest with a few hundred neutrons and protons. (You can't make a nucleus with just one neutron, you have to have at least one proton). The nucleus is small, but large enough to do stuff like vibrate and rotate (what the nuclear physicist would call "show collective degrees of freedom").
A major motivation for studying the atomic nucleus is to gain a fundamental understanding of our world; its origin and future, and its current state. Nuclear physics can explain how stars work to release more or less all the useful energy in the world, while they at the same time produce the different elements - from hydrogen to iron. (Therefore there is today a lot of collaboration between nuclear physicists and astrophysicists.)
In addition to nuclear power, nuclear medicine (medical diagnosis and treatments) is another important application of nuclear physics <3 <3 <3 |
Vicente L. Rafael
Explores the historic roots and practices of colonialism throughout the world, focusing on the roles of nationalism, cosmopolitanism, and imperial domination. Treats colonialism as a world event whose effects continue to be felt and whose power needs to be addressed. Offered: S.
What is colonialism and how does it historically come about as an integral aspect of the formation of the West after 1500? How does a study of colonial practices and imperial regimes allow us to critically approach the ways by which Western encounters with non-Western peoples produce relations of power and inequality? What are the various ways by which colonized peoples comprehend and respond to the demands of colonial rule? What role does nationalism play in determining the limits and possibilities of colonial rule and native responses? In addressing these questions, this course will examine a variety of historical, literary, and cinematic productions set in colonial contexts ranging from the Americas to Asia and Africa, including the recent US "war on terror." In doing so, the course will treat colonialism as a world historical event whose effects are still at work and whose power continue to hold sway.
Student learning goals
General method of instruction
The class will consist of lectures, discussions and films.
Class assignments and grading
Readings, one mid-term exam, and one final exam, each worth 50% of the grade.
Mid-term and final exams. |
Nasa has started work on a new space telescope that will provide views of the universe 100 times larger than what the Hubble Space Telescope can capture.
The Wide Field Infrared Survey Telescope (WFIRST) is expected to launch in 2024. It boasts the ability to capture images with the depth and quality of the Hubble telescope, but take in a much larger expanse of space.
The space agency is hoping the WFIRST will help researchers uncover some of the mystery surrounding dark energy and dark matter. It's also expected the telescope will discover new planets and galaxies.
"WFIRST has the potential to open our eyes to the wonders of the universe, much the same way Hubble has," said John Grunsfeld, an astronaut and associate administrator of Nasa's Science Mission Directorate.
"This mission uniquely combines the ability to discover and characterize planets beyond our own solar system with the sensitivity and optics to look wide and deep into the universe in a quest to unravel the mysteries of dark energy and dark matter."
The telescope will include a Wide Field Instrument that will help survey the universe, and a Coronagraph Instrument will help block glare from stars in order to better see the planets orbiting them. Nasa explains that this will help researchers better understand the physics of these atmospheres, and even discover signs of environments suitable for life.
It will not only be used to spot galaxies and planets far beyond what we can see now, but also track their location to better understand the expansion of the universe, and reveal further insight into the dark energy that could be causing it to grow.
Chipmunks are not known for their night vision. They're a diurnal, or daytime, creature whose eyes work best when the sun is out. The same can be said of humans, too. Chipmunks tend to sleep when the sun sets. When they do come out at night, they're at a grave disadvantage, but they are not totally blind thanks to how their eyes are made.
How Chipmunks See
A chipmunk's eyes work as a means of defense. They and other small mammals tend to experience the world at a higher rate of speed than humans. They must move fast to escape predators, and therefore have to react to stimuli much faster. This includes how their brains translate the sight of, for example, a hawk's shadow into the recognition of what danger that shadow represents. Compared to humans, a chipmunk's view of the world is much faster and larger. It may even be wider thanks to where their eyes are positioned on their heads: their viewing range can be more than a human's 90-degree angle.
What Makes Up the Eye
Mammalian eyes tend to have similar structures inside them, and a chipmunk is no exception. The iris and pupil of the eye aid in gathering up the light outside. The light then passes through the lens on the way to the retina at the back of the eye. Within the retina are what are called rods and cones. Not only do these structures in the eye help to perceive color, but they also help an animal see in darkness. The information is then passed to the optic nerve and into the brain.
Rods and Cones
The rods and cones within the eye are named due to their shapes. The difference between them otherwise is in how they function. Rods work better in low light. They tell the brain basic visual information and catch motion. Cones are for bright light. They give the brain fine details of what is being seen. Chipmunks have many fewer rods than cones in their eyes.
Nocturnal and Diurnal
Chipmunks are diurnal. In other words, they only come out during the daytime. The reason is not because they are blind at night, but because everything is too dark for their main defense system -- their eyes -- to work to their advantage. Nocturnal animals tend to have large eyes with slit pupils of some fashion, as that shape helps them filter out the light. They are specialized for night-time viewing, and may even have a light-boosting mechanism in their eyes called the tapetum. Chipmunk eyes are specialized for use in the daytime. They have smaller eyes with rounded pupils that do not keep the light out well. They can detect motion extremely well and watch for shadows on the ground from airborne predators. If they were out at night, they could not do so.
Welding is a method of joining two materials, such as brass, aluminum, stainless steel, polymer or plastic, by fusing them together. The equipment used for welding is known as a welder. A filler is normally used to hold the pieces together. Different energy sources are also used for welding, such as flame, gas, electron beam, laser, ultrasound and friction. There are different welding processes, used alongside equipment such as pipe threading machines; the main processes are MIG welding, TIG welding and arc welding.
Types of welding machine
The most common type of welder in industrial use is the arc welder. This electric machine uses a stick electrode to conduct electricity to the workpiece, melting it at the same time in order to fill the gaps. The purpose is identified first and the solution is then provided according to it. The setup involves a power supply and some important tools that may be needed with the welding machine. The accessories most often required are a welding helmet, the torch, a welding curtain, welding gauntlets and goggles. Such accessories should be worn before welding. The flames produced while welding are harmful to the eyes and can result in major health issues.
As the Qualifications and Curriculum Authority (UK) states in 'The Curriculum Guidance for the Foundation Stage': "The early years are critical in children’s development. Children develop rapidly during this time, physically, intellectually, emotionally and socially. The foundation stage is about developing key learning skills such as listening, speaking, concentration, persistence and learning to work together and cooperate with other children. It is also about developing early communication, literacy and numeracy skills that will prepare young children for key stage 1 of the national curriculum."
Presentation on theme: "Chapter 2 Atoms, Molecules, and Ions History n Greeks n Democritus and Leucippus - atomos n Aristotle- elements. n Alchemy n 1660 - Robert Boyle- experimental."— Presentation transcript:
History • Greeks • Democritus and Leucippus - atomos • Aristotle - elements • Alchemy • 1660 - Robert Boyle - experimental definition of element • Lavoisier - Father of modern chemistry; he wrote the book.
Laws • Conservation of Mass • Law of Definite Proportion - compounds have a constant composition; they react in specific ratios by mass. • Multiple Proportions - When two elements form more than one compound, the ratios of the masses of the second element that combine with one gram of the first can be reduced to small whole numbers.
What?! • Water has 8 g of oxygen per g of hydrogen. • Hydrogen peroxide has 16 g of oxygen per g of hydrogen. • 16/8 = 2/1 • Small whole number ratios.
Proof • Mercury has two oxides. One is 96.2% mercury by mass, the other is 92.6% mercury by mass. • Show that these compounds follow the law of multiple proportions. • Speculate on the formulas of the two oxides.
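One way to work this example (not part of the original slides) is to compare the grams of oxygen that combine with one gram of mercury in each oxide:

```python
hg_a, hg_b = 96.2, 92.6                 # percent mercury by mass, from the slide
o_per_g_hg_a = (100 - hg_a) / hg_a      # ~0.0395 g of O per g of Hg in the first oxide
o_per_g_hg_b = (100 - hg_b) / hg_b      # ~0.0799 g of O per g of Hg in the second oxide
print(round(o_per_g_hg_b / o_per_g_hg_a, 2))   # ~2.02 -- a small whole-number ratio, 2:1
```

The roughly 2:1 ratio is consistent with formulas such as Hg2O and HgO, which is one reasonable answer to the "speculate" prompt.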
Dalton’s Atomic Theory 1) Elements are made up of atoms 2) Atoms of each element are identical. Atoms of different elements are different. 3) Compounds are formed when atoms combine. Each compound has a specific number and kinds of atom. 4) Chemical reactions are rearrangement of atoms. Atoms are not created or destroyed.
A Helpful Observation • Gay-Lussac - under the same conditions of temperature and pressure, compounds always react in whole number ratios by volume. • Avogadro - interpreted that to mean that at the same temperature and pressure, equal volumes of gas contain the same number of particles (called Avogadro's Hypothesis).
Experiments to determine what an atom was • J. J. Thomson - used cathode ray tubes.
Millikan's Experiment • X-rays give some electrons a charge.
Millikan's Experiment • Some drops would hover. • From the mass of the drop and the charge on the plates, he calculated the mass of an electron.
Radioactivity • Discovered by accident • Becquerel • Three types: alpha - helium nucleus (+2 charge, large mass); beta - high speed electron; gamma - high energy light.
Rutherford’s Experiment n Used uranium to produce alpha particles. n Aimed alpha particles at gold foil by drilling hole in lead block. n Since the mass is evenly distributed in gold atoms alpha particles should go straight through. n Used gold foil because it could be made atoms thin.
Chemical Bonds
- The forces that hold atoms together.
- Covalent bonding - sharing electrons; this makes molecules.
- Chemical formula - the number and type of atoms in a molecule: C2H6 has 2 carbon atoms and 6 hydrogen atoms.
- Structural formula - shows the connections, but not necessarily the shape.
(The slide shows the structural formula of ethane, C2H6, drawn out with its hydrogen atoms.)
- There are also other models that attempt to show three-dimensional shape, such as ball-and-stick models.
Ions
- Atoms or groups of atoms with a charge.
- Cations - positive ions - formed by losing electron(s).
- Anions - negative ions - formed by gaining electron(s).
- Ionic bonding - held together by the opposite charges.
- Ionic solids are called salts.
Polyatomic Ions
- Groups of atoms that have a charge.
- Yes, you have to memorize them.
- List on page 65.
Naming compounds
- Two types.
- Ionic - a metal and a non-metal, or polyatomics.
- Covalent - we will just learn the rules for two non-metals.
Ionic compounds
- If the cation is monatomic - just write the name of the metal (cation).
- If the cation is polyatomic - name it.
- If the anion is monatomic - name it, but change the ending to -ide.
- If the anion is polyatomic - just name it.
- Practice.
Covalent compounds
- Two words, with prefixes.
- Prefixes tell you how many: mono, di, tri, tetra, penta, hexa, hepta, octa, nona, deca.
- First element: whole name with the appropriate prefix, except mono.
- Second element: -ide ending with the appropriate prefix.
- Practice.
Ionic Compounds (practice)
- Have to know what ions they form: from the table, the polyatomic list, or figure it out.
- CaS
- K2S
- AlPO4
- K2SO4
- FeS
- CoI3
- Fe2(C2O4)3
- MgO
- MnO
- KMnO4
- NH4NO3
- Hg2Cl2
- Cr2O3
- KClO4
- NaClO3
- YBrO2
- Cr(ClO)6
Naming Covalent Compounds (practice)
- CO2
- CO
- CCl4
- N2O4
- XeF6
- N4O4
- P2O10
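Since the prefix rules above are mechanical, they are easy to sketch in code. The example below is an illustration added to this transcript, not part of the slides; the small element-name tables are deliberately incomplete assumptions, and common vowel elisions such as "monoxide" and "tetroxide" are not handled.

```python
import re

# Naming simple binary covalent compounds with Greek prefixes, following the
# rules on the slide. The element tables below are illustrative, not complete.

PREFIX = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
          6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}
ELEMENT = {"C": "carbon", "N": "nitrogen", "P": "phosphorus", "S": "sulfur",
           "Xe": "xenon"}
IDE = {"O": "oxide", "Cl": "chloride", "F": "fluoride", "S": "sulfide"}

def name_covalent(formula):
    # Split e.g. "N2O4" into [("N", 2), ("O", 4)].
    parts = [(el, int(count) if count else 1)
             for el, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula) if el]
    (el1, n1), (el2, n2) = parts
    first = ELEMENT[el1] if n1 == 1 else PREFIX[n1] + ELEMENT[el1]  # no "mono" on the first element
    second = PREFIX[n2] + IDE[el2]
    return f"{first} {second}"

print(name_covalent("CO2"))    # carbon dioxide
print(name_covalent("CCl4"))   # carbon tetrachloride
print(name_covalent("N2O4"))   # dinitrogen tetraoxide (usually elided to "tetroxide")
```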
Writing Formulas
- Two sets of rules, ionic and covalent.
- To decide which to use, look at what the first word is.
- If it is a metal or a polyatomic ion, use the ionic rules.
- If it is a non-metal, use the covalent rules.
Ionic Formulas
- Charges must add up to zero.
- Get the charges from the table, from the name of the metal ion, or from the memorized list.
- Use parentheses to indicate multiple polyatomic ions.
Ionic Formulas: sodium nitride
- Sodium - Na is always +1.
- Nitride - the -ide ending tells you it comes from the table; nitride is N3-.
- Na+ and N3- don't add up to zero, so three Na are needed.
- Na3N
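The charge-balancing step above generalizes to a least-common-multiple calculation. Here is a minimal sketch, added as an illustration; the ion charges passed in are standard textbook values, and the function only returns the counts rather than assembling the formula string.

```python
from math import gcd

def ion_counts(cation_charge, anion_charge):
    """Smallest cation/anion counts whose total charge is zero."""
    lcm = abs(cation_charge * anion_charge) // gcd(abs(cation_charge), abs(anion_charge))
    return lcm // abs(cation_charge), lcm // abs(anion_charge)

print(ion_counts(+1, -3))  # (3, 1) -> Na3N, as in the slide
print(ion_counts(+3, -2))  # (2, 3) -> Al2O3
print(ion_counts(+2, -3))  # (3, 2) -> Ca3(PO4)2, with parentheses around the polyatomic ion
```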
Ionic Compounds (practice)
- Sodium sulfite
- Calcium iodide
- Lead (II) oxide
- Lead (IV) oxide
- Mercury (I) sulfide
- Barium chromate
- Aluminum hydrogen sulfate
- Cerium (IV) nitrite
Covalent compounds (practice)
- The name tells you how to write the formula.
- Sulfur dioxide
- Difluorine monoxide
- Nitrogen trichloride
- Diphosphorus pentoxide
Acids
- Substances that produce H+ ions when dissolved in water.
- All acids begin with H.
- Two types of acids: oxyacids and non-oxyacids.
Naming acids
- If the formula has oxygen in it, write the name of the anion, but change -ate to -ic acid and -ite to -ous acid.
- Watch out for sulfuric and sulfurous.
- H2CrO4
- HMnO4
- HNO2
Naming acids
- If the acid doesn't have oxygen, add the prefix hydro- and change the suffix -ide to -ic acid.
- HCl
- H2S
- HCN
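The -ate/-ite/-ide substitutions above are also easy to express in code. This sketch is an added illustration; the special-case table reflects the "watch out for sulfuric and sulfurous" note, and anything not covered simply raises an error.

```python
# Acid names from anion names, following the two slides above.

SPECIAL = {"sulfate": "sulfuric acid", "sulfite": "sulfurous acid",
           "phosphate": "phosphoric acid", "phosphite": "phosphorous acid"}

def acid_name(anion):
    if anion in SPECIAL:                 # irregular stems called out on the slide
        return SPECIAL[anion]
    if anion.endswith("ate"):            # oxyacid: -ate -> -ic acid
        return anion[:-3] + "ic acid"
    if anion.endswith("ite"):            # oxyacid: -ite -> -ous acid
        return anion[:-3] + "ous acid"
    if anion.endswith("ide"):            # no oxygen: hydro- ... -ic acid
        return "hydro" + anion[:-3] + "ic acid"
    raise ValueError(f"unrecognized anion name: {anion}")

print(acid_name("chromate"))   # chromic acid      (H2CrO4)
print(acid_name("nitrite"))    # nitrous acid      (HNO2)
print(acid_name("chloride"))   # hydrochloric acid (HCl)
print(acid_name("cyanide"))    # hydrocyanic acid  (HCN)
```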
Formulas for acids
- Work backwards from the names.
- If the name has hydro- in it, the acid has no oxygen and the anion ends in -ide.
- If there is no hydro-, the anion ends in -ate or -ite.
- Write the anion and add enough H to balance the charges.
Formulas for acids (practice)
- Hydrofluoric acid
- Dichromic acid
- Carbonic acid
- Hydrophosphoric acid
- Hypofluorous acid
- Perchloric acid
- Phosphorous acid
Hydrates
- Some salts trap water molecules when they form crystals. These are hydrates.
- Both the name and the formula need to indicate how many water molecules are trapped.
- In the name, we add the word hydrate with a prefix that tells us how many water molecules.
- In the formula, you put a dot and then write the number of water molecules.
- Calcium chloride dihydrate = CaCl2·2H2O
- Chromium (III) nitrate hexahydrate = Cr(NO3)3·6H2O
PINDICS stands for Performance Indicators for Elementary School Teachers, a framework for evolving performance standards for teachers' accountability. NCERT will develop teacher performance standards through appropriate provision of training and support. NCERT has developed the PINDICS framework based on the norms and standards enunciated in various studies and statutory orders of the government.
Performance Indicators for School Teachers
These performance standards define the criteria expected when teachers perform their major tasks and duties. Under each performance standard there are specific tasks which teachers are expected to perform, termed specific standards. These are further delineated as performance indicators that can be used to observe progress and to measure actual results against expected results. NCERT will also develop and pilot instruments to measure teacher competence under PINDICS. PINDICS will eventually evolve into the framework for effective teacher performance, supporting effective monitoring and benchmarking across the country.
- NCERT has developed PINDICS for elementary school teachers.
- Self-assessment by the teacher at least twice a year.
- Assessment by Head Teacher/CRC/BRC coordinators, also twice in a year, using the teacher's self-assessment record, observation of actual classroom processes, and dialogue with teachers, students and SMC members
- Would be linked subsequently to QMTs and consolidated at Cluster, Block, District and State level.
What is PINDICS? Guidelines for Its Use
Performance Indicators (PINDICS) are used to assess the performance and progress of teachers. The framework consists of performance standards (PS), specific standards and performance indicators. Performance standards are the areas in which teachers perform their tasks and responsibilities. Under each performance standard there are specific tasks which teachers are expected to perform; these are termed specific standards. Performance indicators have been derived from the specific standards.
7 Performance Standards (PS)
Performance standards communicate expectations for each responsibility area of the teacher's job performance. The following seven performance standards have been identified.
- PS -1 : Designing Learning Experiences for Children
- PS -2 : Knowledge and Understanding of Subject matter
- PS -3 : Strategies for Facilitating Learning
- PS -4 : Interpersonal Relationship
- PS -5 : Professional Development
- PS -6 : School Development
- PS -7 : Teacher Attendance
Use of PINDICS
PINDICS can be used by teachers themselves for assessing their own performance and making continuous efforts to reach the highest level. It can also be used for teacher appraisal by supervisory staff or mentors to assess performance and to provide constructive feedback for its improvement. Each performance indicator is rated on a four-point scale ranging from 1 to 4, indicating the level of performance.
4 Rating Points
The four rating points are:
1. Not meeting the expected standard
2. Approaching the expected standard
3. Approached the expected standard
4. Beyond the expected standard
A teacher who performs tasks in an innovative way and makes extra efforts to improve student performance can be rated as beyond the expected standard.
Guidelines for teachers
Self-assessment by the teacher should be done for every quarter in a year. Complete the teacher identification information on page 1.
- No item should be left blank
- Read each performance indicator carefully, reflect on it in the context of your classroom practice, and give a rating point in the appropriate box.
- Place yourself at a point on the four-point scale according to your performance against each indicator.
- Work out the total score on each performance standard (area) by adding the scores on each indicator of the standard; a small scoring sketch follows this list.
- Prepare a descriptive report on the basis of your assessment. The report may also include the areas in which help is required.
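As a purely illustrative aid, not part of the PINDICS guidelines themselves, the totalling step can be sketched as follows; the standards listed and the ratings are made-up example values on the 1-4 scale described above.

```python
# Totalling indicator ratings per performance standard (example values only).

ratings = {
    "PS-1 Designing Learning Experiences for Children": [3, 4, 2, 3],
    "PS-2 Knowledge and Understanding of Subject Matter": [4, 3, 3],
    "PS-3 Strategies for Facilitating Learning": [2, 3, 3, 2, 4],
}

for standard, scores in ratings.items():
    total = sum(scores)
    maximum = 4 * len(scores)          # each indicator is rated out of 4
    print(f"{standard}: {total}/{maximum}")
```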
Teacher Identification Information Form Details
Guidelines for Head Teacher/CRCC (School Complex HM)/MRCC (MEO)
Assessment by the Head Teacher/CRCC (School Complex HM)/MRCC (MEO) should be carried out twice a year, keeping in view the following points.
- Use teacher's self-assessment record
- Observe actual classroom processes
- Have dialogue with teachers, students and SMC members to supplement teacher's report
- Prepare a descriptive report based on self-observation and report collected from the teacher
- Discuss the report with the teacher concerned to improve his/her level of performance
- Link information from teachers assessment using PINDICS with information about student attendance, curriculum coverage and student learning outcomes from Quality Monitoring Tools (QMTs)
- Complete the Teacher Performance Sheet and the Consolidation Sheet at CRC level for onward transmission to the BRC.
"Earth fault loop impedance" is a measure of the impedance, or electrical resistance, on the earth fault loop of an AC electrical circuit, explains Alert Electrical. The earth fault loop is a built-in safety measure within electrical systems to prevent electric shock.Know More
Coming into contact with a live current in an AC circuit could potentially be fatal. This can occur by accident when a non-live portion of an electrical system becomes live due to a system fault or short. An upstream protective device, such as a residual current device, must quickly detect such a short has occurred and intervene by shunting the current through the earth fault loop rather than the individual in contact with the live current.
Alert Electrical further explains that the greater the impedance value on the earth fault loop, the longer it takes the RCD to make the current path switch. If the earth fault loop impedance value is too great, this delays the switch to the point a person may still receive a harmful, even fatal electrical shock before the switch can take place. For that reason, the earth fault loop impedance of every AC circuit must be low enough for the RCD to function properly.Learn more in Electricity
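As a rough illustration of why the impedance must be kept low: the prospective earth-fault current is simply the supply voltage divided by the loop impedance, and the protective device needs enough current to disconnect quickly. The sketch below is illustrative only; the 230 V supply and the maximum-impedance figure are assumptions, and real installations are checked against the applicable wiring regulations and the device manufacturer's data.

```python
# Illustrative earth-fault-loop check (not a substitute for the regulations).

U0 = 230.0             # assumed nominal line-to-earth voltage, in volts
Zs_measured = 1.2      # example measured earth fault loop impedance, in ohms
Zs_max_allowed = 1.37  # assumed maximum Zs for the installed protective device

fault_current = U0 / Zs_measured
print(f"Prospective earth-fault current: {fault_current:.0f} A")

if Zs_measured <= Zs_max_allowed:
    print("Loop impedance is low enough for the device to disconnect in time.")
else:
    print("Loop impedance is too high; the device may not disconnect quickly enough.")
```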
Onto the National Stage: Congresswomen in an Age of Crisis, 1935-1954
Thirty-six women entered Congress for the first time between 1935 and 1954, a tumultuous two decades that encompassed the Great Depression, World War II, and the start of the Cold War. Women participated in America’s survival, recovery, and ascent to world power in important and unprecedented ways; they became shapers of the welfare state, workers during wartime, and members of the military. During this time the nation’s capital took on increasing importance in the everyday lives of average Americans. The Great Depression and the specter of global war transformed the role of the federal government, making it a provider and protector. Like their male counterparts, women in Congress legislated to provide economic relief to their constituents, debated the merits of government intervention to cure the economy, argued about America’s role in world affairs, and grappled with challenges and opportunities during wartime.
Subtle changes, however, slowly advanced women’s status on Capitol Hill. By and large, women elected to Congress between 1935 and 1954 had more experience as politicians or as party officials than did their predecessors. In the postwar era, they were appointed more often to influential committees, including those with jurisdiction over military affairs, the judiciary, and agriculture. Also, several women emerged as national figures and were prominently featured as spokespersons by their parties; this was a significant break from tradition. |
The new, acclaimed motion picture Selma suggests that President Lyndon Baines Johnson was not an ardent supporter of the Voting Rights Act of 1965, and that he and Dr. Martin Luther King Jr. had a fragile relationship. Nothing is further from the truth.
Both men worked very hard to create a society in which all people had the right to vote, access to medical care, decent housing and funding for education.
In my view, history will show that no American president played as critical a role in the advancement of civil rights, fair housing and education as Johnson. In fact, a number of authors have written that only the acts of President Abraham Lincoln equal what Johnson did for minorities in America.
Most knowledgeable political historians agree that the Civil Rights Act of 1964 and the Voting Rights Act, which passed one year later, became law because Johnson passionately supported them. In addition to the two landmark civil rights measures, the nation also witnessed the passage of legislation that introduced Medicaid and Medicare during the Johnson administration.
In fact, federal legislation that prohibited housing discrimination in the sale, rental or financing of housing based on race, national origin or religion was signed into law by Johnson. The federal housing legislation, which became a model for many state legislatures, became law on April 11, 1968, just seven days after King’s assassination.
In the area of education, Johnson included in his War on Poverty agenda the Elementary and Secondary Education Act of 1965. Among other things, the legislation provided financial assistance to students from low-income families. Under the law, $1 billion in funding was made available to schools that served minority students.
King and Johnson were born and raised in a segregated South. They understood the political realities, and they worked as best they could to change them. King was present at the White House when the Voting Rights Act was signed into law. He and Johnson communicated regularly. Their individual lives impacted the country and each other.
While speaking before a joint session of Congress to propose the Voting Rights Act, Johnson passionately said: “And we shall overcome.” King and some of his close aides watched the president on television. One of them, my colleague Rep. John Lewis, said King cried when he heard the president use the banner cry of the civil rights movement during his address.
The importance of Johnson’s work was celebrated in Austin last year by ordinary citizens, President Barack Obama and former Presidents Bill Clinton and George W. Bush. I will join congressional colleagues in a celebration of Johnson’s monumental achievements later this year in the nation’s Capitol.
Today, the entire nation will pause to acknowledge King’s contributions. There will be tributes, parades and speeches made to celebrate a remarkable life cut short by a sniper’s bullet.
Johnson decided in 1968 not to seek a second term and died of a heart attack in 1973. The national debate over the Vietnam War had damaged the soul of this patriot and defender of civil rights.
I believe legislation guaranteeing equal rights to minorities would not have passed Congress but for his fortitude and his belief in the equality of all people. I also believe Johnson and King had tremendous respect for one another and understood the crucial roles that each played in changing our nation. They are owed a tremendous debt of gratitude by all people.
U.S. Rep. Eddie Bernice Johnson represents the 30th Congressional District. Reach her at ebjohnson.house.gov. |
Feedback and Correction
To produce proficient speakers of English, we must offer correction in the classroom. The most obvious and oft-used form tends to be the direct teacher-to-student type, as in: "Akinori, you should say, 'Have you ever gone abroad?' instead of 'Have you ever went abroad?' Remember: go, went, gone." But this kind of correction proves the least desirable, especially if used often, for reasons explored below.
L1 refers to the first language, or native tongue, of the students and/or teacher. (L2, or second language, can be viewed as the target/foreign language.) There are positives and negatives for use of the L1 in the class. There are also positives and negatives for avoidance of the L1 in the class. It should be understood that many institutions and private language schools prefer to institute a 100% English only rule, which fails to consider the positives of using L1 in the classroom. Each teacher should assess how to best use the L1 in his classroom, particularly how to balance teacher and student talk time. However, care should be taken if the teacher allows the students to use their first language. It's very easy for the L1 to become a crutch, which can limit the students' improvement.
Imagine a pair of your intermediate students has the following conversation:
A: What are you going to do at Saturday afternoon?
B: I'm going to go to shopping.
A: I understand. Do you know what are you going to buy?
B: Not really. I maybe want to buy some new jeans.
Do you correct the conversation? If yes, what do you correct?
As teachers, we must decide whether or not to offer correction in each and every class. Appropriate correction and feedback is a staple of the ESL EFL classroom, just as drills or speaking activities are. But too much correction produces a class of students whose fluency suffers. They become overly concerned with grammatically correct responses. They produce lengthy pauses before answering even the simplest of questions, focusing too much on word order, verb tense, and the like. If the teacher swings the pendulum the other way and corrects too little, then words tumble out of the mouths of students. What comes out, though, is chock-full of problems with grammar and vocabulary. Too much and too little correction can both hinder communication.
Praise and criticism serve as invaluable tools in any classroom. Both direct students to their goals, as well as to the teacher's expectations. For example, praise tells students what was done right, which means it may be repeated. Criticism tells students what was done wrong or needs work, which means it gets additional attention in the future.
For some teachers, praise comes naturally. They know just how and when to comment to students. Yet it's a skill that can be learned and used by all. In fact, it should be learned and used by all, and is every bit as important as lesson structure and reinforcement activities. Praise can get students to put in a superior performance for even a novice teacher, while his more experienced peer may struggle to get his class to rise to his expectations. Why? Because that more experienced teacher may simply not be offering effective praise.
Homework from workbooks or grammar worksheets serves as a good resource for reviewing the contents of the day's lesson. Typically consisting of fill-in-the-blank or matching exercises, these controlled, right-or-wrong exercises try to get students to remember the lesson and the target language.
Although specific goals and objectives, a lesson plan, and preparation are all important to the success of a lesson, the classroom remains a very dynamic and changeable environment. Even the best lesson plans may get altered based on the needs of the students on any particular day. And although a lesson plan shouldn't be ignored, the teacher should also be aware that any plan isn't set in stone. What's more, he should also be on the lookout for teachable moments.
A teachable moment refers to a time when students are especially receptive to learning something. The teacher can take advantage of this moment and take a detour from the lesson plan. In so doing, students gain value from the detour, as well as better remember the information conveyed.
Tests and quizzes remain an important part of the ESL EFL classroom, despite the perceived negatives. For example, it generally proves much easier to assess grammar or listening comprehension with multiple choice or fill-in-the-blank assessment; however, these sorts of tests don't show productive competency, which is also something more challenging to objectively assess. And yet, there are positives, as tests and quizzes allow the teacher to realize if course goals are being met, if all of the students understand the course content, and if the course needs additional curriculum and/or support.
There are alternatives to tests, though, such as course projects which require students to use not only content studied, but also to independently explore a lot of other grammar, vocabulary, etc. The project might be a short story or report written by the students, or even a skit written and performed by a group. Students can easily incorporate PowerPoint, add pictures, or film and edit a performance. More work is required than sitting a test, but the end result usually feels far more rewarding. What's more, students can work to the best of their ability too. |
This brief lesson idea covers Federalism, Separation of Powers, and Checks and Balances
Grades: 6, 7, 8
Title – Our Constitution at Work
By – Bridget Murphy
Subject – Social Studies
Grade Level – Junior High
This plan is to assess the student’s understanding of three of the principles of our Constitution: Federalism, Separation of Powers, and Checks and Balances. Students should look for newspaper or magazine articles that illustrate each of these principles. For instance, an example of Federalism would be new coins that are being minted, or a meeting to set policy of a local school board. Students will create a scrapbook in which they place the articles, a short summary of each article, and what principle is illustrated. This is a good assessment of application of skills taught.
This lesson is the final stage of teaching students' understanding and usage of 'wh' question forms.
Prerequisite Knowledge: The lesson requires that the meaning of 'wh' question forms has previously been taught and that students have had practice in providing examples of each type.
1) Have your students match up the correct 'wh' word with the correct 'wh' answer. For example, who = person/animal, what = thing/doing word, where = location/direction, when = time (past/present/future), why = reason.
2) Ask students to provide different examples of each type of question form. If your students have difficulty completing this step auditorily, then provide pictures and ask them to identify which category each picture belongs to. For example, show them a picture of a person and ask which question form it belongs to.
3) Show your students several picture story cards and discuss each 'wh' form in the picture.
4) Across the top of the board, label five columns with each 'wh' word.
5) Finally, place an undisclosed picture in a mystery box, or just tape it to the board face down, and have your children ask yes/no questions to narrow down each type of 'wh'.
6) Write each of the students' answers in the respective column.
7) Circle the correct answer in each column.
8) Provide lots of practice.
9) When your students can guess the story cards without any assistance from you, you know they understand this concept.
You may have come across the term analogy whilst looking at the English language, but what is this and how does it work? The concept of an analogy might seem a little confusing at first but by looking at the meaning, we can further understand how it can be used.
In this article, we are going to be taking a look at what exactly an analogy is as well as viewing some examples of them in both a spoken context and within written work.
An analogy is a way of describing something by comparing it to something else in order to make the idea simpler. The thing being described is likely to have some similarities or features in common with whatever it is compared to. These comparisons are often used as figurative language in order to explain an idea or principle more easily. You might say that someone made an analogy between the human mind and a smart computer.
When used as a literary device, analogy will assist the writer to make a comparison between two things that may be familiar or not. It can also be used in order to create a deeper meaning and allow the reader to create a more detailed image in their mind about what is being described.
Analogies can come in various forms but are usually seen in the form of a simile, where a comparison is made using the words like or as, or as a metaphor where the comparison is made in a non literal sense.
Analogy in Conversation
You may see the use of analogy within day to day conversation as it is a common way to elaborate on an idea by using this type of comparison. We are now going to take a look at some examples of analogy which may be heard in spoken language.
- Mary had a little lamb, its fleece white like snow.
- As light as a feather.
- As stiff as a board.
- As sweet as a cookie.
- As good as gold.
- It is like trying to find a needle in a haystack.
- Your actions are as useful as moving the deckchairs on the Titanic.
- It is as useful as a chocolate teapot.
- The last few weeks have been an emotional roller-coaster.
- To explain a joke is like dissecting a mouse, you gain a better understanding, but in the process, the mouse’s life is lost.
- She is as graceful as a freezer falling down the stairs.
- Flower is to the petal as tree is to the leaf.
- Her flower like smile blossomed through the morning sun.
- He is like a diamond in the rough.
- My father is the sergeant within our home.
- Those words are like music to my ears.
- An atom's structure is just like the solar system, with the sun as the nucleus and the planets as the electrons.
- The way in which a detective solves crimes is like the way a doctor diagnoses illnesses.
- In the same way that a caterpillar emerges from its cocoon, we must emerge from our comfort zones.
Analogy in Literature
As we have mentioned earlier, analogy is a common way for writers to create a more clear image for the reader as well as a way to express a deeper meaning to the piece of writing. Let’s now take a look at some examples of times in which analogy has been used within written work.
- In the play Romeo and Juliet written by William Shakespeare, we see an example of an analogy being used in the following passage; “Her beauty is on the face of the night, like a jewel in the ear of Ethiop.”
- In the poem Do Not Go Gentle into That Good Night by Dylan Thomas, there is a good example of analogy in the line "Grave men, near death, who see with blinding sight, blind eyes could blaze like meteors and be gay."
- In the famous novel Animal farm written by George Orwell, we see many examples of analogy throughout, one of these can be seen in the following passage; “The animals looked from man to pig and from pig to man, and once again yet it was not possible to tell which was which.”
- Once again from Romeo and Juliet, we see another example of analogy in the line “The candles of night are burned out and day tiptoes on the mountains.”
- In the poem Song of Myself by Walt Whitman, we see an example of analogy in the line "When the shark's fin cuts like a black chip in the water."
- In the movie Forrest Gump, we see a now very famous analogy: "Life is like a box of chocolates, you do not know what you might get."
- In a note from Henry Kissinger to the president, we see an example of analogy in the sentence “Withdrawing the American troops will be like salted peanuts to the general public, the more they do it, the more they will want.”
- We see another example of analogy in the play Romeo and Juliet by William Shakespeare in which he writes. “what is in a name? A rose by any other name would still smell so sweet.“
- John Donne writes a poem entitled The Flea in which he uses the title creature as a comparison for the marriage bed. This is seen in the following extract: "This flea is you and I and this our wedding bed."
We have looked at ways in which analogy can be used within both written and spoken English. Through doing this we have discovered that this concept can be used to further explain an idea by means of comparison to something which has similarities. Analogies can be seen in the form of a simile or a metaphor.
Analogy can be used in spoken language to develop the listeners understanding of what is being said as well as being used as a literary device which aids the reader in creating a more detailed picture within their mind. |
Slavery played an important role in the development of the American colonies. It was introduced to the colonies in 1619, and spanned until the Emancipation Proclamation in 1863. The trading of slaves in America in the seventeenth century was a large industry. Slaves were captured from their homes in Africa, shipped to America under extremely poor conditions, and then sold to the highest bidder, put to work, and forced to live with the new conditions of America.
There was no mercy for the slaves and their families as they were captured from their homes and forced onto slave ships. Most of the Africans who were captured lived in small villages in West Africa. A typical village takeover would occur early in the morning. An enemy tribe would raid the village, and then burn the huts to the ground.
Most of the people who were taken by surprise were killed or captured; few escaped. The captured Africans were now on their way to the slave ships. “Bound together two by two with heavy wooden yokes fastened around their necks, a long line of black men and women plodded down a well-worn path through the dense forest. Most of the men were burdened with huge elephants’ tusks.
Others, and many of the women too, bore baskets or bales of food. Little boys and girls trudged along beside their parents, eyes wide in fear and wonder” (McCague, 14). After they were marched often hundreds of miles, it was time for them to be shipped off to sea, so that they could be sold as cheap labor to help harvest the new world. But before they were shipped off, they had to pass through a slave-trading station.
The slave trade, which was first controlled by Portugal, was now controlled by other European nations. In the late 1600s, Spain, Holland, England, France and Denmark were all sending ships to West Africa. The slave trade was becoming big business (Goodman, 7). The selection of the slaves by the traders was a painstaking process. Ships from England would pull up on the coast of Africa, and the captains would set off towards the coast on small ships.
“If the slave trader was a black chief, there always had to be a certain amount of palaver, or talk, before getting down to business. As a rule, the chief would expect some presents, or dash” (Stampp, 26). Once the palaver was over, the slaves had to be inspected. The captain of the ship usually had a doctor who would check the condition of the slaves. They would carefully examine the slaves, looking in their mouths, poking at their bodies, and making them jump around.
This was done so that the doctor could see how physically fit the slaves were. If the slaves were not of the doctor’s standards, they were either killed or kept to see if another ship would take them. In the 1600s, the journey across the Atlantic for the African slaves was a horrible one. It was extremely disease-ridden, and many slaves did not survive the journey. The people were simply thrown into the bottom of the ship and had to survive the best they could.
Often, many slaves had to wait in the bottom of the ship while they were still docked at the harbor so that the traders could gather up more and more slaves. There were usually 220 to 250 slaves in each ship. Then they had to stay down there for the long trip across the Atlantic Ocean to the New World. “Women and children were allowed to roam at large, but the men were attached by leg irons to chains that ran along the ship’s bulwarks.
After a breakfast of rice or cornmeal or yams, with perhaps a scrap of meat thrown in, and a little water, there came the ceremony of “dancing the slaves” -a compulsory form of exercise designed, it was said, for the captive’s physical and mental well being”(Howard, 23). Even though there was ventilation, the air in the crowded hold area quickly grew foul and stinking. Fierce tropical heat also added to the misery of the slaves. Seasickness was also a problem.
Conditions on the ships improved as the slave trade continued, but thousands of Africans still lost their lives on the journey to the new world. When slaves would try to rebel on the ship, they were immediately killed and thrown overboard. Some slaves preferred death over slavery. Watching their chance while on deck, they often jumped overboard to drown themselves (Davis, 67). Africans were brought to America to work.
“They worked the cotton plantations of Mississippi and in the tobacco fields of Virginia, in Alabama’s rich black belt, in Louisiana’s sugar parishes, and in the disease-ridden rice swamps of Georgia and South Carolina”(Buckmaster, 153). Most slaves were worked extremely hard because they had the job of cultivating the crops on the plantations. It began before daybreak and lasted until dark, five and sometimes six days a week. “An Alabama man said ‘Sunup to sundown was for field Negroes.’
Men and women alike were roused at four or five a.m., generally by the blowing of a horn or the ringing of a bell” (Goodman, 18). By daybreak, the slaves were already working under the control of Negro drivers and white overseers. They plowed, hoed, picked, and performed the labors appropriate to the season of whatever they were harvesting.
For example, during the harvest season on a sugar plantation, slaves were worked sixteen to eighteen hours a day, seven days a week. That is longer hours than convicts were permitted to work in several of the Southern states (DuBois, 35). This was not only limited to sugar. Cotton and tobacco workers had the same harsh hours in the hot southern sun.
Even children were put to work on the plantations. “By the age of six or seven, children were ready to do odd jobs around the plantation-picking up trash in the yard, raking leaves, tending a garden patch, minding babies, carrying water to the fields. By the age of ten, they were likely to be in the fields themselves, classed as “quarter hands” (McCague, 35).
Often there were health problems among the slaves in early America. “The combination of hard, sometimes exhausting toil and inferior diet, scanty clothing and unsanitary housing-led, predictably, to health problems” (Goodman, 31). This caused a problem for slave owners because they wanted the most efficiency out of their slaves as possible.
In some places, doctors were called in to treat blacks as well as whites. The slave trade played an important role in the growth of the American colonies. Without the trading of slaves in the seventeenth century, American plantations would not have prospered into the export empire that they were.
Buckmaster, Henrietta. Let My People Go. Boston: Beacon Press, 1941.
Davis, David Brion. Slavery and Human Progress. New York: Oxford University Press, 1984.
DuBois, William Edward Burghardt. The Suppression of the African Slave-Trade to the United States of America. New York: Schocken Books, 1969.
Goodman, Walter. Black Bondage: the Life of Slaves in the South. New York: Farrar, Straus & Giroux, 1969.
Howard, Richard. Black Cargo. New York: G. P. Putnam’s Sons, 1972.
McCague, James. The Long Bondage 1441-1815. Illinois: Garrard Publishing Company, 1972.
Stampp, Kenneth M. The Peculiar Institution. New York: Borzoi Books, 1982.
The notion of social construction helps to define and explain social relations, realities, and the importance of knowledge sharing. Following Beaumie Kim (2001): “Social constructivism emphasizes the importance of culture and context in understanding what occurs in society and constructing knowledge based on this understanding.” The four tenets of social constructivism are knowledge, reality, learning, and inter-subjectivity of social meanings. They describe the origin of social realities very simply as a process by which individuals who repeatedly confront a task or situation relevant to their lives develop habitual ways of dealing with it. People first recognize the recurrent nature of a situation; then, they develop roles or functions for cooperating individuals to perform in connection with the task involved. (Searle, 1997). Through socialization, social constructions are internalized, and as experience is filtered and understood through meaningful symbols, the essence of individual identity is formed. Identity is built upon the foundation of family identity. “Intersubjectivity is a shared understanding among individuals whose interaction is based on common interests and assumptions” (Kim 2001). The construction is the same as the construction of all identities, for instance, young children learn to use verbal labels for themselves and their behavior, as well as for others and their behavior. These labels then come to have the same meaning for the learners as they do for the “old hands.” Social constructions thus embodied in the language shared within a group come to be embedded in the foundation of individual identities by means of language. Individuals observe and judge their own behavior and the behavior of others. In making these judgments, they use ‘the scripts’ provided by society. The meanings of behaviors and the judgments that individuals attach to them are part of these scripts. Following Warmoth (2000): “knowledge is not what individuals believe, but rather what social groups, or knowledge communities, believe” (Kukla, 2001, p. 45). Reality means that a child is born into a social world that has the experienced characteristic of being the sole reality
Distributive justice determines the process of resource allocation in society in accordance with the rules of justice and rights of individuals. “Distributive justice” in its modern sense calls on the state to guarantee that property is distributed throughout society so that everyone is supplied with a certain level of material means” (Fleischacker, 2004). The four features of distributive justice are strict egalitarianism, justice as a virtue, the difference principle, and equality of resources. Identifying the kinds of goods available for distribution and the criteria which are appropriate to each means interpreting the culture of particular society questions of justice always arise within a bounded political community. Each community creates its own social goods, their significance depends upon the way they are conceived by the members of that particular society. The roster of such goods will differ from place to place (Searle, 1997). But this suggests simply that there is a widely held distributive principle that holds that medicine ought to be allocated according to need. And although this principle may be suggested by the nature of medicine itself — the fact that medical need is a necessary condition for being able to benefit from this good — it is not entailed by it. “Distributive justice” in its modern sense, calls on the state to guarantee that property is distributed throughout society so that everyone is supplied with a certain level of material means” (Fleischacker, 2004, p. 7). Distributive justice arises within a bounded community in which each person enjoys equal political and civil rights.
Reproductive success can be explained as “a passing of genes to the next generation” (Kukla, 2000, p. 48). The four main features of the concept are selfishness, competition, behavioral differences, and evolution. The growth of reproductive technology is closely tied to a society’s central tenets. The United States can be characterized by its cultural diversity, but it can also be characterized by cultural ideologies originating from its early roots, “Selection increases the frequency of adaptive traits, traits that give their bearers an advantage in the competition for productive success relative to other individuals” (Sociobiology, 2005). In the selection process, competition means “performing the pertinent tasks better than one’s competitors (Sociobiology, 2005). Individuals in a single population can differ from each other from a variety of causes. Reproductive success produces a population of defectors because it favors the higher-scoring strategy in every contest. The reproductive success acting on variation in a character (parameter of a design or strategy) can often be expected to fix the mean value near the functional optimum and to minimize the variation about that mean. It is important to note that reproductive success “is related in most cultures to differential wealth, social status and power within that culture” (Sociobiology, 2005).
In reproductive success, reproductive health becomes a fairly well-accepted concept at government policymaking levels; thus, understanding of the concept is believed to be almost nonexistent at the field-worker and service-provider levels, where reproductive health is still equated with family planning. It is believed, however, that the lack of comprehension about reproductive health care would not hamper implementation of the government’s reproductive health agenda since the essential services package proposed under the Health and Population Sector Programme (HPSP) would be implemented in such a way that 40 percent of its components included reproductive health care elements (Kukla, 2000).
The relations between distributive justice and social construction can be explained by a close link between social reality and social justice. There is an obvious sense in which social reality is constructed by people: people make the social world through our conceptualizations and interactions, attitudes, and acceptances. Collective acceptance is the key to the construction (including equal and fair distribution of resources) of many social entities and properties. Social institutions and arrangements are constructed and maintained, but they also change, sometimes in a piecemeal fashion, sometimes abruptly. In this case, the role of distributive justice is to establish resource allocation (Searle, 1997). As the most important, ‘social justice takes place within a changing social structure, within the evolving institutions of economics. The models and rules of justice are shaped by the institutions of economics. They are so shaped regardless of whether and how well they represent economic reality. The concept of knowledge has to include that of truth in order for the difference to be visible. It is only with this traditional and richer notion of knowledge that we can entertain the idea that different constellations of belief-shaping social conditions and distributive justice have different likelihoods of generating true beliefs (Distributive Justice, 2007). This is a “realist version” of the idea that distributive justice is socially constructed. It suggests that the institutions of economics have to be so designed as to maximize the likelihood that the economic models published in journals and textbooks help us attain maximum relevance in our beliefs about the way the economy works.
The relations between reproductive success and social construction are based on the idea of social interactions and functions. The idea is that it is an important feature of social reality that people seek to make sense of their social lives: people give accounts of the reasons and worth of their social behavior. In this sense, reproductive success is pre-interpreted by social actors themselves (Searle, 1997). Reproductive success becomes a portion of reality as well as the truth about that reality. Rather, it is by way of such pursuits that social construction takes place, not only of knowledge but of reality as well. Realists may resist this reasoning by arguing that while knowledge claims are socially constructed, and it is an empirical question to determine the precise ways in which they are constructed, the truth of those claims and the reality that those claims are about do not coincide (Searle, 1997; Sociobiology, 2005).
The relationships between reproductive success and distributive justice are based on a right of a citizen to access resources and the competitive nature of reproduction. For instance, in reproductive rights, as in other aspects of social policy, feminists have been active participants in the policy processes. Following “duties of parents to children, of beneficiaries to benefactors, of friends and neighbors to one another, and of everyone to people “of merit.” (Fleischacker, 2004, p. 27). In addition to their social movement roles of outside interveners and trenchant public critics of the inquiry processes and recommendations, feminists have been insiders in non-government organizations with interests in procreation, members of government departments, elected officials, and members of committees of inquiry. In some cases, outside critics became members of inquiries, being formally required to see the issues from another perspective.
- Searle, J. R. (1997). The Construction of Social Reality. New York: Free Press.
- Distributive Justice (2007). Web.
- Fleischacker, S. (2004). A Short History of Distributive Justice. Web.
- Kim, B. (2001). Social Constructivism.
- Kukla, A. (2001). Social Constructivism and the Philosophy of Science. Routledge.
- Sociobiology. (2005). Web. |
This Teacher’s Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the experiences, achievements, and perspectives of Asian Americans and Pacific Islanders across U.S. history.
Archival visits, whether in person or online, are great additions to any curriculum in the humanities. Primary sources can be the cornerstone of lessons or activities involving any aspect of history, ancient or modern. This Teachers Guide is designed to help educators plan, execute, and follow up on an encounter with sources housed in a variety of institutions, from libraries and museums to historical societies and state archives to make learning come to life and teach students the value of preservation and conservation in the humanities.
The National Endowment for the Humanities has compiled a collection of digital resources for K-12 and higher education instructors who teach in an online setting. The resources included in this Teacher's Guide range from videos and podcasts to digitized primary sources and interactive activities and games that have received funding from the NEH, as well as resources for online instruction.
This Teacher's Guide compiles EDSITEment resources that support the NEH's "A More Perfect Union" initiative, which celebrates the 250th anniversary of the founding of the United States. Topics include literature, history, civics, art, and culture.
Our Teacher's Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the achievements, perspectives, and experiences of African Americans across U.S. history.
This Teacher's Guide will introduce you to the cultures and explore the histories of some groups within the over 5 million people who identify as American Indian in the United States, with resources designed for integration across humanities curricula and classrooms throughout the school year.
Since 1988, the U.S. Government has set aside the period from September 15 to October 15 as National Hispanic Heritage Month to honor the many contributions Hispanic Americans have made and continue to make to the United States of America. Our Teacher's Guide brings together resources created during NEH Summer Seminars and Institutes, lesson plans for K-12 classrooms, and think pieces on events and experiences across Hispanic history and heritage.
EDSITEment brings online humanities resources directly to the classroom through exemplary lesson plans and student activities. EDSITEment develops AP level lessons based on primary source documents that cover the most frequently taught topics and themes in American history. Many of these lessons were developed by teachers and scholars associated with the City University of New York and Ashland University. |
Recognition of the importance of statistical literacy has resulted in the incorporation of many graphical and numerical statistical techniques into the primary and secondary mathematics curricula. However, statistical literacy requires more than the ability to construct graphs and compute numerical summaries of data. Many important concepts of statistics, such as the role that the method of data collection plays in determining the type of analysis that is appropriate and the type and scope of conclusions that can be drawn and the role that sampling variability plays in drawing conclusions from data, are not mathematical in nature. As a consequence, they are not easily integrated into mathematics courses. When these topics are neglected, students are not well-equipped to evaluate statistical studies described in the media. In particular, students do not have a foundation that allows them to decide if conclusions drawn are reasonable or to think about to what group the conclusions would apply. To address this concern, materials were designed to help students (and teachers) develop the conceptual understanding needed for statistical literacy. By focusing on study design (observational studies (with and without random selection), surveys, and experiments), these materials provide an overall framework that enables students to see how the procedural aspects of statistics that they have encountered in mathematics courses are just a part of the overall data analysis process. This paper will describe the approach taken in these materials, why this particular approach was chosen, and how these materials are being used both in the secondary classroom and also as part of professional development workshops for teachers.
Keywords: Statistical literacy; Statistics education in secondary school
Biography: Roxy Peck is currently Professor Emerita of Statistics at Cal Poly. While at Cal Poly, she served as Associate Dean of the College of Science and Mathematics and Chair of the Statistics Department. Internationally known in the area of statistics education, Roxy was made a Fellow of the American Statistical Association in 1998 and in 2003 she received the American Statistical Association's Founders Award in recognition of her contributions to K-12 and undergraduate statistics education. She is the author of several statistics text books and editor of Statistics: A Guide to the Unknown, a collection of expository papers that showcase applications of statistical methods. |
Civics for All: K-2 Part 1
This curriculum resource is the first volume of the K-2 grade band of Civics for All. This resource is a companion to the New York City Department of Education (NYCDOE) curriculum, Passport to Social Studies. The included resources were developed by a team of NYCDOE staff, New York City teachers, and various cultural partners. The development of these resources was informed by and integrated with the following documents and perspectives: New York State K–12 Social Studies Framework, the New York State Next Generation English Language Arts & Literacy in History/Social Studies Learning Standards, and the New York City K–12 Social Studies Scope & Sequence. The Civics for All curriculum is designed to be modified and adapted to fit the needs of all New York City classrooms. The NYCDOE will continue to curate lesson plans and resources that will be made available to teachers and schools.
This guide includes multiple components:
• Guidance for teachers and school administrators on quality civics instruction
• Suggestions on implementation models for integrating Civics for All into existing programming
• Lesson plans divided thematically to reflect grade-band level-appropriate learning
• Project plans designed to enhance student understanding through project-based learning
• A comprehensive step-by-step guide to engaging students in community-based, real-life action projects
Please note that the Civics for All materials for each grade band include two volumes, Civics for All: K-2 Part 2 can be found here.
Access a version of this resource compatible with assistive technology and screen-readers here. |
Influenza, commonly known as “the flu”, is an infectious disease caused by the influenza virus. Symptoms can be mild to severe. The most common symptoms include: a high fever, runny nose, sore throat, muscle pains, headache, coughing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children there may be nausea and vomiting but these are not common in adults. Nausea and vomiting occur more commonly in the unrelated infection gastroenteritis, which is sometimes inaccurately referred to as “stomach flu” or “24-hour flu”. Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.
Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are sick. The infection may be confirmed by testing the throat, sputum, or nose for the virus.
Influenza spreads around the world in a yearly outbreak, resulting in about three to five million cases of severe illness and about 250,000 to 500,000 deaths. In the Northern and Southern parts of the world outbreaks occur mainly in winter while in areas around the equator outbreaks may occur at any time of the year. Death occurs mostly in the young, the old and those with other health problems. Larger outbreaks known as pandemics are less frequent. In the 20th century three influenza pandemics occurred: Spanish influenza in 1918, Asian influenza in 1958, and Hong Kong influenza in 1968, each resulting in more than a million deaths. The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June of 2009. Influenza may also affect other animals, including pigs, horses and birds.
Frequent hand washing reduces the risk of infection because the virus is inactivated by soap. Wearing a surgical mask is also useful. Yearly vaccination against influenza is recommended by the World Health Organization for those at high risk. The vaccine is usually effective against three or four types of influenza. It is usually well tolerated. A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly. Antiviral drugs such as the neuraminidase inhibitors, oseltamivir among others, have been used to treat influenza. Their benefits in those who are otherwise healthy do not appear to be greater than their risks. No benefit has been found in those with other health problems. |
Animal Behavior Courses
Take free online animal behavior courses to build your skills and advance your career. Learn animal behavior and other in-demand subjects with courses from top universities and institutions around the world on edX.
learn animal behavior
What is Animal Behavior?
Animal behavior, also called ethology, is the scientific study of animals in their natural habitat. This includes the study of their social interaction, methods of communication, responses to threats, emotions, mating rituals and more. Research into animal behavior can inform the study of human behavior and can aid in the training of service animals and pets. It also aids wildlife managers, park rangers and animal protection groups in conservation efforts.
Online Animal Behavior Courses and Programs
EdX offers free online courses in animal behavior from major universities. Go beyond the nature documentaries and learn about the scientific approach to the study of different animal species including how hypotheses, grounded in evolutionary theory, are tested. Start with Introduction to Animal Behavior, a 6-week course from Wageningen University that explores how animals learn from and communicate with each other, hunt and survive. Learn about their social and mating rituals and get a sense of the complexity of their behavior.
Explore a Career in the Field of Animal Behavior
Animal behaviorists can work in many related fields from animal psychology to dog behavior training. Work locally at a zoo, animal shelter or dog training classes or travel around the world with animal research and conservation groups. Behavioral scientists can have a significant impact on conservation and the protection of endangered species. Case studies and research findings can inform policy decisions and help direct aid to organizations around the world that help to protect animal populations. Capture, analyze and present data related to endangered populations or habitats and teach wildlife managers the benefits of animal behavior study and research to help inform their actions.
Enroll in animal behavior courses to learn more about this fascinating field of life science. |
Corn is planted on all continents and is certainly productive in many temperate-zone areas of the world. But one important contributor to its success as a major supplier of carbohydrates to people and the animals they feed evolved in its original tropical past. Most plants have a photosynthesis system with an inefficiency that limits productivity. This system, labeled C3 photosynthesis, reaches its maximum ability to use light at an intensity of about 3000 foot candles, whereas unclouded sunlight delivers about 10000 foot candles. Corn, with its C4 photosynthesis, continues to produce carbohydrates in direct relation to light intensity, reaching maximum photosynthesis in bright sunlight.
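As a rough illustration of that difference, here is a toy sketch of the two light-response shapes described above; the curve shapes and numbers are made up for illustration and are not measured data.

```python
# Toy comparison of C3 vs. C4 light-response curves.
# Light is in foot-candles; "assimilation" is in arbitrary units.

def c3_response(light, saturation=3000.0, max_rate=1.0):
    # Rises with light up to ~3,000 foot-candles, then stays flat (saturated).
    return max_rate * min(light, saturation) / saturation

def c4_response(light, full_sun=10000.0, max_rate=1.6):
    # Keeps rising in direct relation to light all the way to full sunlight.
    return max_rate * min(light, full_sun) / full_sun

for light in (1000, 3000, 6000, 10000):
    print(f"{light:>6} fc   C3: {c3_response(light):.2f}   C4: {c4_response(light):.2f}")
```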
Carbon dioxide enters plants through holes in leaves called stomata. These structures also allow oxygen to escape from leaves, to the benefit of all of us. Water vapor also passes through the same stomata. Stomata open and close. At night they close, avoiding unnecessary loss of water when photosynthesis cannot occur. But when plant tissue is stressed from lack of water, the stomata also close, limiting water loss but also interfering with the uptake of carbon dioxide for photosynthesis. C3 photosynthesis does not make carbohydrates out of all the CO2 it absorbs, using some of it in other molecules. This is no problem when the environment provides plenty of moisture, is generally cool and has long summer days, but some plant species that evolved under hot, dry conditions developed systems to overcome that limitation.
Teosinte, the species of origin for corn in Central America, has a C4 photosynthesis system. Plants with this character have additional structures in their leaves surrounding cells that perform photosynthesis. These cells function to reduce the loss of CO2 by causing these molecules to be recycled into more carbohydrates. The combination of extra enzymes and structures comes at some energy cost but the net gain is both more net carbohydrate and better utilization of CO2, even if stomata are closed.
Fortunately, corn that was moved out of its original dry, hot environment kept that C4 photosynthesis system. Along with that came the C4 photosynthesis advantages and its superior production of carbohydrates. Sorghum and sugar cane also are C4 plants, but wheat, rice and soybeans are C3 and will not be able to match corn in carbohydrates per acre because of this trait. Although only about 3% of all plant species are C4, it occurs in a few plants in many plant families, suggesting that it can evolve independently. This would seem to raise hope for the efforts of the International Rice Research Institute and others who are trying to convert that C3 species to a C4 one.
About Corn Journal
The purpose of this blog is to share perspectives of the biology of corn, its seed and diseases in a mix of technical and not so technical terms with all who are interested in this major crop. With more technical references to any of the topics easily available on the web with a search of key words, the blog will rarely cite references but will attempt to be accurate. Comments are welcome but will be screened before publishing. Comments and questions directed to the author by emails are encouraged. |
The Greek hero Achilles was a great warrior, much feared for his fighting skills and daring courage. But beneath his unbeatable facade, he had a vulnerability that led to his downfall—his heel.
A vulnerability, in the field of cybersecurity, is a weakness in your computer systems that an attacker can exploit. Computers, digital devices, and software have many known and undiscovered flaws due to their design. For example, many popular devices have insufficient user authentication, and some software may not encrypt their data. These weaknesses render them vulnerable to attacks.
Hackers are always searching for vulnerabilities to get into a target computer. Once in, they can steal sensitive data or sabotage the organization’s operations.
Read More about a “Vulnerability”
How Do Cyber Attackers Take Advantage of Vulnerabilities?
Any software, application, or system has vulnerabilities or security flaws. That is why companies hire penetration testers in the first place. These experts test all software and hardware in a corporate network for exploitable weaknesses so these can be patched or fixed before they are deployed or connected to other systems. Why?
Attackers can create exploits or malicious programs that take advantage of vulnerabilities to gain access to a target network. They typically use so-called “exploit kits.” An exploit kit is like a set of lock picks that can open any insufficiently secured door.
It has become common, in fact, for hackers to use exploit kits to infect target users’ computers with malware. An example is the WannaCry ransomware, which several attackers have dropped onto systems by taking advantage of a Windows vulnerability.
Known vulnerabilities (those that have available patches) are easier to address than unknown or zero-day vulnerabilities. To reduce risks brought on by known vulnerabilities, all users need to do is download and install patches. That’s not the case for zero-days, though, as you’ll see in the next section.
What Is a Zero-Day Vulnerability?
A zero-day vulnerability typically remains unknown and undisclosed for some time. Hence, it doesn’t usually have a patch or fix. That gives cyber attackers time and opportunity to exploit the weakness for campaigns. The longer it remains unpatched, the more it exposes an affected system or application and, consequently, the network to various online threats that could lead to a breach.
What Are the Different Types of Vulnerabilities?
Knowing the enemy is winning half the battle. As such, being aware of the common types of vulnerabilities can help you protect your network.
- Weak passwords: Using your first and last name as passwords and reusing them for multiple accounts is a common practice, making hacking easy. An insecure password is one of the types of vulnerabilities that you can easily remedy.
- Lack of data encryption: Encryption makes data indecipherable so only the intended recipient can read it. Even when hackers intercept it, they won’t be able to understand it unless they have the decryption key. If an email, application, or software doesn’t use encryption, threat actors can easily read the data in it (see the encryption sketch after this list).
- URL redirection to malicious sites: Allowing redirects can help website owners, but threat actors can exploit this by redirecting users to malicious sites.
- Software bugs: Software products almost always have bugs or errors. But you have to make sure that the software you use doesn’t have security bugs that can pose risks. Also, ensure that you install patches as soon as they become available.
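To make the encryption point above concrete, here is a minimal sketch using the Python `cryptography` package's Fernet recipe. The package choice and the sample data are assumptions for illustration; any well-reviewed encryption library would serve the same purpose.

```python
# Minimal symmetric-encryption sketch with the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this secret; whoever holds it can read the data
cipher = Fernet(key)

token = cipher.encrypt(b"example sensitive record")   # unreadable without the key
plain = cipher.decrypt(token)                         # only possible with the key

print(token[:40])   # looks like random bytes to an interceptor
print(plain)        # b'example sensitive record'
```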
How Can You Defend against Vulnerability Exploitation?
Individuals and companies alike are not immune to vulnerability exploitation. But there are ways to reduce related risks, such as:
- Install security solutions that can detect and block exploits.
- Regularly download and install security patches or updates as soon as vendors release them.
- Use intrusion detection/prevention systems (IDSs/IPSs) that specifically monitor for and block vulnerability exploits.
- Scan your network for the presence of security bugs or weaknesses (a minimal scanning sketch follows this list). Hire a penetration tester if you have the resources to do so.
- In the absence of patches (for zero-days), rely on indicators of compromise (IoCs) stated in security alerts and news. Avoid clicking Uniform Resource Locators (URLs) or links embedded in and opening attachments to emails from unknown senders. They can be vulnerability exploits in disguise.
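As a hedged illustration of the scanning idea mentioned in the list above, the sketch below checks a handful of common TCP ports on a host. It is a toy, not a substitute for a real vulnerability scanner, and should only be pointed at machines you own or are authorized to test.

```python
# Tiny TCP port check using only the Python standard library.
import socket

COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0   # 0 means the connection succeeded

if __name__ == "__main__":
    host = "127.0.0.1"   # replace with a host you are authorized to scan
    for port, name in COMMON_PORTS.items():
        state = "open" if port_is_open(host, port) else "closed/filtered"
        print(f"{host}:{port:<5} ({name:<6}) -> {state}")
```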
Cybercriminals are known for taking advantage of any weakness to gain a foothold in your network. And vulnerability exploitation is one of the most effective ways they can use. That’s why it’s always best to practice good security hygiene.
What Is the Difference between a Vulnerability and a Risk?
A vulnerability within an organization’s network refers to any weakness that can put it in danger despite its security efforts. It can come in the form of a firewall failure that enables hackers to access the network.
On the other hand, a risk refers to the possibility that these vulnerabilities will be exploited by threat actors. Vulnerabilities wouldn’t pose any risk without the presence of threats. However, since threats abound, measuring and monitoring risks is crucial.
What Is Vulnerability Disclosure, and Is it Important?
Vulnerability disclosure is the process of reporting flaws in software or hardware that can weaken their security. The vulnerabilities could be discovered by security researchers, bug-bounty hunters, in-house developers, and other parties.
Whoever found the vulnerability would report it to the company responsible for the software or hardware. They would then wait for the vendor to fix the security flaw before releasing the vulnerability disclosure to the public. This generally takes between 60 and 120 days.
Vulnerability disclosures are important, as they ensure vendor accountability and keep products secure. |
Effects of global warming on El Niño in the 21st Century
El Niño remains the largest frequently occurring climate phenomenon, producing droughts, floods, wildfires, dust and snow storms, fish kills, and even elevated risks of civil conflict.
El Niños occur every two to seven years, with very strong El Niños occurring about every 15 years. How their frequency (the time between two events) and strength will change because of global warming remains a grand challenge for climate models.
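El Niño conditions are commonly tracked with the Niño 3.4 index, the sea surface temperature (SST) anomaly averaged over the central-eastern tropical Pacific (roughly 5°S to 5°N, 170°W to 120°W), as noted in the points below. A minimal NumPy sketch of that calculation is shown here; the placeholder arrays stand in for a real gridded SST dataset and its monthly climatology.

```python
# Nino 3.4 index sketch: average SST anomaly over 5S-5N, 170W-120W.
# The arrays below are placeholders; in practice they would come from a gridded dataset.
import numpy as np

lats = np.arange(-89.5, 90.0, 1.0)    # 1-degree grid latitudes
lons = np.arange(0.5, 360.0, 1.0)     # 1-degree grid longitudes (0-360 convention)

sst = 300.0 + np.random.randn(lats.size, lons.size)      # one month of SST (kelvin)
climatology = np.full((lats.size, lons.size), 300.0)     # long-term mean for that month

anomaly = sst - climatology            # departure from the seasonal mean

lat_mask = (lats >= -5.0) & (lats <= 5.0)
lon_mask = (lons >= 190.0) & (lons <= 240.0)   # 170W-120W expressed in 0-360 longitudes
region = anomaly[np.ix_(lat_mask, lon_mask)]

nino34 = region.mean()   # sustained values above about +0.5 K indicate El Nino conditions
print(f"Nino 3.4 index: {nino34:+.2f} K")
```

Area weighting by the cosine of latitude is omitted in the sketch; near the equator it changes little.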
- A study, published in the journal Nature, has thrown some light on the effects of global warming on El Niño in the 21st Century.
- El Niño is measured by an index that averages sea surface temperature anomalies over the central-eastern tropical Pacific.
- The theatre of action for El Niño is the tropical Pacific Ocean but its global reach costs the global community tens of billion dollars each time.
- This has been an issue in finding a consensus among models as far as the El Niño response to global warming is concerned.
- The results should serve as a warning to the countries on all continents that suffer from these extreme weather events during strong El Niño events such as the ones during 1982-83, 1997-98 and 2015-16.
- The mean state of the tropical Pacific has cold temperatures in the east around the Galápagos Islands because the trade winds blowing from the east to west diverge waters away from the equator and push them westward.
- The ocean warms toward the west as the winds move the waters and pile them up there. Warm waters favour atmospheric convection and produce over 5 meters of rain per year in the region from just west of the Dateline to New Guinea.
- El Niño is a perturbation of this background state of cold east – warm west ocean with air rising in the west and sinking in the east.
- In this context, it is imperative that models be held to very stringent standards on their performance of El Niño behaviour during historic periods, especially the 20th century, as a test of their reliability for future projections. This would also be necessary for projecting other events such as droughts and floods.
- For example, droughts over India are closely tied to El Niño. Any projection of how droughts will respond to global warming will therefore depend on how well models depict historic El Niños and monsoons, how reliably they can project the El Niño response to global warming, and how well they reproduce the floods and droughts of the 20th century. |
The common wisdom is that the invention of the steam engine and the advent of the coal-fueled industrial age marked the beginning of human influence on global climate.
But gathering physical evidence, backed by powerful simulations on the world's most advanced computer climate models, is reshaping that view and lending strong support to the radical idea that human-induced climate change began not 200 years ago, but thousands of years ago with the onset of large-scale agriculture in Asia and extensive deforestation in Europe.
What's more, according to the same computer simulations, the cumulative effect of thousands of years of human influence on climate is preventing the world from entering a new glacial age, altering a clockwork rhythm of periodic cooling of the planet that extends back more than a million years.
"This challenges the paradigm that things began changing with the Industrial Revolution," says Stephen Vavrus, a climatologist at the University of Wisconsin-Madison's Center for Climatic Research and the Nelson Institute for Environmental Studies. "If you think about even a small rate of increase over a long period of time, it becomes important."
Addressing scientists here today (Dec. 17) at a meeting of the American Geophysical Union, Vavrus and colleagues John Kutzbach and Gwenaëlle Philippon provided detailed evidence in support of a controversial idea first put forward by climatologist William F. Ruddiman of the University of Virginia. That idea, debated for the past several years by climate scientists, holds that the introduction of large-scale rice agriculture in Asia, coupled with extensive deforestation in Europe began to alter world climate by pumping significant amounts of greenhouse gases — methane from terraced rice paddies and carbon dioxide from burning forests — into the atmosphere. In turn, a warmer atmosphere heated the oceans making them much less efficient storehouses of carbon dioxide and reinforcing global warming.
That one-two punch, say Kutzbach and Vavrus, was enough to set human-induced climate change in motion.
"No one disputes the large rate of increase in greenhouse gases with the Industrial Revolution," Kutzbach notes. "The large-scale burning of coal for industry has swamped everything else" in the record.
But looking farther back in time, using climatic archives such as 850,000-year-old ice core records from Antarctica, scientists are teasing out evidence of past greenhouse gases in the form of fossil air trapped in the ice. That ancient air, say Vavrus and Kutzbach, contains the unmistakable signature of increased levels of atmospheric methane and carbon dioxide beginning thousands of years before the industrial age.
"Between 5,000 and 8,000 years ago, both methane and carbon dioxide started an upward trend, unlike during previous interglacial periods," explains Kutzbach. Indeed, Ruddiman has shown that during the latter stages of six previous interglacials, greenhouse gases trended downward, not upward. Thus, the accumulation of greenhouse gases over the past few thousands of years, the Wisconsin-Virginia team argue, is very likely forestalling the onset of a new glacial cycle, such as have occurred at regular 100,000-year intervals during the last million years. Each glacial period has been paced by regular and predictable changes in the orbit of the Earth known as Milankovitch cycles, a mechanism thought to kick start glacial cycles.
"We're at a very favorable state right now for increased glaciation," says Kutzbach. "Nature is favoring it at this time in orbital cycles, and if humans weren't in the picture it would probably be happening today."
Deforestation of Europe and large scale rice production saved us!
it's interesting that we ought to be going into a glacial cycle if these guys are right and also fits with past studies that we were headed that way too. |
First of all, what’s a virtual operating system (or virtual machine)? Just in case: an operating system is the software that makes it possible to use your computer without knowledge of how the hardware works. This could be Windows (XP, 2000, Vista or Windows 7), Linux (Ubuntu, LinuxMint, Debian, OpenSuse, ……), BSD or a Mac (OS X). The operating system is a logical unit that hides what happens in the background when you open a file, save something or start a program. They are great!
Now that you know what an operating system is, what’s a virtual one? It’s a technique where multiple operating systems run on the same computer at the same moment! Where do they save their files? Each virtual machine has one big file that represents its hard drive. This technique has been in use since the 1960s, but most people at home still do not use it.
Why would you want to use virtual machines? There are a lot of reasons. A standalone operating system typically uses only about 15% of a machine’s power; the other 85% sits idle. Especially for servers this is wasteful, because they cost a lot! With virtualization it’s possible to combine multiple servers into one. It means:
- less energy use
- much more efficient
- less hardware
- price => happy
Making a backup is as easy as making a copy of the virtual hard drive file, and when your computer fails, you can use this file to restart your server in no time on a different computer.
Another case where you would want to use a virtual machine: to share an operating system with pre-installed software (for example, for school use). No more losing time installing it!
Use at home? Of course! You could use it to test new operating systems, or to test software when you want to ensure that it’s safe. You could also use multiple screens, multiple operating systems! How cool is that??
- Full virtualization: You already have a host with an operating system, and use a program like VirtualBox to simulate a computer. The virtual machine doesn’t have direct access to the hardware (see the VirtualBox sketch after this list).
- Para virtualization: Some components from the virtual machine have direct access to the hardware.
- Application virtualization: A technique where programs you install are bound to a layer. When you disable that layer, all the changes the program made are gone.
- Desktop virtualization: Multiple screens, and multiple people, work on the same computer. Each user has his own screen.
- Storage virtualization: Multiple hard drives combined in one logical volume.
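As a small, hedged illustration of full virtualization in practice, the sketch below drives VirtualBox’s command-line tool VBoxManage from Python. The VM name, OS type and resource sizes are made-up example values; check `VBoxManage --help` for the options your version supports.

```python
# Create and start a VirtualBox VM by calling the VBoxManage command-line tool.
# Assumes VirtualBox is installed and VBoxManage is on the PATH.
import subprocess

VM_NAME = "test-ubuntu"   # example name

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)   # stop if any command fails

run("VBoxManage", "createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")
run("VBoxManage", "modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2")
run("VBoxManage", "startvm", VM_NAME, "--type", "headless")   # run without opening a window
```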
Do you want to test it? Check this tutorial about how to install Ubuntu on VirtualBox! |
Have you ever seen a Computerized Tomography (CT) scanner? If you ever get the chance to look at one closely, you might feel impressed: it’s a feat of meticulous engineering, capable of simultaneously taking X-rays from hundreds of different angles. You might also feel grateful – it’s one of the most important medical diagnostic advances of the last several decades (with a Nobel Prize behind it).
The powerful processor inside the CT scanner reconstructs a three-dimensional representation, using image processing techniques that carefully examine the image and identify areas of interest. Medical imaging is perhaps today the most advanced and widely practiced branch of medical diagnosis. It surely is hard to believe that this field emerged from an accidental discovery.
In 1895, Röntgen was surprised when he observed that a cathode-ray tube caused different items in his lab to glow. He then experimented with it and realized that this ray could pass through human tissue, but not bones. He called his discovery an “X” ray to communicate his inability to explain the nature of this kind of radiation. A long time passed before ultrasound sensing, originating in maritime warfare technology, was used to search for tumors in the 1960s. Magnetic Resonance Imaging, first developed in the 1970s, uses strong magnetic fields and works on the principle that tissues emit different signals depending on their composition; it is now widely used to detect cancerous cells.
Since then, progress in the field of medical imaging has moved from discovering new techniques of portraying the human body to perfecting existing ones. Computers were introduced and equipped CT scanners in 1972, making it possible to apply image analysis techniques on the generated images. Reduction in cost, harmful side-effects to patients and inaccuracies were of paramount importance to make these practices beneficial and safe for the public. The software employed by medical imaging systems, the operating systems and the image analysis techniques, have become the focal point of progress.
Today, medical imaging is entering a new era, focusing on unlocking the true power of the images themselves. These images, produced by medical systems, vary significantly across applications, but can all be viewed as data useful for medical diagnosis. This new turn is not a coincidence. Our society is currently leveraging the power of data in many other commercial and scientific sectors, from marketing to autonomous driving and climate change. The technology that served as a catalyst in all these cases is Artificial Intelligence (AI), and, when it comes to healthcare, this transition from traditional to AI-enabled practices merits a closer look.
Today, AI researchers are collaborating with physicians to improve medical diagnosis and, in the past couple of years, we have seen systems that successfully use AI for the detection of eye disease, cancer and Alzheimer’s. While slow, AI is steadily penetrating the traditional field of healthcare, as reported in the AI Index 2018 report.
But why has artificial intelligence found such fertile ground in healthcare?
Targeting efficiency and consistency in medical diagnosis
Developing advanced medical imaging created a new problem: we were generating health data quicker than we were able to process it using traditional methods. Roughly 90 per cent of all healthcare data comes from imaging technology. It is estimated that at any given time in the UK, there are more than 300,000 X-rays that have been waiting more than 30 days for analysis.
Medical data are only useful when they contribute to medical diagnosis. According to a study, pathologists correctly diagnose abnormal, precancerous cells only about half the time. Also, pathologists are restricted by their human nature: more than six hours of work can be strenuous for them, while a human mind cannot simultaneously analyse more than a handful of pictures to reach a verdict.
AI has long established its success in discovering patterns across vast data sets and reaching conclusions as to the nature of data. Machine learning algorithms observe large data sets of images, discover patterns associated with the presence or absence of a particular disease, and learn prediction models that can be used for medical diagnosis. Deciding whether a tumor is malignant or benign can be viewed as a classification task and data offered from past diagnosis of human practitioners are fed into prediction models to discover what makes a correct diagnosis.
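A minimal sketch of such a classification task, using scikit-learn’s bundled breast-cancer dataset (tabular tumor measurements derived from digitized biopsy images) as an illustrative stand-in for real clinical data; this is a toy, not any of the systems described in this article.

```python
# Train a simple benign-vs-malignant classifier on tabular tumor features.
# pip install scikit-learn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)   # label 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)    # a deliberately simple prediction model
model.fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# A clinical setup would rarely use the default decision rule blindly: here, anything
# not predicted "benign" with high confidence is flagged for human review instead.
prob_benign = model.predict_proba(X_test)[:, 1]
flagged = prob_benign < 0.9
print(f"cases flagged for human review: {flagged.sum()} of {len(y_test)}")
```

Even a toy like this only becomes useful after the careful validation and threshold choices discussed below.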
Although immense effort and care are put into the training process, once a model has been correctly validated, it can be a consistent and tireless physician that we can consult whenever we need a diagnosis in the future.
Should we then expect AI to replace the less efficient and more prone-to-error human physicians?
Understanding the limitations of artificial intelligence
Trustworthy prediction models are of immense use, but medical diagnosis is an intricate process consisting of many steps. It follows, then, that a failure at any one of these steps could lead to a wrong, and perhaps dangerous, diagnosis.
The most straightforward restriction in AI is its need for data generated by humans. There are a lot of hurdles in this step. Generation requires human resources, time and money, acquisition of data is restricted by privacy laws and, even after data have been collected, the lack of a common interface across different imaging systems, practitioners and countries, leads to the creation of enormous, unstructured databases. These databases are scary enough to discourage any machine learning team from pre-processing them, meaning that a surprisingly large amount of these data does not contribute to medical diagnosis in the end.
Another important fact about AI that any human physician should bear in mind when interacting with it, is that a prediction model is not a black box offering prophecies that should be blindly trusted. Models are of a purely statistical nature and their decisions should be tuned in accordance with a physician’s understanding of a disease: should the model be conservative when predicting the presence of a disease, or alarm at the slightest chance of a bad diagnosis? Has the model “seen” enough patients in the past similar to the one currently under examination? The human operator should leverage all of their available context to evaluate, and even question, the decisions of the AI system.
This is a great challenge today and has sparked controversy among AI experts, with one side claiming that the complexity and depth of a machine learning model consistently outperform human intuition, and experts on the other side warning that future progress in the field requires explainable AI. When it comes to applications such as healthcare, researchers, practitioners and institutions tend to agree that AI should be viewed as a tool – and how useful can a tool that you can’t understand be?
Finally, when AI is used in a way that directly affects human life, ethical concerns naturally arise. As with autonomous vehicles, medical diagnosis raises the question of liability: who is responsible if a mistake made by an AI system threatens a human life? As diagnosis is a complex procedure that requires the collaboration of different components, our society needs to develop an understanding and a legal framework for addressing these ethical concerns.
A focus on augmentation
Our society is still today pestered by inefficient and unreliable processes, but one should not be carried away by our contemporary urge for automation. Medical diagnosis is an intricate area that differs from other AI applications, such as industrial processes and virtual assistants, in that complete automation is not desired. We will always need human physicians for decision making, interacting with patients and advancing this field with new technologies and ideas.
The essence of augmentation lies in the complementarity of artificial and human intelligence: the advanced analytical skills of AI can be combined with the human abilities of creativity and intuition to bring us one step closer to our quest for efficient, human-centric medical diagnosis. Already, machine learning models are used in AI-based predictive patient surveillance systems, such as WAVE, where they can leverage, in real time, data provided by wearables, while SubtlePET and SubtleMR are the first AI-enabled solutions that recently received FDA approval. Although they do not provide medical diagnosis, these systems seamlessly improve the efficiency of existing PET and MRI scanners by enhancing the image quality, accelerating scanning and reducing radiation levels through the use of advanced deep learning algorithms.
If anything, innovation will continue to accelerate. Within the last five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, beating every other industry in terms of deal activity related to artificial intelligence. While this is exciting, there’s plenty of call for caution: as with all relationships, this collaboration of artificial and human intelligence needs some time to mature and find a balance that augments the strengths and counteracts the weaknesses of the two parts. |
CMP is a problem-centered curriculum promoting an inquiry-based teaching-learning classroom environment. Mathematical ideas are identified and embedded in a carefully sequenced set of tasks and explored in depth to allow students to develop rich mathematical understandings and meaningful skills. The Common Core State Standards for Mathematics (CCSSM) and the Standards for Mathematical Practice are embedded within each problem. The curriculum helps students grow in their ability to reason effectively with information represented in graphic, numeric, symbolic, and verbal forms and to move flexibly among these representations to produce fluency in both conceptual and procedural knowledge.
The overarching goal of the Connected Mathematics Project (CMP) is to help students and teachers develop mathematical knowledge, understanding, and skill along with an awareness of and appreciation for the rich connections among mathematical strands and between mathematics and other disciplines. The CMP curriculum development has been guided by our single mathematical standard:
All students should be able to reason and communicate proficiently in mathematics. They should have knowledge of and skill in the use of the vocabulary, forms of representation, materials, tools, techniques, and intellectual methods of the discipline of mathematics, including the ability to define and solve problems with reason, insight, inventiveness, and technical proficiency. |
Raz-Plus resources organized into weekly content-based units and differentiated instruction options.
Informational (nonfiction), 1,042 words, Level R (Grade 3), Lexile 910L
This informational text tells all about one of America's favorite championship games: the Super Bowl. The author explains how the famous football game began, how it got its name, and how it's changed through the years. Also included is a collection of game highlights, complete with a timeline of "Super Moments in Super Bowl History."
Guided Reading Lesson
Use of vocabulary lessons requires a subscription to VocabularyA-Z.com.
Use the reading strategy of connecting to prior knowledge to understand text
Sequence Events : Sequence events
Grammar and Mechanics
Quotation Marks : Understand the use of quotation marks in text
Conjunctions : Identify conjunctions
Think, Collaborate, Discuss
Promote higher-order thinking for small groups or whole class |
PITTSBURGH--Trained AI agents can adopt human design strategies to solve problems, according to findings published in the ASME Journal of Mechanical Design.
Big design problems require creative and exploratory decision making, a skill in which humans excel. When engineers use artificial intelligence (AI), they have traditionally applied it to a problem within a defined set of rules rather than having it generally follow human strategies to create something new. This novel research considers an AI framework that learns human design strategies through observation of human data to generate new designs without explicit goal information, bias, or guidance.
The study was co-authored by Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University's College of Engineering, Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon, and Chris McComb, an assistant professor of engineering design at the Pennsylvania State University.
"The AI is not just mimicking or regurgitating solutions that already exist," said Cagan. "It's learning how people solve a specific type of problem and creating new design solutions from scratch." How good can AI be? "The answer is quite good."
The study focuses on truss problems because they represent complex engineering design challenges. Commonly seen in bridges, a truss is an assembly of rods forming a complete structure. The AI agents were trained to observe the progression in design modification sequences that had been followed in creating a truss based on the same visual information that engineers use--pixels on a screen--but without further context. When it was the agents' turn to design, they imagined design progressions that were similar to those used by humans and then generated design moves to realize them. The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems.
The framework was made up of multiple deep neural networks which worked together in a prediction-based situation. Using a neural network, the AI looked through a set of five sequential images and predicted the next design using the information it gathered from these images.
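The paper’s exact architecture is not reproduced here, but a hedged PyTorch sketch of the general idea, a small convolutional network that reads a stack of five design snapshots and scores candidate next moves, might look like the following; every layer size and the number of candidate moves are arbitrary illustrative choices.

```python
# Illustrative only: a tiny CNN that maps five stacked design images to next-move scores.
# pip install torch
import torch
import torch.nn as nn

class NextDesignPredictor(nn.Module):
    def __init__(self, num_moves: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=5, stride=2, padding=2),   # 5 input frames
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, num_moves)   # scores over candidate design moves

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 5, height, width) grayscale snapshots of the design sequence
        return self.head(self.features(frames).flatten(1))

model = NextDesignPredictor()
dummy = torch.rand(1, 5, 64, 64)     # one sequence of five 64x64-pixel images
print(model(dummy).shape)            # torch.Size([1, 64])
```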
"We were trying to have the agents create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step," said Raina.
The researchers tested the AI agents on similar problems and found that on average, they performed better than humans. Yet, this success came without many of the advantages humans have available when they are solving problems. Unlike humans, the agents were not working with a specific goal (like making something lightweight) and did not receive feedback on how well they were doing. Instead, they only used the vision-based human strategy techniques they had been trained to use.
"It's tempting to think that this AI will replace engineers, but that's simply not true," said McComb. "Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, like we did in the work, then we free engineers up to think big and solve problems creatively."
This paper is part of a larger research project sponsored by the Defense Advanced Research Projects Agency (DARPA) about the role of AI in human/computer hybrid teams, specifically how humans and AI can work together. With the results from this project, the researchers are considering how AI could be used as a partner or guide to improve human processes to achieve results that are better than humans or AI on their own.
For more information: Raina A, McComb C, Cagan J. Learning to design from humans: imitating human designers through deep learning. ASME. J. Mech. Des. 2019;():1-24. doi:10.1115/1.4044256.
About the College of Engineering at Carnegie Mellon University: The College of Engineering at Carnegie Mellon University is a top-ranked engineering college that is known for our intentional focus on cross-disciplinary collaboration in research. The College is well-known for working on problems of both scientific and practical importance. Our "maker" culture is ingrained in all that we do, leading to novel approaches and transformative results. Our acclaimed faculty have a focus on innovation management and engineering to yield transformative results that will drive the intellectual and economic vitality of our community, nation and world. |
Water, Sanitation and Hygiene (WASH)
We observed that many villagers, especially children, drank from unclean water sources such as the river and rainwater that was collected in urns/pots. Most, if not all houses were not equipped with hand soap near sinks and many villagers do not practice washing their hands with soap after defecation, after farming, or before having their meals. In an interview with Phon Thong Medical Centre, we learnt that their top referral to the District Hospital involved severe stomachache, diarrhea and resultant dehydration.
what we did:
In dealing with the problems relating to WASH, we adopted a two-pronged approach:
We decided to educate the village children on the “Hygiene” aspect of WASH, and see if they would pass this knowledge on to their parents and other adult villagers. We taught the children 2 components of basic hygiene: hand hygiene (WHO’s 7 steps of hand-washing) and dental hygiene (Colgate’s brushing guidelines). To do so, we used creative methods to engage them and help them to remember the steps such as singing the 7 steps of hand-washing in the tune of the Happy Birthday Song and peer teaching.
In order to enable the children to practice their newly-learnt proper hand and dental hygiene in school, we built tippy-taps in schools to provide them with clean water and soap. We involved the school teachers and students in the maintenance of the tippy-taps for sustainability.
During our Dec 18 trip, we plan to assess the retention of hand and dental hygiene knowledge in the children we taught and also see whether the transfer of this knowledge from children to adults has successfully occurred.
In addition, we will be checking to see if our tippy-taps have been well maintained and utilised by the students and teachers. |
How did feminism of the antebellum period challenge traditional gender beliefs and social structures?
The feminism of the antebellum period was very much a protofeminism. It was not anywhere near to being as assertive as feminism is today. However, for its time, it did represent a challenge to traditional beliefs about gender roles and social structures.
In the antebellum period, there was a widespread belief that women and men should operate in separate spheres. Women were believed to be better suited... |
A new Science
It was in one of the more economically important fields of geology that women paleontologists had what turned out to be the biggest long-term impact. In the first decades of the twentieth century, and especially during World War I, the growing demand for petroleum stimulated interest in micropaleontology (the study of fossils of single-celled and other very small organisms) as a tool for correlating the rock layers in which oil is found. It was a new science; micropaleontology courses were soon taught at several colleges and universities, including the University of Texas at Austin.
There, Professor Francis Luther Whitney (1878-1962) had a number of female students, including two women – Alva Ellisor (1892-1964) and Esther Applin (1895-1972) – who eventually became the leading economic micropaleontologists in the US. In a study of early 20th century women paleontologists, 10 of 20 worked for oil companies. Like other new fields of science, micropaleontology was more open to women than the established disciplines, which were likely to be “old boys’ clubs”. It was also a more welcoming atmosphere because of its setting—micropaleontological work took place in a quiet laboratory, rather than a rough, dirty fieldsite. For example, when Winifred Goldring asked about a field position at the USGS in 1928, she was turned away, as they wanted a “he-man” paleontologist.
Micropaleontology also required patient attention to tiny details, a skill often attributed to women, at the time. A 1926 article in the LA Times reinforced the stereotype: “A larger number of women attended the present convention than any previous national meeting of petroleum geologists, and the science is bringing more and more women into the fold. Especially in the highly specialized fields of the science, such as paleontology, seismography and stratigraphy, which require attention to minute scientific details, are the women said to be gaining prominence.” |
Little Gem Nebula shows off complex, knotty filament structures with a bright, enclosed central gas bubble surrounded by larger, more diffuse gas clouds
Image credit: ESA/Hubble & NASA, Acknowledgement: Judy Schmidt
Space news (August 15, 2015) – approximately 6,000 light-years toward the constellation Sagittarius (The Archer) –
When NASA’s Hubble Space Telescope first looked at the Little Gem Nebula (NGC 6818) using the Wide Field Planetary Camera 2 back in 1997, the image obtained was done so with filters that highlighted ionized oxygen and hydrogen in the planetary nebula.
This image of the Little Gem Nebula, obtained using different filters, shows off complex structures with a bright, enclosed central gas bubble surrounded by larger, more diffuse gas clouds, offering the human journey to the beginning of space and time a totally different view of this spectacular stellar object.
Our own Sun billions of years in the future will shed its outer layers into space to create a glowing cloud of gas similar to planetary nebula NGC 6818. Space scientists believe the stellar wind created by the star at the center of this planetary nebula provides the force to propel the uneven outflowing mass.
Studying the final days of sun-like stars provides scientists with data concerning the life cycle of stars similar in size and output to the Sun. Data they can use to devise new ideas and theories to delve deeper into the mysteries surrounding the closest star to Earth.
You can find more information on planetary nebula here.
Learn more about NASA’s space mission here.
You can learn more about the discoveries of the Hubble Space Telescope here.
Learn more about main sequence stars like the Sun. |
The answer to this question is really in the fire itself. Fire is a chemical reaction that occurs very quickly and gives off heat and light. The most common such reaction is the chemical action between oxygen and a fuel. If heat and light are given off, you have fire! To make a fire, three things are necessary.
The first is a fuel, the second is oxygen and the third is heat. Paper or wood that is simply exposed to air does not catch fire, because it is not hot enough. When the fuel becomes hot enough, oxygen can begin to combine freely with it and then it will burst into flames. Every fuel has its own particular temperature at which it begins to burn. This temperature is called the kindling temperature or flash point of the fuel. When something catches fire, it is very important to bring the flames under control as soon as possible. This is especially true when a building catches fire. |
It’s never too early to nurture children’s development of language and literacy skills. Even at a very young age, experiencing different genres of books, hearing stories from the adults who care about them, and exploring books alone or with peers helps them learn how to listen to and understand language and how to share their thoughts, ideas, and feelings.
Our new video series highlights eight ways to support language and literacy skills development in your own early childhood classroom.
- Capture children’s interest before you read.
Have children sitting on the edge of their seats before the story even begins! Before your next read-aloud, take a moment to get children interested by providing an exciting overview of the story they’re about to hear. Not sure what this looks like? In this video, Kai-leé Berke provides a few examples so you can see this technique in action.
- Introduce vocabulary during a read-aloud.
Select a few words to highlight and define for children before you begin the read-aloud. Choose words that are important to understanding the meaning of the story and then define the words as you read. You can define words during a read-aloud by pointing out part of an illustration that shows the meaning of a word, showing facial expressions or moving your body in a way that provides explanation, or giving a brief definition.
- Share the see-show-say strategy with families.
See-show-say is an easy, 3-part strategy that you can share with families for conducting read-alouds at home. In this video, Breeyn Mack demonstrates how adults can invite children to see, show, and say what they’re experiencing in the story.
- Highlight children’s favorite books.
Highlighting children’s favorite books during read-aloud time is a great way to get them engaged and keep their attention. Encourage children to talk about their favorite books and share their recommendations with others. Keep sticky notes and pencils in the Library area so children can identify their favorite books. Ask them to write their name on the sticky note and then place it inside the book’s cover; then, at read-aloud time, you can point out that this book is Charlie or Lia’s favorite.
- Establish read-aloud routines.
Young children thrive with consistent, predictable routines, so it’s important to establish regular times for reading. We recommend scheduling time for read-alouds at least twice a day.
- Read in small groups.
To get the most literacy learning out of a read-aloud experience, make sure you take the time to read to children in small groups. Research shows that children who hear stories in small-group settings develop stronger comprehension skills, ask and answer more questions, and comment more on the text. So while you’re probably already reading aloud to large groups of children, try to find time for these small-group interactions, too!
- Support children who are learning two languages.
To support dual-language learners, include books and recorded readings in children’s first languages and wordless books in your classroom book collection. Whenever you can, read the story in the child’s first language before reading in English.
- Start early! Read with infants and toddlers.
Make reading physically interactive by inviting children to hold the book and turn the pages, if they are physically able, or offering them a toy to hold while you’re reading. Focus their attention by pointing to and naming the things in the pictures. And be prepared to read the same books over and over again—very young children thrive on routine and repetition.
Check out our new landing page for even more resources designed to help you foster a lifelong love of reading!
A new webinar series, a free download of language and literacy Mighty Minutes®, and NEW book bundles will help you introduce children to a variety of genres and types of literature. Learn more >> |
What is bioethanol?
Bioethanol is an example of a renewable energy source because the energy is produced by using an organic substance and sunlight, which cannot be depleted. Up to now, bioethanol has been primarily produced for fuel and used in vehicles, but experts believe this technology can be applied to electricity generation as a green, low-carbon alternative. Recent trials have shown that burning bioethanol as a fuel emits lower levels of greenhouse gases such as nitrogen oxides, carbon dioxide and sulphur oxides than other fuels such as natural gas and diesel.
Bioethanol fuel production
Bioethanol is an alcohol made by fermenting the sugar components of plant materials and is made mostly from sugar and starch crops. Creation of ethanol starts with the growth of plants via photosynthesis, which produces feedstocks such as sugar cane and corn. These feedstocks are then processed into ethanol, first using enzyme digestion to release sugars from the stored plant starch, which are then fermented, distilled and finally dried. Bioethanol is already the most commonly used biofuel in the world, and is especially prominent in Brazil.
Bioethanol used for electricity generation
Burning bioethanol produces a lower thermal energy output than other thermoelectric generating processes that use fuels such as coal or oil. To generate the same level of energy and electricity, a much larger stock of bioethanol is required. The advantage, however, is that this is a carbon-neutral fuel. During growth, the plants remove CO2 from the atmosphere, and when they are burnt they release this gas back into the atmosphere; the whole process is therefore carbon neutral, whereas burning coal or oil adds CO2 to the atmosphere.
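A back-of-the-envelope sketch of the "much larger stock" point, using rough textbook lower heating values; the figures are approximate and for illustration only.

```python
# Rough comparison: how much fuel mass delivers the same amount of heat?
LOWER_HEATING_VALUE_MJ_PER_KG = {
    "bioethanol": 27.0,     # approximate
    "fuel oil": 42.0,       # approximate
    "natural gas": 48.0,    # approximate
}

target_energy_mj = 1_000_000.0   # 1 TJ of heat, an arbitrary example

for fuel, lhv in LOWER_HEATING_VALUE_MJ_PER_KG.items():
    tonnes = target_energy_mj / lhv / 1000.0
    print(f"{fuel:<12} ~{tonnes:,.0f} tonnes for 1 TJ of heat")
```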
The future of bioethanol in producing electricity
The world’s first bioethanol power plant, located in Brazil, opened for testing in early 2010 with an 87MW capacity, enabling it to provide power for over 150,000 inhabitants. The power plant is looking to generate electricity on a commercial scale using sugar-cane bioethanol as one of the key fuels. Testing of emissions from this power plant has shown a 30% reduction in greenhouse gases such as nitrogen oxide, without an impact on its power generating capacity.
In the UK we are probably not best suited to exploit this fuel as a green electricity generating source; Windfarms, Wave and Tidal Energy are probably better suited to our unique topography. Bioethanol, on the other hand, is heavily used as a blended transportation fuel in the US and Brazil. This is because mass production and cultivation of high-yielding crops currently takes place in those countries.
However, recent trends forcing the prices of crops and food upwards have meant that enthusiasm for this fuel source as a full alternative to conventional fossil fuels has been slightly reduced.
- Renewable energy source – it relies on sunlight & photosynthesis process which doesn’t diminish.
- Reduced emissions of greenhouse gases from the combustion process compared with fossil fuels.
- Works well as an “add-on” fuel, a blending substance to conventional fuel.
- Bioethanol relies on crop yields and crop prices as an input – therefore higher prices make the substance less economical.
- The combustion process produces less energy than conventional fuels such as oil and coal. |
These dimensions are not dimensions in the classic sense of the word that most of us understand.
Conducted by the Blue Brain Project, scientists discovered fascinating new details about the complexity of the human brain.
“We found a world that we had never imagined,” explained neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland.
A universe of multidimensional structures inside the brain
“There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions,” Markram added.
By studying the human brain, researchers discovered that traditional mathematical views were not applicable and ineffective.
“The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly,” Markram revealed.
Instead, scientists decided to give algebraic topology a go.
Algebraic topology is a branch of mathematics that uses tools from abstract algebra to study topological spaces.
“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explained professor Hess.
The Human Brain can create structures in up to 11 dimensions
The scientists discovered that the structures inside the brain are created when a group of neurons – cells that transmit signals in the brain – form something referred to as a clique.
Each neuron is connected to every other neuron in the group in a unique way, creating a new object. The more neurons there are in a clique, the higher the ‘dimension’ of the object.
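In graph terms, a group of n all-to-all connected neurons is treated as an (n-1)-dimensional object (a simplex). A small sketch with the networkx library counts cliques by size in a toy random graph. This shows only the basic counting idea; the Blue Brain work itself uses directed cliques in simulated cortical microcircuits, not a random graph.

```python
# Toy illustration: find all-to-all connected groups (cliques) and report their sizes.
# pip install networkx
from collections import Counter
import networkx as nx

G = nx.erdos_renyi_graph(n=60, p=0.25, seed=1)   # 60 "neurons" with random connections

sizes = Counter(len(clique) for clique in nx.find_cliques(G))   # maximal cliques only
for size in sorted(sizes):
    # A clique of k fully connected nodes corresponds to a (k-1)-dimensional simplex.
    print(f"{sizes[size]:4d} maximal cliques of {size} nodes (dimension {size - 1})")
```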
Algebraic topology allowed the scientists to model the structures within a virtual brain, created with the help of computers. They then carried out experiments on real brain tissue, in order to verify their results.
By adding stimuli into the virtual brain, the researchers found that cliques of progressively HIGHER dimensions assembled. Furthermore, in between the cliques, scientists discovered cavities.
“The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” explained Levi.
“It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc.”
“The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates,” he added.
The new data about the human brain offers unprecedented insight into how the human brain processes information.
However, the scientists have said that it still remains unclear as to how the cliques and cavities form in their highly specific ways.
The new study may eventually help scientists uncover one of the greatest mysteries of neuroscience: where does the brain ‘store’ its memories.
“They may be ‘hiding’ in high-dimensional cavities,” Markram concluded. |
Speaker Basics 101: A closer look at the anatomy & audio specs explained
Have you ever wondered how speakers work? It may seem complicated on the surface; however, speakers utilize a couple of fairly simple concepts to achieve sound reproduction. Taking a closer look at the anatomy of a speaker and understanding the basic specifications of audio technology helps you to be confident about your purchase. In this detailed blog-cum-guide, we help you quickly understand the Science of Sound and the Anatomy of a Speaker. Let's have a look.
How do speakers work & how do we hear sounds?
When the tweeter or woofer, also known as a driver or transducer, moves back and forth, it creates vibrations in the air, otherwise known as sound waves. The recording contains an encoding of the sound wave in some form, either by transforming the sound waves into grooves on a record, pits on a CD or just binary data in a digital file. That encoded translation of the sound wave is then converted into an electrical signal which goes from the playback device and is then amplified, either at an external amp or receiver or at an amp inside the speakers. It’s these electrical signals, also known as current, that change the polarity of a magnet inside the speaker called a voice coil. As the polarity of the voice coil is switched back and forth by the current, it moves closer to or further away from another magnet next to it. Since the voice coil is attached to the driver, they both move together and the current is transformed back into sound waves.
One of the more interesting parts of the process is that once the sound reaches our ears, our nervous system once again translates the vibrations from sound back into an electrical current which is then processed by our brain where the truly mysterious part begins as we interpret that sound through perception.
A closer look at the Anatomy of a speaker driver
Let’s take a more in-depth look at the components that make up just the driver and how each functions:
- Cone: The cone is connected to the voice coil and moves air to create sound waves. Most modern tweeters move air with a dome rather than a cone.
- Voice coil: The electromagnet that drives the cone and is alternately charged positively and negatively.
- Magnet: The non-changing magnetic field that allows the voice coil’s alternating magnetic force to be attracted or repelled.
- The top plate, back plate and pole piece: The magnetically conductive parts that efficiently concentrate the magnet’s energy around the voice coil.
- Spider: A springy cloth disc that keeps the voice coil and the bottom of the cone from moving off to the side, focusing the coil’s motion forward and backward.
- Surround: A flexible ring that keeps the cone from moving side to side while allowing it to push forward and backward. Together with the spider, it forms the suspension system for the moving parts: the cone and the voice coil.
- Flex wires and wire terminals: These components move the electrical current from the amplifier to the voice coil.
- Dust cap: Covers the middle section of the cone and keeps debris from getting into the gap between the magnet and the pole piece where the voice coil resides.
- Frame (or basket): Holds the entire speaker assembly together and attaches it to the cabinet.
In addition to the driver, a couple of other parts are needed to make a complete speaker. First is the cabinet, which is simply the box into which the drivers are installed. Its main purpose is to trap the sound waves that come off the back of the driver so that they do not cancel out the sound coming from the front. The cabinet also positions the drivers correctly with respect to one another and allows them to work efficiently.
Another feature you will see on many speakers is a port, which is simply an opening in the cabinet that lets the long wavelengths of low frequencies escape and reinforces the speaker’s bass response. With a port, the speaker can reproduce bass at higher volumes than it could without one. Another way to increase a speaker’s bass response is to include a passive radiator, which has all the parts of a regular driver except the voice coil and magnet and is not wired to an amplifier. The passive radiator moves back and forth with the bass sound waves created by the other drivers, allowing more bass output from the speaker. A passive radiator can be preferable to a port in some cases because it is not as prone to turbulence or port noise. It also allows the cabinet to remain small, an engineering approach that we at Aperion use for our small-footprint subs and centre channels.
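For a feel of how a port "tunes" a cabinet, the standard Helmholtz resonator approximation relates the tuning frequency to the port area, the effective port length and the box volume. The sketch below uses that textbook formula with assumed box and port dimensions and a rule-of-thumb end correction; it is a back-of-the-envelope estimate, not a design tool.

```python
# A back-of-the-envelope sketch: estimating the tuning frequency of a ported
# cabinet with the standard Helmholtz resonator formula
#   f_b = (c / (2*pi)) * sqrt(A / (V * L_eff))
# Box volume, port size and the end-correction factor are assumed values.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
box_volume = 0.030      # 30 litres, in cubic metres (assumed)
port_diameter = 0.05    # 5 cm port (assumed)
port_length = 0.12      # 12 cm physical port length (assumed)

port_area = math.pi * (port_diameter / 2) ** 2
# End correction: the air just outside each opening also moves, effectively
# lengthening the port; 0.85 * radius per end is a common rule of thumb.
effective_length = port_length + 2 * 0.85 * (port_diameter / 2)

f_b = (SPEED_OF_SOUND / (2 * math.pi)) * math.sqrt(port_area / (box_volume * effective_length))
print(f"Estimated port tuning frequency: {f_b:.1f} Hz")
```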
Finally, speakers with more than one driver, which is to say nearly all loudspeakers, use crossover networks of circuitry to ensure that each driver plays the frequencies for which it is designed. For instance, in a two-way speaker, which pairs a tweeter with one or more woofers covering the same frequency range, the crossover filters out low frequencies before the signal reaches the tweeter and filters out high frequencies before the signal reaches the woofer(s). This ensures that the drivers do not waste energy attempting to reproduce frequencies they are not designed to handle. Commonly, a capacitor is used to filter out lower frequencies for the tweeter, and a coil, or inductor, is used to filter out higher frequencies for the woofer. The crossover point is the frequency at which one driver’s response rolls off in decibels (dB) and another driver’s response rises; you can think of it as the “handoff” of the sound from one driver to another. Choosing components that set the right crossover point for each driver is critical to ensuring that the different drivers in a speaker blend together seamlessly while faithfully reproducing the full audio spectrum.
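The first-order filters described above reduce to two textbook formulas: a series capacitor of C = 1/(2πfR) in front of the tweeter acts as a high-pass, and a series inductor of L = R/(2πf) in front of the woofer acts as a low-pass. The sketch below plugs in an assumed 8-ohm impedance and a 2.5 kHz crossover point purely for illustration; real crossover design also accounts for driver impedance curves and steeper filter slopes.

```python
# A minimal sketch of the textbook first-order crossover formulas: a series
# capacitor high-passes the tweeter (C = 1 / (2*pi*f*R)) and a series inductor
# low-passes the woofer (L = R / (2*pi*f)). The 8-ohm impedance and 2.5 kHz
# crossover point are assumed, illustrative values.
import math

def highpass_capacitor(crossover_hz: float, impedance_ohms: float) -> float:
    """Series capacitance (farads) for a first-order high-pass at crossover_hz."""
    return 1.0 / (2 * math.pi * crossover_hz * impedance_ohms)

def lowpass_inductor(crossover_hz: float, impedance_ohms: float) -> float:
    """Series inductance (henries) for a first-order low-pass at crossover_hz."""
    return impedance_ohms / (2 * math.pi * crossover_hz)

f_c, z = 2500.0, 8.0
print(f"Tweeter capacitor: {highpass_capacitor(f_c, z) * 1e6:.2f} uF")  # ~7.96 uF
print(f"Woofer inductor:   {lowpass_inductor(f_c, z) * 1e3:.3f} mH")    # ~0.509 mH
```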
FAQs: Speaker & Audio Specifications