Inflation and Its Influence on the Economy
Welcome to the last lesson of the course!
Today, we’ll talk about inflation and the risks that governments face when trying to shepherd along economic growth.
People studying economics often wonder why Central Banks would ever raise interest rates. If lower interest rates help stimulate the economy, why not set rates as low as possible and leave them there?
The answer is that doing this risks the economy overheating and growing at an unsustainable rate, which in turn can cause a recession. In addition, and of equal concern, low interest rates can lead to a severe problem with inflation.
What Is Inflation?
Mathematically, inflation is defined as the rate of growth in the price of goods and services across the economy. It is usually measured as a percentage on a year-over-year basis. So, a 5% inflation rate means that the average price level of products across the economy rose by 5% from one year to the next. Inflation can have several causes, but the most common is an economy growing at an unsustainable rate.
As an example, in the US, inflation has averaged between 2% and 3% per year for much of the last century. In other countries, higher inflation has sometimes been a problem. In Germany in the early 1920s, inflation ran well over 1000% per year for a time, and more recently, severe inflation has been reported in Venezuela and Iran. Moderate levels of inflation—anywhere from 4% to 10%—are common in places like China, Brazil, and India, among other developing nations.
Why Is Inflation a Problem?
Inflation can be a severe problem for an economy. If prices are rising rapidly, then the paper money that consumers and businesses hold becomes less valuable over time, meaning that the same amount of paper cannot buy as much in the way of real goods and services. So, people spend time and effort trying to figure out how to avoid the problems associated with the diminishing value of their money. In addition, rising prices make it harder for business owners to price their own products or enter into long-term contracts. A famous example of this occurred during the Weimar Republic in Germany after World War I: at the time, people had to use wheelbarrows of paper money to buy a loaf of bread, so they spent all their time trying to change their nearly worthless paper money into things they could actually use, like food and clothing.
Deflation and Inflation
While inflation is a common problem in many economies, there is an even worse related problem: deflation. Deflation occurs when the price of goods and services in an economy shrinks over time, i.e. goods and services become cheaper each year. In other words, it is the opposite of inflation. In modern economic history, deflation is relatively rare, having happened most notably in Japan off and on from 1990 to 2010. Because deflation is so rare, economists do not fully understand why it happens, but one common explanation is that deflation occurs when an economy undergoes structural changes that severely slow economic growth.
Deflation is a severe economic problem because it gives people an incentive to not spend money, thus artificially reducing demand and slowing the economy. As prices of goods and services fall each year, it causes people to want to wait as long as possible before buying something—the longer they wait, the cheaper the product gets. This, in turn, means that the economy is not operating at full potential.
The solution to both inflation and deflation is proper control of the money supply by the Central Bank, a topic we discussed previously.
This completes our final lesson in economics and this course. Congratulations! I hope you now feel better prepared to talk about major economic topics with friends, family, and coworkers. Good luck and thank you for taking the time to learn more about economics and its impact on the world around us.
|
Our bodies are vulnerable to infections from many bacteria and viruses. Because of that we have many natural defenses, collectively called the "immune system", designed to fight infections. It is possible to induce immunity with a vaccine made from components of the infecting bug or the toxin (biochemical poisons) that some bacteria produce, which will prevent future infections with the natural, full-strength bug.
If we don’t maintain optimum rates of immunization or “herd immunity”, the diseases prevented by vaccination will return. While better hygiene, sanitation and clean water help protect people from infectious diseases, many infections can spread regardless of how clean we are. If people are not vaccinated, diseases that have become uncommon such as pertussis (whooping cough), polio and measles, will quickly reappear.
Vaccines work most of the time, but not always. Most childhood vaccinations work between 90% and 100% of the time. Sometimes, though, a child may not respond to certain vaccines, for reasons that aren’t entirely understood. Vaccines are very safe. But like any medicine, they are not perfect. They can cause reactions. Usually these are mild, like a sore arm or slight fever. Serious reactions are very uncommon.
Q. Are vaccines safe?
A. Vaccines are safe. Any licensed vaccine is rigorously tested across multiple phases of trials before it is approved for use, and regularly reassessed once it is on the market.
Q. Do vaccines provide better immunity than natural infections?
A. Vaccines interact with the immune system to produce an immune response similar to that produced by the natural infection, but they do not cause the disease or put the immunized person at risk of its potential complications. |
Stroke is a condition that occurs when the supply of blood to part of the brain is interrupted or severely reduced, depriving brain tissue of oxygen and essential nutrients. Within minutes, brain cells begin to die. A stroke is a medical emergency. Mini-strokes generally occur when the supply of blood to the brain is briefly interrupted. Strokes fall into two main types: ischemic stroke, caused by a blood clot that blocks or plugs a blood vessel in the brain, and hemorrhagic stroke, caused by a blood vessel that breaks and bleeds inside the brain.
The impact factor of a journal provides a quantitative assessment tool for grading, evaluating, sorting and comparing journals of a similar kind. It reflects the average number of citations to recent articles published in science and social science journals in a particular year or period, and is frequently used as a proxy for the relative importance of a journal within its field. It was first devised by Eugene Garfield, the founder of the Institute for Scientific Information. The impact factor of a journal is calculated by dividing the number of current-year citations to items published in that journal during the previous two years by the total number of citable items published in those two years.
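As a rough worked illustration of that definition (the citation counts below are invented for the example), the two-year impact factor for a year such as 2021 is
\[ \mathrm{IF}_{2021} = \frac{\text{citations received in 2021 by items published in 2019 and 2020}}{\text{number of citable items published in 2019 and 2020}} \]
so a journal whose 150 articles from 2019–2020 received 300 citations during 2021 would have a 2021 impact factor of 300 / 150 = 2.0.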
|
A landfill site, where waste is buried.
People have made many changes to the world they live in—and some of these have had harmful consequences. As the world population increases, there is need for bigger cities, new towns and farmland. Land has to be cleared, destroying the habitats of animals and plants. Loss of habitat, pollution and hunting have driven some animals to extinction, while others are endangered. Industrialization, when not regulated, is a major threat to the environment. Factories discharge harmful chemicals into rivers and seas, while greenhouse gases are released into the air by factories and vehicles.
Areas of land and sea at risk: huge areas of rainforest have been cleared, many coastal waters have been polluted, and the desert spreads yearly.
The retreat of Pedersen Glacier in Alaska, pictured in summer 1917 (upper) and summer 2005 (lower).
Carbon dioxide and chlorofluorocarbons (CFCs) are both greenhouse gases. In the right amounts, greenhouse gases in the atmosphere trap heat from the Sun so the Earth is not too hot or too cold. But if greenhouse gases build up, too much heat is trapped and the Earth becomes warmer. This change in climate, known as global warming, will have disastrous effects if current trends continue. The ice in the polar regions will melt, raising sea levels and causing severe flooding in low-lying areas. Changes in the climatic pattern worldwide are already leading to more violent storms, flooding and long droughts.
The largest ocean oil spill ever began on 20th April 2010 when the Deepwater Horizon oil rig off the coast of Louisiana, USA, exploded. The explosion killed 11 workers and the oil spill is thought to have killed over 8000 fish, turtles, marine mammals and seabirds.
|
Whooping cranes are endangered and slowly recovering from a low point of just 15 birds and one migratory population in the wild. New efforts have established a second, eastern migratory population from captive-bred birds, although not without some difficulty, since migration routes are learned from other adults. In the eastern population, two methods were used to teach a new migration pathway: imprinting cranes on ultralight aircraft on the ground, which would then lead the cranes to an overwintering destination; or imprinting them to follow older whooping cranes or wild sandhill cranes when they migrate. After the first season, whooping cranes are no longer guided, and they gradually change their migration pathways, shortening the migration distance each year. A comparison of the two methods of imprinting (ultralights vs. other cranes) found big differences in migration distance in the first few years of age, but by age 6 the migration paths of the two groups had converged and shortened to similar distances and locations. The new research by Claire Teitelbaum and Thomas Mueller of Goethe University, Germany, and SAFS professor Sarah Converse, appears in Conservation Letters. |
Most often, the makefile directs Make on how to compile and link a program. A makefile works upon the principle that files only need recreating if their dependencies are newer than the file being created/recreated. The makefile is carried out recursively (with dependencies prepared before each target that depends upon them) until everything that requires updating has been updated and the primary/ultimate target is complete. These instructions, with their dependencies, are specified in a makefile. If none of the files that are prerequisites have been changed since the last time the program was compiled, no actions take place. For large software projects, using makefiles can substantially reduce build times if only a few source files have changed.
Using C/C++ as an example, when a C/C++ source file is changed, it must be recompiled. If a header file has changed, each C/C++ source file that includes the header file must be recompiled to be safe. Each compilation produces an object file corresponding to the source file. Finally, if any source file has been recompiled, all the object files, whether newly made or saved from previous compilations, must be linked together to produce the new executable program.
Makefiles originated on Unix-like systems and are still a primary software build mechanism in such environments.
Makefiles contain five kinds of things: explicit rules, implicit rules, variable definitions, directives, and comments.
- An explicit rule says when and how to remake one or more files, called the rule's targets. It lists the other files that the targets depend on, called the prerequisites of the target, and may also give a recipe to use to create or update the targets.
- An implicit rule says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives a recipe to create or update such a target.
- A variable definition is a line that specifies a text string value for a variable that can be substituted into the text later.
- A directive is an instruction for make to do something special while reading the makefile such as reading another makefile.
- ‘#’ in a line of a makefile starts a comment. It and the rest of the line are ignored.
A makefile consists of “rules” in the following form:
target: dependencies
	system command(s)
A target is usually the name of a file that is generated by a program; examples of targets are executable or object files. A target can also be the name of an action to carry out, such as "clean".
A dependency (also called prerequisite) is a file that is used as input to create the target. A target often depends on several files. However, the rule that specifies a recipe for the target need not have any prerequisites. For example, the rule containing the delete command associated with the target "clean" does not have prerequisites.
The system command(s) (also called recipe) is an action that make carries out. A recipe may have more than one command, either on the same line or each on its own line. Note the use of meaningful indentation in specifying commands; also note that the indentation must consist of a single <tab> character.
A makefile is executed with the make command, e.g. make [options] [target1 target2 ...].
By default, when make looks for the makefile, if a makefile name was not included as a parameter, it tries the following names, in order: makefile and Makefile.
Here is a makefile that describes the way an executable file called edit depends on four object files which, in turn, depend on four C source and two header files. To be concrete: edit is a target; kbd.o, command.o, display.o, and edit.o are the objects we link to make the executable; defs.h and command.h are headers that our objects need to compile correctly; and $(CC) -c -o $@ $< $(CCFLAGS) is a system command.
- $@ is a macro that refers to the target
- $< is a macro that refers to the first dependency
- $^ is a macro that refers to all dependencies
- % is a macro used to make a pattern that we want to match in both the target and the dependency
The makefile will recompile all objects if any of the headers change, but if an individual .c file changes, the only work that needs to be done is to recompile that file and then relink all the objects. Well-written make rules can help reduce compile time by detecting what did and did not change.
Notice the way the variables and static pattern rules are used to make the makefile more extensible and readable. We define the same, reusable rule to make each .o from each .c, and to make each target from the objects.
Also notice that we can only link one main at a time, so we have to filter out other mains at linking.
all and clean are named .PHONY because they don't refer to real files, but are things we want make to do.
CC := gcc
CCFLAGS :=
LDFLAGS :=

TARGETS := edit
MAINS := $(addsuffix .o, $(TARGETS) )
OBJ := kbd.o command.o display.o $(MAINS)
DEPS := defs.h command.h

.PHONY: all clean

all: $(TARGETS)

clean:
	rm -f $(TARGETS) $(OBJ)

$(OBJ): %.o : %.c $(DEPS)
	$(CC) -c -o $@ $< $(CCFLAGS)

$(TARGETS): % : $(filter-out $(MAINS), $(OBJ)) %.o
	$(CC) -o $@ $(LIBS) $^ $(CCFLAGS) $(LDFLAGS)
To use this makefile to create the executable file called edit, type make edit. To use this makefile to delete the executable file and all the object files from the directory, type make clean.
|
A patent foramen ovale (PFO) is a hole in the heart that didn't close the way it should after birth.
During fetal development, a small flap-like opening — the foramen ovale (foh-RAY-mun oh-VAY-lee) — is normally present in the wall between the right and left upper chambers of the heart (atria). It normally closes during infancy. When the foramen ovale doesn't close, it's called a patent foramen ovale.
Patent foramen ovale occurs in about 25 percent of the normal population, but most people with the condition never know they have it. A patent foramen ovale is often discovered during tests for other problems. Learning that you have a patent foramen ovale is understandably concerning, but most people never need treatment for this disorder.
Most people with a patent foramen ovale don't know they have it, because it's usually a hidden condition that doesn't create signs or symptoms.
It's unclear what causes the foramen ovale to stay open in some people, though genetics may play a role.
An overview of normal heart function in a child or adult is helpful in understanding the role of the foramen ovale before birth.
Normal heart function after birth
Your heart has four pumping chambers that circulate your blood:
- The right atrium. The upper right chamber (right atrium) receives oxygen-poor blood from your body and pumps it into the right ventricle through the tricuspid valve.
- The right ventricle. The lower right chamber (right ventricle) pumps the blood through a large vessel called the pulmonary artery and into the lungs, where the blood is resupplied with oxygen and carbon dioxide is removed from the blood. The blood is pumped through the pulmonary valve, which closes when the right ventricle relaxes between beats.
- The left atrium. The upper left chamber (left atrium) receives the oxygen-rich blood from the lungs through the pulmonary veins and pumps it into the left ventricle through the mitral valve.
- The left ventricle. The lower left chamber (left ventricle) pumps the oxygen-rich blood through a large vessel called the aorta and on to the rest of the body. The blood passes through the aortic valve, which also closes when the left ventricle relaxes.
Baby's heart in the womb
Because a baby in the womb isn't breathing, the lungs aren't functioning yet. That means there's no need to pump blood to the lungs. At this stage, it's more efficient for blood to bypass the lungs and use a different route to circulate oxygen-rich blood from the mother to the baby's body.
The umbilical cord delivers oxygen-rich blood to the baby's right atrium. Most of this blood travels through the foramen ovale and into the left atrium. From there the blood goes to the left ventricle, which pumps it throughout the body. Blood also travels from the right atrium to the right ventricle, which also pumps blood to the body via another bypass system.
Newborn baby's heart
When a baby's lungs begin functioning, the circulation of blood through the heart changes. Now the oxygen-rich blood comes from the lungs and enters the left atrium. At this point, blood circulation follows the normal circulatory route.
The pressure of the blood pumping through the heart usually forces the flap opening of the foramen ovale closed. In most people, the opening fuses shut, usually sometime during infancy.
Generally, a patent foramen ovale doesn't cause complications. But some studies have found the disorder is more common in people with certain conditions, such as unexplained strokes and migraines with aura.
In most cases, there are other reasons for these neurologic conditions, and it's just a coincidence the person also has a patent foramen ovale. However, in some cases, small blood clots in the heart may move through a patent foramen ovale, travel to the brain and cause a stroke.
The possible link between a patent foramen ovale and a stroke or migraine is controversial, and research studies are ongoing.
In rare cases a patent foramen ovale can cause a significant amount of blood to bypass the lungs, resulting in low blood oxygen levels (hypoxemia).
In decompression illness, which can occur in scuba diving, an air bubble can travel through a patent foramen ovale.
In some cases, other heart defects may be present in addition to a patent foramen ovale. |
Dark matter is an enigmatic form of matter that makes up most of the mass in the Universe, and whose nature has not yet been identified. Researchers have succeeded in estimating the percentage of dark matter in the Universe and describing the processes related to the very existence of this matter. But, until now, no one had established the distribution and behavior of the dark matter in a galaxy.
Now, astronomers in the Theoretical Physics and Cosmos Department of the University of Granada, led by Eduardo Battaner, in collaboration with researchers in the Applied Mathematics Department, have made great progress: establishing the distribution and behaviour of the dark matter in a galaxy.
New mathematical calculations describe the density profiles that define how dark matter varies within a galaxy, something that had not previously been specified in astronomy. Until now, the behaviour of dark matter had been estimated through simulations, but the new approach, based on equations and functions that describe each characteristic of the dark matter, makes the result much more reliable.
Specifically, this new result allows a better understanding of the actual size of a galaxy. The collaboration of astronomers and mathematicians has allowed the development of a density function for dark matter in a galaxy, describing how the dark matter is arranged from the galactic centre to its outermost part. When a galaxy is observed through its dark matter, it appears much larger than when only its visible radiation is considered. At the same time, the researchers conclude that the density of dark matter in a galaxy is highest at the centre and gradually decreases towards the outermost parts, while considerably increasing the total size of the galaxy. This finding introduces new criteria into the study of galactic dynamics and, of course, of dark matter.
Dark matter is a main component of the Universe, which has not yet been directly observed. In fact, it is the component that makes up the greatest part of the Universe's mass. The concept was used for the first time by Fritz Zwicky in 1933, who deduced the existence of a considerable quantity of mass that could not be observed but had to exist to explain the observed motions of galaxies. Currently, the quantity of dark matter in the Universe is well known: 23% dark matter vs. just over 4% visible matter. The rest, up to 100%, is the enigmatic dark energy. Although we know the quantity of dark matter and its behaviour well, its nature has not yet been identified. This is one of the most important challenges in cosmology.
"With these results, we cannot establish what dark matter is, but we have defined its behaviour and we have information that helps to know other characteristics like its temperature," Eduardo Battaner said with regard to the results of his research.
Starting from the astronomers' extensive knowledge of galactic dynamics, and applying mathematical modelling to it, the team developed complex descriptive functions that represent the dynamics of the dark matter. Professor Juan Soler, of the University of Granada, coordinated the part of the research related to the mathematical calculus.
|
WATER GR. 2-4
Explore our Most Important Resource. Fun experiments investigate water's properties and teach about the water cycle, erosion, and water flow. Students will make a water clock and a rain gauge. The water theme flows through language arts, math, research, and art activities. Contents include: 32 language arts activities, 17 math activities, 4 research skills activities, 24 science activities, and 8 art activities. Answer key provided. This book supports many of the fundamental concepts and learning outcomes from the curriculums for these provinces: British Columbia, Grade 2, Science, Earth & Space, Air, Water & Soil; Ontario, Grade 2, Science, Understanding Earth & Space Systems, Air & Water in the Environment; Manitoba, Grade 2, Science, Physical Science, Air & Water in the Environment. 102 pages.
|
There are numerous other deep water masses, especially at intermediate depths, for example, North Pacific Intermediate water. As deep-water masses travel through the ocean they gradually mix with surrounding water masses. For example, NADW mixes with AABW and AIW.
Downwelling supplies oxygen to the deep ocean and therefore ventilates this body of water. It does not bring nutrients. Deep water currents generally move very slowly with a velocity of several cm per second. Typically, surface currents move 10-100 times faster than this. At these rates, deep water currents take thousands of years to encircle the globe. In fact, the oldest deep water in the ocean (in the North Pacific) is about 1500 years old. As deep waters circle the globe their properties change. They mix with waters around them, and their chemistry changes as they acquire nutrients such as phosphate and CO2 from decaying organic matter and lose oxygen.
The opposite process to downwelling is upwelling. Upwelling occurs where a deep-water mass that is lighter than the waters around it rises to the level at which it is no longer buoyant. This generally happens when surface winds move surface water away from a location, drawing water up from depth to fill the void. Upwelling is frequent in coastal regions, especially in subtropical regions where high pressure produces a dominant offshore wind flow. It is also common at ocean divergences, where winds move surface currents apart by Ekman transport. Upwelling is crucial to the supply of nutrients to surface water masses, fueling high levels of productivity in the surface ocean. The most prolific coastal fisheries in the world, such as those off Peru and California, occur in nutrient-rich waters supplied by upwelling.
As we have seen, the circulation of the deep ocean is driven by density differences that arise as a result of temperature and salinity of the different water masses. This type of circulation is known as thermohaline (temperature=thermo; haline=salt or salinity). Strictly speaking, since surface ocean currents are not driven by thermohaline mechanics but by winds and to a much lesser degree, tides, the circulation of the ocean as a whole is often called the meridional overturning circulation. However, we will continue to use the term thermohaline when addressing deep-water circulation. |
What is Nature-based Early Childhood Education?
Nature-based Early Childhood Education is learning that is rooted in the local natural world and uses nature as the organizing principle to address both early child development and the development of an ecological identity. Nature is both the classroom and a teacher, and learning happens almost exclusively outdoors, where there are endless opportunities to grow and learn. In nature, children explore, jump, balance, solve problems, learn empathy, cooperate with others, and play, all while engaging in early literacy, science, math, and social emotional learning. While predictable rituals and routines guide children into learning, nature-based education allows children’s interests and curiosity to direct the day’s activities and inform the curriculum. There is a growing body of research that shows that this type of frequent play in nature-based early education programs stimulates all of a child’s developmental domains, including their cognitive, creative, physical, social and emotional, and spiritual development. Children discover the wonders of the landscape surrounding them, while also discovering their inner landscape through meaningful relationships with mentors, classmates, and nature.
Thorne Nature Preschool Educational Philosophy
Thorne Nature Preschool provides an early childhood educational experience that initiates young children into a deep relationship with the natural world and plants the seed for life-long environmental stewardship. Through daily immersion in nature with caring mentors and a focus on supporting early childhood development, Thorne Nature Preschool fosters the growth of the whole child (cognitive, physical, social, emotional, creative), while cultivating a profound connection to nature. Utilizing integrated academic and social curriculum grounded in nature, Thorne Nature Preschool nurtures the well-being of each child while preparing them to succeed in school and in life. Thorne believes that every child is a unique and competent individual, who is eager to explore and learn, and comes equipped with a natural and wonder-filled curiosity about life. By tapping into children’s love of learning, Thorne Nature Preschool strives to develop prepared, capable, confident, empathetic individuals who are masters of their own learning, and are inspired to make a difference in the world.
Nature promotes the health and well-being of the whole-child.
Nature is the ideal venue for academic growth, imaginative play, social and emotional learning, problem solving, and promoting active play.
Frequent, immersive experiences in nature with a mentor cultivates an environmental ethic.
Place-based environmental education connects children to their local community and is the starting point for responsible citizenship.
Today’s youth are tomorrow’s environmental stewards and leaders. |
Every day, people demonstrate their commitment to making a difference. They invest money, passion and hard work in helping the disadvantaged improve their lives, in protecting the natural environment or in supporting their neighborhood’s cultural life. In short, these people aim to advance society and, thus, to achieve the greatest possible social impact every day.
But what exactly do we mean, when we talk about social impact?
In the context of work for social betterment, one refers to social impact when a measure produces results in the form of changes…
- within the target group,
- in that group’s living environment, and/or
- in society at large.
There are numerous levels at which social impact can be achieved. These levels of effect are illustrated using the results staircase:
Results at the societal level will here be referred to as “impact” (as shorthand for social impact). In the illustration, this is represented at level 7, at the top.
Results within the target group are outcomes. Outcomes can be further subdivided, as seen at levels 4 - 6 in the illustration.
- The third outcome level is reached when the target group's circumstances change, and members are able to improve their living conditions (level 6).
- The second outcome level is reached when we observe changes in behavior within the target group (level 5).
- The first outcome level is reached when we observe changes in attitudes and/or skill levels within the target group (level 4).
Impact and outcomes result from the products produced or services provided by a project.
These products or services are referred to as outputs.
Typical examples of outputs include classes held, football practices offered, choir rehearsals conducted, learning materials or films provided. You may interpret this website as an output indicator. In short, outputs can include services, activities or products.
In the results staircase, outputs are represented in levels 1 - 3.
At YEA, a project is designed to help young people find an apprenticeship in a vocational-training program.
The project’s outputs (products and services) include tutoring and job-application training sessions.
However, the mere implementation of tutoring sessions, or even a high number of participants, does not say anything about the effect on the target groups, that is, the social impact achieved. This is because participating in these sessions does not automatically imply that the youth will be able to find jobs afterwards. Yet the outputs are, of course, a prerequisite for achieving this goal.
If, as a result of the training, the young people acquire useful job-hunting knowledge and skills, gain self-confidence, and can follow through on their applications independently, these are results on the higher level (outcomes).
If the project does succeed in helping young people take on positions in vocational training, which in turn contributes to an overall decline in unemployment in the region, then a change at the societal level has been made (this is what we call impact).
In summary: A project offers certain services, products and activities. These are outputs.
As a consequence of these outputs, results are produced within the target groups. These are outcomes. These outcomes can be of various kinds.
The outcomes in turn can have results at the level of society as a whole. This would be an impact.
Outcomes always refer to results within the project target group. Impact describes the desired changes at the societal level (social, economic, etc.). Impact always relates to a part of the society as a whole, for instance the population in the district of a city or within a specific region.
More detail on these issues is provided under "Creating a logic model".
Before moving on, please consider the following:
While the terms "impact," "impact orientation" and "impact assessment" have become increasingly popular in recent years – a trend we admit to having contributed to – they involve two basic things:
- Working for impact is a mindset. I engage because I want to support the target group as best I can.
- Working for impact is a form of project management that is adapted to third-sector work. Impact orientation is not more (but also not less!) than that.
So let’s get into the details. We begin by drawing on your intuition in exploring: For whom and what do you (want to) engage? |
Cogeneration, sometimes referred to as CHP (Combined Heat and Power) or energy recycling, is an efficient and cost-effective method of capturing heat lost during the production of electricity and converting it into thermal energy, so that energy that would otherwise be disposed of as waste heat is put to good use. Thomas Edison was probably the first to make use of cogeneration, or energy recycling, in 1882. His Pearl Street Station was the world's first commercial power plant producing both electricity and thermal energy, using waste heat to warm neighboring buildings. Because of energy recycling, Edison's plant was able to achieve 50% efficiency.
Cogeneration Benefits
Cogeneration systems can reach efficiencies of up to 80%, compared with roughly 30% for traditional power plants. These gains in efficiency result in cost savings, as less fuel needs to be consumed to produce the same amount of useful energy. In addition, cogeneration reduces air pollution, reduces greenhouse gas emissions, increases power reliability and reduces grid congestion.
Today, Con Edison operates seven cogeneration plants serving approximately 100,000 buildings in Manhattan, the largest steam district in the U.S. The steam distribution system is the reason for the steaming manholes often seen in New York City. The European Union generates 11% of its electricity using cogeneration, and energy savings in Member States range from 2% to 60%. Europe is home to the three countries with the world's most intensive cogeneration economies: Denmark, the Netherlands and Finland. In response to growing energy needs, the US Department of Energy maintains an aggressive goal for cogeneration, or CHP, to comprise 20% of US generation capacity by the year 2030.
Typical Methods of Cogeneration
Gas turbines are essentially jet engines that drive turbo generators. Multi-stage heat recovery steam generators use the exhaust heat to produce steam and even hot water as the exhaust gradually loses its temperature.
Diesel engines are used in much the same way. The diesel drives a generator for economical electricity production, and the hot exhaust can then produce steam to drive another electrical generator or to provide heat for process operations as either steam or hot water.
In either case, the main goal is to extract effectively every BTU of heat above normal atmospheric temperature from the final effluent gas stream and use it to produce electricity or usable heat such as hot water.
Other Forms of Cogeneration
Landfill gas cogeneration is a great solution because the emissions of a damaging pollutant are avoided and electricity can be generated from a free fuel. Landfill gas contains approximately 50% methane and has a heat content of about half the value of natural gas. Capturing LFG reduces greenhouse gases while contributing to energy independence and economic benefits.
Waste-to-energy cogeneration is an excellent energy model. A waste-to-energy plant has been launched in Lahti, Finland. It converts municipal waste into heat and power through the large-scale use of waste gasification, gas cleaning and high-efficiency combustion. It has a capacity of 250,000 tons of waste per year and can generate 90 MW of heat and 50 MW of electricity. This system will partially replace a coal-fired plant and will substantially reduce landfill disposal in the region.
Cogeneration in Jamaica
Jamaica's only utility company is already using cogeneration on a small scale. The electric company plans to use this energy source especially in the sugar, manufacturing and tourism industries. In addition, the country uses solar-powered streetlights in its 14 parishes. Jamaica has one operating wind turbine contributing to the grid and uses biomass energy, primarily burning bagasse to produce steam for the sugar industry.
In this world of increasing energy requirements, cogeneration, whether by diesel, gas turbines, landfill gas or waste to energy, can only be a good solution, not only in the United States and the European Union, but also in Small Island Developing States such as Jamaica and Haiti. Officials in Haiti ought to take a good look at the potential of waste-to-energy cogeneration to address two of the country's most pressing needs: power generation and excessive municipal waste.
|
FAQ: What are coastal vignettes and how can I create them?
The cartographic representation of where land and water meet can be drawn using a number of different methods, some of which are called coastal vignettes. Coastal vignettes symbolize the water from the shoreline towards open water.
A vignette is usually thought of as a drawing (i.e., symbolized graphic mark) that gradually fades into the surrounding background leaving an undefined edge (Loggia.com 2003).
On historic maps, a set of contours parallel to the shore highlights the water areas along coastlines. On more recent maps, gradation of color is often used, ranging from white along the shore to the blue used for the open water areas (USGS 2002).
Because lighter values are associated with “less” of something, this approach leads the map reader to the impression that coastal areas are shallower than open water areas – an impression that cartographers often want to propagate because of its general truth (Robinson, et al. 1995; Tufte 1991; Tufte 1997). Additionally, the white areas near shore may be associated with the white water of breaking waves along beaches.
The white paper linked in the Related Information below demonstrates how to create coastal vignettes to symbolize the water using two different methods for creating a gradation in color – Buffers and Euclidean Distance. Each method shows how to use tools available in both ArcGIS 8.x and in the geoprocessing framework of ArcGIS 9.0 (ESRI 2004).
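As a rough conceptual sketch of the distance-based gradation (this uses open-source Python tools rather than the ArcGIS Buffer or Euclidean Distance tools referenced above, and the land mask, cutoff distance, and colours are invented for the example):

import numpy as np
from scipy.ndimage import distance_transform_edt

# Invented example raster: 1 = land, 0 = water
land_mask = np.zeros((200, 200), dtype=np.uint8)
land_mask[:, :60] = 1  # a strip of "land" along the left edge

# Distance (in cells) from every water cell to the nearest land cell
dist_to_shore = distance_transform_edt(land_mask == 0)

# Map distance to a 0-1 ramp: white at the shoreline, full open-water blue
# at or beyond an assumed cutoff distance, mimicking a coastal vignette.
cutoff = 40.0
ramp = np.clip(dist_to_shore / cutoff, 0.0, 1.0)

white = np.array([255, 255, 255], dtype=float)   # shoreline colour
blue = np.array([70, 130, 180], dtype=float)     # open-water colour
vignette = ((1.0 - ramp)[..., None] * white + ramp[..., None] * blue).astype(np.uint8)
# vignette is an RGB array that fades from white at the coast to blue offshore;
# in a real map the land cells would then be masked out using land_mask.
|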
All computer displays show images in bitmap mode, which means that every image is really a grid of tiny squares (pixels). As a result, computers can't display truly smooth curves.
These two letters are printed with the same font face, size, and style. The only difference between them is that the top letter is aliased and the bottom is not.
As you can see, the top letter has a jagged, "stair-step" effect that is the hallmark of aliasing. It is the way that computers display curves on the screen. The bottom letter, on the other hand, has a smoother, fuzzier look to it. It is anti-aliased to simulate the look of a smooth curve on the screen.
How does anti-aliasing work?
Anti-aliasing works with the way that our eyes see things. Human eyes do not see in as precise detail as we would like to think. In reality, the mind converts the images into what it "thinks" they are supposed to look like.
With anti-aliasing, the curve is created with squares of color that are shaded darker or lighter depending on how much of the curve would take up that square. For instance, if a portion of a curve takes up 10% of a pixel, that pixel would be shaded with 10% of the color saturation of the curve.
What this amounts to is that anti-aliasing adds shading along the curve to "fool the eye" into thinking it's seeing a smooth curve rather than a jagged bitmap.
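To make that concrete, here is a minimal sketch (Python, with an invented scene: a filled circle rasterised onto a small grid) that estimates how much of each pixel the shape covers and shades the pixel by that fraction:

import numpy as np

def coverage(px, py, inside, samples=4):
    # Estimate the fraction of pixel (px, py) covered by the shape by
    # testing a samples x samples grid of points inside the pixel.
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if inside(x, y):
                hits += 1
    return hits / (samples * samples)

# Invented example shape: a circle of radius 8 centred at (10, 10)
inside_circle = lambda x, y: (x - 10) ** 2 + (y - 10) ** 2 <= 8 ** 2

size = 20
image = np.zeros((size, size))  # 0.0 = background, 1.0 = full foreground colour
for py in range(size):
    for px in range(size):
        # A pixel that is 10% covered ends up with 10% of the foreground intensity
        image[py, px] = coverage(px, py, inside_circle)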
Anti-aliasing Pros and Cons
Pros:
- Makes fonts look smoother
- Rounded edges look round
- Type is easier to read (for some) because it looks more like printed type
- It's just plain prettier (some would argue)
Cons:
- Small fonts become too fuzzy to read
- Sharp edges may be fuzzy and not precise
- You can't print anti-aliased text, as it comes out blurred
- Images are generally larger
- For some readers, aliased (non-smoothed) type is easier to read because there is no blurring and the letterforms stay crisp
Understanding Antialiasing and Transparency
Images, though, have two fundamental limitations for supporting graphic elements. First, rather than being vector-based (as are text and graphics created in programs such as Illustrator), images are a collection of pixels. Second, images are always rectangular.
In order to make your graphics look as smooth and accurate as possible, and in order to seamlessly integrate them into your design, you will need to understand antialiasing and how it relates to transparency. This tutorial will explain the basics of antialiasing, and how to use it successfully in tandem with transparency.
In our case, the smooth and continuous feature we are interested in is vector data, such as text or an illustration. The sampling that occurs is due to rasterization: the process of converting vector data into pixel data. The limitation of this representation is that while vector data can represent limitless shapes and has infinite resolution, pixels are square and are relatively large.
This limitation isn't visible when dealing with rectangular objects, as in the images below:
Rectangular features, even when magnified (right), suffer from no visual artifacts.
Diagonal lines are rendered less accurately; a magnified view demonstrates jaggies.
The example below demonstrates the most effective technique of antialiasing graphics: taking advantage of the many levels of color that our monitors can represent.
Here is a simple image that is still complex enough to show jaggies when rendered. This is even more noticeable in the detail image.
A large at sign (@) rendered without antialiasing.
Antialiasing smoothes out the jaggies.
In this case the border pixels are shades of gray because the foreground is black and the background is white. If the foreground were red, however, the border pixels would be shades of pink.
With a red foreground and a white background, intermediate pixels are pink.
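A quick way to see where that pink comes from: each border pixel is a weighted average of the foreground and background colours, weighted by how much of the pixel the shape covers. A small numerical example (the 50% coverage value is assumed for illustration):

foreground = (255, 0, 0)      # red
background = (255, 255, 255)  # white
coverage = 0.5                # assumed: the shape covers half of this border pixel

blended = tuple(round(coverage * f + (1 - coverage) * b)
                for f, b in zip(foreground, background))
# blended == (255, 128, 128), a pink halfway between red and white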
To enable smoothed fonts in Wine, you will need to run regedit and change the relevant font-smoothing settings in the Wine registry.
You may also want to install the free Windows core fonts and even the Tahoma font. Most Linux/Unix operating systems also come with nice fonts, such as the Liberation font set.
OK, if this sounds like Chinese to you, don't worry: there is a handy little script that will do everything for you, as Wine has supported font smoothing, including subpixel rendering, since Wine 1.1.12.
Here is a screenshot of the script running.
The English version of the Wine font smoothing script can be downloaded from here.
And a Russian version can be downloaded from here.
To run the script: |
Evolutionary computation represents some of the most innovative design theory in modern computer algorithms. A cutting-edge field of computer science, it is integral to the advancement of today’s globally connected computer systems. Evolutionary computing is particularly important for the ongoing development of artificial intelligence software, particularly that which has to do with AI perception: how computers sense, and subsequently react, to their surrounding environment. It is widely employed in facial recognition software, the latest in fingerprint scanning, DNA analysis and genome research, military targeting systems, and GPS.
What Is an Algorithm?
At its simplest level, an algorithm is simply a set of instructions for solving a problem. Every item of furniture ever purchased through Ikea, for example, comes with an algorithm—a set of written instructions—detailing its assembly. A shopping list is part of an algorithm for restocking one’s kitchen; to be a complete algorithm, it would have to include steps like “locate car keys” and “visit supermarket.” A recipe is an algorithm for a homemade meal.
With computers, nothing can be taken for granted. Using the above example as a for-instance, a computer would need to be told to look at the car keys, to identify them, and to avoid bumping into any walls while heading for the front door. It would need to be told to open the door, to shut the door behind itself, and to make sure that the door was locked. Coupled with the complexity of the problems that computers are typically tasked to handle, it’s easy to see how computational algorithms can be extremely elaborate. The process of designing new algorithms is continuous, resulting in the gradual simplification of once-convoluted processes.
What Makes Evolutionary Computation Special?
Evolutionary algorithms are based on the concepts of natural selection—in other words, living biological evolution. The origins of evolutionary computing date back to the mid-20th century, when computer scientists first began experimenting with computer processes that mimicked certain highly-evolved living behaviors. For example, there were algorithms that arrived at a solution for their problems based on a process that was similar to how a colony of ants will work together to build their underground colony structures. Another style of evolutionary programming mimics how bees communicate with each other, in order to share knowledge of the location of pollen.
More recently, evolutionary computing techniques have taken on added dimension, with the additional element of automatic refinement over time. Modern evolutionary programming uses advanced AI processes, not only to locate solutions to problems quickly and efficiently, but also to improve their own efficiency over time. Much as evolution represents natural biology “learning” to cope with a range of unforeseen challenges, evolutionary algorithms adapt to overcome situations which their original authors could not have anticipated.
What is Global Optimization?
Global optimization is one of the major applications of evolutionary algorithm design. It involves determining the maximum or minimum practical value for a given variable involved in a particular problem, such as a software package’s ability to detect tiny variations in facial features. This is done regardless of any other variables, meaning that the problem is rendered much more complicated than it would otherwise be, but also that it potentially needs to be solved only once. Using the aforementioned example, global optimization for a software package involving the detection of variations in facial features would work across a diverse range of human facial variations—not simply those inherent to a specific gender, race, or geographic region.
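As an illustrative sketch only (this is a generic genetic algorithm in Python, not any particular production system, and the fitness function, bounds, and parameters are invented for the example), an evolutionary search for the global minimum of a simple one-variable function might look like this:

import random

def fitness(x):
    # Invented test problem: minimise f(x) = x^4 - 3x^3 + 2 on a fixed interval
    return x ** 4 - 3 * x ** 3 + 2

def evolve(pop_size=50, generations=100, bounds=(-2.0, 4.0), mutation_scale=0.1):
    population = [random.uniform(*bounds) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better-scoring half (lower fitness is better here)
        population.sort(key=fitness)
        survivors = population[: pop_size // 2]
        # Reproduction: crossover (average two parents) plus a small random mutation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a + b) / 2 + random.gauss(0, mutation_scale)
            children.append(min(max(child, bounds[0]), bounds[1]))
        population = survivors + children
    return min(population, key=fitness)

best = evolve()
print("best x found:", best, "with fitness", fitness(best))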
Evolutionary computation is critical to the continuing advancement of modern algorithms. It solves problems on a wide scale, relating to some of the most cutting-edge aspects of modern technological innovation. In the future, as data storage capacity and processing speed both continue to improve, more involved algorithms are likely to be the continuing source of most of our computational advancements. |
October 1897: The Discovery of the Electron
Science lecturers traveling from town to town in the mid-19th century delighted audiences with a device that could be considered the ancestor of the neon sign. They took a glass tube with wires embedded in opposite ends, administered a high voltage and pumped out most of the air. The result: the interior of the tube would glow in lovely fluorescent patterns. Scientists theorized that the glow was produced by some kind of ray emitted by the cathode, but it took the seminal research of a British professor in Cambridge University's Cavendish Laboratory to finally provide a solution to the puzzle.
J.J. Thomson refined previous experiments and designed new ones in his quest to uncover the true nature of these mysterious cathode rays. Three of his experiments proved especially conclusive. First, in a variation of a pivotal 1895 experiment by Jean Perrin, he built a pair of cathode ray tubes ending in a pair of metal cylinders with a slit in them, which were in turn connected to an electrometer. The purpose was to determine if, by bending the rays with a magnet, Thomson could separate the charge from the rays. Failing this, he concluded that the negative charge and the cathode rays were somehow stuck together.
All previous attempts to bend cathode rays with an electric field had failed, so Thomson devised a new approach in a second pivotal experiment. A charged particle will curve as it moves through an electric field, but not if it is surrounded by a conducting material. Thomson theorized that the traces of gas remaining in the tube were being turned into an electrical conductor by the cathode rays themselves, and managed to extract nearly all of the gas from the tube to test his hypothesis. Under these circumstances, the cathode rays did bend with the application of an electric field. From these two experiments, Thomson concluded, "I can see no escape from the conclusion that (cathode rays) are charges of negative electricity carried by particles of matter."
However, he still lacked experimental data on what these particles actually were, and hence undertook a third experiment to determine their basic properties. Although he couldn't measure directly the mass or electric charge of such a particle, he could measure how much the rays were bent by a magnetic field, and how much energy they carried, which would enable him to calculate the ratio of the mass of a particle to its electrical charge (m/e). He collected data using a variety of tubes filled with different gases. Just as Emil Wiechert had reported earlier in the year, the mass-to-charge ratio for cathode rays turned out to be over one thousand times smaller than that of a charged hydrogen atom. Subsequent experiments by Philipp Lenard and others over the next two years confirmed the conclusion that the cathode rays were particles with a mass far smaller than that of any atom.
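The arithmetic behind that ratio is often summarised today with the crossed-field relations (a standard textbook reconstruction rather than Thomson's exact working): balancing the electric and magnetic deflections gives the particle speed, and the magnetic deflection alone then yields the charge-to-mass ratio,
\[ v = \frac{E}{B}, \qquad r = \frac{mv}{eB} \;\;\Rightarrow\;\; \frac{e}{m} = \frac{E}{B^{2}r}, \]
where E and B are the applied electric and magnetic field strengths and r is the radius of curvature of the deflected beam.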
Thomson boiled down the findings of his 1897 experiments into three primary hypotheses: (1) Cathode rays are charged particles, which he called "corpuscles." (The term "electron" was coined in 1891 by G. Johnstone Stoney to denote the unit of charge found in experiments that passed electrical current through chemicals; it was Irish physicist George Francis Fitzgerald who suggested in 1897 that the term be applied to Thomson's corpuscles.) (2) These corpuscles are constituents of the atom. (3) These corpuscles are the only constituents of the atom.
Thomson's speculations met with considerable skepticism from his colleagues. In fact, a distinguished physicist who attended his lecture at the Royal Institution admitted years later that he believed Thomson had been "pulling their legs." Gradually scientists accepted the first two hypotheses, while later experiments proved the third to be incorrect, thanks to the efforts of Ernest Rutherford and subsequent researchers. The electron itself turned out to be somewhat different from what Thomson imagined, acting like a particle under some conditions and like a wave under others, a phenomenon that would not be explained until the birth of quantum theory. Physicists also discovered that electrons were only the most common members of an entire family of fundamental particles, which are still the subject of intensive research to better understand their properties.
Thomson's work earned him recognition as the "father of the electron," and spawned critical experimental and theoretical research by many other scientists in the United Kingdom, Germany, France and elsewhere, opening a new perspective of the view from inside the atom. The knowledge gained about the electron and its properties has made many key modern technologies possible, including most of our society's computation, communications, and entertainment.
-Adapted from an online exhibit by the History Center of the American Institute of Physics developed in 1997 to commemorate the 100-year anniversary of the discovery of the electron. To view the full exhibit, see http://www.aip.org/history/electron/.
|
Deforestation is influencing the evolution of birds in the Ecuadorian Andes, finds a study of a small insectivorous bird species in Buenaventura Reserve, a 5,583 acre (2,259 hectare) reserve owned by WLT partner Fundación Jocotoco.
A study of the Ecuadorian Tapaculo, an Endangered passerine with only a handful of small, isolated and rapidly declining populations, found that wing shape differed according to the fragmentation of their habitat.
Narrow or round wings
Buenaventura Reserve, where the study took place, is made up of recovering forest ecosystems interspersed with areas of abandoned pasture. By studying the birds within forest fragments of different sizes, the researchers found that individuals living in small patches had narrow wings, and birds from larger patches displayed the short, round wings which are typical of Ecuadorian Tapaculos.
The researchers theorised that this morphological adaptation has occurred because narrow wings enhance mobility and flight capacity, enabling the birds to cross habitat gaps (deforested areas) before establishing their territories. It is possible that these changes only began after the onset of intense forest fragmentation at the beginning of the 20th century (it is estimated that over 90 per cent of the original forest cover in southwestern Ecuador has been logged since then).
Fighting local extinction
As forest loss and fragmentation are the main drivers of species extinctions in the Neotropics, being able to rapidly adapt in response to changes in habitat could determine a species’ survival in the wild.
The Ecuadorian Tapaculo is not the only species under strong selection to adapt to deforestation. Other understory birds which may not be able to disperse through fragmented areas are antbirds, antpittas and hummingbirds.
“The loss of forests in the range of the Tapaculos is so drastic that, in simple terms, the birds have two options: Adapt to their changing environment, or become extinct,” Claudia Hermes, one of the authors of the research paper, told WLT. “Luckily, the Tapaculos seem to be able to rapidly adapt to the fragmentation of their habitat, which can reduce the risk of extinction. Moreover, this gives hope that other understory birds might have similar abilities. However, in the long run, the protection and restoration of the cloud forests still remains the only possibility to preserve the unique biodiversity of the Ecuadorian Andes.”
Buenaventura Reserve is the only protected site within the limited range of the Ecuadorian Tapaculo, and the surrounding area has suffered intense deforestation, with most forest patches smaller than 250 acres (100 hectares). Fundación Jocotoco is working to create a corridor which will protect more of the Tapaculo’s range, as well as another endemic and endangered species in the area, the El Oro Parakeet.
The Ecuadorian Tapaculo, or El Oro Tapaculo, is just one of several species which can only be found on the western slopes of the Andes in southwestern Ecuador, and it is important that as much of this habitat is protected from deforestation as possible to maintain these species in the wild. |
This type of learning is at the core of a 21st century classroom, prompting students to build off of each other’s ideas, create collaboratively and offer constructive feedback. In the process, kindergartners can form the basis for how they will approach issues and conflicts in their lives. And our educators have embraced the approach.
Through a Critical Friends exercise embedded in professional development, they show their peers what deeper learning, effective collaboration and rigorous problem-solving look like—and then all educators model the successful practices to their students.
Teachers make effective use of professional learning communities to build projects and share critical feedback. In 21st century, project-based instruction, it is critical to use not only concrete measures of student achievement but also qualitative assessments of school climate.
Schools can also conduct a YouthTruth Survey, which measures students’ perceptions of what they are learning, how they are challenged and their relationships with peers. These insights allow educators to adapt their instruction to support an innovative learning culture.
Broad investment in project-based learning and assessment produces a 21st century learning community that emphasizes application and collaboration. In this type of classroom, students as early as kindergarten use the four Cs to tackle complex problems and prepare for success in college and career.
As project-based learning gains traction—even in kindergarten—college and career readiness for our nation’s 5-year-olds may not seem like such a far-off proposition. |
There are various types of hearing loss, classified on the basis of which part of the hearing pathway is affected. These different types of hearing loss are described below:
a. Conductive hearing loss:
Conductive hearing loss occurs as a result of any pathologic condition affecting the outer or middle ear, including the ear canal, tympanic membrane (ear drum), middle ear cavity, middle ear ossicles (bones) or Eustachian tube. Conductive hearing loss interferes with the transmission of sound through the outer and middle ear into the inner ear. Conductive hearing losses primarily reduce the loudness of an incoming signal.
b. Sensorineural hearing loss:
Sensorineural hearing loss refers to a hearing loss related to a problem affecting the inner ear, which includes the cochlea and the auditory nerve. Damage to the cochlea, as a result of damage to the hair cells or inner ear fluids, can disrupt the inner ear’s ability to convert sound vibrations into chemical signals transmitted to the auditory nerve. Dysfunction of the auditory nerve prevents effective transmission of electrical signals to higher-level auditory pathways. Sensorineural hearing losses usually result in both a reduction in speech detection and an inability to understand speech, particularly when it is presented in background noise or when visual cues are absent or reduced.
c. Mixed hearing loss:
Individuals presenting with mixed hearing losses have both a sensorineural and conductive component to the hearing loss in the affected ear.
d. Central hearing loss:
Central hearing loss refers to a condition in which the peripheral parts of the ear (including the outer, middle and inner ear) are working properly, yet the auditory cortex is not able to interpret sounds properly.
e. Auditory Neuropathy/Auditory Dys-synchrony:
Auditory neuropathy refers to a hearing disorder where sound enters the ear normally, but the transmission of signals from the inner ear to the brain is impaired. Individuals with auditory neuropathy present with speech perception that is far worse than what would be predicted by their hearing thresholds. |
Physiological Processes within the Egg
For the embryo to develop without any anatomical connection to the hen's body, nature has provided membranes outside the embryo's body to enable the embryo to use all parts of the egg for growth and development. These "extra-embryonic" membranes are the:
- yolk sac
- amnion
- chorion
- allantois
The yolk sac is a layer of tissue growing over the surface of the yolk. Its walls are lined with a special tissue that digests and absorbs the yolk material to provide sustenance for the embryo. Yolk material does not pass through the yolk stalk to the embryo even though a narrow opening in the stalk is still in evidence at the end of the incubation period. As embryonic development continues, the yolk sac is engulfed within the embryo and is completely reabsorbed at hatching. At this time, enough nutritive material remains to adequately maintain the chick for up to two days.
The amnion is a transparent sac filled with a colorless fluid that serves as a protective cushion during embryonic development. This amniotic fluid also permits the developing embryo to exercise. The embryo is free to change its shape and position while the amniotic fluid equalizes the external pressure. Specialized muscles also develop in the amnion, which by smooth, rhythmic contractions gently agitate the amniotic fluid. The slow and gentle rocking movement apparently aids in keeping the growing parts free from one another, thereby preventing adhesions and resultant malformations.
The chorion serves as a container for both the amnion and yolk sac. Initially, the chorion has no apparent function, but later the allantois fuses with it to form the chorio-allantoic membrane. This brings the capillaries of the allantois into direct contact with the shell membrane, allowing calcium absorption from the shell.
The allantois has four functions:
- It serves as an embryonic respiratory organ.
- It receives the excretions of the embryonic kidneys.
- It absorbs albumen, which serves as nutriment (protein) for the embryo.
- It absorbs calcium from the shell for the structural needs of the embryo.
The allantois differs from the amnion and chorion in that it arises within the body of the embryo. In fact, its proximal portion remains intra-embryonic throughout the development.
Functions of the Embryonic Membranes
Special temporary organs or embryonic membranes are formed within the egg, both to protect the embryo and to provide for its nutrition, respiration and excretion. These organs include the yolk sac, amnion and allantois.
The yolk sac supplies food material to the embryo. The amnion, by enclosing the embryo, provides protection. The allantois serves as a respiratory organ and as a reservoir for the excreta. These temporary organs function within the egg until the time of hatching and form no part of the fully developed chick.
Functions of the Embryonic Blood Vessels
During the incubation period of the chick, there are two sets of embryonic blood vessels. One set, the viteline vessels, is concerned with carrying the yolk materials to the growing embryo. The other set, the allantoic vessels, is chiefly concerned with respiration and with carrying waste products from the embryo to the allantois. When the chick is hatched, these embryonic blood vessels cease to function.
Notice the various embryonic membranes in this picture of a nine day embryo.
- The clear fluid surrounding the chick is the amnion.
- The yellow area covered with a blood system is the yolk sac.
- The dense blood system in the piece of egg shell is the allantois.
- The milky, clear material to the right of the shell is the remaining white, or albumen. |
I am so interested in Universal Design for Learning, or UDL. In an article by McGuire, Scott & Shaw (2006), they discuss Universal Design and its origins in architecture with architect Ronald Mace. The idea behind this design is to make environments accessible to all people regardless of age or ability. McGuire, Scott & Shaw believe that in the era of the reauthorization of the Individuals with Disabilities Education Act (IDEA) and No Child Left Behind (NCLB), there needs to be more thought and planning in how we design our educational practices with this idea of accessibility for all. Presently (in New York State) we are facing the consequences of our adoption of the Common Core Learning Standards (CCLS) and the Annual Professional Performance Review (APPR) in our Race To The Top (RTTT) efforts to reform and fund education with the federal government. What we need to understand is how this is impacting the education of ALL students. McGuire, Scott & Shaw look at what elements of Universal Design can inform a new paradigm in teaching and learning to benefit students of all abilities. Nine elements of curriculum accessibility are identified (McGuire, Scott & Shaw, 2006): equitable use, flexibility in use, simple and intuitive use, perceptible information, tolerance for errors, low physical effort, size and space for approach and use, a community of learners and instructional climate. Curriculum accessibility is crucial for the success of all students on assessments that rate the effectiveness of teachers and guide instructional supports (Academic Intervention Services) and educational programs (charter schools) for all students. Universal Design for Learning is still developing into a well-researched theory or framework. This leads to the question of what theoretical/conceptual framework can inform an understanding and anchoring of UDL?
Digging deeper into UDL, the work of Culturally Responsive Teaching (CRT) is a conceptual theory that shares many commonalities of teaching and learning with UDL. These commonalities are: high expectations for all students, equitable use, flexible use and a learner centered approach to instruction and learning. CRT has its origins in the work of Geneva Gay. Culturally Responsive Teaching is essentially knowing your students and teaching to them, not at them. It is bringing the diversity of the classroom to help all students achieve their potential through the acknowledgement and integration of experiences, abilities and the whole child for the betterment of the entire classroom. While Culturally Responsive Teaching looks at ethnic backgrounds, it also applies to understanding a student’s ability or disability.
This is the connection to the understanding of Universal Design of Instruction. It is meant to benefit the needs of all students by addressing the nine elements mentioned above to meet the diversity of our population of students and increase our tolerance and acceptance of this diversity.
So now to the big question: How does our adoption of Common Core State Standards and Annual Professional Performance Review impact our ability to provide instructional practices that embrace the pedagogy of Culturally Responsive Teaching? Does it impact our ability at all? |
Baboons can learn to spot printed words
[Image caption: A baboon from the study by Dr. Grainger and colleagues. Image courtesy of J. Fagot]
Baboons can't read, but they can learn to tell the difference between real printed words (like KITE) and nonsense words (like ZEVS), scientists say.
These findings are surprising because researchers have long thought that recognizing words in this way is something that you need language skills for. Or, in other words, something that only humans can do.
Jonathan Grainger of CNRS and Aix-Marseille University in Marseille, France and colleagues studied a group of baboons living in a fenced-in area that included several booths holding computers with touch-sensitive screens. The animals could freely enter the booths and participate in the experiments, stopping and starting whenever they wanted.
The baboons would see a four-letter sequence appear on the screen and then tap one of two shapes on the screen, depending on whether the sequence was a real word or a nonsense word. They received a food treat after a correct response.
Over a period of a month and a half, the baboons learned to discriminate dozens of words from more than 7,000 non-words. This ability to identify specific combinations of letters is called "orthographic processing," and it's a key component of reading. Thus, one of the building blocks of reading ability, which is among the most complex of human skills, may be more common in the primate brain than previously thought.
This research appears in the 13 April 2012 issue of the journal Science. |
Feed can be described as the materials which give nourishment to animals. The components of a feed which are capable of being utilized by the animal in life support functions are called nutrients. Nutrients may also be defined as a specific element or compound derived from ingested food and used to support the physiological processes of life. Nutrients are required for normal body functions such as digestion, respiration, blood circulation, locomotion, reproduction, etc.
The major nutrients found in dairy animal feed are water, carbohydrates, fats, proteins, minerals or ash and vitamins.
Water is the most abundant, the cheapest, but the most important nutrient. Its importance can be estimated from the fact that life cannot exist without it, and an adult animal’s body contains 70-80% water. Moreover, animal products such as milk contain a large amount of water (up to 83 to 87%).
Functions of Water:
- It is an essential part of all body fluids and tissues
- It helps in the maintenance of body temperature and pH
- It helps in digestion and absorption of nutrients
- It helps in respiration
- It helps in the transportation of nutrients to different parts of the body
- It acts as a solvent for many constituents of body nutrients
- It protects the various vital organs against outer shocks and injuries
- It acts as a cushion for tissue cells and the nervous system
- It gives shape to the body
- It maintains proper fluid and ion balance in the body
- All the biochemical and physiological reactions take place in water
Sources of Water in Animals:
There are three sources of water:
- Drinking water, which is the major portion of water consumed by an animal
- Feed water, which reaches the animal body along with feed; for example, green fodder contains 75-95% moisture
- Metabolic water, which results from the metabolic activities of various nutrients inside the animal body; for example, one gram of carbohydrate, one gram of fat and one gram of protein yield about 0.60 ml, 1.07 ml and 0.42 ml of metabolic water respectively
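As a rough illustration of the arithmetic (a minimal sketch in Python; the intake amounts and the helper name below are hypothetical, not figures from this text), total metabolic water can be estimated by multiplying each nutrient by its per-gram yield and summing:

```python
# Estimate metabolic water (ml) produced from oxidizing feed nutrients,
# using the per-gram yields quoted above. Intake values are hypothetical.

YIELDS_ML_PER_G = {
    "carbohydrate": 0.60,
    "fat": 1.07,
    "protein": 0.42,
}

def metabolic_water_ml(intake_g):
    """Return total metabolic water (ml) for nutrient intakes given in grams."""
    return sum(grams * YIELDS_ML_PER_G[nutrient] for nutrient, grams in intake_g.items())

# Hypothetical daily intake, for illustration only.
example_intake = {"carbohydrate": 5000, "fat": 500, "protein": 1500}
print(f"Estimated metabolic water: {metabolic_water_ml(example_intake):.0f} ml")
# 5000*0.60 + 500*1.07 + 1500*0.42 = 3000 + 535 + 630 = 4165 ml
```

In practice this is only one of the three water sources listed above; drinking water and feed water would normally dominate the total.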
Carbohydrates are compounds of carbon, hydrogen and oxygen in which the ratio of hydrogen to oxygen is almost the same as that in water. Carbohydrates may be defined as polyhydroxy aldehydes or ketones, or anhydrides of such derivatives. They are synthesized in plants through photosynthesis. Plant tissues may contain carbohydrates up to 50% of dry weight in forages and about 80% in cereal grains.
Classification of Carbohydrates:
Carbohydrates are divided into three main groups:
– Monosaccharides are simple sugars which cannot be hydrolyzed further. They are the building blocks of more complex carbohydrates. They may be subdivided into trioses (having three carbons), tetroses (having four carbons), pentoses (having five carbons), hexoses (having six carbons) and heptoses (having seven carbons). Glucose, fructose, mannose and ribose are examples of simple sugars.
– Disaccharides are compound sugars which are composed of two monosaccharides. These monosaccharides are connected through a glycosidic linkage. Sucrose, maltose, lactose and cellobiose are examples of disaccharides.
– Polysaccharides are complex sugars which contain a large number of monosaccharides. They are not sweet in taste, which is why they are also called non-sugars. They are further classified as structural and non-structural carbohydrates.
Structural polysaccharides
- Most of the cell wall in plants is composed of structural polysaccharides in the form of cellulose and hemicelluloses. These polysaccharides provide structural support for plant tissue. Like starch, cellulose and hemicelluloses are made of glucose units, but they are less digestible due to complex linkages among the glucose units. The structural carbohydrate content increases with the maturity of the plant.
Non-structural polysaccharides
- Starch is one of the most important non-structural, non-fibrous polysaccharides found in plants, particularly in grains and tubers. Most plant glucose is stored in the form of starch. Starch contains amylose and amylopectin in variable concentrations. Starch from different sources varies in its digestibility.
Functions of Carbohydrates:
- They are a source of energy for the animal
- They serve as building stones for other nutrients
- They are stored in the animal body in the form of glycogen
- They give a filling effect to the animal
Lipids are organic compounds which are soluble in organic solvents and have important biochemical and physiological functions in the body. Nutritionally important lipids are fats and oils. The building blocks of lipids are fatty acids and glycerol. Depending upon the number of fatty acids present, lipids are classified as monoglycerides, diglycerides and triglycerides. Fats are solid at room temperature while oils are liquid at room temperature. Waxes are esters of fatty acids with alcohols other than glycerol.
Functions of Lipids:
- They supply energy
- They provide heat insulation and protection from minor injury
- They are a source of essential fatty acids
- They carry fat soluble vitamins
- They play a role in the structural components of cells
Proteins are complex organic compounds which are made up of amino acids. Like carbohydrates, proteins are composed of carbon, hydrogen and oxygen, but in addition nitrogen is also present. Some proteins also contain sulphur, iron and phosphorus. Proteins are found in large amounts in muscles, cell membranes, skin, wool/hair, hormones and enzymes. Plants and some bacteria are the original sources of all proteins because they have the ability to synthesize their own proteins.
Amino acids present in protein are associated with each other by peptide linkages. The type of amino acids present in a protein molecule, and their relative proportion and arrangement, are unique for each protein. The nutritional value of a protein depends primarily on its amino acid composition. From a nutritional point of view, amino acids are grouped as essential and non-essential amino acids. Essential amino acids are those which the body cannot synthesize, so they must be supplied in the diet; they are dietary essentials. Non-essential amino acids, on the other hand, are those which the body can synthesize through transamination. Essential amino acids include threonine, valine, histidine, arginine, lysine, leucine, isoleucine, methionine, phenylalanine and tryptophan. Non-essential amino acids include hydroxyproline, proline, alanine, serine, cystine, glycine, glutamic acid, aspartic acid and tyrosine.
Functions of Proteins:
- They have a role in the formation of body structure and tissues
- They have regulatory functions, such as maintaining osmotic pressure, water balance and pH
- They are necessary for the formation of body hormones and enzymes
- They are required for milk production
- They are involved in hereditary transmission
- They play a role in antibody formation to develop immunity in the body
Minerals are essential dietary constituents which are required in relatively small quantities. Animal tissue and feed contain about 45 mineral elements in varying quantities. On the basis of requirement, minerals are classified as macrominerals and microminerals. Macrominerals are those which are required in relatively large amounts, while microminerals are those which are required in small amounts. Microminerals are also called trace elements. Calcium, phosphorus, potassium, sodium, chlorine, magnesium and sulphur are examples of macrominerals, while iron, zinc, manganese, copper, cobalt, iodine, selenium, chromium and molybdenum are examples of microminerals. The animal body contains 3-5% minerals on an empty body weight basis.
Functions of Minerals:
- They give rigidity and strength to the skeletal structure
- They are components of certain biomolecules such as proteins, phospholipids, mucopolysaccharides, hormones and vitamins
- They also act as activators of many enzymes
- As soluble salts, they play an important role in osmosis, acid-base balance, muscle contraction and nerve transmission
- The mineral status of the animal also affects the balance of the symbiotic microflora of the gastrointestinal tract, modulates immunity and helps the animal withstand stress
Vitamins are complex organic compounds that are essential for life and good health. These are classified as fat soluble vitamins and water soluble vitamins. Fat soluble vitamins include A, D, E and K, while water soluble vitamins are thiamin (B1), riboflavin (vitamin B2), niacin (vitamin B3), pantothenic acid (vitamin B5), pyridoxamine (vitamin B6), cobalamin (vitamin B12), choline, folic acid and ascorbic acid (vitamin C). |
Each year, World Wetlands Day is celebrated on February 2. Wetlands come in many forms and go by many names - estuaries, bogs, mangrove swamps, vernal pools, marshes, riparian wetlands, cypress swamps, playa lakes and more! Wetland areas improve water quality, provide flood protection and support tons of fish, wildlife and plants. If you've ever been hunting, clamming, or crabbing, or if you enjoy eating salmon, you've reaped the benefits of wetland ecosystems. Wetlands are some of the most important resources for migratory birds like ducks, geese, and sandhill cranes, and also support moose, black bears, lynx, beavers and other wildlife. Cranberries and blueberries grow in bogs in the northern United States.
Despite their many benefits, the United States loses about 60,000 acres of wetlands each year. Among coastal states, Florida, Texas, California and Louisiana have lost the most coastal marshland - California alone has lost more than 91 percent of its coastal wetlands and the Chesapeake Bay has lost 50 percent of its coastal marshes. Since the arrival of settlers, 70 percent of tidally influenced wetlands in Puget Sound have been lost. And, only about 40 to 50 percent of the prairie region's original prairie pothole wetlands remain undrained today.
Viewer Tip: No matter where you live, chances are there is a wetland nearby. Development that occurs on or nearby wetlands can lead to loss of habitat, changes in water flow, polluted runoff and other impacts. Try these tips to protect your local wetlands:
- Keep lawns and driveways free of pet waste, fertilizers and motor oil. These pollutants can wash into storm drains and eventually reach a wetland.
- Choose native species when planting trees, shrubs and flowers to preserve the ecological balance of local wetlands.
- Use non-toxic products for household cleaning and lawn and garden care. Never spray lawn and garden chemicals outside on a windy day or on a day when it might rain and wash the chemicals into local waterways.
- Many exotic animals are introduced into wetlands by homeowners and hobbyists, where they can harm native wildlife. If you have a home aquarium with exotic saltwater or freshwater fish or raise non-native amphibians or reptiles, do not release them into the wild.
- Volunteer to help monitor local wetlands near you. Visit water.epa.gov/type/watersheds/monitoring/vol.cfm for more information!
For more weather and environment tips, visit Earth Gauge!
(Sources: U.S. Environmental Protection Agency, "Wetlands"; "Volunteer Monitoring"; “American Wetlands Month,”; Izaak Walton League of America, “Wetlands Sight and Sounds Series,”; National Biological Information Infrastructure Digital Image Library; U.S. Fish and Wildlife Service Digital Library System) |
When you hear the word chlamydia, you might think of the sexually transmitted disease (STD) by that name. The STD is caused by Chlamydia trachomatis, one species of Chlamydia bacteria. Another species, called Chlamydia (or Chlamydophila) pneumoniae, causes respiratory illnesses. These lung infections are spread in the same way as many other respiratory diseases. They are passed from person to person directly through coughs or sneezes and indirectly from germs on hands or other objects. The number of these infections peaks in school-aged children between 5 and 15 years of age.
Signs and Symptoms
Illnesses caused by C pneumoniae can cause a prolonged cough, bronchitis, and pneumonia as well as a sore throat, laryngitis, ear infections, and sinusitis. They usually start gradually with a sore throat that is followed by a cough about a week or more later. The cough may last for 2 to 6 weeks. In some cases, the child may get bronchitis or a mild case of pneumonia. While some infected children have only mild to moderate symptoms or no symptoms at all, the infection may be more severe in others.
How Is the Diagnosis Made?
Many cases of C pneumoniae are diagnosed by a pediatrician after doing a physical examination of the child and looking at his symptoms. The doctor can also order blood tests that detect antibodies to the bacteria. However, it can take a week or more for the antibodies to show up in the blood. Although there are special laboratories that can evaluate swab specimens from the nose or throat, there are no reliable commercially available studies at this time.
Recovery from a Chlamydia respiratory infection may be slow. Your pediatrician can prescribe antibiotics such as erythromycin or tetracycline to clear up the infection and help your child get better faster.
To lower the chances of your child getting a C pneumoniae infection, he should practice good hygiene, including frequent hand washing. |
Chloroplasts are organelles found in plant cells and eukaryotic algae which conduct photosynthesis. Chloroplasts are similar to mitochondria but are found only in plants and algae. Both organelles are surrounded by a double membrane with an intermembrane space; both have their own DNA and are involved in energy metabolism; and both have reticulations, or many foldings, filling their inner spaces. Chloroplasts convert light energy from the sun into ATP through a process called photosynthesis.
Chloroplasts are one of the forms a plastid may take, and are generally considered to have originated as endosymbiotic cyanobacteria. In green plants chloroplasts are surrounded by two lipid bilayer membranes, now thought to correspond to the outer and inner membranes of the ancestral cyanobacterium. The genome is considerably reduced compared to that of free-living cyanobacteria, but the parts that are still present show clear similarities.
It is interesting to note that in some algae, chloroplasts seem to have arisen through a secondary event of endosymbiosis, where a eukaryotic cell joined with a second eukaryotic cell containing chloroplasts, forming chloroplasts with four membrane layers.
The fluid within the chloroplast is called the stroma, corresponding to the cytoplasm of the bacterium, and contains tiny circular DNA and ribosomes, though most of their proteins are encoded by the cell nucleus. Within the stroma are stacks of thylakoids, the sub-organelle where photosynthesis actually takes place. A stack of thylakoids is called a granum. A thylakoid looks like a flattened disk, and inside is an empty area called the thylakoid space or lumen. The photosynthesis reaction takes place on the surface of the thylakoid.
The photosynthetic proteins in the membrane bind chlorophyll, which is present with various accessory pigments. These give chloroplasts their green colour. Algal chloroplasts may be golden, brown, or red and show variation in the number of membranes and the presence of thylakoids. |
Turtles are members of the Phylum Chordata, Class Reptilia, and Order Testudines. Their unique shell (Figure 1), lack of teeth, and bony jaws, which are covered with a hard, keratinized beak somewhat like that of birds, make them unusual. A turtle shell has as many as 60 bones. It has two sections: a carapace, covering the animal's back, and a plastron, covering its belly. The carapace and plastron are connected on the turtle's right and left sides by a bony bridge, which is formed by extensions of the plastron. The shell is fashioned from bones originating in the skin, which fuse with one another as well as with the ribs, vertebrae, and parts of the shoulder girdle (Figure 2). In most species, large scales, called scutes, overlay the bones. However, in softshell turtles, a tough, leathery skin replaces the scutes.
Most Illinois turtles are able to withdraw their head and neck into the shell by bending the neck into a vertical S-shaped curve. In species such as box turtles and mud turtles, the plastron is hinged, allowing it to close on the carapace. This feature provides the animal with more complete protection. Turtles usually have prominent tails that vary in size with sex (tails of males are longer and heavier than those of females) and with species (snapping turtles have the longest tails of Illinois species). Turtles use their limbs to propel themselves in water as well as over land. The toes of most species are extensively connected by webbing, an adaptation that aids them in aquatic locomotion.
[Figure captions: Showing bone structure (plastron removed); Cross section of turtle showing relationship between skeleton and shell.] |
The Living Museum
By Joanna Streetly
Excerpted from Paddling Through Time, Raincoast Books 2000.
Everything about the rainforest suggests abundance, from the massive girths of the hanging garden tree and other western red cedars to the plump clusters of huckleberries and salal berries, just waiting to ripen. It's hard to believe that this forest is old, even though evidence of decay is everywhere. Vibrant colours and textures conceal the infinite processes at work, but looking carefully, it's possible to see beyond the facade of colour.
The first challenge is to feel the centuries of life these big cedars have experienced. Some of them may be a thousand years old or more, which brings us back to the tenth century. The forest may have looked remarkably similar then; it is said to have been evolving continuously since the glaciers started retreating 10,000 years ago. Coastal forests are too moist to be significantly affected by fire. This environment, then, has experienced a long and continuous evolution, and may contain many invisible links to its ancient beginnings.
Given the intact appearance of the forest on Meares Island, it's easy to presume that it's never been altered by humans. It hasn't been clearcut – the method that leaves the most obvious evidence of human presence – so it's difficult to grasp how much it may have been used by First Nations people. They seem to have left no trace of their labours. There are clues, though. It's just that they are so subtle. Many trees have been used but left standing, having only had a section of bark, or wood, removed. But the work was done so discreetly the result is almost invisible. Wood taken from a living tree never exceeded a certain amount, so the ability of the tree to heal itself wasn't compromised. The scars blend into the naturally convoluted shapes of the trees. In other places, there is a mossy stump, cleanly cut, but no evidence of the accompanying tree trunk, which may have been removed to begin life anew as a dugout canoe.
Information gleaned by Europeans who passed this area in 1835 suggests that Clayoquot Sound may have been home to 2,000 people at that time. These people were dependent on the cedar for nearly every aspect of their lives: the bark was used for clothing, weaving and rope making; planks they cut from it were used for longhouses; the trunks were used for canoes and totem poles. The cedar provided warmth and shelter; it allowed cultural development through art; it provided the means to hunt and fish; it facilitated travel and trade and thus a knowledge of other cultures. The identity of the people was enormously tied to the cedar. It was a magnificent resource, one that was respected and well-used, but never “used” in a way consistent with the Western understanding of that word.
My wonder at this forest never ceases when I think of it as the historical showcase that it is. Museums are hardly necessary on this coast; all that is needed are eyes to look with and time for contemplation. And what better place for contemplation?
About the author
Joanna Streetly is an author, editor and illustrator based in Tofino and currently at work on her fourth book. Look for her previous books: Silent Inlet, Paddling Through Time, and Salt in our Blood in local bookstores, or on the internet. For now, you can find her at JoannaStreetly.com |
A new Center for Biological Diversity report outlines the important ecological benefits of last summer’s Rim Fire in northern California – and exposes how a U.S. Forest Service plan to allow 30,000 acres of logging in the burned area could cause significant harm to wildlife, water and the re-growing forest.
The report, Nourished by Wildfire: The Ecological Benefits of the Rim Fire and the Threat of Salvage Logging, explains how fires are essential for maintaining biological diversity in the Sierra Nevada ecosystem.
“Burned forests are not dead zones, but rather teem with life,” said the Center’s Justin Augustine. “The reflex reaction to log after forest fires directly contradicts decades of scientific research showing both the immense ecological importance of post-fire landscapes and the significant harm that can occur when such areas are logged.”
The moderate and high-intensity fire areas in conifer forest within the Rim fire created what is known as ‘complex early seral forest’ — one of the rarest, most biodiverse habitat types in the Sierra Nevada. Not only do post-fire landscapes provide critical wildlife habitat; if not logged, they can also result in a forest that is naturally more resilient to climate change.
The new report analyzes the Rim fire in relation to the relevant biological science and recommends that, rather than focusing on industrial-scale salvage logging, post-fire management should focus on activities that benefit forest health, water quality and the many native species that depend on fire for their existence. “The Rim fire provided many environmental benefits,” said Dr Chad Hanson, a research ecologist with the John Muir Project. “Most significantly, the high-intensity fire areas created critical wildlife habitat — a habitat that is even rarer and more threatened than old-growth forest.” |
Dr. C. George Boeree
[Note: The quotes in italics below are from Mental Health: A Report of the Surgeon General, U.S. Public Health Service (1999).]
The anxiety disorders are the most common, or frequently occurring, mental disorders. They encompass a group of conditions that share extreme or pathological anxiety as the principal disturbance of mood or emotional tone. Anxiety, which may be understood as the pathological counterpart of normal fear, is manifest by disturbances of mood, as well as of thinking, behavior, and physiological activity.
Anxiety is at the root of many, if not
all, of our psychological
disorders. It is, physically, a kind of fear response, involving
the activation of the sympathetic nervous system, in response to a
dangerous situation. More specifically, anxiety is the
anticipation of danger, learned through repeated stress or
trauma. Some people are innately more sensitive to stress, and so
are more likely to experience anxiety and develop anxiety
disorders. But everyone becomes sensitized to stress and trauma
with repeated experiences: Each experience "tunes" the nervous
system to respond more quickly and more profoundly to perceived danger.
We often talk about anxiety as some sort
of genetic issue, and also as
something based on traumas in childhood. But long term stress
is probably more often the root of anxiety disorders. The
constant demands of living in poverty, discrimination, war, and abuse
are a part of daily life for millions of people around the world.
There are basically five ways in
which people respond to
unrelenting stress and trauma and the anxiety that comes with them:
Panic Attacks and Panic Disorder
A panic attack is a discrete period of intense fear or discomfort that is associated with numerous somatic and cognitive symptoms (DSM-IV). These symptoms include palpitations, sweating, trembling, shortness of breath, sensations of choking or smothering, chest pain, nausea or gastrointestinal distress, dizziness or lightheadedness, tingling sensations, and chills or blushing and “hot flashes.” The attack typically has an abrupt onset, building to maximum intensity within 10 to 15 minutes. Most people report a fear of dying, “going crazy,” or losing control of emotions or behavior. The experiences generally provoke a strong urge to escape or flee the place where the attack begins and, when associated with chest pain or shortness of breath, frequently results in seeking aid from a hospital emergency room or other type of urgent assistance. Yet an attack rarely lasts longer than 30 minutes.
Panic disorder is about twice as common among women as men (American Psychiatric Association, 1998). Age of onset is most common between late adolescence and midadult life, with onset relatively uncommon past age 50.
Panic attacks are themselves traumatic, and so lead to increased
anxiety, which makes the person more vigilant and more likely to
misinterpret situations as well as bodily symptoms, and so have more
panic attacks. They are the classic example of anticipatory anxiety: Being
afraid of having a panic attack is the very thing that causes the panic attack!
Agoraphobia
The ancient term agoraphobia is translated from Greek as fear of an open marketplace. Agoraphobia today describes severe and pervasive anxiety about being in situations from which escape might be difficult or avoidance of situations such as being alone outside of the home, traveling in a car, bus, or airplane, or being in a crowded area (DSM-IV).
Most people who present to [are seen by] mental health specialists develop agoraphobia after the onset of panic disorder (American Psychiatric Association, 1998). Agoraphobia is best understood as an adverse behavioral outcome of repeated panic attacks and the subsequent worry, preoccupation, and avoidance (Barlow, 1988).
Agoraphobia occurs about two times more commonly among women than men (Magee et al., 1996).
Since 95% of agoraphobics also have panic disorder, perhaps the two
categories are really only one.
Specific Phobias
These common conditions are characterized by marked fear of specific objects or situations (DSM-IV). Exposure to the object of the phobia, either in real life or via imagination or video, invariably elicits intense anxiety, which may include a (situationally bound) panic attack. Adults generally recognize that this intense fear is irrational. Nevertheless, they typically avoid the phobic stimulus or endure exposure with great difficulty. The most common specific phobias include the following feared stimuli or situations: animals (especially snakes, rodents, birds, and dogs); insects (especially spiders and bees or hornets); heights; elevators; flying; automobile driving; water; storms; and blood or injections.
Approximately 8 percent of the adult population suffers from one or more specific phobias in 1 year.... Typically, the specific phobias begin in childhood, although there is a second “peak” of onset in the middle 20s of adulthood (DSM-IV). Most phobias persist for years or even decades, and relatively few remit [improve] spontaneously or without treatment.
The specific phobias generally do not result from exposure to a single traumatic event (i.e., being bitten by a dog or nearly drowning) (Marks, 1969). Rather, there is evidence of phobia in other family members and social or vicarious learning of phobias (Cook & Mineka, 1989). Spontaneous, unexpected panic attacks also appear to play a role in the development of specific phobia, although the particular pattern of avoidance is much more focal and circumscribed.
Phobias can be understood in part as a matter of conditioned
fear: Strong anxiety or a panic attack is experienced at the same
time as the phobic object, and so becomes associated with that object.
More often than not, the panic is not a response to the phobic object
(snake, mouse, or spider), but rather to the loss of security
experienced when someone (such as your mom or dad) responds
dramatically to that object. If mom or dad is scared, I should be scared too!
It also seems that many phobias have a strong built-in
component. Many people are at least uncomfortable, if not phobic,
around snakes, mice, spiders, reptiles, heights, tight spaces, barking
dogs, and birds. These things make us fearful even before we learn their
potential danger. These fears do make some sense, if you consider
the dangers these could have posed for our ancient ancestors. Of
course, it is not the figure of a bird, a snake, a spider, or a dog
that directly leads to
the fear response. It is rather the swooping motion, the
slithering, the unpredictable presence, the low growling noises, and so on.
Social Phobia
Social phobia, also known as social anxiety disorder, describes people with marked and persistent anxiety in social situations, including performances and public speaking (Ballenger et al., 1998). The critical element of the fearfulness is the possibility of embarrassment or ridicule. Like specific phobias, the fear is recognized by adults as excessive or unreasonable, but the dreaded social situation is avoided or is tolerated with great discomfort. Many people with social phobia are preoccupied with concerns that others will see their anxiety symptoms (i.e., trembling, sweating, or blushing); or notice their halting or rapid speech; or judge them to be weak, stupid, or “crazy.” Fears of fainting, losing control of bowel or bladder function, or having one’s mind going blank are also not uncommon. Social phobias generally are associated with significant anticipatory anxiety for days or weeks before the dreaded event, which in turn may further handicap performance and heighten embarrassment.
Social phobia is more common in women (Wells et al., 1994). Social phobia typically begins in childhood or adolescence and, for many, it is associated with the traits of shyness and social inhibition (Kagan et al., 1988). A public humiliation, severe embarrassment, or other stressful experience may provoke an intensification of difficulties (Barlow, 1988). Once the disorder is established, complete remissions are uncommon without treatment. More commonly, the severity of symptoms and impairments tends to fluctuate in relation to vocational demands and the stability of social relationships.
Social phobia is another example of anticipatory anxiety: The
expectation of social embarrassment causes the anxiety that leads to
social embarrassment... In the U.S., social phobia often begins
in early adolescence, when peers often humiliate shy children.
This is common in any highly competitive society like ours. Also,
people in lower social positions in a very hierarchical society (and
yes, ours is one) often find themselves victimized this way, and
end up developing social phobia.
In Japan, there is an interesting variation on social phobia called taijin kyofusho (interpersonal
phobia). This involves
great anxiety that other people find your appearance, your face, and
your odor offensive.
Generalized Anxiety Disorder
Generalized anxiety disorder is defined by a protracted (> 6 months’ duration) period of anxiety and worry, accompanied by multiple associated symptoms (DSM-IV). These symptoms include muscle tension, easy fatiguability, poor concentration, insomnia, and irritability.... [T]he excessive worries often pertain to many areas, including work, relationships, finances, the well-being of one’s family, potential misfortunes, and impending deadlines. Somatic anxiety symptoms are common, as are sporadic panic attacks.
Generalized anxiety disorder occurs more often in women, with a sex ratio of about 2 women to 1 man (Brawman-Mintzer & Lydiard, 1996). The 1-year population prevalence is about 3 percent (Table 4-1). Approximately 50 percent of cases begin in childhood or adolescence.
In Latin America, some people suffer from something called nervios (nerves). They feel
a great deal of anxiety, insomnia, headaches, dizziness, even
palpitations. It usually begins with a loss of someone close, or
with family conflicts. Since family is everything in many
cultures, family problems are often at the root of psychological disorders.
Acute and Post-Traumatic Stress Disorders
Acute stress disorder refers to the anxiety and behavioral disturbances that develop within the first month after exposure to an extreme trauma. Generally, the symptoms of an acute stress disorder begin during or shortly following the trauma. Such extreme traumatic events include rape or other severe physical assault, near-death experiences in accidents, witnessing a murder, and combat. The symptom of dissociation, which reflects a perceived detachment of the mind from the emotional state or even the body, is a critical feature. Dissociation also is characterized by a sense of the world as a dreamlike or unreal place and may be accompanied by poor memory of the specific events, which in severe form is known as dissociative amnesia [loss of memory not based on physical causes]. Other features of an acute stress disorder include symptoms of generalized anxiety and hyperarousal, avoidance of situations or stimuli that elicit memories of the trauma, and persistent, intrusive recollections of the event via flashbacks, dreams, or recurrent thoughts or visual images.
By virtue of the more sustained nature of post-traumatic stress disorder (relative to acute stress disorder), a number of changes, including decreased self-esteem, loss of sustained beliefs about people or society, hopelessness, a sense of being permanently damaged, and difficulties in previously established relationships, are typically observed. Substance abuse often develops, especially involving alcohol, marijuana, and sedative-hypnotic drugs.
About 50 percent of cases of post-traumatic stress disorder remit within 6 months. For the remainder, the disorder typically persists for years and can dominate the sufferer’s life. A longitudinal [long-term] study of Vietnam veterans, for example, found 15 percent of veterans to be suffering from post-traumatic stress disorder 19 years after combat exposure (cited in McFarlane & Yehuda, 1996). In the general population, the 1-year prevalence is about 3.6 percent, with women having almost twice the prevalence of men (Kessler et al., 1995) (Table 4-1). The highest rates of post-traumatic stress disorder are found among women who are victims of crime, especially rape, as well as among torture and concentration camp survivors (Yehuda, 1999).
PTSD appears to involve a number of problems with the hippocampus
which, if you recall, is devoted to moving short-term memories into
long-term storage. First, intensely emotional events lead to
intense memories called flashbulb memories. It seems that these
memories may actually be partially stored in the amygdala, which
accounts for the fearfulness involved. In addition, the prolonged
stress of experiences such as war or childhood abuse actually begins to
destroy tissue in the hippocampus, making it more difficult to create
new long term memories. Studies show that people who have
suffered long-term trauma have anywhere from 8 to 12% less
hippocampus. The net result could be that they are, in a sense,
stuck in their traumatic past.
PTSD is an example of an anxiety disorder that also involves some of
the other responses to trauma I mentioned above. Many
self-medicate with alcohol and drugs, only
making the problem worse. Many are severely depressed.
There is also a degree of dissociation involved, meaning that victims
become numb, detached, showing little emotion. They no longer
feel real. Perhaps this is actually an adaptive response to
traumatic stress. We find this kind of dissociation commonly in
refugee populations, who can sometimes seem like
zombies. They may simply be protecting themselves from further pain.
Obsessive-Compulsive Disorder
Obsessions are recurrent, intrusive thoughts, impulses, or images that are perceived as inappropriate, grotesque, or forbidden (DSM-IV). The obsessions, which elicit anxiety and marked distress, are termed “ego-alien” or “ego-dystonic” because their content is quite unlike the thoughts that the person usually has. Obsessions are perceived as uncontrollable, and the sufferer often fears that he or she will lose control and act upon such thoughts or impulses. Common themes include contamination with germs or body fluids, doubts (i.e., the worry that something important has been overlooked or that the sufferer has unknowingly inflicted harm on someone), order or symmetry, or loss of control of violent or sexual impulses.
Compulsions are repetitive behaviors or mental acts that reduce the anxiety that accompanies an obsession or “prevent” some dreaded event from happening (DSM-IV). Compulsions include both overt behaviors, such as hand washing or checking, and mental acts including counting or praying. Not uncommonly, compulsive rituals take up long periods of time, even hours, to complete. For example, repeated hand washing, intended to remedy anxiety about contamination, is a common cause of contact dermatitis [a common skin disease].
Although once thought to be rare, obsessive-compulsive disorder has now been documented to have a 1-year prevalence of 2.4 percent (Table 4-1). Obsessive-compulsive disorder is equally common among men and women.
Obsessive-compulsive disorder typically begins in adolescence to young adult life (males) or in young adult life (females).... Approximately 20 to 30 percent of people in clinical samples with obsessive-compulsive disorder report a past history of tics, and about one-quarter of these people meet the full criteria for Tourette’s disorder (DSM-IV).
Obsessive-compulsive disorder has a clear familial pattern and somewhat greater familial specificity than most other anxiety disorders. Furthermore, there is an increased risk of obsessive-compulsive disorder among first-degree relatives with Tourette’s disorder. Other mental disorders that may fall within the spectrum of obsessive-compulsive disorder include trichotillomania (compulsive hair pulling), compulsive shoplifting, gambling, and sexual behavior disorders (Hollander, 1996).
We are beginning to understand some of the brain activities
associated with OCD. The caudate nucleus (a part of the basal
ganglia near the limbic system) is responsible, among other things, for
urges, including things like reminding you to lock doors, brush your
teeth, wash your hands, and so on. It sends messages to the
orbital area (above the eyes) of the prefrontal area, which tells us
that something is not right. It also sends messages to the
cingulate gyrus (just under the frontal lobe), which keeps attention
focused, in this case on the feeling of something not being right and
needing to be done. It is believed that, in people with
OCD, this system is stuck on "high alert."
It should be noted that OCD responds fairly well to the same
medications (such as Prozac) that help people who are depressed, which
suggests that the serotonin pathways of the frontal lobe and limbic
system are involved, just as they are with depression. More
recently, scientists have discovered several genes that appear
to be strongly tied to OCD.
But don't think OCD is a purely physiological disorder! It
varies a great deal from culture to culture. In some cultures,
the behaviors are even seen as positive. Remember that there are
still all kinds of superstitious behaviors that people engage in today,
which are no different from compulsions. And, while being
obsessed with, say, germs is considered odd, being obsessed with, say,
football is considered perfectly okay in our culture!
We might also include hypochondriasis here (even though it is "officially" classified as a somatoform disorder). People with hypochondriasis (called hypochondriacs) are preoccupied with fears of having or getting a serious disease. Even after being told that they do not have the disease they are concerned about, they continue to worry. They often exaggerate minor abnormalities, go from doctor to doctor, and ask for repeated examinations and medical tests. Estimates suggest that hypochondriasis involves between 4% and 9% of the population.
A curious version of hypochondriasis is found in India, called dhat. People with dhat suffer
from anxiety, fatigue, aches, weakness, depression, and so on - all
revolving around an obsessive concern with having lost too much
semen! We may laugh, but 100 years ago, westerners also believed
that a man has only so much semen to use in his life-time, and 50 years
ago, coaches would warn their players not to have sex the night before
a big game because it would drain them of energy. It isn't that
much different from how, in the U.S. today, people are obsessed with
aging to such a degree that they are willing to undergo surgery and
injections of poisons to appear younger - even though these activities
may actually decrease their life-span!
Three other disorders are related to obsessive-compulsive disorder
(although officially categorized as impulse-control disorders):
Trichotillomania is the “recurrent pulling out of one’s hair
for pleasure, gratification, or relief of tension that results in
hair loss.” (DSM IV) It is not restricted to hair on the head,
and may even involve pulling out eyelashes.
Trichotillomania is often associated with stress, but sometimes occurs
while the person is relaxed as well. It usually starts in childhood
or adolescence. 1 to 2% of college students report having had it
at some time. The students I have known who suffer from
trichotillomania also had OCD.
Kleptomania is the “recurrent failure to
resist impulses to steal
objects not needed for personal use or monetary value.” (DSM IV)
The person knows it is wrong, fears being caught, and feels guilty
about it, but can’t seem to resist the impulse. It is rare, but
more common among women than among men. It is, as you can
imagine, difficult to differentiate from intentional stealing!
Pathological gambling is “recurrent and persistent maladaptive gambling behavior.” (DSM IV) We often call it compulsive gambling. A lot of distorted thinking goes with it - superstition, overconfidence, denial. Pathological gamblers tend to be people with a lot of energy who are easily bored, and the urge to gamble increases when they are under stress. It may involve 1 to 3% of the population, and two thirds are men. |
What is it?
- Pink eye (conjunctivitis) is an inflammation or infection of the transparent membrane (conjunctiva) that lines your eyelid and part of your eyeball. Inflammation causes small blood vessels in the conjunctiva to become more prominent, which is what causes the pink or red cast to the whites of your eyes.
- The cause of pink eye is commonly a bacterial or viral infection, an allergic reaction or — in babies — an incompletely opened tear duct.
- Though the inflammation of pink eye can be irritating, it rarely affects your vision. If you suspect pink eye, you can take steps to ease your discomfort. But because pink eye can be contagious, early diagnosis and treatment is best to help limit its spread.
The most common pink eye symptoms include:
- Redness in one or both eyes
- Itchiness in one or both eyes
- A gritty feeling in one or both eyes
- A discharge in one or both eyes that forms a crust during the night
Causes of pink eye include:
- Viruses
- Bacteria
- Allergies
- A chemical splash in the eye
- A foreign object in the eye
- In newborns, a blocked tear duct
Viral and bacterial conjunctivitis
- Viral conjunctivitis and bacterial conjunctivitis may affect one or both eyes. Viral conjunctivitis usually produces a watery discharge. Bacterial conjunctivitis often produces a thicker, yellow-green discharge. Both viral and bacterial conjunctivitis can be associated with colds or with symptoms of a respiratory infection, such as a sore throat.
- Both viral and bacterial types are very contagious. Adults and children alike can develop both of these types of pink eye. However, bacterial conjunctivitis is more common in children than it is in adults.
- Allergic conjunctivitis affects both eyes and is a response to an allergy-causing substance such as pollen. In response to allergens, your body produces an antibody called immunoglobulin E (IgE). This antibody triggers special cells called mast cells in the mucous lining of your eyes and airways to release inflammatory substances, including histamines. Your body's release of histamine can produce a number of allergy signs and symptoms, including red or pink eyes.
- If you have allergic conjunctivitis, you may experience intense itching, tearing and inflammation of the eyes — as well as sneezing and watery nasal discharge. Most allergic conjunctivitis can be controlled with allergy eyedrops.
Conjunctivitis resulting from irritation
Irritation from a chemical splash or foreign object in your eye is also associated with conjunctivitis. Sometimes, flushing and cleaning the eye to rid it of the chemical or object causes redness and irritation. Signs and symptoms, which may include watery eyes and a mucous discharge, usually clear up on their own within about a day.
Risk factors for pink eye include:
- Exposure to an allergen for allergic conjunctivitis
- Exposure to someone infected with the viral or bacterial form of conjunctivitis
- Using contact lenses, especially extended-wear lenses
In both children and adults, pink eye can cause inflammation in the cornea that can affect vision. Prompt evaluation and treatment by your doctor can reduce the risk of complications.
To determine whether you have pink eye, your doctor may examine your eyes. Your doctor may also take a sample of eye secretions from your conjunctiva for laboratory analysis to determine which form of infection you have and how best to treat it.
Treatments and drugs
Treatment for bacterial conjunctivitis
If your infection is bacterial, your doctor may prescribe antibiotic eyedrops as pink eye treatment, and the infection should go away within several days. Antibiotic eye ointment, in place of eyedrops, is sometimes prescribed for treating bacterial pink eye in children. An ointment is often easier to administer to an infant or young child than are eyedrops, though the ointment may blur vision for up to 20 minutes after application. With either form of medication, expect signs and symptoms to subside within a few days. Follow your doctor's instructions and use the antibiotics until your prescription runs out, to prevent recurrence of the infection.
Treatment for viral conjunctivitis
There is no treatment for most cases of viral conjunctivitis. Instead, the virus needs time to run its course — up to two or three weeks. Viral conjunctivitis often begins in one eye and then infects the other eye within a few days. Your signs and symptoms should gradually clear on their own.
Antiviral medications may be an option if your doctor determines that your viral conjunctivitis is caused by the herpes simplex virus.
Treatment for allergic conjunctivitis
If the irritation is allergic conjunctivitis, your doctor may prescribe one of many different types of eyedrops for people with allergies. These may include antihistamines, decongestants, mast cell stabilizers, steroids and anti-inflammatory drops. You may also reduce the severity of your allergic conjunctivitis symptoms by avoiding whatever causes your allergies, when possible.
To help you cope with the signs and symptoms of pink eye until it goes away, try to:
- Apply a compress to your eyes. To make a compress, soak a clean, lint-free cloth in water and wring it out before applying it gently to your closed eyelids. A cool water compress may help relieve allergic conjunctivitis. If you have bacterial or viral conjunctivitis, you may prefer a warm compress. If pink eye affects only one eye, don't touch both eyes with the same cloth. This reduces the risk of spreading pink eye from one eye to the other.
- Try eyedrops. Over-the-counter eyedrops called artificial tears may relieve symptoms. Some eyedrops contain antihistamines or other medications that can be helpful for people with allergic conjunctivitis.
- Stop wearing contact lenses. If you wear contact lenses, you may need to stop wearing them until your eyes feel better. How long you'll need to go without contact lenses depends on what's causing your conjunctivitis. Ask your doctor whether you should throw away your disposable contacts, as well as your cleaning solution and lens case. If your lenses aren't disposable, clean them thoroughly before reusing them.
Preventing the spread of pink eye
Practice good hygiene to control the spread of pink eye. For instance:
- Don't touch your eyes with your hands.
- Wash your hands often.
- Use a clean towel and washcloth daily.
- Don't share towels or washcloths.
- Change your pillowcases often.
- Throw away your eye cosmetics, such as mascara.
- Don't share eye cosmetics or personal eye-care items.
Although pink eye symptoms may resolve in three or four days, children with viral conjunctivitis may be contagious for a week or more. Children may return to school when they no longer experience tearing and matted eyes.
If your child has bacterial conjunctivitis, keep him or her away from school until after treatment is started. Most schools and child care facilities require that your child wait at least 24 hours after starting treatment before returning to school or child care. Check with your doctor if you have any questions about when your child can return to school or child care.
Preventing pink eye in newborns
Newborns' eyes are susceptible to bacteria normally present in the mother's birth canal. These bacteria cause no symptoms in the mother. In rare cases, these bacteria can cause infants to develop a serious form of conjunctivitis known as ophthalmia neonatorum, which needs treatment without delay to preserve sight. That's why shortly after birth, an antibiotic ointment is applied to every newborn's eyes. The ointment helps prevent eye infection. |
The Pequot War has long been an obscure event in the historical perspective of the general public. The film is intended to increase public understanding of the significance of this event, not only for northeastern Native Peoples and descendants of the English and Dutch colonists who settled the region, but also for Native Peoples across America and for all Americans today. Broadcast will be sought on public television, with distribution to schools and educational institutions on videotape.
The producers' intent is to make the documentary as historically accurate and as unbiased as possible. A responsibly balanced representation of viewpoints is essential. Not only has the project relied on a broadly-based Advisory Board, but it also has utilized scholars, Native Americans, and descendants of the colonists to help tell the story and provide their own personal and often passionate viewpoints.
The film does not seek to characterize the War solely as a conflict between the Pequots and the colonists for control of territory, but rather as a struggle between different value systems that included not only the Pequots, but a number of Native American tribes, most of which allied with the English. It not only presents facts, but also seeks to help the viewer better understand on a human level the people who fought the War. It does not seek to sympathize with or condemn any particular group, but rather to increase our understanding of the groups involved and the forces that precipitated the War.
The documentary examines the underlying human motivations and cultural/religious differences that led to war and explores how the legacy of the Massacre at Mystic and the Pequot War still affects the lives of Native American and Puritan descendants in the region today.
Basic Themes of the Documentary
Cultural Value Systems and Religious Perspective
Native Americans and the English Puritans saw the world around them in entirely different ways, especially with respect to land ownership and warfare. Natives believed land could be occupied and used, but they had no real concept of land ownership. The English believed they had divine rights (through patents from the King, purchase, occupation of unused land, or rights of conquest) to possess the land.
Compared to European warfare, Native warfare was conducted on a small scale. Although capture, torture, and other foul deeds were routinely exercised on individuals, large numbers of people were not killed in conflicts. The Natives were not prepared for the kind of unlimited warfare practiced against them by Europeans.
Natives saw themselves as being in communion with other peoples, animals, and indeed all of nature as part of a world embraced by Manitou, the living Spirit in all things. The Puritans saw themselves as the chosen people of God establishing a "New Jerusalem" in the wilderness of America, surrounded by people they saw as savages. The Puritans feared that their very survival in that wilderness was at stake. Ultimately, they believed that their ability to survive and overcome threats from heathen savages was a measure of their own righteousness before God.
Misconceptions and Miscommunications
Neither the Natives nor the Puritans completely understood what their actions meant to the other culture. Language differences and lack of understanding about how each culture practiced politics and negotiation contributed to the problem.
The Puritan English clearly feared for their survival. The Puritans were acutely aware of the 1622 Powhatan uprising in Virginia, in which Indians had killed hundreds of English settlers. The stories the New England settlers heard from most of the other tribes in the region, many of which had been subjugated by the Pequots, in their mind clearly showed that the Pequots were powerful, hostile, and devious. Most of these tribes ultimately fought with the English against the Pequots, somewhat dispelling the notion that the War was exclusively a "conflict of cultures."
The Legacy of the War
From a historical perspective, the War was an important early test of the "Indian Policy" of European settlers in America. Some Native Americans believe the legacy of the War is still with us, reflected by the greed, bigotry, racism, and intolerance they see around them. To them, the Pequot War is not over.
Creative Approach
A primary objective of the project is to present a balanced view of the historical events and their interpretation for us today. To achieve that objective, we present often highly divergent opinions, including both Puritan and Native American viewpoints. Although the program is characterized as a documentary, dramatic elements also will be used to involve the viewer. The documentary uses paintings, historical documents, and reenactments of events, with narration, and interviews with scholars and descendants of the people who fought the War. Photography utilizes both 16-mm film and video. Because of its perceived archival image quality, film is used for historical and dramatic segments to convey a sense of looking backward in time. Because of its "here and now news" quality, video is used for interviews and photography of locations as they appear today.
The documentary is intended to appeal to a broad pre-adolescent, adolescent, and adult audience, encompassing various demographic characteristics, including education level. Since the story deals with inter-cultural conflict and fundamental human motivations and emotions, it is "placeless" and "classless" in many respects. Its subject matter, however, may appeal more to those people with interests in colonial and Native American history, as well as Native American Issues.
Interested audiences should not be geographically limited, because the documentary deals with the clash between Native Americans and European-American settlers in the New World, a theme not limited to seventeenth-century New England.
Nationally, broadcast will be via American Public Television, with Rhode Island PBS serving as the presenting station. DVD/videocassette distribution will be via The Cinema Guild. Since it deals with early European (English and Dutch) colonization of America, the documentary also should be of interest to audiences in Europe, especially Great Britain, Ireland, and The Netherlands.
A sine is half of a chord. More accurately, the sine of an angle is half the chord of twice the angle.
Consider the angle BAD in this figure, and assume that AB is of unit length. Let the point C be the foot of the perpendicular dropped from B to the line AD. Then the sine of angle BAD is defined to be the length of the line BC, and it is written sin BAD. You can double the angle BAD to get the angle BAE, and the chord of angle BAE is BE. Thus, the sine BC of angle BAD is half the chord BE of angle BAE, while the angle BAE is twice the angle BAD. Therefore, as stated before, the sine of an angle is half the chord of twice the angle.
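In modern notation, writing crd α for the length of the chord that subtends an angle α at the centre of a circle of unit radius (the circle of radius AB in the figure above), the statement reads:

$$\sin\theta \;=\; \tfrac{1}{2}\,\operatorname{crd}(2\theta).$$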
The point of this is just to show that sines are not all that difficult to understand.
This word history for sine is interesting because it follows the path of trigonometry from India, through the Arabic language from Baghdad through Spain, into western Europe in the Latin language, and then to modern languages such as English and the rest of the world.
but BC = sin A, so
This result is most easily remembered as the sine of an angle in a right triangle equals the opposite side divided by the hypotenuse:
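In symbols, with the sides named as in the next paragraph, the rule reads:

$$\sin A \;=\; \frac{\text{opposite side}}{\text{hypotenuse}} \;=\; \frac{a}{c}.$$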
Consider a right triangle ABC with a right angle at C. We’ll generally use the letter a to denote the side opposite angle A, the letter b to denote the side opposite angle B, and the letter c to denote the side opposite angle C, that is, the hypotenuse.
With this notation, sin A = a/c, and sin B = b/c.
Next we’ll look at cosines. Cosines are just sines of the complementary angle. Thus, the name “cosine” (“co” being the first two letters of “complement”). For triangle ABC, cos A is just sin B.
44. In a right triangle B = 55° 30', and b = 6.05. Find c and a.
191. If the height of a gable end of a roof is 22.5 feet and the rafters are 30 feet 8 inches long, at what angle do the rafters slope, and how wide is the gable end at the base?
194. The top of a ladder 50 feet long rests against a building 43 feet from the ground. At what angle does the ladder slope, and what is the distance of its foot from the wall?
28. The hypotenuse c is 15". Since sin A = a/c, therefore a = c sin A. That gives you a. Next use the Pythagorean theorem to find b knowing a and c.
44. Since sin B = b/c, you can determine c. Once you’ve got b and c, you can determine a by the Pythagorean theorem.
191. A gable end ABD of a roof is an isosceles triangle with the base being the width of the house, and the two equal sloping sides the rafters at the end of the roof. If you drop a perpendicular from the apex B of the triangle, you’ll get two congruent right triangles, ABC and DBC. Since you know two sides of the right triangle ABC, you can compute the third by using the Pythagorean theorem. You can use sines to determine the angle of slope, since sin A = BC/AB = 22.5'/30'8" = 0.7337. To find the angle A, you’ll need what’s called the arcsine of 0.7337.
The arcsine function is inverse to the sine function, and your calculator can compute them. Usually there’s a button on the calculator labeled “inv” or “arc” that you press before pressing the sin button. Then you’ll have the angle. Your calculator can probably be set to either degree mode or radian mode. If it’s set to degree mode, then you’ll get the angle in degrees; and if it’s set to radian mode, then you’ll get the angle in radians. Always be sure you know which mode your calculator’s set to.
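For anyone working these exercises with a computer rather than a calculator, here is a minimal Python sketch of the arcsine step for problem 191 (math.asin always returns radians, so math.degrees handles the mode conversion discussed above; the numbers are the ones given in the problem):

```python
import math

# Problem 191: sin A = 22.5 ft / (30 ft 8 in) = 22.5 / 30.6667
sin_A = 22.5 / (30 + 8 / 12)

A_rad = math.asin(sin_A)      # arcsine, returned in radians
A_deg = math.degrees(A_rad)   # convert radians to degrees

print(round(sin_A, 4))   # 0.7337
print(round(A_rad, 3))   # 0.824 radians
print(round(A_deg, 1))   # 47.2 degrees
```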
194. Draw a triangle ABC as above. You know the hypotenuse c and the vertical side a. The distance b can be found by the Pythagorean theorem. Just take the square root of c² – a². You can find the slope, that is, angle A, using sines. You know sin A = a/c = 43/50 = 0.86. As in problem 191, use arcsin to find the angle A.
28. a = c sin A = 15 (2/5) = 6 inches.
b² = c² – a² = 189, so b = 13.7 inches.
44. c = b/sin B = 6.05/sin 55°30' = 7.34.
a = 4.16.
191. Angle A is 0.824 radians, or 47.2° = 47°12'. The width of the gable end is 41.7' = 41'8".
194. Since c² – a² = 651, therefore the distance b is the square root, namely 25.5 feet.
Now, sin A = 0.86, so A is 1.035 radians, or about 59.32° = 59°20'. |
In the 87 days that Dennis McGillicuddy and colleagues spent in the Sargasso Sea in the summer of 2005, they were tossed around or chased by four hurricanes and two tropical storms: Franklin, Harvey, Irene, Maria, Nate, and Ophelia.
Not one of those massive storms was as powerful as the one swirling in the water beneath them.
From June to September, McGillicuddy and a team of more than 20 scientists from Woods Hole Oceanographic Institution and five other marine science labs tracked an eddy named A4. It was the oceanic equivalent of a hurricane—a huge mass of water spinning like a whirlpool, moving through the ocean for months, stretching across more than 62 miles (100 kilometers), stirring up a vortex of water and material from the depths to the surface.
“Eddies are the internal weather of the sea,” says McGillicuddy, an associate scientist in the WHOI Applied Ocean Physics and Engineering Department. But unlike destructive hurricanes, eddies can be productive. As certain types of eddies stir the ocean, they draw nutrients up from the deep, fertilizing the waters to create blooms of microscopic marine plants in the open ocean, where little life was once thought to exist.
“The open ocean is twice as productive as we can explain based on what we know about nutrients in the water,” said McGillicuddy. “Where do all the nutrients come from to make these oases in the oceanic desert?”
The Sargasso Sea—south and east of the Gulf Stream—forms the geographic center of the North Atlantic Ocean. It is warmer, saltier, bluer, and clearer than most other parts of the North Atlantic, except for the floating mats of sargassum seaweed that gave the sea its name. For centuries, prevailing wisdom was that such open ocean waters were mostly desert-like, unproductive regions.
A lecture on the Sargasso Sea in the early 1990s sparked McGillicuddy’s curiosity. In the talk, Bill Jenkins, a senior scientist in the WHOI Marine Chemistry and Geochemistry Department, pointed out that scientists were finding more oxygen being produced and consumed in the open ocean than anyone expected. The suspects were phytoplankton, microscopic marine plants that produce oxygen in photosynthesis, and zooplankton (microscopic animals) and bacteria, which use oxygen as they consume plants and organic detritus that sink to the seafloor.
Scientists found 10 times more microscopic life in the Sargasso Sea than anyone could explain, given the dearth of nitrate, phosphate, trace metals, and other nutrients that plants need to grow in sunlit surface waters. Researchers slowly developed the hypothesis that vortices of cold or warm water—eddies—might somehow act as a biological pump.
“I had proposed a problem, and Dennis suggested a solution,” Jenkins said. “He had the clever idea that eddies were perturbing the layers of the water column, mixing different waters, and bringing nutrients up from below.” The upwelling of nutrients into the euphotic zone (the top 330 feet or 100 meters of the ocean, where light penetrates) would stimulate prodigious blooms of phytoplankton, which attract zooplankton and other animals up the food chain.
The Eddies Dynamics, Mixing, Export, and Species composition (EDDIES) project was born.
“Dennis has wanted to do this experiment since he was a graduate student,” said Dave Siegel, a longtime collaborator with McGillicuddy and an oceanographer from the University of California, Santa Barbara (UCSB).
McGillicuddy mustered chemists, biologists, and physical oceanographers from WHOI, UCSB, Rutgers University, Bermuda Biological Station for Research (BBSR), Virginia Institute of Marine Sciences, Dalhousie University, and the University of Miami. Together, they secured $3.5 million from the National Science Foundation, as well as five months of ship time over two years on the WHOI-operated research vessel Oceanus and the BBSR-operated Weatherbird II.
The goal: to make detailed chemical, biological, and oceanographic measurements of a specific eddy by getting right into the middle of it.
“We didn’t want to just sit on the fence and watch from one point,” said Ken Buesseler, chairman of the WHOI Department of Marine Chemistry and Geochemistry. “Eddies move and develop, so we decided to follow a parcel of ocean as it moved. This was the first time anyone has really studied an eddy in this way.”
Eddies are distinct parcels of water that move and jostle within the ocean, much like warm and cold air masses or high- and low-pressure systems in the atmosphere. Eddies are formed by differences in ocean temperature and salinity that give water different densities. Like oil and water, water masses of different densities tend to keep separate, rather than mix.
The largest eddies can contain up to 1,200 cubic miles (5,000 cubic kilometers) of water and can last for months to a year. Earth’s rotation—the Coriolis force—gives eddies their spin.
To hunt for their target, McGillicuddy and colleagues used data from satellites, whose measurements of sea surface heights show telltale signs of eddies. Warm-water eddies form bumps in the ocean; cold-water eddies form depressions. The team examined several eddies and settled on anticyclone No. 4, or A4, a “mode water” eddy (see "The Hunt for 18° Water") that stretched some 93 miles (150 kilometers) in diameter at the surface.
The EDDIES program took a truly integrated approach, combining many tools—satellites, ships, moorings, drifters, robotic vehicles, computer models—and many types of scientists.
From June 20 to Sept. 14, 2005, the researchers zigzagged across the eddy as it drifted southwest about 3.7 miles (6 kilometers) per day. The team on Oceanus buzzed around collecting water and nutrient samples, measuring current speeds and directions, and towing WHOI biologist Cabell Davis’ Video Plankton Recorder through the turbulent swirl. Bill Jenkins and his lab mates measured natural chemical markers such as tritium, an indicator of the amount of plant-fueling nitrate being raised from the depths. WHOI Senior Scientist Jim Ledwell, an expert on using tracers in the ocean, injected sulfur hexafluoride, a harmless chemical, into the middle of the eddy and tracked how it spread up, down, and across the sea.
At the same time, a research team on Weatherbird II made targeted measurements in the core of the eddy, measuring plant and animal productivity, the movement of particles, and thorium, a radioisotope that marks how much organic material is sinking from surface waters. Siegel used a radiometer to measure whether the eddy was disturbing the light penetrating the blue water.
“Ocean scientists are moving toward a more holistic view of their research problems,” said Siegel. “Ocean science grows by filling in the cracks between disciplines. If you put a smart and diverse group of people together in a boat, a lot of good things can happen. People start to think outside of their own little research worlds, and together we can tell scientific stories that we couldn’t put together individually.”
Fueled by nutrients from the deep, diatoms bloomed to concentrations 10,000 to 100,000 times the norm—among the highest ever observed in the Sargasso Sea.
At the same time, the team was surprised to find historically low concentrations of oxygen in the depths, a sign of zooplankton and bacterial population explosions. It also meant that an awful lot of heat-trapping carbon dioxide may have been drawn out of the atmosphere and ocean surface, transformed by phytoplankton, and sunk to the bottom of the ocean.
Six months after the last EDDIES researcher stepped off Oceanus, the scientists are still assessing and analyzing the wealth of data they collected on A4. The team met in February 2006 at the international Ocean Sciences Meeting in Hawaii to share observations and collectively make sense of what they saw. Ultimately, the goal is to develop high-resolution computer models—McGillicuddy’s specialty—that can simulate and predict the full range of eddy dynamics.
The EDDIES project is a critical step toward comprehending these great ocean storms, whose sheer size and scale are daunting. During the expedition, tropical storm Harvey made a direct hit in early August, cutting a path right across eddy A4. The eddy hardly felt Harvey; the monstrous atmospheric storm never came close to breaking up the potent, voluminous swirl of water in the ocean.
The EDDIES project received funding from the Chemical Oceanography, Biological Oceanography, and Physical Oceanography branches of the National Science Foundation. |
Endoscope: Thin tube containing optical fibres
- 1 to carry light
- 1 to carry an image back
- Image seen on eyepiece or screen
- called keyhole surgery; good as doctors only need to cut a tiny hole
Pulse oximeters use light to check the oxygen levels in the blood
Haemoglobin carries oxygen from the lungs to cells. It changes colour due to oxygen content. Lots of oxygen - red (oxyhaemoglobin); lacking oxygen - purple (reduced haemoglobin)
How it works
- Transmitter emits two beams of light (red & infrared). It also has a photo detector to measure light.
- These are placed either side of the finger (or ear lobe etc)
- Beams of light are passed through the finger; some of the light is then absorbed by the blood.
- It is the difference in light intensity before and after the light passes through the blood that is used to measure the oxygen levels in the blood
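As a rough illustration of that last point, here is a minimal Python sketch, not a medically accurate algorithm: it assumes the device records the light intensity entering and leaving the finger at both wavelengths, and it maps the ratio of the two absorbed fractions to an oxygen-saturation estimate using made-up calibration constants a and b (real oximeters use empirically calibrated curves).

```python
def absorbed_fraction(intensity_in, intensity_out):
    """Fraction of the emitted light absorbed on the way through the finger."""
    return (intensity_in - intensity_out) / intensity_in

def estimate_spo2(red_in, red_out, ir_in, ir_out, a=110.0, b=25.0):
    """Estimate blood oxygen saturation (%) from red and infrared readings.

    Oxyhaemoglobin and reduced haemoglobin absorb red and infrared light
    differently, so the ratio of the two absorbed fractions tracks how much
    of the haemoglobin is carrying oxygen. The coefficients a and b are
    illustrative placeholders, not values from any real device.
    """
    ratio = absorbed_fraction(red_in, red_out) / absorbed_fraction(ir_in, ir_out)
    return a - b * ratio

# Example: well-oxygenated blood absorbs relatively little red light.
print(estimate_spo2(red_in=1.0, red_out=0.70, ir_in=1.0, ir_out=0.50))  # ~95.0
```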
1. Observe and Wonder
Watch the live feed, and encourage students to share thoughts, questions, and predictions as they observe wildlife in real-time. Wonder aloud to model how to watch the action with the eyes of a scientist.
2. Document Observations
Set up scheduled viewing times. Have pairs of students take turns viewing the live cam for a set period of time. Create an observation log where students can record what they observe. Read aloud and discuss log entries.
3. Build Observation Skills
Introduce a video clip with recorded footage and give students a viewing challenge:
- What verbs describe the behaviors you see?
- What nouns describe the animals and objects?
- What words could describe the habitat? the weather? the season?
Watch the clip several times as students fill the chart with words. Have them check a thesaurus for synonyms and related words that could be added to the A-Z chart.
4. Research and Explore
Track down facts to answer questions that were sparked by observing live cam footage. After research, revisit the live feed or video clips. Share thoughts about your before-and-after-research observations. Challenge students to explore the animal's annual cycle to discover how it responds to seasonal change. Month by month, where is the animal and what is it doing?
5. Showcase Discoveries
Have students work in small groups to creatively showcase what they discovered by observing and researching wildlife. |
Spanish conquistadors fought and explored their way through the Americas in the 16th, 17th, and 18th centuries. In their wake, they left destroyed empires and millions of lives lost. But beyond the pure number of people killed, what is most disturbing are the horrific ways conquistadors murdered the native population.
While the ways conquistadors killed people were no doubt brutal, many historians argue that some of the most extreme elements may have been exaggerated as part of an anti-Spanish smear campaign known as the Black Legend. Nonetheless, there is no doubt among historians that the Spanish conquistadors ruthlessly slaughtered millions of native people.
They Vowed to Spare an Inca Ruler for Converting to Christianity, Then Killed Him Anyway
In 1533, Spanish conquistador Francisco Pizarro captured the Inca leader Atahualpa. In order to save his own life, Atahualpa promised to fill a 24-foot-long, 18-foot-wide, and 8-foot-tall room with gold (and then double that amount in silver). Pizarro quickly accepted the deal.
But as the riches were slowly delivered over the next two months, the Spanish conquistadors grew paranoid that it was a trick, so Pizarro sentenced Atahualpa to be burned alive.
But Atahualpa knew Pizarro would never burn a Christian, so he converted. This worked - and the conquistadors garroted him instead.
They Gathered Aztec Nobles in a Courtyard, Then Killed Them All
In 1519, Aztec nobles, priests, and leaders were led into a courtyard in the city of Cholula. Then conquistadors, under the orders of Hernan Cortes, attacked and slaughtered the unarmed crowd.
Soon thereafter, the city itself was attacked by Tlaxcalan soldiers. Tlaxcalans were longtime enemies of the Aztecs, and allied with the Spanish against them.
In the end, thousands of Cholulans were dead, and Cholula, one of the key cities of the Aztec Empire, was left in ruins.
They Fed Native People to Dogs
In his book Devastation of the Indies, Fray Bartolome de Las Casas wrote about conquistadors training dogs to attack and kill natives:
The Spaniards train their fierce dogs to attack, kill and tear to pieces the Indians... The Spaniards keep alive their dogs' appetite for human beings in this way. They have Indians brought to them in chains, then unleash the dogs. The Indians come meekly down the roads and are killed. And the Spaniards have butcher shops where the corpses of Indians are hung up, on display, and someone will come in and say, more or less, "Give me a quarter of that rascal hanging there, to feed my dogs until I can kill another one for them."
They Devised a Way to Hang Natives and Burn Them Alive Simultaneously
The conquistadors often devised ways to make the deaths of the native peoples as elaborate and painful as possible. In his book The Devastation of the Indies, Fray Bartolome de Las Casas wrote, "They built a long gibbet, low enough for the toes to touch the ground and prevent strangling, and hanged thirteen [natives] at a time in honor of Christ Our Saviour and the twelve Apostles..."
Then, once the natives were near death, "straw was wrapped around their torn bodies and they were burned alive." |
This NASA/ESA Hubble Space Telescope image shows a spiral galaxy known as NGC 7331. First spotted by the prolific galaxy hunter William Herschel in 1784, NGC 7331 is located about 45 million light-years away in the constellation of Pegasus (The Winged Horse). Facing us partially edge-on, the galaxy showcases its beautiful arms, which swirl like a whirlpool around its bright central region.
Astronomers took this image using Hubble’s Wide Field Camera 3 (WFC3), as they were observing an extraordinary exploding star — a supernova — which can still be faintly seen as a tiny red dot near the galaxy’s central yellow core. Named SN2014C, it rapidly evolved from a supernova containing very little Hydrogen to one that is Hydrogen-rich — in just one year. This rarely observed metamorphosis was luminous at high energies and provides unique insight into the poorly understood final phases of massive stars.
NGC 7331 is similar in size, shape, and mass to the Milky Way. It also has a comparable star formation rate, hosts a similar number of stars, has a central supermassive black hole and comparable spiral arms. The primary difference between our galaxies is that NGC 7331 is an unbarred spiral galaxy — it lacks a “bar” of stars, gas and dust cutting through its nucleus, as we see in the Milky Way. Its central bulge also displays a quirky and unusual rotation pattern, spinning in the opposite direction to the galactic disc itself.
By studying similar galaxies we hold a scientific mirror up to our own, allowing us to build a better understanding of our galactic environment which we cannot always observe, and of galactic behaviour and evolution as a whole.
Credit: ESA/Hubble & NASA/D. Milisavljevic (Purdue University) |
Between 1764 and the Declaration of Independence in 1776 Americans produced a rich series of pamphlets and resolutions listing their grievances against the central government of the British Empire. As I have pointed out before, reading those pamphlets is very helpful in understanding what the Constitution really means. And ignorance of them contributes to common constitutional mistakes.
These pamphlets are particularly useful in comprehending the Founders’ version of federalism. This is because the constitutional balance between states and federal government partly reflected what the Founders had wanted the balance to be between colonies and imperial government.
One of the most extraordinary of these pamphlets is little-known today, but it deserves much more attention. It is "The Votes and Proceedings of the Freeholders and other Inhabitants of the Town of Boston in Town Meeting assembled According to Law." Historians refer to it as "The Boston Pamphlet."
The Boston Pamphlet was the product of the Boston Committee of Correspondence, a group consisting of patriots such as James Otis and Sam Adams. The people of the Town of Boston formally approved the Pamphlet on November 20, 1772, whereupon they sent it to other Massachusetts towns for their consideration and response.
The Boston Pamphlet’s statement of natural rights anticipates the statement of natural rights expressed in the Declaration of Independence. The Pamphlet’s view of the limits on British power anticipates the balance the Framers struck in the Constitution.
Among the Boston Pamphlet’s statements of natural law are:
* All men have a “right to life, liberty, property.”
* In Case of intollerable [sic] Oppression, people have the right to leave the Society they belong to, and enter into another.
* Every natural Right, not expressly given up, or from the Nature of the social Compact necessarily ceded, remains.
* It is absurd to argue that men renounce their essential natural Rights, or the Means of preserving those Rights; when the grand End of civil Government from the very Nature of its Institution, is for the Support, Protection and Defence of those very Rights.
Other portions of the document shed light on provisions in the Constitution. For example, the statement that the people have “the Right to support and defend [their natural rights] in the best Manner they can,” is an important indication that the Second Amendment’s right to keep and bear arms includes a personal right of self-defense. Similarly, the statement that “every Man . . . has a Right peaceably and quietly to worship God, according to the Dictates of his Conscience,” supports the well-documented conclusion that the First Amendment requires the federal government to treat all religions equally, but does not protect irreligion. (Regretfully, for reasons too complicated to discuss here, the Boston Pamphlet excluded Catholics from protection; however, the First Amendment did not.)
In addition, the Pamphlet tells us that:
* “Public officials are mere servants of those they serve,” a primary tenet of Founding-Era political theory that underlies several constitutional provisions.
* “The Legislative has no Right to absolute arbitrary Power over the Lives and Fortunes of the People,” foreshadowing the limited nature of congressional authority.
* “There should be one Rule of Justice for Rich and Poor; for the Favourite at Court, and the Countryman at the Plough,” embodying the “equal protection” principle appearing in several parts of the original Constitution and strengthened by the Fourteenth Amendment.
* “The Supreme Power cannot justly take from any Man, any Part of his Property without his Consent, in Person or by his Representative,” foreshadowing legislative control over finance.
* The colonists “have and enjoy, all Liberties and Immunities of free and natural Subjects. . . as if they . . . were born within . . . [the] Realm of England,” foreshadowing the Privileges and Immunities Clause of Article IV.
* Complaining of how royal officials would “enter and go board any Ship, Boat, or other Vessel. . . and also in the day-time to go into any House, Shop, Cellar, or any other Place, where any Goods, Wares or Merchandizes lie concealed, or are suspected to lie concealed” and “our Boxes, Trunks and Chests broke open, ravaged and plundered . . . ” all giving meaning to the Fourth Amendment.
* Complaining of extending central control of the judicial system (and violating trial by jury) at the expense of local courts, thereby foreshadowing the limits on federal courts set forth in Article III and in the Bill of Rights.
* While implicitly conceding London’s control over commerce among units of the British Empire (as virtually all Americans did), still bitterly complaining of London’s efforts to restrict colonial manufacturing and local commerce, thus anticipating the limits on Congress’s Commerce Power.
Several of the governmental abuses recited in the Boston Pamphlet have returned. As the imperial government did then, the federal government now meddles in local judicial matters, restricts manufacture and intra-state transport, and engages in random searches and seizures.
There were at least two other ways the Boston Pamphlet foreshadowed the future. First, it accurately predicted that, “The Inhabitants of this Country, in all Probability, in a few Years, will be more numerous, than those of Great Britain and Ireland together. . . .”
And by noting that “The Colonists have been branded with the odious Names of Traitors and Rebels only for complaining of their Grievances,” the Boston Pamphlet anticipated the venom slung at the Tea Party patriots of our own time. |
Why is Robert Bruce a giant? Because he proved to the medieval world that superior military might could be overcome even in battle by a well-disciplined, tactically innovative and brilliantly led force. Because his approach to warfare gave food for thought to many on the opposing side, and helped to inspire the evolution of English military tactics over the following century, contributing to their extraordinary performance during the Hundred Years War with France. Because the difficult circumstances of his reign, both internally and externally, helped to produce a more coherent and uniform expression of Scottish national identity, contributing to the development of such ideas more generally. This is the story of the revolutionary medieval Scottish king, renowned across Christendom as a great and innovative warrior. |
Scientific name: Ophiostoma ulmi and O. novo-ulmi
Native range: Europe
Some municipalities require control of elm trees infected with Dutch elm disease in order to prevent its spread to other elm trees in the municipality. Because the disease is widespread, there are no state or federal regulations related to Dutch elm disease in Minnesota.
Dutch elm disease was first found in Minnesota during 1961 in St. Paul and can now be found throughout Minnesota. The spread of Dutch elm disease in Minnesota was documented by the University of Minnesota.
Dutch elm disease is caused by the fungi Ophiostoma novo-ulmi and O. ulmi. These fungi are often vectored by elm bark beetles, of which there are a few species found in Minnesota. When bark beetles feed on twigs and branches, the fungus is introduced into the vascular system and spreads to other parts of the tree, including the roots. The tree tries to stop the spread of the fungus by producing plug-like structures, which actually block the flow of water and contribute to the wilting. Very susceptible trees may die the same season they were infected; others may take several years. The fungus can spread to adjacent elm trees through root grafts.
The first symptom in trees infected with Dutch elm disease is usually a small area of yellow or brown wilting foliage called “flagging,” often beginning with a branch on the edge of the crown. The area expands and progresses toward the trunk. Wilted branches may have brown streaking in the sapwood which can be seen if the bark is removed. The University of Minnesota plant disease clinic can test elm samples for Dutch elm disease.
More on identifying Dutch elm disease from the U.S. Forest Service.
Host Plants and Impact - Elm trees are still a significant part of many forests and urban landscapes. Elm trees planted in communities today are usually of cultivars considered resistant to Dutch elm disease.
Visit the University of Minnesota website for information about Dutch elm disease.
Check and follow local regulations regarding removal and disposal of Dutch elm-infected trees as well as storage of elm firewood. One of the primary means of minimizing the incidence of Dutch elm disease in urban areas is good sanitation of infected material. Injections with fungicides can help protect elms against the disease when it is spread by insect feeding. |
Sometimes, human beings face mental disorders that leave them unable to behave normally. In medical science, such conditions are categorized into different groups of diseases, each with its own causes, symptoms and treatments. Some types of these mental diseases are Psychosis, Aspergers, Schizophrenia and Bipolar disorder. Here is a brief overview of each.
Psychosis is a type of mental disorder in which the affected person becomes caught up in imaginary beliefs and loses their grip on reality. In severe cases, the affected person may also claim to see and hear things that are not actually there. Excessive drug use is a major cause of this disease.
Aspergers is also characterized as an abnormal mental condition, a type of pervasive developmental disorder. It usually becomes apparent in childhood, marked by difficulties with attention and communication. In this condition, the affected person shows no willingness to develop social interactions, friendships or competition with others.
Schizophrenia is a severe mental disorder that mostly appears in the teenage years or early adulthood. In this condition, the patient finds it difficult to think sensibly, shows abnormal and overly emotional behavior in society, and becomes confused between real and imaginary things. No definite cause of this mental disability has yet been identified, but there may be genetic factors behind the disorder.
Bipolar disorder is a type of mental abnormality in which a person shows both irritable and depressed behavior at different times. An irritated mood, high energy levels and quick mood swings between mania and depression are the signs of this disorder, which may be due to excessive use of drugs and steroids, side effects of medicines, or changes in lifestyle.
Psychosis vs Aspergers vs Schizophrenia vs Bipolar Disorder
All these diseases are related to mental disorder, but they have different symptoms, causes and treatments. In the case of Psychosis, the patient builds a fantasy world of their own and claims to see and hear unreal things. Aspergers is a childhood mental disease in which the patient likes to remain alone and shows no keenness for social interaction or developing relationships. In the case of Schizophrenia, the person shows a dulled manner, without sensible thinking or behavior. In the case of Bipolar disorder, the person's mood swings between depression and mania, accompanied by irritable behavior.
Listening to the sounds of the sun can help in predicting sunspots that are in their early stages of development and can give at least two days of warning — possibly enough time for a safety plan to be executed, researchers at Stanford University found.
They have developed a method for allowing them to peer deep into the sun's interior by using acoustic waves to catch early stage sunspots. Sunspots develop in active solar regions of strong, concentrated magnetic fields and appear dark when they reach the surface of the sun. Eruptions of the intense magnetic flux leads to solar storms, but until now, no one was able to predict them.
Many solar physicists tried different ways to predict when sunspots would appear, but with no success, said Phil Scherrer, a professor of physics in whose lab the research was done. He spoke in a statement released by the university.
The new method uses acoustic waves generated inside the sun by the turbulent motion of plasma and gases. In the near-surface region, small-scale convection cells, which are about the size of California, generate sound waves that travel to the interior of the sun and are refracted back to the surface.
The researchers got some assistance from the Michelson Doppler Imager aboard NASA's Solar and Heliospheric Observatory satellite, known as SOHO. The craft spent some 15 years making detailed observations of the sound waves within the sun. It was superseded in 2010 with the launch of NASA's Solar Dynamics Observatory satellite, which carries the Helioseismic and Magnetic Imager, according to the university in a news release.
Stathis Ilonidis, a Stanford graduate student in physics and lead author of the paper on the research, used the data generated by the two imagers and was able to develop a way to reduce the electronic clutter in the data so he could accurately measure the solar sounds. The new method enabled Ilonidis to detect sunspots in the early stages of formation as deep as 65,000 kilometers inside the sun. One to two days later, the sunspots would appear on the surface.
The principles used to track and measure the acoustic waves traveling through the sun are comparable to measuring seismic waves on Earth. The researchers measure the travel time of acoustic waves between widely separated points on the solar surface, a news release stated.
"We know enough about the structure of the sun that we can predict the travel path and travel time of an acoustic wave as it propagates through the interior of the sun," said Junwei Zhao, a senior research scientist at Stanford's Hansen Experimental Physics Lab. "Travel times get perturbed if there are magnetic fields located along the wave's travel path."
Those perturbations are what tip off the researchers that a sunspot is forming.
By measuring and comparing millions of pairs of points and the travel times between them, the researchers are able to pinpoint the anomalies that reveal the growing presence of magnetic flux associated with an early sunspot.
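A heavily simplified Python sketch of that time-distance idea is shown below. The function names, the use of a simple cross-correlation peak, and the 12-second threshold are all illustrative assumptions for this sketch; the actual analysis involves careful filtering, phase-speed selection, and averaging over millions of point pairs.

```python
import numpy as np

def travel_time(doppler_a, doppler_b, dt):
    """Estimate the acoustic travel time between two surface points.

    The inputs are Doppler-velocity time series recorded at two widely
    separated points on the solar surface, sampled every dt seconds; the
    lag at which their cross-correlation peaks is taken as the travel time.
    """
    n = len(doppler_a)
    lags = np.arange(-n + 1, n) * dt
    corr = np.correlate(doppler_b, doppler_a, mode="full")
    return lags[np.argmax(corr)]

def flag_emerging_flux(measured_time, predicted_time, threshold=12.0):
    """Flag a pair of points whose measured travel time differs from the
    quiet-sun prediction by more than `threshold` seconds (the threshold
    here is an arbitrary illustrative number, not one from the study)."""
    return abs(measured_time - predicted_time) > threshold
```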
They found that sunspots that ultimately become large rise up to the surface quicker than those that stay small. The larger sunspots are the ones that spawn the biggest disruptions, and for those the warning time is roughly a day. The smaller ones can be found up to two days before they reach the surface, the release stated.
"Researchers have suspected for a long time that sunspot regions are generated in the deep solar interior, but until now the emergence of these regions through the convection zone to the surface had gone undetected," Ilonidis said. "We have now successfully detected them four times and tracked them moving upward at speeds between 1,000 and 2,000 kilometers per hour."
One of the big goals with forecasting space weather is achieving a three-day warning time of impending solar storms. That would give the potential victims a day to plan, another day to put the plan into action and a third day as a safety margin.
If disruptions such as solar flares and mass eruptions could be predicted, measures could be taken to protect vulnerable electronics before solar storms strike. They can wreak havoc on communication systems, air travel, power grids and satellites, as well as astronauts in space.
The research is published in the Aug. 19 edition of Science. |
Over this year, I have had countless times when I have heard my son say, “It’s not fair” about something that happened on the playground at school. Usually, I have to talk about how even if it is not fair, there are certain constraints that he may have to face. He usually begrudgingly goes up to his room to ponder why – when Roger took cuts in the handball line – he has to live with the unfairness. Come to find out that my son’s sense of fairness and equality in treatment is hardwired in his and everyone’s brain. According to new research, the human brain physically wants things to be fair and equal. Amazingly, scientists from the California Institute of Technology (Caltech) and Trinity College in Dublin, Ireland, have become the first to have photographs (MRI) to prove it.
Specifically, the team found that the reward centers in the human brain respond more strongly when a poor person receives a financial reward than when a rich person does. The surprising thing? This same pattern holds true even if the brain being looked at is in the rich person’s head, rather than the poor person’s.
“It’s long been known that we humans don’t like inequality, especially when it comes to money. Tell two people working the same job that their salaries are different, and there’s going to be trouble,” notes John O’Doherty, professor of psychology at Caltech, Thomas N. Mitchell Professor of Cognitive Neuroscience at the Trinity College Institute of Neuroscience, and the principal investigator on the project. However, what we apparently didn’t know is how much the brain really disliked inequality of treatment.
“In this study, we’re starting to get an idea of where this inequality aversion comes from,” he says. “It’s not just the application of a social rule or convention; there’s really something about the basic processing of rewards in the brain that reflects these considerations.”
The study found that the reward centers in the volunteers’ brains reacted to the various scenarios differently, depending strongly upon whether they started the experiment with a financial advantage over their peers. The study made subjects in the experiment richer or poorer compared to others in the study.
“People who started out poor had a stronger brain reaction to things that gave them money, and essentially no reaction to money going to another person,” Mr. Camerer, a co-author, says. “By itself, that wasn’t too surprising.”
What was surprising was the other side of the coin. “In the experiment, people who started out rich had a stronger reaction to other people getting money than to themselves getting money,” Camerer explains. “In other words, their brains liked it when others got money more than they liked it when they themselves got money.” Wow!
The discovery finds that the brain’s positive reaction is not just when we are self-interested, adds O’Doherty. “They don’t exclusively respond to the rewards that one gets as an individual, but also respond to the prospect of other individuals obtaining a reward.”
What was especially interesting about the finding, he says, is that the brain responds “very differently to rewards obtained by others under conditions of disadvantageous inequality versus advantageous inequality. It shows that the basic reward structures in the human brain are sensitive to even subtle differences in social context.”
“As a psychologist and cognitive neuroscientist who works on reward and motivation, I very much view the brain as a device designed to maximize one’s own self interest,” says O’Doherty. “The fact that these basic brain structures appear to be so readily modulated in response to rewards obtained by others highlights the idea that even the basic reward structures in the human brain are not purely self-oriented.”
Having watched the brain react to inequality, O’Doherty says, the next step is to “try to understand how these changes in valuation actually translate into changes in behavior. For example, the person who finds out they’re being paid less than someone else for doing the same job might end up working less hard and being less motivated as a consequence. It will be interesting to try to understand the brain mechanisms that underlie such changes.”
This physical reaction could partly explain why there is so much trauma associated with early offers or demands by parties that are so far out of the realm of possibility as to make them unfair. Many times people have a visceral reaction to unfair offers — and that reaction can often be violent (in the non-physical sense). Indeed, just yesterday I had a mediation that one side felt was going in an unfair fashion. The plaintiff felt that the offers being provided by the insurance company were far below the value of the case. The plaintiff felt that the end of the day number was also far below what was “fair” for the case. This presented a huge obstacle to overcome.
In my case, time was an important ally to allow the party to digest the information and for the logical brain to rationalize and make sense of the instant physical reaction that is now apparent was going on. Second, we discussed that although the plaintiff may perceive the result as unfair, what would be their reaction to a possibly even more unfair jury verdict in a very difficult jurisdiction. That time and discussion was helpful in allowing the party to finally realize that even if the end offer was unfair, it was better than what could happen at trial.
This brings up an important point: People’s initial reactions (which could be physical and uncontrollable) do not dictate the end outcome. With time and effort, that initial sense of unfairness might be overcome, given the right conditions to demonstrate that some sense of fairness or justice can be delivered.
California Institute of Technology (2010, February 24). Scientists find first physiological evidence of brain’s response to inequality. ScienceDaily. Retrieved February 26, 2010, from http://www.sciencedaily.com/releases/2010/02/100224132453.htm |
Dinosaurs descended from reptiles and evolved into today's birds, but their growth and sexual maturation were more like that of mammals - complete with teen pregnancy, according to a new study by University of California, Berkeley, scientists.
Though dinosaurs grew for much of their lives, they experienced a rapid growth spurt in adolescence, like mammals, said UC Berkeley graduate student Sarah Werning. She and Andrew H. Lee, a recent UC Berkeley Ph.D. recipient who is currently a postdoctoral fellow at Ohio University's College of Osteopathic Medicine in Athens, Ohio, have now shown that dinosaurs reached sexual maturity near the end of this rapid growth phase, well before reaching maximum body size. Medium-to-large mammals, including humans, also are able to reproduce before they finish growing.
The finding, Werning said, suggests that dinosaurs were born precocious and suffered high adult mortality, making early sexual maturity necessary for survival.
"This is an exciting finding, because age at sexual maturity is related to so many things," said the students' advisor, Kevin Padian, who is a professor of integrative biology and a curator in UC Berkeley's Museum of Paleontology. "It also shows that you can't use reptiles as a model for dinosaur growth, as many scientists still do."
Pinpointing the age of reproductive maturity "opens up so many complementary avenues of dinosaur research," Werning added. "You can talk about dinosaur physiology, lifespan, reproductive strategies. And you could use this technique to look at all kinds of extinct animals."
The conclusion, reported the week of Jan. 14 in the online early edition of the journal Proceedings of the National Academy of Sciences, comes from an analysis of the only three dinosaur fossils that have been definitively identified as female. Thin slices of these dinosaurs' fossil bones all show an internal structure similar to tissue found in living female birds - a layer of calcium-rich bone tissue called medullary bone that is deposited in the marrow cavity just before egg-laying as a resource for making eggshells.
Dinosaurs, which also laid eggs, apparently stored calcium in similar structures prior to ovulation. In their new paper, Werning and Lee report that leg bones from the carnivorous Allosaurus and the plant eater Tenontosaurus both contained this structure, which means both creatures died shortly before laying eggs. The researchers concluded that these dinosaurs were both mere adolescents, because the Allosaurus was age 10 and the Tenontosaurus age eight at time of death, and prior studies have shown that these types of dinosaurs probably lived up to 30 years.
Werning and Lee also confirmed that a third bone, from a female Tyrannosaurus rex (T. rex) reported by Museum of the Rockies paleontologist Mary H. Schweitzer in 2005, contained medullary tissue upon the dinosaur's death at the age of 18. Werning noted that all three dinosaurs might have reached sexual maturity much earlier.
"We were lucky to find these female fossils," Werning said. "Medullary bone is only around for three to four weeks in females who are reproductively mature, so you'd have to cut up a lot of dinosaur bones to have a good chance of finding this."
In the past 10 to 15 years, studies of dinosaur bones have revealed much about the growth strategy of dinosaurs because bone lays down rings much like tree rings. If, as with trees, each ring signifies one year, then dinosaurs grew rapidly after birth and continued to grow over several years until death. Despite the presumed close relationship between dinosaurs and reptiles, dinosaurs grew faster than living reptiles, and their bones had a bigger blood supply. Among living vertebrates, only birds and mammals exhibit such fast growth. Birds and small mammals grow quickly to maturity and then become sexually mature, but large mammals reach sexual maturity just before growth slows.
Attempts to determine when dinosaurs became sexually mature, and thus whether they more closely resemble birds or mammals, have been difficult because there have been no clear signs of reproductive maturity in dinosaur skeletons.
Hence the excitement when Schweitzer discovered medullary bone in a T. rex femur. Though other paleontologists have searched fruitlessly for similar signs in fossil bones, Werning and Lee found success by focusing on Tenontosaurus, perhaps the most common and most boring dinosaur in North America, and Allosaurus, a T. rex-like predator.
Tenontosaurus lived in North America during the Early Cretaceous period, 125 to 105 million years ago, and was an ancestor of the duck-billed dinosaurs. A common plant eater, it is known for its long tail that made the dinosaur up to 27 feet long when walking on four legs. Because fossils of these one- to two-ton beasts are common in Oklahoma, Werning was able to obtain many fossil bone slices from the Oklahoma Museum of Natural History. Both a femur (thigh bone) and a tibia (shin bone) from the same fossilized Tenontosaurus showed medullary bone, while growth rings in its bones indicated the pregnant dinosaur was eight years old.
"These were prey dinosaurs, so they were probably taken out when really young and small or when old," Werning said. "So, if you don't reproduce early, you lose your chance."
Lee, on the other hand, focused on Allosaurus fossils from the Cleveland-Lloyd quarry in Utah, where several thousand Allosaurus bones from at least 70 individuals have been discovered. A smaller and older version of T. rex, Allosaurus lived 155 to 145 million years ago in the late Jurassic period. Lee found one tibia with medullary bone from the University of Utah vertebrate paleontology collection.
The two researchers are continuing to analyze thin slices of fossilized dinosaur bone in hopes of finding more skeletons with medullary bone.
The work was made possible by grants from the Geological Society of America, the Paleontological Society and the University of Oklahoma Graduate Student Senate to Werning and by grants to Lee from the Jurassic Foundation and UC Berkeley's Department of Integrative Biology.
Materials provided by University of California - Berkeley.
For the first time, astronomers have analyzed the atmosphere of an exoplanet in the class known as super-Earths. Using data gathered with the Hubble Space Telescope and new analysis techniques, the exoplanet 55 Cancri e is revealed to have a dry atmosphere without any indications of water vapor. The results indicate that the atmosphere consists mainly of hydrogen and helium.
The international team, led by scientists from University College London (UCL), took observations of the nearby exoplanet 55 Cancri e, a super-Earth with a mass of eight Earth-masses. It is located in the planetary system of 55 Cancri, a star about 40 light-years from Earth.
Using observations made with the Wide Field Camera 3 (WFC3) on board Hubble, the scientists were able to analyze the atmosphere of this exoplanet. This makes it the first detection of gases in the atmosphere of a super-Earth. The results allowed the team to examine the atmosphere of 55 Cancri e in detail and revealed the presence of hydrogen and helium, but no water vapor. These results were only made possible by exploiting a newly-developed processing technique.
“This is a very exciting result because it’s the first time that we have been able to find the spectral fingerprints that show the gases present in the atmosphere of a super-Earth,” said Angelos Tsiaras from UCL, who developed the analysis technique along with his colleagues Ingo Waldmann and Marco Rocchetto. “The observations of 55 Cancri e’s atmosphere suggest that the planet has managed to cling on to a significant amount of hydrogen and helium from the nebula from which it originally formed.”
Super-Earths like 55 Cancri e are thought to be the most common type of planet in our galaxy. They acquired the name “super-Earth” because they have a mass larger than that of Earth but are still much smaller than the gas giants in the solar system. The WFC3 instrument on Hubble has already been used to probe the atmospheres of two other super-Earths, but no spectral features were found in those previous studies.
55 Cancri e, however, is an unusual super-Earth as it orbits close to its parent star. A year on the exoplanet lasts for only 18 hours, and temperatures on the surface are thought to reach around 3,600° F (2,000° C). Because the exoplanet is orbiting its bright parent star at such a small distance, the team was able to use new analysis techniques to extract information about the planet during its transits in front of the host star.
Observations were made by scanning the WFC3 quickly across the star to create a number of spectra. By combining these observations and processing them through analytic software, the researchers were able to retrieve the spectrum of 55 Cancri e embedded in the light of its parent star.
“This result gives a first insight into the atmosphere of a super-Earth. We now have clues as to what the planet is currently like and how it might have formed and evolved, and this has important implications for 55 Cancri e and other super-Earths,” said Giovanna Tinetti, also from UCL.
Intriguingly, the data also contain hints of hydrogen cyanide, a marker for carbon-rich atmospheres.
“Such an amount of hydrogen cyanide would indicate an atmosphere with a very high ratio of carbon to oxygen,” said Olivia Venot, KU Leuven, who developed an atmospheric chemical model of 55 Cancri e that supported the analysis of the observations.
“If the presence of hydrogen cyanide and other molecules is confirmed in a few years time by the next generation of infrared telescopes, it would support the theory that this planet is indeed carbon rich and a very exotic place,” concluded Jonathan Tennyson from UCL. “Although hydrogen cyanide, or prussic acid, is highly poisonous, so it is perhaps not a planet I would like to live on!”
This video shows an artist's impression of the super-Earth 55 Cancri e moving in front of its parent star. During these transits, astronomers were able to gather information about the atmosphere of the exoplanet and to retrieve the spectrum of 55 Cancri e embedded in the light of its parent star.
Music makes a kind of liquid link between the study of languages, literature and the other arts, history, and the sciences – joining them together in the outer world of feelings and relationships and the inner world of imagination.
Dr Robin Holloway, Composer.
Here at Amesbury Primary School we recognise that music pervades every aspect of life. It can soothe, excite, stimulate or calm. It can satisfy creativity through composition and performance. It can enrich the understanding of other nations through the study of the cultural variety of musical styles and instruments.
We believe that all pupils should have access to music appropriate for their age and stage of development. Learning objectives will follow National Curriculum guidelines, broken down into Key Skills. The Music Scheme of work is to be integrated as far as possible into class topics.
Curriculum and school organisation
As outlined in the National Curriculum, the Programmes of Study for music contains two sets of requirements:
Knowledge, Skills and Understanding identifies the aspects of music in which pupils make progress:
- Controlling sounds through singing and playing – performing skills.
- Creating and developing musical ideas – composing skills.
- Responding and reviewing – appraising skills.
- Listening, and applying knowledge and understanding.
These are broken down into Key Skills under the headings of Performing, Composing and Appraising and show progression through National Curriculum levels. Teachers should use these when planning to ensure all aspects of the music curriculum are covered.
Breadth of study – details how the above aspects of music are developed in teaching the content through:
- A range of musical activities that integrates performing, composing and appraising.
- Responding to a range of musical and non-musical starting points.
- Working on their own, in groups of different sizes and as a class.
- Using ICT to manipulate and refine sounds.
- A range of live and recorded music from different times and cultures.
Class teachers are responsible for planning and delivering the music curriculum with guidance from the Music Subject Leader as required. Shared planning within the Key Stages as part of topic work is encouraged, and is part of the school’s wider “Creative Curriculum”.
Activities may be grouped according to ability, friendship, small groups, year groups or whole class. They may be teacher led or open-ended and differentiated by task/outcome.
Music and our local area
Where possible, Amesbury Primary School includes local culture, skills and traditions as part of its wider curriculum provision.
In music, we meet this obligation by:
- Liaison with local high schools.
- Welcoming local musicians and bands to our schools.
- “Arts Week” or ‘Mardi Gras’
- Local songs and musical traditions
- Taking part and/or visiting appropriate shows
As part of our wider curriculum provision, the school has decided to consider what opportunities exist outside of the National Curriculum in each of its subject areas.
In music, we ensure that:
- All children experience live music played at least once every year.
- Some children will have the opportunity to be part of a large musical event such as the Primary Choir event which brings together 1,500 children to sing and perform.
- Our local secondary school offers the opportunity for children to join a rock band and perform live.
- During their time at Amesbury Primary School, all children will experience live music from different cultures.
Learning to play a musical instrument.
All children will have the opportunity to learn to play the recorder. This starts in Year 3, with the children given the opportunity to move on to the clarinet in Year 4. Overall, the children will have had at least three terms of recorder lessons during their time at Amesbury School.
Extra Curricular Activities.
At present, Choir is open to all KS2 children and is held weekly. A small group participates in clarinet tuition. Other clubs, including guitar, may be run on a short-term basis each school year.
A specialist teacher visits the school to deliver instrumental lessons (currently clarinet), while guitar and recorder lessons are delivered within school.
Children who play instruments at school or take part in extra-curricular activities are given the chance to perform for others in class, in assemblies and at school events such as church services.
We welcome local musicians into school to perform for the children when possible to provide opportunities to hear different styles of music.
Our Choir visits the Amesbury Abbey and various other Nursing Homes and Events annually to sing for residents. We take part in the nationally organised 1,500 strong Primary School Choir event in Basingstoke annually.
It is the responsibility of all teachers to ensure that all pupils, irrespective of gender, ability, race and social circumstance, have access to the music curriculum and make the greatest progress possible.
Special Educational Needs / Inclusion
The school will work to ensure that all pupils including those with special educational needs are provided with an appropriate music curriculum. In order to achieve this, teachers will work to:
- Set suitable learning challenges.
- Respond to pupils’ diverse learning needs.
- Overcome potential barriers to learning and assessment for individuals and groups of pupils.
Able, Gifted and Talented children will be identified and noted on the school AGT register. Appropriate opportunities should be provided for them to share and develop their talents. |
What is a Hook? In software development, hooking is a concept that allows you to modify the behavior of a program. It is the chance that code gives you to change the original behavior of something without changing the code of the corresponding class. This is done by overriding the hook methods.
This type of implementation is very useful for adding new functionality to applications, and it also facilitates communication between other processes and messages in the system. Hooks tend to decrease system performance by increasing the processing load the system must perform for each message, so a hook should be installed only when needed and removed as soon as possible.
Imagine that you are using a content management system (CMS) from a third party and you would like a super administrator to be warned by email every time a new post is published, and that this behavior is not the default of the tool. There would be a few ways forward:
- Changing the CMS source code is not a good idea; after all, with the next update of the tool you will face the dilemma of either losing your change or not being able to keep everything up to date;
- Creating your own CMS is another bad idea; after all, you do not have enough time or resources to build something new, let alone maintain it;
- Investigating the possibility of using a hook is the better path: check whether the CMS looks in external modules or plugins for functions with a given name to be executed at the desired moment, in this case the publication of new posts (a minimal sketch of this approach follows the list).
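As a rough illustration of the third option, here is a minimal C++ sketch of the mechanism a host application might expose. The names (HookRegistry, onPostPublished) are purely illustrative and are not taken from any real CMS; actual systems differ in how callbacks are registered and named, but the principle is the same: the host calls out to whatever was registered under an agreed-upon name.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical hook registry: plugins register callbacks under a named hook,
// and the host application fires them at the corresponding moment.
class HookRegistry {
public:
    using Callback = std::function<void(const std::string&)>;

    void add(const std::string& hookName, Callback cb) {
        hooks_[hookName].push_back(std::move(cb));
    }

    void fire(const std::string& hookName, const std::string& payload) const {
        auto it = hooks_.find(hookName);
        if (it == hooks_.end()) return;              // no plugin hooked this event
        for (const auto& cb : it->second) cb(payload);
    }

private:
    std::map<std::string, std::vector<Callback>> hooks_;
};

int main() {
    HookRegistry registry;

    // A "plugin" hooks the post-publication event without touching core code.
    registry.add("onPostPublished", [](const std::string& title) {
        std::cout << "Mailing the administrator: new post \"" << title << "\"\n";
    });

    // Somewhere inside the host application, after a post is saved:
    registry.fire("onPostPublished", "Hello, world");
}
```

Real systems add details such as priorities, return values and unregistration, but the essential contract, "the host promises to call whatever is registered under this name at this moment", is exactly what makes the hook approach safer than patching the CMS itself.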
Extensibility is another advantage of using hook methods, which allow the application to extend its stable interfaces. Hook methods decouple stable interfaces from the behavior of a variation domain that can arise when an application is instantiated for a particular context.
Hooks as Design Patterns
It is interesting to note that many (almost all) design patterns typify semantics for hooks: they describe how to implement the sub-systems of hot spots. Some are based on the separation construction principle: Abstract Factory, Builder, Command, Interpreter, Observer, Prototype, State, and Strategy.
Others are based on both the unification and separation construction principles: Template Method and Bridge.
The semantics are typically expressed in the hook method's name (for example, in the Command pattern, the method is called execute()).
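To make the connection concrete, here is a small generic sketch of the Template Method form of a hook. The class and method names are illustrative only; the point is that the base class fixes the stable algorithm while subclasses override hook methods to vary individual steps.

```cpp
#include <iostream>

// Template Method: the base class fixes the overall algorithm (the stable
// interface) and exposes hook methods that subclasses may override.
class ReportGenerator {
public:
    void generate() {                 // the template method: stable, not overridden
        collectData();
        formatBody();
        afterGenerate();              // hook: optional extension point
    }
    virtual ~ReportGenerator() = default;

protected:
    virtual void collectData() { std::cout << "collecting default data\n"; }
    virtual void formatBody()  { std::cout << "formatting plain-text body\n"; }
    virtual void afterGenerate() {}   // empty default, so hooking is optional
};

class HtmlReport : public ReportGenerator {
protected:
    void formatBody() override { std::cout << "formatting HTML body\n"; }
    void afterGenerate() override { std::cout << "notifying subscribers\n"; }
};

int main() {
    HtmlReport report;
    report.generate();   // same algorithm, customized through the hook methods
}
```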
Virtual Method Table hooking
Virtual methods are called in the same way as static methods, but because virtual methods can be overridden, the compiler does not know the address of a particular virtual function when you call it in your code. For this reason, the compiler builds a Virtual Method Table (VMT), which provides a means to look up function addresses at runtime. All virtual methods are dispatched at runtime through the VMT. The VMT of an object contains all the virtual methods of its ancestors, as well as those it declares itself. For this reason, virtual methods use more memory than dynamic methods, although they run faster.
Since the VMT is a table of pointers holding the memory addresses of the object's virtual functions, what needs to be done is to replace an original address with the address of a valid hook function. In this way, the call to that method is redirected, and the new desired behavior of the function is executed.
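Below is a minimal sketch of what that pointer swap can look like on Windows. It assumes the common compiler layout in which the object's first pointer-sized field is the VMT pointer; that layout is compiler-specific and the cast is formally undefined behavior, and the names HookVirtualMethod and slotIndex are illustrative rather than part of any library.

```cpp
#include <windows.h>

// Minimal VMT-patching sketch (Windows). `slotIndex` is the position in the
// table of the virtual function we want to replace. Returns the original
// pointer so the caller can still invoke or later restore it.
void* HookVirtualMethod(void* object, int slotIndex, void* hookFunction) {
    void** vtable = *reinterpret_cast<void***>(object);   // fetch the object's VMT

    DWORD oldProtection;                                   // the VMT is usually read-only
    VirtualProtect(&vtable[slotIndex], sizeof(void*),
                   PAGE_EXECUTE_READWRITE, &oldProtection);

    void* original = vtable[slotIndex];                    // keep the original address
    vtable[slotIndex] = hookFunction;                      // redirect the slot to our hook

    VirtualProtect(&vtable[slotIndex], sizeof(void*),
                   oldProtection, &oldProtection);
    return original;
}
```

Note that every object sharing that VMT is affected, and the hook must use the same calling convention as the original virtual method, a detail a real implementation has to handle.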
The API hooking technique effectively allows you to reprogram the functions of the operating system. With the power to intercept such calls, you can change their parameters, altering the action that would originally have been performed.
It is possible, for example, to block deletion of a particular file, prevent an application from running, and request a user confirmation to save a document to the disk, and so on.
Without a doubt, the biggest area of use is security, such as antivirus and antispyware software. But there are situations in everyday development where API hooking may be the only way out.
API hooking, in our context, means catching an API from the OS, or from any DLL, and redirecting its normal execution to another place, more precisely to another function. There are a few basic ways to do this:
- EAT and IAT: every EXE/DLL contains import and export tables for its APIs. These tables contain pointers that indicate each API's entry point. By changing these pointers so that they point to our callback, we have a hook. However, if the EXE/DLL does not import the API, this method will not work;
- Simple code overwriting: as previously mentioned, if it were possible to add a call to our callback at the beginning of the API's code, we could "hook" it, making our function run whenever the API was called. But there is a problem: if, after our code has run, we want to call the original API, we would fall back into our own callback and a stack overflow would be generated. One solution would be to undo the hook in order to call the API and redo it once the call has completed; however, during that window several API calls could be made that would not execute our callback;
- Inline hooking: we take the first instructions of a function and exchange them for a jump, push, or call into our own function (a minimal sketch follows this list).
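The sketch below shows the core of an inline hook for 32-bit x86 code: the first five bytes of the target are replaced by a relative JMP (opcode 0xE9) to our function. It deliberately omits the trampoline that real hooking libraries build to preserve the displaced instructions, so with this sketch alone the original function can no longer be called safely; the helper name InstallInlineHook is illustrative.

```cpp
#include <windows.h>
#include <cstdint>
#include <cstring>

// Minimal inline-hook sketch for 32-bit x86: overwrite the first five bytes
// of `target` with a relative JMP to `hook`. Real implementations also copy
// the displaced instructions into a trampoline; that part is omitted here.
bool InstallInlineHook(void* target, void* hook) {
    DWORD oldProtection;
    if (!VirtualProtect(target, 5, PAGE_EXECUTE_READWRITE, &oldProtection))
        return false;

    // rel32 is measured from the end of the 5-byte JMP instruction.
    int32_t rel = static_cast<int32_t>(
        reinterpret_cast<intptr_t>(hook) -
        (reinterpret_cast<intptr_t>(target) + 5));

    unsigned char patch[5] = { 0xE9, 0, 0, 0, 0 };   // E9 = JMP rel32
    std::memcpy(&patch[1], &rel, sizeof(rel));
    std::memcpy(target, patch, sizeof(patch));

    VirtualProtect(target, 5, oldProtection, &oldProtection);
    FlushInstructionCache(GetCurrentProcess(), target, 5);
    return true;
}
```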
Because hook methods decouple stable interfaces from behavior that varies with the context in which an application is instantiated, an inversion of control occurs. Event-handler objects customize processing steps. In other words, when an event occurs, the handler reacts by invoking hook methods on pre-registered objects, which execute the specific event-processing actions. Examples of events: window messages, packets arriving from communication ports.
Internal IAT Hooking
Each process in Windows has a table called the Import Address Table (IAT), which stores pointers to the functions exported by the DLLs that the process uses. This table is populated dynamically with the addresses of the DLL functions at run time.
Using specific functions, we can make the IAT writable, replace a stored address with the address of a custom function, and re-mark the table as read-only after the change. When the process tries to call the function, its address is fetched from the IAT and a pointer is returned. Because the IAT has been modified, the custom function is called in place of the original one, and the code injected into the process is executed.
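The following is a compact sketch of that idea: it walks the current executable's import descriptors, finds the named function imported from a given DLL, and swaps the IAT slot. It assumes the function is imported by name, skips ordinal imports, glosses over error handling, and the helper name PatchIatEntry is our own invention rather than a Windows API.

```cpp
#include <windows.h>
#include <cstring>

// Hypothetical helper (not a Windows API): find the IAT entry of `funcName`,
// imported by the current executable from `dllName`, and overwrite it with
// `hookAddr`. Returns the original pointer, or nullptr if it was not found.
void* PatchIatEntry(const char* dllName, const char* funcName, void* hookAddr) {
    BYTE* base = reinterpret_cast<BYTE*>(GetModuleHandle(nullptr));
    auto dos = reinterpret_cast<PIMAGE_DOS_HEADER>(base);
    auto nt  = reinterpret_cast<PIMAGE_NT_HEADERS>(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    if (dir.VirtualAddress == 0) return nullptr;          // no import table

    auto imp = reinterpret_cast<PIMAGE_IMPORT_DESCRIPTOR>(base + dir.VirtualAddress);
    for (; imp->Name != 0; ++imp) {
        if (_stricmp(reinterpret_cast<char*>(base + imp->Name), dllName) != 0)
            continue;                                     // not the DLL we want

        auto thunk     = reinterpret_cast<PIMAGE_THUNK_DATA>(base + imp->FirstThunk);
        auto origThunk = reinterpret_cast<PIMAGE_THUNK_DATA>(base + imp->OriginalFirstThunk);
        for (; origThunk->u1.AddressOfData != 0; ++thunk, ++origThunk) {
            if (origThunk->u1.Ordinal & IMAGE_ORDINAL_FLAG)
                continue;                                 // imported by ordinal; skipped here
            auto byName = reinterpret_cast<PIMAGE_IMPORT_BY_NAME>(
                base + origThunk->u1.AddressOfData);
            if (strcmp(byName->Name, funcName) != 0)
                continue;

            // Make the single IAT slot writable, swap the pointer, restore protection.
            DWORD oldProtection;
            VirtualProtect(&thunk->u1.Function, sizeof(void*),
                           PAGE_READWRITE, &oldProtection);
            void* original = reinterpret_cast<void*>(thunk->u1.Function);
            thunk->u1.Function = reinterpret_cast<ULONG_PTR>(hookAddr);
            VirtualProtect(&thunk->u1.Function, sizeof(void*),
                           oldProtection, &oldProtection);
            return original;
        }
    }
    return nullptr;
}
```

An injected DLL would typically call something like this from its entry point, passing, for example, "user32.dll" and "MessageBoxA" together with the address of its replacement function, and keep the returned pointer so the original API can still be invoked.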
Netfilter is a subsystem of the Linux kernel from version 2.4 onwards. It is responsible for packet filtering, NAT, firewalling, and redirection, among other things. Netfilter is very extensible, and its documentation is very well done. It exposes hooks in the kernel's networking code, which makes it very flexible and has led to wide adoption by the community. These hooks open up many possibilities and can serve as triggers for certain events.
Hooking techniques are powerful and open up a range of possibilities for programmers, but they should be used with caution: they add complexity to the flow of processes and change the original behavior of the OS, applications, or other software components, making the logic of the software harder to understand. Besides that, as mentioned earlier in this article, using these techniques indiscriminately may degrade the performance of applications.
This information is produced and provided by the National Cancer Institute (NCI). The information in this topic may have changed since it was written. For the most current information, contact the National Cancer Institute via the Internet web site at http://cancer.gov or call 1-800-4-CANCER.
Childhood astrocytoma is a disease in which benign (noncancer) or malignant (cancer) cells form in the tissues of the brain.
Astrocytomas are tumors that start in star-shaped brain cells called astrocytes. An astrocyte is a type of glial cell. Glial cells hold nerve cells in place and help them work the way they should. There are several types of astrocytomas. They can form anywhere in the central nervous system (brain and spinal cord). Brain tumors are the third most common type of cancer in children.
The tumors may be benign (not cancer) or malignant (cancer). Benign brain tumors grow and press on nearby areas of the brain. They rarely spread into other tissues. Malignant brain tumors are likely to grow quickly and spread into other brain tissue. When a tumor grows into or presses on an area of the brain, it may stop that part of the brain from working the way it should. Both benign and malignant brain tumors can cause symptoms and need treatment.
This summary is about the treatment of primary brain tumors that begin in the glial cells in the brain. Information is included about the following tumors that form from glial cells:
Treatment of metastatic brain tumors is not discussed in this summary. Metastatic brain tumors are formed by cancer cells that begin in other parts of the body and spread to the brain.
Brain tumors can occur in both children and adults. However, treatment for children may be different than treatment for adults. (See the PDQ treatment summary on Adult Brain Tumors for more information.)
The central nervous system controls many important body functions.
Astrocytomas most commonly form in these parts of the central nervous system (CNS):
Anatomy of the brain, showing the cerebrum, cerebellum, brain stem, and other parts of the brain.
Anatomy of the inside of the brain, showing the pineal and pituitary glands, optic nerve, ventricles (with cerebrospinal fluid shown in blue), and other parts of the brain.
The cause of most childhood brain tumors is not known.
Anything that increases your risk of getting a disease is called a risk factor. Having a risk factor does not mean that you will get cancer; not having risk factors doesn't mean that you will not get cancer. Parents who think their child may be at risk should discuss this with their child's doctor. Possible risk factors for astrocytoma include:
Having NF1 may increase a child's risk of a type of tumor called visual pathway glioma. These tumors usually do not cause symptoms. Children with NF1 who develop visual pathway gliomas may not need treatment for the tumor unless symptoms, such as vision problems, appear or the tumor grows.
The symptoms of astrocytomas are not the same in every child.
Symptoms are different depending on the following:
Some tumors do not cause symptoms. Other conditions may cause the same symptoms as those caused by childhood astrocytomas. Check with your child's doctor if any of the following problems occur:
Tests that examine the brain and spinal cord are used to detect (find) childhood astrocytomas.
The following tests and procedures may be used:
Childhood astrocytomas are diagnosed and removed in surgery.
If doctors think there may be an astrocytoma, a biopsy may be done to remove a sample of tissue. For tumors in the brain, the biopsy is done by removing part of the skull and using a needle to remove tissue. Sometimes, the needle is guided by a computer. A pathologist views the tissue under a microscope to look for cancer cells. If cancer cells are found, the doctor may remove as much tumor as safely possible during the same surgery. Because it can be hard to tell the difference between types of brain tumors, you may want to have your child's tissue sample checked by a pathologist who has experience in diagnosing brain tumors.
The following tests may be done on the tissue that was removed:
A biopsy may not be needed for children who have NF1.
Certain factors affect prognosis (chance of recovery) and treatment options.
The prognosis (chance of recovery) and treatment options depend on the following:
For recurrent astrocytoma, prognosis and treatment depend on how long it was from the time treatment ended to the time the astrocytoma recurred.
The grade of the tumor is used in place of a staging system to plan cancer treatment.
Staging is the process used to find out how much cancer there is and if cancer has spread. It is important to know the stage in order to plan treatment.
There is no standard staging system for childhood astrocytoma. Treatment is based on the grade of the tumor and whether it is untreated or recurrent (has come back after treatment). The grade of the tumor describes how abnormal the cancer cells look under a microscope and how quickly the tumor is likely to grow and spread.
The following grades are used:
Low-grade astrocytomas are slow-growing and rarely spread to other parts of the brain and spinal cord or other parts of the body. These include grade I (pilocytic, which form like a cyst and look almost like normal cells) and grade II (fibrillary, with cells that look long or slender like fibers) astrocytomas.
High-grade astrocytomas are fast-growing and often spread within the brain and spinal cord. These include grade III (anaplastic or malignant) and grade IV (glioblastoma, which spreads the fastest) astrocytomas.
Childhood astrocytomas may form at more than one place in the brain, but they do not usually spread to other parts of the body. Children who have neurofibromatosis type 1 are more likely to have tumors in more than one place.
Tests are done to find out how much tumor remains after surgery and to plan further treatment.
Some of the tests used to detect astrocytomas are repeated after the tumor is removed. (See the General Information section.) This is to find out how much tumor remains after surgery and to plan further treatment. An MRI (magnetic resonance imaging) is done in the first 2 days after the surgery to see if there is any tumor left.
There are three ways that cancer spreads in the body.
The three ways that cancer spreads in the body are:
When cancer cells break away from the primary (original) tumor and travel through the lymph or blood to other places in the body, another (secondary) tumor may form. This process is called metastasis. The secondary (metastatic) tumor is the same type of cancer as the primary tumor. For example, if breast cancer spreads to the bones, the cancer cells in the bones are actually breast cancer cells. The disease is metastatic breast cancer, not bone cancer.
A recurrent childhood astrocytoma is an astrocytoma that has recurred (come back) after it has been treated. The cancer may come back in the same place as the first tumor or in other parts of the body. High-grade astrocytomas often recur within 3 years.
There are different types of treatment for patients with childhood astrocytoma.
Different types of treatment are available for children with astrocytomas. Some treatments are standard (the currently used treatment), and some are being tested in clinical trials. A treatment clinical trial is a research study meant to help improve current treatments or obtain information on new treatments for patients with cancer. When clinical trials show that a new treatment is better than the standard treatment, the new treatment may become the standard treatment.
Because cancer in children is rare, taking part in a clinical trial should be considered. Some clinical trials are open only to patients who have not started treatment.
Children with astrocytomas should have their treatment planned by a team of health care providers who are experts in treating childhood brain tumors.
Treatment will be overseen by a pediatric oncologist, a doctor who specializes in treating children with cancer. The pediatric oncologist works with other health care providers who are experts in treating children with brain tumors and who specialize in certain areas of medicine. These may include the following specialists:
Childhood brain tumors may cause symptoms that begin before diagnosis and continue for months or years.
Symptoms caused by the tumor may begin before diagnosis. These symptoms may continue for months or years. It is important to talk with your child's doctors about symptoms caused by the tumor that may continue after treatment.
Some cancer treatments cause side effects months or years after treatment has ended.
Side effects from cancer treatment that begin during or after treatment and continue for months or years are called late effects. Late effects of cancer treatment may include the following:
Some late effects may be treated or controlled. It is important to talk with your child's doctors about the effects cancer treatment can have on your child. (See the PDQ summary on Late Effects of Treatment for Childhood Cancer for more information).
Six types of standard treatment are used:
Surgery is used to diagnose and treat childhood astrocytoma as discussed in the General Information section of this summary. If cancer cells remain after surgery, further treatment depends on:
Even if the doctor removes all the cancer that can be seen at the time of the surgery, some patients may be given chemotherapy or radiation therapy after surgery to kill any cancer cells that are left. Treatment given after the surgery, to lower the risk that the cancer will come back, is called adjuvant therapy.
Cerebrospinal fluid diversion
Cerebrospinal fluid diversion is a method used to drain fluid that has built up around the brain and spinal cord. A shunt (long, thin tube) is placed in a ventricle (fluid-filled space) of the brain and threaded under the skin to another part of the body, usually the abdomen. The shunt carries excess fluid away from the brain so it may be absorbed elsewhere in the body.
Cerebrospinal fluid (CSF) diversion. Extra CSF is removed from a ventricle in the brain through a shunt (tube) and is emptied into the abdomen. A valve controls the flow of CSF.
Watchful waiting is closely monitoring a patient's condition without giving any treatment until symptoms appear or change. Watchful waiting is often used for patients who have neurofibromatosis type 1 or a tumor that is not growing and spreading.
Radiation therapy is a cancer treatment that uses high-energy x-rays or other types of radiation to kill cancer cells or keep them from growing. There are two types of radiation therapy. External radiation therapy uses a machine outside the body to send radiation toward the cancer. Internal radiation therapy uses a radioactive substance sealed in needles, seeds, wires, or catheters that are placed directly into or near the cancer. The way the radiation therapy is given depends on the type and location of cancer being treated.
Radiation therapy to the brain can affect growth and development in young children. Certain ways of giving radiation therapy can lessen the damage to healthy brain tissue:
For children younger than 3 years, chemotherapy may be given instead, to delay or reduce the need for radiation therapy.
Chemotherapy is a cancer treatment that uses drugs to stop the growth of cancer cells, either by killing the cells or by stopping them from dividing. When chemotherapy is taken by mouth or injected into a vein or muscle, the drugs enter the bloodstream and can reach cancer cells throughout the body (systemic chemotherapy). When chemotherapy is placed directly into the cerebrospinal fluid, an organ, or a body cavity such as the abdomen, the drugs mainly affect cancer cells in those areas (regional chemotherapy). Combination chemotherapy is the use of more than one anticancer drug. The way the chemotherapy is given depends on the type and location of the cancer being treated.
High-dose chemotherapy with stem cell transplant
High-dose chemotherapy with stem cell transplant is a way of giving high doses of chemotherapy and replacing blood-forming cells destroyed by the cancer treatment. Stem cells (immature blood cells) are removed from the blood or bone marrow of the patient or a donor and are frozen and stored. After the chemotherapy is completed, the stored stem cells are thawed and given back to the patient through an infusion. These reinfused stem cells grow into (and restore) the body's blood cells.
New types of treatment are being tested in clinical trials.
This summary section describes treatments that are being studied in clinical trials. It may not mention every new treatment being studied. Information about clinical trials is available from the NCI Web site.
Targeted therapy is a type of treatment that uses drugs or other substances to identify and attack specific cancer cells without harming normal cells. One type of targeted therapy under study for childhood astrocytomas is monoclonal antibody therapy.
Monoclonal antibody therapy is a cancer treatment that uses antibodies made in the laboratory, from a single type of immune system cell. These antibodies can identify substances on cancer cells or normal substances that may help cancer cells grow. The antibodies attach to the substances and kill the cancer cells, block their growth, or keep them from spreading. Monoclonal antibodies are given by infusion. They may be used alone or to carry drugs, toxins, or radioactive material directly to cancer cells.
Patients may want to think about taking part in a clinical trial.
For some patients, taking part in a clinical trial may be the best treatment choice. Clinical trials are part of the cancer research process. Clinical trials are done to find out if new cancer treatments are safe and effective or better than the standard treatment.
Many of today's standard treatments for cancer are based on earlier clinical trials. Patients who take part in a clinical trial may receive the standard treatment or be among the first to receive a new treatment.
Patients who take part in clinical trials also help improve the way cancer will be treated in the future. Even when clinical trials do not lead to effective new treatments, they often answer important questions and help move research forward.
Patients can enter clinical trials before, during, or after starting their cancer treatment.
Some clinical trials only include patients who have not yet received treatment. Other trials test treatments for patients whose cancer has not gotten better. There are also clinical trials that test new ways to stop cancer from recurring (coming back) or reduce the side effects of cancer treatment.
Clinical trials are taking place in many parts of the country. See the Treatment Options section that follows for links to current treatment clinical trials. These have been retrieved from NCI's listing of clinical trials.
Follow-up tests may be needed.
Some of the tests that were done to diagnose the cancer or to find out the stage of the cancer may be repeated. (See the General Information section for a list of tests.) Some tests will be repeated in order to see how well the treatment is working. Decisions about whether to continue, change, or stop treatment may be based on the results of these tests.
Some of the tests will continue to be done from time to time after treatment has ended. The results of these tests can show if your child's condition has changed or if the astrocytoma has recurred (come back). If the tumor recurs in the brain, a biopsy may also be done to find out if it is made up of dead tumor cells or if new cancer cells are growing. These tests are sometimes called follow-up tests or check-ups. MRIs may be done regularly as follow-up to see if the tumor is growing back.
Childhood Low-Grade Astrocytomas
When the tumor is first diagnosed, treatment for childhood low-grade astrocytoma depends on the location of the tumor and is usually surgery. An MRI is done after surgery to see if any tumor remains.
If the tumor was completely removed by surgery, more treatment may not be needed and the child is closely watched to see if symptoms appear or change. This is also called watchful waiting.
If there is tumor remaining after surgery, treatment may include the following:
In some cases, children who have a visual pathway glioma will be treated by watchful waiting. In other cases, treatment may include surgery or radiation therapy. A goal of treatment is to save as much vision as possible. The effect of tumor growth on the child's vision will be closely followed during treatment.
Children with neurofibromatosis type 1 (NF1) may not need treatment unless the tumor grows or symptoms, such as vision problems, appear.
Children with tuberous sclerosis may develop benign (not cancer) tumors in the brain called subependymal giant cell astrocytomas (SEGAs). These tumors may be treated with drugs to shrink them instead of surgery.
Recurrent Childhood Low-Grade Astrocytomas
Before more cancer treatment is given, imaging tests, biopsy, or surgery are done to be sure cancer is present and to find out how much cancer there is.
Treatment of recurrent childhood low-grade astrocytoma may include the following:
Childhood High-Grade Astrocytomas
Treatment of childhood high-grade astrocytoma may include the following:
Recurrent Childhood High-Grade Astrocytomas
Treatment of recurrent childhood high-grade astrocytoma may include the following:
Check for U.S. clinical trials from NCI's list of cancer clinical trials that are now accepting patients with childhood astrocytoma. For more specific results, refine the search by using other search features, such as the location of the trial, the type of treatment, or the name of the drug. General information about clinical trials is available from the NCI Web site.
For more information from the National Cancer Institute about childhood astrocytomas, see the following:
For more childhood cancer information and other general cancer resources from the National Cancer Institute, see the following:
The PDQ cancer information summaries are reviewed regularly and updated as new information becomes available. This section describes the latest changes made to this summary as of the date above.
Editorial changes were made to this summary.
For more information, U.S. residents may call the National Cancer Institute's (NCI's) Cancer Information Service toll-free at 1-800-4-CANCER (1-800-422-6237) Monday through Friday from 8:00 a.m. to 8:00 p.m., Eastern Time. A trained Cancer Information Specialist is available to answer your questions.
The NCI's LiveHelp® online chat service provides Internet users with the ability to chat online with an Information Specialist. The service is available from 8:00 a.m. to 11:00 p.m. Eastern time, Monday through Friday. Information Specialists can help Internet users find information on NCI Web sites and answer questions about cancer.
Write to us
For more information from the NCI, please write to this address:
NCI Public Inquiries Office
6116 Executive Boulevard, MSC8322
Bethesda, MD 20892-8322
Search the NCI Web site
The NCI Web site provides online access to information on cancer, clinical trials, and other Web sites and organizations that offer support and resources for cancer patients and their families. For a quick search, use the search box in the upper right corner of each Web page. The results for a wide range of search terms will include a list of "Best Bets," editorially chosen Web pages that are most closely related to the search term entered.
There are also many other places to get materials and information about cancer treatment and services. Hospitals in your area may have information about local and regional agencies that have information on finances, getting to and from treatment, receiving care at home, and dealing with problems related to cancer treatment.
The NCI has booklets and other materials for patients, health professionals, and the public. These publications discuss types of cancer, methods of cancer treatment, coping with cancer, and clinical trials. Some publications provide information on tests for cancer, cancer causes and prevention, cancer statistics, and NCI research activities. NCI materials on these and other topics may be ordered online or printed directly from the NCI Publications Locator. These materials can also be ordered by telephone from the Cancer Information Service toll-free at 1-800-4-CANCER (1-800-422-6237).
PDQ is a comprehensive cancer database available on NCI's Web site.
PDQ is the National Cancer Institute's (NCI's) comprehensive cancer information database. Most of the information contained in PDQ is available online at NCI's Web site. PDQ is provided as a service of the NCI. The NCI is part of the National Institutes of Health, the federal government's focal point for biomedical research.
PDQ contains cancer information summaries.
The PDQ database contains summaries of the latest published information on cancer prevention, detection, genetics, treatment, supportive care, and complementary and alternative medicine. Most summaries are available in two versions. The health professional versions provide detailed information written in technical language. The patient versions are written in easy-to-understand, nontechnical language. Both versions provide current and accurate cancer information.
The PDQ cancer information summaries are developed by cancer experts and reviewed regularly.
Editorial Boards made up of experts in oncology and related specialties are responsible for writing and maintaining the cancer information summaries. The summaries are reviewed regularly and changes are made as new information becomes available. The date on each summary ("Date Last Modified") indicates the time of the most recent change.
PDQ also contains information on clinical trials.
A clinical trial is a study to answer a scientific question, such as whether one treatment is better than another. Trials are based on past studies and what has been learned in the laboratory. Each trial answers certain scientific questions in order to find new and better ways to help cancer patients. During treatment clinical trials, information is collected about the effects of a new treatment and how well it works. If a clinical trial shows that a new treatment is better than one currently being used, the new treatment may become "standard." In the United States, about two-thirds of children with cancer are treated in a clinical trial at some point in their illness.
Listings of clinical trials are included in PDQ and are available online at NCI's Web site. Descriptions of the trials are available in health professional and patient versions. For additional help in locating a childhood cancer clinical trial, call the Cancer Information Service at 1-800-4-CANCER (1-800-422-6237).
The PDQ database contains listings of groups specializing in clinical trials.
The Children's Oncology Group (COG) is the major group that organizes clinical trials for childhood cancers in the United States. Information about contacting COG is available on the NCI Web site or from the Cancer Information Service at 1-800-4-CANCER (1-800-422-6237).
Last Revised: 2012-02-10
If you want to know more about cancer and how it is treated, or if you wish to know about clinical trials for your type of cancer, you can call the NCI's Cancer Information Service at 1-800-422-6237, toll free. A trained information specialist can talk with you and answer your questions.
Ben Franklin Classroom Activity
Table of Contents
Objectives: [NCTM Standards: Number and Operations and Connections]
Students will learn the definition of a magic square.
Students will learn the historical background of magic squares.
Students will experience the mathematical aspects of magic squares.
Prepare overhead transparencies and/or handouts before presenting the activity, including:
Franklin's Magic Square
Franklin's Magic Square with Lines
Just the lines (for an art activity)
Blank overhead transparency and pens
Display the Franklin Magic Square.
Who was Benjamin Franklin?
After reading about Benjamin Franklin's background, why do you think he created such an intricate magic square?
Was Benjamin Franklin best known as a mathematician?
What is magic about the arrangement of the numbers in the 8x8 cell square?
What is the first number that Franklin used?
What is the last number?
How many numbers are there?
Is any number repeated?
What is the sum of the numbers in the 1st row? the 2nd row? the other rows?
What is the sum of the numbers in the 1st column? the 2nd column? the other columns?
If you start in the upper left hand corner and add the numbers halfway down the column, what is the sum? How does this compare to the total sum of that column?
What is the sum of the four corner numbers?
What other numerical relationships can you find?
If you vertically separate the square into two rectangles in your mind,
are the numbers from 1 to 10 on the right or the left?
are the numbers from 54 to 64 on the right or the left?
Consider the placement of 1, 2, 63, and 64. What is their sum? Consider the placement of 31, 32, 33, and 34. What is their sum?
When you draw lines connecting the numbers in the Franklin square in order from 1 to 64, do you see a pattern?
[Refer to Franklin Magic Square with lines.]
Are there symmetrical relationships?
Is the line design you have created an example of
rotation symmetry?
glide reflection symmetry?
[Note: To discover the symmetry involved, create a transparency (trace the original) of the lines and use it to test for rotation, translation (slide), and reflection (flip).]
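A quick check for teachers, assuming (as the questions above suggest) that the square uses each whole number from 1 through 64 exactly once: the total of all 64 entries is
1 + 2 + ... + 64 = (64 × 65) ÷ 2 = 2080,
so each of the 8 rows and each of the 8 columns must sum to 2080 ÷ 8 = 260. Franklin's arrangement is also constructed so that each half-row and half-column sums to half of that, 130, which is what the question about adding halfway down a column is pointing toward.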
On July 4, 1861 in speaking to a special session of Congress Abraham Lincoln tried to explain what their side was fighting for in the Civil War:
"This is essentially a People's contest. On the side of the Union, it is a struggle for maintaining in the world, that form, and substance of government, whose leading object is, to elevate the condition of men---to lift artificial weights from all shoulders---to clear the paths of laudable pursuit for all---to afford all, an unfettered start, and a fair chance, in the race of life. Yielding to partial, and temporary departures, from necessity, this is the leading object of the government for whose existence we contend."
It can be argued that the passage of the Homestead Act by Congress in the spring of 1862 was an attempt to meet Lincoln’s definition of the “leading object” of our government; that is, in giving away free land through the Homestead Act Congress hoped “to elevate the condition of men---to lift artificial weights from all shoulders---to clear the paths of laudable pursuit for all---to afford all, an unfettered start, and a fair chance, in the race of life.”
Other Presidents have agreed on the purpose of the Homestead Act:
- Lyndon B. Johnson, August 26, 1965: Like the lawmakers in our past who created the Homestead Act….we say that it is right and that it is just, and that it is a function of government, and that we are going to carry out that responsibility to help our people get back on their feet and share once again in the blessings of American life.
- George H.W. Bush, November 28, 1990: Abraham Lincoln's Homestead Act empowered people; it freed people from the burden of poverty. It freed them to control their own destinies, to create their own opportunities, and to live the vision of the American dream.
- George W. Bush, January 20, 2005: In America's ideal of freedom, citizens find the dignity and security of economic independence instead of laboring on the edge of subsistence. This is the broader definition of liberty that motivated the Homestead Act, the Social Security Act, and the GI Bill of Rights.
However, others would argue the motivation behind the Homestead Act was “Manifest Destiny.”
Manifest Destiny was the belief that the people of the United States were destined to extend the "boundaries of freedom," democratic institutions, and American ideals from the Atlantic seaboard to the Pacific Ocean.
Still others would argue the motivation behind the Homestead Act was greed; that Eastern capitalists wanted to see the West settled so there would be an expanding market for the products of industrialization.
What do you feel was the reasoning behind the Homestead Act? |
MAKING CHOICES: A FRAMEWORK FOR MAKING ETHICAL DECISIONS
Decisions about right and wrong permeate everyday life. Ethics should concern all levels of life: acting properly as individuals, creating responsible organizations and governments, and making our society as a whole more ethical. This document is designed as an introduction to making ethical decisions. It recognizes that decisions about “right” and “wrong” can be difficult, and may be related to individual context. It first provides a summary of the major sources for ethical thinking, and then presents a framework for decision-making.
1. WHAT IS ETHICS?:
Ethics provides a set of standards for behavior that helps us decide how we ought to act in a range of situations. In a sense, we can say that ethics is all about making choices, and about providing reasons why we should make these choices.
Ethics is sometimes conflated or confused with other ways of making choices, including religion, law or morality. Many religions promote ethical decision-making but do not always address the full range of ethical choices that we face. Religions may also advocate or prohibit certain behaviors which may not be considered the proper domain of ethics, such as dietary restrictions or sexual behaviors. A good system of law should be ethical, but the law establishes precedent in trying to dictate universal guidelines, and is thus not able to respond to individual contexts. Law may have a difficult time designing or enforcing standards in some important areas, and may be slow to address new problems. Both law and ethics deal with questions of how we should live together with others, but ethics is sometimes also thought to apply to how individuals act even when others are not involved. Finally, many people use the terms morality and ethics interchangeably. Others reserve morality for the state of virtue while seeing ethics as a code that enables morality. Another way to think about the relationship between ethics and morality is to see ethics as providing a rational basis for morality, that is, ethics provides good reasons for why something is moral.
2. TRADITIONAL ARRANGEMENT OF THE FIELD OF ETHICS:
There are many systems of ethics, and numerous ways to think about right and wrong actions or good and bad character. The field of ethics is traditionally divided into three areas: 1.) meta-ethics, which deals with the nature of the right or the good, as well as the nature and justification of ethical claims; 2.) normative ethics, which deals with the standards and principles used to determine whether something is right or good; 3.) applied ethics, which deals with the actual application of ethical principles to a particular situation. While it is helpful to approach the field of ethics in this order, we might keep in mind that this somewhat “top down” approach does not exhaust the study of ethics. Our experience with applying particular ethical standards or principles can inform our understanding of how good these standards or principles are.
Three Broad Types of Ethical Theory:
Ethical theories are often broadly divided into three types: i) Consequentialist theories, which are primarily concerned with the ethical consequences of particular actions; ii) Non-consequentialist theories, which tend to be broadly concerned with the intentions of the person making ethical decisions about particular actions; and iii) Agent-centered theories, which, unlike consequentialist and non-consequentialist theories, are more concerned with the overall ethical status of individuals, or agents, and are less concerned with identifying the morality of particular actions. Each of these three broad categories contains varieties of approaches to ethics, some of which share characteristics across the categories. Below is a sample of some of the most important and useful of these ethical approaches.
i.) Consequentialist Theories:
The Utilitarian Approach
Utilitarianism can be traced back to the school of the Ancient Greek philosopher Epicurus of Samos (341-270 BCE), who argued that the best life is one that produces the least pain and distress. The 18th Century British philosopher Jeremy Bentham (1748-1832) applied a similar standard to individual actions, and created a system in which actions could be described as good or bad depending upon the amount and degree of pleasure and/or pain they would produce. Bentham’s student, John Stuart Mill (1806-1873) modified this system by making its standard for the good the more subjective concept of “happiness,” as opposed to the more materialist idea of “pleasure.”
Utilitarianism is one of the most common approaches to making ethical decisions, especially decisions with consequences that concern large groups of people, in part because it instructs us to weigh the different amounts of good and bad that will be produced by our action. This conforms to our feeling that some good and some bad will necessarily be the result of our action and that the best action will be that which provides the most good or does the least harm, or, to put it another way, produces the greatest balance of good over harm. Ethical environmental action, then, is the one that produces the greatest good and does the least harm for all who are affected—government, corporations, the community, and the environment.
The Egoistic Approach
One variation of the utilitarian approach is known as ethical egoism, or the ethics of self-interest. In this approach, an individual often uses utilitarian calculation to produce the greatest amount of good for him or herself. Ancient Greek Sophists like Thrasymachus (c. 459-400 BCE), who famously claimed that might makes right, and early modern thinkers like Thomas Hobbes (1588-1679) may be considered forerunners of this approach. One of the most influential recent proponents of ethical egoism was the Russian-American philosopher Ayn Rand (1905-1982), who, in the book The Virtue of Selfishness (1964), argues that self-interest is a prerequisite to self-respect and to respect for others. There are numerous parallels between ethical egoism and laissez-faire economic theories, in which the pursuit of self-interest is seen as leading to the benefit of society, although the benefit of society is seen only as the fortunate byproduct of following individual self-interest, not its goal.
The Common Good Approach
The ancient Greek philosophers Plato (427-347 BCE) and Aristotle (384-322 BCE) promoted the perspective that our actions should contribute to ethical communal life. The most influential modern proponent of this approach was the French philosopher Jean-Jacques Rousseau (1712-1778), who argued that the best society should be guided by the “general will” of the people, which would then produce what is best for the people as a whole. This approach to ethics underscores the networked aspects of society and emphasizes respect and compassion for others, especially those who are more vulnerable.
ii.) Non-consequentialist Theories:
The Duty-Based Approach
The duty-based approach, sometimes called deontological ethics, is most commonly associated with the philosopher Immanuel Kant (1724-1804), although it had important precursors in the earlier non-consequentialist, often explicitly religious, thinking of people like Saint Augustine of Hippo (354-430), who emphasized the importance of the personal will and intention (and of the omnipotent God who sees this interior mental state) to ethical decision making. Kant argued that doing what is right is not about the consequences of our actions (something over which we ultimately have no control) but about having the proper intention in performing the action. The ethical action is one taken from duty, that is, it is done precisely because it is our obligation to perform the action. Ethical obligations are the same for all rational creatures (they are universal), and knowledge of what these obligations entail is arrived at by discovering rules of behavior that are not contradicted by reason.
Kant’s famous formula for discovering our ethical duty is known as the “categorical imperative.” It has a number of different versions, but Kant believed they all amounted to the same imperative. The most basic form of the imperative is: “Act only according to that maxim by which you can at the same time will that it should become a universal law.” So, for example, lying is unethical because we could not universalize a maxim that said “One should always lie.” Such a maxim would render all speech meaningless. We can, however, universalize the maxim, “Always speak truthfully,” without running into a logical contradiction. (Notice the duty-based approach says nothing about how easy or difficult it would be to carry out these maxims, only that it is our duty as rational creatures to do so.) In acting according to a law that we have discovered to be rational according to our own universal reason, we are acting autonomously (in a self-regulating fashion), and thus are bound by duty, a duty we have given ourselves as rational creatures. We thus freely choose (we will) to bind ourselves to the moral law. For Kant, choosing to obey the universal moral law is the very nature of acting ethically.
The Rights Approach
The Rights approach to ethics is another non-consequentialist approach which derives much of its current force from Kantian duty-based ethics, although it also has a history that dates back at least to the Stoics of Ancient Greece and Rome, and has another influential current which flows from the work of the British empiricist philosopher John Locke (1632-1704). This approach stipulates that the best ethical action is that which protects the ethical rights of those who are affected by the action. It emphasizes the belief that all humans have a right to dignity. This is based on a formulation of Kant’s categorical imperative that says: “Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means to an end.” The list of ethical rights is debated; many now argue that animals and other non-humans such as robots also have rights.
The Fairness or Justice Approach
The Law Code of Hammurabi in Ancient Mesopotamia (c. 1750 BCE) held that all free men should be treated alike, just as all slaves should be treated alike. When combined with the universality of the rights approach, the justice approach can be applied to all human persons. The most influential version of this approach today is found in the work of American philosopher John Rawls (1921-2002), who argued, along Kantian lines, that just ethical principles are those that would be chosen by free and rational people in an initial situation of equality. This hypothetical contract is considered fair or just because it provides a procedure for what counts as a fair action, and does not concern itself with the consequences of those actions. Fairness of starting point is the principle for what is considered just.
The Divine Command Approach
As its name suggests, this approach sees what is right as the same as what God commands, and ethical standards are the creation of God’s will. Following God’s will is seen as the very definition of what is ethical. Because God is seen as omnipotent and possessed of free will, God could change what is now considered ethical, and God is not bound by any standard of right or wrong short of logical contradiction. The Medieval Christian philosopher William of Ockham (1285-1349) was one of the most influential thinkers in this tradition, and his writings served as a guide for Protestant Reformers like Martin Luther (1483-1546) and Jean Calvin (1509-1564). The Danish philosopher Søren Kierkegaard (1813-1855), in praising the biblical Patriarch Abraham’s willingness to kill his son Isaac at God’s command, claimed that truly right action must ultimately go beyond everyday morality to what he called the “teleological suspension of the ethical,” again demonstrating the somewhat tenuous relationship between religion and ethics mentioned earlier.
iii.) Agent-centered Theories:
The Virtue Approach
One long-standing ethical principle argues that ethical actions should be consistent with ideal human virtues. Aristotle, for example, argued that ethics should be concerned with the whole of a person’s life, not with the individual discrete actions a person may perform in any given situation. A person of good character would be one who has attained certain virtues. This approach is also prominent in non-Western contexts, especially in East Asia, where the tradition of the Chinese sage Confucius (551-479 BCE) emphasizes the importance of acting virtuously (in an appropriate manner) in a variety of situations. Because virtue ethics is concerned with the entirety of a person’s life, it takes the process of education and training seriously, and emphasizes the importance of role models to our understanding of how to engage in ethical deliberation.
The Feminist Approach
In recent decades, the virtue approach to ethics has been supplemented and sometimes significantly revised by thinkers in the feminist tradition, who often emphasize the importance of the experiences of women and other marginalized groups to ethical deliberation. Among the most important contributions of this approach is its foregrounding of the principle of care as a legitimately primary ethical concern, often in opposition to the seemingly cold and impersonal justice approach. Like virtue ethics, feminist ethics is concerned with the totality of human life and how this life comes to influence the way we make ethical decisions.
Terms Used in Ethical Judgments
Applied ethics deals with issues in private or public life that are matters for ethical judgments. The following are important terms used in making moral judgments about particular actions.
Obligatory: When we say something is ethically “obligatory” we mean that it is not only right to do it, but that it is wrong not to do it. In other words, we have an ethical obligation to perform the action. Sometimes the easiest way to see if an action is ethically obligatory is to look at what it would mean NOT to perform the action. For example, we might say it is ethically obligatory for parents to care for their children, not only because it is right for them to do it, but also because it is wrong for them not to do it. The children would suffer and die if parents did not care for them. The parents are thus ethically “obligated” to care for their children.
Impermissible: The opposite of an ethically obligatory action is an action that is ethically impermissible, meaning that it is wrong to do it and right not to do it. For example, we would say that murder is ethically impermissible.
Permissible: Sometimes actions are referred to as ethically permissible, or ethically “neutral,” because it is neither right nor wrong to do them or not to do them. We might say that having plastic surgery is ethically permissible, because it is not wrong to have the surgery (it is not impermissible), but neither is it ethically necessary (obligatory) to have the surgery. Some argue that suicide is permissible in certain circumstances. That is, a person would not be wrong in committing suicide, nor would they be wrong in not committing suicide. Others would say that suicide is ethically impermissible.
Supererogatory: A fourth type of ethical action is called supererogatory. These types of actions are seen as going “above and beyond the call of duty.” They are right to do, but it is not wrong not to do them. For example, two people are walking down a hallway and see a third person drop their book bag, spilling all of their books and papers onto the floor. If one person stops to help the third person pick up their books, but the other person keeps on walking, we somehow feel that the person who stopped to help has acted in a more ethically appropriate way than the person who did not stop, but we cannot say that the person who did not stop was unethical in not stopping. In other words, the person who did not help was in no way obligated (it was not ethically obligatory) to help. But we nevertheless want to ethically praise the person who did stop, so we call his or her actions supererogatory.
3. FRAMEWORKS FOR ETHICAL DECISION-MAKING:
Making good ethical decisions requires a trained sensitivity to ethical issues and a practiced method for exploring the ethical aspects of a decision and weighing the considerations that should impact our choice of a course of action. Having a method for ethical decision making is essential. When practiced regularly, the method becomes so familiar that we work through it automatically without consulting the specific steps. This is one reason why we can sometimes say that we have a “moral intuition” about a certain situation, even when we have not consciously thought through the issue. We are practiced at making ethical judgments, just as we can be practiced at playing the piano, and can sit and play well “without thinking.” Nevertheless, it is not always advisable to follow our immediate intuitions, especially in particularly complicated or unfamiliar situations. Here our method for ethical decision making should enable us to recognize these new and unfamiliar situations and to act accordingly.
The more novel and difficult the ethical choice we face, the more we need to rely on discussion and dialogue with others about the dilemma. Only by careful exploration of the problem, aided by the insights and different perspectives of others, can we make good ethical choices in such situations.
Based upon the three-part division of traditional normative ethical theories discussed above, it makes sense to suggest three broad frameworks to guide ethical decision making: The Consequentialist Framework; The Duty Framework; and the Virtue Framework.
While each of the three frameworks is useful for making ethical decisions, none is perfect—otherwise the perfect theory would have driven the other imperfect theories from the field long ago. Knowing the advantages and disadvantages of the frameworks will be helpful in deciding which is most useful in approaching the particular situation with which we are presented.
The Consequentialist Framework
In the Consequentialist framework, we focus on the future effects of the possible courses of action, considering the people who will be directly or indirectly affected. We ask about what outcomes are desirable in a given situation, and consider ethical conduct to be whatever will achieve the best consequences. The person using the Consequences framework desires to produce the most good.
Among the advantages of this ethical framework is that focusing on the results of an action is a pragmatic approach. It helps in situations involving many people, some of whom may benefit from the action, while others may not. On the other hand, it is not always possible to predict the consequences of an action, so some actions that are expected to produce good consequences might actually end up harming people. Additionally, people sometimes react negatively to the use of compromise which is an inherent part of this approach, and they recoil from the implication that the end justifies the means. It also does not include a pronouncement that certain things are always wrong, as even the most heinous actions may result in a good outcome for some people, and this framework allows for these actions to then be ethical.
The Duty Framework
In the Duty framework, we focus on the duties and obligations that we have in a given situation, and consider what ethical obligations we have and what things we should never do. Ethical conduct is defined by doing one’s duties and doing the right thing, and the goal is performing the correct action.
This framework has the advantage of creating a system of rules that has consistent expectations of all people; if an action is ethically correct or a duty is required, it would apply to every person in a given situation. This even-handedness encourages treating everyone with equal dignity and respect.
This framework also focuses on following moral rules or duty regardless of outcome, so it allows for the possibility that one might have acted ethically, even if there is a bad result. Therefore, this framework works best in situations where there is a sense of obligation or in those in which we need to consider why duty or obligation mandates or forbids certain courses of action.
However, this framework also has its limitations. First, it can appear cold and impersonal, in that it might require actions which are known to produce harms, even though they are strictly in keeping with a particular moral rule. It also does not provide a way to determine which duty we should follow if we are presented with a situation in which two or more duties conflict. It can also be rigid in applying the notion of duty to everyone regardless of personal situation.
The Virtue Framework
In the Virtue framework, we try to identify the character traits (either positive or negative) that might motivate us in a given situation. We are concerned with what kind of person we should be and what our actions indicate about our character. We define ethical behavior as whatever a virtuous person would do in the situation, and we seek to develop similar virtues.
Obviously, this framework is useful in situations that ask what sort of person one should be. As a way of making sense of the world, it allows for a wide range of behaviors to be called ethical, as there might be many different types of good character and many paths to developing it. Consequently, it takes into account all parts of human experience and their role in ethical deliberation, as it believes that all of one’s experiences, emotions, and thoughts can influence the development of one’s character.
Although this framework takes into account a variety of human experience, it also makes it more difficult to resolve disputes, as there can often be more disagreement about virtuous traits than ethical actions. Also, because the framework looks at character, it is not particularly good at helping someone to decide what actions to take in a given situation or determine the rules that would guide one’s actions. Also, because it emphasizes the importance of role models and education to ethical behavior, it can sometimes merely reinforce current cultural norms as the standard of ethical behavior.
Putting the Frameworks Together
By framing the situation or choice you are facing in one of the ways presented above, specific features will be brought into focus more clearly. However, it should be noted that each framework has its limits: by focusing our attention on one set of features, other important features may be obscured. Hence it is important to be familiar with all three frameworks and to understand how they relate to each other—where they may overlap, and where they may differ.
The chart below is designed to highlight the main contrasts between the three frameworks:
The Consequentialist Framework
Central question: What kind of outcomes should I produce (or try to produce)?
Focus: Directs attention to the future effects of an action, for all people who will be directly or indirectly affected by the action.
Definition of ethical conduct: Ethical conduct is the action that will achieve the best consequences.
Aim: To produce the most good.

The Duty Framework
Central question: What are my obligations in this situation, and what are the things I should never do?
Focus: Directs attention to the duties that exist prior to the situation and determines obligations.
Definition of ethical conduct: Ethical conduct involves always doing the right thing: never failing to do one's duty.
Aim: To perform the right action.

The Virtue Framework
Central question: What kind of person should I be (or try to be), and what will my actions show about my character?
Focus: Attempts to discern character traits (virtues and vices) that are, or could be, motivating the people involved in the situation.
Definition of ethical conduct: Ethical conduct is whatever a fully virtuous person would do in the circumstances.
Aim: To develop one’s character.
Because the answers to the three main types of ethical questions asked by each framework are not mutually exclusive, each framework can be used to make at least some progress in answering the questions posed by the other two.
In many situations, all three frameworks will result in the same—or at least very similar—conclusions about what you should do, although they will typically give different reasons for reaching those conclusions.
However, because they focus on different ethical features, the conclusions reached through one framework will occasionally differ from the conclusions reached through one (or both) of the others.
4. APPLYING THE FRAMEWORKS TO CASES:
When using the frameworks to make ethical judgments about specific cases, it will be useful to follow the process below.
Recognizing an Ethical Issue
One of the most important things to do at the beginning of ethical deliberation is to locate, to the extent possible, the specifically ethical aspects of the issue at hand. Sometimes what appears to be an ethical dispute is really a dispute about facts or concepts. For example, some Utilitarians might argue that the death penalty is ethical because it deters crime and thus produces the greatest amount of good with the least harm. Other Utilitarians, however, might argue that the death penalty does not deter crime, and thus produces more harm than good. The argument here is over which facts argue for the morality of a particular action, not simply over the morality of particular principles. All Utilitarians would abide by the principle of producing the most good with the least harm.
Consider the Parties Involved
Another important aspect to reflect upon are the various individuals and groups who may be affected by your decision. Consider who might be harmed or who might benefit.
Gather all of the Relevant Information
Before taking action, it is a good idea to make sure that you have gathered all of the pertinent information, and that all potential sources of information have been consulted.
Formulate Actions and Consider Alternatives
Evaluate your decision-making options by asking the following questions:
Which action will produce the most good and do the least harm? (The Utilitarian Approach)
Which action respects the rights of all who have a stake in the decision? (The Rights Approach)
Which action treats people equally or proportionately? (The Justice Approach)
Which action serves the community as a whole, not just some members?
(The Common Good Approach)
Which action leads me to act as the sort of person I should be? (The Virtue Approach)
Make a Decision and Consider It
After examining all of the potential actions, which best addresses the situation? How do I feel about my choice?
Many ethical situations are uncomfortable because we can never have all of the information. Even so, we must often take action.
Reflect on the Outcome
What were the results of my decision? What were the intended and unintended consequences? Would I change anything now that I have seen the consequences?
Making ethical decisions requires sensitivity to the ethical implications of problems and situations. It also requires practice. Having a framework for ethical decision making is essential. We hope that the information above is helpful in developing your own experience in making choices.
This framework for thinking ethically is the product of dialogue and debate in the seminar Making Choices: Ethical Decisions at the Frontier of Global Science held at Brown University in the spring semester 2011. It relies on the Ethical Framework developed at the Markkula Center for Applied Ethics at Santa Clara University and the Ethical Framework developed by the Center for Ethical Deliberation at the University of Northern Colorado, as well as the Ethical Frameworks for Academic Decision-Making on the Faculty Focus website, which in turn relies upon Understanding Ethical Frameworks for E-Learning Decision-Making, December 1, 2008, Distance Education Report.
Primary contributors include Sheila Bonde and Paul Firenze, with critical input from James Green, Margot Grinberg, Josephine Korijn, Emily Levoy, Alysha Naik, Laura Ucik and Liza Weisberg. It was last revised in May, 2013.
Earth has a diameter of about 12,756 km (7,926 mi). The Earth’s interior consists of rock and metal. It is made up of four main layers:
1) the inner core: a solid metal core made up of nickel and iron (2440 km diameter)
2) the outer core: a liquid molten core of nickel and iron
3) the mantle: dense and mostly solid silicate rock
4) the crust: thin silicate rock material
The temperature in the core is hotter than the Sun’s surface. This intense heat from the inner core causes material in the outer core and mantle to move around.
The movement of material deep within the Earth may cause large plates made of the crust and upper mantle to move slowly over the Earth’s surface. It is also possible that the movements generate the Earth’s magnetic field, called the magnetosphere. (Source: http://rst.gsfc.nasa.gov/Sect16/Sect16_1.html)
What was the "social and intellectual context" of the Progressive Movement?
The progressive movement had several goals, among them eliminating corruption in government. Progressives pursued this by exposing and undercutting political bosses and machines. They supported prohibition with the aim of destroying the political power of local bosses. Women’s suffrage was also promoted so as to bring a purer vote into the arena. They also aimed at achieving efficiency in all public sectors by highlighting old ways that needed to be modernized. It is through this movement that the education system, the medical sector, industry, and other public sectors were reformed.
How did Theodore Roosevelt change the role of American government and help fuel the expansion of the American empire?
Theodore Roosevelt intervened peacefully in the Dominican Republic, where the United States took over the management of the country’s debt repayments. This was aimed at encouraging the investment of United States capital in other countries while safeguarding foreign investments. Having acquired financial power and superiority, America was able to control other nations and, as a result, expanded its empire. In other words, Roosevelt ensured that the United States benefited from the war by taking advantage of every opportunity.
What roles did women play in the reform movement?
Women clearly played a significant role in the reform movement, both socially and politically, throughout the Progressive Era. They could not have done this alone, but through the activists who represented them it was evident that they played a vital role in the reform movement. Apart from participating in demonstrations and voting, they also had powerful speakers who had a great influence on people. Women like Louisa May Alcott, Dorothea Dix, and others made quite an impression in representing the feminine gender.
Describe the election of 1912 and the progressive reforms of the Wilson administration?
In the 1912 presidential election, there were four presidential aspirants: William Taft, nominated by the Republican Party; Woodrow Wilson, by the Democratic Party; Theodore Roosevelt, by the Progressive Party after he failed to win the Republican nomination; and Eugene Debs, who was nominated by the Socialist Party of America. Woodrow Wilson won the election and served as president of the United States from 1913 to 1921. It is under his leadership that the progressive movement reached its climax; however, some people described Wilson’s policies as racist.
Prevention of swine influenza has three components: prevention in swine, prevention of transmission to humans, and prevention of its spread among humans.
Methods of preventing the spread of influenza among swine include facility management, herd management, and vaccination (ATCvet code: QI09AA03). Because much of the illness and death associated with swine flu involves secondary infection by other pathogens, control strategies that rely on vaccination may be insufficient.
Control of swine influenza by vaccination has become more difficult in recent decades, as the evolution of the virus has resulted in inconsistent responses to traditional vaccines. Standard commercial swine flu vaccines are effective in controlling the infection when the virus strains match enough to have significant cross-protection, and custom (autogenous) vaccines made from the specific viruses isolated are created and used in the more difficult cases. Present vaccination strategies for SIV control and prevention in swine farms typically include the use of one of several bivalent SIV vaccines commercially available in the United States. Of the 97 recent H3N2 isolates examined, only 41 isolates had strong serologic cross-reactions with antiserum to three commercial SIV vaccines. Since the protective ability of influenza vaccines depends primarily on the closeness of the match between the vaccine virus and the epidemic virus, the presence of nonreactive H3N2 SIV variants suggests that current commercial vaccines might not effectively protect pigs from infection with a majority of H3N2 viruses. The United States Department of Agriculture researchers say that while pig vaccination keeps pigs from getting sick, it does not block infection or shedding of the virus.
Facility management includes using disinfectants and ambient temperature to control virus in the environment. The virus is unlikely to survive outside living cells for more than two weeks, except in cold (but above freezing) conditions, and it is readily inactivated by disinfectants. Herd management includes not adding pigs carrying influenza to herds that have not been exposed to the virus. The virus survives in healthy carrier pigs for up to 3 months and can be recovered from them between outbreaks. Carrier pigs are usually responsible for the introduction of SIV into previously uninfected herds and countries, so new animals should be quarantined. After an outbreak, as immunity in exposed pigs wanes, new outbreaks of the same strain can occur.
Prevention of pig to human transmission
Swine can be infected by both avian and human strains of influenza, and therefore are hosts where the antigenic shifts can occur that create new influenza strains.
The transmission from swine to human is believed to occur mainly in swine farms, where farmers are in close contact with live pigs. Although strains of swine influenza are usually not able to infect humans, this may occasionally happen, so farmers and veterinarians are encouraged to use a face mask when dealing with infected animals. The use of vaccines on swine to prevent their infection is a major method of limiting swine-to-human transmission. Risk factors that may contribute to swine-to-human transmission include smoking and not wearing gloves when working with sick animals.
Prevention of human to human transmission
Influenza spreads between humans through coughing or sneezing and people touching something with the virus on it and then touching their own nose or mouth. Swine flu cannot be spread by pork products, since the virus is not transmitted through food. The swine flu in humans is most contagious during the first five days of the illness, although some people, most commonly children, can remain contagious for up to ten days. Diagnosis can be made by sending a specimen, collected during the first five days, for analysis.
Recommendations to prevent spread of the virus among humans include using standard infection control against influenza. This includes frequent washing of hands with soap and water or with alcohol-based hand sanitizers, especially after being out in public. Chance of transmission is also reduced by disinfecting household surfaces, which can be done effectively with a diluted chlorine bleach solution.
Experts agree that hand-washing can help prevent viral infections, including ordinary influenza and the swine flu virus. Avoiding touching the eyes, nose and mouth with the hands also helps prevent flu. Influenza can spread in coughs or sneezes, but an increasing body of evidence shows small droplets containing the virus can linger on tabletops, telephones and other surfaces and be transferred via the fingers to the mouth, nose or eyes. Alcohol-based gel or foam hand sanitizers work well to destroy viruses and bacteria. Anyone with flu-like symptoms such as a sudden fever, cough or muscle aches should stay away from work or public transportation and should contact a doctor for advice.
Social distancing is another tactic. It means staying away from other people who might be infected and can include avoiding large gatherings, spreading out a little at work, or perhaps staying home and lying low if an infection is spreading in a community. Public health and other responsible authorities have action plans which may request or require social distancing actions depending on the severity of the outbreak.
Vaccines are available for different kinds of swine flu. The U.S. Food and Drug Administration (FDA) approved the new swine flu vaccine for use in the United States on September 15, 2009. Studies by the National Institutes of Health (NIH), show that a single dose creates enough antibodies to protect against the virus within about 10 days.
Newswise — A research team has genetically engineered a mouse with glowing primary cilia, the tiny outgrowths seen on the surface of most cells, according to a study published today in BioMed Central’s open access journal, Cilia. The model will enable researchers to better study what is now recognized as the “cell’s antenna,” with key signaling roles in development and tissue function, for the first time in a live mammal.
Studies in recent years had suggested that cilia regulate vital processes including growth, appetite, mood, healing and vision. Defects in cilia have been tied to depression, obesity and cancer, along with kidney disease and several rare genetic syndromes. Despite these suggestive outlines, the details of how cilia send signals remain largely unknown.
The cilia examined in the study were not the wavy ones that enable single-celled protozoans to dart around on microscope slides in biology class. Neither were they the kind that sweeps mucus out of airways. They were primary cilia, which occur one per cell and were once thought to be vestiges from our cellular past with no current function.
In the new study, researchers tagged a protein concentrated in cilia with the fluorescent protein GFP, which enabled the first video recordings of cilia at work in the kidneys of live mice.
“There is tremendous interest in being able to closely study cilia in a live mammal, and to study something, you must be able to see it,” said Bradley Yoder, Ph.D, professor in the Department of Cell, Developmental and Integrative Biology at the University of Alabama at Birmingham (UAB), and corresponding author on the study. “By tagging the right protein in cilia, we were able to visualize them in their natural environment. This will greatly accelerate research into cilia-driven disorders with serious consequences for development and adult health.”
In a signal that the new mouse model may be useful, researchers made a discovery the first time they tested it. They were able to observe primary cilia on cells lining kidney tubules as they filtered blood to control levels of water, salt and electrolytes, for example. Going into the study, researchers had believed such cilia stood up unless knocked over by fluid moving through a tubule.
This bending, they thought, might act as a sensor that released chemical signals to regulate the level of urine production, or perhaps the growth rate of nearby cells. Evidence from the new model suggests, instead, that the cilia are usually knocked flat by fluid flow, and that cilia-based sensors might work differently than once thought. Such insights are crucial because related problems lead to the formation of the clogging cysts seen in polycystic kidney disease, a main reason that people need dialysis.
Interestingly, when mice were under anesthesia and had slower blood flow, cilia were no longer continually knocked in one direction by fluid, but instead swung back and forth in a pulse timed with the heartbeat. Future studies will tell if injury, disease or blockage in the kidney, by disrupting flow, change signals sent via primary cilia to further damage tissue.
Better model
Many labs, including Yoder’s, have studied cilia in the past using organisms like yeast and algae. These simple creatures offer rapid genetic screens and readily visible cilia in living samples, and they have contributed greatly to the understanding of cilia. They share vital signaling pathways with human cells but are far from the same. A better model is the mouse, which is closer to humans on the evolutionary tree, but no one had been able to visualize primary cilia in a live mammal. Past attempts to affix glowing tags to cilia had required that the cells be static (dead). The problem with that approach is that those cells have features that behave differently.
One reason the UAB team was the first to achieve this mouse model is the years spent by the team studying proteins unique to primary cilia, a prerequisite to attaching tags to them. They found that the protein somatostatin receptor 3, or SSTR3, was well suited for this role because it occurs in great numbers on cilia, but it is not so central to their function that adding the tag causes problems.
Another key element enabling UAB to design the study was its Hepatorenal Fibrocystic Diseases Core Center, which has for years been looking at the role of primary cilia in polycystic kidney disease. The genetic engineering behind the new mouse was very expensive and would not have happened without the support provided by the National Institutes of Health.
The research team used standard molecular biology techniques to stitch the genetic code for the SSTR3-GFP fluorescent tag combination into the DNA of a mouse embryo at a special spot called ROSA26. Researchers discovered in 1991 that one can insert any DNA sequence at this spot in the mouse genome, and the desired gene will be expressed in nearly every cell in the mouse as it develops, while leaving the mouse healthy and fertile.
The new mouse will allow for the study of cilia in one organ at a time, as well as at different time points as a fetus develops or as a disease progresses. For example, past studies have shown that cilia on nerve cells in the brain region called the hypothalamus regulate feeding behavior and may have a role in obesity when they malfunction. There are many types of nerves cells in the hypothalamus, however, and which ones have cilia that contribute to disease is unknown. With the new model, the team anticipates being able to label cilia on each nerve cell type on the way to evaluating their role.
“Beyond kidney disease and obesity, cilia on nerve cells are packed with receptors for serotonin, for instance, suggesting they are part of nerve signaling pathways known to control learning and mood,” said Erik Malarkey, Ph.D., a postdoctoral scholar in Yoder’s lab and study author. “Cilia also appear to play a role in cell division, suggesting they may have a role in cancer when cells reproduce uncontrollably. Cilia help cells to tell the left side of the body from the right as the fetus develops. It goes on and on.”
Along with Yoder and Malarkey, the other UAB study authors were Nicolas Berbari and Mandy Croyle in CDIB, along with Robert Kesterson in the UAB Department of Genetics. Also making important contributions were Amber O’Connor from the Center for Translational Science, Children’s National Medical Center, Courtney Haycraft of the Department of Craniofacial Biology, and Darwin Bell at the Ralph H. Johnson Veterans Administration Medical Center; at the Medical University of South Carolina in Charleston; as well as Peter Hohenstein of The Roslin Institute at University of Edinburgh in Scotland.
The thought of black holes brings Stephen Hawking to mind. The great physics professor cut his teeth on black hole research and, of course, the question of time. The definition of time is the duration of objects, and the question on everyone’s mind is: are black holes considered objects? Or are they what’s left of an object when it collapses? The other question is, does time react the same way in space as it does on this planet? Some experts say we are using our measurement of time and applying it to space. There are a lot of holes in that measuring process, and black holes are proof of that.
The massive black hole in question is big, and it is getting bigger fast. Scientists say it dates from when the universe was about 875 million years old, but they can only see it as it was 12.8 billion years ago because it is so far away. Flavio Maluf says that the theory is that the farther away an object is in space, the older it is. The universe is expanding outward, but black holes expand inward, so what the heck do we really know?
Scientists believe black holes are born when stars collapse. That started about 100 million years after the Big Bang, but this new black hole shoots big holes in that theory. The truth is that the birth of black holes is still a mystery, and this discovery proves that fact.
Brilliant scientists have discovered a way to make small generators to create electricity on their own, enabling us to hopefully one day replace batteries.
The Nanogenerator uses a technique of lining up zinc oxide nanowires inside of a special electrode. Tiny filaments are set in motion by natural forces like blood flow, walking, mechanical vibration or ultrasonic waves. The devices are so minuscule that thousands of them could fit on the head of a pin.
Enough electricity could be generated to power a small biosensor implanted within the human body which has great benefits since typical batteries cannot be used in the body because of toxic materials like lithium and cadmium.
One of the men behind this project, Professor Zhong Lin Wang, says:
“If you had a device like this in your shoes when you walked, you would be able to generate your own small current to power small electronics,” Wang noted. “Anything that makes the nanowires move within the generator can be used for generating power. Very little force is required to move them.”
In the near future, nanogenerators are hoped to replace batteries, including the one in your cellphone.
Theory of relativity
The theory of relativity, or simply relativity, refers specifically to two theories of Albert Einstein: special relativity and general relativity. However, "relativity" can also refer to Galilean relativity.
The term "theory of relativity" was coined by Max Planck in 1908 to emphasize how special relativity (and later, general relativity) uses the principle of relativity.

Special Relativity
Special relativity is a theory of the structure of spacetime. It was introduced in Albert Einstein's 1905 paper "On the Electrodynamics of Moving Bodies".
Special relativity is based on two postulates which are contradictory in classical mechanics:
- The laws of physics are the same for all observers in uniform motion relative to one another (Galileo's principle of relativity),
- The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the source of the light.
The resultant theory has many surprising consequences. Some of these are:
- Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock.
- Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
- Relativity of simultaneity: two events that appear simultaneous to an observer A will not be simultaneous to an observer B if B is moving with respect to A.
- Mass-energy equivalence: E = mc², energy and mass are equivalent and transmutable.
The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations.
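To make these consequences concrete, here is a minimal sketch that computes the Lorentz factor and applies it to time dilation, length contraction, and mass-energy equivalence. The speed and mass used in the example are arbitrary illustrative values, not figures from this article.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Return gamma = 1 / sqrt(1 - v^2/c^2) for a speed v in m/s."""
    if abs(v) >= C:
        raise ValueError("speed must be less than the speed of light")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time: float, v: float) -> float:
    """Time measured by a 'stationary' observer for a moving clock."""
    return proper_time * lorentz_factor(v)

def contracted_length(proper_length: float, v: float) -> float:
    """Length of a moving object measured along its direction of motion."""
    return proper_length / lorentz_factor(v)

def rest_energy(mass_kg: float) -> float:
    """E = m * c^2, in joules, for a mass in kilograms."""
    return mass_kg * C ** 2

if __name__ == "__main__":
    v = 0.8 * C                       # example speed: 80% of light speed
    print(lorentz_factor(v))          # ~1.667
    print(dilated_time(1.0, v))       # 1 s of proper time is measured as ~1.667 s
    print(contracted_length(1.0, v))  # a 1 m rod is measured as ~0.6 m
    print(rest_energy(1.0))           # ~8.99e16 J locked up in 1 kg of mass
```

Running it shows, for example, that at 80% of light speed a moving clock is measured to tick about 1.67 times slower than a stationary one.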
General Relativity
General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example when standing on the surface of the Earth) are physically identical.
The upshot of this is that free fall is inertial motion: In other words an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics.
This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so.
To resolve this difficulty Einstein first proposed that spacetime is curved. In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and momentum within it.
Some of the consequences of general relativity are:
- Time goes more slowly in higher gravitational fields. This is called gravitational time dilation.
- Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars).
- Even rays of light (which have zero mass) bend in the presence of a gravitational field.
- The Universe is expanding, and the far parts of it are moving away from us faster than the speed of light. This does not contradict the theory of special relativity, since it is space itself that is expanding.
- Frame-dragging, in which a rotating mass "drags along" the space time around it.
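As a worked illustration of the first consequence in the list above, the sketch below estimates gravitational time dilation outside a non-rotating spherical mass using the standard Schwarzschild factor sqrt(1 - 2GM/(rc²)); the Earth mass and radius are rounded constants supplied here for the example, not values from this article.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s

def gravitational_time_factor(mass_kg: float, radius_m: float) -> float:
    """Rate of a clock at distance r from mass M relative to a distant clock.

    Uses the Schwarzschild exterior solution: dtau/dt = sqrt(1 - 2GM/(r c^2)).
    """
    rs = 2.0 * G * mass_kg / C ** 2   # Schwarzschild radius of the mass
    if radius_m <= rs:
        raise ValueError("radius must lie outside the Schwarzschild radius")
    return math.sqrt(1.0 - rs / radius_m)

if __name__ == "__main__":
    earth_mass = 5.97e24      # kg, approximate
    earth_radius = 6.371e6    # m, approximate
    factor = gravitational_time_factor(earth_mass, earth_radius)
    # A clock on Earth's surface ticks slightly slower than one far from Earth:
    print(f"surface clock rate relative to a distant clock: {factor:.12f}")
    seconds_per_year = 365.25 * 24 * 3600
    print(f"lag per year: {(1 - factor) * seconds_per_year * 1e3:.1f} ms")
```

For Earth the effect is tiny (a clock at the surface falls behind a distant clock by roughly 20 milliseconds per year), but in higher gravitational fields the factor shrinks dramatically.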
Technically, general relativity is a metric theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.
Iceland is the most volcanic place in the world. Explosive eruptions, from many different volcanoes, are common, unpredictable and, when seen from afar, exciting. But don’t be fooled. These volcanic explosions are small fry, and mostly harmless. Less common but far more devastating are the other type, responsible for 80% of all Icelandic lava and 99% of its volcanic casualties: Iceland’s fissure eruptions.
Fissure eruptions occur along a linear fault rather than at a central volcano. Elsewhere in the world they are known as Icelandic eruptions, but in Iceland they are known as ‘fires’. A vigorous fissure eruption produces curtains of fire along the fissure – hence the name. Over the past 1200 years, there have been 14 fires in Iceland. Four of these were in the past 100 years, an excess which suggests there may have been more, and that the older records are not fully complete. The most recent fires were Holuhraun (2014) which produced over 1 km3 of lava, drawn from below Bardarbunga 40 km away, Krafla (1975-1984), Surtsey (1963-1967), and Askja (1921-1929). Holuhraun in particular was a large, impressive fire. Bigger by far was the Laki eruption (1783-1784), also known as the Skafta fires (Skaftareldar, to be precise), which produced 15 km3 (DRE), drawn from beneath Grimsvotn. Laki was devastating and destructive: 78% of all horses on Iceland were killed, and 50% of cattle. The human population declined by 22%, there were significant fatalities from air pollution as far away as the UK, and Laki caused a winter so severe that as many as 1 million people may have perished from cold and starvation. It was a fire from hell.
And Laki was not even the largest fire on record. The Eldgja eruption which began in 934 AD was bigger, at almost 20 km3. It happened not long after the settlement of Iceland began – the Vikings’ initiation to Iceland’s volcanoes was a baptism by fire.
The name is self-explanatory: ‘Eld’ means fire, and ‘gja’ means fissure. But don’t let this name fool you into thinking this is what was seen. Although Iceland was already well occupied at the time (the population around 930 is estimated at 30,000), no written or oral records exist of the eruption. It is as if no one noticed it. The fires would have been visible from many places in the south of Iceland. Soon those areas would be covered by thick tephra, clouds of sulphuric acid, and overrun by floods of lava and water. How could it not have been noticed? Were all early reporters wiped out?
The duration of Eldgja has been estimated as 3 years (Hammer), up to 6 years (Zielinski) or even 8 years (Strothers). There are claims that weather was disrupted over Europe and Asia for as long as 9 years. But can this really be correct, if Laki only managed two years of mayhem? The story of Eldgja is far from complete: what are the facts – and what is fiction?
Let’s first look at the early Icelanders who themselves experienced Eldgja. The oldest civil records of Iceland are in the Book of Settlement and the Book of the Icelanders, both written down during the 12th century, mostly by the same person. They describe how the Vikings first settled Iceland in 874 AD or 870 AD (the two books don’t quite agree). 870 AD sounds more plausible: the winter of 873/874 was severe in Europe with significant death rates (reported as high as 30% in Mainz) which is not ideal for starting a colonization. On arrival, the Vikings found evidence of earlier occupation by Irish monks, but it is not clear whether these monks were still there at the time or had already left. The period of settlement lasted for 60 years, by which time the population may have been as high as 30,000. The Icelandic parliament, the Althingi, started either 930 or 934. The Books give names and fragments of background on many early settlers.
Archaeology confirms this sequence. There is a convenient tephra layer in Iceland dated to 877, and remnants of a few Viking sites have been found below this, all on the south-west coast. This favours the earlier date of 870 for the first settlers. Between the two tephra layers of 877 and Eldgja there are many Viking sites, particularly on the north side of Iceland. Many sites are well in-land, suggesting the coast had already filled up. Iceland was much more forested and vegetated than it is now. The Vikings settled on farmsteads, but the soils were often thin and quickly eroded. Farming became marginal and the carrying capacity of Iceland was reached quickly. Through various ups and downs (including the black death), by 1700 the population was still only 50,000. After Laki it was down to 40,000, not much more than in 930!
Eldgja is thought to have started in 934 and to have lasted to perhaps 941 (but I will come back to dates later). The eruption eventually covered 840 km2. At least 6 km3 of tephra was deposited over the country (the actual amount may have been larger: some will have ended up at sea). The lava volume is not fully known, as part has since been covered by sediments. On the western side, sediments from later jokulhlaups from Katla have covered some of the lava flows under a depth of 10 meter. The most recent estimate of the size of the eruption is 19.8 km3 (DRE), which includes the tephra. For comparison, the 1783-1784 Laki eruption was 14.7 km3. The Laki flows covered some of the Eldgja ones, which is one way to usurp your predecessor. But although both eruptions occurred close to each other, in the East Volcanic Zone (EVZ), they were fed by volcanoes on opposite sides of the zone.
The eruption came from within a 200-meter deep rift which ran north-east from the Katla glacier. Four separate rift segments are recognized. The west (or southern) rift, closest to Katla, extends 9 km from the ice cap and ends at the crater of Raudibotn. The rift extends underneath the icecap, perhaps by the full 15 km to the caldera. A series of spatter cones have formed in the west rift. About half the Eldgja lava was erupted from here. Where the west rift ends, a section of graben survives, which continues for 3 km. This section ends at the mountain Svartahnjuksfjoll and did not erupt. Here the central rift begins: it runs for 9 km along the western margin of the Alftavatn valley. The craters along this portion of the fissure consist of a series of ‘gjas’ which don’t quite connect. At the northern end of the central rift, the fissure jumps 1.5 km east where the eastern fissure begins, also called Eldgja proper. It runs for 8.5 km, terminating in the mountain Gjatindur. The fissure includes the popular tourist attraction Ofaerufoss, with a spectacular waterfall.
Another graben follows, until it reaches the Skafta river. Here the northern fissure begins, with a row of spatter cones extending 19 km. The lava flows from this section were later covered by the Skafta fires, which fissured parallel to but a little south of the northern rift. It was only recognized late that the northern rift was part of Eldgja. The total length of the fissure is 57 km from the edge of the glacier, or over 70 km if, optimistically, counting from the caldera.
Before the Eldgja eruption, the entire rift was a graben, which had existed already for some time. The lava filled the graben and flowed out mainly following river valleys, but fragments of the graben have survived. There are three main lava fields. Each followed a valley or river towards the coast, and spread out on the coastal plain. The largest is the Alftaver flow which came from the western rift and accounts for half of Eldgja’s lava. The Landbrot and Medalland flows account for most of the rest: they came from the central and eastern rift. Lava flows from the northern rift are much smaller, estimated at 0.5 km3: these have largely been covered by Laki. One hyaloclastite flow has been identified next to the Katla glacier, from an underwater (glacial lake) eruption.
The lavas are a type of basalt rich in titanium, as typical for Katla. But there are complications: the northern and to some degree the eastern rift seem to contain another, tholeiitic component as well. It has been suggested that at the northern end, there was an inflow (or prefill) of this lighter magma from Grimsvotn, and that the eruption in this location was a 50/50 mix of Katla and Grimsvotn magma. That would be very unusual: eruptions may draw from different regions in a magma chamber, but not normally from different volcanoes. The push from one volcano will override that of the other. But perhaps it can happen in a spreading ridge where the lava is sucked in rather than pushed in.
The eruption sequence is only partly known. From the fact that ash was found in the Greenland ice cap, we can deduce that there was an explosive event, most likely underneath the ice cap, perhaps within the caldera where most of Katla’s eruptions happen. The Eldgja tephra in Iceland shows over 30 distinct layers, over at least 8 separate episodes: the eruption was not continuous but there were different events at different times. The lowest tephra layers are thickest towards the Katla glacier. This shows that the eruption began there. Later, when the rift opened, the eruptions migrated north east.
Each of the rift eruptions would have started with strong earthquakes, typically M5. Laki showed such earthquakes throughout the 8-month eruption. Each new eruption was probably phreato-magmatic to begin with (the top layer of the Icelandic crust tends to be quite wet and there may have been lakes in the graben): the tephra shows alternating layers of phreato-magmatic and pure magmatic tephra. Then the fire fountains begin. For Laki, the fountains reached a kilometer high; Eldgja may well have shown similar heights. The glow would have been visible over all of southern Iceland, and the fountains seen on the horizon. What a view it would have been, magnificent and terrifying. And during the months and years of the eruption, this may have happened over 30 times.
The fountains build up cones, and lava lakes develop inside the cones. The lakes greatly reduce the height of the fountains (because the exit hole becomes larger). The eruption remains just as fast but is less visible. Later, as more magma has been pushed out, the internal pressure reduces and the flow begins to wane. Eventually it becomes so slow that the magma underground has time to solidify, and now the eruption stops. This is how Holuhraun ended. But underground, with the exit blocked, the pressure from behind begins to increase again. The magma may break through in a new place (this did not happen in Holuhraun - there wasn’t enough magma left). You expect a waning eruption to work its way back to the origin, with new break-outs happening closer to the actual volcano. But Eldgja appears to have migrated the other way, stepwise away from the origin. Perhaps the eruption was not solely controlled by the magma supply. A migrating rifting event may have played a part.
Eldgja, like Laki, erupted huge amounts of sulphur, about 220 Mt of SO2. This is much higher even than Tambora which produced about 80 Mt. Much of the environmental and health impact comes from this.
Finally, the eruption subsided and Iceland was left in peace. But much of the region was devastated. It has been argued (for instance by Mathias Nordvig), though it remains speculative, that the Eldgja eruption is behind the saga of Ragnarök – the twilight of the gods.
To boldly flow
Before the eruption, the land was very different. It had been sufficiently vegetated to allow for several settlements, but the soil was thin and easily eroded, and early settlements were apparently quickly abandoned. The eruption changed the landscape and moved the coast line.
Not all flows were of lava. Katla eruptions invariably lead to water floods, jokulhlaups which appear from underneath the glacier. The jokulhlaups travel typically at 20 km/h, and without warning it is difficult to outrun or survive them. The deposits are eroded rock from the mountain underneath the glacier and soil from the upstream flows. Jokulhlaups left substantial deposits after or during Eldgja around the ice cap, on the western edge of the lava flows. Katla’s eruptions typically deposit more volume from the jokulhlaups than from the eruption itself, although for the Eldgja eruption the lava flows were more voluminous.
Furthermore, the tephra ‘floods’ would also have been deeply damaging. An area of 2600 km2 was covered under more than 10 cm of tephra, and 600 km2 under more than 1 meter, about the same area as was covered by lava. The tephra thickness dropped below 0.5 cm only beyond 100 km, which is twice as far as for Laki. Whereas the lava and jokulhlaups flow downhill, tephra goes where the wind blows. Even a centimeter of tephra is destructive to agriculture, and many areas would have needed many years to recover.
Coming up
So far, we have focussed on the facts, the things we know. The next part will talk about less certain aspects: the date, the volcanic winter, and how the fire was fed. And finally, could this happen again?
To be continued
P.E. Central Lesson Plan: State Geography
The students need to know where the states of the United States are located in relation to each other. Example: South Carolina is below North Carolina.
Purpose of Activity:
This activity reinforces the students' knowledge of the locations of the states.
Suggested Grade Level:
Materials Needed:
Strips of paper with the names of the states printed on them and an overhead with an outline of the United States for students to check with as needed during the activity;
Physical activity: Locomotor skills
Description of Idea
Have the students start at the edge of the playing area. Each one draws a slip of paper with the name of a state. (For a small number of students, focus on the regions of the states.) When the teacher says, "Go," the students jump, skip, or crawl to the area where their state should be. To figure out who they should stand by, the students ask the other students which states they represent until every state is in the correct location.

Instead of telling the names of their states, the students can give hints such as:

The state bird is...
The state capital is...
This event or this landmark is here.

If the activity appears too difficult for the students, then let each student start with an outline of the USA to assist in the activity.
who teaches at Clemson University in Clemson, SC.
Posted on PEC: 7/20/2001.
This lesson plan was provided courtesy of P.E. Central (www.pecentral.org).
Shunt Wound DC Generators
A generator having a field winding connected in parallel with the external circuit is called a shunt generator, as shown in views A and B of Figure 10-272. The field coils of a shunt generator contain many turns of small wire; the magnetic strength is derived from the large number of turns rather than the current strength through the coils. If a constant voltage is desired, the shunt wound generator is not suitable for rapidly fluctuating loads. Any increase in load causes a decrease in the terminal or output voltage, and any decrease in load causes an increase in terminal voltage; since the armature and the load are connected in series, all current flowing in the external circuit passes through the armature winding. Because of the resistance in the armature winding, there is a voltage drop (IR drop = current × resistance). As the load increases, the armature current increases and the IR drop in the armature increases. The voltage delivered to the terminals is the difference between the induced voltage and the voltage drop; therefore, there is a decrease in terminal voltage. This decrease in voltage causes a decrease in field strength, because the current in the field coils decreases in proportion to the decrease in terminal voltage; with a weaker field, the voltage is further decreased. When the load decreases, the output voltage increases accordingly, and a larger current flows in the windings. This action is cumulative, so the output voltage continues to rise to a point called field saturation, after which there is no further increase in output voltage.
The terminal voltage of a shunt generator can be controlled by means of a rheostat inserted in series with the field windings as shown in Figure 10-272A. As the resistance is increased, the field current is reduced; consequently, the generated voltage is reduced also. For a given setting of the field rheostat, the terminal voltage at the armature brushes will be approximately equal to the generated voltage minus the IR drop produced by the load current in the armature; thus, the voltage at the terminals of the generator will drop as the load is applied. Certain voltage sensitive devices are available which automatically adjust the field rheostat to compensate for variations in load. When these devices are used, the terminal voltage remains essentially constant.
Compound Wound DC Generators
A compound wound generator combines a series winding and a shunt winding in such a way that the characteristics of each are used to advantage. The series field coils are made of a relatively small number of turns of large copper conductor, either circular or rectangular in cross section, and are connected in series with the armature circuit. These coils are mounted on the same poles on which the shunt field coils are mounted and, therefore, contribute a magnetomotive force which influences the main field flux of the generator. A diagrammatic and a schematic illustration of a compound wound generator is shown in A and B of Figure 10-273.
If the ampere turns of the series field act in the same direction as those of the shunt field, the combined magnetomotive force is equal to the sum of the series and shunt field components. Load is added to a compound generator in the same manner in which load is added to a shunt generator, by increasing the number of parallel paths across the generator terminals. Thus, the decrease in total load resistance with added load is accompanied by an increase in armature circuit and series field circuit current. The effect of the additive series field is that of increased field flux with increased load. The extent of the increased field flux depends on the degree of saturation of the field as determined by the shunt field current. Thus, the terminal voltage of the generator may increase or decrease with load, depending on the influence of the series field coils. This influence is referred to as the degree of compounding. A flat compound generator is one in which the no load and full load voltages have the same value; whereas an under compound generator has a full load voltage less than the no load value, and an over compound generator has a full load voltage which is higher than the no load value. Changes in terminal voltage with increasing load depend upon the degree of compounding.
If the series field aids the shunt field, the generator is said to be cumulative compounded.
If the series field opposes the shunt field, the machine is said to be differentially compounded, or is called a differential generator. Compound generators are usually designed to be overcompounded. This feature permits varied degrees of compounding by connecting a variable shunt across the series field. Such a shunt is sometimes called a diverter. Compound generators are used where voltage regulation is of prime importance.
Differential generators have somewhat the same characteristics as series generators in that they are essentially constant current generators. However, they generate rated voltage at no load, the voltage dropping materially as the load current increases. Constant current generators are ideally suited as power sources for electric arc welders and are used almost universally in electric arc welding.
If the shunt field of a compound generator is connected across both the armature and the series field, it is known as a long shunt connection, but if the shunt field is connected across the armature alone, it is called a short shunt connection. These connections produce essentially the same generator characteristics.
A summary of the characteristics of the various types of generators discussed is shown graphically in Figure 10-274.
|
"What this system allows us to do is to look in incredible detail at the whole process," said Lenski. "Most of what we saw pretty much confirms what biologists have reported from other lines of evidence, but there was an interesting twist."
The twist is that some of the intermediate steps rather than always being steps up, or even sideways, were steps down. That is, some of the key mutations were harmful in the short term but survived the forces of natural selection and ultimately played a crucial role in the genetic development of a newly evolved complex function.
Squash, Bugs, and Computers
Lenski's laboratory is crammed with petri dishes filled with thousands of generations of the bacterium Escherichia coli, which he uses to study evolution in real, or "wet," organisms. The bacteria replicate, mutate, and compete relatively quickly, allowing Lenski to watch and manipulate the process of evolution.
During a friendly game of squash with a colleague in the physics department, Lenski learned that Adami was to speak at Michigan State University about his experiments with digital organisms.
Lenski says he was skeptical, but went to the talk and found that Adami was using a different language to describe "similar dynamics to what we were finding with bacteria." The scientists decided to collaborate.
The researchers use a computer program designed by Adami, which is called Avida. The program is basically an artificial petri dish in which organisms reproduce and, if they evolve the right skills, can perform mathematical calculations to obtain rewards.
The reward is more computer time that the digital organisms use to copy themselves. To mimic real life, Avida is programmed to randomly add mutations to the copies, thus spurring natural selection and evolution.
"As an evolutionary biologist who does experiments rather than looking at ancient fossils, I like to joke 'what has evolution done for us lately?,'" said Lenski. "I like to be able to watch the process. Microorganisms are one way of doing that. With digital ones we can measure all aspects of a complex system as it mutates and evolves."
The researchers found that if the digital organisms lived in a computer environment that only rewarded them for performing a complex mathematical task, akin to solving a logic puzzle, they never could evolve the ability to do it.
But when the scientists repeated the experiment in a computer environment that also rewarded the digital organisms for solving several simpler puzzles, the organisms eventually evolved the ability to solve the most complicated problem they were given.
"In order to get to the point of doing the most complex operations, the experiments showed it was necessary that they had to solve easier problems first," said Lenski.
The digital organisms solved the most complicated problem by borrowing and modifying bits and pieces of the "genetic code" that their ancestors had used to solve the simpler tasks, just as predicted by Darwin.
The surprise, says Lenski, is that the evolutionary process is not a ladder in which the fittest organisms are descended from the fittest organisms in earlier generations. Instead, some mutations are harmful in the short term but can set up subsequent changes that are quite beneficial.
"What we are able to do is show how all components of the evolutionary process, the random and non-random, get together to form a highly complex gene which could not have evolved by random drift," said Adami.
This research is funded by a grant from the U.S. National Science Foundation.
|
The RefineDet model is a popular deep learning model used for object detection applications as an alternative to SSD- and YOLO-based CNN models.
Table of contents:
- Introduction to Deep Learning Applications
- RefineDet model
- Different Object Detection models
Introduction to Deep Learning Applications
Computer vision is a branch of artificial intelligence (AI) that allows computers and systems to extract useful information from digital photos, videos, and other visual inputs, as well as to conduct actions or make recommendations based on that data. If artificial intelligence allows computers to think, computer vision allows them to see, watch, and comprehend.
Human vision is similar to computer vision, with the exception that people have a head start. Human vision benefits from lifetimes of context that teach it how to tell objects apart, how far away they are, whether they are moving, and whether something is wrong with an image. Computer vision trains machines to perform these functions, but it must do so with cameras, data, and algorithms rather than retinas, optic nerves, and a visual cortex, and in a fraction of the time. Because a system trained to check items or monitor a production asset may evaluate hundreds of products or processes per minute, detecting faults or issues that are invisible to humans, it can swiftly outperform humans.
A lot of data is required for computer vision. It repeats data analyses until it detects distinctions and, eventually, recognizes images. To teach a computer to recognize automotive tires, for example, it must be fed a large number of tire photos and tire-related materials in order for it to understand the differences and recognize a tire, particularly one with no faults.
Two key technologies are used to accomplish this: deep learning, a type of machine learning, and a convolutional neural network (CNN). Machine learning uses algorithmic models that allow a computer to teach itself about the context of visual data. If enough data is fed into the model, the computer will "look" at the data and learn to distinguish between images. Instead of someone programming the machine to recognize an image, algorithms allow it to learn on its own. By breaking images down into pixels that are given tags or labels, a CNN helps a machine learning or deep learning model "see." It makes predictions about what it is "seeing" by using the labels to perform convolutions (a mathematical operation on two functions to produce a third function).
In a series of iterations, the neural network executes convolutions and assesses the accuracy of its predictions until the predictions start to come true. It then recognizes or sees images in a human-like manner. A CNN, like a human recognizing a picture from a distance, detects hard edges and simple forms first, then fills in the details as it runs iterations of its predictions.
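As a rough illustration of the convolution-and-iteration idea described above, the following is a minimal sketch of an image classifier written with PyTorch; the layer sizes, input resolution, and class count are arbitrary assumptions, not a reference to any particular system.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal convolutional classifier: early layers pick up edges and simple
    shapes, later layers fill in detail, in the spirit described above."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)                 # repeated convolutions over the pixel grid
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 224, 224))   # one 224 x 224 RGB image
print(logits.shape)                               # torch.Size([1, 10])
```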
To comprehend single images, a CNN is employed. In video applications, a recurrent neural network (RNN) is used in a similar way to help computers grasp how visuals in a sequence of frames are related to each other.
figure1: object detection
When an image is classified, it is passed through a classifier (such as a deep neural network) to generate a tag. Classifiers consider the entire image, but they do not indicate where the tag appears in the image.
Object detection is a deep learning technique that allows items like people, buildings, and cars to be recognized as objects in images and videos. Object detection is used in a variety of computer vision applications, including object tracking, retrieval, video surveillance, image captioning, image segmentation, medical imaging, and many others.
Image segmentation is the process of determining which pixels in an image belong to which object class. Semantic picture segmentation will identify all pixels that relate to that tag, but it will not specify the object boundaries. Instead of segmenting the object, object detection uses a box to explicitly indicate the location of each individual object instance. When semantic segmentation and object detection are combined, instance segmentation is created, which finds object instances first and then segments each within the detected boxes (known in this case as regions of interest).
To obtain the final result, RefineDet generates a predetermined number of bounding boxes and scores indicating the presence of different kinds of objects in those boxes, followed by non-maximum suppression (NMS). The anchor refinement module (ARM) and the object detection module (ODM) are the two interconnected components that make up RefineDet. The backbone is a VGG-16 or ResNet-101 network pretrained on ILSVRC CLS-LOC.
figure2: RefineDet: Network Architecture
Cascaded Regression in Two Steps
As previously stated, ARM removes negative anchors from the search area for the classifier and coarsely adjusts anchor placements and sizes.
The ODM takes the refined anchors from the ARM as input to further improve the regression and predict multi-class labels.
Designing and Matching Anchors
To handle varied scales of objects, four feature layers with total stride sizes of 8, 16, 32, and 64 pixels are used.
Each feature layer has a different anchor scale and three different aspect ratios.
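A minimal sketch of how such anchors could be tiled is shown below. The 320-pixel input size and the rule that each layer's anchor scale is four times its stride are assumptions made for illustration; only the strides (8, 16, 32, 64) and the three aspect ratios come from the description above.

```python
import itertools

def tile_anchors(image_size=320, strides=(8, 16, 32, 64), ratios=(0.5, 1.0, 2.0)):
    """Tile one anchor per aspect ratio on every cell of each feature map."""
    anchors = []                                   # (cx, cy, w, h) in pixels
    for stride in strides:
        cells = image_size // stride               # feature map is cells x cells
        scale = 4 * stride                         # assumed scale rule for this sketch
        for i, j, r in itertools.product(range(cells), range(cells), ratios):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            anchors.append((cx, cy, scale * r ** 0.5, scale / r ** 0.5))
    return anchors

print(len(tile_anchors()))   # 6375 anchors over the four feature maps
```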
Anchor Refinement Module (ARM)
In SSD, there are pre-defined anchor boxes with fixed sizes, ratios, and placements.
As stated above, the ARM filters out negative anchors to reduce the classifier's search space and coarsely adjusts anchor positions and sizes, providing better initialization for the subsequent regressor.
Each regularly tiled cell on the feature map is associated with n anchor boxes. For each anchor box at each cell, four refined offsets are predicted, together with two confidence scores indicating the presence of a foreground object in that box.
Negative Anchor Filtering
An anchor box is rejected during ODM training if its negative confidence is greater than a predefined threshold (0.99, set experimentally), since it is then almost certainly background. Only the refined hard negative anchor boxes and refined positive anchor boxes are used to train the ODM.
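In code, the rule amounts to masking out anchors whose background confidence exceeds the threshold; the sketch below is a simplified illustration with made-up scores.

```python
import torch

# Hypothetical ARM background (negative) confidences for four anchors.
arm_neg_conf = torch.tensor([0.999, 0.30, 0.995, 0.70])

keep = arm_neg_conf <= 0.99        # anchors kept and passed on to the ODM
print(keep)                        # tensor([False,  True, False,  True])
```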
Object detection module (ODM)
Once obtained, the refined anchor boxes are passed to the corresponding feature maps in the ODM. Based on these refined anchors, the ODM regresses accurate object locations and predicts multi-class labels.
To complete the detection task, c class scores and the four accurate offsets of objects relative to the refined anchor boxes are calculated, producing c + 4 outputs for each refined anchor box.
Hard Negative Mining
To compensate for the high foreground-background class imbalance, hard negative mining is employed. Instead of employing all negative anchors or randomly selecting negative anchors in training, some negative anchor boxes with top loss values are chosen to keep the ratio between negatives and positives below 3:1.
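A minimal sketch of this selection, assuming we already have a per-anchor loss value and a positive/negative labelling (the numbers below are invented for illustration):

```python
import torch

def hard_negative_mining(loss, is_positive, neg_pos_ratio=3):
    """Keep all positives plus the highest-loss negatives, at most 3 negatives per positive."""
    num_pos = int(is_positive.sum())
    num_neg = min(neg_pos_ratio * num_pos, int((~is_positive).sum()))
    neg_loss = loss.clone()
    neg_loss[is_positive] = -float("inf")          # exclude positives from the ranking
    _, neg_idx = neg_loss.topk(num_neg)            # hardest negatives by loss value
    keep = is_positive.clone()
    keep[neg_idx] = True
    return keep

loss = torch.tensor([2.1, 0.2, 1.5, 0.9, 0.1, 3.0])
is_positive = torch.tensor([True, False, False, False, False, False])
print(hard_negative_mining(loss, is_positive))   # positives plus the 3 hardest negatives
```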
Transfer connection block (TCB)
The TCB transfers features from the ARM to the ODM for detection. The goal of the TCBs is to improve detection accuracy by integrating large-scale context, adding high-level features to the transferred information.
A deconvolution operation is used to enlarge the high-level feature maps so that their dimensions match, after which the maps are summed element-wise.
figure3: Transfer connection block
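Below is a rough PyTorch-style sketch of such a block. The channel counts and exact layer arrangement are assumptions for illustration rather than the paper's precise configuration; only the deconvolution-plus-element-wise-sum structure follows the description above.

```python
import torch
import torch.nn as nn

class TransferConnectionBlock(nn.Module):
    """Sketch of a TCB: convert an ARM feature map, add upsampled higher-level
    context via deconvolution and element-wise sum, then smooth the result."""
    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        self.lateral = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
        )
        self.upsample = nn.ConvTranspose2d(out_channels, out_channels, 2, stride=2)
        self.smooth = nn.Sequential(
            nn.ReLU(), nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.ReLU()
        )

    def forward(self, arm_feature, higher_level=None):
        x = self.lateral(arm_feature)
        if higher_level is not None:                 # deeper TCB output, half resolution
            x = x + self.upsample(higher_level)      # element-wise sum after deconvolution
        return self.smooth(x)

tcb = TransferConnectionBlock(in_channels=512)
out = tcb(torch.randn(1, 512, 20, 20), torch.randn(1, 256, 10, 10))
print(out.shape)  # torch.Size([1, 256, 20, 20])
```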
Loss Function & Inference
RefineDet's loss function is therefore divided into two parts: the loss in the ARM and the loss in the ODM. For the ARM, each anchor is assigned a binary class label (object or not) and its location and size are regressed at the same time to produce the refined anchor. The refined anchors with a negative confidence smaller than the threshold are then passed to the ODM, which uses them to predict object categories as well as precise object locations and sizes.
The loss function is as follows:
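In the notation defined just below, the two-part objective can be sketched as follows (a reconstruction of the published formulation rather than an exact quotation):

$$
\mathcal{L} = \frac{1}{N_{arm}} \Big( \sum_i L_b\big(p_i, [l_i^* \ge 1]\big) + \sum_i [l_i^* \ge 1]\, L_r\big(x_i, g_i^*\big) \Big) + \frac{1}{N_{odm}} \Big( \sum_i L_m\big(c_i, l_i^*\big) + \sum_i [l_i^* \ge 1]\, L_r\big(t_i, g_i^*\big) \Big)
$$

Here p_i and x_i are the ARM's predicted objectness confidence and refined box for anchor i, c_i and t_i are the ODM's predicted class scores and box offsets, and l_i* and g_i* are the ground-truth class label and box.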
N_arm and N_odm are the numbers of positive anchors in the ARM and ODM, respectively.
The binary classification loss L_b is the cross-entropy/log loss over two classes (object vs. not object). The multi-class classification loss L_m is the softmax loss over the multi-class confidences. Smooth L1 loss is employed as the regression loss L_r, as in Fast R-CNN. The Iverson bracket [l_i* ≥ 1] indicates that the regression loss is ignored for negative anchors.
At inference time, the regularly tiled anchors with negative confidence scores greater than the threshold of 0.99 are first filtered out by the ARM. The ODM then takes over the refined anchors and outputs the top 400 most confident detections per image. NMS is applied per class with a Jaccard overlap threshold of 0.45, and the top 200 high-confidence detections per image are kept as the final detection results.
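A simplified sketch of that post-processing pipeline, collapsing the per-class step into a single class for brevity; the box/score tensors below are invented, the thresholds follow the description above, and torchvision's nms is used for the suppression step.

```python
import torch
from torchvision.ops import nms

def refinedet_postprocess(boxes, scores, neg_conf,
                          neg_thresh=0.99, pre_nms_top_k=400,
                          iou_thresh=0.45, keep_top_k=200):
    """boxes: (N, 4) refined boxes, scores: (N,) best class confidence,
    neg_conf: (N,) ARM background confidence."""
    keep = neg_conf <= neg_thresh                 # drop near-certain background anchors
    boxes, scores = boxes[keep], scores[keep]
    top = scores.topk(min(pre_nms_top_k, scores.numel())).indices
    boxes, scores = boxes[top], scores[top]
    kept = nms(boxes, scores, iou_thresh)         # the real model applies NMS per class
    return boxes[kept][:keep_top_k], scores[kept][:keep_top_k]

b = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])
s = torch.tensor([0.90, 0.80, 0.70])
n = torch.tensor([0.10, 0.20, 0.999])
print(refinedet_postprocess(b, s, n))
```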
Different Object Detection models
The Single Shot MultiBox Detector (SSD) starts with a collection of default boxes rather than predicting them from scratch. It uses several feature maps of different scales (i.e. several grids of different sizes such as 4 x 4, 8 x 8, etc.), with a fixed set of default boxes of different aspect ratios per cell in each of those grids/feature maps. The model then computes the "offsets" as well as the class probabilities for each default box. The offsets are made up of four numbers, cx, cy, w, and h, which relate the center coordinates, width, and height of the real box to the default box.
SSD also takes a different approach to matching object ground truth boxes to default boxes. There is no single default box that is held responsible for an object and matched to it. Instead, any default box with an IoU greater than a threshold (0.5) with a ground truth box is matched to that object. This means that numerous default boxes overlapping the object are trained to produce high scores, rather than only one of those boxes being held responsible.
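A small sketch of that matching rule, using torchvision's box_iou; the boxes below are invented coordinates in (x1, y1, x2, y2) format.

```python
import torch
from torchvision.ops import box_iou

def match_default_boxes(default_boxes, gt_boxes, iou_threshold=0.5):
    """Every default box whose IoU with some ground-truth box exceeds the
    threshold is matched to that object (several boxes may share one object)."""
    iou = box_iou(default_boxes, gt_boxes)          # (num_defaults, num_gt)
    best_iou, best_gt = iou.max(dim=1)
    return torch.where(best_iou > iou_threshold, best_gt,
                       torch.full_like(best_gt, -1))  # -1 marks background boxes

defaults = torch.tensor([[0., 0., 10., 10.], [8., 8., 18., 18.], [40., 40., 50., 50.]])
gt = torch.tensor([[1., 1., 11., 11.]])
print(match_default_boxes(defaults, gt))            # tensor([ 0, -1, -1])
```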
YOLO, for example, divides the input image into an S × S grid. Each grid cell is assessed not just for the class probabilities discussed above, but also for a set of B bounding boxes and confidence scores for those boxes.
To put it another way, unlike our basic thought exercise, the boxes are predicted together with the class probabilities for the cell. Each bounding box carries five predictions: x, y, w, h, and confidence. The first four are the box coordinates, while the last one, confidence, reflects the model's certainty that the box contains an object as well as how accurate the box coordinates are.
Furthermore, the grid cell in which the object's center is located is responsible for establishing the box coordinates of that object. This helps prevent numerous cells from predicting boxes around the same object. Nonetheless, each cell predicts multiple bounding boxes. During training, one of these boxes is designated as "responsible" for predicting an object, based on which prediction has the highest current IoU with the ground truth. As a result, the multiple bounding boxes in each cell specialize in predicting specific sizes, aspect ratios, or object types, which improves overall recall.
These predictions are represented as an S × S × (B × 5 + C) tensor, where S × S is the grid dimension, B is the number of boxes each cell predicts, 5 is the number of predictions per box (x, y, w, h, and confidence), and C is the number of object classes the model can identify.
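For instance, with the commonly used values S = 7, B = 2, and C = 20 (assumed here for illustration), the output tensor has shape 7 x 7 x 30:

```python
S, B, C = 7, 2, 20                # hypothetical grid size, boxes per cell, classes
print((S, S, B * 5 + C))          # (7, 7, 30): the prediction tensor shape
```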
With this article at OpenGenus, you must have a good idea of the RefineDet model. |
For the first time, the boundary of the heliosphere has been mapped, giving scientists a better understanding of how solar and interstellar winds interact.
Dan Reisenfeld, a scientist at Los Alamos National Laboratory and lead author on the paper, said: “Physics models have theorized this boundary for years, but this is the first time we’ve actually been able to measure it and make a three-dimensional map of it.” Reisenfeld’s paper was published in the Astrophysical Journal today.
The heliosphere is the vast, bubble-like region of space created by the influence of our Sun and extends into interstellar space. The two major components to determining its edge are the heliospheric magnetic field and the solar wind from the Sun.
Three major sections from the beginning of the heliosphere to its edge are the termination shock, the heliosheath, and the heliopause. A type of particle called an energetic neutral atom (ENA) has also been observed being produced at its edges.
The team mapped the boundary using the IBEX satellite’s measurements of energetic neutral atoms (ENAs), which result from collisions between solar wind particles and those from the interstellar wind. The intensity of that signal depends on the intensity of the solar wind that strikes the heliosheath. When a wave hits the sheath, the ENA count goes up and IBEX can detect it.
|
The Birth of Modern Astronomy
Astronomy made no major advances in strife-torn medieval Europe. The birth and expansion of Islam after the seventh century led to a flowering of Arabic and Jewish cultures that preserved, translated, and added to many of the astronomical ideas of the Greeks. Many of the names of the brightest stars, for example, are today taken from the Arabic, as are such astronomical terms as “zenith.”
As European culture began to emerge from its long, dark age, trading with Arab countries led to a rediscovery of ancient texts such as Almagest and to a reawakening of interest in astronomical questions.
One of the most important events of the Renaissance was the displacement of Earth from the center of the universe, an intellectual revolution initiated by a Polish cleric in the sixteenth century. Nicolaus Copernicus was born in Torun, a mercantile town along the Vistula River. His training was in law and medicine, but his main interests were astronomy and mathematics. His great contribution to science was a critical reappraisal of the existing theories of planetary motion and the development of a new Sun-centered, or heliocentric, model of the solar system. Copernicus concluded that Earth is a planet and that all the planets circle the Sun. Only the Moon orbits Earth (Figure 2.17).
Copernicus described his ideas in detail in his book De Revolutionibus Orbium Coelestium (On the Revolution of Celestial Orbs), published in 1543, the year of his death. By this time, the old Ptolemaic system needed significant adjustments to predict the positions of the planets correctly. Copernicus wanted to develop an improved theory from which to calculate planetary positions, but in doing so, he was himself not free of all traditional prejudices.
He began with several assumptions that were common in his time, such as the idea that the motions of the heavenly bodies must be made up of combinations of uniform circular motions. But he did not assume (as most people did) that Earth had to be in the center of the universe, and he presented a defense of the heliocentric system that was elegant and persuasive. His ideas, although not widely accepted until more than a century after his death, were much discussed among scholars and, ultimately, had a profound influence on the course of world history.
Copernicus argued that the apparent motion of the Sun about Earth during the course of a year could be represented equally well by a motion of Earth about the Sun. He also reasoned that the apparent rotation of the celestial sphere could be explained by assuming that Earth rotates while the celestial sphere is stationary. To the objection that if Earth rotated about an axis it would fly into pieces, Copernicus answered that if such motion would tear Earth apart, the still faster motion of the much larger celestial sphere required by the geocentric hypothesis would be even more devastating.
The Heliocentric Model
The most important idea in Copernicus’ De Revolutionibus is that Earth is one of six (then-known) planets that revolve about the Sun. Using this concept, he was able to work out the correct general picture of the solar system. He placed the planets, starting nearest the Sun, in the correct order: Mercury, Venus, Earth, Mars, Jupiter, and Saturn. Further, he deduced that the nearer a planet is to the Sun, the greater its orbital speed. With his theory, he was able to explain the complex retrograde motions of the planets without epicycles and to work out a roughly correct scale for the solar system.
Copernicus could not prove that Earth revolves about the Sun. In fact, with some adjustments, the old Ptolemaic system could have accounted, as well, for the motions of the planets in the sky. But Copernicus pointed out that the Ptolemaic cosmology was clumsy and lacking the beauty and symmetry of its successor.
In Copernicus’ time, in fact, few people thought there were ways to prove whether the heliocentric or the older geocentric system was correct. A long philosophical tradition, going back to the Greeks and defended by the Catholic Church, held that pure human thought combined with divine revelation represented the path to truth. Nature, as revealed by our senses, was suspect. For example, Aristotle had reasoned that heavier objects (having more of the quality that made them heavy) must fall to Earth faster than lighter ones. This is absolutely incorrect, as any simple experiment dropping two balls of different weights shows. However, in Copernicus’ day, experiments did not carry much weight (if you will pardon the expression); Aristotle’s reasoning was more convincing.
In this environment, there was little motivation to carry out observations or experiments to distinguish between competing cosmological theories (or anything else). It should not surprise us, therefore, that the heliocentric idea was debated for more than half a century without any tests being applied to determine its validity. (In fact, in the North American colonies, the older geocentric system was still taught at Harvard University in the first years after it was founded in 1636.)
Contrast this with the situation today, when scientists rush to test each new hypothesis and do not accept any ideas until the results are in. For example, when two researchers at the University of Utah announced in 1989 that they had discovered a way to achieve nuclear fusion (the process that powers the stars) at room temperature, other scientists at more than 25 laboratories around the United States attempted to duplicate “cold fusion” within a few weeks—without success, as it turned out. The cold fusion theory soon went down in flames.
How would we look at Copernicus’ model today? When a new hypothesis or theory is proposed in science, it must first be checked for consistency with what is already known. Copernicus’ heliocentric idea passes this test, for it allows planetary positions to be calculated at least as well as does the geocentric theory. The next step is to determine which predictions the new hypothesis makes that differ from those of competing ideas. In the case of Copernicus, one example is the prediction that, if Venus circles the Sun, the planet should go through the full range of phases just as the Moon does, whereas if it circles Earth, it should not (Figure 2.18). Also, we should not be able to see the full phase of Venus from Earth because the Sun would then be between Venus and Earth. But in those days, before the telescope, no one imagined testing these predictions.
Galileo and the Beginning of Modern Science
Many of the modern scientific concepts of observation, experimentation, and the testing of hypotheses through careful quantitative measurements were pioneered by a man who lived nearly a century after Copernicus. Galileo Galilei (Figure 2.19), a contemporary of Shakespeare, was born in Pisa. Like Copernicus, he began training for a medical career, but he had little interest in the subject and later switched to mathematics. He held faculty positions at the University of Pisa and the University of Padua, and eventually became mathematician to the Grand Duke of Tuscany in Florence.
Galileo’s greatest contributions were in the field of mechanics, the study of motion and the actions of forces on bodies. It was familiar to all persons then, as it is to us now, that if something is at rest, it tends to remain at rest and requires some outside influence to start it in motion. Rest was thus generally regarded as the natural state of matter. Galileo showed, however, that rest is no more natural than motion.
If an object is slid along a rough horizontal floor, it soon comes to rest because friction between it and the floor acts as a retarding force. However, if the floor and the object are both highly polished, the object, given the same initial speed, will slide farther before stopping. On a smooth layer of ice, it will slide farther still. Galileo reasoned that if all resisting effects could be removed, the object would continue in a steady state of motion indefinitely. He argued that a force is required not only to start an object moving from rest but also to slow down, stop, speed up, or change the direction of a moving object. You will appreciate this if you have ever tried to stop a rolling car by leaning against it, or a moving boat by tugging on a line.
Galileo also studied the way objects accelerate—change their speed or direction of motion. Galileo watched objects as they fell freely or rolled down a ramp. He found that such objects accelerate uniformly; that is, in equal intervals of time they gain equal increments in speed. Galileo formulated these newly found laws in precise mathematical terms that enabled future experimenters to predict how far and how fast objects would move in various lengths of time. |
In electromagnetism, absolute permittivity is the measure of the resistance that is encountered when forming an electric field in a medium. In other words, permittivity is a measure of how an electric field affects, and is affected by, a dielectric medium. The permittivity of a medium describes how much electric field (more correctly, flux) is 'generated' per unit charge in that medium. More electric flux exists in a medium with a high permittivity (per unit charge) because of polarization effects. Permittivity is directly related to electric susceptibility, which is a measure of how easily a dielectric polarizes in response to an electric field. Thus, permittivity relates to a material's ability to transmit (or "permit") an electric field.
In SI units, permittivity ε is measured in farads per meter (F/m); electric susceptibility χ is dimensionless. They are related to each other through
ε = εr ε0 = (1 + χ) ε0, or equivalently χ = εr − 1,
where εr is the relative permittivity of the material, and ε0 = 8.854187817.. × 10−12 F/m is the vacuum permittivity. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
Endodontics is a specialized branch of dentistry that deals with the complex structures found inside the teeth. The Greek word “Endodontics” literally means “inside the tooth” and relates to the tooth pulp, tissues, nerves, and arterioles. Endodontists receive additional dental training after completing dental school to enable them to perform both complex and simple procedures, including root canal therapy.
Historically, a tooth with a diseased nerve would be removed immediately, but endodontists are now able to save the natural tooth in most cases. Generally, extracting the inner tooth structures, then sealing the resulting gap with a crown restores health and functionality to damaged teeth.
Signs and symptoms of endodontic problems:
- Inflammation and tenderness in the gums.
- Teeth that are sensitive to hot and cold foods.
- Tenderness when chewing and biting.
- Tooth discoloration.
- Unexplained pain in the nearby lymph nodes.
Reasons for endodontic treatment
Endodontic treatment (or root canal therapy) is performed to save the natural tooth. In spite of the many advanced restorations available, most dentists agree that there is no substitute for healthy, natural teeth.
Here are some of the main causes of inner tooth damage:
Bacterial infections – Oral bacteria is the most common cause of endodontic problems. Bacteria invade the tooth pulp through tiny fissures in the teeth caused by tooth decay or injury. The resulting inflammation and bacterial infection jeopardize the affected tooth and may cause an abscess to form.
Fractures and chips – When a large part of the surface or crown of the tooth has become completely detached, root canal therapy may be required. The removal of the crown portion leaves the pulp exposed, which can be debilitatingly painful and problematic.
Injuries – Injuries to the teeth can be caused by a direct or indirect blow to the mouth area. Some injuries cause a tooth to become luxated or dislodged from its socket. Root canal therapy is often needed after the endodontist has successfully stabilized the injured tooth.
Removals – If a tooth has been knocked clean out of the socket, it is important to rinse it and place it back into the socket as quickly as possible. If this is impossible, place the tooth in special dental solution (available at pharmacies) or in milk. These steps will keep the inner mechanisms of the tooth moist and alive while emergency dental treatment is sought. The tooth will be affixed in its socket using a special splint, and the endodontist will then perform root canal therapy to save the tooth.
What does an endodontic procedure involve?
Root canal therapy usually takes between one and three visits to complete. Complete X-rays of the teeth will be taken and examined before the treatment begins.
Initially, a local anesthetic will be administered, and a dental dam (protective sheet) will be placed to ensure that the surgical area remains free of saliva during the treatment. An opening will be created in the surface of the tooth, and the pulp will be completely removed using small handheld instruments.
The space will then be shaped, cleaned, and filled with gutta-percha. Gutta-percha is a biocompatible material that is somewhat similar to rubber. Cement will be applied on top to ensure that the root canals are completely sealed off. Usually, a temporary filling will be placed to restore functionality to the tooth prior to the permanent restoration procedure. During the final visit, a permanent restoration or crown will be placed.
If you have questions or concerns about endodontic procedures, please contact our office. |
Solve Global Warming and make Interstellar Travel possible at once
We could not only save our planet but also make it possible to reach Proxima Centauri by the end of our century. And it might cost less than the International Space Station.
About a year ago, I wrote an article on using a large fleet of solar sail spacecraft to compensate for the released CO2. It seems to be by far the cheapest option to save our planet from the effects of Global Warming. However, that is not the only benefit. We could also use the fleet for interstellar travel.
Power of Sun
The fleet of solar sail probes to combat Global Warming will need to cover 1.2 million sq. km. That represents a power of about 10 TW, or roughly 120,000,000 Newtons of force provided by solar radiation (assuming a force slightly higher than 9.08 μN/m² at 1 AU, the distance between the Sun and Earth). That is enough force to accelerate a full-scale interstellar spacecraft weighing up to 10,000,000 kg at 120 m/s² = 12 g (12 times the acceleration we experience on Earth). The spacecraft could have a solar sail of a few square kilometers and carry a lot of apparatus.
The fleet of solar sail probes protecting Earth from the effects of Global Warming is parked close to the L1 point between Earth and the Sun. It would need to form a concave shape to reflect the beams of light onto a single point. All that solar radiation would be aimed at the sail of our interstellar spacecraft. Even with an average efficiency of 50% (depending on the angle of each probe), we would have an abundance of power to accelerate the spacecraft.
How fast can we go to spread life?
If we achieved 20% of the speed of light (0.2 c ≈ 60,000,000 m/s), the spacecraft would reach the Proxima Centauri system, 4.244 light-years away, in less than 22 years of Earth time. The time elapsed aboard the spacecraft would be almost the same; at that speed, time dilation amounts to only about 2%.
If humans were to be on board, we would need to accelerate at 1 g for approximately 6,000,000 seconds. That sounds like a lot, but every hour has 3,600 seconds, so it represents a bit less than 70 days. However, the spacecraft would cover 180,000,000,000 km within those 70 days. That is about 1,200 AU (1 AU = the distance between the Sun and Earth), far too far out of our Solar System for even a carefully pointed laser beam to be reflected onto a giant solar sail.
So we would need to accelerate much much faster.
Let's assume that a medically sedated crew with special equipment could survive a 50 g acceleration. The spacecraft would then need 34 hours to reach 0.2 c (20% of the speed of light) and would travel about 24 AU in the process. That is still too far away for the beam to propel it to such a speed; we could only keep the beam on the solar sail out to about 6 AU, which limits us to 0.1 c. At that speed, however, the journey to Proxima Centauri would take almost 50 years.
What about a robotic mission with 500 g of acceleration? It would take just 3.4 hours and 2.4 AU to reach 0.2 c, or 0.6 AU to reach just 0.1 c. Our interstellar spacecraft would stay quite close to the fleet of solar sails, within the directly focused beam of reflected solar radiation. This acceleration is also survivable for plant seeds or animal embryos/stem cells, so life could eventually be spread.
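The times and distances quoted in these scenarios follow simple non-relativistic kinematics, t = v/a and d = at²/2. The short sketch below reproduces them approximately; the target speeds and accelerations are the ones assumed in the text.

```python
C = 299_792_458          # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m
G = 9.81                 # one g, m/s^2

def burn(target_fraction_of_c, accel_g):
    """Non-relativistic time and distance to reach a given speed at constant acceleration."""
    v = target_fraction_of_c * C
    a = accel_g * G
    t = v / a                      # seconds
    d = 0.5 * a * t ** 2           # metres
    return t / 3600, d / AU        # hours, AU

for frac, g in [(0.2, 1), (0.2, 50), (0.2, 500), (0.2, 10_000)]:
    hours, au = burn(frac, g)
    print(f"{frac:.1f}c at {g:>6} g: {hours:8.1f} h, {au:8.2f} AU")
```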
To prolong the acceleration phase, we could equip the solar probes with cheap short-wavelength lasers powered by their solar panels. Every 100 W of laser power per probe would add up to 12 GW in total. The lasers could also be very useful for steering the spacecraft, which will need to be pointed very accurately, since course corrections during the flight will be practically impossible.
Top speed flyby
What if we could build a much smaller and more robust interstellar spacecraft with higher acceleration? For example, the Breakthrough Starshot design expects a 10,000 g acceleration with a spacecraft a few meters in diameter.
With that acceleration, the travel distance needed to reach 0.2 c is just 0.12 AU, and the spacecraft would accelerate for only 10 minutes. To get to 0.5 c (50% of the speed of light), we would need to keep the beam focused out to a distance of 0.75 AU. That seems possible even though the spacecraft is much smaller: more and more probes from the fleet of solar sails could gradually be brought in to compensate for the part of the beam that would otherwise miss the spacecraft. It would shorten the travel time to just under 9 years for an observer on Earth! We would learn more about the Proxima and Alpha Centauri systems just 15 years after launch, which is about the time span over which we explore distant objects in our own solar system nowadays.
Unfortunately, at a speed of 0.5 c a small spacecraft would rush through the target solar system without any option even to slow down. Even the combined solar radiation of all three stars of the Alpha Centauri system would not be enough to brake it, and it would spend just a few hours within the explored system. It could, however, take photos and make distance measurements and observations. Practically any collision with more than a few atoms would destroy the spacecraft.
Could we find a way to slow it down?
The kinetic energy at 0.2 c is enormous; each kilogram carries roughly the energy of a large nuclear bomb. Nevertheless, a larger Mother Spacecraft equipped with a strong power source could deploy a smaller and lighter Baby Spacecraft and slow it down with a laser as it closes in on Proxima Centauri.
The laser beam could even be bounced back and forth multiple times between the solar sails of both spacecraft, reducing the power needed by a factor of 10. We would still need a powerful (gigawatt-class) laser to slow down a Baby Spacecraft weighing a few kilograms. All of the slow-down has to happen within a few hours, before the distance between the two ships grows too large, which again leaves us with a deceleration of a couple of hundred g, so the seeds of life could survive. Ideally, multiple Baby Spacecraft would be deployed while passing around the star.
The maneuver could also be used to correct the interstellar navigation error that would accumulate over the years, so the Baby Spacecraft get closer to their target locations while the Mother Spacecraft bounces on toward its next star to explore.
As a result, the Baby Spacecraft would slow down and start orbiting and exploring the solar system for years. They could study all the planets and their moons and measure the stars of Alpha and Proxima Centauri in detail. They would use their sails to navigate around and could leverage the Mother Spacecraft to boost their communication signal toward Earth.
How much it would cost?
We would not need to worry about the cost of the solar fleet orbiting our Sun. These probes will be built anyway to protect Earth from the impact of Global Warming, and as part of that endeavor we would already have mastered the technology of solar sailing.
There is no need to build extremely expensive Earth-based lasers as suggested by the Breakthrough Starshot design. Hence only a small incremental cost would be required to launch the interstellar spacecraft.
The smallest version, designed for an interstellar flyby, would reuse most of the components from the solar probes. We could send thousands to each of the nearest stars for a few billion USD in total.
To build the Mother-and-Baby pair of spacecraft, we would first need to master the nuclear fusion reactor; otherwise, there would not be enough power for the laser. The Baby Spacecraft could use an ion drive, or eventually a positron antimatter drive, to navigate itself. These are highly efficient engines that convert energy and radiation into momentum to propel a spaceship. Both technologies are being developed and should be available by the time we have mapped the nearest stars with flyby interstellar explorers.
Proxima Centauri b is an exoplanet orbiting in the habitable zone of the red dwarf star Proxima Centauri. We could park our spacecraft there and explore the planet. Perhaps we could find, or seed, new life there by the end of the 21st century. Let's save our Earth and expand into the Universe! |
Bird Migration – November 6, 2019
Given a lot of recent avian activity by Canada geese here in Colorado as the temperatures drop and the snow arrives, today's topic is bird migration.
The timing of bird migrations is an intriguing phenomenon in nature and scientists are *still* working to solve its mysteries! According to Ian Newton, author of “The Migration Ecology of Birds,” many long-distance migrants are remarkably regular in their departure and arrival dates. That’s a crucial part of their continued survival, as it ensures that birds arrive in nesting areas just as environmental conditions become suitable for breeding, and then leave before they change. But birds seem to adjust to variations in weather that occur from year to year, which suggests that their migratory instincts are triggered by external stimuli.
While we know what conditions attract them (warm temperatures, food availability), scientists are still trying to figure out the mechanism that actually tells birds it’s time to take off. All About Birds, a site maintained by Cornell University’s Lab of Ornithology, offers a hypothesis that birds have some sort of “undiscovered interface” – basically, a sort of biological wifi connection – that enables them to sense distant temperature and weather conditions.
The secrets of birds’ navigational skills aren’t fully understood either, partly because birds combine several different types of senses when they navigate: they can get directional information from the sun, stars and by sensing the earth’s magnetic field. Information also comes to them from the position of the setting sun and landmarks seen during the day. Recent studies using eBird data are revealing that many small birds take different routes in spring and fall, to take advantage of seasonal patterns in weather – riding the prevailing winds saves calories!
photo: Ricardo Peña
|
Anonymous Functions in Scala
In Scala, an anonymous function is also known as a function literal. A function that does not have a name is known as an anonymous function. An anonymous function provides a lightweight function definition and is useful when we want to create an inline function.
(z: Int, y: Int) => z * y Or (_: Int) * (_: Int)
- In the first syntax above, => is known as a transformer. The transformer turns the parameter list on the left-hand side of the symbol into a result using the expression on the right-hand side.
- In the second syntax above, the _ character, known as a wildcard, is a shorthand way to represent a parameter that appears only once in the anonymous function.
When a function literal is instantiated as an object, it is known as a function value. In other words, when an anonymous function is assigned to a variable, we can invoke that variable like a function call. We can define multiple arguments in an anonymous function.
We are allowed to define an anonymous function without parameters. In Scala, We are allowed to pass an anonymous function as a parameter to another function.
|
Oral Bacteria: Get the Facts
We all have bacteria in our mouth, good and bad. But what exactly do these bacteria do? We’ve got all kinds of information on the role bacteria play in your oral health. Learn more about those pesky bacteria in your mouth!
There are anywhere between 500 and 1,000 different kinds of bacteria in our mouths.
Babies’ mouths are free of bacteria at birth. However, bacteria are transferred into their mouths from their mothers within hours of birth, mainly through kissing and food sharing.
Saliva flushes harmful bacteria out of the mouth by making it hard for bacteria to stick to the surfaces of our teeth.
Some foods can also flush bacteria from the teeth. Crunchy vegetables like carrots and celery stimulate the gums, while acidic fruits like apples increase saliva production to wash the teeth clean.
The tongue holds a significant portion of the mouth’s bacteria. It’s just as important to clean the tongue as it is to brush and floss, because bacteria on the tongue contributes to gum disease and bad breath. Try using a plastic or metal tongue scraper to clear out bacteria!
Hormonal changes during pregnancy put soon-to-be mothers at a higher risk of tooth erosion. Morning sickness and general hormonal changes cause acidity in the mouth to increase, which in turn erodes enamel.
Smoking increases your risk of tooth decay and gum disease. Not all bacteria are bad; in fact, some are even necessary to maintain hygienic balance. However, smoking tobacco destroys helpful bacteria in the mouth, which promotes the growth of harmful oral bacteria.
Oral bacteria multiply in number every 4-5 hours. No wonder it’s so important to brush teeth twice a day!
Who knew something so small could have such a big impact on your oral health! Make sure to schedule regular dental exams with us to keep oral bacteria under control for a clean, healthy smile! |
For most prokaryotic chromosomes, the replicon is the entire chromosome. One notable exception comes from the archaea, where two Sulfolobus species have been shown to contain three replicons. Examples of bacterial species that possess multiple replicons include Rhodobacter sphaeroides, Vibrio cholerae, and Burkholderia multivorans.
These “secondary” (or tertiary) chromosomes are often described as molecules intermediate between a true chromosome and a plasmid, and are sometimes called “chromids”.
Eukaryotic chromosomes contain multiple replicons per chromosome. The definition of a replicon is somewhat blurred for mitochondria, as they use unidirectional replication with two separate origins.
Detailed Explanation about Replicons
- It is critical that all the DNA in a cell be replicated once, and only once, per cell cycle. Jacob, Brenner, and Cuzin defined a replicon as the unit in which the cell controls individual acts of replication.
- The replicon initiates and completes synthesis once per cell cycle. Control is exerted primarily at the initiation. They proposed that an initiator protein interacted with a DNA sequence, called a replicator, to start replication.
- The replicator can be identified genetically as a DNA sequence required for replication, whereas the origin is defined by physical or biochemical methods as the DNA sequence at which replication begins.
- For many replicons, such as the E. coli oriC and the autonomously replicating sequences (or ARS) in yeast, the replicator is also an origin. However, this need not be the case: the replicon for amplified chorion genes in silkmoths has an origin close to, but separable from, the replicator.
- Initiator proteins have now been identified for some replicons, such as the DnaA protein in E. coli and the Origin Recognition Complex in the yeast Saccharomyces cerevisiae.
- In both cases, they bind to the replicators, which are also origins in these two species.
- The replicator is a sequence of DNA needed for the synthesis of the rest of the DNA in a replicon. It is a control element that affects the chromosome on which it lies.
- We say that this element acts in cis since the replicator and the replicon are on the same chromosome. In contrast, the initiator is a protein that can be encoded on any chromosome in a cell.
- Thus it acts in trans, since it does not have to be encoded on the same chromosome as the replicon that it controls.
- In general, a trans-acting factor is an entity, usually a protein, that can diffuse through the cell to act in the regulation of a certain target, whereas a cis-acting DNA sequence is on the same chromosome as the target of control.
- This pattern of a trans-acting protein binding to a cis-acting site on the DNA is also seen in transcriptional control.
- What are DNA Polymerase and its function in DNA Replication
- What is Fidelity of DNA Replication in Normal
- DNA Replication: Simple Steps of DNA replication in E.Coli
- DNA Damage: The Causes of DNA damage
- Mechanism of Eukaryotic DNA Replication
Plasmids and bacteriophages are usually replicated as single replicons, but large plasmids in Gram-negative bacteria have been shown to carry several replicons. |
The Out of Africa Theory is a widely renowned theory describing the origin of the human race and its early dispersal throughout the world. According to this theory, humans have a monogenesis, or a single and common origin: Africa. The concept was first introduced in 1871 by Charles Darwin but was debated for years until further studies of mitochondrial DNA and evidence "based on physical anthropology of archaic specimens" were added. During the 19th century, scientists, archaeologists, and other scholars speculated, studied, and disagreed about the development of humans and our origins.
Some experts supported monogenism, theorizing that humans developed as varieties of a single species. Others argued for polygenism, holding that various human species developed separately or arose as separate species through "transmutation" of apes. It was not until 1871 that one of the first theories was openly proposed. In that year, Charles Darwin published the book "Descent of Man", in which he suggested, based on his studies of the behavior of African apes, that all humans were descended from early humans who lived in Africa.
In his book, Charles Darwin concluded, ”In each great region of the world the living mammals are closely related to the extinct species of the same region. It is, therefore, probable that Africa was formerly inhabited by extinct apes closely allied to the gorilla and chimpanzee; and as these two species are now man’s nearest allies, it is somewhat more probable that our early progenitors lived on the African continent than elsewhere.
But it is useless to speculate on this subject, for an ape nearly as large as a man, namely the Dryopithecus of Lartet, which was closely allied to the anthropomorphous Hylobates, existed in Europe during the Upper Miocene period; and since so remote a period the earth has certainly undergone many great revolutions, and there has been ample time for migration on the largest scale.” Unfortunately, it wasn't until about 50 years later, once a sufficient number of early human fossils had been found in several areas of Africa, that other scholars began to support Charles Darwin's theory.
During the 1980s, three specialists, Allan Wilson, Rebecca Cann, and Mark Stoneking, worked together on another line of evidence that supports Charles Darwin's speculation: the "Mitochondrial Eve" hypothesis. In these studies, the scientists focused solely on mitochondrial DNA, human genes that lie within the cell and are passed from mother to child. These genes mutate quickly, which allows researchers to find and track changes over short time periods.
By focusing on these genes and comparing their differences, the three scientists were able to form a hypothesis about the time and place at which modern humans began to evolve. According to their findings, modern humans descend from a single population, while earlier humans such as Neanderthals and Homo erectus became extinct. Furthermore, the team compared the DNA of numerous people of different ethnic backgrounds and concluded that all humans did indeed evolve from 'one mother' in Africa about 150,000 years ago.
Based on the physical evidence and these theories, scholars have converged on an overall hypothesis: modern humans diverged from archaic Homo sapiens between 200,000 and 150,000 years ago, specifically in Africa; between 125,000 and 60,000 years ago members of Homo sapiens left Africa; and these humans gradually replaced earlier human populations. Most scientists have concluded that East Africa was the specific point of origin of the human race.
There is still speculation and debate about whether there was a single dispersal or several. On the basis of genetic, linguistic, and archaeological findings, the Southern Dispersal theory, which involves several exoduses, has become the most favored, although many researchers are gradually coming to consider Northern Africa the first and original point of departure. Scientists believe the population of early humans was around 2,000 to 5,000 while they remained in Africa, and that only small groups of presumably 150 to 1,000 people migrated out towards the Red Sea.
These few individuals went on to expand and eventually populate the rest of the world. For example, one theory suggests that those who traveled along the southern coastline of Asia ultimately crossed the sea and colonized Australia about 50,000 years ago. Other researchers have proposed a multiple dispersal theory, which states that there were two major migrations out of Africa. According to this view, one group crossed the Red Sea and traveled along the coastline until reaching India.
The other group, meanwhile, migrated north, following the Nile River, and crossed into Asia through the Sinai. From there, its members dispersed in various directions, some heading towards Europe while others went east into Asia. It is unclear, however, whether Homo sapiens migrated to North America 30,000 years ago or later, around 14,000 years ago. From then on, Homo sapiens gradually and continuously migrated and settled on every continent except Antarctica, and their numbers steadily grew as they populated the world.
|
Two ancient cities have been discovered, perfectly preserved, at the bottom of the Nile.
But how do you go about finding a lost city (or two)?
Dr Damian Robinson, director of the Oxford Centre for Maritime Archaeology, is a member of the discovery team. The treasures he and his team found are the subject of a new exhibition, Sunken Cities: Egypt’s Lost Worlds, at the British Museum.
"In 1933 a British RAF pilot flew over the Aboukir Bay and saw what he thought were remains under the water," he tells me. "The pilot reported them to a local prince who sent a diver to investigate but nothing was found."
The Second World War and later the Cold War stopped any further exploration of these ruins until 2000, when archaeologist Franck Goddio of the Institut Européen d'Archéologie Sous-Marine entered the story.
With geo-sensing survey techniques, he measured different properties of the Earth’s surface and made a detailed map of the seabed. “Using a nuclear resonance magnetometer, specially developed by a French energy commission, he was able to measure the earth’s magnetic field and variations in it caused by the local deposit geology.”
The maps chronicled the sunken landscape and its main topographical features. The team began detailed investigations by zeroing in on areas that looked like they had a lot of potential for excavation.
The two cities they rediscovered, the Egyptian city Thonis-Heracleion and the Greek city of Canopus, were built on unstable Nile clays. “They have a lot of water in them,” Robinson says. “The load of people caused the sediment to collapse, pushing the water out and causing people to abandon the city. This, in 800AD, was the first of two dramatic collapses, but the land itself didn’t fully sink until 1000 years later due to rising sea levels.”
As the water was squeezed out of the sediment, the sand settled on the ruins, preserving them perfectly. But it didn’t look like much at first glimpse. “[The recovery was] not as spectacular as you’d think because there’s so much sand,” Robinson recalls. “Visibility was also very poor because of algae that give the water a green tinge. So we can only go digging at specific times – now and in October.”
And how did they date the relics? “Historically and scientifically – for dating the city itself – by pots and artefacts discovered because pottery changes each period and are massively studied. For ships, we did radiocarbon dating.” The team found 69 ships – the largest collection of ancient ships found to date.
The importance of the Egyptian city makes the discoveries all the more exciting. "It was an obligatory entry point for Greek ships when Greece was providing Egypt with mercenaries to help defend it. The two also swapped ideas about religion, equating many of their gods," Robinson says. Thonis-Heracleion (or Heracleion for short), for example, was named after its Greek temple to Hercules.
“The right to rule Egypt was established in this city by pharaohs going to the temple to receive a case that contained a contract or inventory of all the things he had to take care of on behalf of the office.” As the discoveries have been predominantly Egyptian until now, Robinson hopes they’ll find more of a Greek presence there too.
So far though, the haul of religious artefacts is rich and is already broadening our knowledge of the time. “The resurrection of Osiris is a major festival period in Egyptian life. The ritual navigation of Osiris on a boat through the waterways was a major part of this. We discovered a local version of this done at Heracleion. We can tell entire stories just from the objects and the texts we found.”
Stories of everyday life, however, have proven harder to piece together. Robinson tells me that ordinary houses were made of mud brick that have now decomposed. However, there are some glimpses into common rituals: “Athenian coin weights bought by a merchant and used as a thank you offering for the gods for safely bringing him into the port were deposited in the middle of the port.”
Some of the treasures are enormous, with the statue of the god Hapi – the personification of the river Nile – standing at five metres tall, making him and the statues of the king and queen the biggest ever discovered.
Do the discoverers have any favourite findings?
“Franck Goddio likes the black stele [a stone erected as a monument] inscribed with the decree of Saïs because it tells him the name of the city and things like how its taxation works. I like a series of small barges – about 30cm long that have lead models of the gods and are carved. They are ritual barges – replicas of those that floated along in processions. I like it because it talks about the boating traditions of these individuals.”
The horizon or skyline is the apparent line that separates earth from sky, the line that divides all visible directions into two categories: those that intersect the Earth's surface, and those that do not. At many locations, the true horizon is obscured by trees, buildings, mountains, etc., and the resulting intersection of earth and sky is called the visible horizon. When looking at a sea from a shore, the part of the sea closest to the horizon is called the offing. The word horizon derives from the Greek "ὁρίζων κύκλος" horizōn kyklos, "separating circle", from the verb ὁρίζω horizō, "to divide", "to separate", and that from "ὅρος" (oros), "boundary, landmark".
Historically, the distance to the visible horizon has long been vital to survival and successful navigation, especially at sea, because it determined an observer’s maximum range of vision and thus of communication, with all the obvious consequences for safety and the transmission of information that this range implied. This importance lessened with the development of the radio and the telegraph, but even today, when flying an aircraft under visual flight rules, a technique called attitude flying is used to control the aircraft, where the pilot uses the visual relationship between the aircraft’s nose and the horizon to control the aircraft. A pilot can also retain his or her spatial orientation by referring to the horizon.
In many contexts, especially perspective drawing, the curvature of the Earth is disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. For observers near sea level the difference between this geometrical horizon (which assumes a perfectly flat, infinite ground plane) and the true horizon (which assumes a spherical Earth surface) is imperceptible to the naked eye, but for someone on a 1000-meter hill looking out to sea the true horizon will be about a degree below a horizontal line.
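That "about a degree" figure can be checked with the standard spherical-Earth approximations: the distance to the true horizon is roughly sqrt(2Rh) and its dip below the horizontal is roughly sqrt(2h/R) radians, where R is Earth's radius and h is the observer's eye height. The short sketch below is only an illustration of those textbook formulas; it ignores atmospheric refraction, and the radius value is an assumed mean figure rather than something stated in the text.

```python
import math

R_EARTH_M = 6_371_000  # mean Earth radius in metres (assumed value)

def horizon_distance_km(height_m: float) -> float:
    """Approximate distance to the true horizon, ignoring atmospheric refraction."""
    return math.sqrt(2 * R_EARTH_M * height_m) / 1000

def horizon_dip_deg(height_m: float) -> float:
    """Approximate dip of the true horizon below the horizontal, in degrees."""
    return math.degrees(math.sqrt(2 * height_m / R_EARTH_M))

for h in (2, 100, 1000):  # eye height on a beach, a cliff top, a 1000-metre hill
    print(f"h = {h:>4} m: horizon ~ {horizon_distance_km(h):6.1f} km, dip ~ {horizon_dip_deg(h):.2f} deg")
```

For the 1000-metre hill this gives a dip of about 1.0 degree, matching the figure quoted above.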
Comets and Rock Art
Comets in Negev Desert Rock Art
Deciphering rock art from Israel, israelrockart.com.
To an earthbound observer, a comet appears as a large star surrounded by a bright transparent cloud with a long tail that travels through the sky. Comets' appearances in a relatively "known" sky captivated people's minds, and their interpretations of comets have been found in rock art, on coins, and in other artworks. Different cultures describe a comet as a sparkling star, broom star, long sword, spear, a human head with hair, a demon, a burning torch, and even a horse's mane blown by the wind. The comets' visible impact profoundly affected people, who interpreted them as messages from their gods.
Comet Description and Movement
Comets are icy bodies made from frozen gases and dust, resembling a dirty snowball in their outer composition. Their unusually eccentric orbits around the Sun make them infrequent visitors to Earth's skies. From the earliest days until the 16th century, their appearances ignited people's imagination, and comets were thought of as harbingers of doom, bad omens, catastrophes, and deaths.
The Greeks originated the word kometes, which translates to "long-haired star", because of comets' glowing long tails. The Greek philosopher Aristotle described them as "running like a road through the constellations". Comets often have two types of luminous tails: a straight one made of ionized gas (typically bluish) and a curved tail (white to yellowish) made up of tiny particles of dust pushed out by radiation pressure. A comet's tail does not indicate its direction of travel as one might expect: it always points away from the Sun, and at times the comet's motion appears to defy gravity. In other words, the comet's movement as seen from Earth can be either toward the tail or toward its nucleus.
Comets in Negev Desert Rock Art, Israel
The Sumerian expression "like above, so below" expresses the people's desire to create on earth a parallel to the heavens. For the ancients, the stars represented the mighty gods, and the constellations' outlines created earthly images of the men, women and animals living in heaven. The earthly scenes we see in rock art are copies of the stars' outlines, or of the constellations.
Numerous rock art engravings show us how people interpreted comets' appearances in the sky. These repeated scenes display a horse rider holding a very long spear and fighting an invisible enemy (see Fig. 5). The "spear" has an odd shape, with a bulky nucleus that ends in a long tail that gets thinner. This is not a spear! The scenes in Fig. 5 illustrate the difference between a comet and a spear abstraction. In scene 1, the curved spear signifies the comet's movement as seen from earth along the curve of the sky. Notice the comet's bulky nucleus on the right and its flight direction as indicated by the tail; the same holds for scene 2. For comparison, scene 3 shows a rock art engraving of a horse and rider with a real spear; notice the spear's sharp edge.
Horse and rider as a comet in rock art
In ancient times a comet would be identified as a star with tails. Fig. 6 illustrates a creative example of a comet abstraction with multiple tails. This scene shows a rider holding a long object with a bulky, not sharp, end and a long, thin tail. Multiple tails, which extend from the bulky end, are drawn as diagonal dots running through the horse's tail. On the right side of the horse are more engraved tails that are shorter and less developed, resembling a broom, which is a known abstraction of a comet. The horse's feet, portrayed as wheels, represent the artist's imagination about the comet's ability to travel through the sky, much like the Roman Sun chariots.
The horse and rider form a metaphorical comet that gallops through the sky. The rider throws the spear toward the horse's tail, which indicates the direction the comet is moving in, as attested by his turned head and feet.
The Demon Comet
The Jewish Maccabean Revolt of 164 BC (Horowitz 2018) coincided with an appearance of Halley's Comet, which was brighter and larger than Venus. Records show that Halley's Comet returned in 66 AD, just months before the outbreak of the Jewish revolt against Rome (66-73 AD). Josephus, a first-century Romano-Jewish historian, described it: "And so it was that a star resembling a sword stood over the city (Jerusalem); a comet persisted for a very long time".
Fig.7, rock art from the Negev Desert, depicts a horned figure holding a spear. This is a classical illustration of a horned demon thought by the ancients to be responsible for the spread of illnesses and the deeds of evil spirits. The curved spear is a copy of a comet’s shape in the sky, tracing the comet’s movement along the earth’s curvature. The moon can be seen to the left of the horse, proving that the scene is a view of the sky.
In this rock art piece, the astral phenomenon of a comet is explained by earthly symbols, which made this scene believable. The horse with the wheel hoofs adapts this scene to that of a comet transiting through the sky similar to the Roman Sun Chariot.
Coimbra, F. The sky on the Rock: Cometary images on Rock Art.
Gardner, S. 2016. The sun, moon, and stars of the southern Levant at Gezer.
Horowitz, W. 2018. Halley's Comet and Judean Revolts Revisited.
Aksoy, O. A combat Archeology viewpoint on weapon representation in Arabia Rock Art.
More deciphering, in a new book Rock Art in Israel, available online.
Polarization describes the path along which light’s electric field vector oscillates. An essential quality of electromagnetic radiation, polarization is often omitted in its mathematical treatment. Nevertheless, polarization and its measurement are of interest in almost every area of science, as well as in imaging technology.
Imaging the polarization of light scattered from an object gives an additional opportunity to extract more information from a scene. Conventional polarimeters, however, can be bulky and usually rely on precisely moving parts.
In a new study, scientists at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have devised a highly compact, portable camera that images polarized light in a single shot.
The camera comes without any conventional polarization optics or moving parts. Working on the principle of polarization, it offers detailed information about the objects with which light interacts.
Paul Chevalier, a postdoctoral fellow at SEAS and co-author of the study, said, “Polarisation is a feature of light that is changed upon reflection off a surface. Based on that change, polarisation can help us in the 3D reconstruction of an object, to estimate its depth, texture, and shape, and to distinguish human-made objects from natural ones, even if they’re the same shape and color.”
Scientists primarily harnessed the potential of metasurfaces, nanoscale structures that interact with light at wavelength size-scales. Then by understanding how polarized light interacts with objects, they designed a metasurface that uses an array of subwavelength spaced nanopillars to direct light based on its polarization.
The light then forms four images, each one showing a different aspect of the polarization. Taken together, these give a full snapshot of polarization at every pixel.
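As a rough illustration of what a four-image polarization snapshot can encode, the sketch below reconstructs the linear Stokes parameters, the degree of linear polarization and the angle of linear polarization from four intensity images. The 0/45/90/135-degree analyzer basis and the toy input values are assumptions made for this example; the article does not specify the exact polarization channels the metasurface camera sorts light into.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Reconstruct linear Stokes parameters from four polarizer-filtered intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                     # total intensity
    s1 = i0 - i90                                          # horizontal vs vertical component
    s2 = i45 - i135                                        # +45 vs -45 degree component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarization (radians)
    return s0, s1, s2, dolp, aolp

# Toy example: a 2x2 patch of fully horizontally polarized light of unit intensity.
i0, i45, i90, i135 = (np.full((2, 2), v) for v in (1.0, 0.5, 0.0, 0.5))
s0, s1, s2, dolp, aolp = linear_stokes(i0, i45, i90, i135)
print(dolp[0, 0], np.degrees(aolp[0, 0]))  # -> 1.0 0.0
```

Each pixel of the four metasurface-formed images plays the role of one of these intensity samples, which is how a single exposure can yield polarization information at every pixel.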
The researchers tested the camera to show defects in injection-molded plastic objects, took it outside to film the polarization off car windshields and even took selfies to demonstrate how a polarization camera can visualize the 3D contours of a face.
Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS, said, “This research is game-changing for imaging. Most cameras can typically only detect the intensity and color of light but can’t see polarization. This camera is a new eye on reality, allowing us to reveal how light is reflected and transmitted by the world around us.”
“This research opens an exciting new direction for camera technology with unprecedented compactness, allowing us to envision applications in atmospheric science, remote sensing, facial recognition, machine vision, and more.”
Noah Rubin, first author of the paper and graduate student in the Capasso Lab said, “This technology could be integrated into existing imaging systems, such as the one in your cell phone or car, enabling the widespread adoption of polarization imaging and new applications previously unforeseen.”
The research is published in Science.
Other co-authors of the study include Gabriele D’Aversa, Zhujun Shi, and Wei Ting Chen. It was supported by the National Science Foundation, the Air Force Office of Scientific Research, a Physical Sciences & Engineering Accelerator grant from Harvard University’s Office of Technology Development, Google Accelerated Science and King Abdullah University of Science and Technology. This work was performed in part at Harvard’s Center for Nanoscale Systems.
Electrical recordings from the auditory cortex of 15 people identified brain cells that specifically respond when we listen to singing
22 February 2022
Humans may have neurons whose main job is to process singing. Scientists have previously found neurons that are selective for speech and music, suggesting that our brains have specific cells that handle different types of sounds we hear.
Sam Norman-Haignere at the University of Rochester, New York, and his colleagues recorded brain electrical activity from 15 people while they listened to 165 different sounds. These included music, speech, animal calls and the sound of a flushing toilet.
The participants already had electrodes implanted into their heads, as they were in hospital for epilepsy treatment, which enabled the researchers to get more precise data compared with functional magnetic resonance imaging (fMRI) scans.
With these recordings, the researchers discovered a population of neurons that seemed to respond nearly exclusively to singing, although they also had a very small response to speech and instrumental music.
“This work suggests there’s a distinction in the brain between instrumental music and vocal music,” says Norman-Haignere, although the researchers didn’t test whether the neurons also responded to spoken word or rap music.
They overlaid these results with fMRI data from 30 other people who listened to the same sounds so that they could map the neurons to a specific region of the brain. The “singing” neurons were located roughly between the music and speech-selective areas of the auditory cortex.
The researchers don’t know why we would have such neurons. “It could have been due to some evolutionary role,” says Norman-Haignere. “Many people think that singing has some important role in the evolution of music.”
“But it’s also totally possible that it’s all driven by exposure,” he says. “People spend a huge amount of time listening to music.” The team is confident that these neurons aren’t driven by musical training and that we all probably have them.
“To be able to distinguish the musical properties of sounds is fundamental for survival,” says Jörg Fachner at Anglia Ruskin University in Cambridge, UK. “It makes sense that this dispositional ability is wired into our auditory cortex.”
“It may also explain why singing a beloved song to a person with dementia may allow responses [even though] the neurodegenerative process has limited the functionality of brain areas,” he says. “This result, along with other neuroimaging-related results of musical memory, may help to explain why songs may help dementia patients.”
Journal reference: Current Biology, DOI: 10.1016/j.cub.2022.01.069
Image of the child: Children are capable, active participants in their own learning experiences. They have rights, and are valued, respected, contributing members of our society.
Children and relationships: Children do not learn in isolation, but rather through interconnected, respectful and reciprocal relationships with adults and peers in their community.
Parents as partners: Parents are partners in their child’s learning, working with teachers and making connections with their children. They are welcomed into the school and families are valued for what they bring.
Teachers as investigators: Teachers are also viewed as partners in a child's learning experience. Teachers respect a child's pace, acknowledge and support individual learning styles, and make knowledgeable observations regarding a child's inquiries, interactions and skills. They facilitate challenges and experiential queries by asking pertinent questions that help a child understand how to develop their own learning. Teachers serve as collaborative investigators, helping to guide a child's questions, observations, discoveries and connections.
Environment: The classroom environment is mindfully prepared. Natural objects, sensory rich materials, and open-ended “provocations” engage deep curiosity and invite play and interaction.
Emergent: Teachers work with children’s interests and ideas to develop an arc of learning through interconnected subjects. Listening closely to children’s observations and conversations helps teachers understand what concepts to explore in greater detail, in choosing relevant materials to introduce, and in creating opportunities for further understanding.
Documentation: Children use many “languages” through which they express their ideas, thoughts and observations. The recording and thoughtful displaying of dialogue, art, questions and sensory processes that children and teachers engage with, helps represent (and guide further) the thoughts and cognitive experiences of the students: with themselves, to other children, and to parents and teachers.
Community: The many facets of a specific place and space influence and inform learning. No two schools will be exactly alike, given that human, environmental, structural, economic and cultural differences make each place unique. Children are viewed as integral members of a community, not just of their families or of the school, but of a larger, global community. Schools engage with the community to help children see themselves as part of the world, to strive to understand their impacts, and to make sense of the resources around them.
Will Mercury be pulled into the sun?
Mercury, like the other planets, is in a stable orbit around the Sun. A planet's orbit is a geodesic through curved spacetime, so Mercury is unlikely to fall into the Sun. In about 6 billion years' time, the Sun will run out of hydrogen fuel in its core.
How long does it take for Mercury to get around the sun?
Mercury spins slowly compared to Earth, so one day lasts a long time. Mercury takes 59 Earth days to make one full rotation. But a year on Mercury goes fast. Because it’s the closest planet to the sun, it goes around the Sun in just 88 Earth days.
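The 88-day figure is tied directly to Mercury's distance from the Sun. Kepler's third law, which the answer above does not mention, makes the connection explicit: for anything orbiting the Sun, the period in Earth years equals the orbital semi-major axis in astronomical units raised to the power 1.5. A minimal sketch, using standard approximate AU values that are not taken from the text:

```python
def orbital_period_days(semi_major_axis_au: float) -> float:
    """Kepler's third law for bodies orbiting the Sun: T[years] = a[AU] ** 1.5."""
    return (semi_major_axis_au ** 1.5) * 365.25

# Approximate semi-major axes in astronomical units (assumed textbook values).
for name, a in [("Mercury", 0.387), ("Venus", 0.723), ("Earth", 1.0)]:
    print(f"{name}: ~{orbital_period_days(a):.0f} days")
# Mercury: ~88 days, Venus: ~225 days, Earth: ~365 days
```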
Do asteroids ever hit the sun?
No. Asteroids ORBIT the sun. … The sun is the only object in the solar system that NOTHING ELSE in the solar system could ever impact. It’s not possible for anything to impact the sun, because that’s not how orbits work, and there is nothing in the solar system that does not orbit the sun.
Are we getting closer to the moon?
The Moon will swing ever closer to Earth until it reaches a point 11,470 miles (18,470 kilometers) above our planet, a point termed the Roche limit. “Reaching the Roche limit means that the gravity holding it [the Moon] together is weaker than the tidal forces acting to pull it apart,” Willson said.
How long is 1 year in space?
Why is that considered a year? Well, 365 days is about how long it takes for Earth to orbit all the way around the Sun one time. It’s not exactly this simple though. An Earth year is actually about 365 days, plus approximately 6 hours.
What is the hottest planet?
Venus is the exception, as its proximity to the Sun and dense atmosphere make it our solar system’s hottest planet. The average temperatures of planets in our solar system are: Mercury – 800°F (430°C) during the day, -290°F (-180°C) at night.
Would the Earth survive without the moon?
Without the moon, a day on earth would only last six to twelve hours. There could be more than a thousand days in one year! That’s because the Earth’s rotation slows down over time thanks to the gravitational force — or pull of the moon — and without it, days would go by in a blink.
What keeps the Earth spinning?
“The Earth keeps spinning because it was born spinning,” Luhman said. Different planets have different rates of rotation. Mercury, closest to the sun, is slowed by the sun’s gravity, Luhman noted, making but a single rotation in the time it takes the Earth to rotate 58 times.
Can a comet crash into Earth?
A PLANET-busting comet could crash into the Earth and wipe out humanity with less than six months’ warning, a scientist has warned. … “Potentially we could have as little as six months notice of a comet that is on its first orbit into the inner solar system on an Earth-crossing trajectory.”
Is the American flag still on the moon?
Images taken by a Nasa spacecraft show that the American flags planted in the Moon’s soil by Apollo astronauts are mostly still standing. The photos from the Lunar Reconnaissance Orbiter (LRO) show the flags are still casting shadows – except the one planted during the Apollo 11 mission.
How long is a year in mercury?
About 88 Earth days (Mercury's orbital period).
Can humans live on the moon?
Colonization of the Moon is the proposed establishment of a permanent human community or robotic industries on the Moon, the closest astronomical body to Earth. … Because of its proximity to Earth, the Moon is seen by many as the best and most obvious location for the first permanent human space colony.
What would happen if Mercury collided with the sun?
Mercury’s path around the Sun is already nearly as elliptical as Pluto’s. … At that point, the simulations predict Mercury will suffer generally one of four fates: it crashes into the Sun, gets ejected from the solar system, it crashes into Venus, or — worst of all — crashes into Earth.
What prevents planets from being pulled into the sun?
The planets all formed from this spinning disk-shaped cloud, and continued this rotating course around the Sun after they were formed. The gravity of the Sun keeps the planets in their orbits. They stay in their orbits because there is no other force in the Solar System which can stop them.
What holds the sun in place?
Gravity. Its gravity holds the solar system together, keeping everything – from the biggest planets to the smallest particles of debris – in its orbit. The connection and interactions between the Sun and Earth drive the seasons, ocean currents, weather, climate, radiation belts and auroras.
What happens if a comet hits earth?
While the impact of the comet would be pretty destructive, the brunt of the damage would come from the gases it released in Earth’s atmosphere. … “An event like this would likely cause the planet’s climate to change drastically, leading to mass extinctions around the globe.”
What happens if an asteroid hits the ocean?
When an asteroid hits the ocean, it’s more likely to produce storm-surge-sized waves than giant walls of watery death.
What would happen if we nuked the moon?
The moon, however, is essentially a vacuum. It has some gases hanging around on its surface, but it really doesn’t have an atmosphere like Earth’s. Without the weight of a dense atmosphere, there would be no resistance to the expansion of the nuclear-produced dust and debris.
Humans have used nanoparticles since antiquity. Stone age workers, artists during the renaissance, and ancient metallurgists have all used nanoparticles either for decorations or to enhance the properties of materials.
Today, nanoparticles play an important role in many engineering and medical applications. Metal nanoparticles of gold, silver, zinc, platinum and other metals have been used for coloring and strengthening materials and as catalysts in chemical reactions. Palladium is one such material commonly used to make metal nanoparticles. Palladium nanoparticles (Pd NPs) are increasingly being used as catalysts in many oxidation and reduction reactions.
In a new study, scientists from Dibrugarh University, Dibrugarh and University of Aveiro, Portugal, with assistance from the Department of Biotechnology, India have developed a new biogenic method of synthesizing palladium nanoparticles. Using leaf extracts and starch, the scientists were able to synthesize highly dispersed Pd NPs.
The scientists collected fresh green leaves of Garcinia pedunculata, better known as Bar Thekera in Assamese, from the Dergaon area in Assam. The leaves were chopped and turned into an aqueous extract which served as a bio-reductant, reducing palladium acetate into palladium nanoparticles, while starch served as a bio-stabilizer, keeping the reaction stable. Once synthesized, the solution was examined with a spectrophotometer and a transmission electron microscope to verify the presence of the Pd NPs.
After synthesizing the particles, the scientists performed chemical reactions using the Pd NPs as a catalyst. They tested the effect of the nanoparticles in three reactions: the Suzuki-Miyaura coupling reaction, the selective oxidation of alcohols to their corresponding carbonyl compounds, and the reduction of toxic chromium into a non-toxic form. The newly synthesized Pd nanoparticles proved to be an effective catalyst in all three reactions. The scientists also found another remarkable property of the Pd NPs: the particles exhibited anti-microbial and anti-biofilm properties. When exposed to a newly discovered multidrug-resistant bacterial strain, Cronobacter sakazakii, the Pd NPs were found to prevent the growth of the bacteria, suggesting uses in medical applications as well. The newly developed method can be considered a 'green way' to synthesize palladium nanoparticles.
About 15 percent of people with gallstones will develop stones in the common bile duct. Obstruction of the common bile duct may also lead to obstruction of the pancreatic duct because these ducts are usually connected.
Gallstones that form in the gallbladder are the most common cause for blocked bile ducts. Additionally, bile duct stones can develop anywhere in the biliary tract where there is bile: within the liver, gallbladder and common bile duct. Gallstones and bile duct stones are usually comprised of cholesterol or bile salts — common components of bile — that have hardened into a stone. These stones can cause sudden pain when the cystic duct in the gallbladder or the common bile duct leading from the liver is blocked.
Gallstones can be minuscule in size or as large as a ping-pong ball. You may have one stone or develop many of them. Not all gallstones or bile duct stones cause symptoms. Some are discovered incidentally during imaging studies performed for other reasons.
The most common symptom is upper abdominal pain on the right side of the body, where the liver and gallbladder are situated. The pain may start suddenly and be intense. Or it may be a slow, dull pain or occur intermittently. The pain may shift from the abdominal area to the upper back or shoulder.
Prolonged blockage of a bile duct can cause a buildup of waste products in the biliary tract and in the bloodstream, leading to an infection called cholangitis. It also can prevent the release of bile into the small intestine to help digest food or cause a serious bacterial infection in the liver called ascending cholangitis.
A blocked bile duct may result in inflammation of the gallbladder, called cholecystitis. A gallstone or bile stone in the common bile duct may block the pancreatic duct, causing painful inflammation of the pancreas or pancreatitis.
If a stone completely blocks the ducts of the gallbladder, liver, common bile duct or pancreas, other symptoms may include:
- Yellow skin or eyes (from the buildup of bilirubin, a waste product in blood)
- Loss of appetite
- Greasy or light-colored stools
Patients who develop gallstones are at a slightly increased risk of developing cancers of the gallbladder and bile ducts (bile duct cancer is known as cholangiocarcinoma). However, these are rare diseases and most people with gallstones do not go on to develop cancer.
Your gastroenterologist may suspect that you have gallstones or blockage of a bile duct based on your symptoms and results of a blood test showing high levels of bilirubin. Bilirubin is a waste product in blood caused from the normal breakdown of red blood cells.
Your gastroenterologist can diagnose and treat gallstones and bile duct stones at the same time with minimally invasive endoscopic technology. Common diagnostic tests and procedures for confirming the presence of stones include:
In addition to a bilirubin test, your blood may be tested for the presence of elevated white blood cells used by the body to fight infection, and for abnormal levels of pancreatic and liver enzymes.
This non-invasive procedure uses sound waves rather than x-rays to produce images that can reveal gallstones and bile duct stones within the common bile duct. An ultrasound probe is passed over the abdomen and images are sent to a computer monitor. Abdominal ultrasound is commonly used in pregnant women.
ABDOMINAL CT SCAN
A CT scan of the abdomen also can identify stones within the biliary tract and is a noninvasive procedure. During a CT scan, images are shown on a computer monitor.
Endoscopic retrograde cholangiopancreatography, or ERCP, is a specialized endoscopic technique used to study the ducts of the gallbladder, pancreas and liver, and has the added benefit of being a therapeutic tool. ERCP has been used for more than 30 years. It is considered the standard modality for diagnosing and treating disorders of the biliary tract.
During this procedure, and after first receiving a mild sedative and an anesthetic to numb the throat, an endoscope containing a miniature camera is passed down your esophagus and into the biliary tract. When your gastroenterologist sees the biliary and pancreatic ducts, he or she then passes a catheter (a narrow plastic tube) containing a contrast dye through the endoscope. The dye is injected into the pancreatic and biliary ducts and X-rays are taken that are viewed on a computer monitor. The procedure takes 60 to 90 minutes and is performed in the Endoscopy Suite within Virginia Mason’s Section of Gastroenterology and Hepatology.
Your gastroenterologist can treat a bile duct disorder at the same time it is being diagnosed by passing miniaturized instruments through the ERCP. Special preparations are required for this endoscopic procedure.
ERCP WITH ENDOSCOPIC ULTRASOUND
Increasingly, gastroenterologists at Virginia Mason are using endoscopic ultrasound (EUS) in place of x-rays for better viewing of the bile and pancreatic ducts. During this procedure, an ultrasound probe is passed through the ERCP, which sends images to a computer monitor. Gastroenterologists can then treat disorders of the bile duct, including removal of gallstones and bile duct stones, with miniaturized instruments passed through the ERCP.
Magnetic resonance cholangiopancreatography is newer technology being employed at Virginia Mason. This noninvasive diagnostic procedure is performed using MRI technology that uses magnets and radio waves to produce computer images of the bile ducts. A contrast dye is injected first through the skin near the gallbladder to enhance the images. Patients are not required to undergo endoscopy preparation and they do not undergo sedation. MRCP is being used primarily in patients who may have failed or who are not good candidates for ERCP, in those who do not want to undergo an endoscopic procedure, and in individuals considered to be at low risk of having a pancreatic or bile duct disorder. While ERCP allows for therapeutic options with cholangioscopy, MRCP is a diagnostic tool only.
Virginia Mason also is involved in national clinical trials to determine the accuracy of MRCP in diagnosing disorders of the biliary tract.
Gallstones and bile duct stones may be treated first with antibiotics to help control infection. They also can be treated at the time of diagnosis with miniaturized surgical instruments inserted through an ERCP. Alternatively, stones may be treated with medications that dissolve them, with lithotripsy that uses sound waves to break them up, or with surgery to remove the gallbladder.
ENDOSCOPIC TECHNIQUES
When a stone has been identified on x-ray, ultrasound or MRI imaging as blocking a bile or pancreatic duct, it can be removed with miniaturized instruments inserted through the ERCP. These surgical instruments gently enlarge the ductal opening that then allows the stone to be removed.
Medications can be given that dissolve gallstones but they are not always effective and are not indicated in all cases. The most common medication is a bile salt (ursodiol) that slowly dissolves cholesterol within the stones. However, the stones can return when the medication is discontinued.
EXTRACORPOREAL SHOCK WAVE LITHOTRIPSY
This treatment employs high-frequency sound waves to break up gallstones. Patients then take bile salt tablets, sometimes indefinitely, to dissolve the pieces and to ensure that the stones do not return. Only a minority of patients are candidates for this type of treatment, however. The best candidates have a single small stone. If an infection (cholangitis) or inflammation (cholecystitis) of the gallbladder is present, lithotripsy is not an option. Extracorporeal (meaning outside of the body) shock wave lithotripsy is performed by directing pulsating, high-intensity sound waves at the area where the stone is located, identified first by ultrasound. The procedure takes about 45 minutes and patients are usually lightly sedated before treatment.
Surgery to remove the gallbladder, called cholecystectomy, is a common procedure in the United States for individuals with symptoms caused by gallstones. Virginia Mason was one of the first medical centers in the country to remove the gallbladder by the minimally invasive laparoscopic approach, called laparoscopic cholecystectomy.
This minimally invasive surgery for removing the gallbladder is one of the most common procedures performed at Virginia Mason and is, in fact, the preferred approach today for removal of the gallbladder. In cases in which a gallstone or bile stone has blocked a bile duct – a situation that can lead to infection or inflammation of organs within the biliary tract – surgeons will likely recommend removal of the gallbladder.
During laparoscopy, the surgeon makes several ¼ to ½ inch incisions in the abdomen. He or she then inserts miniaturized endoscopic and surgical instruments, and a small camera, through these “ports.” Images from the camera are sent to a video monitor that allows the surgeon to “deflate” and then remove the gallbladder through one of the ports. Individuals return to their regular activities often within a few days.
Sometimes the surgeon must revert to an open surgical procedure during a scheduled laparoscopy to remove the gallbladder. These occurrences happen infrequently and are most often caused when the gallbladder is found to be infected or when the gallbladder lining is hardened, making it more difficult for the organ to be removed laparoscopically.
At other times, the surgeon may make the decision that the open surgical procedure is the best option for the patient based on the severity of the individual’s gallbladder disease. Open surgery involves making a large incision in the abdomen and removing the gallbladder. Recovery time is longer, five to seven days in the hospital, and there is a longer return to daily activities: two to three weeks, for example.
Sides and vertices of a polygon
Definition: each endpoint of a side of a polygon (the plural is vertices)
Sentence: You can have sides and vertices in a polygon.
Definition: curving or bulging outward
Sentence: A polygon is convex if no line that contains a side of the polygon contains a point in the interior of the polygon.
Definition: having all sides or faces equal
Sentence: If all of a polygon's sides are congruent, it is called equilateral.
Definition: having all angles equal
Sentence: A polygon is equiangular if all its interior angles are congruent.
Definition: Equilateral and equiangular
Sentence: If a polygon is regular it is equilateral and equiangular.
Definition: A straight line connecting any two vertices of a polygon that are not adjacent
Sentence: Diagonals are segments drawn inside a polygon between non-adjacent vertices.
Definition: A quadrilateral with two pairs of parallel sides
Sentence: In addition to quadrilaterals, we will learn about parallelograms.
Definition: a parallelogram with 4 congruent sides
Sentence: Rhombuses are shapes in geometry.
Definition: A parallelogram with four right angles
Sentence: A rectangle's opposite sides are congruent.
Definition: A quadrilateral with 4 sides that are equal and has all 90 degree angles
Sentence: A square has four congruent sides.
Definition: a quadrilateral with exactly one pair of parallel sides
Sentence: A trapezoid always has a parallel pair.
Definition: a trapezoid with congruent legs
Sentence: An isosceles trapezoid has two congruent sides.
midsegment of a trapezoid
Definition: A segment whose endpoints are the midpoints of the non-parallel sides of the trapezoid
Sentence: A trapezoid can have a midsegment.
Protection against Ebola, one of the world’s deadliest viruses, can be achieved by a vaccine produced in insect cells, raising prospects for developing an effective vaccine for humans, say scientists at the Southwest Foundation for Biomedical Research (SFBR) in San Antonio.
“The findings are significant in that the vaccine is not only extremely safe and effective, but it is also produced by a method already established in the pharmaceutical industry,” says SFBR’s Ricardo Carrion, Ph.D., one of the primary authors of the study. “The ability to produce the vaccine efficiently is attractive in that production can be scaled up quickly in the case of an emergency and doses can be produced economically.”
The new study was published in the January 2009 issue of the journal Virology, and was supported by the National Institutes of Health. Jean Patterson, Ph.D., also of SFBR, participated in the research.
Ebola viruses, which cause severe bleeding and death in up to 90 percent of patients, have no effective treatment or vaccine. Since the virus's first identification in Africa in 1976, Ebola outbreaks have caused some 1,800 human infections and 1,300 deaths. Outbreaks have become increasingly frequent in recent years and are likely caused by contact with infected animals followed by spread through close person-to-person contact. Ebola viruses cause acute infection in humans, usually within four to 10 days. Symptoms include headache, chills and muscle pain, followed by weight loss, delirium, shock, massive bleeding and organ failure leading to death in two to three weeks.
Ebola viruses are considered a dangerous threat to public health because of their high fatality rate, ability to transmit person-to-person, and low lethal infectious dose. Moreover, their potential to be developed into biological weapons causes grave concern for their use as a bioterrorism agent. While some vaccines show protection in non-human primate studies, the strategies used may not be uniformly effective in the general human population due to pre-existing immunity to the virus-based vaccines.
In the new study, a vaccine using Ebola virus-like particles (VLPs) was produced in insect cells using traditional bio-engineering techniques and injected into laboratory mice. A VLP vaccine is based upon proteins produced in the laboratory that assemble into a particle that, to the human immune system, looks like the virus but cannot cause disease.
Two high-dose VLP immunizations produced a high-level immune response in mice, and when the twice-immunized mice were given a lethal dose of Ebola virus, they were completely protected from the disease. In contrast, mice that were not immunized had a very low immune response and became infected. In another experiment, three low-dose VLP immunizations effectively boosted the immune response in mice and protected them against the Ebola virus. This finding is important because it demonstrates that the vaccine remains immunogenic even in dilute quantities, so many more vaccine doses can be generated than with a poorly immunogenic vaccine.
VLPs are attractive candidates for vaccine development because they lack viral genomic material and thus are not infectious, are safe for broad application, and can be administered repeatedly to vaccinated individuals to boost immune responses.
The findings will be validated in additional animal systems. The vaccine will then undergo FDA safety and efficacy testing prior to use in humans in potentially five years.
Collaborators on the study included Richard Compans, Ph.D., and Chinglai Yang, Ph.D., of the Emory University School of Medicine in Atlanta.
Toothpaste has a history that stretches back nearly 4,000 years. Until the mid-nineteenth century, abrasives used to clean teeth did not resemble modern toothpastes. People were primarily concerned with cleaning stains from their teeth and used harsh, sometimes toxic ingredients to meet that goal. Ancient Egyptians used a mixture of green lead, verdigris (the green crust that forms on certain metals like copper or brass when exposed to salt water or air), and incense. Ground fish bones were used by the early Chinese.
In the Middle Ages, fine sand and pumice were the primary ingredients in teeth-cleaning formulas used by Arabs. Arabs realized that using such harsh abrasives harmed the enamel of the teeth. Concurrently, however, Europeans used strong acids to lift stains. In western cultures, similarly corrosive mixtures were widely used until the twentieth century. Table salt was also used to clean teeth.
In 1850, Dr. Washington Wentworth Sheffield, a dental surgeon and chemist, invented the first toothpaste. He was 23 years old and lived in New London, Connecticut. Dr. Sheffield had been using his invention, which he called Creme Dentifrice, in his private practice. The positive response of his patients encouraged him to market the paste. He constructed a laboratory to improve his invention and a small factory to manufacture it.
Modern toothpaste was invented to aid in the removal of foreign particles and food substances, as well as to clean the teeth. When originally marketed to consumers, toothpaste was packaged in jars. Chalk was commonly used as the abrasive in the early part of the twentieth century.
Sheffield Labs claims it was the first company to put toothpaste in tubes. Washington Wentworth Sheffield's son, Lucius, studied in Paris, France, in the late nineteenth century. Lucius noticed the collapsible metal tubes being used for paints. He thought putting the jar-packaged dentifrice in these tubes would be a good idea. Needless to say, it was adopted for toothpaste, as well as other pharmaceutical uses. The Colgate-Palmolive Company also asserts that it sold the first toothpaste in a collapsible tube in 1896. The product was called Colgate Ribbon Dental Creme. In 1934, in the United States, toothpaste standards were developed by the American Dental Association's Council on Dental Therapeutics. They rated products on the following scale: Accepted, Unaccepted, or Provisionally Accepted.
The next big milestone in toothpaste development happened in the mid-twentieth century (1940-60, depending on source). After studies proving fluoride aided in protection from tooth decay, many toothpastes were reformulated to include sodium fluoride. Fluoride's effectiveness was not universally accepted. Some consumers wanted fluoride-free toothpaste, as well as artificial sweetener-free toothpaste. The most commonly used artificial sweetener is saccharin. The amount of saccharin used in toothpaste is minuscule. Companies like Tom's of Maine responded to this demand by manufacturing both fluoridated and non-fluoridated toothpastes, and toothpastes without artificial sweetening.
Many of the innovations in toothpaste after the fluoride breakthrough involved the addition of ingredients with "special" abilities to toothpastes and toothpaste packaging. In the 1980s, tartar control became the buzzword in the dentifrice industry. Tartar control toothpastes claimed they could control tartar build-up around teeth. In the 1990s, toothpaste for sensitive teeth was introduced. Bicarbonate of soda and other ingredients were also added in the 1990s with claims of aiding in tartar removal and promoting healthy gums. Some of these benefits have been widely debated and have not been officially corroborated.
Packaging toothpaste in pumps and stand-up tubes was introduced during the 1980s and marketed as a neater alternative to the collapsible tube. In 1984, the Colgate pump was introduced nationally, and in the 1990s, stand-up tubes spread throughout the industry, though the collapsible tubes are still available.
Every toothpaste contains the following ingredients: binders, abrasives, sudsers, humectants, flavors (unique additives), sweeteners, fluorides, tooth whiteners, a preservative, and water. Binders thicken toothpastes. They prevent separation of the solid and liquid components, especially during storage. They also affect the speed and volume of foam production, the rate of flavor release and product dispersal, the appearance of the toothpaste ribbon on the toothbrush, and the rinsibility from the toothbrush. Some binders are karaya gum, bentonite, sodium alginate, methylcellulose, carrageenan, and magnesium aluminum silicate.
Abrasives scrub the outside of the teeth to get rid of plaque and loosen particles on teeth. Abrasives also contribute to the degree of opacity of the paste or gel. Abrasives may affect the paste's consistency, cost, and taste. Some abrasives are harsher than others.
The most commonly used abrasives are hydrated silica (softened silica), calcium carbonate (also known as chalk), and sodium bicarbonate (baking soda). Other abrasives include dibasic calcium phosphate, calcium sulfate, tricalcium phosphate, sodium metaphosphate, and hydrated alumina. Each abrasive also has slightly different cleaning properties, and a combination of them might be used in the final product.
Sudsers, also known as foaming agents, are surfactants. They lower the surface tension of water so that bubbles are formed. Multiple bubbles together make foam. Sudsers help in removing particles from teeth. Sudsers are usually a combination of an organic alcohol or a fatty acid with an alkali metal. Common sudsers are sodium lauryl sulfate, sodium lauryl sulfoacetate, dioctyl sodium sulfosuccinate, sulfolaurate, sodium lauryl sarcosinate, sodium stearyl fumarate, and sodium stearyl lactate.
Humectants retain water to maintain the paste consistency of toothpaste. Humectants keep the solid and liquid phases of toothpaste together. They can also add a coolness and/or sweetness to the toothpaste; this makes toothpaste feel pleasant in the mouth when used. Most toothpastes use sorbitol or glycerin as humectants. Propylene glycol can also be used as a humectant.
Toothpastes have flavors to make them more palatable. Mint is the most common flavor used because it imparts a feeling of freshness. This feeling of freshness is the result of long term conditioning by the toothpaste industry. The American public associates mint with freshness. There may be a basis for this in fact; mint flavors contain oils that volatize in the mouth's warm environment. This volatizing action imparts a cooling sensation in the mouth. The most common toothpaste flavors are spearmint, peppermint, wintergreen, and cinnamon. Some of the more exotic toothpaste flavors include bourbon, rye, anise, clove, caraway, coriander, eucalyptus, nutmeg, and thyme.
In addition to flavors, toothpastes contain sweeteners to make them pleasant to the palate. The most commonly used humectants (sorbitol and glycerin) have a sweetness level of only about 60% of that of table sugar, so an artificial sweetener is needed to make the toothpaste palatable. Saccharin is the most common sweetener used, though some toothpastes contain ammoniated glycyrrhizin and/or aspartame.
Fluorides reduce decay by increasing the strength of teeth. Sodium fluoride is the most commonly used fluoride. Sodium perborate is used as a tooth-whitening ingredient. Most toothpastes contain the preservative p-hydroxybenzoate. Water is also used for dilution purposes.
Each batch of ingredients is tested for quality as it is brought into the factory. The testing lab also checks samples of final product.
— Annette Petrusso
- What groups were discriminated in sporting competitions?
- When did the discrimination get ruled out/ fixed?
- How much money do sportsmen get compared to sportswoman and has the gap changed?
- Why/what do people stereotype?
- Were there any sports where certain genders weren't allowed to do the exact same activities as the opposite gender?
- Are there certain ‘groups’ that are formed in sport and who is accepted into them?
- Have stereotypes been used more frequently than when they were first used? Has there always been discrimination in sport?
- When and where was there a significant change in women playing sport?
- Has racism increased or decreased over time?
While researching, we found out that it was only from the 2012 Games that women competed in every single sport and from every country at the Olympics. Also, women were only allowed to do mild sports like dance and gymnastics until the early 1800s. Also, recently the number
Questions for history teachers
1. How do we know if sites such as the ABC and government websites are keeping their integrity and not exaggerating through articles and stats?
2. How do we know when we find a trustworthy site?
3. What are the best sources to gather information?
4. What kind of society would have made different genders and races think that they were not accepted in a sport?
5. When we make a timeline when researching significant events that have happened in the past, why is it necessary to sequence them in order? Is it just easier to read?
This workshop is a short, hands-on robotics course that introduces children to real-life robotics and programming using Arduino and the SwissCHEESE board. Students will learn about basic electronics, mechanical design concepts, and robotics theories while using real-world hardware components. Honing their skills through numerous exercises, students will eventually construct a line-following rover at the end of the class.
The full-day version of this workshop runs for approximately six hours. The half-day version is significantly more condensed and fast-paced, but takes half the time – as a half-day Pi workshop will likely be running alongside it.
Regardless of which version they participate in, during this workshop students will:
- Learn about input and output on an Arduino board
- Learn how to use the CarduBlock EDU to program loops, if/else statements, variables, and comparative statements
- Use the serial monitor to read inputs
- Use buttons, LED lights, DC motors, and infrared sensors in conjunction with the Arduino to make it perform tasks
- Learn how to use tools and given materials to construct a rover
- Combine everything they’ve learned to create a rover capable of following a line around a track
Plant hormones, a.k.a. Plant Growth Regulators (PGRs)
A hormone is an endogenous compound (produced within the organism) that is usually synthesized in one part of the body and transported to another part, where it exerts a physiological effect at low concentrations.
Unlike in animals, each plant cell has the capacity to produce different hormones.
What are PGR’s??
PGR’ s are organic compounds which are naturally synthesized by plants in very small concentrations.
These can also be synthesized artificially!
They promote, inhibit or modify any physiological process of a plant.
Plant hormones are also referred to as phytohormones and are called 'signaling molecules'.
These hormones are produced in plants in extremely low concentrations.
These hormones control all the aspects of growth and development in plants. Ex: Plant defense mechanism, stress tolerance etc
Basically these plant hormones are not nutrients but chemicals!!
These plant hormones promote the plant growth, development and differentiation of cells and tissues
These hormones are usually synthesized in one part of the plant, and their effects differ among plant tissues.
As mentioned above, there are different types of PGRs: growth promoters, growth inhibitors and modifiers.
There are five major classes of PGRs: auxins, gibberellins, cytokinins, ethylene and abscisic acid.
Auxins, gibberellins and cytokinins act mainly as growth promoters, while ethylene and abscisic acid act mainly as growth inhibitors.
- Auxin helps in cell elongation of both roots and shoots.
- In plants, auxin (IAA) is synthesized in growing tips.
How auxin plays a role in the plant system
- Kills weeds (acts as a selective herbicide) when applied at higher doses (example: 2,4-D)
- Used in tissue culture (IAA)
- Induces rooting in plants where cuttings are used instead of seeds (example: IBA)
- Increases the number of female flowers and inhibits male flower production in monoecious vegetables
- Induction of parthenocarpy (seedlessness) in fruits (IAA)
- Induces flowering and enhances fruit set
- Controls premature falling of fruits from plants (pre-harvest fruit drop) and increases yield
2. Gibberellins
Gibberellins are synthesized in young tissues of the shoot and also in the developing seed.
Role of gibberellins
- Substitutes for the long-day requirement of plants and helps long-day plants to flower in short days
- Sex expression – promotes maleness at higher concentrations (1500 ppm)
3. Cytokinins
Cytokinins (such as kinetin) get their name from their ability to promote cytokinesis (cell division).
Cytokinin is generally found in higher concentrations in growing tissues such as roots, embryos, and fruits, where cell division is occurring.
Role of cytokinin in plants
- Stimulate cell division and cell enlargement
- Delay leaf senescence (Aging)
- Break dormancy of seeds and buds
- Flower induction (in short-day plants)
Application of Cytokinins
- Prevention of premature senescence
- Used in Tissue culture
4. Ethylene
Ethylene is called the ripening hormone.
It is used in various fruit-ripening processes.
Role of Ethylene
- Induces early flowering
- Induces formation of abscission layer
- Causes inhibition of root growth
- Stimulates the formation of adventitious roots
- Stimulates fading of flowers
5. Abscisic acid (ABA)
This hormone helps plants adapt to stress conditions. For example, release of this hormone in the plant system helps close the stomata, the pores through which water is lost; when the stomata are closed, water loss is prevented or reduced, so the plant is able to adapt and grow under water-stress conditions.
- ABA also helps in seed and bud dormancy (dormancy is a state in which the plant is alive but there is no active growth).
Role of ABA
- Accelerates leaf and fruit abscission and senescence
- Induces bud and seed dormancy due to accumulation of ABA
- Stomatal regulation – moisture-stressed plants produce ABA, which facilitates stomatal closure and helps in maintaining turgidity.
Application – it is used as an anti-transpirant, i.e. to reduce the rate of transpiration from plants.
Here comes the list of growth retardants
How to apply these PGRs
The many methods employed are:
- Spraying method,
- Injection of solutions into internal tissues,
- Root feeding method,
- Application of powder mixture to the bases of cutting,
- Dipping of the cuttings in PGR solution,
- Soaking in dilute aqueous solution.
- For cucurbit plants, a foliar spray is applied at the 2-4 leaf stage.
Thus, PGRs enable a better understanding of what can best be done to ensure that plant growth and yield are optimised.
Sushma completed her B. Sc. from College of Horticulture, Hiriyur and M. Sc. from College of Horticulture, Bengaluru. She is an avid gardener with expertise across soil based and soilless gardening techniques using substrates. |
A reinforcer is a stimulus that follows some behavior and increases the probability that the behavior will occur. For example, when a dog's owner is trying to teach the dog to sit on command, the owner may give the dog a treat every time the dog sits when commanded to do so. The treat reinforces the desired behavior.
In operant conditioning (as developed by B. F. Skinner), positive reinforcers are rewards that strengthen a conditioned response after it has occurred, such as feeding a hungry pigeon after it has pecked a key. Negative reinforcers are stimuli that are removed when the desired response has been obtained. For example, when a rat is receiving an electric shock and presses a bar that stops the shock, the shock is a negative reinforcer— it is an aversive stimulus that reinforces the bar-pressing behavior. The application of negative reinforcement may be divided into two types: escape and avoidance conditioning. In escape conditioning, the subject learns to escape an unpleasant or aversive stimulus (a dog jumps over a barrier to escape electric shock). In avoidance conditioning, the subject is presented with a warning stimulus, such as a buzzer, just before the aversive stimulus occurs and learns to act on it in order to avoid the stimulus altogether.
Punishment can be used to decrease unwanted behaviors. Punishment is the application of an aversive stimulus in reaction to a particular behavior. For children, a punishment could be the removal of television privileges when they disobey their parents or teacher. The removal of the privileges follows the undesired behavior and decreases its likelihood of occurring again.
Reinforcement may be administered according to various schedules. A particular behavior may be reinforced every time it occurs, which is referred to as continuous reinforcement. In many cases, however, behaviors are reinforced only some of the time, which is termed partial or intermittent reinforcement. Reinforcement may also be based on the number of responses or scheduled at particular time intervals. In addition, it may be delivered regularly or irregularly. These variables combine to produce four basic types of partial reinforcement. In fixed-ratio (FR) schedules, reinforcement is provided following a set number of responses (a factory worker is paid for every garment he assembles). With variable-ratio (VR) schedules, reinforcement is provided after a variable number of responses (a slot machine pays off after varying numbers of attempts). Fixed-interval (FI) schedules provide for reinforcement of the first response made after a given interval has elapsed since the previous reinforcement (contest entrants are not eligible for a prize if they have won one within the past 30 days). Finally, with variable-interval (VI) schedules, first responses are rewarded at varying intervals from the previous one.
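Because each schedule is essentially a small decision rule, a short simulation can make the differences concrete. The following C++ sketch is an illustrative toy model rather than anything drawn from the text: the ratio of 5, the 10-second interval, and the assumption that the subject responds once per second are arbitrary choices for demonstration.

```cpp
// Illustrative simulation of the four partial-reinforcement schedules (FR, VR, FI, VI).
// The ratio and interval values below are arbitrary examples, not values from the text.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> ratioDist(1, 9);            // variable ratio, mean ~5
    std::uniform_real_distribution<double> intervalDist(2.0, 18.0); // variable interval, mean ~10 s

    const int fixedRatio = 5;          // FR-5: every 5th response is reinforced
    const double fixedInterval = 10.0; // FI-10: first response after 10 s is reinforced

    int frCount = 0, vrCount = 0, vrTarget = ratioDist(rng);
    double fiLast = 0.0, viLast = 0.0, viWait = intervalDist(rng);

    // Pretend the subject responds once per second for 60 seconds.
    for (int t = 1; t <= 60; ++t) {
        // Fixed ratio: count responses and reinforce every Nth one.
        if (++frCount == fixedRatio) { printf("t=%2d  FR reinforcer\n", t); frCount = 0; }

        // Variable ratio: like FR, but the required count changes after each reinforcer.
        if (++vrCount >= vrTarget) { printf("t=%2d  VR reinforcer\n", t); vrCount = 0; vrTarget = ratioDist(rng); }

        // Fixed interval: reinforce the first response after a set time has elapsed.
        if (t - fiLast >= fixedInterval) { printf("t=%2d  FI reinforcer\n", t); fiLast = t; }

        // Variable interval: same idea, but the required wait varies each time.
        if (t - viLast >= viWait) { printf("t=%2d  VI reinforcer\n", t); viLast = t; viWait = intervalDist(rng); }
    }
    return 0;
}
```

Run for one simulated minute, the ratio schedules deliver reinforcement as fast as responses accumulate, while the interval schedules cap how often a reinforcer can be earned regardless of how quickly the subject responds.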
See also Behavior modification |
Norsemen, or Northmen, a name given to Scandinavians of ancient and medieval times, especially the late eighth through mid-eleventh century. Norsemen called Vikings were feared raiders who plundered much of Europe. (The origin of the term Viking is unclear. It is believed to come from the Old Norse vik, meaning a fjord-type inlet, or vig, meaning battle.) The Norsemen were also traders and colonists, and were the first Europeans to visit North America.
[Image caption: A typical Viking village.]
Vikings began raiding and plundering the coasts of England, Ireland, and western Europe in the late eighth century when the population in Scandinavia had grown so much that the area's resources could no longer support it. They attacked unprotected cities and towns, taking what they could carry and destroying what was left.
During the mid-ninth century, the Norsemen began military expeditions to conquer and colonize foreign lands and, in some cases, to open trade routes. By the early eleventh century, Norsemen had colonized parts of Europe and North America, and had established trade routes extending to the Byzantine Empire.
The first Norse raid was against an English monastery in 793. In the mid-ninth century, Norsemen, predominantly Norwegians and Danes, began colonizing the Orkney and Hebrides Islands, the east coast of Ireland, and the west coast of Scotland and England. In 994 the Danes began an invasion of England. In 1016 Knut, the heir to the Danish throne, became king of England. [Knut II.]
Meanwhile, Norsemen raided northwestern France and plundered much of the rest of the country. They also went on raiding expeditions to Spain and North Africa. In 911 Charles the Simple, the Frankish king, obtained some relief from the raids by creating the duchy of Normandy for the Norsemen. They gradually converted to Christianity and adopted local customs and the French language.
In the late ninth century, Norsemen began sailing across the Atlantic. They colonized Iceland in 874, and expanded to Greenland in the 980's. In about 1000 they became the first Europeans to discover North America, landing in a territory they called Vinland (Newfoundland). A settlement was made but soon abandoned.
Varangians, Norsemen that are believed to have been predominantly Swedish, expanded to the east. In the eighth century, they sailed across the Baltic Sea and established a state called Rusland in territory that is now part of Russia. (It is believed that the name Russia is derived from Rusland.) By the end of the ninth century, they had established a trading center at Novgorod and had made Kiev the capital of Rusland.
During the ninth and tenth centuries, the Varangians sailed down the Dnieper River and across the Black Sea, and down the Volga River and across the Caspian Sea, establishing trade routes to the Byzantine Empire. They acquired silk, silver, and gold in return for timber, furs, and slaves.
By the middle of the eleventh century, Norse expansion had ended. Colonization in Greenland declined. The Irish expelled the Norsemen in 1014. In 1042 the Saxons regained the English throne. The formation of professional armies in Europe made raiding more dangerous and less profitable.
Norsemen frequently intermarried with the local population and adopted the languages and customs of the people that they conquered. Thus, little evidence of Norse influence in cultures outside of Scandinavia and Iceland remains. |
"Few children learn to love books by themselves. Someone has to lure them into the wonderful world of the written word; someone has to show them the way."
Orville Prescott, A Father Reads to His Children
The New Haven school system has recently proclaimed that trade books are the primary medium for teaching children how to read. This approach will prove challenging to even the best classroom teacher. Teachers will no longer be able to depend on the textbook and workbook approach to reading; we will have to be more innovative and creative. In addition to reading as part of the daily curriculum, students will be expected to do supplemental reading on their own. Group activities will include small group discussions of books read, along with supplemental exercises to explore characters, settings, historical perspectives, etc. What better way to bring about this reading "revolution" than by setting a positive example every day of the school year—by reading aloud to the class? Old-fashioned teaching methods associated with reading and reciting lost out to the invention of the ditto machine and more recently to photocopy machines. Educational researchers are now blaming the use of written busy work for the general lack of interest, even boredom, children feel toward reading. The challenge for teachers today is to keep their students interested in reading purely for the sake of enjoyment, because reading is fundamental to acquiring other forms of news and information.
"If we could get our parents to read to their preschool children fifteen minutes a day, we could revolutionize the schools."
Dr. Ruth Love, Superintendent, Chicago Public Schools (1981)
Even though our students are older, we can still stress to them the importance of reading aloud. As teachers we can invite the parents to our classrooms for reading activities, for storytelling, for sharing life experiences. Our students need to be drawn beyond their own boundaries, and to feel important and positive about who they are. Reading is a key vehicle through which to accomplish this. Children are members of a society that needs to know and understand them. Reading and talking about the characters in various books and stories gives them a basis of comparison for themselves and for people they know. Life's lessons found in reading go beyond relating to characters in stories; they include introducing the reader to situations and conflicts that occur in all walks of life. Reading provides children with strategies for how to cope with and resolve these problematic situations. Through reading we can establish that there is a sameness of human needs that young people share. What better way to do this than by class discussions about the books we are reading together?
I have prepared a unit of eight books to be read aloud to fifth-grade students. Over the past few years, we have taken part in the city wide "Read Aloud Day" but I have usually found it a chore to complete the book that is so generously donated to our class library. Last year I discovered R. L. Stine's "Goosebumps" and "Fear Street" series of books. My class and I were delighted to be so entertained; every chapter ended with us wanting to continue to see what would happen next. Sometimes we would read the first paragraph of the next chapter because we just couldn't wait; students would actually sneak a peek if I left the book lying on my desk. These books are not written to a specific gender, race, or age. They are both silly and scary, just plain entertaining and fun.
I believe that I have found other books that the students will enjoy as much as they do the R.L. Stine series. I want to expose them to other authors who have written books that they can compare to the ones they so enjoy. We will read these books together and form a means of comparison, a literature appreciation course so to speak. The beauty of reading aloud is that everyone can participate; the activities can be done both as a class and in small groups. In addition to the topics for discussion at the end of each summary, the class will talk about how the books are alike and how they are different. All of the books are supposed to be an adventure, which (according to Webster) is:
- 1. Hazard; risk; chance.
- 2. An enterprise of hazard; a bold undertaking in which hazards are to be encountered, and the issue is staked upon unforeseen events.
- 3. A remarkable occurrence in one's personal history; a striking event; as, the adventures of one's life.
I am sure that each book fits the broad definition of adventure and will easily lend itself to an in-depth discussion during and after the reading. As the teacher I will have a general plan for discussion of each book, but I will be seeking to create an atmosphere of spontaneity and curiosity as we move into questions and answers concerning each text. The children's insights and perspective on each book are what is most important, and my role will be to facilitate their sense of discovery. Remember that this is to be enjoyable for both student and teacher!
- Scott: a boy who is bored with the lack of adventure in his life. He's yearning for a good story to tell at school.
- Glen: Scott's best friend.
- Kelly: Scott's older sister who often bears the ill effects of Mac's reign of terror.
Topics for discussion during or after the reading of this book:
- 1. Scott and Glen are the very best of friends. Tell about your best friend and an adventure you had when you were really glad you had each other to depend on.
- 2. Do you have an older brother or sister that you consider to be a pain in the neck?
- 3. Role-play one of the situations that the boys got in trouble for; pretend that you are trying to explain to your parents what happened.
- 4. The way the book ends leaves it wide open for the author to write a sequel. Let's brainstorm and see if we can write just one more chapter.
- Aremis Slake: the main character in the story, a story about his survival in the city of New York.
- The Pink Cleaning Lady: one of Slake's newspaper customers, who befriends him and gives him some used clothes.
- The Manager: a man who runs a coffee shop in the subway station.
- The Waitress: a kind woman who works in a subway restaurant.
- Willy Joe Whinny: a subway motorman whose route goes by Slake's hiding place.
Topics for discussion while, or after, reading this book are listed as follows:
This book was also used as the basis for the ABC Kidsworks video called Runaway. Viewing the video after reading the book will provide your class with the opportunity to compare characters as well as the story line. The book jacket depicts Slake as a white boy with blond hair and blue eyes but the video depicts him as a young black boy.
- 1. You can't hide from life; you have to face your problems and find solutions.
- 2. What is meant by the bird in Slake's chest? Can you think of other metaphors?
- 3. Think about the people Slake makes friends with: how are they the same and how are they different?
- 4. Why is Willy Joe Whinny important? Could he have been left out of the book?
- 5. Consider the symbols used in the book; write a paragraph about rose-colored glasses, the bird, or the rat.
- Nancy Drew: an eighteen-year-old super sleuth.
- Hannah: housekeeper for the Drews since Nancy was born.
- Daryl Gray: a sexy high school senior with an instant attraction to Nancy.
- Walt "Hunk" Hogan: the tough football captain who's acting strangely paranoid.
- Carla Dalton: a student who hates Nancy on sight.
- Hal Morgan: the class brain who catches Nancy cheating on a test.
- Ned: Nancy's long time boyfriend.
Topics for discussion during, or after, the reading of this book:
- 1. What are the best and worst parts of Nancy Drew's job as a detective?
- 2. Was this story believable? Is this the way that crimes are solved?
- 3. Compare this story to another detective story that you have either read or seen on television.
- 4. Write a letter to Nancy Drew describing a case that you would like her to solve for you. Be sure to include all the details that she will need to know (who, what, when, where, and why).
- 5. Draw a poster advertising a detective agency of your own.
- 6. Why did Nancy cheat on the test?
- Matthew Carlton: the club president.
- Katie Carlton: the pesky younger sister.
- Quentin, Hooter, and Tony: the other boys in the club.
Topics for discussion during, or after, reading this book:
- 1. Choose one of the people that the children met up with when they went back to the days of the revolution and look them up in the encyclopedia. Write a short report about them.
- 2. Have you ever formed a club with some of your friends? What did you do together?
- 3. Why did they call the army "Washington's ragtag band of rebels?" Do you think you could come to school tomorrow dressed as one of Washington's men? Let's look back in the book to find some descriptions.
- 4. What souvenir did the group have from their adventure? When you go somewhere on a school trip or a vacation do you like to bring back something to remember the occasion with? Maybe you can bring it to school tomorrow to show to the class!
- Jeffrey Lionel (Maniac) Magee: a very human, caring boy who crosses boundaries that many people don't dare to cross.
- Amanda Beale: a young black girl who befriends Maniac when he comes to Two Mills.
- Mars Bar Thompson: the first black kid to challenge Maniac.
- John McNab: a white kid who taunts Maniac about his views on race.
- Russell and Piper McNab: John's little brothers who idolize Maniac.
- Grayson: a former baseball player who works at the zoo.
- Mrs. Beale: Amanda's mother, who gives Maniac a home
Topics for discussion during, or after, reading the book:
Jerry Spinelli has written a number of books for children; many are available through the Scholastic book clubs.
- 1. Is Maniac a homeless person or a runaway? What is the difference?
- 2. Maniac doesn't say much about himself; how would you describe him?
- 3. If everyone in the world were the same color, would the author have been able to write this book?
- 4. What do you think is the most important lesson in this book?
- 5. Let's write a sequel to this book; what characters should we keep and what happens to them?
- Cracker Jackson: the eleven-year-old hero.
- Goat: his best friend.
- Alma: his former baby-sitter who is in trouble.
- Billy Ray: Alma's husband.
Topics for discussion during, or after, the reading of the book:
- 1. Cracker and Goat do some things that are against the law while they are trying to help Alma. Is this behavior excusable because, in their minds, they are doing something good?
- 2. What is Cracker's real name and how did he get his nickname? A discussion of personal experiences with nicknames can follow.
- 3. Have you ever known someone who was in trouble but you didn't know what to do to help them? Did you get any ideas from reading this book as to what you might have done?
- Jamal Hicks: A seventh grader who is torn between being in a gang and being the good child his mother believes him to be.
- Tito: Jamal's best friend.
- Sassy: Jamal's younger sister who is always minding his business and threatening to tell their mother everything she knows.
- Mama: a hardworking woman trying to do her best for her children (Randy, Jamal, and Sassy), but not fully aware of what is going on.
- Abuela: Tito's grandmother with whom he lives.
- Mack: a member of a gang called the Scorpions.
Topics for discussion during, or after, reading the book:
- 1. Choose a scene from the book and act it out for the class the way that it happened in the book. Then act it out the way that you wish that it had happened.
- 2. Pretend that Mama and Abuela are having a telephone conversation about Jamal and Tito. Write a short skit with the dialogue between them.
- 3. Jamal never tells anyone except Tito what is going on in his life. Is this a good way to handle his problems? Why or why not? What would you do differently?
- 4. Write a letter to Mama telling her about Jamal's feelings.
- Milo: a very bored young man until he embarks on his adventure.
- Tock: an imaginary talking dog who is Milo's partner throughout the story.
Topics for discussion during, or after, the reading of this book:
- 1. In the end, Milo realizes that he has been dreaming. Do you ever recall your dreams when you wake up?
- 2. Make a list of at least ten different things that you can do when you feel bored.
- 3. Choose one of the places that Milo visits and draw a picture of it. Don't label it, as we will display the pictures and see if the class can identify where it is and what happened there.
- 4. Think about the role sounds play in the story. Make a list of ten good sounds and ten bad sounds.
- 5. The king gives Milo a box of all the words that he knows. How many words do you think are in there? I am going to set the timer for five minutes and I would like you to write as many words as you can in that time.
Introducing these books to a group of children and having lofty goals for their interest in and their analysis of the plots, characters, and settings is a little unnerving. The challenge lies in sustaining the attention of the group. I hope to be creative and use a variety of approaches. Having taught for close to twenty years, one of my theories is that successful teaching is often a variation of entertaining the group. Children have a great imagination, a sense of humor, and lots of curiosity. These characteristics, along with the fine books selected, lead me to believe the read-aloud unit and the group activities will be successful.
Students who enjoy reading horror and suspense books by R.L. Stine might also enjoy books by the following authors:
Richie Tankersley Cusick
Joan Lowery Nixon
This was taken from a Wallingford, CT Public Library booklist for Teens 6/95.
Butler, Francelia, and Richard Rotert. Triumphs of the Spirit in Children's Literature. Library Professional Publications, 1986. PN1009/A1/T76/1986
Landsberg, Michelle. Reading For the Love of It. New York, 1987. Z1037/L313X/1987/(LC)
Miller-Lachmann, Lyn. Our Family, Our Friends, Our Worlds. New Jersey, 1992. Z1037/M654X/1992/(LC)
Rosenberg, Judith K. Young People's Books in Series. Englewood, Colorado, 1992.
Trelease, Jim. The Read-Aloud Handbook. Penguin Books, 1995.
The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA). It was designed to land humans on the Moon and bring them safely back to Earth, the goal President Kennedy set when he announced the program, and six of its missions (Apollos 11, 12, 14, 15, 16, and 17) achieved this between 1969 and 1972; Apollo 17 was the final mission of the program. The program, one of the most exceptional and costly projects ever undertaken by the United States in peacetime, has been called by some observers a defining event of the twentieth century. On December 24th, 1968, the three-man crew of Apollo 8 gazed at the Earth from their spacecraft as it orbited the Moon, and analysis of the lunar surface and of the samples returned by the later landing missions yielded much of the Moon's geological history. Whether the program's cost produced a worthwhile return, and what that implies for the future of space exploration, remains debated. After Apollo, there was the Skylab space station program, which cost $22 billion in then-year money ($10 billion in 2010 dollars) during its nine-year existence (1966–1974); considering that three three-man crews spent a total of 510 person-days onboard Skylab, this means that each day spent by a crewman cost $55 million. |
Grades PreK - 2
Grade level Equivalent: Not Available
Lexile® Measure: IG850L
Guided Reading: N
About This Book
From hurricanes and tornadoes to blizzards and sandstorms, the world is filled with amazing weather, and we can see it changing every day. This fascinating book introduces young readers to nature's wildest storms and weirdest weather patterns.
Kids will learn that many different types of weather can occur around the planet at the same time. They'll also learn that hailstones can be as large as soccer balls, that tornado winds can reach 300 miles per hour, and other fascinating facts. Beautiful, full-color photographs help kids understand weather events they've never experienced firsthand. |
A group of disorders caused by cell degeneration, frontotemporal dementia (FTD) affects the brain, specifically its areas associated with personality, behavior and language. Once considered a rare disease, FTD may account for 20-50% of dementia cases in people younger than age 65, according to the Alzheimer’s Association.
FTD causes cell damage that shrinks the brain’s frontal (area behind the forehead) and temporal (area behind the ears) lobes. The disease generally starts with personality and behavior changes and may eventually lead to severe memory loss.
Often miscategorized as a psychiatric illness, frontotemporal dementia typically strikes between the ages of 45 and 65. However, the Association for Frontotemporal Degeneration (AFTD) indicates that cases have occurred as early as age 21 and as late as age 80.
What Causes Frontotemporal Dementia?
Although it has been linked to a variety of gene mutations, the cause of FTD remains unknown. Physicians may use multiple tests to identify characteristics of FTD and rule out other possible conditions, such as liver or kidney disease. Standard testing may involve blood work, MRI, CT scan, PET scan and neuropsychological testing.
Signs and Symptoms of Frontotemporal Dementia
Each case of FTD is different, but the illness generally becomes more distinguishable from other brain conditions as it progresses. Symptoms may occur in clusters, and some may be more prevalent in early or later stages. Here is a list of ten signs of FTD:
- Poor judgment
- Loss of empathy
- Socially inappropriate behavior
- Lack of inhibition
- Repetitive compulsive behavior
- Inability to concentrate or plan
- Frequent, abrupt mood changes
- Speech difficulties
- Problems with balance or movement
- Memory loss
What is the Difference Between FTD and Alzheimer’s?
Like Alzheimer’s disease, FTD causes brain atrophy that leads to a progressive loss of brain function. Key differences between the two diseases include:
- Age at diagnosis: Symptoms of FTD usually appear between the ages of 45 and 65, whereas the majority of Alzheimer’s cases occur in people over age 65.
- From behavior changes to memory loss: Changes in behavior are an early sign of FTD, and problems with memory may occur in advanced stages. In contrast, Alzheimer’s affects memory early on and may lead to behavior issues as it progresses.
- Speech problems: People with FTD often suffer greater problems speaking, understanding speech and reading than people with Alzheimer’s.
Unfortunately, FTD has no cure. Current FTD treatments focus on easing symptoms but cannot slow the disease’s progress. Physicians may prescribe antidepressant or antipsychotic drugs to combat behavioral symptoms. Patients suffering from language issues may benefit from speech therapy.
The average survival time after an FTD diagnosis is six to eight years. In the final stages, patients typically require 24-hour care.
Long-Term Care for FTD
Experts recommend that caregivers prepare for long-term care management for their loved one with FTD. Medical specialists, nursing care, and legal and financial advisors should all be under consideration.
The Association for Frontotemporal Degeneration provides a Support and Resources page to help guide you through a new diagnosis. This page also provides a place for sharing stories with other families as a means of helping each other cope and gaining insight on this disease.
What particular signs of FTD did your loved one show? What resources did you seek for help with managing his or her condition?
- What is Frontal Lobe Dementia?
- What is Alzheimer’s Disease?
- 3 Things to Know About Alzheimer’s Power of Attorney |
All children who build sandcastles on the beach know that in addition to sand you also need to add a little water to prevent the structure from collapsing. But why is this? In an article which appeared today in Scientific Reports from the publishers of Nature, researchers from the University of Amsterdam’s (UvA) Institute of Physics (IoP) answer this question.
The function of water in sandcastles is to form small 'bridges' which make the grains of sand stick together, thus increasing the solidity of the structure. The researchers show that the optimum amount of water is very small (only a few per cent). If this optimum concentration is used, sandcastles reaching five metres in height can be built.
The research team led by Daniel Bonn, Professor of Complex Fluids at the UvA, tested the theory successfully on cylindrical 'laboratory' sandcastles. They also show that with specially treated water-repellent sand, even an underwater sandcastle can be built (see photo). The water that serves as glue when building normal sandcastles is substituted with air bubbles when building the underwater versions.
The results are of practical importance to the civil engineering and soil mechanics sectors, the fields of science that deal with the stability of soil structures.
The research was conducted by the IoP-UvA’s Soft Matter research group, who are examining the special properties of soft materials such as polymers, emulsions, or granular materials (sand).
Paleontology and geology
The Precambrian: Precambrian rocks occur only in Delaware’s Piedmont region at the northernmost end of the state. These metamorphosed rocks preserve the history of an approximately one-billion-year-old mountain-building event called the Grenville Orogeny.
The Paleozoic: The Paleozoic rocks of Delaware are metamorphic and so do not contain fossils. Metamorphism took place during a series of tectonic events that built the Appalachian Mountains.
The Mesozoic: There are no Jurassic or Triassic rocks in Delaware. The Cretaceous sediments record a change from terrestrial environments in the Early Cretaceous to shallow marine environments in the Late Cretaceous. Plant remains are common in the terrestrial sediments, while the marine sediments contain fossils of invertebrates as well as those of marine reptiles, dinosaurs, and pterosaurs.
The Cenozoic: A sea covered Delaware during most of the Cenozoic, and sea level fluctuated throughout this time interval. Major rises and falls of the sea during the later part of the Cenozoic left a record of alternating sandy nearshore and muddy offshore environments and, ultimately, bay environments. A diverse fossil record includes the remains of ancient relatives of horses, rhinoceroses, porpoises, whales, seals, manatees, bats, beavers, dogs, birds, snakes, fish, snails, and molluscs. Most of the state is covered by a veneer of sandy Quaternary sediments deposited by rivers formed from melting glaciers. Fossils are rare in these deposits. |
In 1568 the artist and art historian Giorgio Vasari wrote in his Lives of the Artists that the Venetians did not make preliminary drawings for their paintings. The recent use of infrared reflectography to study Venetian paintings has forced a revision of this view. Infrared reflectograms, which allow us to peer beneath the paint surface, reveal that Venetian painters often drew directly on the canvas instead of making numerous studies on paper.
The infrared images show that Giorgione and Titian used fluid brushstrokes to indicate the placement and shape of the figures and their settings. Such drawings provided only a guideline; x-rays of paintings, called x-radiographs, expose underlying paint layers which demonstrate that the creative process continued in the course of painting. Artists experimented with different poses and compositions, adding and eliminating details, such as the exotic headdress in Giorgione's Three Philosophers, which was painted out in the final composition. |