- Using PVAAS for a Purpose
- Key Concepts
- Concept of Growth
- Growth Measures and Standard Errors
- Growth Standard Methodology
- Predictive Methodology
- Topics in Value-Added Modeling
- Public Reports
- Additional Resources
- General Help
Concept of Growth
Value-added reports provide reliable measures of the academic progress a group of students has made, on average, in a tested grade and subject or course. These measures are different from measures of student achievement. Achievement measures, such as test scores or the percentage of students who tested proficient or above, indicate where students performed academically at a single point in time. Growth Measures indicate how much progress the students have made, as a group, over time.
To understand what growth means in the PVAAS reports, it's helpful to imagine a child's physical growth plotted on a chart. Every year, during the child's wellness check, the pediatrician will measure the child's height. Each measurement is then plotted on a chart to show the child's growth over time.
Typically, the child's growth curve doesn't follow a smooth line. Instead, there are "dimples" and "bubbles" along the way. These variations might occur because the child had a growth spurt or because there was error in measuring the height. Perhaps the child did not stand up straight for last year's measurement. Although the growth curve isn't smooth, we can see the child's progress over time. The child's height at a single point in time would not be meaningful to the doctor, but seeing the child's growth over time gives the doctor important information about the child's health and ongoing development.
A growth chart used by a pediatrician has more information than just that one child's growth. It also has curves that show average, or typical, growth for children at all heights. The pediatrician compares the child's growth to these curves to determine whether the child is making appropriate growth. The pediatrician would not be alarmed if a child's current height is at the 10th percentile if the child has been relatively short historically. On the other hand, if a child was average in height at a younger age but has not made expected growth over the past few years, then a current height at the 10th percentile might be cause for concern.
We can think about measuring students' academic growth in a similar way. Although PVAAS does not measure growth for individual students, this analogy can be helpful when thinking about growth measures for LEAs/districts, schools, and teachers. When students are tested at the end of each grade or course, we can plot the scores for the group of students served, much as a pediatrician plots a child's physical growth. Like the pediatrician's graph, the curve we get from plotting a group of students' average achievement level each year will likely exhibit a pattern of dimples and bubbles.
If we discover a dimple effect occurring in fourth-grade math in a school, then the dimple is evidence that the instructional program for fourth-grade math might need to be examined. Likewise, if we see a dimple effect for the group of students for whom a teacher had instructional responsibility, the dimple is evidence that the teacher might need to adjust the instructional practices to better meet students' academic needs. Comparing the growth of this teacher's students to a standard expectation of growth is helpful in determining whether the students' progress has been sufficient.
Let's consider another analogy: measuring the progress or growth of two track relay teams, team A and team B. Each team includes runners whose individual times contribute to the overall speed of the team. The two teams have performed very differently in past races.
- Team A has been lower performing, typically only beating about a third of the teams in the league.
- Team B has been higher performing, typically beating three-fourths of the teams in the league.
Given his team's historically lower performance, Team A's coach wants his team to grow and improve. But Team B's coach wants to see growth for her team, too. Even though they have been high performing, the coach still wants each runner to continue making strong progress. Both coaches need to measure their team's improvement, or growth, in a meaningful way.
To start, each coach needs a solid measure of the team's performance at the beginning of the year. Later, the coach could then compare the team's performance level from the beginning of the season to the end of the season to see how much they've grown.
A data-savvy coach would avoid relying on a single race to determine the team's overall performance level at the beginning of the year. Imagine how inaccurate that might be. If the team has an unusually good day and each runner scores a personal best, the team would appear to be higher achieving than they actually are. Likewise, if one runner stumbles and costs the relay team a lot of time, the team would underperform that day. Instead of using a single race time, the coach would consider all the data from multiple races and practices to get a good sense of how the team is performing at the beginning of the year. This assessment of the team's performance would provide a solid starting point for measuring the team's growth and improvement.
Both coaches would reasonably expect their teams to run faster after a year of practice and training. But faster race times aren't necessarily enough to ensure that the team moves up in the standings. To demonstrate that kind of growth, each team needs to show improvement compared to others. If Team A improves their race times but goes from beating a third of the teams in the league to beating only a fifth of the teams, they have not improved as much as the other teams have. Likewise, if Team B improves their already fast race times but goes from beating three-fourths of the teams to only beating two-thirds of them, then the coach should be concerned that her team did not show enough growth and improvement that year.
Despite their teams' very different levels of performance, the key question in both coaches' minds is whether their team's performance in the league improved, dropped, or stayed about the same. Both coaches have the same expectation and the same goal. They expect their team to at least maintain their standing in the league, and they both have a goal of helping the team improve and perform at a higher level compared to their peers.
The approach these coaches use to measure their teams' growth is similar to the way PVAAS determines the academic growth of students served by LEAs/districts, schools, and teachers. PVAAS growth measures provide a reliable comparison of the achievement level of each group of students from one year to the next. Just as the two track teams in our analogy had different levels of performance, the students in different LEAs/districts, schools, and classrooms across the state are at different academic achievement levels.
Despite these differences, all educators want to help their students grow and improve. To determine whether students are making enough growth, we need reliable growth measures based on as much data as we can include.
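To make the comparison idea concrete, here is a deliberately simplified sketch. It is not the actual PVAAS value-added model, which relies on far more sophisticated statistical methods and reports standard errors alongside each growth measure; it simply compares a group's standing relative to an assumed statewide score distribution in two consecutive years. All of the numbers are invented for illustration.

```python
from statistics import NormalDist, mean

# Hypothetical statewide score distributions (mean, standard deviation) for each year.
statewide = {2022: (1000, 100), 2023: (1020, 100)}

# Hypothetical scores for the same group of students in two consecutive years.
group_scores = {
    2022: [930, 980, 1010, 890, 950, 1000, 970],
    2023: [960, 1005, 1040, 915, 985, 1030, 995],
}

def group_percentile(year):
    """Percentile of the group's mean score within that year's statewide distribution."""
    state_mean, state_sd = statewide[year]
    return 100 * NormalDist(mu=state_mean, sigma=state_sd).cdf(mean(group_scores[year]))

p_before, p_after = group_percentile(2022), group_percentile(2023)
print(f"Group standing in 2022: percentile {p_before:.1f}")
print(f"Group standing in 2023: percentile {p_after:.1f}")
print(f"Change in standing:     {p_after - p_before:+.1f} percentile points")
```

As in the relay-team analogy, the question this toy comparison asks is whether the group at least maintained its position relative to the rest of the state, not simply whether its raw scores went up.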
Review by John Miles
By Matthew J. Kauffman, James Meacham, Hall Sawyer, Alethea Y. Steingisser, William J. Rudd, Emilene Ostlind, Wild Migrations: Atlas of Wyoming’s Ungulates. Corvallis: Oregon State University Press, 2018.
Migration is a well-known phenomenon, but only recently have researchers been able to precisely track migration corridors, and just in time. Across North America and the world, development of various kinds has interrupted migrations. Perhaps the most famous such interruption in American history was when the transcontinental railroad sliced through the migratory paths of millions of bison. The great migration herds were literally split apart by the railroad which allowed passengers to shoot the animals from trains as they crossed the Great Plains. This along with other stresses led to the rapid decline of bison populations. Migration is defined in this book: “To migrate means to travel seasonally, usually each spring and autumn, between distinct and often distant habitats or ranges. Migration entails an animal returning, year after year, to the same seasonal habitats, as opposed to wandering from place to place across a landscape or dispersing from the range where it was born without ever returning.”
Conservation has often involved drawing lines around habitats, which often are summer range of terrestrial species, especially important hot spots. This has been good and necessary work, but not often has the definition of habitat encompassed migration corridors which can be long and narrow and difficult to protect from surrounding human communities. All sorts of developments, but especially roads, have bisected these long corridors, cutting off migrations and reducing the viability of migratory animal populations. Recognizing this, researchers today are using technology to trace migratory routes in hopes of protecting them and thus conserving migratory species. Such research is especially important to rewilding work, which seeks to protect and restore connectivity on the landscape necessary for conservation of biological diversity. Migration is only a part of the challenge of allowing species to move across landscapes, seascapes, aquatic environments, and even through the air, but it is an important part.
When Yellowstone National Park was established in 1872, Americans were still in the grip of what Stewart Udall in his book The Quiet Crisis called the “myth of superabundance.” The dramatic late-19th-century decline of ungulate populations, including bison and even deer, and the extinction of species like the passenger pigeon and Carolina parakeet weakened the myth, and wildlife conservation emerged as part of a broad conservation movement. Back then, understanding of wildlife ecology was very limited. More than a century later that understanding has advanced dramatically, as Wild Migrations illustrates.
This book is an atlas unlike any that I have seen. Its focus is on the state of Wyoming in which two national parks, Grand Teton and Yellowstone, are very much at the core of migrations of elk, bison, and mule deer. Many migrations in the state do not touch on these national parks, but the point of the book is that migratory ungulate populations, once nearly eradicated by overhunting and other factors, are mostly on the rebound. This recovery, however, is threatened today by energy development, roads, rural housing development, fencing, disease, and climate change, among other stressors. Long-term assurance of viable populations of elk and bison (moose as well, recent returnees) requires scientifically informed policy. Migration research, presented in this elegant 208-page book, will contribute to this goal.
The fact of ungulate migration in Wyoming and across the world has long been known, but exactly where the migration routes were located has been difficult to learn until now. Mathew Kauffman, a University of Wyoming wildlife ecology professor and cofounder of the Wyoming Migration Initiative which built the knowledge base for this book, notes in the preface that his ecological research in Yellowstone led him to perceive that the Wyoming landscape was changing to the detriment of wildlife. Many migrations were becoming more difficult, and some populations were consequently being lost. He writes:
“ … we were starting to understand how changes to Wyoming landscapes were making the migrations more difficult. Migrating wildlife need vast swaths of connected habitat. Even now, we don’t know how many holes we can punch in these corridors before migrating wildlife will abandon their traditional movements, but we are certain that ever-expanding roads, fences, houses and well pads make it harder for animals to migrate and reduce the benefits these seasonal journeys provide.”
The Wyoming Migration Initiative was launched to contribute to the goal of sustaining migration corridors, which would require much research to determine where those corridors are and what measures are needed to sustain them.
Wild Migrations is a report on the research done so far and its historical context. Its atlas format is intended to make that research accessible to many decision-makers. The goal of the project was and is to produce results that can be used in development and conservation planning and management to assure that migration corridors are protected. All major sections of the atlas are chock full of information provided as text, maps, charts, graphs, timelines, illustrations, and photographs. The introductory section provides an overview of migration – what it is and why it is important – an overall picture of migration in Wyoming, and relevant information about the migratory species studied including mule deer, pronghorn, elk, moose, bighorn sheep, bison, white-tailed deer, and mountain goats. Next is history, describing what is known of ungulate populations in times before trappers and explorers, the decline of populations by “exploitation and overharvest,” followed by population recovery. A section on science describes the history of migration research, explains data collection and analysis, and presents what has been learned about stopovers, “surfing the green wave” (how migration follows seasonal green-up), fidelity (to migration routes), and predation, among other topics. Final sections describe threats and conservation.
An atlas like this could not have been produced until very recently with development of technology that allows precise tracking of migrating animals. Global positioning system collars and “store on board collars” allow researchers to accumulate mountains of data over long time periods. “A few animals wearing GPS collars can record tens of thousands of locations in a year or two, greatly expanding our understanding of seasonal ranges and migration routes.” Researchers then use statistical techniques to outline polygons of winter and summer ranges of each animal, then “merge individual migration routes to high- and low-use migration routes, stopovers, summer ranges, and winter ranges for the entire population.” This sounds like an immense amount of work, and it is. The result is detailed information about migration routes which can be used in various ways, though migration researchers admit that there is much yet to learn.
Complementing all this research technology and analysis is Geographic Information System cartography, which allows remarkable visualization of the data collected. The cartography in the atlas is the work of the University of Oregon InfoGraphics Lab, and the maps and graphics in Wild Migrations are superb. For instance, one full-page, eight-by-eleven-inch map, titled “Population-Level Migration Corridors,” overlays mule deer migration routes, winter range, summer range, stopovers, low-use corridors, and high-use corridors on the physical geography over a 65-mile distance. Another map titled “Greater Yellowstone Ecosystem Elk Migrations” displays migration routes of nine elk herds, each herd’s routes presented in different colors, clearly indicating how elk migrate in and around Yellowstone National Park. A third map presents the “Path of the Pronghorn” in western Wyoming, a 100-mile migratory route, along which are a Forest Service Protected Corridor, conservation easements, BLM Areas of Critical Environmental Concern, wildlife overpasses, and well pads. Challenges and conservation efforts along this path are clearly portrayed. The book contains hundreds of such maps and other graphic displays of the information gathered.
While this is a Wyoming project, the most exciting aspect of this publication for me is what it reveals about the conservation potential of such work which, if done more widely in the American West and across the world, could be a giant step forward in wildlife conservation. Ungulate migration corridors are as yet more intact in Wyoming than in most other states in America, and the wildlife there is more abundant, but the research technology, methods, and the GIS presentation of it show great potential for use in a range of wildlife conservation challenges. I learned of the Wyoming Migration Initiative through photographer Joe Riis’s marvelous 2017 book Yellowstone Migrations. The photographs in Riis’s book are wonderful, and there is text, but I was left wanting to know more about what is being done to understand migration and to protect migration corridors. Wild Migrations gives me just what I wanted after poring over Yellowstone Migrations.
Conservation is the avowed goal of migration research, and the final section of the book describes the Red Rim Fence Dispute, which stretched from the mid-1970s to the early 1990s. The dispute arose when a rancher decided to build a 28-mile-long, five-foot-high woven-wire fence around his land, which happened to include critical wintering habitat for a large herd of pronghorn antelope. Blocked from their winter range, an estimated 700 pronghorn perished in the winter after the fence was installed. After years of public outrage and litigation, the “rancher,” whose goal was to reduce or eliminate the pronghorn so he could mine coal on the enclosed land, lost multiple lawsuits against efforts to reduce the effects of the fence and lost the land when his mining company declared bankruptcy. The story ended with Wyoming Game and Fish purchasing 15,000 acres of the critical habitat and establishing the Red Rim-Daley Wildlife Habitat Management Area.
This is a cautionary tale, suggesting what might happen when critical migratory habitat is left unprotected. With the knowledge resulting from research and presented on the atlas, decisions can be made to protect migrations with various tools – protected areas like national parks, the Wyoming Range Withdrawal Area, and designated Wilderness. Other tools, their recent use profusely illustrated, include conservation easements, highway over- and under-passes at critical pinch points, and fence modifications to allow animals to move over or under them. The 150-mile Red Desert to Hoback mule deer migration provides an example of how many land ownerships and conservation tools can be coordinated over a narrow but long area to sustain a migration corridor. The authors write, “Because there is no mechanism or clear approach to protect entire migration routes in multiple-use landscapes, the only viable strategy is for local, state, and federal stakeholders to work together in a concerted effort.” A coalition of conservation groups has formed to help landowners and land managers protect the long mule deer migration.
In an article about the atlas, veteran Wyoming reporter Angus Thuermer, Jr. quotes Fred Lindzey, former president of the Wyoming Game and Fish Commission. “There’s a lot of things we don’t know about why certain paths are chosen,” Lindzey said, but if some oil or residential development has to be forgone here or there, “that’s a small price to pay.” Thuermer notes too that “Thirty years after white-collared pronghorn does showed up in Jackson Hole after trekking 150 miles from the south, Game and Fish still hasn’t designated a corridor for these animals. Meantime, the Bureau of Land Management has approved gas field development … that lies astride the route in southern Sublette County.” The atlas presents the information and suggests what needs to be done but makes no specific prescriptions. Whether it is used for conservation by decision-makers remains to be seen.
While large ungulates are the most visible migrants in Wyoming and much of the American West, the best-known migrations are those of birds. Most of their migratory routes are in the air, but their stopovers, the refueling stations along the way, must be identified and protected along the East Coast and Midwest flyways, and that work is well under way. Neotropical migratory birds are in big trouble, for various reasons. Land-bound migrators like ungulates present complex conservation challenges, and these challenges are even greater in the highly populated and developed American East than in the West. For instance, three species of bats are true migrants, summering in the northern United States and Canada and then migrating to winter in South Carolina, Georgia, and Florida. Perhaps the most famous migration across the East is that of the monarch butterfly, and many other insects migrate as well. Mapping such migrations and then doing something to sustain them across many political jurisdictions and in the face of climate change is a daunting but crucial task.
Migration ecology is of course only one part of the great challenge of rewilding, but an important one; and the atlas of Wyoming ungulates draws attention to the need and the potential of this science. Examining it awakened in me an awareness, curiosity, and the beginnings of understanding of this dimension of the rewilding project.
In her foreword to Wild Migrations, writer Annie Proulx, who has written much about Wyoming, observes that “The essence of the place is compressed into its pages, information so revelatory it seems to stand in three dimensions … The richly detailed pages are also a preparation for the future as climate alteration, increasing human development, and population inflow affect the state.” I would add that works like this are preparation for changes in the natural world everywhere and especially where wildlife migration is part of the ecology.
David Brower, then Executive Director of the Sierra Club, gave a talk at Dartmouth College in 1965 on the threat of dams to Grand Canyon National Park. John, a New Hampshire native who had not yet been to the American West, was flabbergasted. “What can I do?” he asked. Brower handed him a Sierra Club membership application, and he was hooked, his first big conservation issue being establishment of North Cascades National Park.
After grad school at the University of Oregon, John landed in Bellingham, Washington, a month before the park was created. At Western Washington University he was in on the founding of Huxley College of Environmental Studies, teaching environmental education, history, ethics and literature, ultimately serving as dean of the College.
He taught at Huxley for 44 years, climbing and hiking all over the West, especially in the North Cascades, for research and recreation. Author and editor of several books, including Wilderness in National Parks, John served on the board of the National Parks Conservation Association, the Washington Forest Practices Board, and helped found and build the North Cascades Institute.
Retired and now living near Taos, New Mexico, he continues to work for national parks, wilderness, and rewilding the earth.
In Paper 1, we developed new analytical techniques for studying weather balloon data. Using these techniques, we found that we were able to accurately describe the changes in temperature with height by just accounting for changes in water content and the existence of a previously unreported phase change. This shows that the temperatures at each height are completely independent of the greenhouse gas concentrations.
This disproves the greenhouse effect theory. It also disproves the man-made global warming theory, which is based on the greenhouse effect theory.
In Paper 2, we suggest that the phase change we identified in Paper 1 involves the “multimerization” of oxygen and/or nitrogen in the air above the “troposphere” (the lower part of the atmosphere). This has important implications for a number of important phenomena related to the atmosphere, e.g., ozone formation, the locations of the jet streams, and how tropical cyclones form.
In Paper 3, we identify a mechanism by which energy is transmitted throughout the atmosphere, which we call “pervection”. This mechanism is not considered in the greenhouse effect theory, or in the current climate models. We carried out laboratory experiments to measure the rates of pervection in air, and find that it is much faster than radiation, convection and conduction.
This explains why the greenhouse effect theory doesn’t work.
- 1. Introduction
- 2. The atmospheric temperature profile
- 3. Crash course in thermodynamics & radiative physics: All you need to know
- 4. Paper 1: Phase change associated with tropopause
- 5. Paper 2: Multimerization of atmospheric gases above the troposphere
- 6. Paper 3: Pervective power
- 7. Applying the scientific method to the greenhouse effect theory
- 8. Conclusions
We have written a series of three papers discussing the physics of the Earth’s atmosphere, and we have submitted these for peer review at the Open Peer Review Journal:
- The physics of the Earth’s atmosphere I. Phase change associated with the tropopause – Michael Connolly & Ronan Connolly, 2014a
- The physics of the Earth’s atmosphere II. Multimerization of atmospheric gases above the troposphere – Michael Connolly & Ronan Connolly, 2014b
- The physics of the Earth’s atmosphere III. Pervective power – Michael Connolly & Ronan Connolly, 2014c
In these papers, we show that carbon dioxide does not influence the atmospheric temperatures. This directly contradicts the greenhouse effect theory, which predicts that carbon dioxide should increase the temperature in the lower atmosphere (the “troposphere”), and decrease the temperature in the middle atmosphere (the “stratosphere”).
It also contradicts the man-made global warming theory, since the basis for the man-made global warming theory is that increasing the concentration of carbon dioxide in the atmosphere will cause global warming by increasing the greenhouse effect. If the greenhouse effect doesn’t exist, then the man-made global warming theory doesn’t work.
Aside from this, the results in our papers also offer new insights into why the jet streams exist, why tropical cyclones form, weather prediction and a new theory for how ozone forms in the ozone layer, amongst many other things.
In this essay, we will try to summarise some of these findings and results. We will also try to summarise the greenhouse effect theory, and what is wrong with it.
However, unfortunately, atmospheric physics is quite a technical subject. So, before we can discuss our findings and their significance, there are some tricky concepts and terminology about the atmosphere, thermodynamics and energy transmission mechanisms that we will need to introduce.
As a result, this essay is a bit more technical than some of our other ones. We have tried to explain these concepts in a fairly clear, and straightforward manner, but if you haven’t studied physics before, it might take a couple of read-throughs to fully figure them out.
Anyway, in Section 2, we will describe the different regions of the atmosphere, and how temperatures vary throughout these regions. In Section 3, we will provide a basic overview of some of the key physics concepts you’ll need to understand our results. We will also summarise the greenhouse effect theory. Then, in Sections 4-6, we will outline the main results of each of the three papers. In Section 7, we will discuss what the scientific method tells us about the greenhouse effect. Finally, we will offer some concluding remarks in Section 8.
2. The atmospheric temperature profile
As you travel up in the atmosphere, the air temperature generally cools down, at a rate of roughly -6.5°C per kilometre (-3.5°F per 1,000 feet). This is why we get snow at the tops of mountains, even if it’s warm at sea level. The reason the air cools down with height is that the thermal energy (“heat”) of the air gets converted into “potential energy” to counteract the gravitational energy pulling the air back to ground. At first, it might seem hard to visualise this gravitational cooling, but it is actually quite a strong effect. After all, it takes a lot of energy to hold an object up in the air without letting it fall, doesn’t it?
This rate of change of temperature with height (or altitude) is called the “environmental lapse rate”.
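To get a feel for what that rate implies, the short sketch below applies it to a few altitudes, starting from an assumed sea-level temperature of 15°C. The starting value is just an example, and the constant -6.5°C per kilometre rate only holds within the troposphere.

```python
LAPSE_RATE_C_PER_KM = -6.5  # typical environmental lapse rate in the troposphere
sea_level_temp_c = 15.0     # assumed sea-level temperature, for illustration only

for altitude_km in (0, 1, 3, 5, 8, 11):
    temp_c = sea_level_temp_c + LAPSE_RATE_C_PER_KM * altitude_km
    print(f"{altitude_km:>2} km: {temp_c:6.1f} °C")
```

By about 11 kilometres up, the assumed 15°C surface air has cooled to roughly -56°C, which is why mountaintops and air at cruising altitude are so cold.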
Surprisingly, when you go up in the air high enough, you can find regions of the atmosphere where the temperature increases with altitude! For this reason, atmospheric scientists and meteorologists give the different parts of the Earth’s atmosphere different names. The average temperature profile for the first 120 kilometres and the names given to these regions are shown in Figure 1.
By the way, in this essay we will mostly be using the Kelvin scale to describe temperatures. This is a temperature scale that is commonly used by scientists, but is not as common in everyday use. If you’re unfamiliar with it, 200 K is roughly -75°C or -100°F, while 300 K is roughly +25°C or +80°F.
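If you want to check those rough conversions yourself, the relationships are °C = K - 273.15 and °F = °C × 9/5 + 32; a minimal sketch:

```python
def kelvin_to_celsius(t_k):
    return t_k - 273.15

def kelvin_to_fahrenheit(t_k):
    return kelvin_to_celsius(t_k) * 9 / 5 + 32

for t_k in (200, 300):
    print(f"{t_k} K = {kelvin_to_celsius(t_k):.0f} °C = {kelvin_to_fahrenheit(t_k):.0f} °F")
```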
At any rate, the scientific name for the part of the atmosphere closest to the ground is the “troposphere”. In the troposphere, temperatures decrease with height at the environmental lapse rate we mentioned above, i.e., -6.5°C per kilometre (-3.5°F per 1,000 feet).
Above the troposphere, there is a region where the temperature stops decreasing (or “pauses”) with height, and this region is called the “tropopause”. Transatlantic airplanes sometimes fly just below the tropopause.
As we travel up higher, we reach a region where temperatures increase with height. If everything else is equal, hot air is lighter than cold air. So, when this region was first noticed, scientists suggested that the hotter air would be unable to sink below the colder air and the air in this region wouldn’t be able to mix properly. They suggested that the air would become “stratified” into different layers, and this led to the name for this region, the “stratosphere”. This also led to the name for the troposphere, which comes from the Greek word, tropos, which means “to turn, mix”, i.e., the troposphere was considered a region where mixing of the air takes place.
To get an idea of these altitudes, when Felix Baumgartner broke the world record for the highest skydive on October 14, 2012, he was jumping from 39 kilometres (24 miles). This is a few kilometres above where the current weather balloons reach, i.e., in the middle of the stratosphere.
At the moment, most weather balloons burst before reaching about 30-35 kilometres (18-22 miles). Much of our analysis is based on weather balloon data. So, for our analysis, we only consider the first three regions of the atmosphere, the troposphere, tropopause and stratosphere.
You can see from Figure 1 that there are also several other regions at higher altitudes. These other regions are beyond the scope of this essay, i.e., the “stratopause”, the “mesosphere” and the “mesopause”.
Still, you might be interested to know about the “Kármán line”. Although the atmosphere technically stretches out thousands of kilometres into space, the density of the atmosphere is so small in the upper parts of the atmosphere that most people choose an arbitrary value of 100 kilometres as the boundary between the atmosphere and space. This is called the Kármán line. If you ever have watched a meteor shower or seen a “shooting star”, then you probably were looking just below this line, at an altitude of about 75-100 kilometres, which is the “meteor zone”.

The temperature profile in Figure 1 is the average profile for a mid-latitude atmosphere. But, obviously, the climate is different in the tropics and at the poles. It also changes with the seasons. Just like ground temperatures are different at the equator than they are in the Arctic, the atmospheric temperature profiles also change with latitude. Typical temperature profiles for a tropical climate and a polar climate are compared to the “standard” mid-latitude climate in Figure 2, up to a height of 30 kilometres (19 miles).
One more term you may find important is the “boundary layer”. This is the first kilometre or two of the troposphere, starting at ground level. We all live in the boundary layer, so this is the part of the atmosphere we are most familiar with. Weather in the boundary layer is quite similar to the rest of the troposphere, but it’s generally windier (more “turbulent”) and the air tends to have more water content.
3. Crash course in thermodynamics & radiative physics: All you need to know
Understanding energy and energy equilibrium
All molecules contain energy, but the amount of energy the molecules have and the way in which it is stored can vary. In this essay, we will consider a few different types of energy. We already mentioned in the previous section the difference between two of these types, i.e., thermal energy and potential energy.
Broadly speaking, we can divide molecular energy into two categories:
- Internal energy – the energy that molecules possess by themselves
- External energy – the energy that molecules have relative to their surroundings. We refer to external energy as mechanical energy.
This distinction might seem a bit confusing at first, but it should become clearer when we give some examples in a moment.
These two categories can themselves be sub-divided into sub-categories.
We consider two types of internal energy:
- Thermal energy – the internal energy which causes molecules to randomly move about. The temperature of a substance refers to the average thermal energy of the molecules in the substance. “Hot” substances have a lot of thermal energy, while “cold” substances don’t have much
- Latent energy – the internal energy that molecules have due to their molecular structure, e.g., the energy stored in chemical bonds. It is called latent (meaning “hidden”), because when you increase or decrease the latent energy of a substance, its temperature doesn’t change.
When latent energy was first discovered in the 18th century, it wasn’t known that molecules contained atoms and bonds. So, nobody knew what latent energy did, or why it existed, and the energy just seemed to be mysteriously “hidden” away somehow.
We also consider two types of mechanical energy:
- Potential energy – the energy that a substance has as a result of where it is. For instance, as we mentioned in the previous section, if a substance is lifted up into the air, its potential energy increases because it is higher in the Earth’s gravitational field.
- Kinetic energy – the energy that a substance has when it’s moving in a particular direction.
Energy can be converted between the different types.

For instance, if a boulder is resting at the top of a hill, it has a lot of potential energy, but very little kinetic energy. If the boulder starts to roll down the hill, its potential energy will start decreasing, but its kinetic energy will start increasing, as it picks up speed.
As another example, in Section 2, we mentioned how the air in the troposphere cools as you travel up through the atmosphere, and that this was because thermal energy was being converted into potential energy.
In the 18th and 19th centuries, some scientists began trying to understand in detail when and how these energy conversions could take place. In particular, there was a lot of interest in figuring out how to improve the efficiency of the steam engine, which had just been invented.

Steam engines were able to convert thermal energy into mechanical energy, e.g., causing a train to move. Similarly, James Joule had shown that mechanical energy could be converted into thermal energy.
The study of these energy interconversions became known as “thermodynamics”, because it was looking at how thermal energy and “dynamical” (or mechanical) energy were related.
One of the main realisations in thermodynamics is the law of conservation of energy, which is sometimes referred to as the “First Law of Thermodynamics”: energy cannot be created or destroyed, only converted from one form into another.

The total energy of a substance will include the thermal energy of the substance, its latent energy, its potential energy, and its kinetic energy:

Total energy = thermal energy + latent energy + potential energy + kinetic energy
So, in our example of the boulder rolling down a hill, when the potential energy decreases as it gets closer to the bottom, its kinetic energy increases, and the total energy remains constant.
Similarly, when the air in the troposphere rises up in the atmosphere, its thermal energy decreases (i.e., it gets colder!), but its potential energy increases, and the total energy remains constant!
This is a very important concept to remember for this essay. Normally, when one substance is colder than another we might think that it is lower in energy. However, this is not necessarily the case – if the colder substance has more latent, potential or kinetic energy then its total energy might actually be the same as that of the hotter substance. The colder substance might even have more total energy.
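To make the boulder example numerical, here is a minimal sketch that tracks the mechanical energy as potential energy is converted into kinetic energy, ignoring friction and any change in thermal or latent energy. The mass, hill height and gravitational acceleration are arbitrary example values.

```python
import math

g = 9.81            # gravitational acceleration, m/s^2
mass_kg = 100.0     # example boulder mass
hill_height_m = 50.0

# At the top of the hill: all mechanical energy is potential, none is kinetic.
potential_top = mass_kg * g * hill_height_m
kinetic_top = 0.0
total_top = potential_top + kinetic_top

# At the bottom: the potential energy has been converted into kinetic energy.
potential_bottom = 0.0
kinetic_bottom = total_top - potential_bottom            # conservation of energy
speed_bottom = math.sqrt(2 * kinetic_bottom / mass_kg)   # from KE = 1/2 m v^2

print(f"Total energy at top:    {total_top:.0f} J")
print(f"Total energy at bottom: {potential_bottom + kinetic_bottom:.0f} J")
print(f"Speed at the bottom:    {speed_bottom:.1f} m/s")
```

The same bookkeeping applies to rising air, except that there the trade is mainly between thermal and potential energy.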
Another key concept for this essay is that of “energy equilibrium”: a system is in energy equilibrium when, on average, every part of the system has the same amount of energy, so that no part stays unusually energy-rich or energy-poor for long. The technical term for energy equilibrium is “thermodynamic equilibrium”.
For a system in energy equilibrium, if one part of the system loses energy and starts to become unusually low in energy, energy flows from another part of the system to keep the average constant. Similarly, if one part of the system gains energy, this extra energy is rapidly redistributed throughout the system.
Is the atmosphere in energy equilibrium? That is a good question.
According to the greenhouse effect theory, the answer is no.
The greenhouse effect theory explicitly assumes that the atmosphere is only in local energy equilibrium.
If a system is only in local energy equilibrium then different parts of the system can have different amounts of energy.
As we will see later, the greenhouse effect theory fundamentally requires that the atmosphere is only in local energy equilibrium. This is because the theory predicts that greenhouse gases will cause some parts of the atmosphere to become more energetic than other parts. For instance, the greenhouse effect is supposed to increase temperatures in the troposphere, causing global warming.
However, this assumption that the atmosphere is only in local energy equilibrium was never experimentally proven.
In our papers, we experimentally show that the atmosphere is actually in complete energy equilibrium – at least over the distances from the bottom of the troposphere to the top of the stratosphere, which the greenhouse effect theory is concerned with.
What is infrared light?

Before we can talk about the greenhouse effect theory, we need to understand a little bit about the different types of light.
While you might not realise it, all warm objects are constantly cooling down by emitting light, including us. The reason why we don’t seem to be constantly “glowing” is that the human eye cannot detect the types of light that are emitted at body temperature, i.e., the light is not “visible light”.
But, if we use infrared cameras or “thermal imaging” goggles, we can see that humans and other warm, living things do actually “glow” (Figure 5).

Infrared (IR) light is light that is of a lower frequency than visible light, while ultraviolet (UV) light is of a higher frequency than visible light.
When we think of light, we usually think of “visible light”, which is the only type of light that the human eye can see, but this is actually only a very small range of frequencies that light can have (see Figure 6).
For instance, bees and other insects can also see some ultraviolet frequencies, and many flowers have evolved quite unusual colour patterns which can only be detected by creatures that can see ultraviolet light – e.g., see here, here, or here. On the other hand, some animals, e.g., snakes, can see some infrared frequencies, which allows them to use “heat-sensing vision” to hunt their prey, e.g., see here or here.
As a simple rule of thumb, the hotter the object, the higher the frequencies of the light it emits. At room temperature, objects mostly emit light in the infrared region. However, when a coal fire gets hot enough, it also starts emitting light at higher frequencies, i.e., in the visible region. The coals become “red hot”.
Because the temperature at the surface of the Sun is nearly 6000 K, the light that it emits is mostly in the form of ultraviolet and visible (“UV-Vis.” for short), with some infrared light. In contrast, the surface of the Earth is only about 300 K, and so the light that the Earth emits is mostly low frequency infrared light (called “long infrared” or long-IR).
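That rule of thumb can be made quantitative with Wien’s displacement law, which says the peak emission wavelength of a warm object is about 2898 micrometre-kelvins divided by its temperature. The sketch below applies it to the approximate temperatures quoted above; this is standard textbook physics, not a result from the papers being summarised.

```python
WIEN_CONSTANT_UM_K = 2898.0  # Wien's displacement constant, in micrometre-kelvins

def peak_wavelength_um(temperature_k):
    """Wavelength (micrometres) at which a blackbody at this temperature emits most strongly."""
    return WIEN_CONSTANT_UM_K / temperature_k

for label, t_k in (("Sun's surface (~6000 K)", 6000), ("Earth's surface (~300 K)", 300)):
    print(f"{label}: peak emission near {peak_wavelength_um(t_k):.2f} micrometres")
```

The Sun’s peak lands in the visible range (around 0.5 micrometres), while the Earth’s lands deep in the long-infrared (around 10 micrometres), which is exactly the contrast described above.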
As the Sun shines light onto the Earth, this heats up the Earth’s surface and atmosphere. However, as the Earth’s surface and atmosphere heat up, they also emit more light. The average energy of the light reaching the Earth from the Sun roughly matches the average energy of the light leaving the Earth into space. This works out at about 240 Watts per square metre of the Earth’s surface.
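That figure of roughly 240 Watts per square metre can be reproduced from two commonly quoted textbook numbers, which are assumptions here rather than values taken from the papers: a top-of-atmosphere solar irradiance of about 1361 W/m² (spread over the whole sphere, hence divided by 4) and an albedo of about 0.3, the fraction of sunlight reflected straight back to space.

```python
SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere (approximate)
ALBEDO = 0.3             # fraction of incoming sunlight reflected back to space (approximate)

# The Earth intercepts sunlight over a disc (pi * r^2) but has a surface area of 4 * pi * r^2,
# so the average incoming flux per square metre of surface is the solar constant divided by 4.
average_incoming = SOLAR_CONSTANT / 4
absorbed = average_incoming * (1 - ALBEDO)

print(f"Average incoming flux: {average_incoming:.0f} W/m^2")
print(f"Average absorbed flux: {absorbed:.0f} W/m^2")  # roughly 240 W/m^2
```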
This brings us to the greenhouse effect theory.
In the 19th century, an Irish scientist named John Tyndall discovered that some of the gases in the Earth’s atmosphere interact with infrared light, but others don’t. Tyndall, 1861 (DOI; .pdf available here) showed that nitrogen (N2) and oxygen (O2) are totally transparent to infrared light. This was important because nitrogen and oxygen make up almost all of the gas in the atmosphere. The third most abundant gas in the atmosphere, argon (Ar) wasn’t discovered until several decades later, but it also is transparent to infrared light.
However, he found that some of the gases which only occurred in trace amounts (“trace gases”) do interact with infrared light. The main “infrared-active” gases in the Earth’s atmosphere are water vapour (H2O), carbon dioxide (CO2), ozone (O3) and methane (CH4).
Because the light leaving the Earth is mostly infrared light, some researchers suggested that these infrared-active gases might alter the rate at which the Earth cooled to space. This theory has become known as the “greenhouse effect” theory, and as a result, infrared-active gases such as water vapour and carbon dioxide are often referred to as “greenhouse gases”.
In this essay, we will stick to the more scientifically relevant term, “infrared-active gases”, instead of the “greenhouse gas” term.
Greenhouse effect theory: “It’s simple physics” version
In crude terms, the greenhouse effect theory predicts that temperatures in the troposphere will be higher in the presence of infrared-active gases than they would be otherwise.
If the greenhouse effect theory were true then increasing the concentration of carbon dioxide in the atmosphere should increase the average temperature in the troposphere, because carbon dioxide is an infrared-active gas. That is, carbon dioxide should cause “global warming”.
This is the basis for the man-made global warming theory. The burning of fossil fuels releases carbon dioxide into the atmosphere. So, according to the man-made global warming theory, our fossil fuel usage should be warming the planet by “enhancing the greenhouse effect”.
Therefore, in order to check if man-made global warming theory is valid, it is important to check whether or not the greenhouse effect theory is valid. When we first started studying the greenhouse effect theory in detail, one of the trickiest things to figure out was exactly what the theory was supposed to be. We found lots of people who would make definitive claims, such as “it’s simple physics”, “it’s well understood”, or “they teach it in school, everyone knows about it…”:
Simple physics says, if you increase the concentration of carbon dioxide in the atmosphere, the temperature of the earth should respond and warm. – Prof. Robert Watson (Chair of the Intergovernmental Panel on Climate Change, 1997-2002) during a TV debate on global warming. 23rd November 2009
…That brings up the basic science of global warming, and I’m not going to spend a lot of time on this, because you know it well… Al Gore in his popular presentation on man-made global warming – An Inconvenient Truth (2006)
However, when pressed to elaborate on this allegedly “simple” physics, people often reverted to hand-waving, vague and self-contradictory explanations. To us, that’s not “simple physics”. Simple physics should be clear, intuitive and easy to test and verify.
At any rate, one typical explanation that is offered is that when sunlight reaches the Earth, the Earth is heated up, and that infrared-active gases somehow “trap” some of this heat in the atmosphere, preventing the Earth from fully cooling down.
For instance, that is the explanation used by Al Gore in his An Inconvenient Truth (2006) presentation.
The “heat-trapping” version of the greenhouse effect theory is promoted by everyone from environmentalist groups, e.g., Greenpeace, and WWF; to government websites, e.g., Australia, Canada and USA; and educational forums, e.g., Livescience.com, About.com, and HowStuffWorks.com.
However, despite its popularity, it is just plain wrong!
The Earth is continuously being heated by the light from the Sun, 24 hours a day, 365 days a year (366 in leap years). However, as we mentioned earlier, this incoming sunlight is balanced by the Earth cooling back into space – mainly by emitting infra-red light.
If infrared-active gases were genuinely “trapping” the heat from the sun, then every day the air would be continuously heating up. During the night, the air would continue to remain just as warm, since the heat was trapped. As a result, each day would be hotter than the day before it. Presumably, this would happen during the winter too. After all, because the sun also shines during the winter, the “trapped heat” surely would continue to accumulate the whole year round. Every season would be hotter than the one it followed. If this were true, then the air temperature would rapidly reach temperatures approaching that of the sun!
This is clearly nonsense – on average, winters tend to be colder than summers, and the night tends to be colder than the day.
It seems that the “simple physics” version of the greenhouse effect theory is actually just simplistic physics!
Having said that, this simplistic theory is not the greenhouse effect theory that is actually used by the current climate models. Instead, as we will discuss below, the “greenhouse effect” used in climate models is quite complex. It is also highly theoretical… and it has never been experimentally shown to exist.
In Sections 4-6, we will explain how our research shows that this more complicated greenhouse effect theory is also wrong. However, unlike the “simple physics” theory, it is at least plausible and worthy of investigation. So, let us now briefly summarise it…
Greenhouse effect theory: The version used by climate models
In the “simple physics” version of the greenhouse effect theory, infrared-active gases are supposed to “trap” heat in the atmosphere, because they can absorb infrared light.
As we discussed earlier, it is true that infrared-active gases such as water vapour and carbon dioxide can absorb infrared light. However, if a gas can absorb infrared light, it also can emit infrared light. So, once an infrared-active gas absorbs infrared light, it is only “trapped” for at most a few tenths of a second before it is re-emitted!

The notion that carbon dioxide “traps” heat might have made some sense in the 19th century, when scientists were only beginning to investigate heat and infrared light, but it is now a very outdated idea.
Indeed, if carbon dioxide were genuinely able to “trap” heat, then it would be such a good insulator, that we would use it for filling the gap in double-glazed windows. Instead, we typically use ordinary air because of its good insulation properties, or even use pure argon (an infrared-inactive gas), e.g., see here or here.
So, if carbon dioxide doesn’t trap heat, why do the current climate models still predict that there is a greenhouse effect?
Well, while infrared-active gases can absorb and emit infrared light, there is a slight delay between absorption and emission. This delay can range from a few milliseconds to a few tenths of a second.
This might not seem like much, but for that brief moment between absorbing infrared light and emitting it again, the infrared-active gas is more energetic than its neighbouring molecules. We say that the molecule is “excited”. Because the molecules in a gas are constantly moving about and colliding with each other, it is very likely that some nearby nitrogen or oxygen molecule will collide with our excited infrared-active gas molecule before it has a chance to emit its light.
During a molecular collision, molecules can exchange energy and so some of the excited energy ends up being transferred in the process. Since the infrared-inactive gases don’t emit infrared light, if enough absorbed energy is transferred to the nitrogen and oxygen molecules through collisions, that could theoretically increase the average energy of the air molecules, i.e., it could “heat up” the air.
It is this theoretical “collision-induced” heating effect that is the basis for the greenhouse effect actually used by the climate models, e.g., see Pierrehumbert, 2011 (Abstract; Google Scholar access).
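To see why collisions are expected to win near the ground, here is a toy order-of-magnitude comparison. The emission delay comes from the range quoted above; the mean time between molecular collisions at sea-level density is an assumed round number of the order of a tenth of a nanosecond, not a figure from the papers. Treating emission and collision as competing random processes, the chance that an excited molecule emits before it collides is its share of the combined rate.

```python
# Order-of-magnitude assumptions, for illustration only.
collision_interval_s = 1e-10  # assumed mean time between molecular collisions near sea level
emission_delay_s = 1e-2       # emission delay within the range quoted above (~10 milliseconds)

collision_rate = 1 / collision_interval_s
emission_rate = 1 / emission_delay_s

# For two competing random (exponential) processes, the probability that emission
# happens first is the emission rate's share of the total rate.
p_emit_first = emission_rate / (emission_rate + collision_rate)

print(f"Probability of emitting before colliding: about 1 in {round(1 / p_emit_first):,}")
```

Under these assumptions, nearly every absorbed quantum of energy is passed on in a collision rather than re-emitted.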
Now, astute readers might be wondering about our earlier discussion on energy equilibrium. If the atmosphere is in energy equilibrium, then as soon as one part of the atmosphere starts gaining more energy than another, the atmosphere should start rapidly redistributing that energy, and thereby restoring energy equilibrium.
This means that any “energetic pockets” of air which might start to form from this theoretical greenhouse effect would immediately begin dissipating again. In other words, if the atmosphere is in energy equilibrium then the greenhouse effect cannot exist!
So, again, we’re back to the question of why the current climate models predict that there is a greenhouse effect.
The answer is simple. They explicitly assume that the atmosphere is not in energy equilibrium, but only in local energy equilibrium.
Is this assumption valid? Well, the people who developed the current climate models believe it is, but nobody seems to have ever checked if it was. So, in our three papers, we decided to check. In Sections 4-6, we will describe the results of that check. It turns out that the atmosphere is actually in complete energy equilibrium – at least over the distances of the tens of kilometres from the bottom of the troposphere to the top of the stratosphere.
In other words, the local energy equilibrium assumption of the current climate models is wrong.
Nonetheless, since the greenhouse effect theory is still widely assumed to be valid, it is worth studying its implications a little further, before we move onto our new research…
When we hear that carbon dioxide is supposed to increase the greenhouse effect, probably most of us would assume that the whole atmosphere is supposed to uniformly heat up. However, the proposed greenhouse effect used by the models is actually quite complicated, and it varies dramatically throughout the atmosphere.
There are several reasons for this.
Although the rate of infrared absorption doesn’t depend on the temperature of the infrared-active gases, the rate of emission does. The hotter the molecules, the more infrared light they will emit. However, when a gas molecule emits infrared light, it doesn’t care what direction it is emitting in! According to the models, this means that when the air temperature increases, the rate at which infrared light is emitted into space increases, but so does the rate at which infrared light heads back to ground (“back radiation”).
Another factor is that, as you go up in the atmosphere, the air gets less dense. This means that the average length of time between collisions amongst the air molecules will increase. In other words, it is more likely that excited infrared-active gas molecules will be able to stay excited long enough to emit infrared light.
Finally, the infrared-active gases are not uniformly distributed throughout the atmosphere. For instance, the concentration of water vapour decreases rapidly above the boundary layer, and is higher in the tropics than at the poles. Ozone is another example in that it is mostly found in the mid-stratosphere in the so-called “ozone layer” (which we will discuss below).

With all this in mind, we can see that it is actually quite a difficult job to calculate exactly what the “greenhouse effect” should be at each of the different spots in the atmosphere. According to the theory, the exact effect will vary with height, temperature, latitude and atmospheric composition.
Climate modellers refer to the various attempts at these calculations as “infrared cooling models”, and researchers have been proposing different ones since the 1960s, e.g., Stone & Manabe, 1968 (Open access).
Deciding which infrared cooling model to include in the climate models has been the subject of considerable debate over the years. It has been a particularly tricky debate, because nobody has ever managed to experimentally measure an actual infrared cooling profile for the atmosphere. Nonetheless, most of the ones used in the current climate models are broadly similar to the one in Figure 8.
We can see that these models predict that infrared-active gases should slow down the rate of infrared cooling in the troposphere. This would allow the troposphere to stay a bit warmer, i.e., cause global warming. However, as you go up in the atmosphere, two things happen:
- The density of the air decreases. This means that when an infrared-active gas emits infrared light, it is more likely to “escape” to space.
- The average temperature of the air increases in the stratosphere. This means the rate of infrared emission should increase.
For these two reasons, the current climate models predict that increasing infrared-active gases should actually speed up the rate at which the tropopause and stratosphere cool. So, the calculated “global warming” in the troposphere is at the expense of “global cooling” in the stratosphere, e.g., Hu & Tung, 2002 (Open access) or Santer et al., 2003 (Abstract; Google Scholar access).

Why do temperatures (a) stop decreasing with height in the tropopause, and (b) start increasing with height in the stratosphere?
According to the current climate models, it is pretty much all due to the ozone layer.
When sunlight reaches the planet, the light includes a wide range of frequencies – from infrared light through visible light to ultraviolet light. However, when the sunlight reaches us at the ground, most of the high frequency ultraviolet light has been absorbed. This is because the ozone in the ozone layer absorbs it.
The fact that the ozone layer absorbs ultraviolet light is great for us, because high frequency ultraviolet light is very harmful to most organisms. Life as we know it probably couldn’t exist in the daylight, if all of the sun’s ultraviolet light reached us.
Anyway, because the models assume the atmosphere is only in local energy equilibrium, they conclude that when the ozone absorbs the ultraviolet light, it heats up the air in the ozone layer.
As the light passes through the atmosphere, there is less and less ultraviolet light to absorb, and so the amount of “ozone heating” decreases (Figure 9). The concentration of ozone also decreases once you leave the ozone layer.
So, according to the climate models, the reason why temperatures increase with height in the stratosphere is because of ozone heating. In the tropopause, there is less ozone heating, but they reckon there is still enough to counteract the normal “gravitational cooling”, and that’s why the temperature “pauses”, i.e., remains constant with height.
As we discuss in Paper I, there are major problems with this theory.
First, it relies on the assumption that the atmosphere is only in local energy equilibrium, which has never been proven.
Second, it implies that the tropopause and stratosphere wouldn’t occur without sunlight. During the winter at the poles, it is almost pitch black for several months, yet the tropopause doesn’t disappear. In other words, the tropopause does not need sunlight to occur. Indeed, if we look back at Figure 2, we can see that the tropopause is actually more pronounced for the poles, in that it starts at a much lower height than it does at lower latitudes.
In Section 5, we will put forward a more satisfactory explanation.
Has anyone ever measured the greenhouse effect?
Surprisingly, nobody seems to have actually observed this alleged greenhouse effect … or the ozone heating effect, either!
The theory is based on a few experimental observations:
- As we discussed earlier, all objects, including the Earth, can cool by emitting light. Because the Earth only has a temperature of about 300K, this “cooling” light is mostly in the form of infrared light.
- The main gases in the atmosphere (i.e., nitrogen, oxygen and argon) can’t directly absorb or emit infrared light, but the infrared-active gases (e.g., water vapour, carbon dioxide, ozone and methane) can.
- Fossil fuel usage releases carbon dioxide into the atmosphere, and the concentration of carbon dioxide in the atmosphere has been steadily increasing since at least the 1950s (from 0.031% in 1959 to 0.040% in 2013).
We don’t disagree with these observations. But, they do not prove that there is a greenhouse effect.
The greenhouse effect theory explicitly relies on the assumption that the atmosphere is in local energy equilibrium, yet until we carried out our research, nobody seems to have actually experimentally tested if that assumption was valid. If the assumption is invalid (as our results imply), then the theory is also invalid.
Even aside from that, the greenhouse effect theory makes fairly specific theoretical predictions about how the rates of “infrared cooling” and “ozone heating” are supposed to vary with height, latitude, and season, e.g., Figures 8 and 9. Yet, nobody seems to have attempted to experimentally test these theoretical infrared cooling models, either.
Of course, just because a theory hasn’t been experimentally tested, that doesn’t necessarily mean it’s wrong. However, it doesn’t mean it’s right, either!
With that in mind, we felt it was important to physically check what the data itself was saying, rather than presuming the greenhouse effect theory was right or presuming it was wrong… After all, “Nature” doesn’t care what theories we happen to believe in – it just does its own thing!
4. Paper 1: Phase change associated with tropopause
In this paper, we analysed publicly archived weather measurements taken by a large sample of weather balloons launched across North America. We realised that by analysing these measurements in terms of a property known as the “molar density” of the air, we would be able to gain new insight into how the temperature of the air changes with height.
When we took this approach, we found we were able to accurately describe the changes in temperature with height all the way up to where the balloons burst, i.e., about 35 kilometres (20 miles) high.
We were able to describe these temperature profiles by just accounting for changes in water content and the existence of a previously overlooked phase change. This shows that the temperatures at each height are completely independent of the infrared-active gas concentrations, which directly contradicts the predictions of the greenhouse effect theory.
We suspected that the change in temperature behaviour at the tropopause was actually related to a change in the molecular properties of the air. So, we decided to analyse weather balloon data in terms of a molecular property known as the “molar density”. The molar density of a gas tells you the average number of molecules per cubic metre of the gas. Since there are a LOT of molecules in a cubic metre of gas, it is expressed in terms of the number of “moles” of gas per cubic metre. One mole of a substance corresponds to about 600,000,000,000,000,000,000,000 molecules, which is quite a large number!
To calculate the molar densities from the weather balloon measurements, we converted all of the pressures and temperatures into units of Pa and K, and then determined the values at each pressure using D = n/V = P/(RT), where R is the ideal gas constant (8.314 J/K/mol).
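To make the conversion concrete, here is a minimal sketch in Python (an illustration only, not our actual script), assuming the archive reports pressures in hPa and temperatures in °C, as most radiosonde archives do:

```python
# Minimal sketch: convert radiosonde pressure/temperature readings into molar density.
# Assumes pressures in hPa and temperatures in degrees Celsius (typical archive units).
R = 8.314  # ideal gas constant, J/(K*mol)

def molar_density(pressure_hpa, temperature_c):
    """Return molar density D = n/V = P/(R*T) in mol/m^3."""
    pressure_pa = pressure_hpa * 100.0      # hPa -> Pa
    temperature_k = temperature_c + 273.15  # degC -> K
    return pressure_pa / (R * temperature_k)

# Example: near ground level (1013 hPa, 15 degC) vs near the tropopause (200 hPa, -55 degC)
print(molar_density(1013, 15))   # ~42 mol/m^3
print(molar_density(200, -55))   # ~11 mol/m^3
```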
We downloaded weather balloon data for the entire North America continent from the University of Wyoming archive. We then used this data to calculate the change in molar density with pressure.
Atmospheric pressure decreases with height (altitude), and in space, the atmospheric pressure is almost zero. Similarly, molar density decreases with height, and in space, is almost zero. So, in general, we would expect molar density to decrease as the pressure decreases. This is what we found. However, there were several surprising results.
Figure 10 shows the molar density plots calculated from the measurements of seven different weather balloons launched over a period of 4 days from Albany, New York (USA) in May 2011.
Since atmospheric pressure decreases as we go up in the atmosphere, in these plots, the balloon height increases as we go from right to left. That is, the ground level corresponds to the right hand side of the plot and the left hand side corresponds to the upper atmosphere. The three different regions are discussed in the text below.
There are several important things to note about these plots:
- The measurements from all seven of the weather balloons show the same three atmospheric regions (labelled Regions 1-3 in the figure).
- For Regions 1 and 2, the molar density plots calculated from all of the balloons are almost identical, i.e., the dots from all seven balloons fall on top of each other.
- In contrast, the behaviour of Region 3 does change a bit from balloon to balloon, i.e., the dots from the different balloons don’t always overlap.
- The transition between Regions 1 and 2 corresponds to the transition between the troposphere and the tropopause. This suggests that something unusual is happening to the air at the start of the tropopause.
- There is no change in behaviour between the tropopause and the stratosphere, i.e., when you look at Region 1, you can’t easily tell when the tropopause “ends” and when the stratosphere “begins”. This suggests that the tropopause and stratosphere regions are not two separate “regions”, but are actually both part of the same region.
When we analysed the atmospheric water concentration measurements for the balloons, we found that the different slopes in Region 3 depended on how humid the air in the region was, and whether or not the balloon was travelling through any clouds or rain.
On this basis, we suggest that the Region 3 variations are mostly water-related. Indeed, Region 3 corresponds to the boundary layer part of the troposphere, which is generally the wettest part of the atmosphere.
What about Regions 1 and 2? The change in behaviour of the plots between Regions 1 and 2 is so pronounced that it suggests that some major change in the atmosphere occurs at this point.
In Paper 2, we propose that this change is due to some of the oxygen and/or nitrogen in the air joining together to form molecular “clusters” or “multimers”. We will discuss this theory in Section 5.
For now, it is sufficient to note that the troposphere-tropopause transition corresponds to some sort of “phase change”. In Paper 1, we refer to the air in the troposphere as being in the “light phase”, and the air in the tropopause/stratosphere regions as being in the “heavy phase”.
When we analyse weather balloon measurements from other locations (and seasons), the same general features occur.
However, there are some differences, which we illustrate schematically in Figure 11. In tropical locations, the heavy phase/light phase transition occurs much higher in the atmosphere (i.e., at a lower pressure). In contrast, in the Arctic, the heavy phase/light phase change occurs much lower in the atmosphere (i.e., at a higher pressure). This is in keeping with the fact that the height of the tropopause is much higher in the tropics than at the poles (as we saw in Figure 2).
One thing that is remarkably consistent for all of the weather balloon measurements we analysed is that in each of the regions, the change of molar density with pressure is very linear. Another thing is that the change in slope of the lines between Regions 1 and 2 is very sharp and distinct.
Interestingly, on very cold days in the Arctic winter, we often find the slopes of the molar density plots near ground level (i.e., Region 3) are similar to the slope of the “heavy phase” (schematically illustrated in Figure 11).
The air at ground level is very dry under these conditions, because it is so cold. So, this is unlikely to be a water-related phenomenon. Instead, we suggest that it is because the temperatures at ground level in the Arctic winter are cold enough to cause something similar to the phase change which occurs at the tropopause.
At this stage you might be thinking: “Well, that’s interesting what you discovered… But, what does this have to do with the temperature profiles?”
Well, since we calculated the molar densities from the temperature and pressure measurements of the balloons, we can also convert molar densities back into temperature values. Since we found that the relationship between molar density and pressure was almost linear in each region, we decided to calculate what the temperatures would be if the relationship was exactly linear. For the measurements of each weather balloon, we calculated the best linear fit for each of the regions (using a statistical technique known as “ordinary least squares linear regression”). We then converted these linear fits back into temperature estimates for each of the pressures measured by the balloons.
In Figure 12, we compare the original balloon measurements for one of the Albany, New York balloons to our linear fitted estimates.
Because pressure decreases with height, we have arranged the graphs in this essay so that the lowest pressures (i.e., the stratosphere) are at the top and the highest pressures (i.e., ground level) are at the bottom.
The black dots correspond to the actual balloon measurements, while the two dashed lines correspond to the two temperature fits.
We find the fits to be remarkable. Aside from some deviations in the boundary layer (at around 90kPa) which are associated with a rain event, the measurements fall almost exactly onto the blue “light phase” curve all the way up to the tropopause, i.e., 20kPa. In the tropopause and stratosphere, the measurements fall almost exactly onto the red “heavy phase” curve.
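For readers who want to see how such fits can be constructed, here is a minimal sketch, assuming we already have one balloon's pressures (in Pa) and temperatures (in K) and have chosen a trial cut-off pressure between the light and heavy phases; the numbers below are placeholders, not the values used in the papers:

```python
import numpy as np

R = 8.314  # J/(K*mol)

# Placeholder arrays: pressure (Pa) and temperature (K) for one sounding,
# ordered from ground level (high pressure) upwards. Real values come from the archive.
pressure = np.array([100000., 80000., 60000., 40000., 20000., 10000., 5000.])
temperature = np.array([288., 275., 260., 240., 217., 218., 221.])

molar_density = pressure / (R * temperature)

# Assume (for illustration) that the light/heavy phase change sits at 20 kPa.
light = pressure >= 20000.0   # troposphere
heavy = pressure <= 20000.0   # tropopause/stratosphere

# Ordinary least squares fit of molar density against pressure in each region.
slope_l, intercept_l = np.polyfit(pressure[light], molar_density[light], 1)
slope_h, intercept_h = np.polyfit(pressure[heavy], molar_density[heavy], 1)

# Convert the linear fits back into temperature estimates: T = P / (R * D_fit).
t_fit_light = pressure[light] / (R * (slope_l * pressure[light] + intercept_l))
t_fit_heavy = pressure[heavy] / (R * (slope_h * pressure[heavy] + intercept_h))
print(t_fit_light, t_fit_heavy)
```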
We found similarly strong fits for all of the balloons we applied this approach to. The exact values we used for the fits varied from balloon to balloon, but in all cases the balloon measurements could be fitted using just two (or sometimes three) phases.
Examples of balloons which needed to be fitted with three phases were those launched during the winter in the Arctic region. For instance, Figure 13 shows the balloon measurements and fits for a balloon launched from Norman Wells, Northwest Territories (Canada) in December 2010.
Again, the matches between the experimental data and our linear fits are very good.
For these balloons, the slope of the molar density plots for the region near the ground level (Region 3) is very similar to the slope of the heavy phase in Region 1. This is in keeping with our earlier suggestion that the air near ground level for these cold Arctic winter conditions is indeed in the heavy phase.
For us, one of the most fascinating findings of this analysis is that the atmospheric temperature profiles from the boundary layer to the middle of the stratosphere can be so well described in terms of just two or three distinct regions, each of which has an almost linear relationship between molar density and pressure.
As we saw in Section 3, the greenhouse effect theory predicts that infrared-active gases lead to complicated infrared cooling rates which should be different at each height (e.g., the one in Figure 9). According to the theory, infrared-active gases partition the energy in the atmosphere in such a way that the atmospheric energy at each height is different.
This means that we should be finding a very complex temperature profile, which is strongly dependent on the infrared-active gases. Instead, we found the temperature profile was completely independent of the infrared-active gases.
This is quite a shocking result. The man-made global warming theory assumes that increasing carbon dioxide (CO2) concentrations will cause global warming by increasing the greenhouse effect. So, if there is no greenhouse effect, this also disproves the man-made global warming theory.
5. Paper 2: Multimerization of atmospheric gases above the troposphere
In this paper, we investigated what could be responsible for the phase change we identified in Paper 1. We suggest that it is due to the partial “multimerization” of oxygen and/or nitrogen molecules in the atmosphere above the troposphere.
This explanation has several important implications for our current understanding of the physics of the Earth’s atmosphere, and for weather prediction:
- It provides a more satisfactory explanation for why temperatures stop decreasing with height in the tropopause, and why they start increasing with height in the stratosphere
- It reveals a new mechanism to explain why ozone forms in the ozone layer. This new mechanism suggests that the ozone layer can expand or contract much quicker than had previously been thought
- It offers a new explanation for how and why the jet streams form
- It also explains why tropical cyclones form, and provides new insights into why high and low pressure weather systems occur
In Paper 2, we decided to investigate what could be responsible for the phase change we identified in Paper 1. We suggest that it is due to the partial “multimerization” of oxygen and/or nitrogen molecules in the atmosphere above the troposphere. We will explain what we mean by this later, but first, we felt it was important to find out more information about how the conditions for this phase change vary with latitude and season.
Variation of phase change conditions
We downloaded weather balloon data from the Integrated Global Radiosonde Archive (IGRA), which is maintained by the NOAA National Climatic Data Center. The IGRA dataset contains weather balloon records from 1,109 weather stations located on all the continents – see Figure 14.
As each of the weather stations launches between 1 and 4 balloons per day, and has an average of about 36 years worth of data, this makes for a lot of data. To analyse all this data, we wrote a number of computer scripts, using the Python programming language.
Our scripts systematically analysed all of the available weather balloon records to identify the pressure and temperature at which the phase change occurred, i.e., the transition between Region 1 and Region 2.
If there wasn’t enough data for our script to calculate the change point, we skipped that balloon, e.g., some of the balloons burst before reaching the stratosphere. However, we were able to identify the transition for most of the balloons.
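As an illustration only, a phase-change point could be located along the following lines, by trying each candidate split of the profile and keeping the one that minimises the combined error of two straight-line fits; this is a simplified stand-in, not the exact algorithm used in the paper:

```python
import numpy as np

def find_phase_change(pressure, molar_density, min_points=5):
    """Return the index splitting the profile into two linear regions.

    Tries every candidate split and keeps the one whose two separate
    straight-line fits (molar density vs pressure) give the smallest
    total squared residual. Illustrative only.
    """
    best_index, best_error = None, np.inf
    for i in range(min_points, len(pressure) - min_points):
        err = 0.0
        for sl in (slice(None, i), slice(i, None)):
            coeffs = np.polyfit(pressure[sl], molar_density[sl], 1)
            resid = molar_density[sl] - np.polyval(coeffs, pressure[sl])
            err += np.sum(resid ** 2)
        if err < best_error:
            best_index, best_error = i, err
    return best_index

# Usage (with arrays like those in the previous sketch):
# idx = find_phase_change(pressure, molar_density)
# phase_change_pressure, phase_change_temperature = pressure[idx], temperature[idx]
```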
Below are the plots for all of the weather balloons launched in 2012 from one of the stations – Valentia Observatory, Ireland. The black dashed lines correspond to the phase change for that balloon.
In all, our scripts identified the phase change conditions for more than 13 million weather balloons.
We decided to group the stations into twelve different latitudinal bands (see Figure 14). Then, for each of the bands, we calculated the average phase change conditions for each month. Figure 15 shows the seasonal variations for three of the twelve latitudinal bands.
In Paper 2, we present the data for all twelve bands, and discuss the main features of the seasonal changes in some detail. However, for the purpose of this essay, it is sufficient to note the following features:
- Each latitudinal band has different patterns.
- All bands have very well defined annual cycles, i.e., every year the phase change conditions for each band go through clear seasonal cycles.
- For some areas, and at some times of the year, the temperature and pressure conditions change in sync with each other, i.e., they both increase and decrease at the same time. At other times and places, the temperature and pressure changes are out of sync with each other.
In Section 4, we saw that the phase change conditions are directly related to the atmospheric temperature profiles.
This means that if we can figure out the exact reasons why the phase change conditions vary as they do with season and latitude, this should also provide us with insight into entire temperature profiles.
If we could do this, this could help meteorologists to make dramatically better weather predictions. So, in our paper, we discuss several interesting ideas for future research into understanding how and why the phase change conditions vary.
Multimerization of the air
At any rate, it seems likely to us that some major and abrupt change in the atmospheric composition and/or molecular structure is occurring at the tropopause.
However, measurements of the atmospheric composition don’t show any major change associated with the troposphere/tropopause transition. Both above and below the phase change, the atmosphere is 78% nitrogen, 21% oxygen and 1% argon.
Instead, we suggest that the phase change involves a change in the molecular structure of at least some of the air molecules. Although argon might be involved, it only comprises 1% of the atmosphere, so we will focus here on the oxygen and nitrogen molecules, which make up 99% of the atmosphere near the phase change.
As can be seen in Figure 16, oxygen and nitrogen molecules are both “diatomic”, i.e., each molecule contains two atoms.
We suggest that, once the phase change conditions occur, some of these diatomic molecules begin clustering together to form “molecular clusters” or “multimers”. We illustrate this schematically in Figure 17.
Below the tropopause, all of the oxygen is the conventional diatomic oxygen that people are familiar with. Similarly, all of the nitrogen is diatomic. However, above the tropopause, some of these air molecules coalesce into large multimers.
Multimers take up less space per molecule than monomers. This reduces the molar density of the air. This explains why the molar density decreases more rapidly in Region 1 than in Region 2 (e.g., Figure 10).
It also has several other interesting implications…
Why temperature increases with height in the stratosphere
The current explanation for why temperatures stay constant with height in the tropopause and increase with height in the stratosphere is that ozone heats up the air in the ozone layer by absorbing ultraviolet light. However, as we discussed in Section 3, there are major problems with this explanation.
Fortunately, multimerization offers an explanation which better fits the data. We saw in Section 4 that the temperature behaviour in both the tropopause and stratosphere is very well described by our linear molar density fit for the “heavy phase” (have a look back at Figures 12 and 13, in case you’ve forgotten).
This suggests that the changes in temperature behaviour above the troposphere are a direct result of the phase change. So, if the phase change is due to multimerization, as we suggest, then the change in temperature behaviour is a consequence of multimerization.
Why would multimerization cause the temperature to increase with height?
Do you remember from Section 3 how we were saying there are four different types of energy that the air molecules have, i.e., thermal, latent, potential and kinetic?
Well, in general, the amount of energy that a molecule can store as latent energy decreases as the molecule gets bigger.
This means that when oxygen and/or nitrogen molecules join together to form larger multimer molecules, the average amount of latent energy they can store will decrease.
However, due to the law of conservation of energy, the total energy of the molecules has to remain constant. So, as we discussed in Section 3, if the latent energy of the molecules has to decrease, one of the other types of energy must increase to compensate.
In this case, the average thermal energy of the molecules increases, i.e., the temperature increases!
Changes in the ozone layer
The conventional explanation for how ozone is formed in the ozone layer is the Chapman mechanism, named after Sydney Chapman who proposed it in 1930.
Ozone, like the ordinary oxygen molecules in the air, is made up only of oxygen atoms. However, unlike regular diatomic oxygen, ozone is triatomic (O3). This is quite an unusual structure to form, and when the ozone layer was discovered, scientists couldn’t figure out how and why it formed there.
Chapman suggested that ultraviolet light would occasionally be powerful enough to overcome the chemical bond in an oxygen molecule, and split the diatomic molecule into two oxygen atoms.
Oxygen atoms are very unstable. So, Chapman proposed that as soon as one of these oxygen atoms (“free radicals”) collided with an ordinary diatomic oxygen molecule, they would react together to form a single triatomic ozone molecule (Figure 18).
This Chapman mechanism would require a lot of energy to take place, and so it was assumed that it would take several months for the ozone layer to form. But, nobody was able to come up with an alternative mechanism that could explain the ozone layer.
However, if multimerization is occurring in the tropopause/stratosphere, then this opens up an alternative mechanism.
We suggest that most of the ozone in the ozone layer is actually formed by the splitting up of oxygen multimers! We illustrate this mechanism in the schematic in Figure 19.
As in the Chapman mechanism, ultraviolet light can sometimes provide enough energy to break chemical bonds. However, because there are a lot more oxygen atoms in an oxygen multimer than in a regular diatomic oxygen molecule, the ultraviolet light doesn’t have to split the oxygen into individual atoms. Instead, it can split the multimer directly into ozone and oxygen molecules. This doesn’t require as much energy.
To test this theory, we decided to see if there was any relationship between the concentration of ozone in the ozone layer, and the phase change conditions.
We downloaded from the NASA Goddard Space Flight Center’s website all of the available monthly averaged ozone measurements from the NASA Total Ozone Mapping Spectrometer (TOMS) satellite (August 1996-November 2005). We then averaged together the monthly values for the same twelve latitudinal bands we used for our weather balloons.
When we compared the seasonal variations in ozone concentrations for each band to the seasonal variations in the phase change conditions, we found that the two were highly correlated! For instance, Figure 20 compares the average monthly pressure of the phase change to the average monthly ozone concentrations for the 45-60°N band.
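A comparison like the one in Figure 20 boils down to correlating two monthly series; the sketch below shows the idea, with placeholder numbers standing in for the real TOMS and balloon-derived values:

```python
import numpy as np

# Placeholder monthly series for one latitudinal band (12 values each).
# In the paper these are the band-averaged TOMS ozone concentrations and the
# band-averaged phase-change pressures from the weather balloon analysis.
ozone = np.array([360, 370, 380, 365, 340, 320, 305, 300, 305, 315, 330, 350], dtype=float)
phase_change_pressure = np.array([230, 235, 240, 232, 220, 210, 202, 200, 203, 208, 215, 225], dtype=float)

# Pearson correlation coefficient between the two seasonal cycles.
r = np.corrcoef(ozone, phase_change_pressure)[0, 1]
print(f"correlation coefficient: {r:.2f}")
```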
If ozone were mainly formed by the conventional Chapman mechanism, there would be no reason why the ozone concentrations should be correlated with the phase change conditions. However, if the ozone is being formed by our proposed mechanism, then this correlation makes sense.
To us this indicates that most of the ozone in the ozone layer is formed from oxygen multimers, and not by the Chapman mechanism, as has been assumed until now.
It also suggests that we have seriously underestimated the rates at which the ozone layer expands and contracts. Figure 20 shows how the thickness of the ozone layer is strongly correlated to the phase change conditions.
But, these phase change conditions change dramatically from month to month. This means that ozone is formed and destroyed in less than a month. This is much quicker than had been previously believed.
New explanation for the jet streams
When we wrote our scripts to analyse the temperatures and pressures of the phase change conditions, we also looked at the average wind speeds measured by the weather balloons. You might have noticed in the video we showed earlier of the Valentia Observatory phase changes for 2012 that the bottom panels showed the average wind speeds recorded by each balloon.
We noticed an interesting phenomenon. At a lot of weather stations, very high wind speeds often occurred near the phase change. When the pressure at which the phase change occurred increased or decreased, the location of these high wind speeds would also rise or fall in parallel.
This suggested to us that the two phenomena were related. So, we decided to investigate. On closer inspection, we noticed that the weather stations we were detecting high wind speeds for were located in parts of the world where the jet streams occur.
The jet streams are narrow bands of the atmosphere near the tropopause in which winds blow rapidly in a roughly west to east direction (Figure 21). It turns out that the high wind speeds we were detecting were the jet streams!
But, these high winds seemed to be strongly correlated to the phase change conditions. This suggested to us that multimerization might be involved in the formation of the jet streams.
Why should multimerization cause high wind speeds?
Well, as we mentioned earlier, when multimers form they take up less space than regular air molecules, i.e., the molar density decreases.
So, if multimers rapidly form in one part of the atmosphere, the average molar density will rapidly decrease. This would reduce the air pressure. In effect, it would form a partial “vacuum”. This would cause the surrounding air to rush in to bring the air pressure back to normal. In other words, it would generate an inward wind.
Similarly, if multimers rapidly break down, the average molar density will rapidly increase, causing the air to rush out to the sides. That is, it would generate an outward wind.
We suggest that the jet streams form in regions where the amount of multimerization is rapidly increasing or decreasing.
New explanation for tropical cyclones
Our analysis also offers a new explanation for why tropical cyclones (hurricanes, typhoons, etc.) form. Tropical cyclones form and exist in regions where there is no jet stream.
We suggest cyclones occur when the “vacuum” formed by multimerization is filled by “sucking” air up from below, rather than sucking from the sides as happens with the jet streams. This reduces the atmospheric pressure at sea level, leading to what is known as “cyclonic behaviour”.
Similarly, if the amount of multimers rapidly decreases, this can “push” the air downwards leading to an increase in the atmospheric pressure at sea level, causing “anti-cyclonic behaviour”.
Meteorologists use the term “cyclone” to refer to any low-pressure system, not just the more dangerous tropical cyclones. But, if an ordinary cyclone forms over a warm ocean, then the cyclone can suck up some of the warm water high into the atmosphere. This water freezes when it gets up high, releasing energy, and making the cyclone even stronger.
It is this extra energy, released as the warm water freezes, which turns an ordinary cyclone into a powerful tropical cyclone. This part was already known from the standard explanation for how tropical cyclones are formed.
However, until now, it had been assumed that tropical cyclones were formed at sea level. We suggest that the initial cyclone which leads to the more powerful tropical cyclone is actually formed much higher, i.e., at the tropopause, and that it is a result of multimerization.
By the way, when water drains down a sink, it often leaves in a whirlpool pattern. In the same way, if multimerization causes air to be sucked up to the tropopause from the surface, it might be sucked up in a whirlpool manner. This would explain why, in satellite photographs of the cloud structures of tropical cyclones, the clouds usually have a whirlpool-like structure, as in Figure 22.
We hope that this new way of looking at tropical cyclones will allow meteorologists to make better and more accurate predictions of hurricanes, typhoons and other tropical cyclones.
It might also help us to better understand why high pressure and low pressure weather systems (Figure 23) develop and dissipate. Much of the day-to-day job of meteorologists involves interpreting and predicting how these weather systems vary from day to day, and hour to hour. So, if rapid changes in the phase change conditions play a role in forming high and low pressure areas, then studying this could provide us with more accurate weather predictions.
6. Paper 3: Pervective power
In this paper, we identified an energy transmission mechanism that occurs in the atmosphere, but which up until now seems to have been overlooked. We call this mechanism “pervection”.
Pervection involves the transmission of energy through the atmosphere, without the atmosphere itself moving. In this sense it is a bit like conduction, except conduction transmits thermal energy (“heat”), while pervection transmits mechanical energy (“work”).
We carried out laboratory experiments to measure the rates of energy transmission by pervection in the atmosphere. We found that pervective transmission can be much faster than the previously known mechanisms, i.e., conduction, convection and radiation.
This explains why we found in Papers 1 and 2 that the atmosphere is in complete energy equilibrium over distances of hundreds of kilometres, and not just in local energy equilibrium, as is assumed by the greenhouse effect theory.
In Section 3, we explained that a fundamental assumption of the greenhouse effect theory is that the atmosphere is only in local energy equilibrium. But, our results in Papers 1 and 2 suggested that the atmosphere was effectively in complete energy equilibrium – at least over the distances from the bottom of the troposphere to the top of the stratosphere. Otherwise, we wouldn’t have been able to fit the temperature profiles with just two or three parameters.
If the atmosphere is in energy equilibrium, then this would explain why the greenhouse effect theory doesn’t work.
However, when we consider the conventional energy transmission mechanisms usually assumed to be possible, they are just not fast enough to keep the atmosphere in complete energy equilibrium.
So, in Paper 3, we decided to see if there might be some other energy transmission mechanism which had been overlooked. Indeed, it turns out that there is such a mechanism. As we will see below, it seems to be rapid enough to keep the atmosphere in complete energy equilibrium over distances of hundreds of kilometres. In other words, it can explain why the greenhouse effect theory is wrong!
We call this previously unidentified energy transmission mechanism “pervection”, to contrast it with convection.
There are three conventional energy transmission mechanisms that are usually considered in atmospheric physics:
Radiation is the name used to describe energy transmission via light. Light can travel through a vacuum, and doesn’t need a mass to travel, e.g., the sunlight reaching the Earth travels through space from the Sun.
However, the other two mechanisms need a mass in order to work.
In convection, energy is transported by mass transfer. When energetic particles are transported from one place to another, the particles bring their extra energy with them, i.e., the energy is transported with the travelling particles. This is convection.
There are different types of convection, depending on the types of energy the moving particles have. If the moving particles have a lot of thermal energy, then this is called thermal convection. If you turn on an electric heater in a cold room, most of the heat will move around the room by thermal convection.
Similarly, if the moving particles have a lot of kinetic energy, this is called kinetic convection. When a strong wind blows, this transfers a lot of energy, even if the wind is at the same temperature as the rest of the air.
Conduction is a different mechanism in that energy can be transmitted through a mass without the mass itself moving. If a substance is a good conductor, then it can rapidly transfer thermal energy from one side of the substance to another.
If one side of a substance is hotter than the other, then conduction can redistribute the thermal energy, so that all of the substance reaches the same temperature. However, conduction is only able to transfer thermal energy.
Since air is quite a poor conductor, conduction is not a particularly important energy transmission mechanism for the atmosphere.
For this reason, the current climate models only consider convection and radiation for describing energy transport in the atmosphere. But, could there be another energy transmission mechanism the models are leaving out?
We realised there was. Consider the Newton’s cradle shown in Figure 24.
When you lift the ball on the left into the air and release it, you are providing it with mechanical energy, which causes it to rapidly swing back to the other balls.
When it hits the other balls, it transfers that energy on. But, then it stops. After a very brief moment, the ball on the other side of the cradle gets that energy, and it flies up out of the cradle.
Clearly, energy has been transmitted from one side of the cradle to the other. However, it wasn’t transmitted by convection, because the ball which originally had the extra energy stopped once it hit the other balls.
It wasn’t conduction, either, because the energy that was being transmitted was mechanical energy, not thermal energy.
In other words, mechanical energy can be transmitted through a mass. This mechanism for energy transmission is not considered in the current climate models. This is the mechanism that we call pervection.
Since nobody seems to have considered this mechanism before, we decided to carry out laboratory experiments to try and measure how quickly energy could be transmitted through air by pervection.
Figure 25 shows the experimental setup we used for these experiments.
In our experiment we connected two graduated cylinders with a narrow air tube that was roughly 100m long. We then placed the two cylinders upside down in water (which we had coloured green to make it easier to see). We also put a second air tube into the graduated cylinder on the left, and we used this tube to suck some of the air out of the cylinders. This raised the water levels to the heights shown in Figure 25. Then we connected the second tube to a syringe.
Figure 26 shows the basic idea behind the experiment. We used the syringe to push a volume of air into the air gap at the top of the cylinder on the left.
This caused the air gap in the left cylinder to expand, pushing the water level down, i.e., it increased the mechanical energy of the air in the air gap. However, over the next 10-20 seconds, two things happened. The water level in the left cylinder started rising again and the water level in the cylinder on the right started to fall.
19 seconds after the initial injection, the water levels in both sides had stopped moving, and had reached a new equilibrium.
There are several interesting points to note about this:
- Some of the mechanical energy transferred to the cylinder on the left was transmitted to the cylinder on the right
- This energy transmission was not instantaneous
- But, it was quite fast, i.e., it had finished after 19 seconds
What does this mean? Well, mechanical energy was somehow transmitted from the cylinder on the left to the one on the right.
This energy was transmitted through the air in the 100m tube that connects the two cylinders.
Since we are looking at energy transmission through air, we are considering the same energy transmission mechanisms that apply to the atmosphere.
Could the energy have been transmitted by conduction? No. First, it was mechanical energy which was transmitted, not thermal energy. And second, air is too poor a conductor.
Could the energy have been transmitted by radiation? No. Again, radiation is a mechanism for transmitting thermal energy, not mechanical energy. But in addition, radiation travels in straight lines. If you look at the setup in Figure 25, you can see that we had wrapped the 100m air tube in multiple loops in order to fit it into a storage box. So, the energy wouldn’t be able to travel all the way down the tube by radiation.
The only remaining conventional energy transmission mechanism is convection. However, the air was moving far too slowly for the energy to reach the cylinder on the right by the air being physically moved from one cylinder to the other.
When we calculated the maximum speed the air could have been moving through the 100m tube, it turned out that it would take more than an hour for the energy to be transmitted by convection. Since the energy transmission took less than 19 seconds, it wasn’t by convection!
That leaves pervection.
You can watch the video of our experiment below. The experiment is 5 minutes long, and consists of five cycles. We alternated between pushing and pulling the syringe every 30 seconds.
In the paper, we estimate that pervection might be able to transmit energy at speeds close to 40 metres per second.
Since the distance from the bottom of the troposphere to the top of the stratosphere is only about 50 km, that means it should only take about 20 minutes for energy to be transmitted between the troposphere and stratosphere. This should be fast enough to keep the troposphere, tropopause and stratosphere in complete energy equilibrium, i.e., it explains why the greenhouse effect theory doesn’t work.
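That travel-time estimate is simple arithmetic, which can be checked directly:

```python
# Rough check of the pervection travel-time estimate quoted above.
speed = 40.0          # m/s, estimated pervective transmission speed
distance = 50_000.0   # m, bottom of troposphere to top of stratosphere (~50 km)

travel_time_minutes = distance / speed / 60.0
print(f"~{travel_time_minutes:.0f} minutes")   # ~21 minutes
```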
7. Applying the scientific method to the greenhouse effect theory
If a physical theory is to be of any practical use, then it should be able to make physical predictions that can be experimentally tested. After all, if none of the theory’s predictions can actually be tested, then what is the point? The late philosopher of science, Karl Popper, described this as the concept of “falsifiability”. He reckoned that, for a theory to be scientific, it must be possible to construct an experiment which could potentially disprove the theory.
There seems to be a popular perception that the greenhouse effect and man-made global warming theories cannot be tested because “we only have one Earth”, and so, unless we use computer models, we cannot test what the Earth would be like if it had a different history of infrared-active gas concentrations. For instance, the 2007 IPCC reports argue that:
“A characteristic of Earth sciences is that Earth scientists are unable to perform controlled experiments on the planet as a whole and then observe the results.” – IPCC, Working Group 1, 4th Assessment Report, Section 1.2
To us, this seems a defeatist approach – it means saying that those theories are non-falsifiable, and can’t be tested. This is simply not true. As we said above, if a physical theory is to be of any use, then it should be able to make testable physical predictions. And by predictions, we mean “predictions” about what is happening now. If a scientist can’t test their predictions for decades or even centuries, then that’s a long time to be sitting around with nothing to do!
Instead, a scientist should use their theories to make predictions about what the results of experiments will be, and then carry out those experiments. So, we wondered what physical predictions the greenhouse effect theory implied, which could be tested… now! It turns out that there are fundamental predictions and assumptions of the theory which can be tested.
For instance, we saw in Section 3 that the theory predicts that the temperatures of the atmosphere at each altitude are related to the amount of infrared-active gases at that altitude. It also predicts that the greenhouse effect partitions the energy in the atmosphere in such a way that temperatures in the troposphere are warmer than they would be otherwise, and temperatures above the troposphere are colder than they would be otherwise.
However, our new approach shows that this is not happening! In Paper 1, we showed that the actual temperature profiles can be simply described in terms of just two or three linear regimes (in terms of molar density). In Paper 2, we proposed a mechanism to explain why there is more than one linear regime.
The greenhouse effect theory explicitly relies on the assumption that the air is only in local energy equilibrium. Otherwise, the predicted partitioning of the energy into different atmospheric layers couldn’t happen. But, our analysis shows that the atmosphere is actually in complete energy equilibrium, at least over distances of the tens of kilometres covered by the weather balloons. In Paper 3, we identified a previously-overlooked energy transmission mechanism that could explain why this is the case.
In other words, the experimental data shows that one of the key assumptions of the greenhouse effect theory is wrong, and two of its predictions are false. To us, that indicates that the theory is wrong, using a similar logic to that used by the late American physicist and Nobel laureate, Dr. Richard Feynman, in his excellent 1-minute summary of the scientific method.
Man-made global warming theory predicts that increasing the atmospheric concentration of carbon dioxide (CO2) will cause global warming (in the troposphere) and stratospheric cooling, by increasing the strength of the greenhouse effect. But, our analysis shows that there is no greenhouse effect! This means that man-made global warming theory is also wrong.
It is often said that the greenhouse effect and man-made global warming theories are “simple physics”, and that increasing the concentration of carbon dioxide in the atmosphere must cause global warming.
It can be intimidating to question something that is claimed so definitively to be “simple”. Like the story about the “Emperor’s New Clothes”, most of us don’t want to acknowledge that we have problems with something that everyone is telling us is “simple”, for fear that we will look stupid.
Nonetheless, we found some of the assumptions and predictions of the theory to be questionable, and we have no difficulty in asking questions about things we are unsure about:
He who asks a question is a fool for five minutes; he who does not ask a question remains a fool forever. – old Chinese proverb
So, we decided to look carefully at the theory to test its reliability. When we looked in detail at the so-called “simple physics”, we found that it was actually “simplistic physics”.
Our experimental results show that the theory was just plain wrong!
Remarkably, nobody seems to have actually checked experimentally to see if the greenhouse effect theory was correct. It is true that the greenhouse effect theory is based on experimental observations, e.g., a) the different infra-red properties of the atmospheric gases; b) the infra-red nature of the Earth’s outgoing radiation and c) the observation that fossil fuel usage is increasing the concentration of carbon dioxide in the atmosphere.
However, being based on experimentally-verified results is not the same thing as being actually experimentally verified.
At any rate, it turns out that the concentration of infrared-active gases in the atmosphere has no effect on the temperature profile of the atmosphere. So, doubling, trebling or quadrupling the concentration of infrared-active gases, e.g., carbon dioxide, will make no difference to global temperatures – after all, if you “double” nothing, you still end up with nothing!
The current climate models predict that if we continue increasing the concentration of carbon dioxide in the atmosphere it will cause dramatic man-made global warming. On this basis, huge policy changes are being proposed/implemented in desperate attempts to urgently reduce our fossil fuel usage, in the hope that this will help us “avoid dangerous climate change”. For example, see the Stern Review (2006) or the Garnaut Climate Change Reviews (2008).
The different policies being introduced specifically to reduce our carbon dioxide emissions vary from international treaties, e.g., the Kyoto Protocol (2005), to national laws, e.g., the UK’s Climate Change Act, 2008, and even regional legislation e.g., California (USA)’s Global Warming Solutions Act, 2006.
Clearly, if the greenhouse effect theory is wrong, then man-made global warming theory is also wrong. The results of the current climate models which are based on the greenhouse effect theory are therefore invalid, and are inappropriate for basing policy on. So, the various policies to reduce our fossil fuel usage, specifically to “stop global warming”, which have been introduced (or are being planned) are no longer justified.
There has been so much confidence placed in the greenhouse effect theory, that most people seem to have thought that “the scientific debate is over”. We believe that our results show that the debate over the man-made global warming theory is indeed now “over”. The theory was just plain wrong.
There may be other reasons why we might want to reduce our fossil fuel usage, but global warming is not one of them.
The implications of our research for global warming are significant. However, for us, a more important result of our research is that we have identified several important insights into the physics of the atmosphere, which do not seem to have been noticed until now. These insights open up several new exciting avenues for future research, and in each of our papers we describe some possible research projects that we think could be informative.
These insights also have great significance for understanding the weather, and we suspect that they will lead to major improvements in weather prediction. We believe that more accurate and reliable weather predictions will be of tremendous benefit to society, in everything from people being able to make better day-to-day plans to improved agricultural planning to being better able to predict and cope with extreme weather disasters. So, we hope that our findings will be of use to meteorologists. |
noun, singular: somatic cell
The word “somatic” is derived from the Greek word soma, meaning “body”. Hence, all body cells of an organism – apart from the sperm and egg cells, the cells from which they arise (gametocytes) and undifferentiated stem cells – are somatic cells.
Examples of somatic cells are cells of internal organs, skin, bones, blood and connective tissues. Somatic cells contain a full set of chromosomes, whereas the reproductive cells contain only half.
Word origin: Gk sōmatikós = of or pertaining to the body.
Synonym: body cells.
Compare: sex cells. |
This project introduces advanced programming with sensors and can also be used to teach linear equations.
Create a musical instrument that uses a sensor to play tones.
Building: Beginner – no prior experience necessary
Programming: Medium – uses a mathematical formula to make sounds from sensor readings
LEGO MINDSTORMS EV3 kit
Write a program that:
- Takes a measurement from a sensor
- Uses data operations to scale the measurement
- Plays a tone using the scaled measurement as the frequency
- Uses a loop to repeat the program (a sketch of one possible implementation follows below)
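One possible implementation of these steps, sketched in Python for an EV3 running the ev3dev environment and assuming an ultrasonic distance sensor, is shown below; the slope and intercept values are arbitrary examples to tune for your own instrument, and the same logic can be built with the standard EV3 software's sensor, math and sound blocks:

```python
#!/usr/bin/env python3
# Sketch of the "musical instrument" program (ev3dev assumed, ultrasonic sensor assumed).
from ev3dev2.sensor.lego import UltrasonicSensor
from ev3dev2.sound import Sound

sensor = UltrasonicSensor()
speaker = Sound()

# Linear equation mapping distance (cm) to tone frequency (Hz): f = M*d + B
M = 15.0    # Hz per cm (slope) - example value, tune to taste
B = 200.0   # Hz offset (intercept) - example value

while True:
    distance_cm = sensor.distance_centimeters   # 1. take a measurement
    frequency = M * distance_cm + B             # 2. scale it with the linear formula
    speaker.play_tone(frequency, 0.2)           # 3. play the scaled tone for 0.2 s
                                                # 4. the while-loop repeats the program
```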
Many parents tend to underestimate gum disease in young children. This is because they know that their child will get their adult teeth in a few years. However, healthy baby teeth are essential for healthy adult teeth, which is why we recommend teaching your children about proper oral hygiene from an early age.
The Importance of Baby Teeth at An Early Age
Baby teeth are present in the gum before birth and start to emerge between six and 10 months of age. The last of the baby teeth erupt from the gums between two and three years of age. These teeth are vitally important for chewing and speaking. They also provide the path for the adult teeth.
When good pediatric oral hygiene isn’t maintained, it can lead to the need for tooth extractions. When this happens, the other teeth may shift in the mouth. This can cause the adult teeth to emerge crooked, crowded, or gapped, and can eventually lead to braces later on. Thankfully, the need for braces and teeth extractions can be reduced with good oral hygiene starting at an early age.
Helping Your Child Prevent Gum Disease
Good oral hygiene starts directly after birth with the parents wiping down their new baby’s gums with a soft, damp cloth or piece of gauze after every bottle feeding. This is to prevent bacteria buildup in the mouth. When the first teeth start to appear, they must also be wiped down or cleaned with a soft-bristled baby toothbrush. When the child has two teeth next to each other, it’s time to start gently flossing between them. If you are not sure what types of products to use to clean your baby’s gums and teeth, Dr. Lindsay can make recommendations.
Once teeth are in the mouth, parents should start brushing and flossing their child’s teeth at least twice a day. Begin by letting the child hold the handle and try to brush their own teeth. This lets them feel the movement of the toothbrush. Remember to always brush with them as well. Giving your child water to swish and spit is not recommended, as they may drink it instead of spitting it out. As the child gets older, they will be able to start brushing their own teeth. By the age of six, they should have the dexterity to brush their own teeth and spit out excess toothpaste.
Supervising Your Children’s Brushing
Children will need close supervision while brushing their teeth until about the age of eight or nine. Children under the age of three should use less than a grain of rice size of fluoridated toothpaste. From ages three to five, a pea-sized amount of toothpaste that contains fluoride should be used. Parents can make brushing more fun by letting their child pick out their own toothbrush and toothpaste. Many children’s toothpaste comes in different flavors, like cherry, berry, and grape. Some electric children’s toothbrushes play music for two or three minutes, which can encourage your child to brush their teeth for the recommended amount of time.
Teeth Cleanings and Oral Health Checkups by Our Pediatric Dentist
It’s also a good idea to schedule regular dental checkups and teeth cleanings in order to ensure your child’s teeth and jaw are developing correctly. We recommend that all children have their first pediatric dental visit by the age of one, and yearly visits thereafter. Children who are at a higher risk of cavities and gum disease may need to be seen more often.
Around the age of seven, pediatric dentists will start evaluating your child’s teeth to determine whether or not braces will be needed in the future. If it looks like your child’s jaw may be too small to fit all the adult teeth or if your child is developing an under or overbite, we may recommend early intervention. This is done through orthodontic appliances, which can reduce or eliminate the need for braces as a teenager. Take care of your child’s oral health by scheduling a dental check-up and teeth cleaning with us at Vedra Pediatric Dentistry! |
Camera Basics
Although many cameras do not require you to set aperture and shutter controls, understanding how these controls work can help you shoot quality pictures.
When you take a picture, you "expose" a film or sensor to light. The two parts which work together to control your exposure are the APERTURE and SHUTTER. Some "Point and Shoot" cameras select these automatically, but more expensive digital cameras enable you to set these manually, or to "program" them for certain shooting conditions.
The aperture is an opening that changes in size to admit more or less light (similar to the iris of an eye). The numbers on the aperture control are called F-stops and referred to as F16, F11, F8, and so on.
Here's how it works:
The larger the F-stop number, the smaller the opening.
Each number higher lets in half as much light as one number lower.
For example, F5.6 admits twice as much light as F8, while F11 lets in only half as much.
The aperture doesn't work alone, however. The shutter speed is responsible for exposure, too. It controls the amount of time light is allowed to reach the film or sensor.
The shutter is a device that opens and closes at varying speeds to determine the amount of time the light entering the aperture is allowed to reach the film or sensor.
Shutter speed is measured in fractions of a second. 125 means 1/125 of a second, 60 means 1/60. Typical shutter speeds range from 1 second to 1/1000. A shutter speed setting for a bright, sunny day - using an aperture of F11 - might be 1/125 second. A cloudy day might use 1/60 second with the same aperture, exposing the film or sensor to light for a longer period of time.
The settings for a good exposure are determined by a light meter. (Most 35mm cameras have a built-in light meter that shows you the appropriate settings, or automatically controls them.)
Aperture and shutter settings work together. Because the shutter (like the aperture) approximately halves or doubles the light reaching the film or sensor with each change in setting, a number of different combinations of settings can result in the same exposure.
Any of several such combinations (for example, F5.6 at 1/125, F8 at 1/60, or F11 at 1/30) would result in approximately the same exposure.
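One way to see why such different-looking combinations are equivalent is to compare their exposure values, EV = log2(N²/t), where N is the f-number and t is the shutter time in seconds. A quick sketch using the settings mentioned in this article:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t): equal EV means (approximately) equal exposure."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Each step down in aperture is matched by a step up in shutter time,
# so the exposure values come out (almost) the same.
for f, t in [(5.6, 1/125), (8, 1/60), (11, 1/30)]:
    print(f"F{f} at 1/{round(1/t)} s -> EV {exposure_value(f, t):.1f}")
```

The small differences come from the nominal f-stop numbers being rounded values rather than exact powers of the square root of two.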
If all the settings result in the same exposure, why would you want to use F5.6 at 1/125 instead of F11 at 1/30? Two good reasons: By selecting the right combination for the situation you can control depth of field and motion blur. |
The International Day for Preventing the Exploitation of the Environment in War and Armed Conflict is an international day observed annually on November 6. On 5 November 2001, the UN General Assembly declared 6 November of each year as the International Day for Preventing the Exploitation of the Environment in War and Armed Conflict.
In times of war, the ecosystem is affected: water supplies are poisoned, forests are burnt, animals are killed, and so on. Though humanity has always counted its war casualties in terms of dead and wounded soldiers and civilians, destroyed cities and livelihoods, the environment has often remained the unpublicized victim of war. Water wells have been polluted, crops torched, forests cut down, soils poisoned, and animals killed to gain military advantage.
Afros, locks and naturals – all symbols of a powerful movement that began with the words of John S. Rock, an African-American abolitionist who was the first black person to be admitted to the bar of the Supreme Court of the United States. Rock paraphrased the term “Black is Beautiful” during a speech back in 1858.
Some 100 years later, African Americans would begin a movement illustrating Rock’s words, which encouraged men and women to stop straightening their hair and attempting to lighten or bleach their skin with creams.
The Black is Beautiful idea became most prominent through the writings of Steve Biko and the Black Consciousness Movement in South Africa. The movement aimed to dispel the notion in many world cultures that black people’s natural features, such as skin color, facial features and hair, are inherently ugly. It was meant to dispel the stigma of internalized racism, which caused such things as “paper bag parties” in the black community, where only those with a skin color lighter than a brown paper bag were permitted entry.
The idea of Black is Beautiful led to the Black nationalist and Uhuru movement of the 60’s and 70’s where the world would be introduced to legends like Malcolm X, Frantz Fanon, Marcus Garvey and Elijah Muhammad. By 1969, there were black characters on 21 primetime television shows. |
Between 60% and 70% of household waste is compostable material, in the form of food waste, garden materials, food-contaminated paper, tissues and hand towels.
Definition of compost
- A mixture of organic residues such as decomposed vegetation, manure etc, used as a fertiliser.
- Organic waste is derived from living organisms, both animal and plant. It may include garden and food wastes, manure, sewage and natural fibres such as paper, cotton etc.
- Compost is a natural soil conditioner made from organic waste. The material is from plant and animal origin, mixed together to hasten the process of decay. It is ready to use when the original material is unidentifiable. It has a dark brown, crumbly texture and pleasant earthy odour.
Way back when...the history of compost
One of the earliest references to compost appears in a set of clay tablets found in the Mesopotamian Valley, dating from about 2500 years ago. Other references occur in literature throughout medieval and modern times. In recent times, Sir Albert Howard researched and wrote extensively about compost, and is recognised as the father of modern organic farming.
Composting is something that has happened naturally ever since vegetation first covered the earth. The leftovers of one process became the inputs for growth and rejuvenation in another process. At some time in the distant past, humans noticed that plants grew better near piles of rotting vegetation and manure, and this information was passed on to future generations.
The importance of healthy soil
In 1937, US President, Franklin Roosevelt said: “The nation that destroys its soil, destroys itself.”
This sentiment is echoed in the motto of The New Zealand Soil and Health Association
Healthy Soil - Healthy Food - Healthy People
Oranga nuku - Oranga kai - Oranga tangata
The importance of soil organic matter is outlined in an article by the Food and Agriculture Organisation of the United Nations.
Over 80% of the agricultural soils in the developed world contain only 1-3% organic matter. For a good, strong soil to grow healthy food, it must contain at least 6-10% organic matter.
The Canterbury region has topsoil ranging from only 10cm deep to 40cm deep. We can return organic matter to the topsoil by composting, and thereby bring about all the benefits of using compost.
Digging up the facts about dirt
Dirt, dust, soil, topsoil, earth, sod, ground, turf…
The material we call soil is a complex mixture of eroded rock, mineral nutrients, decaying organic matter, water, air and billions of living organisms, most of them microscopic decomposers. Although soil is potentially a renewable resource, it is produced very, very slowly by the weathering of rock, the deposit of sediments by erosion and the decomposition of dead organisms. Soils develop and mature so slowly that it takes between 200 and 1000 years to develop 25mm of topsoil. According to a 1990 survey, topsoil is eroding faster than it is formed on over 1/3 of the world’s cropland.
Compost ticks all the boxes
- Improves soil quality
- Increases productivity
- Reduces water use
- Reduces pesticide and fertiliser use
- Reduces nutrient run-off
- Reduces soil erosion
- Supplies the calcium, phosphorus, lime, nitrogen and potassium needed for plant growth
- Improves drainage in heavy clay soil and conserves water in light sandy soil
- Keeps the soil cooler in summer and warmer in winter
- Increases aeration in compacted soil
- Promotes root growth and creates spaces in the soil for air and water
- Saves landfill space
- Reduces the need to transport your greenwaste to the dump
- Decreases the amount of water pollution, gas release and odour from landfill
Making black gold - how to compost
You can build your own compost bin; otherwise, buy commercial bins online or from hardware stores.
- Earthmaker's patented three-stage process uses traditional three-bin composting but stacked vertically so gravity does the hard work!
- Black bins, tumblers and other composting accessories are also available commercially.
Four ingredients compost recipe
For a well-functioning compost heap you will need:
- Micro-organisms: Compost is made by billions of microbes that digest the food you provide for them. All this microbial activity will make the compost heap hot. To function, the microbes need oxygen, water and food.
- Air: Create air tunnels in your heap by poking sticks through it, or by turning the pile, by breaking up any clumps of compost, or by adding compost worms, who will make their own air tunnels.
- Water: An ideal heap contains as much water as a wrung out sponge – damp and moist, not wet. A dry compost heap will die, as will one which is too wet, which makes the heap heavy and squashes the air out of it. If you are using kitchen as well as garden waste in your heap, there should be enough moisture in the leftover fruit, animal and vegetable materials to keep the heap balanced.
- Food: Compost needs both ‘brown’ (carbon) and ‘green’ (nitrogen) food to work properly. Browns (carbon) are generally dried materials like autumn leaves, straw, dead weeds, twigs and branches, wood chips, wood ash and sawdust and paper and paperboard products. Greens (nitrogen) are fresh plant materials like fruit and vegetable scraps, other food waste, grass cuttings, weeds and other plant matter.
Variety is the spice of life
Try mixing up your food ingredients, layering "brown" and "green" materials in a bin; a rough carbon-to-nitrogen sketch follows the list below.
- Food scraps
- Garden prunings, weeds and grass
- Vacuum dust – contains dust, hair, nails, skin cells…
- Pencil sharpenings
- Paper products
- Natural fabrics – wool, cotton, silk
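To get a feel for balancing browns and greens, here is a rough sketch that estimates the overall carbon-to-nitrogen (C:N) ratio of a mix. The C:N values used are typical published ballpark figures rather than measurements from this article, and the simple mass-weighted average is only a rough guide; a mix somewhere around 25-30:1 generally composts well.

# Rough estimate of a compost mix's carbon-to-nitrogen (C:N) ratio.
# The ratios are common ballpark figures (assumptions for illustration only).
TYPICAL_CN = {
    "food scraps": 15,
    "grass clippings": 20,
    "autumn leaves": 60,
    "straw": 75,
    "shredded paper": 170,
    "sawdust": 325,
}

def estimate_cn(mix_kg):
    """Mass-weighted average C:N ratio - a quick guide, not an exact chemistry model."""
    total_mass = sum(mix_kg.values())
    weighted = sum(mass * TYPICAL_CN[name] for name, mass in mix_kg.items())
    return weighted / total_mass

# Example mix: mostly greens with some browns layered through.
mix = {"food scraps": 4.0, "grass clippings": 3.0, "autumn leaves": 2.0, "shredded paper": 1.0}
print(round(estimate_cn(mix)))  # about 41, i.e. carbon-heavy; add more greens to move toward 25-30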
Composting is a biological process, and a compost pile is really a teeming microbial farm. There are more living organisms in a teaspoon of good compost than there are people on the earth. The heat these organisms create cooks the materials into compost. Once a heap has been built, billions of creatures will generate the necessary heat to begin the process. Weed seeds and pathogens should be killed if the heap stays at around 55 – 60 degrees C for 5 – 7 days. Bacteria start the compost-making process of organic matter breakdown. Populations of fungi, protozoans and roundworms then increase, while insects, beetles and other animals follow later.
“There are only two truly creative things human beings can do: make babies and make compost” (Anon)
Compost is ready to use when it has turned dark brown and gone crumbly, with a smell like the heart of a forest. If used too soon, the compost may borrow nitrogen from the soil to finish breaking down, which will reduce the amount of nitrogen available to growing plants.
Do not pile compost up against tree trunks or plant stems, as this may lead to fungal decay – the stems and trunks need air to circulate around them. Do not put compost at the bottom of the hole when planting trees or shrubs – rather, mix it with equal parts of topsoil.
Partly decayed compost can be used as a mulch, but do not dig it in deeper than 15cm or hydrogen sulphide gas will be created, which is toxic to plant roots.
There can never be too much compost!
Make compost ‘tea’ by filling a cloth bag with a litre of compost. Tie the bag and soak it in a bucket of water. Let it ‘steep’ overnight and then pour the liquid around the soil of your plants. If you leave the tea longer than overnight, make sure you dilute it with equal parts of water before pouring.
Compost poem (by an Earthworks graduate)
Is your soil sad and lacking in life? Your trees in trouble, your plants in strife?
You could add water and cow manure. But compost is a better cure.
Start off with leaves and bits of stick. Begin in layers and not too thick.
Now add manure to the compost heap. It may be chicken, or horse, or sheep.
There are tonnes of things you could have in there. Like vacuum dust and human hair.
And grass and leaves and bits of tree. And seaweed if you live by the sea.
Add fruit and veggie peels and rinds. And other things which you may find.
With blood and bone and a bit of lime. Your heap will have a lovely time.
Soon breaking down and rotting away. Compost is ready to live another day.
It must have air and must drain well. And kept like this it will not smell.
Worms work and slave and breed and toil. Chomping scraps and making soil.
Facts and figures
- An acre of good, living soil can contain 900 pounds of earthworms, 2400 pounds of fungi, 1500 pounds of bacteria, 133 pounds of protozoa, 890 pounds of arthropods and algae. They are sometimes called the ‘micro herd’ – the most important livestock on any farm.
- Figure out what's on your "quarter acre" (see the scaling sketch after this list).
- Regular use of quality compost has shown yield increases of up to 15% for lettuce and broccoli; irrigation water savings of 10% in summer on sandy soil; significant fertiliser savings, faster maturation of crop and more even crop quality.
- Biologically active soils are less likely to support disease-making organisms. Compost has been shown to contain certain microorganisms that can suppress or kill disease-causing organisms such as root rots and nematodes.
- Farm compost trials in Australia show that improved organic matter levels resulted in positive effects on moisture holding capacity, bulk density, cation exchange capacity, pH and reduced erosion. The trials also showed that marketable yields for a wide range of crops were improved, especially after repeat applications of good quality compost.
- Research indicates that for every 1 acre of land which we bring back to a 6% level of organic matter, we would be sucking up to 12 tonnes of greenhouse gases out of the atmosphere and into the soil. Any composting has a positive effect on reversing the volume of greenhouse gases in the atmosphere. (Peter W. Rutherford & Mary Lou Lamonda: The Australian worm and compost Book,1994.)
- Gore Composting System: a windrow system using large Gore covers.
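As a follow-up to the "quarter acre" item above, here is a small sketch that simply scales the per-acre soil-life figures from the first bullet down to a quarter acre; the numbers are the ones quoted above, and the scaling is plain arithmetic.

# Scale the quoted per-acre soil-life figures down to a quarter-acre section.
PER_ACRE_LBS = {
    "earthworms": 900,
    "fungi": 2400,
    "bacteria": 1500,
    "protozoa": 133,
    "arthropods and algae": 890,
}

SECTION_ACRES = 0.25  # a "quarter acre"

for organism, pounds in PER_ACRE_LBS.items():
    print(f"{organism}: about {pounds * SECTION_ACRES:.0f} lb")
# earthworms: about 225 lb, fungi: about 600 lb, bacteria: about 375 lb, ...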
Plants, animals and humans all carry out glycolysis, the breakdown of glucose, which occurs in the cytoplasm of their cells. The reaction takes place in all organisms and produces ATP (adenosine triphosphate) from the breakdown of glucose; pyruvate is the end product.
Glycolysis, part of cellular respiration, is a series of reactions that constitute the first phase of most carbohydrate catabolism, catabolism meaning the breaking down of larger molecules into smaller ones. The word glycolysis is derived from two Greek words and means the breakdown of something sweet. Glycolysis breaks down glucose and forms pyruvate with the production of two molecules of ATP. The pyruvate end product of glycolysis can be used in either anaerobic respiration if no oxygen is available or in aerobic respiration via the TCA cycle which yields much more usable energy for the cell.
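For reference, the overall net reaction of glycolysis is commonly summarised as follows (one glucose yields two pyruvate, with a net gain of two ATP and two NADH):

\[ \text{Glucose} + 2\,\text{NAD}^{+} + 2\,\text{ADP} + 2\,\text{P}_{i} \longrightarrow 2\,\text{Pyruvate} + 2\,\text{NADH} + 2\,\text{H}^{+} + 2\,\text{ATP} + 2\,\text{H}_{2}\text{O} \]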
The Religious Freedom National Scenic Byway is more than just a glimpse of history. Visitors traveling along the Byway as it winds through Southern Maryland’s scenic roads can immerse themselves in the natural and cultural spaces that shaped the early story of a cherished American right. Along this Byway you will discover the story of the very first attempt in America to introduce the radical idea of religious toleration and to separate church from state. These concepts are now enshrined in the First Amendment to the Constitution, guaranteeing everyone living in the United States the right to believe as they wish.
The Beginning of the Journey
Maryland’s founding in 1634 was economically motivated, intended to further the Calvert Family’s financial interests while extending their King’s dominions. The Calverts were Catholic, and in Anglican England, Catholics were persecuted for their beliefs. To hold beliefs contrary to the official religion meant that loyalty to your country was suspect. Although the venture was to be led by Catholics, the Calverts took every measure to demonstrate that they and their colony were fervently loyal to the King and to England, not the Pope.
Hoped for Peace and Prosperity
Maryland’s early years were fraught with tension over the differing religious beliefs of its founders, its colonists, and the English government. In an effort to maintain the peace and attract colonists of differing religious beliefs Lord Baltimore adopted a policy of freedom of belief and worship for all who settled in the colony. This practical measure was intended to prevent religious rivalry in the colony and fear in the King’s court that Catholicism would be promoted as the religion of the colony.
An Act Like No Other
“…in a well Governed and Christian Commonwealth matters Concerning Religion & the honor of God ought in the first place to be taken into serious Consideration and endeavoured to be settled.”
The Act Concerning Religion, passed by the Maryland General Assembly on April 21, 1649, was among the first legislative acts in North America allowing liberty of conscience, though only for Christians.
“No Person or Persons…shall henceforth be in any ways troubled molested or discountenanced for or in respect of his or her Religion nor in the free exercise thereof…”
A Founding Principle, a Basic Human Right
Most significant to the advance of western ideas, the Calverts introduced the concept of the division of Church and State into the New World and into Western political discourse. Even though the Act Concerning Religion was abolished 40 years later due to political turmoil in England and the Maryland Colony, its legacy can be traced through American history to the free exercise of religion clause in the First Amendment to the Constitution:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof.”
The following locations in Charles and St. Mary’s Counties are part of the trail. If you would like to tour any of the churches, we recommend contacting their administrative offices before your visit.
Bees rely on flowers to get the nutrients they need – both as adults and as larvae. Looking at bee nutrition in more detail, however, makes clear that it isn’t as simple as that. All 20,000 bee species have different life histories, produce different amounts of offspring, and may be social, solitary or something in between. Some species are oligolectic; others forage on a huge variety of plants. All this implies diverse nutritional demands, which means there are potentially 20,000 or even more combinations of an ideal bee diet. If we consider that adults and larvae may have distinct needs, this gets even more complex. We know concerningly little about this. However, this knowledge is extremely important for the conservation of bees: their decline is partly linked to the loss of floral resources.
Main nutrients in a bee diet
Like us, bees need carbohydrates, proteins and lipids (fats) – the so-called macronutrients. They get everything they need from only two resources: pollen and nectar. To me, this alone is already amazing. It seems like a pretty reduced diet, but it is obviously rich enough to maintain the diversity of bee species. Both larvae and adults need these macronutrients, but the proportions of each may be different. The highly mobile adults need more energy than the larvae. However, both would starve quickly if they don’t get enough carbohydrates. Proteins are important for the growing larvae, but the adults also need them for reproduction. Lipids, finally, are important as a secondary energy source and have a role in a number of metabolic processes.
So far it may seem quite simple, but of course things are more complex. Both pollen and nectar are very diverse in their composition depending on the plant species. In a nutshell, nectar is the main source of carbohydrates, and pollen of protein, lipids and other nutrients like vitamins and minerals. The main sugars in nectar are fructose, glucose and sucrose, but there are also others that contribute to the specific sweetness of the nectar of every flower species. The protein content of pollen may vary between 2% and 60%, the lipid content between 1% and 20%. That means that oligolectic species that forage on a single plant species must also have very specific physiological adaptations to the distinct nutrients offered by their host plant. Pretty amazing, isn’t it?
Different nutritional demands of bee species
As mentioned above, the differences in life history among bee species imply diverse nutrient needs. In an interesting review of the current knowledge, Vaudo and colleagues state that honey bees and bumblebees regulate their intake of carbohydrates and proteins in favour of the former. That means that they collect more nectar than pollen. This reminded me of a study I did in sunflower fields two years ago: I didn’t see a single honey bee with sunflower pollen. Some (few) bumblebees collected pollen, but they were clearly more interested in the nectar. The solitary bees, on the other hand, were working mainly on the pollen.
These observations weren’t the aim of the study, so unfortunately I don’t have real data on this. However, it fits into the arguments of the review.
Every bee species has different demands regarding the sugar concentration and composition of the nectar. Interestingly, tongue length is correlated with the preferred sugar concentration: while honey bees (with quite long tongues) prefer concentrations of 30-50%, short-tongued species prefer higher concentrations of 45-60%. Which makes me wonder about bumblebees: species in this group may have anything from quite short to very long tongues… Even more questions are open concerning the protein needs of different bee species. First, there is the amount of pollen provisioned to the larvae, but also the quality of the protein that pollen from different flowers provides. It seems that honey bees forage more for quantity, while bumblebees prefer pollen with higher protein content. And who knows about solitary bees…
Finally, lipids may play an important role also for the recognition of the quality of pollen. The exterior layer (pollenkitt) is rich in lipids and may stimulate bees to feed on it. And of course, both pollen and nectar also contain other substances like vitamins, minerals and secondary metabolites of the plant that also play a role for bee nutrition. There is still much to learn about all this, also for the less abundant species.
Consequences for conservation
As mentioned above, we know only a little about the nutrients all the different bee species need. Honey bees are the most studied, but there are lots of open questions even in their case. We know much less about bumblebee nutrition, and knowledge gets really scarce when it comes to all the other bee species. However, more research is urgently needed if we want to restore the floral resources that were lost in the past decades. It is not enough to mix some flower seeds. Very often, these mixes meet human aesthetics more than the dietary requirements of bees. And if they are designed for the latter, they mostly consider the preferences of the best-known species: honey bees and bumblebees.
However, as we still need all bee species, we also have to consider the whole picture without simplifying. I don’t want to join the chorus of those who only criticize without ever being content. Every step already taken to improve the floral supply for bees is great. But it’s still much too early to pat ourselves on the back for saving the bees. We still need much research: about the basics of bee nutrition and about its application to conservation. We need monitoring of the existing measures so that we can improve and adapt them to different circumstances. It won’t be easy, but as a dear colleague said some time ago at a conference: that it isn’t easy is no argument not to try.
Baroque-era composer Johann Sebastian Bach is arguably the most influential composer in human history. These days, his music, more than any other composer's, is still being studied, analysed and performed – setting the stage for the constant evolution of classical music for centuries to come.
Bach’s innovations had a direct influence on the music of Central Europe in the Classical period, but in the post-Beethovenian Romantic period that influence somewhat subsided (though it was never entirely neglected). That is, until the final decades of the 19th century, during which the composers of the time, most of whom had studied Bach’s music like their older contemporaries, began to implement musical elements inspired by his works into their modern compositions. This sudden surge of interest in Bach’s music was unprecedented, but its appeal to the late Romantic and Modern composers can be explained by understanding which musical realms he revolutionised, and by tracing his legacy through music history.
Continuing to refine his contrapuntal style for his whole life, Bach helped an important compositional technique gain attention…
Bach, unlike his contemporaries of the Baroque era, did express an interest in the Gregorian chants that had been the music of the church for centuries, up until the Renaissance period; more specifically, he was interested in the Cantus Firmus (“fixed song”). In short, a Cantus Firmus setting is a two-voice form in which one voice provides an active melody and the other provides sustained notes. This is, in fact, already polyphony (two or more voices that ‘sing’ different notes at the same time). By the Baroque era, there was much more complex music, and certainly more than two voices going on at the same time. Polyphony, as well as Harmony, has evolved massively since the Cantus Firmus. But while the Renaissance era did borrow from the Cantus form of Polyphony, the Baroque period composers sought to emancipate their music from it, writing pieces for many active voices. That is, until Bach, who wanted more clarity (or, less density) in music, which brought him back to the Cantus Firmus.
Now, we can talk about Counterpoint. “Punctus contra punctum”, Latin for “point against point”, is “… the relationship between voices that are harmonically interdependent, yet independent in rhythm and contour”. To put it simply, it is polyphonic activity on a time grid. Counterpoint, or Kontrapunkt in many languages, has naturally evolved with polyphony, and also with the establishing of the ‘measured music’ – a music that has a defined duration, dictated by the composer by means of aligning the notes in measures with precise note lengths, and with indication of metre (time signatures) and tempo. The collision between polyphony and ‘measured music’ happened in the early Renaissance, and by the Baroque era, most music was a result of this merge: Contrapuntal music.
Bach was not a fan of the density of Contrapuntal music in his time, and in his own works, aspired to have the main melody/melodies stand out. And how does one achieve that? By having less activity in the other voices surrounding it. It’s important to mention that in Bach’s time, instrumental music was already very common, so by becoming a masterful orchestrator, Bach was very successful at his task of conveying his melodies in his pieces, again, by utilising counterpoint to his advantage. Continuing to refine his contrapuntal style for his whole life, Bach helped an important compositional technique gain attention: the Fugue.
A fugue is a loose form, built on a musical subject which is imitated, often in different pitches, by other voices throughout the piece. Fugues usually have three sections: an exposition (introduction of the musical ideas in all of the voices), a development (mostly based on ideas presented in the exposition) and a return of the subject to the tonic key. When composers of the late 19th century and early 20th century started to incorporate some of Bach’s ideas into their own music, the Fugue was one of the most popular ideas to tackle. Composers of the Classical period, like Haydn and Mozart, continued writing in a contrapuntal fashion like that which had been established in Bach’s works. But with the modernisation some of the ideas had gone through at the hands of, amongst others, Bach’s own son Carl Philipp Emanuel, the Classical musical ‘dialect’ consisted of even less active voices operating simultaneously, and consequentially there was a larger focus on melodies which were at the front, and counter-melodies, which were less prominent.
Naturally, there were quite a few pieces which looked back at Bach’s contrapuntal works. One such piece is Ludwig van Beethoven’s gargantuan Große Fuge (“Great Fugue”) for string quartet, a piece heavily criticized at the time it was published, during Beethoven’s late period. The reason behind the intense criticism it received was the piece’s “complexity”, or, in other words, the lessened presence of the Classical “dialect” of melodies and respective counter-melodies, which made it incomprehensible to Classical and early Romantic reviewers, musicologists and listeners. It was among the last pieces of music for some 50 years to be written in a manner reminiscent of Bach’s contrapuntal style.
Bach’s reputation as a composer increased during the Romantic period, thanks in part to the publishing and performances of his preserved works. Even so, the new music composed during that time had different goals: the gradually-expanding symphony orchestra and the further development of the instruments allowed more technical works to be composed, and that aligned with the grandeur the Romantic period had become associated with; pieces for larger ensembles in larger halls for greater audiences. It certainly wasn’t in a composer’s interest to alienate his audience, so while the subject matter for compositions had become more personal, the musical content was, for the most part, rather accessible. The melodies were at the front, evoking emotional and, often, nationalistic reactions. Felix Mendelssohn and German nationalism were instrumental in bringing Bach to the foreground, and paved the path for the revitalisation of the Bach-ian counterpoint in the works of Johannes Brahms. Inspired by the music produced in his own time, but also by folk music, the gallant music of the Classical period, and Bach’s music – Brahms seamlessly weaved complex contrapuntal ideas together while still operating within the Romantic mould.
that’s enough proof for the skeptics: Bach’s musical descendants, while not direct, are a good amount of the composers among us
Brahms was also known for rooting for and assisting younger composers, probably because he himself had been aided by Robert and Clara Schumann. Brahms helped composers like Antonín Dvořák gain recognition, but he found particular interest in the music of a young Alexander von Zemlinsky.
Zemlinsky, who studied composition with Johann Nepomuk Fuchs and Anton Bruckner, was well acquainted with Bach’s works, and his own works, although in a Romantic idiom and very harmonically advanced, were contrapuntally rich, having much in common with the compositions of Brahms and Bruckner. Besides being on friendly terms with Gustav Mahler, who was quite the contrapuntalist himself, Alexander von Zemlinsky was the teacher of one of the most feared composers in history, Arnold Schönberg.
And it turns out Schönberg was a genuinely good student, because when one examines his early tonal pieces, like Verklärte Nacht, Op. 4 or the String Quartet No. 1, Op. 7, it appears that even with the tonality being pushed to its limits, the dense contrapuntal writing is more apparent than it had been for decades prior. He introduced counterpoint into his radical ideas of atonal composition techniques, like the 12-tone system, which made him, as well as his brilliant students Alban Berg and Anton Webern, stand out even in the field of atonal music, because they didn’t have to resort to writing purely textural works as many free-atonality composers did; they had melody, counter-melodies and harmony at their disposal.
At the same time as Schönberg, Bach’s counterpoint was also finding its way into the works of tonal composers of the first half of the 20th century, such as Paul Hindemith, Béla Bartók and Dmitri Shostakovich. Like Mahler and early Schönberg, they were pushing the envelope when it came to tonality and harmony, but they had also carefully studied Palestrina’s and Bach’s counterpoint and implemented it rigorously in their own compositions.
An interesting work from this period I’d like to mention is Gian Carlo Menotti’s Piano Concerto. This piece, composed in America in 1945, is a prime example of the influence Bach has had, with a neo-classicist approach to form and harmony contrasted by utmost attention to the calculated clashing of voices against each other. Palestrina might well have sneered at the parallel intervals in the first movement of the concerto, but we, contemporary music folk, recognise that Menotti was doing him a great favour in the midst of all the chaos the 20th century had brought with it.
Compositions from the second half of the 20th century onward became less contrapuntal in the traditional sense, mostly due to the ‘freedom from form’ phenomenon music was experiencing. There was less dependence on harmony and melody, and therefore a lack of need for counterpoint. But since Bach’s music and other highly-advanced contrapuntal works are studied and performed ceaselessly across the world, there are always exceptions. Composers of tonal and atonal music do still very much utilise counterpoint to suit their needs, and that’s enough proof for the skeptics: Bach’s musical descendants, while not direct, are a good amount of the composers among us.
This The Comanche and the Horse video also includes:
There was a time when the Comanche controlled an empire in North America, and the heart and soul of that empire was the horse. Scholars use this installment of the larger Native American Sacred Stories series to explore how the Spanish brought the horse to the native peoples of North America. Budding historians then learn how the Comanche made the horse wholly their own. The included extensive pictograph activity engages learners in considering how culture helps communicate values.
- Use resources to create a Socratic seminar on the outcomes of the Columbian exchange or federal policies toward Native peoples
- Parts of the video are in Comanche with English subtitles
- Animated images of massacred horses may be disturbing to some pupils
- Easy-to-use extension activities engage learners on how written language functions in civilization
- Background information makes the lesson easy to use in the classroom
- Embedded notes in the activity reference an example that is not included in the materials
December 24th, 2012, 12:18 PM
Cryptography and password
I have a question on encryption. I used OpenSSL to encrypt data using a symmetric algorithm. For that I launched a command line specifying the algorithm and the input/output files.
But OpenSSL asked me for a password, so I entered one, and after that the encryption proceeded.
My question is the following: what mechanism is used to verify this password? And how can OpenSSL tell that the password is wrong when it is wrong? (I tried a wrong one just to see whether it simply runs a hash function on my password to get the decryption key; if that were the case, decryption would just produce the wrong plaintext, but instead OpenSSL reported that the password was wrong and didn't decrypt at all.)
Does anyone know?
December 24th, 2012, 04:22 PM
It depends on the implementation and cipher, but commonly there will be either a known header on the file, or a checksum of the encrypted contents, or both. If the password is wrong, decryption fails: the header comes out wrong and the checksum won't match, so the program knows you used the wrong password.
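To make that concrete, here is a minimal sketch in Python (standard library only). It is not OpenSSL's real file format (salted enc output starts with a "Salted__" marker followed by the salt, and the exact key-derivation scheme depends on the options used), and the toy SHA-256 keystream is for illustration only. What it does show is the mechanism described above: the key is derived from the password, and a known header plus a checksum (here an HMAC over the ciphertext) lets the program report a wrong password instead of silently producing garbage. The names MAGIC, encrypt and decrypt are made up for this sketch.

import hashlib, hmac, os

MAGIC = b"DEMOv1"  # known header, analogous in spirit to OpenSSL's "Salted__" marker

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Password-based key derivation; a real tool would derive separate cipher/MAC keys.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode), for illustration only.
    # Use a real cipher such as AES-GCM in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(password: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    key = derive_key(password, salt)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, "sha256").digest()  # checksum over the ciphertext
    return MAGIC + salt + tag + ct

def decrypt(password: bytes, blob: bytes) -> bytes:
    if not blob.startswith(MAGIC):  # known-header check
        raise ValueError("not a file produced by this tool")
    salt, tag, ct = blob[6:22], blob[22:54], blob[54:]
    key = derive_key(password, salt)
    if not hmac.compare_digest(tag, hmac.new(key, ct, "sha256").digest()):
        raise ValueError("bad decrypt: wrong password (checksum mismatch)")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))

blob = encrypt(b"correct horse", b"attack at dawn")
print(decrypt(b"correct horse", blob))  # b'attack at dawn'
try:
    decrypt(b"wrong password", blob)
except ValueError as err:
    print(err)  # checksum mismatch detected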
Happy National Punctuation Day!
Where would we be without those little dots, dashes, and squiggly lines? You wouldn’t know that we were excited about Punctuation Day, nor would you know that the last sentence was a question. But where do these punctuation words come from? We thought we’d take a look at eight of our favorites.
“The history of question marks and their ilk turns out to be epic, particularly in the case of the ampersand, whose evolution takes in everything from Julius Caesar to a 17th-century typesetter called Amper (who didn’t actually exist) and even Nazi Germany.”
Johnny Dee, “Internet Picks of the Week,” The Guardian, September 2, 2011
The ampersand – or & – represents the word and. The word originated around 1837, says the Online Etymology Dictionary, and is a “contraction of and per se” and means “(the character) ‘&’ by itself is ‘and’.” Furthermore, “in old schoolbooks the ampersand was printed at the end of the alphabet and thus by 1880s had acquired a slang sense of ‘posterior, rear end, hindquarters.’”
Read more about the history of the ampersand.
“The arches are almost flat, and decorated with a kind of chevron moulding very rarely met with.”
C. King Eley, Bell’s Cathedrals: The Cathedral Church of Carlisle
The chevron is also known as the guillemet, “either of the punctuation marks ‘«’ or ‘»’, used in several languages to indicate passages of speech,” and is “similar to typical quotation marks used in the English language.”
While guillemet is a diminutive of Guillaume, the name of its supposed inventor, chevron comes from the Old French chevron, “rafter,” due to the symbol’s similarity in appearance. The Old French chevron ultimately comes from the Latin caper, “goat.” The likely connection, according to the Online Etymology Dictionary, is the similarity in appearance between rafters and goats’ “angular hind legs.” Chèvre, a type of goat cheese, is related.
“The colon marks the place of transition in a long sentence consisting of many members and involving a logical turn of the thought.”
The colon is “a punctuation mark ( : ) used after a word introducing a quotation, an explanation, an example, or a series and often after the salutation of a business letter.” The word comes from the Latin colon, “part of a poem,” which comes from the Greek kolon, which translates literally as “limb.”
Then there’s the semicolon. Ben Dolnick professed his love for the hybrid punctuation mark, despite Kurt Vonnegut pronouncing semicolons “transvestite hermaphrodites representing absolutely nothing,” while Stan Carey admitted to being semi-attached to them as well.
“All the interesting punctuation debates I have are internal, as I debate whether or not a comma is necessary in a given spot, or whether two clauses are sufficiently related to be separated by a mere semi-colon.”
“So It’s National Punctuation Day Again,” Motivated Grammar, September 24, 2009
The comma is “a punctuation mark ( , ) used to indicate a separation of ideas or of elements within the structure of a sentence,” and comes from the Greek komma, “piece cut off, short clause,” which comes from koptein, “to cut.”
The comma is a seemingly simple punctuation mark about which people have a lot to say. Earlier this year, Ben Yagoda discussed comma rules and comma mistakes, and addressed some comma questions. At Lingua Franca, he explored some comma beliefs. Stan Carey responded, as did The New Yorker, who defended what Mr. Yagoda called their “nutty” comma style. Johnson questioned the comma splice while Motivated Grammar assured us that comma splices are “historical and informal” but not wrong.
Finally, let’s not forget the importance of the Oxford comma:
For even more about commas, check out our list of the day.
“A single character combining a question mark and an exclamation — called an interrobang — didn’t catch on because it doesn’t read well in small sizes and never made it to standard keyboards, while, thanks to email addresses, the @, also known as an amphora, has become ubiquitous.”
Heller McAlpin, “Fond Of Fonts? Check Out ‘Just My Type’,” NPR Books, September 1, 2011
The interrobang, “a punctuation mark in the form of question mark superimposed on an exclamation point, used to end a simultaneous question and exclamation,” comes from a blend of interrogation point, an old term for the question mark, and bang, printers’ slang for the exclamation point.
The at or @ symbol’s “first documented use was in 1536,” according to Smithsonian Magazine, “in a letter by Francesco Lapi, a Florentine merchant, who used @ to denote units of wine called amphorae, which were shipped in large clay jars.”
“In 1899, French poet Alcanter de Brahm proposed an ‘irony mark’ (point d’ironie) that would signal that a statement was ironic. The proposed punctuation looked like a question mark facing backward at the end of a sentence. But it didn’t catch on. No one seemed to get the point of it, ironically.”
Mark Jacob and Stephan Benzkofer, “10 Things You Might Not Know About Punctuation,” Chicago Tribune, July 18, 2011
BuzzFeed listed some other punctuation marks you may not have heard of, while The New Yorker tasked readers with inventing a new punctuation mark. The winner was the bwam, the bad-writing apology mark, which “merely requires you to surround a sentence with a pair of tildes when ‘you’re knowingly using awkward wording but don’t have time to self-edit.’” For Punctuation Day, The New Yorker has asked for a punctuation mash-up: “combine two existing pieces of punctuation into a new piece of punctuation.” Check their culture blog for the winners.
“The systematization of punctuation is due mainly to the careful and scholarly Aldus Manutius, who had opened a printing office in Venice in 1494. The great printers of the early day were great scholars as well. . . .They naturally took their punctuation from the Greek grammarians, but sometimes with changed meanings.”
The word punctuation came about in the 16th century, according to the Oxford English Dictionary, and originally meant “the action of marking the text of a psalm, etc., to indicate how it should be chanted.” The word came to mean “system of inserting pauses in written matter” in the 1660s, and ultimately comes from the Latin pungere, “to prick.”
“Commas were not employed until the 16th century; in early printed books in English one sees a virgule (a slash like this /), which the comma replaced around 1520.”
Henry Hitchings, “Is This the Future of Punctuation?” The Wall Street Journal, October 22, 2011
The virgule, now more commonly known as the slash, is “a diagonal mark ( / ) used especially to separate alternatives, as in and/or, to represent the word per, as in miles/hour, and to indicate the ends of verse lines printed continuously.”
Virgule ultimately comes from the Latin virga, “shoot, rod, stick.” Related are verge, virgin, with the idea of a “young shoot,” and virga, an old term for “penis,” as well as “wisps of precipitation streaming from a cloud but evaporating before reaching the ground.”
For even more punctuation goodies, check out Jen Doll’s imagined lives of punctuation marks; McSweeney’s seven bar jokes involving grammar and punctuation; and Ben Zimmer’s piece on how emoticons may be older than we thought. Also be sure to revisit our post from last year on punctuation rules.
Finally, it’s not too late to enter the official National Punctuation Day contest. You have until September 30.
What is a rift valley & how it is formed?
1 Answer | Add Yours
A rift valley is, of course, a valley. It is long and relatively straight, and it runs between two ranges of mountains. Rift valleys are formed by the spreading of geological faults. In other words, a rift valley is formed by the two sides of a fault pulling apart.
Geologically speaking, there are many different kinds of rift valleys. In other words, they can be caused by a variety of geological processes. But in general, they are all caused by the crust spreading apart and breaking, with the middle part dropping to form the valley.
Communicating to Methods with Messages
Perhaps the biggest difference between Objective-C and languages such as C++ is its messaging syntax as well as the way people talk about it. Objective-C has classes just as other object-oriented languages do, and those classes can have methods within them. You communicate with those methods with messages. A message is enclosed within square brackets, and it consists of the name of the object to which it is being sent followed by the message itself.
The implementation files that you create carry the .m suffix because originally they were referred to as message files that contained the code for the messages defined in the header (.h) files. (This is possibly an apocryphal tale, but the importance of messaging in Objective-C is undisputed.)
Looking at a Simple Message
Here is a simple message that is sent to an object called myObject, which is assumed to be of type NSObject—the object that is the root class of most class hierarchies in Objective-C.
[myObject init];
This message calls the init method on myObject.
Methods can return a value. If a method returns a value, you can set it to a variable as in the following:
myVariable = [myObject init];
Declaring a Method
When you declare a simple method, you use an Objective-C variation on C function syntax. NSObject, the root class of almost all of the Objective-C objects you use, does declare an init method.
The following is the declaration that supports the messages shown in the previous section:
- (id)init;
As you might surmise, the init method shown here returns a result of type id. (You find out more about id shortly.)
The minus sign at the start of the method is an important part of the declaration: It is the method type. It indicates that this is a method that is defined for instances of a class. Any instance of the class in which this declaration is used can invoke this method.
To put it another way, you (or an instance of a class) can send the init message to any instance of this class. Because this is the NSObject superclass of every other object, that means you can send the init message to any instance of any class.
There is more on allocating and initializing objects later in this hour.
Using Class Methods
The minus sign at the start of the method shown in the previous section indicates that it is an instance method. There is another type of method in Objective-C: a class method. It is indicated by a plus sign.
A message to an instance method can be sent to any instance of that class subject to constraints for that specific class. Whereas you call an instance method on an instance of a class, you call a class method on the class itself. No instance is involved.
Class methods are used most frequently as factory methods. Perhaps the most common class method is alloc. For NSObject, its declaration is:
+ (id)alloc;
Whereas you send init to an instance, as in this case:
[myObject init];
alloc allocates an uninitialized instance of a class, as in:
[MyClass alloc];
This returns an instance of MyClass. As you can see in the declaration, this result is of type id. It is time to explore that type.
Working with id—Strongly and Weakly Typed Variables
Objective-C supports strongly and weakly typed variables. When you reference a variable using a strong type, you specify the type of the variable. The actual variable must be of that type or a subclass of that type; if it is a subclass, it is, by definition, the type of all of its superclasses.
In Cocoa, you can declare a variable as:
NSArray *myArray;
This means you could be referring to an object of type NSMutableArray, which is a subclass. You can write the same code to work with elements of the array no matter what its actual type is. If necessary, you might have to coerce a specific instance to the subclass that you want (if you know that is what it is).
id is the ultimate weakly typed variable; it could be any class. That is why it is used as the return type from alloc. alloc is a class method on NSObject so if you call it on an NSArray, you get an instance of NSArray returned through id.
Messages can be nested within one another. You could write the following:
myObject = [MyClass alloc];
myObject = [myObject init];
This would use the class method of MyClass to allocate a new instance of MyClass, which you immediately assign to myObject.
You can nest them together as follows:
myObject = [[MyClass alloc] init];
The rules for nesting square brackets are the same as for nesting parentheses: the innermost set is evaluated first.
Looking at Method Signatures and Parameters
alloc and init are very good basic examples because they have no parameters. Most methods in any language do have parameters. For example, you can write an area function that takes two parameters (height and width) and returns the product as its return value.
Other languages generally specify a name and a type for each parameter, and so does Objective-C. However, it adds another dimension: It labels each parameter.
This labeling means that the code is more readable, but you do have to understand what is going on when there is more than one parameter. When there is no parameter, the message is simply the receiver and the name of the method:
[myObject init];
If there is a parameter, it follows the method name. In the message, a colon precedes the parameter itself. For example, in NSSet, you can initialize a set with an NSArray using code like this:
[mySet initWithArray:myArray];
The declaration needs to specify not only the parameter name, which is used in the code of the method, but also its type:
- (id)initWithArray:(NSArray *)array;
The second and subsequent parameters are also labeled. The difference is that the first parameter is labeled in effect by the name of the method. If you add more parameters, their names and types are needed; they are preceded by a keyword (which, in the case of the first parameter is the method name). Here is another NSSet method. It initializes an NSSet to the elements of another NSSet (the first parameter). The second parameter specifies whether the elements of the first set are to be copied or not.
Here is a typical invocation:
[mySet initWithSet:aSet copyItems:YES];
Here is the declaration:
- (id)initWithSet:(NSSet *)set copyItems:(BOOL)flag;
In documentation (and in this book), the signature sometimes is compressed to just the result, method name, and parameters, so that the previous declaration is shown as
(id)initWithSet:(NSSet *)set copyItems:(BOOL)flag
Supporting Very Young Writers
By: Reading Rockets
A child's writing typically goes through several stages, beginning with scribbling that probably won't include recognizable shapes or letters. From there, children tend to write using more letter-like shapes and later, your child may create a piece of writing that includes random strings of letters. Regardless of the stage, recognize that each effort of crayon to paper has value. Two ways to support your child's effort are through writing time and dictation.
In school, writing time may be called Writer's Workshop. During this special time at home, provide time and fun materials for writing. This may include smelly markers, fat pencils and paper of all shapes and sizes. Encourage your child to draw and/or write, and then use this time to talk about what's been created. Early efforts will probably be readable only by your child, but let your child feel like the expert with that piece of writing. As your child gets older, you may find that the writing time starts to include more emphasis on letters and sounds. A child's name and simple words like Mom, Dad and love are often penned early. Regardless of what's been written, be proud of the work and display it for all to see.
Writing down what your child says is a simple but effective way to model many important aspects of written language. These dictation activities can take place after a family adventure, an exciting event, or a shared book experience. It can be as simple as writing down a favorite part of a movie or book or recording what was for dessert that night. Have your child sit next to you or watch you write. Your child's watching will help her become aware of many conventions of written language, including capitalization, spacing between words, and punctuation. Keep the dictated sentences short, and use your best handwriting! These dictated sentences may be among the very first things your young writer reads all by herself. When you're done writing, encourage your child to re-read the sentences along with you.
Regardless of topic, it's always fun to hear what your child thinks was the most interesting part of a book or the most exciting part of their day. Capturing it in writing will create a memory, and it will also help your child further down the path of literacy.
Research to Practice: This Growing Reader is based in part from research from Early Childhood Education Journal (2012).
Reading Rockets (2013)
(FOX 11) - Studies show that children are born with the ability to identify sounds from every language, and as they grow, their skills narrow to focus on the language(s) they hear most often.
Learning more than one language at the same time is not confusing to young children. Rather, it helps them develop multiple, but inter-related, language systems, and increased cognitive functions.
Don’t limit or be afraid to introduce your little ones to another language.
Dual-language learners tend to demonstrate greater working memory, reasoning, flexibility, and problem-solving.
Knowing more than one language can also expand their career opportunities, keep them mentally sharp in their twilight years, and help them gain an appreciation for the culture and roots of their families.
Some ideas on how to incorporate two languages in your home:
- Play music in multiple languages. Songs played over and over again help a child learn and understand new words and concepts.
- Read books in different languages and take your child to dual language reading groups at the library.
What are allergies?
An allergy is the body’s reaction to a typically benign foreign substance, due to a weakened immune system. Your immune system produces substances known as antibodies. Some of these antibodies protect you from unwanted invaders that could make you sick or cause an infection. When you have allergies, your immune system makes antibodies that identify a particular allergen as something harmful, even though it isn’t. These antibodies attach to white blood cells and, when stimulated, release a number of chemicals including histamine, which produce the allergic symptoms. Reactions can inflame your skin, sinuses, airways or digestive system. The severity of allergies varies from person to person and can range from minor irritation to anaphylaxis, a potentially life-threatening allergic reaction.
Common allergic symptoms include congestion, itchy or watery nose or eyes, itchy or irritated skin, hives, rash, coughing, and wheezing. Anaphylaxis symptoms include loss of consciousness, lightheadedness, severe shortness of breath, a rapid, weak pulse, skin rash, nausea and vomiting, and swelling airways, which can block breathing.
Allergies can be caused by a myriad of foods and substances. Most prevalent food allergens include cow’s milk, soy, eggs, wheat, peanuts, tree nuts, fish and shellfish. Other common non-food allergens include latex, poison ivy or poison oak, venom of stinging insects like bees or wasps, pet dander, dust mites, pollen, mold, and various medications (like penicillin and penicillin-based antibiotics).
Several methods are used to test for allergies. Skin prick tests are commonly done on a person’s back or inside forearm. Small amounts of suspected allergens and/or their extracts (e.g., pollen, grass, mite proteins, peanut extract) are introduced to marked sites on the skin. If the patient is allergic to the substance, then a visible inflammatory reaction will usually occur within 30 minutes. This response will range from slight reddening of the skin to a full-blown hive. Interpretation of the results of the skin prick test is normally done by allergists on a scale of severity, with +/- meaning borderline reactivity, and 4+ being a large reaction.
Patch testing is a method used to determine if a specific substance causes allergic inflammation of the skin. It tests for delayed reactions. It is used to help ascertain the cause of skin contact allergy, or contact dermatitis. Adhesive patches, usually treated with a number of common allergic chemicals or skin sensitizers, are applied to the back. The skin is then examined for possible local reactions at least twice, usually at 48 hours after application of the patch, and again two or three days later.
Blood tests can also be used for determining allergic reactions to substances. Blood tests can be performed irrespective of age, skin condition, medication, symptom, disease activity, and pregnancy. Adults and children of any age can take an allergy blood test. The test measures the concentration of specific antibodies in the blood. Quantitative antibody test results increase the possibility of ranking how different substances may affect symptoms. A general rule of thumb is that the higher an antibody value, the greater the likelihood of symptoms.
How Chinese Medicine and Acupuncture Treat Allergies
While many over-the-counter remedies promise symptomatic relief, practitioners of Chinese medicine believe that addressing the causes of allergies, treating the whole person, and focusing on balancing the immune system leads to substantial long-term health benefits in managing allergies.
The acupuncturist looks for constitutional or more deeply rooted signs in each person who presents with allergies. According to traditional Chinese medicine, people with chronic allergies often show signs of spleen or kidney deficiency as well as lung signs. The goal of the acupuncturist is to develop a plan that addresses the person’s acute symptoms to provide relief, but also addresses the underlying immune system imbalance, which is thought to be at the root of the person’s allergies.
Acupuncture: By inserting small hair-like needles around the nose and sinuses we are able to stop sneezing and relieve congestion. There are acupuncture points on the feet that can soothe red, itchy eyes and other points to calm down an overactive immune system. Acupuncture may also be quite effective for people suffering from multiple allergies, since it works to calm the areas of the immune system that are over stimulated by exposure to multiple irritating factors.
In a small but significant study of 26 hay fever patients published in the American Journal of Chinese Medicine, acupuncture reduced symptoms in all 26 — without side effects. A second study of some 72 people totally eliminated symptoms in more than half, with just two treatments.
According to a study published in the September 2004 issue of Allergy Magazine, a combination of Chinese herbs and weekly acupuncture sessions was useful for alleviating the symptoms of allergies and may help prevent allergies all together.
Chinese herbs are used to build a strong immune system and help regulate the respiratory system to help with acute and preventative measures for allergies. Some herbs also offer antiviral properties. Herbs that drain dampness are employed in order to clear the nasal passages and sinuses. We may also use natural antihistamine supplements to suppress the immune response to the allergen.
Nutritional guidance is customized for each patient based on the health of his or her kidney, lung and spleen function. Diet plays an important part in controlling seasonal allergies. When excessive mucus accumulates in the system, allergens stimulate a much stronger allergic reaction. Sweets, dairy products, and cold foods all tend to increase mucus production, and therefore should be avoided during allergy season. Efficient digestion also helps prevent mucus buildup, so we suggest eating soups, vegetables, and boiled grains which are all easy for the body to digest. Eating foods rich in vitamin C, which is a natural antihistamine, can be helpful and may be found in citrus fruits, spinach, broccoli, strawberries, etc. Chinese medicine also advocates replacing coffee with green tea, which provides anti-allergy actions. Even Chrysanthemum tea made from dried flowers can help reduce allergy symptoms. We address all possible food allergies that could be causing the symptoms as well.
Diagnostic testing can be used to identify allergens to foods and environmental toxins.
How Western Medicine Treats Allergies
Western medical therapies often rely on inhibiting the allergic response; antihistamines (Chlor-trimetron, Benadryl, etc.) are a good example. Other types of drugs used to treat allergic rhinitis or asthma include ones which act on the nervous system (Albuterol, epinephrine), cortico-steroids (prednisone), and decongestants.
Western medicine also emphasizes the importance of avoiding the allergen if possible, and the use of air filters to decrease exposure. When avoidance or elimination is impossible or impractical, the next level of treatment may be desensitization, the injection of small amounts of the allergen in gradually increasing doses in order to neutralize over time the number of antibodies present.
Although Western medicine is very effective at treating the allergic response, side effects such as drowsiness in some people, immune system suppression or over-reliance on medications cause many to seek alternative approaches to managing their allergies. Many turn to their acupuncturist for advice and treatment.
In Python the basic elements of programming are things like strings, dictionaries, integers, functions, etc. They are all objects. For example, every string object has a standard set of methods, some of which were used in Sect.strings.
In Python you can create your own blueprints, which can be used to create user-defined objects. These are called classes.
Here we define our own class, “OurClass”, using the class keyword. Methods are defined like functions, using the def keyword, and they are indented to show that they are inside the class. In this example we create an instance of OurClass and call the instance “obj”. When we create it, we pass in 'arg1' and 'arg2' as arguments. When we call printargs(), these original arguments are printed. The “init” method (init for initialize) is called when the object is instantiated. Instantiation is done by (effectively) calling the class.
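The code listing this paragraph describes appears to have been lost from the page; a minimal reconstruction, using only the names mentioned in the text (OurClass, printargs, arg1 and arg2), might look like this:

class OurClass:
    def __init__(self, arg1, arg2):
        # __init__ runs when the class is instantiated; store the arguments on the instance.
        self.arg1 = arg1
        self.arg2 = arg2

    def printargs(self):
        # Print the arguments the instance was created with.
        print(self.arg1, self.arg2)

obj = OurClass('arg1', 'arg2')
obj.printargs()  # prints: arg1 arg2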
obj = OurClass('arg1', 'arg2')
This creates a new instance. Its “init” method is then called and passed the arguments 'arg1' and 'arg2'.
Objects combine data and the methods used to work with the data. This means it's possible to wrap complex processes - but present a simple interface to them. Anyone using the object only needs to know about the public methods and attributes. |
This striking astronaut photograph shows polar mesospheric clouds over the Southern Hemisphere on January 30, 2010. These clouds occur over the high latitudes of both the Northern and Southern Hemispheres during their respective summer months at very high altitudes (approximately 76 to 85 kilometers, or 47 to 53 miles). They are most visible during twilight, when the clouds are still illuminated by the setting Sun, while the ground is already dark.
Polar mesospheric clouds are also known as noctilucent or “night-shining” clouds—a property that is clearly visible in this astronaut photograph. The clouds exhibit thin, wispy light blue forms that contrast with the darkness of space (image upper right). Lower levels of the clouds are more strongly illuminated by the Sun and appear light orange to white. Clouds closest to the Earth’s surface are reddish-orange (image center).
The image was taken approximately 38 minutes after midnight Greenwich Mean Time (GMT), while the International Space Station was located over the southern Atlantic Ocean. At this time of year, the Sun never sets over Antarctica, but rather traces an arc across the local horizon, allowing polar mesospheric clouds to be observed near local midnight.
The International Space Station (ISS) orbit ranges from 52 degrees north to 52 degrees south; combined with the highly oblique (“from-the-side”) views through the Earth’s atmosphere that are possible with hand-held cameras, the ISS is an ideal platform for documenting transient, high-altitude phenomena like polar mesospheric clouds. Another NASA mission, the Aeronomy of Ice in the Mesosphere (AIM) satellite, is dedicated to the study of polar mesospheric clouds and is providing daily information about their formation, distribution, and variability.
Astronaut photograph ISS022-E-52281 was acquired on January 30, 2010, with a Nikon D2Xs digital camera fitted with a 70mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science & Analysis Laboratory, Johnson Space Center. The image was taken by the Expedition 22 crew. The image in this article has been cropped and enhanced to improve contrast. Lens artifacts have been removed. The International Space Station Program supports the laboratory as part of the ISS National Lab to help astronauts take pictures of Earth that will be of the greatest value to scientists and the public, and to make those images freely available on the Internet. Additional images taken by astronauts and cosmonauts can be viewed at the NASA/JSC Gateway to Astronaut Photography of Earth. Caption by William L. Stefanov, NASA-JSC.
- ISS - Digital Camera |
According to standard modern physical theory, light and all other electromagnetic radiation propagates at a constant speed in vacuum, the speed of light. It is a physical constant and notated as <math>c</math> (from the Latin celeritas, "speed"). Regardless of the reference frame of an observer or the velocity of the object emitting the light, every observer will obtain the same value for the speed of light upon measurement. No matter or information can travel faster than <math>c</math>.
The value is precisely <math>299\,792\,458</math> metres per second (roughly 300,000 kilometres per second, or 186,000 miles per second).
It is important to realize that the speed of light is not a "speed limit" in the conventional sense. We are accustomed to the additive rule of velocities: if two cars approach each other, each travelling at a speed of 50 miles per hour, we expect that each car will perceive the other as approaching at a combined speed of <math>50 + 50 = 100</math> miles per hour (to a very high degree of accuracy).
At velocities approaching or at the speed of light, however, it becomes clear from experimental results that this additive rule no longer applies. Two spaceships approaching each other, each travelling at 90% the speed of light relative to some third observer between them, do not perceive each other as approaching at 90 + 90 = 180% the speed of light; instead they each perceive the other as approaching at slightly less than 99.5% the speed of light.
This last result is given by the Einstein velocity addition formula:

<math>u = \frac{v + w}{1 + vw/c^2}</math>

where <math>v</math> and <math>w</math> are the speeds of the spaceships relative to the observer, and <math>u</math> is the speed at which each spaceship perceives the other to be approaching.
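As a quick numerical check of the 99.5 percent figure quoted above, the formula can be evaluated directly; this short script is purely illustrative:

```python
def relativistic_addition(v, w, c=1.0):
    """Combined approach speed of two objects moving toward each other
    at speeds v and w (expressed in the same units as c)."""
    return (v + w) / (1 + v * w / c**2)

# Two spaceships, each travelling at 90% of the speed of light
# relative to an observer between them.
u = relativistic_addition(0.9, 0.9)
print(f"Perceived approach speed: {u:.4f} c")  # about 0.9945 c, just under 99.5% of c
```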
Contrary to our usual intuitions, regardless of the speed at which one observer is moving relative to another observer, both will measure the speed of an incoming light beam as the same constant value, the speed of light.
Albert Einstein developed the theory of relativity by applying the (somewhat bizarre) consequences of the above to classical mechanics. Experimental confirmations of the theory of relativity directly and indirectly confirm that the velocity of light has a constant magnitude, independent of the motion of the observer.
Since the speed of light in vacuum is constant, it is convenient to measure both time and distance in terms of <math>c</math>. Both the SI unit of length and SI unit of time have been defined in terms of wavelengths and cycles of light; currently, the meter is defined as the distance travelled by light in a certain amount of time: this relies on the constancy of the velocity of light for all observers. Distances in physical experiment or astronomy are commonly measured in light seconds, light minutes, or light years.
In passing through materials, light is slowed to less than <math>c</math> by a factor known as the refractive index of the material. The speed of light in air is only slightly less than <math>c</math>. Denser media such as water and glass can slow light much more, to fractions such as 3/4 and 2/3 of <math>c</math>. On the microscopic scale this is caused by continual absorption and re-emission of the photons that compose the light by the atoms or molecules through which it is passing.
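To illustrate, the speed in a medium is <math>c/n</math>, where <math>n</math> is the refractive index; the indices used below are approximate values:

```python
c = 299_792_458  # speed of light in vacuum, m/s

refractive_indices = {"air": 1.0003, "water": 1.33, "glass": 1.5}  # approximate values
for medium, n in refractive_indices.items():
    v = c / n  # speed of light in the medium
    print(f"{medium}: {v / c:.2f} c ({v / 1000:,.0f} km/s)")
# Water comes out near 3/4 of c and glass near 2/3 of c, as noted above.
```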
Recent experimental evidence shows that it is possible for the group velocity of light to exceed <math>c</math>. In one experiment, laser pulses travelled extremely short distances through caesium vapour with a group velocity of about 300 times <math>c</math>. However, it is not possible to use this technique to transfer information faster than <math>c</math>; the product of the group velocity and the velocity of information transfer is equal to the normal speed of light in the material squared.
The speed of light may also appear to be exceeded in some phenomena involving evanescent waves. Again, it is not possible that information is transmitted faster than <math>c</math>.
See also: tachyon
Galileo seems to have been the first person to suspect that light might have a finite speed and attempt to measure it. He wrote about his unsuccessful attempt using lanterns flashed from hill to hill outside Florence.
The speed of light was first measured in 1676, some decades after Galileo's attempt, by the young Danish astronomer Ole Rømer, who was studying the motions of Jupiter's moons. A plaque at the Observatory of Paris, where Rømer happened to be working, commemorates what was, in effect, the first measurement of a universal quantity made on this planet. Rømer published his result, which was within about ten percent of the correct value, in the Journal des Sçavans of that year.
It is a bizarre coincidence that the average speed of the Earth in its orbit is very close to one ten-thousandth of this, actually within less than a percent. This gives a hint as to how Rømer measured light's speed. He was recording eclipses of Jupiter's moon Io: every day or two Io would go into Jupiter's shadow and later emerge from it. Rømer could see Io blink off and then later blink on, if Jupiter happened to be visible. Io's orbit seemed to be a kind of distant clock, but one which Rømer discovered ran fast while Earth was approaching Jupiter and slow while it was receding from the giant planet. Rømer measured the cumulative effect: by how much it eventually got ahead and then eventually fell behind. He explained the measured variation by positing a finite velocity for light. |
Markup and Markdown Problems
Videos and solutions to help Grade 7 students learn how to identify the original price as the whole and use their knowledge of percent and proportional relationships to solve multistep markup and markdown problems.
Plans and Worksheets for Grade 7
Plans and Worksheets for all Grades
Lessons for Grade 7
Common Core For Grade 7
New York State Common Core Math Grade 7, Module 4, Lesson 7
Lesson 7 Student Outcomes
• Students understand the terms original price, selling price, markup, markdown, markup rate, and markdown rate.
• Students identify the original price as the whole and use their knowledge of percent and proportional relationships to solve multistep markup and markdown problems.
• Students understand equations for markup and markdown problems and use them to solve markup and markdown problems.
Lesson 7 Classwork
• A markup is the amount of increase in a price.
• A markdown is the amount of decrease in a price.
• The original price is the starting price. It is sometimes called the cost or wholesale price.
• The selling price is the original price plus the markup or minus the markdown.
• The markup rate is the percent increase in the price, and the markdown rate (discount rate) is the percent decrease in the price.
• Most markup problems can be solved by the equation: (Selling Price) = (1 + m)(Whole), where m is the markup rate and the whole is the original price.
• Most markdown problems can be solved by the equation: (Selling Price) = (1 - m)(Whole), where m is the markdown rate and the whole is the original price.
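A small script makes the two equations concrete; the prices used below are illustrative only:

```python
def selling_price_markup(original, m):
    # (Selling Price) = (1 + m)(Whole), where m is the markup rate
    return (1 + m) * original

def selling_price_markdown(original, m):
    # (Selling Price) = (1 - m)(Whole), where m is the markdown rate
    return (1 - m) * original

# A $30.00 wholesale item marked up 40% (the numbers from the video game problem below)
print(selling_price_markup(30.00, 0.40))    # 42.0
# A $50.00 item marked down 25% (illustrative)
print(selling_price_markdown(50.00, 0.25))  # 37.5
```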
Games Galore Super Store buys the latest video game at a wholesale price of $30.00. The markup rate at Games Galore Super Store is 40%. You use your allowance to purchase the game at the store. How much will you pay, not including tax?
a. Write an equation to find the price of the game at Games Galore Super Store. Explain.
b. Solve the equation from part (a).
c. What was the total markup of the video game? Explain.
d. You and a friend are discussing markup rate. He says that an easier way to find the total markup is by multiplying the wholesale price of $30 by 40%. Do you agree with him? Why or why not?
Example 2: Black Friday
A mountain bike is discounted by 30% and then discounted an additional 10% for shoppers who arrive before 5:00 a.m.
a. Find the sales price of the bicycle.
b. In all, by how much has the bicycle been discounted in dollars? Explain.
c. After both discounts were taken, what was the total percent discount?
d. Instead of purchasing the bike for $300, how much would you save if you bought it before 5:00 a.m.?
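Successive discounts multiply rather than add; a short sketch shows the idea for a 30% discount followed by an additional 10% discount (the starting price is an arbitrary illustration, not taken from the problem):

```python
original = 500.00                        # illustrative original price
after_first = original * (1 - 0.30)      # 30% off the original price
after_second = after_first * (1 - 0.10)  # an additional 10% off the already-discounted price

total_discount_rate = 1 - (0.70 * 0.90)  # 0.37, i.e. a 37% total discount, not 40%
print(after_first, after_second, total_discount_rate)
```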
Example 3: Working Backwards
A car that normally sells for $20,000 is on sale for $16,000. The sales tax is 7.5%.
a. What percent of the original price of the car is the final price?
b. Find the discount rate.
c. By law, sales tax has to be applied to the discounted price. However, would it be better for the consumer if the 7.5% sales tax were calculated before the discount was applied? Why or why not?
d. Write an equation applying the commutative property to support your answer to part (c).
1. Sasha went shopping and decided to purchase a set of bracelets for 25% off of the regular price. If Sasha buys the bracelets today, she will receive an additional 5% off. Find the sales price of the set of bracelets with both discounts. How much money will Sasha save if she buys the bracelets today?
2. A golf store purchases a set of clubs at a wholesale price of $250. Mr. Edmond learned that the clubs were marked up 200%. Is it possible to have a percent increase greater than 100%? What is the retail price of the clubs?
3. Is a percent increase of a set of golf clubs from $250 to $750 the same as a markup rate of 200%? Explain.
a. Write an equation to determine the selling price, p, on an item that is originally priced s dollars after a markup of 25%.
b. Create a table (and label it) showing five possible pairs of solutions to the equation.
c. Create a graph (and label it) of the equation.
d. Interpret the points (0, 0) and (1, r).
Use the following table to calculate the markup or markdown rate. Show your work. Is the relationship between the original price and selling price proportional or not? Explain. |
A super-high-resolution 3-D light microscope developed at the Max Planck Institute for Biophysical Chemistry will allow biologists to watch the workings of the tiniest organelles and even individual clusters of proteins in living cells. The new technology, which has a resolution of 40 nanometers, overcomes some major limitations in existing microscopy techniques and could have important applications in dissecting exactly how drugs impact cells.
“[It’s] a tour de force–a major accomplishment,” says John Sedat, a professor of biochemistry and biophysics at the University of California, San Francisco. Using the Max Planck microscope and others that are pushing nanoscale resolution, biologists will be able to watch how live cells work at an unprecedented level of detail. “It’s going to be a revolution for biology,” says Sedat, who was not involved in the research.
In recent decades, biologists have made great strides in understanding the molecular makeup of cells, but how these parts add up to functioning cells and tissues is still something of a mystery. Using light microscopes, biologists can watch living cells at relatively low resolution; using electron microscopy, they can carefully dissect dead cells.
The new microscope “allows you to optically dissect living cells,” says Stefan Hell, head of the department of nanobiophotonics at the Planck Institute, in Göttingen, Germany, who led the instrument’s development.
Researchers used the new microscope to make the first super-high-resolution light images of tiny cell organelles called mitochondria, which are crucial for cell metabolism and play a role in the aging process. One potential application is to visualize how certain cancer drugs affect the mitochondria, whose inner workings have been invisible to 3-D light microscopy. “It’s been difficult because you couldn’t see molecules binding to each other,” making it impossible to definitively name the cause of these drugs’ effects, says Maryann Fitzmaurice, a pathologist at Case Western Reserve University, in Cleveland.
Three-dimensional light microscopes work by scanning a focused spot of light through cells in three planes. The size of this spot limits the resolution of the microscope–nothing smaller than the size of the spot can be seen. Due to a fundamental property of light called the diffraction limit, focusing light down to a size smaller than half its wavelength is impossible using conventional lenses. Many parts of the cell are smaller than half the wavelength of the light used for these techniques. Other researchers have gotten around the diffraction limit in two dimensions, or with techniques that only work with a particular wavelength of light. |
Your Body - Anatomy of the Endocrine System
The following are integral parts of the endocrine system:
- Hypothalamus. The hypothalamus is located in the brain, near the optic chiasm. It secretes hormones that stimulate or suppress the release of hormones in the pituitary gland, in addition to controlling water balance, sleep, temperature, appetite, and blood pressure.
- Pineal body. The pineal body is located below the corpus callosum, a part of the brain. It produces the hormone melatonin.
- Pituitary. The pituitary gland is located at the base of the brain. No larger than a pea, the gland controls many functions of the other endocrine glands.
Hormones are chemical substances created by the body that control numerous body functions. They actually act as "messengers" to coordinate functions of various body parts.
Most hormones are proteins consisting of amino acid chains. Some hormones are steroids, fatty substances produced from cholesterol.
Functions controlled by hormones include activities of entire organs; growth and development; reproduction; sexual characteristics; usage and storage of energy; or levels of fluid, salt and sugar in the blood.
- Thyroid and parathyroids. The thyroid gland and parathyroid glands are located in front of the neck, below the larynx (voice box). The thyroid plays an important role in the body's metabolism. Both the thyroid and parathyroid glands also play a role in the regulation of the body's calcium balance.
- Thymus. The thymus is located in the upper part of the chest and produces T-lymphocytes (white blood cells that fight infections and destroy abnormal cells).
- Adrenal gland. The pair of adrenal glands are located on top of both kidneys. Adrenal glands work hand-in-hand with the hypothalamus and pituitary gland.
- Pancreas. The pancreas is located across the back of the abdomen, behind the stomach. The pancreas plays a role in digestion, as well as hormone production. Hormones produced by the pancreas include insulin, which regulates levels of sugar in the blood.
- Ovary. A woman's ovaries are located on both sides of the uterus, below the opening of the fallopian tubes (tubes that extend from the uterus to the ovaries). In addition to containing the egg cells necessary for reproduction, the ovaries also produce estrogen and progesterone.
- Testis. A man's testes are located in a pouch that hangs suspended outside the male body. The testes produce testosterone and sperm.
|
New Findings on Carbon Dioxide Release from World's Oceans
Carbon Dioxide (CO2), a greenhouse gas, is intricately linked to global warming. The largest store of CO2 is the world's oceans. How the oceans sequester or release CO2 to or from the atmosphere is important to understand as mankind alters Earth's climate with the burning of fossil fuels. A new report from researchers at the University of California, Davis offers clues on how that mechanism works by analyzing the shells of plankton fossils.
CO2 from the atmosphere that touches the ocean surface is absorbed by the water. Marine phytoplankton consume the CO2 from the surface as they grow. After the plankton die, they sink to the bottom of the ocean. Decomposition then transforms the organic compounds of the plankton into dissolved CO2. This cycle, known as the biological pump, is extremely effective at removing CO2 from the atmosphere and depositing it in the deep ocean waters.
As global temperatures rise, one of the first symptoms is the melting of glaciers and sea ice. This frigid water then sinks to the bottom of the ocean, pushing up the carbon-rich waters that have been trapped under the warmer water for so long, like fizzy soda under a bottle cap. Once the older carbon-rich water reaches the surface, the collected greenhouse gas is released back into the atmosphere, accelerating the cycle of temperature rise.
This is what occurred at the end of the last great ice age, about 18,000 years ago. The question is, where and how quickly does the release of CO2 from the oceans occur? Earlier studies suggest that the release took place all over the northern and southern hemisphere and over centuries and millennia. However, Howard Spero, a UC Davis geology professor, and his colleagues disagree.
According to Spero, the CO2 release that preceded the current warm period was akin to a big fizz rather than a slow leak, and took place largely in the Southern Ocean, which surrounds Antarctica. This theory was tested by examining the carbon-14 content in the fossil shells of phytoplankton that were alive at the end of the last ice age. These shells were obtained from core samples, which brought up ancient sediment from deep in the sea floor.
"We now understand that the Southern Ocean was the fundamental release valve that controlled the flow of carbon dioxide from the ocean to the atmosphere at the end of the last ice age. The resulting atmospheric increase in this greenhouse gas ultimately led to the warm, comfortable climate that human civilization has enjoyed for the past 10,000 years," Spero concluded.
The UC Davis study was published in the recent issue of the journal, Nature. The lead authors are Kathryn Rose, one of Spero's students at UC Davis, and Elisabeth Sikes of Rutgers University.
Link to published article: http://www.nature.com/nature/journal/v466/n7310/abs/nature09288.html |
In 1933, the suspension of basic civil rights was permitted, rights that had been guaranteed by the democratic Weimar constitution, and the Third Reich became a police state. Hitler's actions during World War II had a profound effect on world society during the war, but the effects after the war were just as important, if not more so. Germany's post-war Weimar constitution has shouldered much of the blame for the political instability of the 1920s; the men who drafted the constitution in 1919 attempted to construct a political system not unlike that of the United States, incorporating democracy, federalism, checks and balances, and protection of rights.
It has been argued that the actions of President Hindenburg were the most important reason why Hitler came to power in 1933. From 1928 to 1932, the Nazi Party went from 12 seats in the Reichstag to 230, due to a number of factors including the Wall Street Crash and the depression that followed, and the weaknesses of the Weimar constitution. The Weimar Republic looked like the perfect democracy, yet Hitler was appointed Chancellor on 30 January 1933, and his early actions included a complete ban on all anti-government demonstrations.
How did Hitler manage to take complete control of Germany when the country was, effectively, a modern democracy? One argument is that Hitler rose to power because the German people were disillusioned with democracy. Other factors that influenced the failure of Weimar were the structural weaknesses induced by the constitution and the basic lack of support for the republic among the German people, particularly amongst the elite. All in all, these aspects were among the major causes that doomed the Weimar Republic to ultimate failure and the eventual ascent of the Nazis. Do the problems of 1919-1924 suggest that the Weimar Republic was doomed from the start? Following the nation's defeat in World War I and the Kaiser's abdication, Germany was left in a state of disarray. A constitution was written up in the city of Weimar, owing to the instability in Berlin, and it became Germany's first attempt at democracy. It has also been argued that proportional representation under the Weimar constitution allowed Hitler to come to power.
In the words of the Social Democrats' declaration: "The Weimar constitution is not a socialist constitution, but we adhere to the basic principles of a constitutional state, to the equality of rights, and the concept of social legislation anchored therein. We German Social Democrats solemnly pledge ourselves in this historic hour to the principles of humanity and justice, of freedom and socialism."
The Enabling Act (German: Ermächtigungsgesetz) was a 1933 amendment to the Weimar constitution that gave the German cabinet, in effect Chancellor Adolf Hitler, the power to enact laws without the involvement of the Reichstag. It passed in both the Reichstag and the Reichsrat on 24 March 1933 and was signed by President Paul von Hindenburg. Other commonly cited long-term and short-term causes of Hitler's rise include the army's lack of support for the Weimar Republic and the Republic's lack of charismatic leaders (Friedrich Ebert was the first president of the Weimar government). What was the most important factor in Hitler becoming Chancellor in January 1933? There were a number of reasons that Hitler became Chancellor in 1933. |
Saying "yes" instead of "no"
Fair Use Guidelines make room for students and teachers to use copyrighted material in multimedia presentations.
Copyright concerns have usually been a negative experience for most library/media specialists. It generally means saying "no" to the plans of well-intentioned teachers. "NO — you can’t make multiple photocopies of a workbook to save the district money." "NO — you can’t play that rental video as a Friday entertainment activity." "NO — I can’t make seven copies of a video for temporary reserve."
The Fair Use Guidelines for Educational Multimedia gives library/media specialists an opportunity to promote copyright compliance in a positive light. In using these Guidelines with students, educators plant seeds of ethical judgment in an era when access to and manipulation of information is all too easy.
The Fair Use Guidelines for Educational Multimedia has its roots in the "fair use" clause of the 1976 Copyright Law. Educators were given an exemption to copyright restrictions based on brevity, spontaneity, cumulative effect, and copyright attribution.
The Fair Use Guidelines for Educational Multimedia are the result of the Conference on Fair Use (CONFU). Spearheaded by the Consortium of College and University Media Centers (CCUMC), this working group drew representatives from a wide variety of educational arenas, the publishing world, and the U.S. Copyright Office. Built on the discussions generated by the changing technology used to transmit and store information, both sides aired concerns and came to a workable compromise. In 1996, these Guidelines were recognized and issued by a House of Representatives subcommittee. The Guidelines were intentionally not passed into law so that both educators and copyright holders could evaluate the ability to follow the Guidelines as well as to take a "wait-and-see" posture toward rapidly evolving technologies.
Guidelines in brief
The overriding advantage of the Guidelines is that when educators and students follow them, they do not have to ask for advance permission from copyright holders. Think of the number of possible items a student can incorporate into a multimedia presentation — audio, photography, video, text — and the number of permission letters these items would require. Multiply that by a class of students and the planned project rapidly becomes overwhelming. That alone provides a persuasive case to use the Guidelines.
Since this is not a law, students and educators can, if they wish, exceed the Guidelines. However, since the limits detailed in the Guidelines have the approval of publishers, exceeding the limits places projects at risk of copyright infringement.
Different limits are given for educators and students. Students may use these Guidelines to produce multimedia projects for a specific course. In other words, they cannot use these Guidelines and newly acquired skills to produce multimedia presentations for unassigned activities, but they can show this project in the classroom. In her revised edition of Commonsense Copyright, Rosemary Talab notes that this has generally been extended to mean that the project can be shown at "in house" events, such as parent open houses, science fairs, etc. Lastly, students may keep a copy of the work indefinitely to use for college interviews.
Educators may use these Guidelines to create teaching materials for their courses. These multimedia presentations may be used in face-to-face teaching, assigned for self-study, and in remote instruction (with limits). In other words, educators can place a copy of the project on reserve in the library or computer lab for students to review or for those who missed the class. Distance-learning classes, where access is limited by password, are treated the same as face-to-face instruction. Educators, however, are limited to using these projects for two years. After that, they must obtain written copyright permission to keep using the multimedia presentation.
Further, the new Guidelines recognize that sharing among peers is an integral part of teaching. So, presentations can be shown at conferences and workshops. And, as with students, educators may also retain a copy for use in interviews, tenure, etc.
Both students and educators are limited in the number of copies they may make of a completed presentation. No more than two copies may be in use at any time. This allows one to be used in a classroom and one to be placed on reserve in the library or computer lab. In addition, one copy may be kept for archival/portfolio purposes. When several people create a project, each may have his/her own archival copy, but no more than two copies may be used.
Limits by type of media
The Fair Use Guidelines for Educational Multimedia (FUGEM) contains portion limits that follow those of the 1976 print copyright law. These are cumulative to each course, each semester. In other words, if students or educators do more than one multimedia project for a course, then the limits apply for all projects. Additionally, the Guidelines specifically exempt K-6 grade students from adhering strictly to portion limits. Briefly, those limits are as follows:
- Motion Media: Up to 10 percent or three minutes, whichever is less, of a single copyrighted motion media work.
- Text Material: Up to 10 percent or 1,000 words, whichever is less, or a single copyrighted work of text.
- Poems: An entire poem of less than 250 words, but no more than three poems by one poet or five poems by different poets from a single anthology. In longer poems, the 250-word limit still applies, plus no more than three excerpts by one poet or five excerpts by different poets from a single anthology may be used.
- Music, Lyrics, and Music Video: Again, up to 10 percent, but no more than 30 seconds of music and lyrics from a single musical work. Any alterations of a musical work shall not change the basic melody or the fundamental character of the work. For example, a music instructor could use a piece of music and change the rhythm or emphasis on certain instruments to show how this would alter the music. However, the basic melody must still be recognizable.
- Illustrations and Photographs: Here copyright holders recognized that using part of a photograph or painting is generally not feasible. So, students can use an entire photograph or illustration. But, no more than five images by one person and no more than 10 percent or 15 images from a single published work may be used. Here, too, instructors may take a famous painting and, using the computer, alter color to show how color affects the mood of a painting.
- Numerical Data Sets: Up to 10 percent or 2,500 fields or cell entries, whichever is less, from a database or datatable can be used. The Guidelines define field and cell entries. Business instructors often receive sample databases with texts. This allows them to use portions within a multimedia presentation.
These limits promote creativity among students, giving a resounding "no" to the student who wants to combine photographs from one art book, poems by one poet, and a song to "create" a multimedia project. Instead, it requires more research to develop a theme from among different media and a wider variety of creators.
The Guidelines also reinforce the cautions library/media specialists give students every day. Just because something is on the Internet doesn’t mean it’s automatically available to use. It further reminds Guidelines users that works often get on the Internet without legal copyright permission.
Two requirements must be included in every multimedia presentation created using the Guidelines. First, a notice of restriction must appear at the beginning of a multimedia presentation, alerting viewers that this presentation was created using the FUGEM. Library/media specialists or teachers can create such a template for all students to use. This also serves as notice to copyright holders that the creators did their best to follow Guidelines and, therefore, written permissions were not needed.
A sample copyright notice might look like this:
This presentation was created following the Fair Use Guidelines for Educational Multimedia. Certain materials are included under the Fair Use exemption of the U.S. Copyright Law. Further use of these materials and this presentation is restricted.
Proper credit for copyrighted materials must also be included in the presentation. This can be done at the time the copyrighted material is used, or it can be included at the end of the presentation, much like the bibliography at the end of a printed paper. |
Hexadecimal (abbreviated hex) is a base-16 numeral system, usually written using the symbols 0-9 and A-F. It is a useful system in computers because there is an easy mapping from four bits to a single hex digit, so every byte can be represented as two consecutive hexadecimal digits. Colors in HTML are specified using hexadecimal numbers.
Most graphic design programs (Photoshop, Photo-Paint, Paint Shop Pro, etc.) handle colors in an RGB (Red, Green, Blue) color system. To define a certain color you simply enter the amount of each of the three colors Red, Green and Blue.
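The same conversions can also be done programmatically; this short Python sketch (using example RGB values) shows decimal-to-hex conversion and how an HTML color code is built from three RGB amounts:

```python
# Convert a decimal value to hexadecimal
print(hex(255))          # '0xff'
print(format(255, 'X'))  # 'FF'

# Build an HTML color code from decimal RGB amounts (example values)
r, g, b = 255, 140, 0
html_color = f"#{r:02X}{g:02X}{b:02X}"
print(html_color)        # '#FF8C00'
```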
Another way to convert a decimal value into a hexadecimal value is to use the Windows standard calculator (if you don't see the hex option in the calculator, open the View menu and choose Scientific). Enter a value, click the Hex field, and the calculator will convert the number to hexadecimal. |
There are numerous communication devices that help people of all ages express themselves. Some are high tech, while others are as simple as a flashlight used to communicate in Morse code. For children with special needs, these devices provide a window to the world outside their own. In addition to helping people communicate, communication devices can also be used for entertainment, work, and other purposes.
One class of communication device is the computer system (or set of computer programs) used for tracking, storing, and transmitting electronic documents or images. A common example is a modem (from modulator/demodulator). The device modulates digital signals into an analog signal for transmission over standard telephone networks, and demodulates the analog signal back into a digital data stream that the computer saves as information.
The term is most commonly applied to electronic devices, such as computers and mobile phones. These are used to connect and exchange information between people, but they are also widely used in other types of hardware such as cameras, televisions, automobiles and medical equipment.
Other examples of communication devices include: |
The previous posting showed that temperatures in the past have exceeded the current temperature rise, giving the lie to the assertion that the year 2015 was the hottest year ever.
A cursory examination indicates that the warmers do not dispute the temperature records derived from the ice cores. But looking at the relationship of carbon dioxide (CO2) and temperature rise and fall as indicated by the ice core record, some warmers do not agree with the idea that CO2 follows temperature rise and fall rather than leads. First a look at graphical representations of the ice core data:
To understand this chart, remember that time flows to the right, from the past to the present. When examining the blue (temperature) and the red (CO2) lines, the line to the right is later. For example, look at the blue and red lines beginning about 140,000 years ago. The red line is very close to the blue line, but it is to the right of the blue line, meaning that it lags the rise in temperature shown by the blue line. The difference is more apparent if you click on the chart to enlarge it. Note that the thickness of a line on this chart may be the equivalent of 1,000 years. The creators of this chart meant to show the CO2 lagging the temperature because that is what their data told them.
Another chart using EPICA Dome C ice cores:
This is a busy chart, but it tells the same story as the previous chart that used Vostok ice cores. Again, click to enlarge it. Some interesting notations are included.
Where does the CO2 come from? The oceans hold vast amounts of CO2 dissolved in the sea water. Water can hold more dissolved CO2 when it is cold than it can when it is hot. As the water warms, CO2 comes out of solution and enters the atmosphere.
How does this happen? CO2 solubility in water is determined by both partial pressure and temperature. If the sea water temperature is relatively constant and the amount of CO2 in the atmosphere increases, the partial pressure increase in the atmosphere would cause CO2 to be absorbed. But as cold sea water begins to heat, the solubility of CO2 in the sea water would decrease and CO2 would begin to leave the ocean water and enter the atmosphere. What causes the sea water to heat up is still a question but I believe the Milankovitch cycles must result in longer Northern Hemisphere summers and a resumption of warming. This fits the data, too, where the temperature leads the CO2.
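As a rough illustration of the temperature effect described above, Henry's law with a van 't Hoff temperature correction can be sketched in a few lines of Python; the constants are approximate textbook values for CO2 and the script is purely illustrative, not part of the ice core analysis:

```python
import math

def dissolved_co2(temp_c, p_co2_atm=0.0004):
    """Approximate dissolved CO2 (mol/L) for a given water temperature (deg C)
    and CO2 partial pressure (atm), using Henry's law with a van 't Hoff correction."""
    kH_298 = 0.034    # Henry's law constant, mol/(L*atm), near 25 deg C (approximate)
    vant_hoff = 2400  # K, approximate temperature-dependence constant for CO2
    T = temp_c + 273.15
    kH = kH_298 * math.exp(vant_hoff * (1 / T - 1 / 298.15))
    return kH * p_co2_atm

for t in (2, 10, 20, 30):
    print(f"{t:2d} C: {dissolved_co2(t):.2e} mol/L")
# Colder water holds noticeably more dissolved CO2 at the same partial pressure.
```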
It would seem to be imperative that the warmers try to destroy the CO2-lagging part of the ice core data. They know that their theory that CO2 forces temperature change fails based upon these ice core data.
A posting on Climate of the Past titled “Tightened constraints on the time-lag between Antarctic temperature and CO2 during the last deglaciation” says (of the deglaciation period from 19,000 to 11,000 years before the present) “…the CO2 lagged the temperature rise by less than 400 years and that even a short lead of CO2 over temperature cannot be excluded.”
So their data says it was 400 years, but forget about the data. Let's just call it the way they wish it was: “…even a short lead…”
The ice core data repeatedly show that temperatures were higher in the past than at present and that temperature rises or drops were followed by CO2, not led by CO2. And don't miss this point: what made the CO2 go up before man had any influence?
|
Miami Beach’s sea level is rising faster than the global average, and it’s making major floods there more common, according to a new study from researchers at University of Miami’s Rosenstiel School of Marine and Atmospheric Science.
The researchers suggest two ways regions can better prepare for sea-level rise based on their results.
Regions should project local sea-level rises, rather than relying on global averages, to determine the likely effects rising sea levels will have on their communities, the researchers argue.
Coastal cities should also invest in new drainage systems if they still rely on traditional, gravity-based systems.
The study was published in the latest issue of the journal Ocean and Coastal Management.
The researchers analyzed how often it floods and what caused each flood in Miami Beach from 1998 to 2013. They pulled from five data sets: tide gauges, rain gauges, media reports, insurance claims and photos.
They omitted floods caused by storm surges because they result from anomalous events. Instead, they focused on flooding caused by heavy rain and flooding caused by tides alone, or “sunny sky flooding.”
Their analysis found that tidal flooding has increased dramatically since 2006, and rain-induced flooding is up as well.
In the eight years beginning in 2006, Miami Beach flooded due to rain 15 times and from tidal shifts 16 times. In the eight years previous, rain caused nine floods while tides caused just two.
During the same time that floods have become more frequent, sea-level rise has accelerated in South Florida. Before 2006, the sea level was rising in the region by 3 millimeters per year; after 2006, sea-level rise quickened to 9 millimeters per year.
In other words, flooding events in Miami Beach increased 66 percent after 2006 – a jump that also coincided with acceleration in sea-level rise in the region.
Usually, when researchers gauge coastal flooding hazards, they consider the effects of average sea-level rise on a specific area based on its population, elevation and economic value. Basically, they assume the sea will rise by the same rate everywhere, and then measure what will be underwater if that increase takes place.
But that method has a major shortcoming: it misses out on the possibility that sea-level rise might be happening more quickly in some locations than others. It also can’t account for the effect of flooding due to rain – even though sea-level rise and floods caused by rain are related.
Why’s that? Many cities rely on gravity-based drainage systems – flood drainage that merely allows water to flow naturally from streets into storm drains and out of the system. Those systems are less effective as the water level rises.
The researchers found Miami Beach flooding due to rain increased 33 percent because of sea-level rise.
But after the period of the study, Miami Beach spent millions of dollars converting its flood drainage system to one that relies on pumping water out of the system, rather than just letting gravity do the work.
“Indeed, the change to pump-based drainage system resulted in reduced flooding events in the city,” the researchers write.
It all led the researchers to emphasize how important it is that cities plan for sea-level rise based on projections for their region, rather than a global average.
“Engineering projects, such as increasing efficiency of drainage systems or erection of seawalls, are typically based on globally average forecasts of (sea-level rise),” they write. “Thus, if local rates of (sea-level rise) are significantly higher than those of the globally average ones, as observed in Miami Beach, planned engineering solutions will provide protection for a shorter time period than planned.” |
Floralia was a Roman festival dedicated to Flora, the goddess of flowers and spring. It was celebrated annually from April 28 to May 3. The festival was characterised by lively celebrations that included feasting, dancing, singing, theatrical performances, and various forms of entertainment.
The origins of Floralia are unclear, but it is believed to have been established as a religious holiday during the Republican era to honour Flora and seek her blessings for the fertility of crops and vegetation. The festival's name is derived from the Latin flos, meaning "flower".
During Floralia, people adorned themselves with garlands of flowers, and the streets were decorated with blossoms and greenery. The festivities also involved the offering of milk and honey to Flora, as well as the release of animals such as hares and goats to symbolise fertility. Floralia was considered a time of freedom and merriment, and it was common for people to engage in various forms of debauchery and promiscuous behaviour. The festival's association with sexuality and fertility led to its eventual suppression by the Christian church in the Middle Ages. Despite its eventual decline, Floralia remains an important part of Roman cultural heritage and has influenced many modern celebrations of spring and renewal.
Beltane is an ancient Gaelic festival celebrated on May 1st. It marks the beginning of summer and is a time for fertility, growth, and new beginnings. In this blog, we'll explore the history and folklore of Beltane.
Falling midway between the Spring Equinox and the Summer Solstice in the northern hemisphere, Beltane has been celebrated for thousands of years in Ireland, Scotland, and other parts of the Celtic world. It was one of the four major festivals of the Celtic year and was a time to honour the gods and goddesses of nature.
The festival was traditionally celebrated by lighting bonfires, dancing around maypoles, and performing rituals to bless the crops and livestock. Beltane, a Celtic word, means “the fires of Bel.” It was also a time for young couples to court and for marriages to be arranged. After the spread of Christianity, Beltane was incorporated into the Christian calendar as May Day. However, many of the traditional customs and rituals continued to be practiced in rural communities throughout Ireland and Scotland.
Beltane is steeped in folklore and mythology. It is said that on Beltane Eve, the veil between the worlds of the living and the dead is thinnest, and the faeries and spirits of nature can easily cross over into our world.
One of the most famous Beltane traditions is the May Queen, who was chosen to represent the goddess of spring and fertility. She would be crowned with flowers and preside over the celebrations, including the maypole dance. Another Beltane tradition is the lighting of the Beltane fire. It was believed that the smoke and flames had protective and purifying powers, and people would jump over the flames for good luck and to ensure fertility for the coming year. In some parts of Ireland, it was also customary to herd the cattle between two Beltane fires to protect them from disease and misfortune.
Beltane is a festival steeped in history and folklore, celebrating the arrival of summer and the renewal of life. Its customs and traditions have been passed down through generations, and many are still celebrated today, mainly in Ireland and Scotland.
May Day is celebrated in many different ways around the world. In some countries, it is a public holiday and is marked by parades, festivals, and other cultural events. During the Middle Ages, May Day became associated with the Christian celebration of the Feast of Saint Walpurga, a missionary from England who lived in Germany in the 8th century. The holiday was known as Walpurgisnacht in Germany and was marked by bonfires and dancing.
One of the most enduring traditions associated with May Day is the dancing of the Maypole. The Maypole is a tall pole decorated with flowers and ribbons that is erected in public places. Dancers weave intricate patterns around the pole, holding onto the ribbons as they move. The pole is often referred to as a phallic symbol, harking back to the frivolity of Floralia, and associated with the arrival of spring and the rebirth of nature. In many cultures, the Maypole is seen as a symbol of fertility and renewal, and the dancing that accompanies it is a way of celebrating the rhythms of nature. In the UK you are also likely to encounter morris dancing at festivals and public houses at this time of year. May Day is the start of the dance season for such folk dance groups and many morris sides are involved in May Day and Beltane celebrations.
Awakening of the Jack in the Green is a traditional May Day custom that originated in England and is still practiced in some parts of the country today. The Jack in the Green is a person or figure who is covered in foliage, typically leaves and branches, and represents the spirit of nature and fertility. The tradition of the Jack in the Green dates back to the 16th century and is closely associated with the May Day celebrations of that time. The Jack in the Green would be led through the streets, accompanied by musicians and Morris dancers, as part of a larger May Day procession or parade. The exact origins of the Jack in the Green are unclear, but it is believed to have been inspired by ancient pagan beliefs and rituals. The figure is often seen as a symbol of the renewal of life and the return of spring, and is sometimes associated with the Green Man, a figure from mythology who represents the natural world.
Another tradition is the Crowning of the May Queen. The May Queen is a symbolic figure who represents the renewal of spring and the goddess of fertility. The tradition of crowning a May Queen dates back to medieval times, when young women were chosen to represent the fertility of the land and the coming of spring. The May Queen was usually chosen from among the unmarried girls of the community, and was often dressed in a white gown and crowned with a wreath of flowers or a crown of greenery.
All three celebrations, Floralia, Beltane and May Day, are a reminder of the importance of honouring nature and the cycles of the seasons. It is a time to celebrate new beginnings, growth, and the power of the natural world. Whether you participate in traditional Beltane celebrations or simply take a moment to appreciate the beauty of nature, this festival offers a powerful reminder of our connection to the earth and the importance of honouring it.
To embrace the joys of the time of year, why not bring a little of the natural world into your home? Langtree Botanic's Beltane fragrance is brimming with mystical frankincense and enhanced with herbs of the season.
Photo credits: Jozef Klopacka, Axel Bueckert and Archivist |
DNA Replication in eukaryotes
- DNA Replication in Eukaryotes
- Initiation of DNA Replication in Eukaryotes
- Eukaryotic Primases and DNA Polymerases
- Telomeres and Telomerases
- DNA replication is similar in all cellular organisms. It is accomplished by a huge complex of proteins called the replisome.
- Central to the functioning of the replisome are the DNA polymerases responsible for leading and lagging strand replication.
- All DNA polymerases catalyze the synthesis of DNA in the 5′ to 3′ direction using a template to synthesize a complementary DNA strand and a primer to provide a free 3′-hydroxyl group to which nucleotides can be added.
- The DNA of most bacteria is circular and replication begins at a single point, the origin of replication.
- The bacterial replicative DNA polymerase (DNA polymerase III) is recruited to the origin only after the initiator protein DnaA begins assembly of the bacterial replisome, which is composed of at least 30 proteins.
- The replication forks move bidirectionally from the origin until the entire circular chromosome is replicated.
DNA Replication in Eukaryotes
- The distinctive features of eukaryotic DNA replication arise from differences in chromosome structure and the replication machinery.
- Eukaryotes have multiple chromosomes, each of which is usually much larger than a typical bacterial chromosome.
- The DNA in eukaryotic chromosomes is linear, which means that a mechanism for replicating chromosome ends is needed.
- Finally, eukaryotic DNA is wound around histones, which must associate with the newly synthesized strands of DNA.
Initiation of DNA Replication
- Replication in eukaryotes is initiated at multiple origins of replication. This allows the chromosome to be replicated much faster than it could be if there were only one origin per chromosome.
- Two replication forks move outward from each origin until they encounter replication forks that formed at adjacent origins.
- Thus eukaryotic chromosomes consist of multiple replicons, rather than the single replicon (i.e., the entire chromosome) observed in bacteria.
- In eukaryotes, origins of replication are “marked” by a complex of proteins called the origin recognition complex (ORC), which remains bound to the origins throughout much of the cell cycle.
Major Replisome Proteins in Bacteria and Eukaryotes
| Function | Bacteria | Eukaryotes |
|---|---|---|
| Initiator; binds origin | DnaA | ORC |
| Helicase loader | DnaC | Cdc6 and Cdt1 |
| Single-strand binding | SSB proteins | RPA |
| Primer synthesis | DnaG primase | Pol α-primase |
| Replicative DNA polymerase | DNA polymerase III (C-family polymerase) | DNA polymerase δ and DNA polymerase ε (B-family polymerases) |
| Clamp loader | γ complex | RF-C |
| Ligase | DNA ligase | DNA ligase 1 |
| Primer removal | Ribonuclease H; DNA polymerase I | RNase H/Fen1 |
- Six different ORC proteins have been identified (Orc1 to Orc6), and these are combined in a species-specific manner to form the ORC.
- For instance, the ORC of the yeast Schizosaccharomyces pombe consists of six ORC proteins, two each of Orc1 (origin-recognition-complex protein 1), Orc2, and Orc4.
- ORC serves as a platform on which other proteins assemble in a cell cycle-dependent fashion. The first two proteins to associate with ORC are Cdc6 (cell-division-cycle protein 6) and Cdt1 (Cdc10-dependent-transcript 1 protein).
- The two proteins bind ORC in the late M/early G1 phases of the cell cycle.
- Together, ORC, Cdc6, and Cdt1 recruit a set of proteins called the MCM complex to the origin, thereby forming the pre-replicative complex (pre-RC). Pre-RCs are activated as the cell cycle transitions from the G1 phase to the S phase by phosphorylation of Cdc6.
- Cdc6 and Cdt1 are then replaced by numerous other proteins, forming the complete replication machinery. At this point, MCM, which has helicase activity, begins unwinding DNA and replication begins.
Eukaryotic Primases and DNA Polymerases
- DNA polymerases from many organisms have been analyzed and sorted into seven families based on their amino acid sequence and structure.
DNA Polymerase Families
| Family | Examples | Function |
|---|---|---|
| A | Bacterial DNA polymerase I | Replacement of RNA primers present in Okazaki fragments |
| B | Archaeal DNA polymerase B (Pol B); eukaryotic DNA polymerases α, δ, and ε | Replicative DNA polymerases |
| C | Bacterial DNA polymerase III | Replicative DNA polymerase |
| D | Archaeal DNA polymerase D (Pol D) | Replicative DNA polymerase in some archaea; unique to archaea |
| X | Eukaryotic DNA polymerase β | DNA repair |
| Y | Bacterial DNA polymerase IV (Din B) | DNA repair |
| Reverse transcriptase (RT) | Retroviral reverse transcriptases; telomerase RT | RNA-dependent DNA polymerase |
- As shown in table eukaryotic DNA polymerases are found in different families than are the bacterial enzymes.
- In eukaryotes, three DNA polymerases are responsible for DNA replication:
- DNA polymerase α-primase (Pol α-primase)
- DNA polymerase ε (Pol ε)
- DNA polymerase δ (Pol δ)
- Synthesis of the leading and lagging strands begins with the formation of a primer in all organisms, regardless of domain.
- However, among eukaryotes, this is accomplished with a bifunctional enzyme called DNA polymerase α-primase (often called simply Pol α-primase).
- This enzyme consists of four subunits:
- Two subunits catalyze DNA synthesis (Pol α).
- Two subunits catalyze RNA synthesis (primase).
- The primer is made in two steps:
- The primase component of the enzyme makes a short RNA strand (∼10 nucleotides), which is then transferred to the active site of Pol α.
- Pol α adds an additional 20 or so deoxyribonucleotides to the RNA strand. Thus in eukaryotes, the single-stranded primer for DNA replication is an RNA-DNA hybrid molecule.
- Once the primer is formed, the other two DNA polymerases take over.
- Evidence suggests that DNA polymerase ε (Pol ε) is responsible for leading-strand synthesis, whereas DNA polymerase δ (Pol δ) carries out lagging-strand synthesis.
- Thus unlike most bacteria where DNA polymerase III synthesizes both leading and lagging strands, two distinct DNA polymerases carry out these functions in eukaryotes.
Telomeres and Telomerases: Protecting the Ends of Linear DNA Molecules
- The fact that the DNA in eukaryotic chromosomes is linear poses several problems.
- Without protection, the ends are susceptible to degradation by enzymes called DNases.
- They are also able to fuse with the ends of other DNA molecules, thus generating aberrant chromosomes, which may not segregate properly during cell division.
- Finally, linear chromosomes present a problem during replication because of DNA polymerase’s need for a primer that provides a free 3′-OH.
- When the primer for the Okazaki fragment at the end of the daughter strand is removed, the daughter molecule is shorter than the parent molecule.
- Over numerous rounds of DNA replication and cell division, this leads to a progressively shortened chromosome.
- Ultimately the chromosome loses critical genetic information, which could be lethal to the cell. This problem is called the “end replication problem,” and a cell must solve it if it is to survive.
- Eukaryotic cells have solved the difficulties related to having linear chromosomes by forming complex structures called telomeres at the ends of their chromosomes and by using an enzyme called telomerase.
- Telomeres are protein-DNA complexes that protect the linear DNA within them from degradation and end fusion.
- The protein component of a telomere varies from species to species, as does the length of DNA present.
- Telomeric DNA contains many copies of a particular sequence of nucleotides, placed one after the other (tandem repeats).
- Importantly, the DNA is single-stranded at the very end and is called the G-tail because it is rich in guanosine bases; telomerase uses the G-tail to maintain chromosome length.
Telomerase has two important components:
- An internal RNA template
- An enzyme called telomerase reverse transcriptase (RT).
An internal RNA template
- The internal RNA template is complementary to a portion of the G-tail and base-pairs with it.
- The internal RNA template provides the template for DNA synthesis, which is catalyzed by telomerase RT (i.e., the 3′ -OH of the G-tail serves as the primer for DNA synthesis).
- After being lengthened sufficiently, there is room for synthesis of an RNA primer, and the single strand of telomere DNA can serve as the template for synthesis of the complementary strand. Thus the length of the chromosome is maintained.
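A minimal numerical sketch may help here, assuming purely illustrative parameters: 10 nucleotides lost per replication round from the unfilled terminal primer gap, and telomerase adding two copies of the human TTAGGG repeat per round. Neither number is a measured value; the point is only to show how repeated rounds erode an unprotected end while telomerase offsets the loss.

```typescript
// Toy model of the end-replication problem and its telomerase-based solution.
// Assumed, illustrative parameters only.

const REPEAT = "TTAGGG";            // human telomeric repeat (G-rich strand)
const LOSS_PER_ROUND = 10;          // assumed nucleotides lost per replication round
const REPEATS_ADDED_PER_ROUND = 2;  // assumed telomerase extension per round

function replicate(telomereLength: number, telomeraseActive: boolean): number {
  const extended = telomeraseActive
    ? telomereLength + REPEATS_ADDED_PER_ROUND * REPEAT.length
    : telomereLength;
  // Each round, the terminal primer gap cannot be filled in, so the end shortens.
  return Math.max(0, extended - LOSS_PER_ROUND);
}

let withTelomerase = 300;
let withoutTelomerase = 300;
for (let round = 1; round <= 20; round++) {
  withTelomerase = replicate(withTelomerase, true);
  withoutTelomerase = replicate(withoutTelomerase, false);
}
console.log(`After 20 rounds: ${withoutTelomerase} nt without telomerase, ${withTelomerase} nt with telomerase`);
```

With these toy numbers, the unprotected end loses 200 nucleotides over 20 rounds, while the telomerase-maintained end holds steady (and even gains slightly).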
Telomerase reverse transcriptase (RT)
- Telomerase RT deserves additional comment. As described in the preceding paragraph, telomerase RT synthesizes DNA using an RNA template.
- Enzymes with this capability are defined as RNA-dependent DNA polymerases.
- RNA-dependent DNA polymerases are not unique to telomerases; certain viruses use RNA-dependent DNA polymerases to complete their life cycles (e.g., human immunodeficiency viruses and hepatitis B viruses).
Chemical weapons are gases or liquids formulated to injure or kill living things. A chemical weapon is anything that injures, irritates, or kills by means of chemical activity; they have been called "the chemicals of mass destruction." They differ from conventional weapons not only in their formulation but also in the mode of harm they inflict.
Many chemical weapons are liquids at room temperature that turn into gases upon release. The most lethal of these are the nerve agents, or nerve gases. Other classes include blister agents such as sulfur mustard (mustard gas) and choking agents such as phosgene, both of which were widely used in World War 1.
The first nerve agent
Gerhard Schrader accidentally developed the first nerve agent, tabun, in 1934 while researching insecticides. Recognizing their potential as chemical warfare agents, researchers then made further efforts to discover and produce more nerve agents; by the end of the Second World War, chemists had developed approximately 2,000 related compounds. Three of these are termed the classic nerve agents, as they represent a specific class of chemicals:
- Tabun (GA), the easiest to produce; chemically, O-ethyl dimethylaminophosphorylcyanide.
- Sarin (GB), discovered in 1938; chemically, isopropyl methylphosphonofluoridate.
- Soman (GD), the last in the series of classic nerve agents; chemically, pinacolyl methylphosphonofluoridate.
Chemical weapons in war
Although the first nerve agents were prepared accidentally, they soon found a role as warfare weapons. The basic G-series nerve gases were all discovered or prepared before or during the Second World War. After the production of tabun, the German Ministry of Chemical Warfare allowed scientists to search for further agents like tabun so that they could be used in warfare.
- Nazi Germany committed genocide against the Jews in World War 2, in many cases using a commercial hydrogen cyanide blood agent (sold under the name Zyklon B). This remains the largest death toll inflicted by means of chemical agents.
- During World War 2, the German army weapons ministry authorized the production of nerve gases and related agents as chemical weapons. The V series of nerve agents was developed later, in the early 1950s.
- Iraq made massive use of chemical agents, including mustard gas and nerve agents, against the Kurds and Iranian troops during the Iran-Iraq War (1980-1988).
- On 20 March 1995, terrorists released sarin in a domestic attack on the Tokyo subway (the Tokyo subway sarin attack). It was the deadliest attack in Japan since the Second World War; it killed several people, and about five thousand others were injured, many suffering temporary blindness.
- During the Gulf War, chemical weapons were not used on a large scale. However, some U.S. personnel were exposed to these agents during the destruction of a chemical weapons depot, and this exposure has been linked to Gulf War syndrome, a disorder involving a range of physical and mental health problems that affected many veterans and civilian workers of the war.
- During the Syrian civil war, sarin was used on a large scale in 2013, killing hundreds of people in Syria. Chemical weapons have continued to be used there.
- As of 2016, pepper spray and tear gas remained in common use by police and armed forces in conflicts. Pepper spray is not lethal, but it causes temporary blindness and skin irritation.
States with stockpiles of chemical weapons
In June 1997, India declared its stockpile of chemical weapons, reported to be about 1,044 tons of sulfur mustard, making it one of the six nations known to hold stocks of lethal chemical agents. In accordance with the rules of the Organisation for the Prohibition of Chemical Weapons (OPCW), India later reported to the UN that it had completely destroyed its chemical weapons.
During the Iran-Iraq War, Iraq used mustard gas against the Kurds. The Halabja chemical attack on March 16, 1988, killed about 3,000 to 5,000 people, including both civilians and soldiers.
Japan also stockpiled chemical weapons from 1937 to 1945, mostly mixtures of mustard gas and lewisite. In September 2010, Japan reported the destruction of all of these stocks, as instructed by the OPCW.
Libya amassed a massive amount of chemical weapons during the Muammar Gaddafi regime. Most of these stocks have been destroyed, but about 11.25 tons of chemical agents remained. The National Transitional Council of Libya, formed after Gaddafi's regime, has been cooperating with the OPCW to ensure the complete destruction of any remaining lethal chemicals.
Russia had one of the largest stockpiles of chemical weapons. By the end of 2016, it had destroyed about 94% of its stocks, and in September 2017 it announced the destruction of its last batch of chemical weapons. Yet in March 2018, Russia was alleged not only to retain stockpiles of such agents but also to have conducted a chemical attack in Salisbury.
On 23 July 2012, the Syrian government admitted to holding stocks of chemical weapons; Syria was reported to produce a few hundred tons of these lethal agents annually. Earlier, in 2007, a Syrian army chemical weapons depot had exploded, causing several casualties, although the government declared the incident had nothing to do with chemical weapons. In August 2012, a stock of chemical weapons was located in eastern Aleppo, and the first use of a chemical weapon in the Syrian uprising was reported in March 2013. The Syrian army then used chemical weapons on a massive scale in Ghouta in August 2013, and their continued use has killed a great many innocent civilians.
The use of chemical weapons is an inhumane act and is never justified. The United States has destroyed about 90% of its chemical weapons stocks, and complete destruction is expected to take until 2023. Lasting progress, however, is possible only if all countries immediately stop producing and stockpiling these weapons.
By Maitreya Shah
This article intends to examine the user interface (UI) design strategies of accessibility overlay tools and their implications for access to the internet for people with disabilities.
Thousands of companies are increasingly using accessibility overlay tools to make their websites more accessible for people with disabilities (Markee, 2022). These tools, now dozens in the market, claim to fix web accessibility issues with automated solutions that are cheaper than employing conventional, relatively expensive human auditors. However, contrary to these claims, people with disabilities, their purported beneficiaries, allege that these tools instead obstruct their access to the internet (Morris, 2022). These tools therefore warrant closer scrutiny. What are accessibility overlay tools? How do they impact the experiences of people with disabilities on the internet? This blog examines the user interface (UI) design strategies of accessibility overlay tools and their implications for access to the internet for people with disabilities.
Significance of Digital Accessibility
The power of the web is in its universality (W3C). For the estimated 15% of the world population that lives with some form of disability (WHO and World Bank, 2011), the internet and information and communication technologies (ICTs) break traditional barriers to communication, interaction, and access to information (Samant Raja, 2016). However, when these technologies are not accessible, they risk widening the digital divide for people with disabilities (Samant Raja, 2016). 'Accessibility' refers to inclusive design practices that make websites and web applications usable for all users regardless of their disabilities (W3C). To illustrate, blind people use screen readers, which require alternate text describing images and their contents. Similarly, other elements of a web page have to be designed with blindness, hearing impairment, and cognitive and learning disabilities in mind. In 2022, WebAIM, a United States-based nonprofit, audited over one million web pages on the internet, of which only about 1% were found to be accessible.
Accessibility is one of the core principles on which the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) is premised (Lawson, 2017). Besides having an entire article (Article 9) dedicated to it, accessibility also forms part of the general principles governing the Convention. It thus has an overarching applicability, making accessibility both a standalone right and a means to access other substantive rights such as education, healthcare, political participation, and economic opportunities (Lawson, 2017; Darvishy et al, 2019). The CRPD Committee, in its General Comment No. 2, has also explicated that denial to provide accessible information could amount to discrimination on the basis of disability. Moreover, accessibility is ex ante (anticipatory) in nature, and is available to people with disabilities as a group, as opposed to being an individual right (Lawson, 2017).
Challenges in the Implementation of Web Accessibility
To streamline the process of designing accessible websites, the World Wide Web Consortium (W3C) has formulated Accessibility Guidelines referred to as ‘WCAG’. As these guidelines have been formulated by an independent international entity, they are not legally enforceable. However, domestic legislations and judicial precedents of many countries formally recognize WCAG as the standards for web accessibility compliance. The Government of India Guidelines on Website Accessibility (GIGW) 2018 for example, borrows many of its requirements from the WCAG. In the United States, courts have frequently interpreted Title III of the Americans with Disabilities Act (ADA) to include accessibility of the internet in conjunction with access to public accommodations such as shopping malls, stores, restaurants, and schools (Thompson, 2018). In Robles v. Domino’s (2019), the court refused to legally enforce WCAG, as the guidelines were formulated by a private organization. However, owing to the absence of state-backed regulations, the court granted its conformance as an equitable remedy for ADA Title III violations.
As the Department of Justice is yet to promulgate digital accessibility regulations, litigation is currently the only recourse for settling accessibility disputes in the United States. Consequently, thousands of lawsuits are filed every year alleging web accessibility violations. According to a data tracker managed by Seyfarth Shaw LLP, in 2021 alone, litigants filed over 11,000 web accessibility lawsuits in the American federal courts. This upsurge in accessibility-related court cases has an apparent causal connection to the advent of overlay tools in the market, a majority of which have originated in the United States.
There have previously been instances where the same plaintiffs have filed multiple lawsuits alleging accessibility violations against private companies. While this number is only a fraction of the overall registered cases, the legitimacy of plaintiffs and their motives have often been a subject of public scrutiny (Markham, 2021). A much-contested political issue is whether these lawsuits are filed under an underlying arrangement with law firms to profit from the compensation received from companies. Such debates have helped shape the negative perception many private enterprises hold of accessibility.
Additionally, there are many misconceptions about the costs of making a website accessible. Although developing an accessible website involves some associated costs for design, testing, and training, the Web Accessibility Initiative has clarified in its factsheet that companies do not have to incur exorbitant expenses. Still, designing accessible websites does require some monetary investment and human resources, especially if an enterprise is trying to modify an existing website to comply with accessibility standards. Thus, when overlay vendors came out with miraculous automated tools that promised inexpensive, easy compliance with accessibility standards, they seemed the perfect contrivance for many enterprises.
What are Overlay Tools?
In user interface (UI) design, a generic overlay is deployed as an additional layer on top of the pre-existing interface structure. It comes in many forms: modals, dialogue boxes, scroll bars, or tooltips (Lindberg, 2019). Overlays are generally used as interruptive tools to shift a user's attention to specific texts, confirmation messages, security checks, or advertisements. In a majority of cases, users cannot access the actual UI of the website or program until they interact with the overlays (Moran, 2021).
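As a rough illustration of this pattern (a hypothetical sketch, not the code of any particular vendor or library), the TypeScript snippet below injects a generic overlay as an extra layer stacked on top of the existing page, which the user must dismiss before reaching the underlying UI:

```typescript
// Hypothetical sketch of a generic UI overlay: an extra layer stacked on top of
// the existing interface that must be dismissed before the page can be used.

function showOverlay(message: string): void {
  // A full-viewport backdrop sits above all page content...
  const backdrop = document.createElement("div");
  backdrop.style.cssText =
    "position:fixed;inset:0;background:rgba(0,0,0,0.5);z-index:9999;" +
    "display:flex;align-items:center;justify-content:center;";

  // ...and a dialog box inside it carries the interruptive content.
  const dialog = document.createElement("div");
  dialog.setAttribute("role", "dialog");
  dialog.style.cssText = "background:#fff;padding:1.5rem;max-width:24rem;";
  dialog.textContent = message;

  const dismiss = document.createElement("button");
  dismiss.textContent = "Close";
  dismiss.addEventListener("click", () => backdrop.remove());

  dialog.appendChild(dismiss);
  backdrop.appendChild(dialog);
  document.body.appendChild(backdrop);
}

// Example: a confirmation message the user must dismiss before reaching the page.
showOverlay("Please confirm you have read our updated terms.");
```

The key design property is the stacking: the overlay sits above everything else and captures interaction first, which is precisely what makes it interruptive.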
WCAG is primarily addressed to web developers, designers, and accessibility evaluators. Its Version 2.2 contains a list of thirteen guidelines to make content accessible for different disabilities, which also have accompanying success testing criteria. Some of these guidelines include technical specifications on providing text alternatives to multimedia, designing form labels, making websites operable with keyboard commands, and writing texts in easy-readable formats (WCAG 2.1 at a Glance, 2018). Each of these functions require human evaluators that could test the success of their designs with assistive technology (The Criticisms and Objections of Accessibility Overlays, 2021).
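For contrast, the source-level changes these guidelines call for look roughly like the sketch below. The element IDs and the wording of the alt text are hypothetical; the point is that each change encodes a human judgment about meaning or purpose that has to be made in the page itself:

```typescript
// Hypothetical examples of source-level accessibility fixes a developer makes
// directly in the page, per the WCAG guidelines summarized above.

// 1. Text alternative: a human writes alt text describing what the image means.
const productPhoto = document.querySelector<HTMLImageElement>("#product-photo");
if (productPhoto) {
  productPhoto.alt = "Red ceramic mug with a 350 ml capacity, shown from the side";
}

// 2. Form label: the label is programmatically associated with its input,
//    so screen readers announce the field's purpose.
const emailLabel = document.querySelector<HTMLLabelElement>("#email-label");
const emailInput = document.querySelector<HTMLInputElement>("#email-input");
if (emailLabel && emailInput) {
  emailLabel.htmlFor = emailInput.id;
}

// 3. Keyboard operability: a clickable custom control is made reachable and
//    activatable with the keyboard, not just the mouse.
const customButton = document.querySelector<HTMLElement>("#buy-now");
if (customButton) {
  customButton.tabIndex = 0;
  customButton.setAttribute("role", "button");
  customButton.addEventListener("keydown", (event) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault();
      customButton.click();
    }
  });
}
```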
Overlays, on the other hand, can only detect programmatic issues with websites, which constitute less than 30% of the WCAG success criteria (The Criticisms and Objections of Accessibility Overlays, 2021). As they do not have access to the source code of the website, they cannot carry out any substantive changes to the UI. Moreover, although they claim to deploy Artificial Intelligence in detecting and fixing accessibility errors, they only improve the presentation of the website by providing users with options to change the text color, background, or font size, or to convert text to speech (Overlay Fact Sheet, n.d.). To people who lack knowledge about accessibility, these improvements seem beneficial. However, their value is practically overstated, given that people with disabilities already have most of these features built into their assistive technology, such as screen readers or text magnifiers (Overlay Fact Sheet, n.d.). Overlays are also built and hosted on the vendor's server, slowing down the performance of the websites they are attached to (Male, 2022).
Harms perpetuated by deception
A typical UI with an accessibility overlay installed will usually have a toolbar, ribbon, or menu (Kuykendall, 2020) declaring that the website is accessible. This kindles trust in the minds of disabled users visiting the website. However, when these tools do not actually provide an accessible experience, users become vulnerable to deception, fraud, and misrepresentation. To illustrate, overlay tools can only approximate limited information about images and can provide neither detailed nor accurate alternate text (Kuykendall, 2020). On an e-commerce platform, where product images carry greater significance, such inappropriate image descriptions could mislead people into buying products they did not intend to. These tools are also not capable of providing accurate labels for form fields, which in fact require considerable human input (Overlay Fact Sheet, n.d.). While filling out forms for crucial social security benefits or carrying out financial transactions, even a single instance of inappropriate labeling could result in major catastrophes.
As one of their worst repercussions, these tools also often block the assistive technology of users, entirely depriving them of access to websites and their content (The Criticisms and Objections of Accessibility Overlays, 2021). Many blind users, for example, have said they were not able to navigate a website because the overlay rendered the interface inoperable with keyboard commands (Morris, 2022). While changing the layout of web pages, overlays also tend to override user settings, disturbing how assistive technology works. In a lawsuit filed in the United States, the plaintiff demonstrated how his screen reader stopped working after he clicked on AccessiBe's toolbar installed on the defendant's website. Although the interface in its original form was not very accessible, the overlay ironically made it entirely impossible for him to use the website (Murphy v. Eyebobs, 2021). For someone with a cognitive disability such as autism or dyslexia that affects reading, comprehension, or memory, carefully crafted manual designs are preferable (Darvishy et al, 2021) to the inutile modifications of automated tools.
By making websites unusable for people with disabilities, they also run the risk of depriving them of crucial resources such as healthcare, livelihoods, or social protection. Studies have also highlighted the low levels of digital literacy amongst people with disabilities (Darvishy et al, 2021), making them more vulnerable to frauds, scams, and deceptive practices. Many overlay tools also tend to detect the assistive technology of users, revealing sensitive information on their disabilities and functional limitations (Overlay Fact Sheet, n.d.). Not only are such practices violative of user privacy, but they also open up avenues of discrimination. In a 2018 lawsuit, Facebook was sued for employing algorithms that detected disabilities of its users, which were then unlawfully used to exclude them from housing advertisements (Marks, 2020). There are currently no regulations explicitly governing the use of data by these overlay tools, further aggravating the vulnerability of people with disabilities.
Conclusion and Recommendations
Ashley Shew (2020) has explicated the concept of “technoableism”. Technoableism refers to technologies that are purportedly developed to benefit people with disabilities, but have underpinning historic ableist biases. Overlay tools are befitting examples of such technologies that are developed in the guise of helping the disabled, but inflict more harm in their application.
Although the growth of AI and emerging technologies is immensely beneficial for an inclusive digital future, it is equally significant to critically examine technologies that are inherently ableist in nature. One of the most effective solutions to confront deceptive practices of these tools is public awareness. Dissemination of information on their harms could potentially stop companies from buying these tools, and also equip people with disabilities with strategies to tackle them. A group of over five hundred disability rights advocates, accessibility experts, and organizations from the United States wrote the Overlay Fact Sheet to call out the manipulative tactics of these tools. A disabled web developer has also built AccessiByeBye, a browser extension that detects and thwarts overlay tools on the internet.
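To give a sense of how such a counter-tool can work, the sketch below outlines a hypothetical browser-extension content script that removes overlay widgets. It is not the actual AccessiByeBye implementation, and the vendor host names and selectors are invented placeholders, not a real block list:

```typescript
// Hypothetical content-script sketch: detect and remove third-party overlay widgets.
// The host names and widget selectors below are illustrative placeholders only.

const SUSPECTED_OVERLAY_HOSTS = ["overlay-vendor.example", "a11y-widget.example"];
const SUSPECTED_WIDGET_SELECTORS = ["#accessibility-overlay-toolbar", ".overlay-widget-launcher"];

function removeOverlayArtifacts(): void {
  // Drop script tags loaded from suspected overlay vendors.
  document.querySelectorAll<HTMLScriptElement>("script[src]").forEach((script) => {
    if (SUSPECTED_OVERLAY_HOSTS.some((host) => script.src.includes(host))) {
      script.remove();
    }
  });
  // Remove any toolbar or launcher elements the overlay has already injected.
  SUSPECTED_WIDGET_SELECTORS.forEach((selector) => {
    document.querySelectorAll(selector).forEach((el) => el.remove());
  });
}

// Run once at load and again whenever the page mutates, since overlays often
// re-inject their widgets after the initial render.
removeOverlayArtifacts();
new MutationObserver(removeOverlayArtifacts).observe(document.documentElement, {
  childList: true,
  subtree: true,
});
```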
The discourse around overlays has however conventionally been restricted to an accessibility standpoint. This blog is therefore a novel attempt to understand overlay tools from a deceptive design perspective, thereby expanding the awareness about their harms amongst a diverse group of stakeholders. When developers and designers are apprised of accessible designs at the very inception of a web development process, such add-on tools would be left redundant.
ABOUT THE AUTHOR
Maitreya Shah is a blind lawyer and a technology policy researcher. He works on the intersection of disability rights and responsible technology. He can be reached here.
Shah, M. (2023). Examining the deceptive practices of accessibility overlay tools. The Unpacking Deceptive Design Research Series. The Pranava Institute. <https://www.design.pranavainstitute.com/post/examining-the-deceptive-practices-of-accessibility-overlay-tools> |
Researchers at RTI International have developed a new solar technology that could make solar energy more affordable, and thus speed-up its market adoption.
The RTI solar cells are formed from solutions of semiconductor particles, known as colloidal quantum dots, and can have a power conversion efficiency that is competitive to traditional cells at a fraction of the cost.
Solar energy has the potential to be a renewable, carbon-neutral source of electricity but the high cost of photovoltaics — the devices that convert sunlight into electricity — has slowed widespread adoption of this resource.
The RTI-developed solar cells were created using low-cost materials and processing techniques that reduce the primary costs of photovoltaic production, including materials, capital infrastructure and energy associated with manufacturing.
Preliminary analysis of the material costs of the technology show that it can be produced for less than $20 per square meter — as much as 75 percent less than traditional solar cells.
“Solar energy currently represents less than 1 percent of the global energy supply, and substantial reductions in material and production costs of photovoltaics are necessary to increase the use of solar power,” said Ethan Klem, a research scientist at RTI and co-principal investigator of the project. “This technology addresses each of the major cost drivers of photovoltaics and could go a long way in helping achieve that goal.”
The technology was recently featured in a paper published in Applied Physics Letters.
In demonstration tests, the cells consistently provided a power conversion efficiency of more than 5 percent, which is comparable to other emerging photovoltaic technologies.
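Taken together with the cost figure above, a back-of-the-envelope calculation is possible. Assuming the quoted $20 per square meter in materials, the demonstrated efficiency of about 5 percent, and the standard test insolation of 1,000 watts per square meter (the last figure is an assumption, not taken from the article), the material cost works out to roughly $0.40 per peak watt:

```typescript
// Back-of-the-envelope estimate of material cost per peak watt.
// $20/m^2 and ~5% efficiency are the figures quoted in the article;
// 1000 W/m^2 is the standard test insolation, assumed here for illustration.

const materialCostPerSquareMeter = 20;   // USD per m^2 (quoted)
const powerConversionEfficiency = 0.05;  // ~5% (quoted)
const standardInsolation = 1000;         // W per m^2 (standard test condition, assumed)

const peakWattsPerSquareMeter = powerConversionEfficiency * standardInsolation; // 50 W/m^2
const materialCostPerPeakWatt = materialCostPerSquareMeter / peakWattsPerSquareMeter;

console.log(`~${peakWattsPerSquareMeter} W/m^2 at 5% efficiency`);
console.log(`Material cost: ~$${materialCostPerPeakWatt.toFixed(2)} per peak watt`);
```

Doubling the efficiency through better absorption would halve that figure, which is why the absorption improvements mentioned below matter.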
“The efficiency of these devices is primarily limited by the amount of sunlight that is absorbed,” said Jay Lewis, a senior research scientist at RTI and the project’s other principal investigator. “There are many well-known techniques to enhance absorption, which suggests that the performance can increase substantially.”
The cells, which are composed of lightweight, flexible layers, have the potential to be manufactured using high volume roll-to-roll processing and inexpensive coating processes, which reduces capital costs and increases production. Unlike traditional solar cells, the RTI-developed cells can be processed at room temperature, further reducing input energy requirements and cost.
In addition to being low-cost, the new cells have several other key benefits, including higher infrared sensitivity, which allows the cells to utilize more of the available solar spectrum for power generation. |
World War II
World War II, known in the former Soviet Union as the Great Patriotic War, was a global set of conflicts beginning in 1931 in Asia, 1935 in Africa, and 1939 in Europe, all lasting until 1945, in which the Allied powers, led after the Fall of France by the British Commonwealth and including the United States, the Soviet Union, and the Republic of China, among many other nations, completely defeated the Axis powers, led by Nazi Germany and including Italy, Japan, Hungary, Romania, and Bulgaria. Although Japan's war against China began in 1937, the main conflict started in September 1939, when Germany and the Soviet Union divided Poland; Britain and France then declared war on Germany. France was quickly knocked out of the war and became divided between the German-occupied north, the collaborationist Vichy regime in the south, and the so-called "Free French" in exile in England and North Africa.
The conflict was the deadliest in human history with estimated deaths ranging from 50 million to over 70 million soldiers and civilians. It ended with the Soviet Union dominant in a part of Central Europe and all of Eastern Europe, and the U.S. and its allies dominant in Western Europe, a part of Central Europe and Scandinavia, setting the stage for the Cold War.
Invasion of Poland
- See also: Molotov-Ribbentrop pact and World War II: 1939
In the immediate run-up to WWII, there were frequent German reports of border incidents involving Polish troops. On August 31, 1939, German covert operatives staged a fake attack by Polish troops on a German radio station. WWII started on September 1, 1939, when German troops invaded Poland. Hitler justified the invasion as a defensive act, pointing to the frequent border incidents, and famously declared that from that moment on Germany would strike back.
The major tactical innovation of the war was the use of combined arms warfare, typified by the German doctrine of blitzkrieg. In this style of warfare armor, infantry, artillery and air power (see Luftwaffe) all coordinate to achieve overwhelming superiority at a point on the enemy lines. Armor and fast-moving infantry units then exploit the gap and penetrate deep behind enemy lines. The objective is to cause a widespread collapse of the enemy's ability to fight. It was particularly effective during the early stages of the war, before the Allies developed effective countermeasures. On September 17, 1939, Poland was invaded from the east by Hitler's ally, Stalin. Before the month was out, the Nazi and Soviet armies staged a joint victory parade through the streets of occupied Brest-Litovsk, Poland, where the Soviets handed over to the Gestapo some 600 prisoners, "most of them Jews."
In 1939-1940, eastern Poland, Estonia, Latvia, Lithuania and Bessarabia were invaded and annexed by the Soviet Union.
Fall of France, Denmark, and Norway
- See also: Anti-Comintern Pact
Once the invasion of Poland was complete, German forces regrouped while French and British forces remained on the defensive, leading US commentators to dub the period the Phoney War. May 10, 1940 made clear that the war was real, as Germany invaded France, occupying the neutral Netherlands, Luxembourg, and Belgium in the process. Resistance by the British and French armies proved ineffective, and France soon surrendered. British and French troops were routed and evacuated from mainland Europe at Dunkirk. France was divided into the German-occupied north and the collaborationist Vichy regime in the south, including Corsica. The United States granted full diplomatic recognition to the Vichy regime, whereas the United Kingdom recognized the French National Committee led by Gen. Charles de Gaulle.
The collapse and occupation of France, together with Germany's non-aggression pact with the Soviet Union, its alliance with fascist Italy and an expansionist Japan, the benevolent neutrality of fascist Spain, and the fact that little of Europe remained outside Axis control, led many to assume that Britain was effectively defeated. Indeed, the seemingly foolish decision of a relatively weak Britain to continue the war appears to have taken the Axis powers off guard. This decision ensured that the remaining British Empire was still involved in the war, with Japan threatening many British possessions in Asia.
In 1940 Denmark and Norway were invaded by German forces, to preempt a British occupation of Norway and to secure its coastline and ports for use by the Kriegsmarine. Norway also contained a source of heavy water, potentially crucial in the construction of an atomic weapon. The operation was successful, but losses were heavy, especially for the Kriegsmarine. It was soon followed by the British occupation of Iceland and the American occupation of Greenland. (The goal of preventing any increase in the range of German air and submarine activity brought about the occupation of these lands, and later of the Azores at the request of the Portuguese Government.)
All these countries, France, the Low Countries, Denmark, and Norway, provided troops and manpower to the SS, and industrial capacity to the German war machine.
Finland: The Winter War
The Soviet Union invaded Finland on November 30, 1939. This conflict came to be known as the "Winter War". Despite the overwhelming numbers of the Red Army, the Finnish resistance was strong and the battle was hard-fought before the Soviet army took control. Outside powers (including the U.S.) considered intervention to help Finland; only a little aid trickled in and Finland was forced to sue for peace. The peace treaty signed in March 1940 favored the Soviets, but they paid heavily for their victory with 200,000 dead. Finland lost 25,000 dead, and had to absorb 400,000 refugees from areas turned over to the Soviets. In 1941 Finland joined Germany in attacking the Soviets, in the Siege of Leningrad, but lost again.
An armistice in Sept. 1944 stabilized the border, using March 1940 lines; in addition Finland had to pay heavy reparations and had to remain neutral in the Cold War.
Battle of Britain
- See also: Battle of Britain
With Britain the sole opposing European nation, the Battle of Britain commenced. The Luftwaffe attempted to achieve aerial dominance over the south of Britain, in order to allow a sea-based invasion of the British Isles to proceed. From 10 July to the end of October the Royal Air Force and the Luftwaffe fought for dominance; the resilience of the RAF, which also counted Commonwealth personnel, US volunteers, and Polish and Czech exiles in its ranks, and the use of radar and its associated early warning systems forced a rethink of German tactics. It was the first significant setback for the Germans in the war. They now concentrated on the great population centers, especially London, hoping that huge civilian casualties would weaken morale and lead to a lessening of the war effort by the populace. The period that followed, popularly known as the Blitz, lasted into May 1941. Around 40,000 civilians and civil defense workers died, but the Germans failed to reach their objectives, and their resources were soon diverted to the Eastern Front as Hitler began concentrating on the impending invasion of the Soviet Union.
With the pressure off their air bases the RAF was now able to increase its nightly raids on industrial sites in Germany and occupied lands. Because of the inability to correctly target these sites, the raids soon turned into “area bombing”, and German civilian casualties rose. These raids were to reach further into Germany as the war progressed and were greatly increased when American bombers began their sorties.
Battle of the Atlantic
The aircraft carrier HMS Courageous was sunk by a German submarine on September 17, 1939; the carrier Ark Royal had narrowly escaped a similar fate three days earlier. The Kriegsmarine scored an even greater victory in October, when U-47 penetrated the Royal Navy base at Scapa Flow and sank the WWI-era battleship Royal Oak.
During World War II, U-boats were primarily used to destroy Lend-Lease transport vessels supplying the United Kingdom, with the aim of causing supply shortages and forcing Britain to surrender. At first this was highly successful, but the Allies later developed many countermeasures, such as properly organized convoy escorts, the magnetic anomaly detector that sensed the change in the local magnetic field caused by a submerged U-boat, the 'Huff-Duff' direction-finding system that tracked U-boats by their radio transmissions, and improvements in depth charges and sonar.
The primary weapons of U-boats were torpedoes.
- See also: Battle of El Alamein and Italy_in_World_War_II#North_Africa
The Battle of El Alamein took place in the North African desert in Egypt in October–November 1942. British and Commonwealth forces led by General B. L. Montgomery ('Monty' to the troops) attacked and overwhelmed a German–Italian force led by Field Marshal Erwin Rommel. Following the victorious outcome of the battle, the Allied forces chased the Germans westwards across North Africa to Tunisia, where, in concert with an American army which had landed in North Africa in Operation Torch, the Axis forces were driven out of Africa.
Alamein was also significant in raising morale in Britain, as it was the first significant land victory over German forces by the British Army. Churchill remarked that "before Alamein, we never had a victory; after Alamein, we never had a defeat." This was not entirely accurate, but did pinpoint the battle as a turning point in British conduct of the war, which had hitherto seen a series of defeats against Germany (Dunkirk, Greece, Crete, the Desert).
The battle also cemented the reputation of Montgomery as a victorious general. He was a cautious commander who carefully built up a great superiority in arms, equipment, and men before launching his attack. Though criticized by some for over-caution in action, and for over-exuberance, not to say arrogance, in his dealings with other generals and politicians, he was a popular leader because of his care for the welfare of his men.
Michael Tracey writes:
|"Those keen to maintain an ability to say that US entry to World War II was justified, even if certain methods or tactics were not justified, often point to a few high-profile examples of bombing raids which they think may have crossed a line. These commonly include the firebombing of Dresden, the firebombing of Tokyo, and the nuclear strikes on Hiroshima and Nagasaki — all of which deliberately targeted civilians for mass destruction. But less commonly understood is that these attacks were not aberrational. Deliberate targeting of civilians was always a foundational tenet of the Allied (and US) aerial warfare strategy.
According to a review published in 2006 by Air University Press, and conveniently available on the Defense Department’s website, US perpetration of so-called “area raids” in the European theater was officially authorized and systematic.
The first “city bombing attack” conducted by British forces was in Mannheim, Germany on December 16, 1940 — crews were ordered to “drop incendiaries on the center of town,” reports the Air Force review. “The attack had the clear intention of burning out the city center.” By September 27, 1943, the US officially institutionalized this tactic. In a raid on the German city of Emden, the command headquarters of the Eighth Air Force “ordered the attacking aircraft to aim for the center of the city, not specific industrial or transportation targets.” As the 2006 Air Force review specifies, “by definition an area raid on a city requires a large percentage of incendiaries.” From that point onward, the Eighth Air Force conducted at least one “area raid” per week until the end of the war. Previously raids which deliberately targeted civilian populations had occurred on a more ad hoc basis, such as on August 12, 1943, when 106 US bombers “visually attacked the city of Bonn as a target of opportunity with 243 tons of bombs.”
In January 1945, General George McDonald pointed out that in its large-scale adoption of this tactic, the US Air Force was “unequivocally into the business of area bombardment of congested civil populations,” causing “indiscriminate homicide and destruction.” In certain Air Force records, deliberate bombings of cities were concealed and falsely classified as attacks on “military targets.” But this was not some rogue activity; in a joint directive issued on January 24, 1943, Franklin Roosevelt and Winston Churchill ordered their respective air forces to “undermine the morale of the German people.” The prevailing theory was that this could be accomplished by deliberately fire-bombing civilian population centers to instill “generalized fear,” as Sir Charles Portal, Chief of the Air Staff, explained. In early 1942, the British forces had adopted “an almost exclusive focus on city centers.” As early as July 9, 1941, months before formal US entry to the war, Roosevelt had directed his military chiefs to develop operational principles for the forthcoming aerial bombardment campaign. The chiefs concluded that a central aim should be the “undermining of German morale by air attack of civil concentrations… heavy and sustained bombing of cities may crash that morale entirely.”
Even US aerial attacks in which civilians may not have been deliberately targeted are difficult to distinguish from US aerial attacks in which civilians were deliberately targeted. As Tami Davis Biddle, professor at the United States Army War College, wrote in 2005:
Years after the civilian-targeting policy had been systematically implemented, in February 1945, Secretary of War Stimson declared: “We will continue to bomb military targets and… there has been no change in the policy against conducting ‘terror bombings’ against civilian populations.” There had only been “no change” insofar as deliberate targeting of civilians had long been the policy. This official deception continued for some time, such as on July 23, 1945, when — repeating as fact the claims made by US Air Force General James Bevans — the New York Times reported: “During the entire European war, the American air forces concentrated on precision bombing.”
“The leaders of the USAAF knew exactly what they were doing, and civilian casualties were one of the explicit objectives of area incendiary bombing approved by both the USAAF and the Joint Chiefs of Staff,” concluded Thomas Searle in the Journal of Military History (2002).
According to records referenced by Alex J. Bellamy (University of Queensland) in Massacres and Morality: Mass Atrocities in an Age of Civilian Immunity (2012), the US aerial bombardment campaign in Japan was designed by Air Force planners who “used three criteria to select targets. In order of importance, they were: (1) ‘congestion/inflammability’ of the city; (2) incidence of war industry; (3) incidence of transportation facilities.”
Bellamy writes, “Despite public claims to the contrary, therefore, the planners clearly chose cities themselves as targets and primarily on the basis of the likely destruction that could be wrought, with the presence of war industries a secondary consideration to the potential for destroying cities congested with civilians. The presence of military facilities was apparently not a major factor in target selection.”
- See also: Operation Barbarossa
1941 marked the major turning point in the war in Europe, when the Germans broke the Molotov-Ribbentrop Non-Aggression Pact and undertook Operation Barbarossa, the invasion of the Soviet Union. Stalin was repeatedly informed by his own spies and by anti-German countries that Germany was about to attack; he rejected the accurate reports and paid dearly for the blunder. The Great Purge of the late 1930s had also decimated the Red Army's military leadership.
In June—behind schedule because of diversions in the Balkans—the Germans launched their massive war against the Soviet Union (known as the "Great Patriotic War" in Russia). It was by far the largest, bloodiest, and most decisive phase of World War II. Outside observers in the first few months figured that Germany would win easily. But the Nazi armies were split three ways, logistics became worse and worse as distances grew, and none met their objective by the time the extreme Russian winter of 1941-42 set in. Blitzkrieg had failed against the Soviets, and the Germans lacked the resources to fight a long war against a country with such vast areas and so many more people. The Luftwaffe, which promised to overcome the slowness of ground travel, failed to provide adequate support and was soon matched and outnumbered by the Soviet air force.
The 14th Waffen Grenadier Division (1st Galician) was a German military formation made up predominantly of military volunteers with a Ukrainian ethnic background from the area of Galicia, later also with some Slovaks and Czechs. Formed in 1943, it was largely destroyed in the battle of Brody, reformed, and saw action in Slovakia, Yugoslavia and Austria before being renamed the first division of the Ukrainian National Army and surrendering to the Western Allies by 10 May 1945. Volodymyr Kubiyovych (who had a Ukrainian father and a Jewish mother) founded the division so that Ukrainians could aid the Ukrainian Insurgent Army with weapons.
The Nachtigall Battalion, also known as the Ukrainian Nightingale Battalion Group, or officially as Special Group Nachtigall, was a subunit under the command of the German Abwehr (military intelligence) special operations unit "Brandenburg". Along with the Roland Battalion, it was one of two military units formed on February 25, 1941 by the head of the Abwehr, Adm. Wilhelm Canaris, who sanctioned the creation of the "Ukrainian Legion" under German command. It was composed of volunteer "Ukrainian nationalists" operating under Stepan Bandera's OUN orders.
In three villages of the Vinnytsia region, "all Jews which were met" were shot.
The Simon Wiesenthal Center contends that between June 30 and July 3, 1941, in the days that the Battalion was in Lviv the Nachtigall soldiers together with the German army and the local Ukrainians participated in the killings of Jews in the city. The pretext for the pogrom was a rumor that the Jews were responsible for the execution of prisoners by the Soviets before the 1941 Soviet withdrawal from Lviv. The Encyclopedia of the Holocaust states that some 4,000 Jews were kidnapped and killed at that time. It further states that the unit was removed from Lviv on July 7 and sent to the Eastern Front.
The Ukrainian Insurgent Army (UPA) arose out of separate militant formations of the OUN-Bandera faction (the OUNb). The political leadership belonged to the OUNb. It was the primary perpetrator of the massacres of Poles in Volhynia and Eastern Galicia.
Its official date of creation is 14 October 1942; the Ukrainian People's Revolutionary Army used the same name (Ukrainian Insurgent Army, or UPA) during the period from December 1941 until July 1943.
The OUN's stated immediate goal was the re-establishment of a united, quasi-independent Nazi-aligned, mono-ethnic nation state on the territory that would include parts of modern day Russia, Poland, and Belarus. Violence was accepted as a political tool against foreign as well as domestic enemies of their cause, which was to be achieved by a national revolution led by a dictatorship that would drive out what they considered to be occupying powers and set up a government representing all regions and social classes.
Battle of Kharkov
As the Germans advanced on Moscow in the summer of 1941, and against the advice of the German High Command, Hitler suddenly detoured the center line of advance south, creating a huge traffic jam as the central column had to cross the southern advance. This slowed the advance on Moscow, which the Germans did not reach until October 16, 1941, as the snowy season set in.
The army group sent south engaged the ill-prepared, ill-equipped Red Army forces at Kharkov. 600,000 Red Army troops were quickly encircled and taken prisoner without much of a fight. Hitler declared it, "the greatest military victory of all time."
Most of the Russian POWs became slave laborers digging tank traps for Soviet T-34s; some Ukrainian POWs were recruited into German fighting units in both eastern Europe, where they committed atrocities against the civilian population, and the Western Front to guard the Atlantic Wall in advance of, and during, the Normandy Invasion.
Second Battle of Kharkov
On 17 May 1942, the German 3rd Panzer Corps and XXXXIV Army Corps under the command of Fedor von Bock, supported by aircraft, arrived, enabling the Germans to launch Operation Fridericus and push back the Soviet Barvenkovo bridgehead to the south. On 18 May, Marshal Semyon Timoshenko requested permission to fall back, but Stalin rejected the request. On 19 May, Gen. Paulus launched a general offensive to the north as Bock's troops advanced in the south, attempting to surround the Soviets in the Izium salient. Realizing the risk of having entire armies surrounded, Stalin authorized the withdrawal, but by that time the Soviet forces had already begun to be encircled. On 20 May, the nearly surrounded Soviet forces mounted counteroffensives, but none of the attempts succeeded in breaking through the German lines. The Soviets achieved some small victories on 21 and 22 May, but by 24 May they were surrounded near Kharkov.
The Second Battle of Kharkov resulted in an extremely costly defeat for the Soviets, who saw 207,000 men killed, wounded, or captured; some estimates put the number as high as 240,000. Over 1,000 Soviet tanks were destroyed during the battle, along with the loss of 57,000 horses. German losses were much smaller, with over 20,000 killed, wounded, or captured. Soviet General Georgy Zhukov later blamed this major defeat on Stalin, who underestimated German strength in the region and failed to prepare an adequate reserve force to counter the arrival of the German reinforcements that turned the tide.
Siege of Leningrad
- See also: Siege of Leningrad
The Siege of Leningrad began in September 1941 when the armies of Nazi Germany and Finland surrounded the Russian city of Leningrad (now St. Petersburg). During the winter of 1941-42 people in Leningrad began to die in large numbers because the Germans and Finns would not allow food into the city. Many civilians were also killed by German bombing.
The Red Army finally broke the siege in January 1944. During the siege, 1.2 million people died of starvation as a result of the German and Finnish blockade. The Siege of Leningrad killed more civilians than the bombings of Hamburg, Dresden, Hiroshima, and Nagasaki combined.
Siege of Moscow
American Lend-Lease tanks provided approximately 30% of the tank force in the defense of Moscow.
Turning point: The Battle of Stalingrad
- See also: Battle of Stalingrad
In the third year of the war, Germany began to suffer from a lack of important resources such as oil. Hitler therefore ordered the German army to take the city of Stalingrad and the oil fields of Baku in the Caucasus. The operation failed after the German 6th Army and parts of the 4th Panzer Army were encircled in Stalingrad and completely annihilated. The Battle of Stalingrad marked a turning point in the war, and the Soviet Union started launching its own offensives.
Warsaw Ghetto Uprising
- See also: Warsaw Ghetto Uprising
Word of the German defeat at Stalingrad spread, encouraging resistance movements to rise.
Beginning at 3 am on April 19, 1943, Nazi tanks and SS troops began assaulting a Jewish ghetto in Poland. Jewish resistance fighters, totaling 700 to 750, opposed 2000 heavily armed Germans. The Jewish resistance fighters had some weapons, but no more than three light machine guns.
Waffen SS soldiers and elements of the Sicherheitsdienst sought to clear the ghetto of 60,000 Jewish residents in only three days. But the resistance fighters held off the Nazis for nearly a month, marking the first time in World War II that there was an uprising against Nazis in territory under German control. Eventually, SS forces destroyed the ghetto and its synagogue.
Battle of Kursk
- See also: Battle of Kursk and Cherkassy-Korsun Pocket
The Battle of Kursk in 1943 was the largest tank battle in history. German forces attempted to encircle Soviet forces in the Kursk salient on the Eastern Front, but strong Soviet resistance defeated the German assault. Kursk was the last German offensive of any strategic significance in the east; henceforth, the Germans would conduct a fighting retreat. Germany was already suffering from a grave shortage of war resources when Hitler ordered the offensive, and it lost a huge number of tanks, including the new Tiger and Panther models.
- See also: Operation Overlord
Operation Bagration and the Vistula front
- See also: Vistula-Oder Operation
After a time of comparatively slow progress, the brilliant Soviet officer Konstantin Rokossovsky engineered "Operation Bagration", named after the Russian hero of the Napoleonic Wars. The operation was extremely successful for the Soviets: it cost around 600,000 Soviet casualties but inflicted over 500,000 German casualties and destroyed over 60,000 German vehicles and tanks. Furthermore, the German Army Group Center (Heeresgruppe Mitte) ceased to exist as an effective fighting force, owing to its massive losses in men and materiel. Even the Germans' best officer, Erich von Manstein, could not turn the situation around.
The Vistula-Oder Operation took place on the Eastern Front between January 12 and February 2, 1945. Hitler placed Heinrich Himmler, who dreamed of being a combat leader but had never had any combat training or duty, in charge of German operations on the Vistula front. Soviet troops, led by Marshals Georgi Zhukov and Ivan Konev, advanced from the Vistula river in Poland to the Oder river, which was only 50 miles from Berlin. The extermination camp at Auschwitz was liberated on January 27, 1945. The Wehrmacht suffered enormous losses as a result of the operation.
The Waffen-SS Division Charlemagne was formed in September 1944 from French collaborationists, many of whom were already serving in various other German units. Named after Charlemagne, the 9th-century Frankish emperor, it superseded the existing Legion of French Volunteers Against Bolshevism, formed in 1941 within the German Army, and the SS-Volunteer Sturmbrigade France, formed in July 1943. The division also included French recruits from other German military and paramilitary formations, as well as Miliciens, members of the Milice created by the Vichy regime to help fight the French Resistance. The SS Division Charlemagne had 7,340 men at the time of its deployment to the Eastern Front in February 1945. It fought against Soviet forces in Pomerania, where it was almost annihilated within a month.
Battle of Berlin
Finally, in 1945, Soviet troops stormed Berlin, and forced Nazi Germany into capitulation. Around 300 members of the Waffen-SS Division Charlemagne participated in the Battle in Berlin in April–May 1945 and were among the last Axis forces to surrender.
The UK, the US, and the Soviet Union reached an agreement at Potsdam, known as the Potsdam Agreement, on 1 August 1945. The Allied Control Council was constituted in Berlin to execute the Allied resolutions known as the "Four Ds":
- Denazification of the German society to eradicate Nazi influence
- Demilitarization of the former Wehrmacht forces and the German arms industry
- Democratization, including the formation of political parties and trade unions, freedom of speech, of the press and religion
- Decentralization, resulting in German federalism, along with the dismantling of industry as part of the industrial plans for Germany.
Far East and Pacific
- See also: Second Sino-Japanese War
After Pearl Harbor, the Japanese juggernaut seemed unstoppable. In the south, they conquered the Philippines, the oil-rich Dutch East Indies, and Malaya, and extended their reach as far as the Solomon Islands. In the west, they seized Burma and the vital port at Rangoon, and even attacked British forces at Ceylon. The Japanese empire now reached as far as Wake Island in the east and the Aleutian Islands to the north. Attacks on Japanese targets, including the Doolittle raid, boosted American morale but did little material damage. In May 1942, Japanese forces were finally halted at the Battle of the Coral Sea, which cost the Americans a precious aircraft carrier but saved southern New Guinea. At the Battle of Midway a month later, the Japanese lost four of their best carriers, suffering a blow to their sea power from which they never recovered.
The Americans took the offensive in August with a landing on the island of Guadalcanal. The overall American offensive strategy was two-pronged. Forces in the south advanced up the Solomon island chain and New Guinea, while in the central Pacific, Marines took island after island, including Tarawa, Eniwetok, Saipan, and Guam. The two lines of attack came together at the Philippines.
Integral to the strategy was the policy of island hopping. Many Japanese strongholds were bypassed, allowing the American forces to concentrate on more strategically significant islands. For example, Truk and Rabaul were home to major Japanese air and naval bases, but once the bases were neutralized, there was no reason to take on the troops there. This policy not only saved thousands of American (and Japanese) lives, it shortened the war by at least several months.
The American invasion of the Philippines began in late October 1944, when American troops landed on Leyte Island. A few days later, the US Navy shattered what was left of Japanese naval power in the Battle of Leyte Gulf. The Japanese fought hard, however, and Leyte took two months to secure. When the Americans landed on the other islands, they found the troops there equally unwilling to retreat, but with American superiority in almost every area, the outcome was never really in doubt. Manila was captured by March, and the American position had become solid enough that leaders could start preparing for the final stage: the invasion of Japan. The first step was taken when the island of Okinawa was captured in June after two months of heavy fighting. Operation Olympic, the invasion of Kyushu, was scheduled for November 1945, followed by Operation Coronet, the invasion of Honshu, in March 1946.
The Japanese, soldiers and civilians alike, were expected to put up a fierce defense. Army Chief of Staff George C. Marshall believed that Japan would fight to the last man, and insisted on preparing for a land invasion of Japan with an army of 2,000,000 men anticipating a tremendous number of casualties. Some analysts estimated the number of projected casualties from Operation Olympic alone at 250,000 dead and wounded. For this reason, Washington strongly requested the Soviets declare war on Japan. At the Potsdam conference in mid-July 1945, Stalin told President Truman the Soviets would declare war on Japan but would not give a firm timetable. (This was the last of the four "Allied" conferences, taking place in mid-July 1945; the other three were: the Tehran Conference from November 28 to December 1, 1943; the Cairo Conference from November 22 to November 26, 1943; the Yalta Conference from February 4 to February 11, 1945.)
After the successful atomic bomb test in the U.S., President Truman was left with the immense task of deciding what to do with the power of the atomic bomb. Truman assembled a committee to advise him. The committee recommended that the bomb be used on the Japanese home islands to save American lives and to produce maximum shock in the hope of convincing the Japanese to surrender. Therefore, on August 6, 1945, a B-29 Superfortress, the "Enola Gay," piloted by Paul Tibbets, dropped an atomic bomb (what would now be called a nuclear weapon) on Hiroshima. Japan's war council still insisted on its four conditions for surrender, refusing unconditional surrender. So on August 9, "Bockscar," a B-29 named for its usual commander Frederick C. Bock but flown that day by Charles W. Sweeney, dropped the second atomic bomb on Nagasaki. The necessity of the second bombing at Nagasaki has been debated, as the Soviet Union had declared war on Japan and Japan was blockaded; however, the Japanese war council still refused unconditional surrender before the second bomb was dropped.
The Soviet Union declared war on Japan at midnight on August 8, 1945, in response to the American requests and in a last-minute grab for the spoils of war. It invaded Manchuria and Korea with 1.6 million troops, and the Japanese army there disintegrated. The Soviets captured some 600,000 military and civilian prisoners, many of whom never returned home. It was no longer possible for the Imperial Army to defend the emperor. On August 15, 1945, Emperor Hirohito announced by radio broadcast that Japan would accept the terms of the Allies: unconditional surrender. In a follow-up message, the Japanese government stated that it was surrendering with the understanding that the Emperor would remain on the throne and would not be hanged as a war criminal. Washington agreed, saying the authority of the emperor would be "subject to the Supreme Commander of the Allied Powers." On September 2, 1945, the Japanese Emperor formally surrendered all Japanese forces to the Allies in a famous ceremony aboard the battleship USS Missouri in Tokyo Bay. This was the end of World War II, six years almost to the day after it began.
The war caused between 70 and 85 million deaths (about 3% of the world's population) and untold numbers of seriously wounded. Citizens of the Soviet Union and China accounted for roughly half of the deaths. The Chinese lost between 15 and 20 million people, or about 3-4% of their population. The Soviets lost 16-18 million civilians and 9-11 million soldiers (about 14% of their population), and a similar number were seriously wounded. The Soviet Union lost 70,000 villages, 1,710 towns and 4.7 million houses. Of the 15 republics of the Soviet Union, the Russian republic lost 12.7% of its population, about 14 million people, just over half of them civilians. Ukraine lost nearly seven million, over five million of them civilians, a total of 16.3% of its population. The U.S. lost about 12,000 civilians and 407,300 military personnel, or roughly one-third of one percent of its population.
In Ukraine, civilian deaths were so much higher than in other countries because Hitlerite Nazis and Banderite Ukrainian fascists especially targeted civilian Jews, Poles and ethnic Russians. Stepan Bandera fought with the German Nazis as leader of the Organization of Ukrainian Nationalists. Later, he separated from the Germans and continued killing Russians and Jews. Bandera remains a national hero of the Ukrainian fascist forces.
Effects of war on empires
- The Red Army conquered much of eastern Europe and created an "Iron Curtain," destroying independent national governments and making them all subservient to Moscow. The US grudgingly tolerated this imperialism until 1947, when it was Greece's turn. Then the US drew the line and adopted a policy of containment. Because of the geography of the war, Yugoslavia and Albania escaped the Red Army. They fell under the control of independent Communists--Yugoslavia received American support, and Albania turned to Red China for help against the Soviets.
- The war effectively bankrupted Britain, which soon gave up India (which was then partitioned into India and Pakistan), Ceylon, and many of its other colonies.
- French Empire
- France saluted its overseas empire as the savior of France and wanted control back. That led to nasty large-scale wars in Algeria and Vietnam, both of which France lost.
- The Netherlands and Indonesia
- The Dutch returned to the Dutch East Indies to face an insurrection they could not handle. In 1949 the Dutch acknowledged the sovereignty of Indonesia, a non-Communist state.
- Supremacy of the USA in the Western World
- The war left the U.S. with a vastly stronger economy than anyone else. To reduce budget deficits the military was demobilized, but long-term strategy was left in confusion after Roosevelt's death.
- Battle of the Bulge, Dec. 1944
- Battle of Halbe, Apr. 1945
- Battle of Okinawa, spring 1945
- Battle of Poznan, 1945
- Second Sino-Japanese War, 1937–45
- Dwight D. Eisenhower, US general
- Omar Bradley, US general
- George S. Patton, Jr., US general
- Luftwaffe, German air force
- World War II in the Air
- Bombing of Japan
- Bombing of Germany
- Attack on Pearl Harbor, Dec. 1941
- Battle of Midway, June 1942
- Chester W. Nimitz, US Admiral
- Holocaust, killing the Jews
- World War II Axis Leaders
- Charles de Gaulle, France
- Adolf Hitler, Germany
- Franklin D. Roosevelt, US
- Winston Churchill, Britain
- Joseph Stalin, Russia
- Chiang Kai-shek, China
- Benito Mussolini, Italy
- William Lyon Mackenzie King, Canada
- John Curtin, Australia
For a more detailed guide, see Bibliography of World War II
- Dear, I. C. B. and M. R. D. Foot, eds. Oxford Companion to World War II (in Britain titled Oxford Companion to the Second World War) (2005; 2nd ed. 2009). The best reference book.
- Times Atlas of the Second World War (1995)
- Weinberg, Gerhard L. A World at Arms: A Global History of World War II (1994). The best overall view of the war.
- Wheeler, Keith. The Fall of Japan (1983)
- ↑ http://users.erols.com/mwhite28/warstat1.htm#Second
- ↑ Olaf Groehler, Selbstmorderische Allianz: Deutsch-russische Militarbeziehungen, 1920-1941 (Berlin: Vision Verlag 1993), pp. 21-22, 123-124; Nekrich 1997: 131. Cf. Anthony Read and David Fisher, The deadly embrace: Hitler, Stalin, and the Nazi-Soviet Pact, 1939-1941 (M. Joseph, 1988), ISBN 0718129768, p. 336; Nigel Thomas, World War II Soviet Armed Forces (1): 1939-41 (Osprey Publishing, 2010), ISBN 1849084009, p. 15; Norman Davies, Rising '44: the battle for Warsaw (Viking, 2004), ISBN 0670032840, p. 30
- ↑ Louis Rapoport, Stalin's war against the Jews: the doctors' plot and the Soviet solution (Free Press, 1990), ISBN 0029258219, p. 57. Cf. Guy Stern, "Writers in Extremis," Simon Wiesenthal Center annual, Vol. 3 (Rossel Books, 1986), p. 91; Robert Conquest, The Great Terror: A Reassessment (Oxford University Press US, 2007), ISBN 0195317009, p. 402; Richard J. Evans, The Third Reich in Power (Penguin, 2006) ISBN 0143037900, p. 694
- ↑ Celebrations Marking 60 Years Since the End of World War II, Pavel Vitek, Russkii vopros - Studies, No. 1 2005. Translation from Russian.
- ↑ Roger R. Reese, "Lessons of the Winter War: a Study in the Military Effectiveness of the Red Army, 1939-1940," Journal of Military History 2008 72(3): 825-852
- ↑ Submarine Warfare, an Illustrated History, by Antony Preston, Thunder Bay Press, 1998
- ↑ A Fairy Tale Version of World War II is Being Used to Sell the Next World War, Michael Tracey, September 21, 2022. substack.
- ↑ The best studies of this theater are by David Glantz
- ↑ http://worldwartwo.filminspector.com/2016/03/ukrainian-collaborator-girls.html
- ↑ German: 14. Waffen-Grenadier-Division der SS (galizische Nr. 1); Ukrainian: 14th Grenadier Division of the SS (1st Galician); prior to 1944 titled the 14th SS-Volunteer Division "Galicia" (Ukrainian: 14th Volunteer Division "Halychyna").
- ↑ Abbot, Peter. Ukrainian Armies 1914-55, p. 47. Osprey Publishing, 2004. ISBN 1-84176-668-2
- ↑ I. K. Patryliak. Military Activity of the OUN(B) in 1940–1942. Shevchenko University / Institute of History of Ukraine, National Academy of Sciences of Ukraine, Kyiv, 2004 (no ISBN), pp. 271-278.
- ↑ "... скрепив нашу ненависть нашу до жидів, що в двох селах ми постріляли всіх стрічних жидів. Під час нашого перемаршу перед одним селом... ми постріляли всіх стрічних там жидів" from Nachtigal third company activity report Центральний державний архів вищих органів влади та управління України (ЦДАВО). — Ф. 3833 . — Оп. 1. — Спр. 157- Л.7
- ↑ Gutman, Israel. "Nachtigall Battalion". Encyclopedia of the Holocaust. Macmillan Publishing Company: New York, 1990.
- ↑ Vedeneyev, D. Military Field Gendarmerie – special body of the Ukrainian Insurgent Army. "Voyenna Istoriya" magazine. 2002.
- ↑ The July 1943 genocidal operations of OUN-UPA in Volhynia, pp. 2-3; https://web.archive.org/web/20160401045104/http://www.volhyniamassacre.eu/__data/assets/pdf_file/0006/5199/The-July-1943-genocidal-operations-of-the-OUN-UPA-in-Volhynia.pdf
- ↑ Demotix: 69th anniversary of the Ukrainian Insurgent Army. 2011.
- ↑ Institute of Ukrainian History, Academy of Sciences of Ukraine, Organization of Ukrainian Nationalists and the Ukrainian Insurgent Army Chapter 3 pp.104-154
- ↑ Myroslav Yurkevich, Canadian Institute of Ukrainian Studies, Organization of Ukrainian Nationalists (Orhanizatsiia ukrainskykh natsionalistiv) This article originally appeared in the Encyclopedia of Ukraine, vol. 3 (1993).
- ↑ The USSR did not become a party to the Geneva Convention until 1960. So, although Germany was a signatory, Nazi Germany felt no obligation to honor the terms of the convention, given that German POWs and civilians were also subjected to human rights abuses by their Soviet counterparts.
- ↑ https://ww2db.com/battle_spec.php?battle_id=50
- ↑ Kirschenbaum, Lisa A. (2006). The Legacy of the Siege of Leningrad, 1941–1995: Myth, Memories, and Monuments. Cambridge University Press, 44. ISBN 9781139460651. “The blockade began two days later when German and Finnish troops severed all land routes in and out of Leningrad.”
- ↑ http://www.russiatoday.com/news/news/24522
- ↑ https://www.pbs.org/wgbh/amex/holocaust/peopleevents/pandeAMEX103.html
- ↑ Historical Atlas of the U.S. Navy, by Craig L. Symonds, the Naval Institute, 1995
- ↑ Wheeler (1983) p. 58
- ↑ The Story Behind the Famous Kiss, U.S. Naval Academy website
- ↑ Wheeler (1983) pp. 58-60
- ↑ Wheeler (1983) pp. 94-101
- ↑ Wheeler (1983) p. 156
- ↑ Wheeler (1983) pp. 165-167
- United States Department of the Navy/Navy Historical Center's Fleet Admiral Chester William Nimitz
- World War II
- "On the Brink of World War II: Justus Doenecke’s Storm on the Horizon," by Ralph Raico
- Kursk: The Turning Point on the Eastern Front in World War II, by Roberto R. Padilla II
- World War II - Encyc
- "Government and the Economy: The World Wars," by Robert Higgs
- "How War Amplified Federal Power in the Twentieth Century," by Robert Higgs
- Depression, War and Cold War, by Robert Higgs
- "Wartime Prosperity?", by Robert Higgs
- Pearl Harbor
Copper plating is a coating of copper metal on another material, often other metals. Plating is designed to increase durability, strength, or visual appeal, and copper plating specifically is often used to improve heat and electrical conductivity. Copper plating is seen most often in wiring and cookware.
Occasionally, copper plating is used for decorative purposes, giving objects a brassy look. More often, however, copper plating is used for electrical wires, since copper conducts electricity extremely well. Additionally, many circuit boards are plated with copper.
Since copper is an exceptional heat conductor, copper plating is popular in cookware as well. The speed with which copper heats allows for even surface heat and, therefore, more even cooking. Professional chefs generally use solid copper cookware, usually lined with steel for increased durability, but these pieces are expensive and not generally in the budget of a hobby cook. Plated pots and pans are usually aluminum or steel plated with copper. This plated cookware still provides the benefits of copper heating without the expense of the pure copper alternatives.
Copper plating is often applied by a process called electroplating. Electroplating is simple enough to be done at home, but it can be dangerous, so it is not recommended for the inexperienced. Simple electroplating setups are often used in high school science demonstrations, but nickel rather than copper is most often used as the plating substance.
A simple setup for electroplating requires the object to be plated, a battery with positive and negative connecting cables, a rod of solid copper, and a copper metal salt, such as copper sulfate, that has been dissolved in water. The object to be plated and the copper rod are both placed into the salt solution and connected to the battery: the copper rod to the positive terminal and the non-copper object to the negative terminal. In this setup, the non-copper object becomes the cathode, and the copper rod becomes the anode.
When the salt is dissolved in the solution, its molecules break apart into positively charged copper ions and negatively charged sulfate ions. Since the cathode is hooked to the negative output of the battery, it becomes negatively charged. The negative charge attracts the copper ions in the solution, and they adhere to the outside of the object. Meanwhile, copper atoms from the anode are pulled into the solution, replenishing the ions that are attaching to the non-copper object.
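For readers who want a rough sense of how much copper such a setup deposits, Faraday's law of electrolysis gives an estimate. The current, time, and near-perfect current efficiency used below are illustrative assumptions, not values from this article:

m = (I × t × M) / (z × F)

where m is the mass of copper deposited, I is the current, t is the time, M ≈ 63.5 g/mol is the molar mass of copper, z = 2 is the charge of the Cu²⁺ ion, and F ≈ 96,485 C/mol is the Faraday constant. For example, a current of 0.5 amperes applied for one hour delivers 0.5 × 3,600 = 1,800 coulombs, so m ≈ (1,800 × 63.5) / (2 × 96,485) ≈ 0.6 grams of copper, assuming essentially all of the current goes into plating.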
This process is more complicated when attempting to plate iron or steel with copper. Copper will passively adhere to iron-based substances when they are placed in a solution of this type, but such passive transfers do not retain the plating, so they are useless for this purpose. In order to plate iron or steel with copper, a coating of nickel must be applied to the iron first.
A collagen fibril mounted on a MEMS mechanical testing device. At the bottom is a single human hair for size comparison.
Collagen is the fundamental building block of muscles, tissues, tendons and ligaments in mammals. It is also widely used in reconstructive and cosmetic surgery. Although scientists have a good understanding of how it behaves at the tissue level, some key mechanical properties of collagen at the nanoscale still remain elusive. A recent experimental study of nanoscale collagen fibrils, conducted by researchers at the University of Illinois at Urbana-Champaign, Washington University in St. Louis and Columbia University, reported previously unforeseen reasons why collagen is such a resilient material.
Because one collagen fibril is about one-millionth the size of the cross-section of a human hair, studying it requires equally small equipment. The group in the Department of Aerospace Engineering at U of I designed tiny devices—Micro-Electro-Mechanical Systems (MEMS)—smaller than one millimeter in size to test the collagen fibrils.
"Using MEMS-type devices to grip the collagen fibrils under a high magnification optical microscope, we stretched individual fibrils to learn how they deform and the point at which they break," said Debashish Das, a postdoctoral scholar at Illinois who worked on the project. "We also repeatedly stretched and released the fibrils to measure their elastic and inelastic properties and how they respond to repeated loading."
Das explained, "Unlike a rubber band, if you stretch human or animal tissue and then release it, the tissue doesn't spring back to its original shape immediately. Some of the energy expended in pulling it is dissipated and lost. Our tissues are good at dissipating energy–when pulled and pushed, they dissipate a lot of energy without failing. This behavior has been known and understood at the tissue-level and attributed to either nanofibrillar sliding or to the gel-like hydrophilic substance between collagen fibrils. The individual collagen fibrils were not considered as major contributors to the overall viscoelastic behavior. But now we have shown that dissipative tissue mechanisms are active even at the scale of a single collagen fibril."
A very interesting and unexpected finding of the study is that collagen fibrils can become stronger and tougher when they are repeatedly stretched and allowed to relax.
"If we repeatedly stretch and relax a common engineering structure, it is more likely to become weaker due to fatigue," said U of I Professor Ioannis Chasiotis. "While our body tissues don't experience anywhere near the amount of stress we applied to individual collagen fibrils in our lab experiments, we found that after crossing a threshold strain in our cyclic loading experiments, there was a clear increase in fibril strength, by as much as 70 percent."
Das said the collagen fibrils themselves contribute significantly to the energy dissipation and toughness observed in tissues.
"What we found is that individual collagen fibrils are highly dissipative biopolymer structures. From this study, we now know that our body dissipates energy at all levels, down to the smallest building blocks. And properties such as strength and toughness are not static, they can increase as the collagen fibrils are exercised," Das said.
What's the next step? Das said with this new understanding of the properties of single collagen fibrils, scientists may be able to design better dissipative synthetic biopolymer networks for wound healing and tissue growth, for example, which would be both biocompatible and biodegradable.
The study, "Energy dissipation in mammalian collagen fibrils: Cyclic strain-induced damping, toughening, and strengthening," was co-written by Julia Liu, Debashish Das, Fan Yang, Andrea G. Schwartz, Guy M. Genin, Stavros Thomopoulos, and Ioannis Chasiotis. It is published in Acta Biomaterialia.
The research was supported by the National Science Foundation and National Institutes of Health and by the National Science Foundation Science and Technology Center for Engineering MechanoBiology. Das' effort was supported by a grant from the National Science Foundation.
West Nile Virus
West Nile virus (WNV) is most commonly transmitted to humans by mosquitoes. You can reduce your risk of being infected with WNV by using insect repellent and wearing protective clothing to prevent mosquito bites. There are no medications to treat or vaccines to prevent WNV infection. Fortunately, most people infected with WNV will have no symptoms. About 1 in 5 people who are infected will develop a fever with other symptoms. Less than 1% of infected people develop a serious, sometimes fatal, neurological illness.
West Nile virus is an arbovirus (ARthropod-BOrne VIRUS) that naturally cycles within local bird populations. It is passed along most often through the bite of a female mosquito, which seeks a blood meal from a bird to obtain the protein she needs to produce a batch of eggs. If there are enough virus particles in the bird's blood to withstand the mosquito's digestive process and spread to her salivary glands, the mosquito becomes able to pass the virus along to the next bird, horse, or human that she bites. Interestingly, the host (source of blood) preference of the main vector of WNV in Teton County (Culex tarsalis) has been shown to shift as the season progresses, moving from mostly birds in May and June to a more mammal-based diet in July, August and September. For this reason it is important to stay vigilant and protect oneself against mosquito bites long into the season, even after the peak number of summer mosquitoes has dissipated.
Wyoming Mosquito Management Association - Mosquito Abatement in Wyoming
Mosquito abatement throughout Wyoming encounters a variety of challenges and consequently results in a multitude of local approaches. Sharing and building upon these innovations is just one of the goals of the Wyoming Mosquito Management Association.
Stanford University chemists have developed a new way to make transistors out of carbon nanoribbons. The devices could someday be integrated into high-performance computer chips to increase speed and generate less heat.
A research team led by Professor Hongjie Dai has made transistors called "field-effect transistors" -- a critical component of computer chips -- with graphene that can operate at room temperature. Graphene is a form of carbon derived from graphite. Other graphene transistors, made with wider nanoribbons or thin films, require much lower temperatures.
"For graphene transistors, previous demonstrations of field-effect transistors were all done at liquid helium temperature, which is 4 Kelvin [-452 Fahrenheit]," said Dai, the lead investigator. His group's work is described in a paper published online in the May 23 issue of the journal Physical Review Letters.
The Dai group succeeded in making graphene nanoribbons less than 10 nanometers wide, which allows them to operate at higher temperatures. Using a chemical process developed by his group and described in a paper in the Feb. 29 issue of Science, the researchers have made nanoribbons, strips of carbon 50,000-times thinner than a human hair, that are smoother and narrower than nanoribbons made through other techniques.
Field-effect transistors are the key elements of computer chips, acting as data carriers from one place to another. They are composed of a semiconductor channel sandwiched between two metal electrodes. In the presence of an electric field, a charged metal plate can draw positive and negative charges in and out of the semiconductor. This allows the electric current to either pass through or be blocked, which in turn controls how the devices can be switched on and off, thereby regulating the flow of data. Researchers predict that silicon chips will reach their maximum shrinking point within the next decade. This has prompted a search for materials to replace silicon as transistors continue to shrink in accordance with Moore's Law, which predicts that the number of transistors on a chip will double every two years. Graphene is one of the materials being considered.
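As a rough numerical illustration of the doubling rule the article cites, Moore's Law can be written as a simple formula. The starting count and time span below are arbitrary assumptions used only for the arithmetic, not figures from the article:

N(t) = N₀ × 2^(t / 2), with t in years

so over a decade a chip designed to the same trend would be expected to hold about 2^(10/2) = 32 times as many transistors as it did at the start.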
Dai said graphene could be a useful material for future electronics but does not think it will replace silicon anytime soon. "I would rather say this is motivation at the moment rather than proven fact," he said. Although researchers, including those in his own group, have shown that carbon nanotubes outperform silicon in speed by a factor of two, the problem is that not all of the tubes, which can have 1-nanometer diameters, are semiconducting, Dai said. "Depending on their structure, some carbon nanotubes are born metallic, and some are born semiconducting," he said. "Metallic nanotubes can never switch off and act like electrical shorts for the device, which is a problem." On the other hand, Dai's team demonstrated that all of their narrow graphene nanoribbons made from their novel chemical technique are semiconductors. "This is why structure at the atomic scale—in this case, width and edges—matters," he said.
Rain gardens are landscape features designed to capture and naturally filter storm water. Also called bioswales or biofiltration gardens, these gardens use planted, shallow depressions to collect, slow down, and spread water over a larger area to allow it time to soak into the ground rather than flow into storm sewers and ultimately into nearby waterways.
Why plant a rain garden?
- Rain gardens require very little, if any, watering and less water usage means lower water bills. This also helps reduce wasting drinking water. In an urban setting such as the District of Columbia, more than 40 percent of the potable water supply is used for gardening and other outdoor activities.
- Rain gardens capture runoff and slowly filter out common pollutants and sediment.
- Less storm water runoff -- runoff can cause erosion and often carries pollutants from streets and other paved surfaces. Reducing the volume of this contaminated water running into sewer drains helps reduce pollution flowing into local waterways.
- With appropriate plants, rain gardens provide attractive habitats for birds, butterflies and beneficial insects.
Rain barrels are a centuries-old technique to collect rainwater from roofs. Rain barrels attach to the downspouts at your home or business and help keep groundwater and waterways clean. You can find rain barrels for sale in garden centers and online.
Why Have a Rain Barrel?
- Low-cost water conservation device that can be used to reduce runoff.
- Help delay and reduce the peak runoff flow rates.
- Clean water for healthy gardens and lawns.
- Help delay the need to expand sewage treatment facilities.
Rain Gardens at U.S. Botanic Garden
In the recent renovation of Bartholdi Park, 10 new rain gardens were created, featuring a wide variety of plants that can tolerate the long periods of little water between brief inundations during rain events.
- Rainwater in Bartholdi Park: Ten rain gardens capture 100 percent of rainfall on the site and allow it to soak into the ground, diverting runoff from D.C.'s combined sewer system. The rain gardens can accept up to 4,000 cubic feet of water in a 24-hour storm event - equivalent to 256 bathtubs of water (a quick unit conversion follows this list). The project also used permeable paving and reduced the amount of impervious surface.
- Learn more about the sustainable gardening features in Bartholdi Park at www.USBG.gov/BartholdiPark
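As a quick check on those figures (the per-bathtub volume is simply what the comparison above implies, not an official specification): 4,000 cubic feet × 7.48 gallons per cubic foot ≈ 29,900 gallons, or about 113 cubic meters, and dividing 29,900 gallons by 256 bathtubs works out to roughly 117 gallons per bathtub.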
The rain gardens in Bartholdi Park feature plants not found as frequently in other rain gardens, like this blooming red hibiscus and juncus combination.
This photo shows the rain garden from above capturing rain during a rain event. The photo was taken a few months earlier in the year, before the hibiscus had grown large.
Dimensions of Ethics – Notes for GS Preparation
Human beings are considered rational animals capable of thinking at a very high order. For more than a millennium, humanity has produced great thinkers who laid the foundation of the institution of philosophy.
Philosophy takes up general and fundamental questions about abstract subjects like existence, knowledge, values, reason, mind and language, and tries to resolve them through both subjective and objective reasoning.
Among the various branches of philosophy, one of the most studied and most applied divisions is moral philosophy, denoted in a single term by 'ethics'. This study involves systematizing, defending and recommending concepts of right and wrong conduct in the society we live in.
The concepts of good and bad are a matter of perspective and are bound to change over time. It is the study of ethics that tries to form a collective outlook on what is considered good and bad.
For example, capital punishment and torture were once accepted punishments for crimes against society. Today, however, brutal executions are frowned upon by the majority of the nations of the world.
The questions of human morality regarding good and evil, virtues and vices, justice and injustice, and related matters are raised under four major dimensions of ethics: meta-ethics, prescriptive ethics, descriptive ethics and applied ethics.
These four branches of ethics systematically address the moral and philosophical questions that have been raised, and are still being raised, in society.
In a single phrase, this sub-division of ethics can be described as the 'ethics of ethics'. Meta-ethics deals with the questions that determine whether a given subject or matter is morally right or morally wrong. It asks about our understanding: how we judge whether a decision, action or motive is good or bad.
Since ancient times, philosophers have tried to give a definitive description of meta-ethics. Aristotle, for example, theorised that our interpretation of right and wrong is based on our understanding of other subjects and on the relative ethical wisdom that we passively gain from them.
For instance, we differentiate healthy foods from junk foods on the basis of our understanding of factors like taste, appetite and effects on the body. Aristotle also claimed that acculturation plays an important role in influencing our thoughts and ideologies regarding a subject.
When two or more cultures co-exist in a geographical region, aspects of each culture are integrated into people's daily lives, broadening their horizons of knowledge and influencing their understanding of good and bad.
Modern philosophers, however, hold divided views on meta-ethics. The subject is split between two schools of thought. One holds that when we describe something as right or wrong based on our understanding of morality, our judgement is neither true nor false.
This abstract position is classified as non-cognitivism, while the other school of thought stresses the importance of facts and figures when judging moral good and bad. The difference is akin to that between non-descriptivist and descriptivist positions.
Non-cognitivists are non-realists, as they do not feel there is a need for a specific ontology for meta-ethics, whereas cognitivists are realists who must explain what kinds of properties or states are relevant to the subject, what values they possess, and why they guide and motivate our decisions and actions.
This division of ethics deals with the study of ethical action. It extensively investigates questions about whether the action one takes is actually right or not. Prescriptive ethics is also known as normative ethics. It is a vast subject and is conveniently divided into sub-divisions that help in better organisation and analysis of the questions and ideas raised.
The first division is termed virtue ethics, also called the ethics of Socrates. It describes the character of the moral agent as the driving force behind ethical behaviour.
Another branch, known as stoicism, based on the teachings of Heraclitus and heavily influenced by Socrates, holds that the greatest good is achieved in contentment and serenity.
Stoicism promotes detachment from materialism, self-mastery over one's desires, and accepting things at face value. Hedonism, by contrast, prioritises maximising pleasure and minimising pain, even at the expense of others.
A fourth set of teachings is compiled under intuitive ethics, which theorises that our intuitive knowledge of evaluative facts forms the foundation of our ethical knowledge.
A further division of prescriptive ethics is consequentialism, which refers to moral theories that hold the consequences of an action to be the foundation for any moral judgement about that action.
This view is easily captured in the aphorism 'the ends justify the means.' Further branches such as deontology, pragmatic ethics and anarchist ethics classify prescriptive ethics on the basis of the many factors that determine whether an action or decision is right or wrong.
This dimension of ethics sits at the least philosophical end of the spectrum. It seeks information on how people live, observes the patterns of situations arising in their surroundings, and draws general conclusions based on these observations.
Descriptive ethics identifies more as a branch of social science than of moral philosophy, offering a value-free perspective on ethics. It does not start with preconceived theories and hypotheses but instead thoroughly investigates the existing facts and cases relating to the subject, observing the actual choices made by moral agents in the practical world.
The study of descriptive ethics covers a range of fields of examination: ethical codes that lay down rules and regulations for society, informal theories of etiquette, practices of law and arbitration, and, finally, the choices made by ordinary people without the assistance or advice of an expert.
This is the branch of ethics that finds use in practical life across various fields of work. It applies ethical philosophy to real-life situations. Some common fields of specialised applied ethics include engineering ethics, bioethics, geoethics, military ethics, public service ethics and business ethics.
Under this discipline, specific questions are raised which require a philosophical approach rather than a technical interpretation to satisfy the morality of human nature. Many public policies are decided by the answers to such questions. For example: is abortion immoral?
Should euthanasia be legalised? What are the fundamental human rights? While dichotomies are preferred because they make decisions convenient, most of the questions raised are multifaceted in nature, and the most effective answers are those that address many areas coherently.
Approaching the dimensions of ethics
#1. Utilitarian Approach
Conceived in the late eighteenth and nineteenth centuries by the philosophers Jeremy Bentham and John Stuart Mill, this approach was used by legislators to determine which laws were morally upright and which were not.
The foundation of this approach is that laws should be formulated to provide the greatest balance of good over harm. For instance, a utilitarian case for warfare is that curbing terrorism serves the greater good, even at the cost of killing and destroying the terrorist organisations.
#2. The Rights Approach
This discipline stems from the philosophy of Immanuel Kant, which focused on the right of a person to choose of their own free will. This approach stresses that humans are not to be treated as objects of manipulation and that their dignity and decisions should be respected. Many fundamental and legal rights, like the right to privacy and the right of freedom, find their roots in this form of approach.
#3. Fairness or justice approach
First described by Aristotle and his contemporary philosophers, this approach propagates the idea of equal treatment for all, irrespective of origin or creed, in every aspect of life.
#4. Common goods approach
First described by the Greek philosophers, this approach holds that life in a community is a good in itself and that the actions of each and every individual should contribute to that common good.
The modern philosopher John Rawls gives a more precise definition of the common good as 'certain general conditions that are equally applicable to everyone's advantage.' Affordable healthcare, transparent administration and environmental improvement are some of the most cited examples of this ethical approach.
#5. Virtue Approach
The oldest approach on the list, it holds that ethical actions ought to be consistent with certain ideal virtues that provide for the holistic development of our humanity.
These virtues are dispositions and habits of day-to-day life that enable us to act according to the highest potential of our character and to propagate moral values.
Honesty, courage, compassion, generosity, tolerance, love, fidelity, integrity, fairness, self-control and prudence are some examples of such virtues.
The response that an average human being generates to the stimulus of a situation or an assigned task depends directly on the system of ideologies that resides in their mind, and those ideologies are nothing but the propagation of our ethical values.
Our understanding of ethics and its dimensions, no matter how advanced we are today, has remained rather vague. The abstract nature of the subject makes it difficult to analyse or pin down in a single definition. But a curious mind always looks forward to answering the questions posed in front of it.
As long as mankind possesses this unadulterated curiosity in every aspect of life, philosophical questions will keep arising and will keep changing our existing comprehension of ethics and its dimensions.
Till then, it is important to understand the current state of interpretation of this subject and to respond positively to the ideas being generated in society. Because, as Albert Schweitzer aptly stated:
“The first step in the evolution of ethics is a sense of solidarity with other human beings.”
Sure… labeling pictures is a fun and cute writing activity for your beginning writers but there are more reasons to label all the pictures than that!
Stage of Writing
Labeling pictures is an actual developmental stage of emergent writers. Many children will initially share their stories or ideas through pictures and then begin to use letters and sounds to represent objects in their drawings.
Labeling a picture introduces students to the idea that they can add print to their pictures and begin to represent objects and ideas with letters and sounds. Modelling this with your students is a great way to teach phonics in context. This can be effective at different levels. Emergent writers can label with beginning sounds and invented/phonetic spelling. Specific phonics skills can be practiced with more fluent writers by labeling pictures containing images of CVC/CVCE words, digraphs, word families etc.
Labeling a picture is a concrete way to introduce and review vocabulary. Connecting new words to an image helps reinforce the meaning. This makes it an effective strategy for both beginning and more fluent writers. Pictures with everyday items from the classroom or home can reinforce common vocabulary. Pictures introducing new concepts or units of study can get your students thinking about a topic and the vocabulary that goes with it. It can help you assess student prior knowledge of the topic. Do they know what the objects in the picture are called? This can bring up the words they will need to discuss the topic and highlight areas they will need to learn about. Labeling a picture at the end of a unit can act as a review of vocabulary and concepts learned.
Beginning Sentence Writing
Labeling a picture can be a little like a brainstorming session for students. The picture and words are already there, which provides a scaffold for writing sentences.
Independent Writing Activity
Independent writing in kindergarten and first grade can often feel a bit like an impossible task! Blank writing pages or booklets are all some students need to be on their way…. but other students need more support or direction to work on writing on their own (for more than about 5 seconds!) I created my labeling pages to provide students with an engaging writing task that had enough supports for all students to work independently. From open ended blank labeling sheets, to word boxes, cut & paste labels and word cards… students at all levels can work independently on this writing task.
We all have those students who will stare at a blank paper for hours, write one word, scribble it out and then rip up the paper….. They can’t think of anything to write about, aren’t ready to take risks and hate making mistakes! Labeling a picture is a safe writing task that can help them gain the confidence to move on to more independent writing activities.
Our goal… of course, is for students to move past this stage of writing but labeling a picture is also a useful way to:
- teach phonics in context
- develop vocabulary
- scaffold sentence writing
- provide our more reluctant writers with an independent writing task
So you can keep on labeling all year long!
The most detailed definition of yin and yang theory
Yin and yang is an ancient Chinese philosophical theory, a summary of ancient people's observations and generalizations about the nature of everything in the natural world and its development and change.
The yin-yang theory in medicine is the product of combining ancient dialectical methods of thought with medical experience.
That is, it uses the opposition and unity of yin and yang, their mutual growth and transformation, and the relationship between man and nature to address a series of questions in the medical field.
Anatomical aspects: classifying the properties of human organs and tissues. For example, the Lingshu chapter 'Shou Yao Gang Rou' says: 'Within the body there are yin and yang, and outside it there are also yin and yang. Inside, the five zang organs are yin and the six fu organs are yang; outside, the bones are yin and the skin is yang.'
Physiological aspects: analyzing the physiological functions of the human body.
For example, the Su Wen chapter 'Sheng Qi Tong Tian Lun' says: 'Yin stores the essence and rises in response; yang guards the exterior and makes it firm.'
This illustrates that yin represents the storage of material essence and is the source of yang's energy, while yang represents functional activity and plays the role of defending the yin.
Pathological aspects: clarifying the basic laws of pathological change.
For example, the Su Wen chapter 'Yin Yang Ying Xiang Da Lun' ('Great Treatise on the Correspondences of Yin and Yang') says: 'When yin prevails, yang is diseased; when yang prevails, yin is diseased. When yang prevails there is heat; when yin prevails there is cold.'
Another example, from the Su Wen chapter 'Tiao Jing Lun': 'Yang deficiency gives external cold, yin deficiency gives internal heat; yang excess gives external heat, yin excess gives internal cold.'
And so on.
Diagnostic aspects: yin and yang form the general outline for classifying the attributes of disease, with yang syndromes and yin syndromes as the overall method of pattern identification.
For example, the Su Wen chapter 'Yin Yang Ying Xiang Da Lun' says: 'A good diagnostician inspects the complexion and palpates the pulse, first distinguishing yin and yang.'
Treatment aspects: determining the principle of draining what is in excess and tonifying what is deficient, so as to restore the relative balance between yin and yang.
For example, the Su Wen chapter 'Zhi Zhen Yao Da Lun' says: 'Treat cold with heat, and treat heat with cold,' and the chapter 'Yin Yang Ying Xiang Da Lun' says: 'For yang diseases, treat the yin; for yin diseases, treat the yang.'
And so on.
In addition, the properties of drugs, acupuncture techniques, and so on also have corresponding yin and yang attributes.
Clinically, it is necessary to pay attention to the relationship between yin and yang and to regulating yin and yang in treatment.
To sum up, Yin and Yang are both an important part of the basic theory and a tool for summarizing clinical experience.
However, this doctrine can only give a rough explanation of the interrelationships within things based on some intuitive experiences.
The mutual transformation of yin and yang: under certain conditions, yin and yang can transform into each other; yin can transform into yang, and yang can transform into yin.
For example, physiologically, yin exists within yang and yang within yin, and yin and yang are rooted in each other; pathologically, extreme cold gives rise to heat and extreme heat gives rise to cold, so a yin syndrome can transform into a yang syndrome, and a yang syndrome can likewise transform into a yin syndrome.
The classical doctrine of yin and yang is characterized by a long history, an emphasis on phenomena, and an emphasis on surface description.
Modern exploration has produced revolutionary progress and new ideas since the mid-to-late 1990s.
Mathematical physics, that is, 'numerical yin and yang,' has become the symbol of the modern yin-yang doctrine.
Using mathematical models to describe yin and yang in terms of the three elements of the world, 'matter, energy, and information,' is the characteristic of a modernized, scientific yin-yang.
In 1995, Li Rongxing, 'The Definition of Yin and Yang,' Liaoning Journal of Traditional Chinese Medicine, 1995, issue 6.
That definition of yin and yang is oriented toward TCM clinical evidence.
In 1997, Zhao Xixin, 'A Mathematical Model of Yin and Yang in Chinese Medicine,' Henan Traditional Chinese Medicine, 1997, issue 5.
In 1998, Deng Yu et al., 'The Scientific Nature and Mathematical Construction of Yin and Yang' ('Mathematical Yin and Yang'), Journal of Mathematical Medicine, 1999, issue 1.
In 1999, Deng Yu et al., 'Chinese Medicine Fractal Sets,' Journal of Mathematical Medicine, 1999, vol. 12, issue 3,
creating concepts such as the 'yin-yang fractal set.'
Fractal dimension of yin and yang = 1.
In 2003, Lin Jianming, 'Modernization of Traditional Chinese Medicine and Mathematics,' Journal of Mathematical Medicine, 2003. In 2004, Qi Fengjun, 'On the Mathematical Balance between Yin and Yang,' Chinese Journal of Basic Medicine in Traditional Chinese Medicine, 2004, issue 7.
In 2005, Zhao Zhizhen and Zhao Wei, 'Establishment of a Mathematical Model of the Yin-Yang Theory of Chinese Medicine and a Study of Its Quantification by Calculus,' Sichuan Traditional Chinese Medicine, 2005, issue 11.
In 2007, Meng Kaizhen, 'Yin-Yang and Wu-Xing Mathematics and Its Application in Chinese Medicine,' Journal of Shanghai University of Traditional Chinese Medicine, 2007, issue 6.
In 1998, a modern philosophical and logical definition of yin and yang was proposed: yin and yang are two mutually incompatible concepts under the same genus concept, 'the unity of opposites.'
The connotations of yin and yang negate each other: the concept 'yin' affirms the yin attribute of an object, while the concept 'yang' affirms the attribute that 'yin' negates, namely the yang attribute. The extensions of yin and yang exclude yet complement each other, and their sum equals the extension of their nearest genus concept (the unity of opposites); that is, the two extensions together make up the whole.
Quantitative measurement of yin and yang: a 'state function' u is used to give an overall description of the matter-energy-information state of a system, u = EP,
where E is a kinematic or dynamic energy index and P is an index of the system's degree of chaos (or order), which is closely related to entropy.
Ancient Chinese Culture – Ancient Philosophical Thought – Yin and Yang
In the concepts of Chinese philosophy, yin and yang refer to two opposing and connected forces that exist in everything in the world.
Yin represents passive, dark, female, night; yang represents active, bright, male, daytime.
Chinese academic circles believe that the concept of yin and yang originated from ancient people's observation of the sunny and shady sides of terrain.
For example, the poem 'Gong Liu' in the 'Daya' section of the Book of Songs describes Duke Liu surveying the yin and yang of the land he settled.
There, yin and yang simply refer to the shady (northern) and sunny (southern) sides of the hills.
In their ancient senses, the words yin and yang referred simply to the shaded side and the side facing the sun, and at first carried no philosophical connotation.
Of yin, the Shuowen Jiezi says: 'Dark; the south of a river and the north of a mountain.' The Shuowen commentary adds: 'North of a mountain and south of a river, where the sunlight does not reach.'
Of yang, the Shuowen Jiezi says: 'High and bright.'
The Shuowen Jiezi Yizheng adds: 'High and bright, the opposite of yin.'
Laozi said in the Tao Te Ching: 'The Dao gives birth to one, one gives birth to two, two gives birth to three, and three gives birth to the ten thousand things.
The ten thousand things carry yin on their backs and embrace yang, and through the blending of qi they achieve harmony.'
This primitive understanding of yin and yang was quickly generalized in the Zhou Dynasty.
People of the Zhou Dynasty also began to explain natural phenomena with the concept of yin and yang. For example, the court historian Boyangfu attributed the cause of an earthquake to 'yang being suppressed and unable to emerge, and yin being oppressed and unable to rise.'
The yin-yang doctrine is a dichotomy of the world: yin and yang are a high abstraction overlaying many paired opposites, such as heaven and earth, male and female, day and night, water and fire, fast and slow, up and down, front and back, inside and outside.
Yin and yang in cosmology. First, what is a vacuum?
It is of course a kind of space. If there were no matter in the vacuum, would the vacuum still exist?
And what is matter?
The current scientific community holds that the Big Bang is the origin of the universe, but it is difficult to describe the state of the universe in the moment after the Big Bang, such as whether there was then a false vacuum and a true vacuum.
Astronomers have not yet put forward corresponding theories for the conditions of that moment.
In the big bang theory, vacuum exists as a product of the big bang.
I don’t agree with this point of view. I think vacuum is an independent space. It is not a product of the big bang.
If, as I believe, the vacuum is an independent space, what would happen if the vacuum were removed from our universe?
Another independent space would emerge where the vacuum was removed.
I call the vacuum 'yin' and the other space 'yang.'
That is to say, our universe is a 'moderate' (blended) space formed from the two spaces of yin and yang.
This is my point of view.
In the moderate space, the two spaces divide each other: the yang space mainly takes the form of various space particles, and the yin space surrounds those yang space particles; that is, 'carrying yin and embracing yang.'
There is no temperature in pure yin space or pure yang space. Why?
Because what we now call temperature belongs to the material world within the moderate space, and temperature is measured by means of matter.
In the moderate space, the mutual division of the yin and yang spaces produces the material world that we see. In pure yin or pure yang space there is no such mutual division and no material existence, so the temperature of such a space cannot be measured, and there is no temperature there.
The so-called temperature exists only in the moderate space.
My idea is that, whether it is a vacuum or a substance, its essence is space. The particles of matter are produced by the two spaces dividing one another, and each particle is actually a small body of space.
Second, the evolution of matter. When the two spaces mix with each other, our universe begins its evolution. From the start of that mixing, the wheel of time begins to turn; without the mixing of the two spaces there is no evolution of matter and no beginning of time.
Third, the types of moderate space. There are two kinds of moderate space: yin carried within yang, and yang carried within yin.
Space particles of the same kind attract one another, while heterogeneous spaces, when mixed, divide each other and form various spatial forms.
The repulsive force among the various small spaces arises within the heterogeneous space (the repulsive force is not given directly by the heterogeneous space, but the heterogeneous space is a necessary condition for its formation).
Fourth, suppose a yang space A is divided by a yin space B. Space A is divided into numerous small spaces, and the smaller pieces of A may gather around the larger ones; this is the result of the mutual attraction of the same kind of space.
The small spaces around a large space together constitute a large space field. Whether large or small, these spaces are slowly divided further under the action of space B, while the gravitational relationship arising from space recombines the large and small spaces.
Energy is produced by the movement of various spaces. Energy is abstract and the space that makes up matter is real.
If the two spaces (yin and yang) that I am describing are regarded, in the Western way of thinking, as two kinds of energy space, then the vacuum is also a space of energy, though it differs somewhat from the yang (energy) space. Fifth, simultaneity. When a space particle is struck by other particles, the action at every point of that particle is, in our view, simultaneous.
Even if the particle were so large that light would take a long time to cross it, this simultaneity would still exist.
This simultaneity is very common in everyday experience. When you push a bamboo pole from one end, every point along the pole appears, to us, to start moving at the same time.
If the pole were a light-year long, would this simultaneity no longer exist?
If the simultaneity of such a huge space particle does exist, can we conclude that the transfer of momentum inside a space particle takes no time (that momentum is transmitted instantaneously within the same continuous space)?
Sixth, the evolution of the universe. This includes the movement of space particles (or objects) from one point to another, and the growth and shrinkage of the space particles themselves.
That is to say, movement is the second element of the evolution of the universe. The first element is the mixture of yin and yang space.
Energy is an expression of movement.
Seventh, the evolution of the universe is synchronous at every point in the universe. This synchronicity has nothing to do with the distortion of time, with speed, with the curvature of space-time, or with gravity;
it is related only to motion.
A space particle in a substance is moving in some way, and another particle is moving in another way.
Our Earth moves around the Sun; in the depths of the distant universe, another planet moves around another body. The light we see transmits and records information from the past, and the photon that carries it is itself in motion.
It is just that the information we receive here on Earth is information about the past.
Each region of the universe produces its own time concept relative to this synchronization.
If I place a synchronization axis in this moderate universe, I can draw a time axis along it. Eighth, using yin-yang theory to think about the distortion of space. Where the yin and yang spaces meet, a distortion of space is produced at their junction. This distortion is of two kinds: the concave curvature produced by the yin space embodied in the vacuum, and the convex curvature produced by the space of the yang space particles.
This is the physical distortion of space.
The distortion of time and space seems to me to be a motion distortion of space particles.
Ninth, the 'rising' and 'falling' of space. Suppose a yang space particle moves through the yin space embodied in the vacuum, and consider some point within that yin space. When the yang space particle reaches this point, it displaces the yin space there and occupies the point; I call this the 'rising' of the yin space. When the yang space particle moves away from the point, the yin space reoccupies it; I call this the 'falling' of the yin space.
If it is true that momentum transfer inside a space particle takes no time, then information about these risings and fallings will be transmitted rapidly through the yin space and may exert some kind of influence on all the yang space particles within it.
Tenth, do the 'rising' and 'falling' of space produce gravitation? Thinking about these problems, I began to wonder whether gravity is related to space. In my view, gravity does not have a single cause: the mutual division between different kinds of space is the most important, followed by the rising and falling of space, the distortion of space, the mutual fusion of the same kind of space, and so on. I divide these necessary conditions into two kinds, the 'real' and the 'apparent.' The 'real' includes the mutual division between different kinds of space, the rising and falling of space, and the fusion of the same kind of space; the 'apparent' is the regular movement of space particles caused by the distortion of space. Eleventh, the yin-yang theory and its counterparts in Western science. The yin-yang theory is a summary of ancient Chinese thinking about the world, carried out in the unique way of thinking of the Chinese nation; it is a study of the universe.
Western science likewise studies the universe, so there are some similarities between the two.
Readers who have followed my line of thought will see that the universe I derive from the theory of yin and yang is a universe in motion.
In the distribution of yin and yang, because space particles are constantly being generated, any space particle will be split by the other space into further moving particles, or some particles will fuse back together because of motion, working against this natural process of division.
Consider the mass-energy equation E = mc² in Einstein's theory of relativity. Einstein built the speed of light into his equation, from which we can infer that wherever there is mass there is energy.
The speed of light describes the state of motion of light; that is to say, the energy E is a kind of energy of motion.
I think Einstein, too, saw motion as an element of the universe.
Quantum theory also has similarities with yin-yang theory. In yin-yang theory it is difficult to measure the position of one space particle using another space particle as a reference, because the reference is itself moving and evolving from moment to moment; this is in line with the uncertainty principle of quantum theory.
The Chinese are not stupid. But the current educational philosophy is not suited to the Chinese, and the current educational methods do not match the Chinese cognitive model.
We need an enlightenment textbook. For each subject, such a textbook would move from shallow to deep and gather all of the subject's essential knowledge points into a single book, rather than the superficial textbooks we have now, which are thrown away as soon as they are finished.
The true Chinese cognitive mode is to learn for a lifetime, so there must be an enlightenment textbook that we can use for our whole lives.
For the Chinese, the Siku Quanshu can serve this purpose.
Moreover, this kind of enlightenment textbook would be truly enlightening: it would carry the essential knowledge of every human discipline and guide us to learn more.
We need to take each discipline from its source and edit its development in chronological order.
The knowledge points of each era of the discipline should be explained and recorded in detail, year by year, with the knowledge of the previous year serving as the basis for the knowledge of the next. Do not evade knowledge that was abandoned in the past, and do not assume it is wrong; it may well be quite right.
Let us not pass carelessly over last year's knowledge, or over ancient knowledge.
Each discipline is produced and developed gradually over time. Presented chronologically, the discipline can be laid out fully before our Chinese people; its development is its origin. We humans are such highly developed animals, yet we grow step by step from simple cells; why can we not simply form as adults directly?
We have to record and explain in detail.
The same is true of the development of each discipline: we should learn each discipline from its origin.
The preparation of every kind of teaching material should be like this as well. Only when a textbook's knowledge is arranged according to the time sequence of the discipline's development can it truly move from shallow to deep.
Only then can we clearly grasp the pulse of each discipline.
We must reshape the history of each discipline in the minds of Chinese people.
We have an old saying in China about knowing what a thing is and knowing why it is so; we should not look down on learning simply because it is ancient.
The point of studying ancient knowledge lies in reviewing it, so that we modern people may come to know new things.
We can use the oldest knowledge to learn modern knowledge. The oldest knowledge may be the simplest, the most naive, or the most esoteric.
Speaking of traditional Chinese medicine: many people say that Chinese medicine is not very scientific. Those who say such things may be foolish. For how long has our Chinese medicine been tested and refined?
Does a practice refined over millennia need Westerners to admit that it is scientific?
After thousands of years of use, verified in countless Chinese people, are Chinese herbal medicines not at least as convincing as Western medicine?
Why do Westerners refuse to take Chinese medicine?
They are afraid!
In China, and even across the world, the strength of Chinese medicine would greatly weaken Western medicine and sharply reduce sales of Western drugs!
Foreigners may refuse to believe in Chinese medicine and refuse to accept it, but we Chinese must believe in it! |
Twenty years ago, it was easier to identify fake news. There were the tabloid papers in the grocery store checkout line and the sensationalized “news” programs that promised inside looks at celebrity lives. Now, between the number of online information sites and the proliferation of social media apps, plus near constant mobile phone use, determining a story’s credibility seems to call for advanced detective skills.
In her edWebinar “Fight Fake News: Media Literacy for Students,” Tiffany Whitehead, school librarian for the Episcopal School of Baton Rouge, Louisiana, says that’s exactly what we need to teach students. While today’s youth may be aware that not everything on the internet is true, they don’t have the tools to evaluate accuracy and authenticity.
First, Whitehead says educators and students need to use the same definitions for the same terms, such as news literacy and fake news. Otherwise, any conversations could result in miscommunication. For her students, Whitehead uses definitions from the Center for News Literacy. More important than defining the words is how just discussing the definitions can engage students in reflective conversations. This is an opportunity for them to identify what they have seen and read online.
Next, Whitehead teaches her students about the different forms of logic and reasoning inauthentic sources use to appear legitimate. These tactics include:
- Confirmation bias: only pursuing sources that confirm your own point of view
- Echo chamber: similar to confirmation bias, discussing news or sources within a group that confirms existing views
- Circular reasoning: when a piece of information appears to come from multiple sources, but they are really one source citing each other
In addition, she talks about filter bubbles. Whitehead wants students to understand that search engines and apps watch their online activity and filter search results and ads based on perceived preferences. Thus, before students even type in a word, online media is already funneling them the news the media thinks they want to see.
Whitehead doesn’t believe in lecturing students about news literacy. Her lessons include several activities to help them embrace the idea that they can’t just accept what they see. For instance, she has shared television news reports about bloggers profiting from fake news stories, engaged them in activities to evaluate news reports on similar topics, and had them create their own source decks. Most important, she gives them tools they can use outside of her class like fact-checking websites and checklists to determine a website’s, article’s, or author’s credibility. |
If you haven’t been paying attention to the extent of Arctic Sea ice over the last decade, then it’s time to start. Satellite measurements began in earnest in 1979, and 2018’s Arctic Sea ice extent in October was about 2.34 million square miles. That might sound like a lot of ice, but it represents the third lowest extent in the nearly four decades that data has been recorded. The record low occurred in October 2012.
The trend isn’t expected to stop. Scientists expect we might see an Arctic free of ice cover by the year 2050. Jump forward to 2100, and we might see an Arctic free of sea ice for a whopping five months, up from three.
The ice is important for a number of reasons, not least of which is that polar bears need it to survive. The ice is a key ingredient in temperature regulation around the world, and without it we could see temperatures skyrocket. We’re already experiencing out-of-whack weather patterns everywhere we look, but it’s nothing compared to what we’ll see in the future.
Obviously, carbon dioxide is a big driver of this change because it helps trap heat to warm the surface of the planet. There are other factors that contribute, and they’re going to play more of a role in the coming decades. Heat is also carried by wind and ocean currents: during periods of increased temperatures throughout the Atlantic Ocean, warmer water can be transported into the Arctic along the Gulf Stream.
Another factor is atmospheric temperature. In tropical rainforests, there is a feedback pattern in which air rises to higher altitudes because of high temperature and excess moisture below. When the warmer air rises beyond a certain point, it escapes the atmosphere. This isn’t what happens in the Arctic. There, the air is trapped closer to the ground, contributing to more ice melt over time.
The overall minimum extent of sea ice in the Arctic has plummeted about 40 percent since those measurements first began in 1979. That means that, to reach an ice-free Arctic by 2050, the rate of sea ice melt has to speed up. And it is.
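To see why the melt rate has to accelerate, a back-of-the-envelope extrapolation helps. The sketch below is illustrative only (it is not from the article) and simply assumes the roughly 40 percent decline quoted above continued in a straight line:

```python
# A rough, illustrative extrapolation (not from the article): if the minimum
# extent fell about 40% between 1979 and 2018, how long would a purely linear
# decline take to reach zero?

DECLINE_FRACTION = 0.40        # ~40% loss of minimum extent, per the article
YEARS_ELAPSED = 2018 - 1979    # 39 years of satellite record

loss_per_year = DECLINE_FRACTION / YEARS_ELAPSED        # fraction of the 1979 extent lost each year
years_until_zero = (1 - DECLINE_FRACTION) / loss_per_year
print(f"A straight-line decline reaches zero around {2018 + years_until_zero:.0f}")
# The straight line lands in the mid-2070s, well after 2050, which is why
# ice-free-by-2050 projections assume the decline keeps accelerating.
```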
If somehow the goals outlined in the Paris Climate Agreement are met, then we might not see these ice-free summers. Unfortunately, that seems unlikely if not impossible. |
Read Critically and Efficiently
- Expectations for Post-Secondary Reading
- Efficient Reading is Active Reading
Academic reading is almost always difficult reading. It is usually densely packed with ideas and implications that need to be thought out and considered. The result is that your reading will take time – lots of it.
It will also require that you read actively and critically. Critical reading involves breaking the argument down into its parts to see how well each part works and how parts of the argument work together. You likely have good reading skills to understand a text, but you need to move beyond comprehension to also analyze what a text does and how it does it.
One of the best ways to approach university reading is to see it as a three-part experience: before, during, and after. And, what you do before and after you begin a reading is as important to your comprehension of it as what you do while you read.
Many students find their reading in university takes more time than they expect, but they quickly learn that reading shortcuts (speed-reading techniques and quick skim methods) do not facilitate active or critical reading. Instead, students who rely on shortcuts often end up reading the text all over again.
One reason that reading can take so long is that students approach their reading without a plan, so they often lose focus and need to read a page more than once. Other students believe they must take notes so detailed they come close to re-writing the text. Neither of these approaches is effective or efficient.
Students who believe they should read for two, three, or four hours straight may also find reading to be difficult, particularly as they grow weary of small text and big words. And still other students who believe they can read while completing other tasks quickly find that multi-tasking is not the most efficient strategy for completing course readings.
Efficient reading is purposeful ....
Critical reading is far easier if you have a sense of the purpose and main point of a text before you begin reading it in depth. Having this in your mind can help you to follow the author’s key message and to separate essential ideas from supporting details. One way to develop a sense of the purpose of the source is to preview before you read.
- Consider its form; is it a textbook chapter, an empirical article, or an argumentative article?
- Read and understand the title.
- Examine the table of contents and/or section headings.
- Read the abstract, as well as the introductory and concluding sections.
- Skim through the text looking for main ideas; read topic sentences, transitional sections, bolded elements, captions, boxes.
- Read text summary and summary questions (if they are provided in your text).
- Determine the argument or the significant findings presented in the article or book.
By the end of your preview, you should be able to explain and write down:
- The type of text and its purpose
- The main topic or question that the text will address
- The author’s main argument or findings (for empirical and argumentative works)
- The structure of the text or the organization of ideas
Once you have previewed a text, you can begin reading it in detail, confident in the knowledge that you know where the text is going.
To read critically, you must read actively. Ask questions as you read about the key message or argument, the main findings, the evidence used to support the key message, or applications and limitations of the findings.
It is important to take good notes; they reinforce your learning, provide you a resource for reference in lab and seminar, and support you in preparation for exams.
Tips for Effective Notetaking During Reading
- Before you begin taking detailed notes, write down the topic or question the text focuses on and the author’s thesis or main point.
- Read a text in small chunks, the length of which will depend on the length of the text. Take notes after you read a paragraph, section, or chapter. This will ensure that you write down only the most important information.
- Use point form. Avoid recopying the text.
- After you complete your reading, make a list of the 3-5 most important points.
Take some time after completing a reading to review your notes and reflect on them. If there are review questions, answer them. If there are key terms listed at the end of a chapter, define them. You can even write them directly onto flashcards to aid in exam preparation. If review exercises are not provided, make your own. What 3-5 questions would you ask about the reading? What terms do you think are most important? What questions will you ask or what points will you make about the reading during class?
See specific strategies, questions, and notetaking templates for three common texts assigned in university: |
In this course you will learn to use some mathematical tools that can help predict and analyze sporting performances and outcomes. This course will help coaches, players, and enthusiasts to make educated decisions about strategy, training, and execution. We will discuss topics such as the myth of the Hot Hand and the curse of the Sports Illustrated cover; how understanding data can improve athletic performance; and how best to pick your Fantasy Football team. We will also see how elementary Calculus provides insight into the biomechanics of sports and how game theory can help improve an athlete’s strategy on the field.
In this course you will learn:
- How a basic understanding of probability and statistics can be used to analyze sports and other real life situations.
- How to model physical systems, such as a golf swing or a high jump, using basic equations of motion.
- How to best pick your Fantasy Football, March Madness, and World Cup winners by using ranking theory to help you determine athletic and team performance.
By the end of the course, you will have a better understanding of math, how math is used in the sports we love, and in our everyday lives. |
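To make the probability idea concrete, here is a minimal sketch (not taken from the course materials) of the reasoning behind the Hot Hand myth: even a purely random shooter who makes 50% of shots, with no streakiness at all, produces long runs of makes surprisingly often. The shooting percentage, shot count, and streak threshold below are illustrative assumptions.

```python
import random

def longest_streak(shots):
    """Length of the longest run of consecutive makes (True values)."""
    best = run = 0
    for made in shots:
        run = run + 1 if made else 0
        best = max(best, run)
    return best

def share_of_games_with_streak(n_games=10_000, shots_per_game=20,
                               p_make=0.5, streak_len=5, seed=1):
    """Fraction of simulated games in which a no-streakiness shooter still
    produces a run of `streak_len` or more consecutive makes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_games):
        shots = (rng.random() < p_make for _ in range(shots_per_game))
        if longest_streak(shots) >= streak_len:
            hits += 1
    return hits / n_games

if __name__ == "__main__":
    share = share_of_games_with_streak()
    print(f"Games with a 5+ make streak from a pure coin-flip shooter: {share:.1%}")
```

Run it and roughly a quarter of the simulated games contain a streak of five or more makes, which is why streaks alone are weak evidence of a "hot hand."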
Droughts can transform tropical forests. They kill trees and slow their growth. They also affect microbes that are critical to keeping the forest soil healthy. But termites, better known for their intricate mound-building skills and for chewing through wooden furniture, help tropical forests withstand drought, a new study has found.
Tiny termites are known to play big roles in tropical forests: they dig tunnels through the forest floor, and eat wood and leaf litter, moving moisture and mixing nutrients through the soil. Despite these roles, termites are an understudied group in ecology, says Louise Amy Ashton, assistant professor at the University of Hong Kong and lead author of the study published in Science.
Termites' impacts on forests, especially during drought (which is increasing in severity and frequency across the tropics), had not been experimentally studied before. Ashton and colleagues had the opportunity to do just that during the El Niño drought of 2015–16.
The team set up experimental plots within an old-growth tropical rainforest in the Maliau Basin Conservation Area in Sabah, Malaysia. In four of those plots, the researchers suppressed termite activity by both physically removing all termite mounds, and by applying insecticides and using poisoned toilet paper rolls as bait to get rid of most wood- and leaf-litter-eating termites.
"The insecticide is active against all insects and other arthropods," writes co-author Theodore Evans, an associate professor at the University of Western Australia, in an email. "However, the concentration used was so low, a lot of bait has to be eaten to reach a lethal dose, and so will work better against social species [like termites].
"Also, toilet paper is almost pure cellulose, so will be eaten only by animals that target cellulose, and undecomposed at that," he added.
The team monitored termite activity in these "suppressed" plots and compared it with four control plots—similar plots where termite activity was not interfered with—both during and after drought.
Overall, termite activity was considerably lower in the plots where termites had been suppressed compared to the control plots. At the same time, other invertebrate groups hadn't been affected much, meaning that any difference between the suppressed and control plots could be attributed to the termites themselves. "Up to now we didn't have the methods to target suppression of termites," co-author Kate Parr, a professor at the University of Liverpool, writes in an email. "Our novel methods have enabled us to target the specific role of termites."
During the course of their experiments, the team found that the number of termites in the control plots was more than double during drought compared to post-drought, when normal rains had resumed. Moreover, during drought conditions, increased termite activity led to considerably higher leaf litter decomposition, increased soil moisture, and greater diversity in soil nutrient distribution (which influences plant diversity in turn) in the control plots compared to the suppression plots. These differences were less stark in the post-drought conditions, suggesting that termites had an especially important role to play during drought.
So what caused termite numbers and activity to increase during drought? The researchers haven't identified the exact cause, but they think that drought conditions possibly make the termites' tunnels drier and less water-logged, making moving through the environment easier. The dry conditions could have also reduced competition from fungi, the other main decomposers in tropical forests.
"Fungi need to have water and food in the same place in order to survive and grow, which is why fungi do well in wet environments," Evans said. "During dry periods, the food and water are separated, with the food (wood and litter) on the (dry) soil surface, and the water at depth (ground water). Termites carry water to their food and nests (they have special water sacs for this purpose), and so can be active and continue feeding during dry periods."
The drought may also have suppressed the activity of ants, one of the main predators of termites, Parr writes. This could, in turn, have had a positive effect on termites.
The increased activity of termites also had knock-on effects throughout the forest, the team found. The improved soil conditions created by the termites during drought translated to a 51 percent increase in seedling survival on the control plots compared to the suppression ones. This suggests that termites will have an important role to play in maintaining plant diversity in the future, given that the severity and frequency of droughts is predicted to increase with climate change.
What the study also shows is that "pristine rainforests have lots of termites that contribute to ecosystem health and resilience to drought," says co-author Paul Eggleton, an entomologist at the Natural History Museum–London. "However, disturbed forests have fewer termites, and so will have lower ecosystem health and resilience."
Parr writes that, since the team did not suppress all termites in their plots, the findings were, in fact, "a huge underestimate."
"The true importance of termites is likely much greater!" she writes.
This story originally appeared at the website of global conservation news service Mongabay.com. |
Nouns : Collective Nouns
A collective noun is a slightly different kind of noun: its job is to give a single name to a group of people, places, objects, or ideas:
audience, band, choir, class, crowd, herd, flock, bunch, range, crew, flotilla
Here are some examples used in sentences:
The flotilla sailed into the harbour.
Dad threw the bunch of keys on the table.
The audience clapped for a long time at the end of the show.
A flotilla is one group of ships sailing as one unit into the harbour.
The keys were on a ring and landed together on the table.
The audience is a group of people acting together as one.
So are collective nouns singular or plural?
Hmm, the problem is that they can be either. Though this is perhaps not worth pursuing with primary-age students, it is as well for teachers to know the ins and outs, so here we go…
Singular means one; plural means more than one. Collective nouns are usually singular because they centre on all the individuals in the group acting as one. In this case the verb that describes the group's actions will be singular:
The team is winning.
The pride of lions is hunting buffalo.
However, when a sentence is highlighting the behaviour of individuals in the group, the collective noun is regarded as plural and the corresponding verb will be plural:
The team are working well together tonight.
The lions have attacked and the herd are scattering in all directions.
Collective Nouns Crossword/Word Search from primary resources.co.uk |
Plate Tectonics: New Findings Fill Out the 50-Year-Old Theory That Explains Earth's Landmasses
Fifty years ago, there was a seismic shift away from the longstanding belief that Earth’s continents were permanently stationary.
In 1966, J. Tuzo Wilson published Did the Atlantic Close and then Re-Open? in the journal Nature. The Canadian author introduced to the mainstream the idea that continents and oceans are in continuous motion over our planet’s surface. Known as plate tectonics, the theory describes the large-scale motion of the outer layer of the Earth. It explains tectonic activity (things like earthquakes and the building of mountain ranges) at the edges of continental landmasses (for instance, the San Andreas Fault in California and the Andes in South America).
At 50 years old, with a surge of interest in where the surface of our planet has been and where it’s going, scientists are reassessing what plate tectonics does a good job of explaining – and puzzling over where new findings might fit in.
Evidence for the theory
Although the widespread acceptance of the theory of plate tectonics is younger than Barack Obama, German scientist Alfred Wegener first advanced the hypothesis back in 1912.
He noted that the Earth’s current landmasses could fit together like a jigsaw puzzle. After analyzing fossil records that showed similar species once lived in now geographically remote locations, meteorologist Wegener proposed that the continents had once been fused. But without a mechanism to explain how the continents could actually “drift,” most geologists dismissed his ideas. His “amateur” status, combined with anti-German sentiment in the period after World War I, meant his hypothesis was deemed speculative at best.
In 1966, Tuzo Wilson built on earlier ideas to provide a missing link: the Atlantic ocean had opened and closed at least once before. By studying rock types, he found that parts of New England and Canada were of European origin, and that parts of Norway and Scotland were American. From this evidence, Wilson showed that the Atlantic Ocean had opened, closed and re-opened again, taking parts of its neighboring landmasses with it.
And there it was: proof our planet’s continents were not stationary.
The 15 major plates on our planet’s surface
How plate tectonics works
Earth’s crust and top part of the mantle (the next layer in toward the core of our planet) run about 150 km deep. Together, they’re called the lithosphere and make up the “plates” in plate tectonics. We now know there are 15 major plates that cover the planet’s surface, moving at around the speed at which our fingernails grow.
Based on radiometric dating of rocks, we know that no ocean is more than 200 million years old, though our continents are much older. The oceans' opening and closing process – called the Wilson cycle – explains how the Earth’s surface evolves.
A continent breaks up due to changes in the way molten rock in the Earth’s interior is flowing. That in turn acts on the lithosphere, changing the direction plates move. This is how, for instance, South America broke away from Africa. The next step is continental drift, sea-floor spreading, ocean formation – and hello, Atlantic Ocean. In fact, the Atlantic is still opening, generating new plate material in the middle of the ocean and making the flight from New York to London a few inches longer each year.
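To get a feel for those numbers, here is a back-of-the-envelope sketch (not from the article) that assumes a typical spreading rate of a few centimeters per year, roughly the fingernail-growth speed mentioned above:

```python
# Back-of-the-envelope check: how wide an ocean can open in 200 million years
# at a fingernail-growth spreading rate? (The rate is an assumed typical value,
# not a figure quoted in the article.)

SPREADING_RATE_CM_PER_YEAR = 2.5     # assumed full spreading rate
OCEAN_AGE_YEARS = 200_000_000        # oldest oceanic crust, per the article

width_km = SPREADING_RATE_CM_PER_YEAR * OCEAN_AGE_YEARS / 100 / 1000
print(f"Ocean opened after 200 Myr at 2.5 cm/yr: about {width_km:,.0f} km wide")
# About 5,000 km, roughly the width of the Atlantic between Africa and South America.
```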
A simplified ‘Wilson Cycle’. Philip Heron, CC BY
Oceans close when their tectonic plate sinks beneath another, a process geologists call subduction. Off the Pacific Northwest coast of the United States, the ocean is slipping under the continent and into the mantle below the lithosphere, creating in slow motion Mount St Helens and the Cascade mountain range.
In addition to undergoing spreading (construction) and subduction (destruction), plates can simply rub up against each other - usually generating large earthquakes. These interactions, also discovered by Tuzo Wilson back in the 1960s, are termed “conservative.” All three processes occur at the edges of plate boundaries.
But the conventional theory of plate tectonics stumbles when it tries to explain some things. For example, what produces mountain ranges and earthquakes that occur within continental interiors, far from plate boundaries?
Gone but not forgotten
The answer may lie in a map of ancient continental collisions my colleagues and I assembled.
Over the past 20 years, improved computer power and mathematical techniques have allowed researchers to more clearly look below the Earth’s crust and explore the deeper parts of our plates. Globally, we find many instances of scarring left over from the ancient collisions of continents that formed our present-day continental interiors.
Present day plate boundaries (white) with hidden ancient plate boundaries that may reactivate to control plate tectonics (yellow). Regions where anomalous scarring beneath the crust are marked by yellow crosses. Philip Heron, CC BY
A map of ancient continental collisions may represent regions of hidden tectonic activity. These old impressions below the Earth’s crust may still govern surface processes – despite being so far beneath the surface. If these deep scarred structures (more than 30 km down) were reactivated, they would cause devastating new tectonic activity.
It looks like previous plate boundaries (of which there are many) may never really disappear. These inherited structures contribute to geological evolution, and may be why we see geological activity within current continental interiors.
Mysterious blobs 2,900 km down
Modern geophysical imaging also shows two chemical “blobs” at the boundary of Earth’s core and mantle – thought to possibly stem from our planet’s formation.
These hot, dense piles of material lie beneath Africa and the Pacific. Located more than 2,900 km below the Earth’s surface, they’re difficult to study. And nobody knows where they came from or what they do. When these blobs of anomalous substance interact with cold ocean floor that has subducted from the surface down to the deep mantle, they generate hot plumes of mantle and blob material that cause super-volcanoes at the surface.
Does this mean plate tectonic processes control how these piles behave? Or is it that the deep blobs of the unknown are actually controlling what we see at the surface, by releasing hot material to break apart continents?
Answers to these questions have the potential to shake the very foundations of plate tectonics.
Plate tectonics in other times and places
And the biggest question of all remains unsolved: How did plate tectonics even begin?
The early Earth’s interior had significantly hotter temperatures – and therefore different physical properties – than current conditions. Plate tectonics then may not be the same as what our conventional theory dictates today. What we understand of today’s Earth may have little bearing on its earliest beginnings; we might as well be thinking about an entirely different world.
In the coming years, we may be able to apply what we discover about how plate tectonics got started here to actual other worlds – the billions of exoplanets found in the habitable zones of their stars.
So far, amazingly, Earth is the only planet we know of that has plate tectonics. In our solar system, for example, Venus is often considered Earth’s twin - just with a hellish climate and complete lack of plate tectonics.
Incredibly, the ability of a planet to generate complex life is inextricably linked to plate tectonics. A gridlocked planetary surface has helped produce Venus' uninhabitable toxic atmosphere of 96 percent CO₂. On Earth, subduction helps push carbon down into the planet’s interior and out of the atmosphere.
It’s still difficult to explain how complex life exploded all over our world 500 million years ago, but the process of removing carbon dioxide from the atmosphere is further helped by continental coverage. An exceptionally slow process starts with carbon dioxide mixing with rain water to wear down continental rocks. This combination can form carbon-rich limestone that subsequently washes away to the ocean floor. This removal process, slow even on geologic timescales, eventually could create a more breathable atmosphere. It just took 3 billion years of plate tectonic processes to get the right carbon balance for life on Earth.
A theory works now, but what’s in the future?
Fifty years on from Wilson’s 1966 paper, geophysicists have progressed from believing continents never moved to thinking that every movement may leave a lasting memory on our Earth.
Life here would be vastly different if plate tectonics changed its style – as we know it can. A changing mantle temperature may affect the interaction of our lithosphere with the rest of the interior, stopping plate tectonics. Or those continent-sized chemical blobs could move from their relatively stable state, causing super-volcanoes as they release material from their deep reservoirs.
It’s hard to understand what our future holds if we don’t understand our beginning. By discovering the secrets of our past, we may be able to predict the motion of our plate tectonic future.
Philip Heron is a Postdoctoral Fellow in Geodynamics, University of Toronto |
Western South America: Western Ecuador
The Guayaquil flooded grasslands represent one of the most important and complex coastal environments in Ecuador. These grasslands are located in the Guayas River Basin in Ecuador. This ecoregion is threatened by the steady increase of human population and large agricultural irrigation programs.
1,100 square miles
Location and General Description
Types and Severity of Threats
Justification of Ecoregion Delineation
These seasonally inundated grasslands occur east of the city of Guayaquil, in western coastal Ecuador. In its southern extent, this ecoregion forms a gradual transition to mangrove as tidal influence increases near the Gulf of Guayaquil. Linework for this ecoregion follows the UNESCO (1980) classification of “tropical tall flooded grasslands,” which occur in this watershed.
UNESCO. 1980. Vegetation map of South America. Map 1:5,000,000. Institut de la Carte Internationale de Tapis Vegetal, Toulouse, France.
Prepared by: Dr. Carmen Bonifaz de Elao
Reviewed by: In process |
"Jill" told me her 5 year old son Eric was taking Tae Kwon Do (Korean martial art) and asked if I could come up with an appropriate craft, so I put this one together.
Anne was very helpful in pointing out the origins of the different martial arts: Karate is Japanese, Kung Fu is Chinese, and Taekwondo is Korean. Thank you.
- toilet paper roll
- a printer
- Print out the craft template of choice.
- Color (if using the black and white version of the craft) and cut out the template pieces.
- Glue the large rectangular piece on first to cover the tube.
- Glue on the belt, head, arms, and legs. You can glue them flat onto the front of the toilet paper roll, or angle them so he looks like he's sitting down or flying through the air. Or just cut off the shoes, glue them to the bottom of the tp roll, and draw a black marker line down from the belt to make it look like legs.
- Close the template window after printing to return to this screen.
- Set page margins to zero if you have trouble fitting the template on one page (FILE, PAGE SETUP or FILE, PRINTER SETUP in most browsers).
Thanks to Sean for providing this summary!
Karate-do, translated, means "way of the empty hand." Its origin is traced to the 6th century A.D., when a monk, Bodhidharma, traveled overland from India to China, across the Himalayan Mountains. After settling in Hunan Province, China, he established a monastery and began teaching Zen Buddhism. He developed a series of fighting techniques in which the monks trained to become both physically and mentally stronger. Soon, they were formidable warriors.
Over a period of many years, the monks became scattered throughout China. Where they settled, they brought with them this knowledge, and began teaching others. As merchant trade developed between China and the Ryukyu Islands, of which Okinawa is the largest, those fighting techniques were transmitted from the Chinese to the Okinawans. The Okinawans merged their new found knowledge with the already existing combative techniques indigenous to Okinawa, known as Te.
These skills remained in Okinawa, passed on generation after generation. In 1922, Karate reached Japan. This introduction was through the Okinawan Master Gichin Funakoshi. Karate flourished in Japan, and from there began to spread throughout the world. |
Social and Emotional Learning (SEL) is the process through which young people acquire and effectively apply the knowledge, attitudes, and skills necessary to understand and manage emotions; set and achieve goals; demonstrate empathy for others; establish and maintain positive relationships, and make effective decisions. It includes a number of competencies: self-awareness, self-management, social awareness, relationship skills and responsible decision-making.
Learning ultimately supports the well-being of the self, the family, the community, the land, the spirits, and the ancestors. Learning is connected to the broader community and extends beyond the walls of the classroom and school. Connecting learning with community members, parents and extended family reinforces the links between school and other aspects of the learner’s life.
Learning is holistic, reflexive, reflective, experiential, and relational. Deep learning engages the whole student (and teacher) – heart, mind, body and soul. When we work together, collaborate conceptually, and combine our energies to reinforce commonalities across multiple subject areas, we make learning cohesive, connected and relevant. The relationships teachers develop and foster with their students is essential to student success.
Learning involves patience and time. Effective instruction honours learning as a process in which teachers gradually shift responsibility of learning to students over time. To further learning and develop awareness of oneself as a learner, students must reflect on their learning, thereby becoming more autonomous and empowered to take ownership of their learning.
Learning requires exploration of one’s identity. Learning begins with a positive self-identity. Exploring their own identities in a safe learning environment helps students develop empathy towards their peers, build stronger relationships, and dispel stereotypes and perceptions about other cultures and groups of people.
We know and believe that the implementation and support of quality social and emotional learning (SEL) through research-based processes and practices have been shown to enhance the well-being of learners, overall achievement, and positive life outcomes. Through a growing awareness of the Core Competencies, learners develop essential intellectual, personal, and social-emotional proficiencies in order to engage in deep and life-long learning and become thoughtful, ethical and active citizens.
Supporting Ongoing Professional Learning
We support a variety of professional learning opportunities in the area of social and emotional learning for Surrey teachers. Some current examples include:
- School Inquiry
In 2017-18 approximately 55 elementary and secondary schools used an inquiry process to investigate school-based questions related to SEL. We welcomed 140 teachers and administrators as they attended a 3-part dinner series to support their inquiry process. Participants highlighted their learning throughout the series by indicating ideas or strategies that have impacted their practice, including:
- The importance of students telling their own stories. Self-assessment is a necessary process for students to gain self-awareness and develop an understanding of themselves as learners.
- The importance of continuing to educate ourselves and taking care of our own social emotional learning in order to better facilitate SEL with students.
- How closely linked SEL is to the core competencies and First Peoples Principles of Learning.
- The importance of a sense of belonging.
- The importance of having a growth mindset and using growth language with our students.
- How important it is to include parents in the process.
- “We saw ourselves as learners on this journey, able to celebrate and question along the way.”
- Changing Results for SEL
Using a case study inquiry approach, 17 Kindergarten – Grade 2 teachers are looking at the social emotional learning strengths and needs of individual students and adjusting their practices to meet the needs of those learners. This has a profound effect on the learning of everyone in the classroom.
- Cost Share Resourcing
There were 52 elementary schools that took part in the Social & Emotional Learning Cost Share opportunity. This involved providing evidence-based programs and relevant picture books to enhance the learning for skill building in the areas of problem solving, empathy, and emotion management.
We hosted 80 educators from around the district to provide the MindUp program and resources for classroom implementation.
Participating in this inquiry has helped me to be mindful of how I embed SEL into my classroom routines. As these routines develop they have created positive change in my students. They now feel empowered to recognize and advocate for their emotional needs. Surrey Educator
Our classroom teachers design learning environments focused on implementing the latest research in cognitive neuroscience and mindfulness in education to enhance the well-being of all who are part of the learning community.
When asked questions such as, “What does it mean to have courage?” and “Who or what supports your ability to be resilient?” the following Grade 4-6 Surrey students reflect on and share their experiences and how they apply specific knowledge, skills, and attitudes in order to manage emotions: |
TEACHER MS. ROBO KITTY ON: Measurement
“Measurement is the assignment of a numerical value to an attribute of an object,” according to the National Council of Teachers of Mathematics. Whoa! So first you need an attribute – length, weight, etc. Next you need a unit to measure that attribute – unit of length, unit of weight, etc. So that we can communicate with others, we use standard units of measurement. We use tools that have these standard units, such as a meter stick.
Measurement naturally provides for a host of tactile, kinesthetic, hands-on activities for students of all ages. Provide your students a meter stick and have fun measuring everything in sight. Cook together and use measuring cups and spoons. Talk about how many liters of milk are in the carton or milk dish.
Do your students the immense favor of immersing them in the metric system so that they start thinking of everything in terms of metric measurements – it is x kilometers to the store, the car’s gas tank holds y liters, etc. |
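If it helps to have quick reference points for that kind of metric thinking, a tiny script can generate them. This sketch is not part of the original article; the conversion factors are the standard defined values.

```python
# A tiny, illustrative helper (not from the article) for the kind of
# "think in metric" reference points suggested above.

CONVERSIONS = {
    "miles_to_km": 1.609344,
    "gallons_to_liters": 3.785411784,   # US liquid gallon
    "pounds_to_kg": 0.45359237,
}

def to_metric(value: float, factor_name: str) -> float:
    """Convert a customary-unit value using a named factor from CONVERSIONS."""
    return value * CONVERSIONS[factor_name]

print(f"5 miles to the store is about {to_metric(5, 'miles_to_km'):.1f} km")
print(f"A 12-gallon gas tank holds about {to_metric(12, 'gallons_to_liters'):.0f} liters")
```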
Researchers at the University of Waterloo have found that drawing pictures of information that needs to be remembered is a strong and reliable strategy to enhance memory.
Researchers found that 4- to 9-year-old kids knew more about how animals are classified after a four-day camp at a zoo. It wasn’t that children who attended just knew more facts about animals, the researchers noted. The camp actually improved how they organized what they knew – a key component of learning.
Researchers have discovered how two brain regions work together to maintain attention, and how discordance between the regions could lead to attention deficit disorders, including schizophrenia, bipolar disorder, and major depression. |
New Hope School conducts a project-based learning component as part of its overall education program. Project-based learning focuses on teaching students key knowledge and understanding as well as success skills, such as critical thinking/problem solving, collaboration, and self-management.
A typical project is based on a meaningful problem to solve or question to answer, at an appropriate level of challenge for students, and is governed by an open-ended and engaging driving question.
The project involves an active, in-depth process over a period of several weeks, during which students generate their own questions, find and use resources, develop further questions, and come to conclusions.
There is a real-world context using real-world processes, tools, and standards. The project is connected to students’ own interests and identities and is expected to make a real-world impact.
Students are encouraged to choose the types of products they wish to create, how they will work and use their time, with guidance by their teacher.
Opportunities are provided for students to reflect on what and how they are learning as well as on the project’s design and implementation.
Students are expected to give and receive feedback on their work in order to revise their ideas and products or conduct further inquiry.
At the end, students demonstrate what they have learned by making a presentation beyond their own classroom, either to the entire school, their parents, or to the general community.
Past projects have included creating a 3D model of the Delaware Water Gap to understand its ecological significance and impact on the surrounding region; examining the physical geography of northeast New Jersey and its impact on the people living in the area; and creating a sales pitch to an imaginary tech company to convince its CEO to locate its new headquarters in a particular city.
Below is a catalogue of project-based learning projects being conducted by our various classes this spring:
Kindergarten: “Ecosystems in Real Life.” Students will research and create dioramas of various kinds of ecosystems illustrating the types of vegetation and animals that live there. Students will learn how their ecosystems support these life forms. Their products will be presented as part of our science fair in May.
First grade: “My Body, My Life.” Students will research and reconstruct on a model of the human body six major systems of the body: digestive, respiratory, circulatory, skeletal, muscular, and nervous. They will learn the connectedness of these systems and the link between the physical body, exercise, and diet. They will also examine the effect of their emotions on their physical well-being. Students will present their discoveries at the science fair in May.
Second grade: “Water Pollution.” Students will examine the impact of water pollution on our school, city, country, and the world. They will investigate how pollutants enter the water cycle and the damage they cause. Working as a group, they will discuss, reflect, and decide the best ways to purify water. Using their knowledge of the water cycle, students will create a terrarium and a water filter, write a report, and present their findings at a school assembly in June.
Third grade: “Designing a School.” Working as a group, students will design the ideal school, taking into consideration its overall purpose, function of the different sections, the school day schedule, the types of classes held, technology, efficiency, and budgeting. They will consider the types of materials to be used and will present their finished product to the entire school community.
Fourth and fifth grades: “Marking History, Making History.” Students will research, write and illustrate the history of the Clifton area from the perspectives of Native Americans, African-Americans, early English settlers, and later immigrants. In this way, students will come to understand that there is more than just one history of a given area depending on one’s perspective. Presentations will be made at a school assembly in May.
Sixth through eighth grades: “Rocking the Rock Cycle on Mars.” Why is Mars so different from Earth? Students will compare the two planets seeking to understand the conditions necessary to support the generation of life. Their investigation will include geological structure, the water cycle, the formation of an atmosphere, and the generation of weather. The project will conclude with students giving a PowerPoint presentation at the school assembly in June.
Check back to this page periodically, as we plan to post photos of our students’ work on these projects in the next few weeks. |
This English Language quiz is called 'Comprehension (Literature)' and it has been written by teachers to help you if you are studying the subject at elementary school. Playing educational quizzes is an enjoyable way to learn if you are in the 3rd, 4th or 5th grade - aged 8 to 11.
When a student learns to read and analyze literature, it will strengthen their narrative writing skills. They will use the elements found in the literature they read, such as characters, settings, events, and specific details, in their own short stories. The more students read, the better they will be able to write their narratives. In this quiz, students will answer questions about what should be included in their narrative writing. |
$2.95 | Unlimited Digital Downloads
Music students break the secret code for the world’s most popular scales and learn the building blocks that will soon lead to an understanding of chords and harmony. This workbook includes three engaging worksheets that create "light bulb" moments into how music works and why scales are an important part of music study.
Example: Students learn the half and whole step pattern for the major scale and discover that all major scales are built with the same pattern. Then they build three major scales on the blank staves. The worksheet concludes with a "Scales in Action" example showing students that the melody for Joy to the World begins with a complete descending major scale.
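As a quick illustration of that “same pattern” idea (this sketch is not part of the workbook), the whole/half-step recipe W-W-H-W-W-W-H can be walked from any starting note to spell its major scale. The sharps-only note names below are a simplification, so scales properly spelled with flats will show enharmonic equivalents.

```python
# A minimal sketch of the Worksheet 1 idea:
# every major scale comes from the same whole/half-step pattern.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_PATTERN = [2, 2, 1, 2, 2, 2, 1]   # whole, whole, half, whole, whole, whole, half

def major_scale(tonic: str) -> list[str]:
    """Spell a major scale by walking the whole/half-step pattern from the tonic."""
    i = NOTES.index(tonic)
    scale = [tonic]
    for step in MAJOR_PATTERN:
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(major_scale("C"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
```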
Worksheet 1: Major Scales (2 Pages)
Worksheet 3: Minor Scales (3 Pages)
Worksheet 4: Mixed Scales - Egyptian, Octatonic, Blues, Whole Tone and Pentatonic Scales (3 Pages)
Scale Quiz (1 Page)
Answer Key (3 Pages)
Unlimited copies for you and your students. However, you may not distribute additional copies to friends and fellow teachers. |
What Is It?
Attention-deficit hyperactivity disorder (ADHD), usually first diagnosed in childhood, can appear in a variety of forms and has many possible causes. People with ADHD probably have an underlying genetic vulnerability to developing it, but the severity of the problem is also influenced by the environment. Conflict and stress tend to make it worse.
The main features of this disorder are found in its name. Attention problems include daydreaming, difficulty focusing and being easily distracted. Hyperactivity refers to fidgeting or restlessness. A person with the disorder may be disruptive or impulsive, may have trouble in relationships and may be accident-prone. Hyperactivity and impulsiveness often improve as a person matures, but attention problems tend to last into adulthood.
ADHD is the most common problem seen in outpatient child and adolescent mental health settings. It is estimated that ADHD affects between 5% and 10% of school-aged children. Boys are more often diagnosed with ADHD than girls. Studies suggest that the number of ADHD diagnoses has risen significantly over the years. But whether more people have the disorder or whether it is just being diagnosed more often is not clear. The definition of the disorder has changed over the past several decades and will continue to develop as the experts explain more about the biology behind it.
The activity component is less apparent in adult ADHD. Adults tend to have problems with memory and concentration and they may have trouble staying organized and meeting commitments at work or at home. The consequence of poor functioning may be anxiety, low self-esteem, or mood problems. Some people turn to substances to manage these feelings.
The symptoms of ADHD — inattention, hyperactivity or impulsive behavior — often show up first at school. A teacher may report to parents that their child won't listen, is "hyper," or causes trouble and is disruptive. A child with ADHD often wants to be a good student, but the symptoms get in the way. Teachers, parents and friends may be unsympathetic, because they see the child's behavior as bad or odd.
A high level of activity and occasional impulsiveness or inattentiveness is often normal in a child. But the hyperactivity of ADHD is typically more haphazard, poorly organized and has no real purpose. And in children with ADHD, these behaviors are frequent enough that the child has a harder than average time learning, getting along with others or staying reasonably safe.
ADHD symptoms can vary widely, but here are common characteristics of the disorder:
- Difficulty organizing work, often giving the impression of not having heard the teacher's instructions
- Easily distracted
- Excessively restless or fidgety behavior; unable to stay seated
- Impulsive behavior (acts without thinking)
- Frequently calling out in class (without raising hand, yelling out answer before question is finished)
- Failing to follow through with teachers' or parents' requests
- Difficulty waiting for his or her turn in group settings
- Unable to stay focused on a game, project or homework assignment; often moving from one activity to the next without completing any
Many children with ADHD also show symptoms of other behavioral or psychiatric conditions. In fact, such problems may be different ways that the same underlying biological or environmental problems come to light. These associated conditions include learning disabilities and disorders characterized by disruptive behavior.
- Learning disabilities — Up to a quarter of children with ADHD may also have learning disabilities. This rate is much greater than the rate found in the general population.
- Oppositional, defiant or conduct disorders — These behavior disorders, which involve frequent outbursts of extremely negative, angry or mean behavior, affect as many as half of all children who have ADHD. Children who have both ADHD and behavioral disturbances are more likely to have a poor long-term outcome, with higher rates of school failure, antisocial behaviors and substance abuse.
There is no single test to diagnose ADHD. For a child, a pediatrician may make the diagnosis, or may make a referral to a specialist. For adults, a mental health professional generally performs the evaluation.
The clinician will ask about symptoms related to ADHD. Since, in children, many of these characteristics are more likely to be seen in a school setting, the clinician will also ask about behavior in school. To help collect this information, the evaluator will often interview parents, teachers and other caregivers or ask them to fill out special behavioral checklists.
Since other conditions may cause the symptoms of ADHD, the medical history and physical examination are important. For example, the doctor may look for trouble with hearing or vision, learning disabilities, speech problems, seizure disorders, anxiety, depression, or other behavior problems. In some cases, other medical or psychological testing may be useful to check for one or more of these conditions. These tests can sometimes help clinicians and teachers develop practical suggestions.
In most children with ADHD, symptoms begin before age 7 and last through adolescence. In some cases, symptoms of ADHD continue into adulthood.
The exact cause of ADHD is not fully understood. There are numerous factors that are associated with the development of ADHD. It may be difficult to avoid these factors, but addressing them may reduce the risk of developing the disorder:
- Psychosocial adversity — severe marital conflict, father's criminal behavior, mother's mental disorder, poverty, the child's foster care placement
- Complications during pregnancy or delivery — poor maternal health, fetal distress, low birth weight
- Premature birth
- Mother's use of tobacco, alcohol or other drugs during pregnancy
- Lead poisoning — although lead exposure does not account for many cases and many children who are exposed to lead do not develop ADHD
Research shows that particular foods probably do NOT cause ADHD.
Although no treatment eliminates ADHD completely, many helpful options are available. The goal of treatment is to help children improve social relationships, do better in school, and keep their disruptive or harmful behaviors to a minimum. Medication can be very helpful, and it is often necessary. Drug treatment by itself is rarely the answer. Medication and psychotherapy together usually have the best results. For example, a behavioral program may be put in place where structured, realistic expectations are set.
Stimulants, such as methylphenidate (Ritalin) and forms of amphetamine (Dexedrine), have been used for many decades. They are relatively safe and effective for most children to help them focus their thoughts and control their behavior. With the development of long-acting forms of stimulants, one dose in the morning can provide a day-long effect.
Despite their name, stimulants do not cause increased hyperactivity or impulsivity. If the disorder has been properly diagnosed, the medication actually has the opposite effect. Common mild side effects are decreased appetite, weight loss, stomachaches, sleep problems, headaches and jitteriness. Adjusting the dose can often help eliminate these problems. Stimulant drugs are associated with some serious concerns and side effects.
- Tics. There is some evidence that tics (uncontrolled movements) are more likely in patients with a family history of tic disorders, but that is still controversial.
- Substance abuse. Although stimulant drugs can be and are abused, newer research shows that they may actually reduce the risk of substance abuse for people with ADHD.
- Growth delays. Experts disagree about the effects of stimulants on growth. There is some evidence that children taking stimulants grow at a rate that is less than expected. Some doctors recommend stopping stimulants periodically during periods of expected growth.
- Cardiovascular risk. Children taking stimulants do show small increases in blood pressure and heart rate. But major heart complications in children, teens and adults taking these drugs are extremely rare. Stimulants do not bring an excessive cardiovascular risk in children and adolescents, except in patients who already had underlying heart defects or disease.
Since such risks vary widely depending on the individual, it is important to discuss the potential benefits and risks of each treatment with your doctor.
Another potential problem, which is not strictly speaking a side effect, is that stimulants can find their way to people other than the person being treated for ADHD. Called "diversion," it is fairly common among adolescents and young adults. The drugs are most often taken to improve academic performance. Some individuals do take stimulants to get high.
Other non-stimulant medications are also available to treat ADHD. Atomoxetine (Strattera) is as effective as stimulants for treating ADHD. It works by a different chemical mechanism than stimulants. Atomoxetine is relatively safe, but carries a rare risk of liver toxicity. The antidepressant, bupropion (Wellbutrin), is helpful in some cases. It is also generally well-tolerated, but it should not be given to people with a history of seizures.
Other treatment approaches, used alone or in combination, may include:
- Behavioral therapy — This refers to techniques that try to improve behavior, usually by rewarding and encouraging desirable behaviors and by discouraging unwanted behaviors and pointing out the consequences.
- Cognitive therapy — This is psychotherapy designed to change thinking to build self-esteem, stop having negative thoughts and improve problem-solving skills.
- Social skill training — Developing social skills improves friendships.
- Parent education and support — Training classes, support groups and counselors can help to teach and support parents about ADHD, including strategies for dealing with ADHD-related behaviors.
Because many children with ADHD also are troubled by poor grades and school behavior problems, schools may need to provide educational adjustments and interventions (such as an individualized educational plan) to promote the best possible learning environment for the child.
When To Call a Professional
Call your doctor if your child shows symptoms of ADHD, or if teachers notify you that your child is having academic difficulties, behavioral problems or difficulty paying attention.
ADHD can cause significant emotional, social and educational problems. However, when ADHD is diagnosed early and treated properly, the condition can be managed effectively, so children can grow up to have productive, successful and fulfilling lives. Although some children appear to grow out of their ADHD as they reach their adolescent years, others have lifelong symptoms.
American Academy of Child and Adolescent Psychiatry (AACAP)
3615 Wisconsin Ave., NW
Washington, DC 20016-3007 |
DISCUSSION: This past weekend, Ellicott City, Maryland was devastated by what was referred to as a 1000-year flood. In recent years, multiple storms have produced 100-year, 500-year, and even 1000-year floods. But what do these terms actually mean, and how can several of these historic floods occur within the stated timespan?
Contrary to the name, a 1000-year flood does not necessarily mean this type of flooding will occur exactly once every 1000 years. Weather does not occur on a timed pattern so it is not possible to say that a certain severity of flooding will only occur exactly every 1000 years. What is actually meant by the name is that there is a 1 in 1000 chance of a storm producing flooding of that severity occurring within a year. It is not unheard of to have multiple historic floods within a relatively short time period. In 2017, flooding from Hurricane Harvey was reported as the third 500-year flood to affect Houston, Texas in three years. There was some confusion from the public as to how it was possible to have multiple 500-year floods less than 500 years apart from each other within one area. Since the naming of the category of these floods is based on probability, we can’t exactly say for certain there will or won’t be more than one flood of that magnitude within the 100/500/1000 etc. year timeframe.
When you flip a coin, there is a 1 in 2 chance of it landing on heads. Say you flip the coin twice. You might get heads twice, tails twice, or one of each. The 1 in 2 probability does not mean that in every two flips exactly one must be heads. More accurately, it means that for a single toss there is a 1 in 2 chance of heads, and when the coin is tossed again there is once again a 1 in 2 chance of heads, because each toss is an independent event. Since 100/500/1000-year floods are defined by the probability that such flooding will occur within a given year, each year is likewise treated as an independent event. For each year, there is a 1 in 100 chance that a 100-year flood will occur, and the occurrence of that flooding in one year does not change the probability of it happening again the next year. That is why it is entirely possible to have more than one flood of the same magnitude within the named time frame.
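To make the independence idea concrete, here is a minimal Python sketch (an illustration added here, not part of the original discussion) that treats each year as an independent 1-in-100 or 1-in-500 event and computes how likely repeat floods are over a span of years; the per-year probabilities are simply the definitions described above.

```python
# Probability sketch: treat each year as an independent Bernoulli trial.
# Assumption (not from the article): a "100-year flood" has p = 1/100 per year.
from math import comb

def prob_at_least(k, years, p):
    """P(at least k flood years in `years` independent years with per-year prob p)."""
    return 1 - sum(comb(years, i) * p**i * (1 - p)**(years - i) for i in range(k))

p100 = 1 / 100
print(prob_at_least(1, 30, p100))   # ~0.26: at least one 100-year flood over a 30-year span
print(prob_at_least(2, 30, p100))   # ~0.036: two or more such floods in the same 30 years

p500 = 1 / 500
print(prob_at_least(3, 3, p500))    # ~8e-9: three 500-year floods in 3 years is rare, not impossible
```

Even a "rare" 100-year flood has roughly a one-in-four chance of showing up at least once over a 30-year span, which is why repeat occurrences should not be surprising.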
Ultimately, weather does not run like clockwork. We cannot predict that a severe flood will occur exactly every 1,000 years. We can, however, deduce that there is a 1 in 1000 chance that a particular type of flood will occur in any given year. While the terminology may seem confusing, it is important to remember it refers to the probability of a flood, not an exact timeline of occurrence.
©2018 Meteorologist Stephanie Edwards |
Whenever you’re dehydrated, or experiencing the stomach flu or diarrhea, you hear people telling you the importance of restoring your body’s electrolytes. But what are electrolytes and where can you obtain them from?
What are electrolytes?
Electrolytes, also known as minerals, are ions that occur in your body.
Okay… what’s an ion?
Chemistry time, folks!
Each chemical atom or molecule has a specific number of negatively charged particles, electrons, associated with it. When the number of electrons of the atom/molecule deviates from its usual number, it becomes charged. It can either be charged positively by losing electrons, or negatively by gaining electrons. Any charged atom or molecule is known as an ion.
Examples of minerals that occur in our bodies are:
- sodium (Na+)
- potassium (K+)
- chloride (Cl–)
- calcium (Ca2+)
- magnesium (Mg2+)
- bicarbonate (HCO3–)
- phosphate (PO43-)
- sulfate (SO42-)
Why do we need electrolytes?
Electrolytes are essential for our motor skills as well as other nerve impulses and muscle contractions (including the beating of your heart!). They also affect how much water is in your body, and the acidity of your blood. They are important because they carry electric charges.
Dehydration, which can result from the stomach flu, diarrhea and even profuse sweating, represents a state where a lot of water has been lost. Electrolytes accompany the water that leaves our systems, which is why we are told to replenish them. Without electrolytes, we are slower and weaker because they are so important to our biological processes.
People usually recommend drinking sports drinks to raise the level of electrolytes (and fluids) in your system when you are dehydrated, but this really does depend on why you are dehydrated! If you are dehydrated as a result of exercising, then a sports drink is fine. For cases where you are dehydrated as a result of the stomach flu or diarrhea, it’s suggested that you drink an oral electrolyte solution such as Pedialyte® in place of a sports drink.
Sports drinks have a high concentration of sugars, which will irritate your stomach when you have the stomach flu and worsen diarrhea, since the sugar draws more water into your bowels. Pedialyte doesn't use sucrose, which is the sugar found in all Gatorade® products and most Powerade® products. Gatorade is strictly a sports drink because of its sucrose levels, but I'm going to hand it to Powerade because they've introduced Powerade Zero, which has no sugar whatsoever, making it a great candidate for an electrolyte replenisher.
All of these drinks typically focus on sodium and potassium as electrolytes. Why are these two electrolytes so important? That’s a story for another day. |
Recall the ramps we discussed in the previous lesson. The ratio of the vertical rise to the horizontal run was \(1: 12\). How is this ratio defined in terms of the sides of the triangle?
Refer back to the ramps again. Redraw and calculate the side lengths of the ramps (horizontal run and length of the ramp) using the 1:12 ratio. Find the measures of the angles formed by the rise, run and length of the ramp.
1) One ramp needs to have a vertical rise of 3 feet.
2) The other ramp needs to have a vertical rise of 5 feet.
Once calculations have been made, discuss the ratio of the vertical rise (opposite side) to the length of the ramp (hypotenuse), using the angle made by the ground and the ramp as the angle of reference.
Repeat this process for the ratio of the horizontal run of the ramp (adjacent side) to the hypotenuse. You should again see that this ratio is the same for both ramps.
The ratio of the rise to the hypotenuse is called the sine and the ratio of the run to the hypotenuse is called the cosine.
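As a worked illustration (a sketch using the lesson's 1:12 ratio; the specific code and numbers are not taken from the lesson materials), the side lengths, the angle, and the sine and cosine ratios for the 3-foot and 5-foot rises can be computed as follows.

```python
# Worked example for the 1:12 ramp ratio (assumed rise:run, as in the lesson).
import math

def ramp(rise_ft):
    run = 12 * rise_ft                              # 1:12 means 12 ft of run per 1 ft of rise
    length = math.hypot(rise_ft, run)               # hypotenuse = length of the ramp surface
    angle = math.degrees(math.atan2(rise_ft, run))  # angle between the ground and the ramp
    return run, length, angle

for rise in (3, 5):
    run, length, angle = ramp(rise)
    print(f"rise={rise} ft, run={run} ft, ramp length={length:.2f} ft, angle={angle:.2f} deg")
    print(f"  sine   = rise/length = {rise / length:.4f}")
    print(f"  cosine = run/length  = {run / length:.4f}")

# Both ramps give the same sine (~0.0830) and cosine (~0.9965): the ratios depend only
# on the angle, which the 1:12 ratio fixes, not on the size of the ramp.
```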
This video will summarize this discovery: |
ultimatum (ŭlˌtĭmāˈtəm), in international law, final, definitive terms submitted by one disputant nation to the other for immediate acceptance or rejection. Since refusal to accept the terms may lead to war or hostile measures, an ultimatum usually constitutes a conditional declaration of war. An ultimatum is written and indicates how its nonacceptance will be regarded. When a brief time limit is imposed, the crisis becomes more intense, because there is less opportunity for mediation or arbitration. The contracting powers at the second Hague Conference (1907) agreed to begin hostilities only after giving warning. These provisions were superseded by the Covenant of the League of Nations and later by the Charter of the United Nations, which limited the right of states to use war as an instrument of national policy. An ultimatum presented by Austria to Serbia on July 23, 1914, was the immediate cause of World War I. Hitler also presented several ultimatums (to Czechoslovakia and Poland) in the year before the outbreak of World War II. Japan, however, began its war with the United States with an attack rather than an ultimatum.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. |
Most nations have special laws for young people. However, this has not always been the case.
For a long time, there was only one criminal legal system. Youth and adults alike could be jailed, whipped, or even killed for breaking the law. Around 600 AD, the Romans decided that children under the age of 7 should not be punished as criminals. By the 1700s, children under 13 were generally viewed as incapable of appreciating the nature and consequences of their conduct. During the 1800s, reformers began to develop a separate juvenile court and legal system. The philosophy of these reforms was an emphasis on rehabilitation. It was believed that saving young people from a life of crime was an important objective for society.
In Canada, our understandings of law and society have also evolved. Thus, the Canadian criminal justice system has changed over time.
In 1908, the Juvenile Delinquents Act (JDA) marked the creation of Canada’s separate juvenile justice system. The JDA:
In 1984, the Young Offenders Act (YOA) replaced the JDA. Partially due to public demands for a stronger response to youth crime, the YOA:
The YOA was criticized for a number of reasons. It was said that it did not do enough to prevent youth-at-risk from entering a life of crime. As well, some argued that its sentencing options were inadequate to deal with and provide long-term rehabilitation for the most serious violent youth.
Criticism was also levelled because of the over-use of jail sentences for non-violent young offenders who could be better served through community-based approaches that emphasized responsibility and accountability.
In 2003, the government responded to the perceived weaknesses of the YOA by replacing it with the Youth Criminal Justice Act (YCJA). Referring to values such as accountability, respect, responsibility, and fairness, the YCJA’s preamble explained the law’s rationale, and stated that:
The YCJA included principles to provide clear direction to those dealing with youth in conflict with the law; emphasized out-of-court and non-custodial options for non-violent youth; and focused on reintegration and rehabilitation. At the same time, the YCJA provided custody options for youth who committed more serious offences.
In 2012 the government passed the Safe Streets and Communities Act, an Act that made important changes to the YCJA. The changes were designed to help ensure that youth who commit violent or repeat offences are held fully accountable.
Broadly speaking, the YCJA’s general principles were amended and now highlight the protection of the public as a key goal of the youth justice system. The principles of the Act now state that the youth justice system is intended to protect the public by:
At the same time, the amendments also emphasize that the youth justice system must be based on the principle of diminished responsibility of young persons.
Additional amendments, dealing primarily with youth who commit violent and repeat offences, were also incorporated.
Today, claims such as “youth crime is on the rise” and “youth crime is out of control” seem common in coffee shops and online message boards. There is one problem with such broad statements. They are not true.
The most recent data from Statistics Canada revealed that in 2013, the overall volume of youth crime declined by 16% from the previous year. This included drops in homicides, serious assaults, motor-vehicle thefts, and break-ins. This recent decline in youth crime is consistent with longer-term trends. In fact, Statistics Canada data reveals that crime has been in steady decline for well over 20 years.
In 1991 nearly 9,500 crimes were recorded per 100,000 youth in Canada, a peak in recent history. When the Youth Criminal Justice Act became law in 2003, there were nearly 7,500 crimes per 100,000 youth. By 2013, the number of crimes per 100,000 youth fell to just under 4,500. This is a 40% decline since the YCJA was enacted.
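For readers who want to check the arithmetic, the short sketch below (using the approximate rates quoted above, which are not exact Statistics Canada figures) reproduces the percentage decline.

```python
# Approximate per-100,000 youth crime rates quoted in the text.
rate_1991, rate_2003, rate_2013 = 9_500, 7_500, 4_500

decline_since_ycja = (rate_2003 - rate_2013) / rate_2003
print(f"Decline since the YCJA (2003-2013): {decline_since_ycja:.0%}")  # 40%

decline_since_peak = (rate_1991 - rate_2013) / rate_1991
print(f"Decline since the 1991 peak: {decline_since_peak:.0%}")         # ~53%
```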
While many factors contribute to Canada’s falling crime rates, the implementation of the Youth Criminal Justice Act may offer a partial explanation for Canada’s falling youth crime rate.
Young people are dealt with differently than adults in criminal law. Children under the age of 12 cannot be arrested or charged with a crime. Once youth reach the age of 12, they are dealt with in a separate youth justice system. While the same criminal laws apply to youth 12 and over, the administration of justice is carried out under the provisions of the Youth Criminal Justice Act. Once youth reach the age of 18 they are subject to the adult criminal justice system.
Some of the arguments used to justify 18 as the age where individuals are subjected to the “full” adult system of criminal responsibility include:
In addition to these reasons, it is also important to note that the Youth Criminal Justice Act requires the Crown to consider seeking an adult sentence when a youth 14 or older is charged with a serious violent offence. The YCJA also allows a court to hand down an adult sentence when certain requirements are met. When a youth receives an adult sentence, they will serve the sentence in a youth facility until they turn 18, and possibly beyond. This means that despite 18 being the age of “full” responsibility, the law is flexible for exceptional circumstances.
For the administration of justice for children under the age of 12, the Department of Justice has suggested that “the small number of children under the age of criminal responsibility who exhibit serious behaviour problems can be dealt with more effectively by parents and the community without involving the state. When a more formal approach is required, child welfare or mental health systems are the preferred approach. These systems have access to a wide array of services that are more age-appropriate, family-oriented and therapeutic than those available through the criminal justice system.” |
Emperor penguins may dominate Antarctica now, but even they had trouble during the last ice age. A new study is out this week and suggests just three populations may have survived during the last ice age.
Emperor penguins are known for their hardiness against Antarctica’s frigid cold. Temperatures regularly drop to -30 degrees Celsius. Yet, the emperor penguins have no trouble breeding during the cold winters.
The penguins can handle the cold, so why were only three populations able to survive?
Gemma Clucas, one of the lead authors of the paper, explains: “Due to there being about twice as much sea ice during the last ice age, the penguins were unable to breed in more than a few locations around Antarctica. The distances from the open ocean, where the penguins feed, to the stable sea ice, where they breed, was probably too far. The three populations that did manage to survive may have done so by breeding near to polynyas – areas of ocean that are kept free of sea ice by wind and currents.”
Researchers believe one of these three groups can be found today in the Ross Sea. Emperor penguins that breed here are genetically distinct from other emperor penguins across Antarctica.
While emperor penguins need ice sheets to breed on, access to open seas is a must for feeding.
Emperor penguins’ habitat is a delicate one. There needs to be a balance of sea ice and access to open seas for the penguins to flourish. A 2014 study from the Woods Hole Oceanographic Institution says the entire population could fall by a third by the end of the century due to disappearing sea ice.
As the sea ice disappears, so does krill – a main food supply for emperor penguins.
Researchers believe climate change might affect the Ross Sea last compared to other regions in Antarctica. In fact, even with warmer temperatures around the globe, ice levels are increasing in the Ross Sea. Changes in wind patterns have led to increases in the extent of winter sea ice over the last few decades. But, this trend is expected to reverse by the end of the century.
Dr Tom Hart, who helped organize the study, said, “It is interesting that the Ross Sea emerges as a distinct population and a refuge for the species. It adds to the argument that the Ross Sea might need special protection.”
Image credit: Paul Nicklen/National Geographic |
Osteoarthritis is the most common form of arthritis. It causes pain, swelling, and reduced motion in your joints. It can occur in any joint, but usually it affects your hands, knees, hips or spine.
Osteoarthritis breaks down the cartilage in your joints. Cartilage is the slippery tissue that covers the ends of bones in a joint. Healthy cartilage absorbs the shock of movement. When you lose cartilage, your bones rub together. Over time, this rubbing can permanently damage the joint.
Risk factors for osteoarthritis include
- Being overweight
- Getting older
- Injuring a joint
No single test can diagnose osteoarthritis. Most doctors use several methods, including medical history, a physical exam, x-rays, or lab tests.
Treatments include exercise, medicines, and sometimes surgery.
NIH: National Institute of Arthritis and Musculoskeletal and Skin Diseases |
Wayang Purwa: Repertoire and The Cast Of Characters
The term wayang purwa refers to four cycles of epics, which began to be standardized by the royal courts of Central Java in the eighteenth century. The two most popular and commonly performed of these epics are the Mahabharata and Ramayana.
Wayang stories involve moral and ethical dilemmas faced by the characters in their journeys through life, love, and war. The stories are about good versus evil, but more than that, they contemplate the existential struggle between right and wrong. They are about the pursuit of living a virtuous, noble life and the search for meaning. The means to those ends are not always clear cut. “Good” characters may possess certain negative traits and likewise, not all the “bad” characters are entirely immoral. Whatever the circumstance, wayang stories always present philosophical ideas and poignant messages.
The Hindu stories, Mahabharata and Ramayana, originated in India possibly as far back as the eighth century BCE and reached Java around the eighth century CE. For centuries, Hinduism was the predominant religion of Java. Over time, the epics became distinctly Javanese versions of the original Indian texts and by the tenth century they were recited in the form of wayang kulit as court-based theater. Most likely, the Hindu stories were applied to indigenous beliefs and local shadow puppet traditions, melding into a uniquely Javanese custom.
When Islam took hold in Java (1200 – 1600), the current stylized form of wayang kulit was adopted to meet cultural prohibitions on representation of the human form. The aesthetic traits of the puppets persist and Islam’s presence on Java remains strong, while the Hindu body of literature continues to be an important part of Javanese art and culture to this day.
The epics are divided into over 200 distinct and autonomous, yet related episodes (lakon), which are the stories presented in singular, nine-hour performances of classical wayang kulit in Central Java. |
Sea Star (Asterias forbesii)
Alternate common name: Forbes Sea Star, Starfish.
Color: Brownish purple to orange with lighter underside.
Size: Up to 12 inches across when mature.
Habitat: Rocky shores, tide pools, dock pilings, Bay bottom.
Seasonal appearance: All year.
Sea stars are not fish as their nickname "starfish" suggests. They belong to a group of animals called echinoderms, which means "spiny skin." They are related to brittle stars, sea urchins, sea cucumbers, and sand dollars. Sea stars have five arms, or rays, connected to a small round body. Sea stars detect light with five purple eyespots, one at the end of each arm. The bright orange dot in the center of the body is called the madreporite. This organ pumps water into the sea star's body. This pumping action creates suction at the end of hundreds of tube feet, located in paired rows on the underside of the arms.
Life History and Behavior
Sea stars use suction in the tube feet for movement and feeding. They wrap their bodies around quahogs and other bivalves, using the suction from their tube feet to pull shells apart. When the prey is opened, the sea star pushes its stomach out of its body and into the bivalve, secreting enzymes that digest the prey's soft body tissues. The liquefied bivalve is then absorbed into the stomach. Sea stars feed often, and their size depends on the amount of food they eat, not on their age.
Sea stars are eaten by bottom-dwelling fish and crabs, as well as by sea gulls when low tides leave the sea stars exposed. Regeneration will occur as long as one fifth of the sea star's body remains intact. Sea stars breed in the spring, producing as many as 2,500,000 eggs. Females will feel plump and spongy when their arms are filled with eggs.
Adapted from The Uncommon Guide to Common Life on Narragansett Bay. Save The Bay, 1998. |
Ask the Mauritians
What better way to get help with your queries than to ask the locals?
Slavery in Mauritius
The first slaves were brought to Mauritius in 1639 from Madagascar by the Dutch, after they discovered the island and decided to settle it. However, development was hampered by slaves running away upon arrival and by calamities such as cyclones and bad harvests.
In 1715, the French took possession of the island and brought slaves from East Africa to help with development plans. Slaves worked in the sugarcane fields, in construction and in houses, as their masters wished. Whether men, women or children, they were categorised into three groups: skilled, labour and household. They were thus bought and sold based on their specialisation and their country of origin.
Slavery has existed around the world for a long time and, in different forms, is still ongoing today. A slave can be defined as a person who can be bought, sold or pawned, removed from his or her homeland, and left at the total mercy of a master. It is a way of living in which people no longer have any rights over themselves but instead become the property of their masters: they are forced to work without compensation of any sort, have no right to ask for change, and can even be killed for misconduct, since they belong entirely to their masters.
Once captured, bought or obtained by birth, slaves were forced to submit to the will of their masters. The history of Mauritius reveals that the island lived under a severe regime of slavery in the past, and today slave descendants account for a good portion of the Mauritian population.
Forms of Punishment
Slaves who did not please their masters were punished in different ways such as:
- Body mutilations eg having their ears cut off if they tried to escape
- Branding eg having their shoulder marked with a hot iron in the form of a Fleur de Lys
Although they knew they would have to face harsh consequences if they were captured, many slaves still tried to run away and were known as maroon slaves. These maroons had to be constantly on the move to avoid capture and faced difficult conditions such as hunger, lack of shelter and lived in fear.
Abolition of Slavery
The slave trade flourished in the 18th and 19th centuries in the name of development and manpower for progress, with Mauritius acting as a main port. There was quite an influx of slaves onto the island, and in 1723 the Code Noir was passed under Louis XV as a set of rules governing slaves. Though the Code Noir was already harsh (Roman Catholicism was forced upon the slaves, weddings required the masters' permission, slaves had no right to gather, and they were to be given food and clothes even when sick or old, among other provisions), even these rules were seldom respected. In 1793, the slave trade was forbidden by the Convention de Paris, but since settlers on the island were against it, they decided not to abide by the decree.
In 1808, the slave trade became illegal in British colonies because of mounting opposition to such treatment. When the British took possession of the island in 1810, the trade was prohibited there as well, yet slavery continued in order to support the development of the island. Slaves were still being ill treated, and many of them ran away whenever they could to hide in the forests and mountains.
The Slavery Abolition Act was passed in 1833 under King William IV and applied throughout the British Empire; as a result, slavery was abolished in Mauritius on 1 February 1835. Slaves were freed from their masters and became free men. However, the ex-slaves had to serve their former masters as apprentices, and the following clauses were applied:
- Agricultural slaves had to work for 6 years
- Domestic slaves had to work for 4 years
- Apprentices had to be remunerated by their masters
Le Morne Heritage Trust Fund
When British soldiers went to deliver the good news to slaves hiding on Le Morne Mountain in the south west of the island, the slaves thought they were being arrested for escaping and jumped off the mountain to their deaths rather than face enslavement all over again. The area is an important point in the history of Mauritius: runaway slaves hid there and expressed their own cultures through songs and dances longing for their faraway land. To commemorate the slaves' fight for their freedom, the mountain has been declared a national heritage site in Mauritius and 1 February a public holiday.
Sometimes the enormity of climate change can seem overwhelming. What can one person, or even one nation, do on their own to slow and reverse climate change?
Chihuahuan Desert Charities has found a route. We are teaching Las Cruces students sustainable food production and consumption practices that can help mitigate and even reverse impacts of climate change.
Corn grown in the U.S. requires barrels of oil for the fertilizer to grow it and the diesel fuel to harvest and transport it. Some grocery stores stock organic produce that do not require such fertilizers, but it is often shipped from halfway across the globe. And meat, whether beef, chicken or pork, requires pounds of feed to produce a pound of protein.
Choosing food items that balance nutrition, taste and ecological impact is no easy task. Foodstuffs often bear some nutritional information, but there is little to reveal how far a head of lettuce, for example, has traveled.
Improved agricultural practices could quickly eliminate this significant chunk of emissions.
Some may think that soil is just “dirt,” brown stuff that holds plants in the ground. But soil is an organism in itself, and healthy soil is rich in microorganisms, which under optimal circumstances live in symbiotic relationships with the plants. Healthy soils store vast quantities of atmospheric carbon. Improving soil health is, therefore, an integral part of reversing CO2 levels.
According to cutting-edge agricultural research, including that outlined in the Rodale Institute’s white paper, Regenerative Organic Agriculture and Climate Change, “recent data from farming systems and pasture trials around the globe show that we could sequester more than 100 percent of current annual CO2 emissions with a switch to widely available and inexpensive organic management practices, which we term “regenerative organic agriculture.”
As well as sequestering carbon, regenerative organic and agro-ecological systems can mitigate the chaotic effects brought about by climate change, such as flooding. Healthy soils have structure that allows them to retain large quantities of water. This structure not only holds soil in place preventing erosion, it also allows plants to be more tolerant of weather extremes. Regenerative systems increase the amount of carbon in soil while maintaining yields. In fact, research shows that yields under organic systems are more resilient to the extreme weather which accompanies climate change.
The DYGUP & SUSTAIN Program focuses on teaching regenerative agriculture and using 21st Century STEM skills. By donating, you will help improve education outcomes for economically disadvantaged children, provide them with STEM education, support students as they prepare for success in post-secondary educational institutions, promote responsible stewardship of the environment, protect clean air and water, and create opportunities for improving physical activity and nutrition in youth.
Chihuahuan Desert Charities is proud to be the fiscal sponsor for DYGUP & SUSTAIN and to support their important work in the Las Cruces community. The DYGUP & SUSTAIN Program has many partnerships in the community, including Taylor Hood Farms, Backyard Farms LC, First Christian Church, and Las Cruces High School. Support the DYGUP & SUSTAIN Program at Legacy Farm in Las Cruces by visiting ChihuahuanDesertCharities.org
Will you join us? Come back each day to learn more about this important mission and how YOU can make a difference. You can help immediately by giving to our “Love the People, Feed the People!” campaign. Donate to Fight Hunger In Las Cruces. |
The Mayo Clinic classifies beans as a fruit. According to botanists, a fruit is the part of the plant that develops from a flower or the section of the plant that contains the seeds. The other parts of plants are considered vegetables.
A common misconception is that beans are legumes and that legumes are neither fruits nor vegetables. Beans are indeed legumes, but legumes are simple, dry fruits contained within a shell or a pod. The most well-known legumes are peas, beans, peanuts and alfalfa. Legumes are a good source of protein, especially for those who abstain from eating meat. |
sphero-, spher-, -sphere-
(Greek: ball, round, around; globe, global; body of globular form; by extension, circular zone, circular area)
2. The study of the optical characteristics of the atmosphere or products of atmospheric processes.
The term is usually confined to visible and near visible radiation; however, unlike meteorological optics, it routinely includes temporal and spatial resolutions beyond those discernible with the naked eye.
Meteorological optics is that part of atmospheric optics concerned with the study of patterns observable with the naked eye.
This restriction is often relaxed slightly to allow the use of simple aids; such as, binoculars or a polarizing filter.
Topics included in meteorological optics are sky color, mirages, rainbows, halos, glories, coronas, and shines.
2. The pressure at any point in an atmosphere due solely to the weight of the atmospheric gases above the point concerned.
3. The average atmospheric pressure at sea level is approximately 14.7 pounds per square inch.
With an increasing altitude, the pressure decreases; for example, at 30,000 feet, approximately the height of Mt. Everest, the air pressure is 4.3 pounds per square inch.
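As an illustration only (the column temperature below is an assumed round value, and the real atmosphere is not isothermal), the quoted sea-level and high-altitude figures are roughly consistent with the barometric formula.

```python
# Isothermal barometric approximation: P = P0 * exp(-M*g*h / (R*T)).
# Assumed values: column-average temperature ~250 K and sea-level P0 = 14.7 psi.
import math

P0 = 14.7        # psi at sea level
M = 0.0289644    # kg/mol, molar mass of dry air
g = 9.80665      # m/s^2
R = 8.31446      # J/(mol*K)
T = 250.0        # K, a rough column-average temperature (assumption)

def pressure_psi(altitude_ft):
    h = altitude_ft * 0.3048                 # feet -> meters
    return P0 * math.exp(-M * g * h / (R * T))

print(pressure_psi(0))        # 14.7 psi at sea level
print(pressure_psi(30_000))   # ~4.2 psi, close to the 4.3 psi quoted above
```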
The term is applied in particular to devices used to measure infrared radiation.
2. A receiver for detecting microwave thermal radiation and similar weak wide-band signals that resemble noise and are obscured by receiver noise.
The primary application of an atmospheric radiometer has been on board spacecraft measuring atmospheric and terrestrial radiation, and they are mostly used for meteorological or oceanographic remote-sensing.
Their secondary application is also meteorological, as zenith-pointing surface instruments that view the earth's atmosphere in a region above the stationary instrument.
By understanding the physical processes associated with energy emission at these wavelengths, scientists can calculate a variety of surface and atmospheric parameters from these measurements, including air temperature, sea surface temperature, salinity, soil moisture, sea ice, precipitation, the total amount of water vapor and the total amount of liquid water in the atmospheric column directly above or below the instrument.
2. An apparent upward displacement of celestial objects relative to the horizon as light from them is bent toward the vertical by the decreasing density with altitude of the earth's atmosphere.
It is greatest for objects on the horizon and negligible at elevations higher than about 45 degrees.
3. The angular difference between the apparent zenith distance of a celestial body and its true zenith distance, produced by refraction effects as the light from the body penetrates the atmosphere.
Any refraction caused by the atmosphere's normal decrease in density with height.
Near surfaces on the earth, those within a few meters or so, are usually dominated by temperature gradients.
2. The rhythmic, periodic oscillation of the earth's atmosphere because of the gravitational effects of the earth, sun, and moon and to the absorption of radiation by the atmosphere.
3. A tidal movement of the atmosphere resembling an ocean tide but caused principally by diurnal temperature changes.
Both the sun and moon produce atmospheric tides, and there also exist both gravitational tides (gravitational attraction of the sun or moon) and thermal tides (differential heating of the atmosphere by the sun). |
“A roller coaster is considered any elevated track with curves and rises, carrying passengers in open, rolling cars for entertainment” (5). Today's roller coasters appear to be tons of tubular metal intertwined around itself, but regardless of how big, fast, or gravity-defying they are, they all use the same natural force – gravity. The more twisting, turning, and flipping a roller coaster does, and the faster it goes, the more the coaster depends on the laws of physics, not mechanics, to keep it moving. There is no onboard motor on a roller coaster, yet it can still reach speeds that exceed the limits of a car on the parkway while completing a curve, twist, rise, or plunge.
History of Roller Coasters
Modern-day roller coasters are built on the failures and successes of those created over the years, and though they are more complex today, roller coasters wouldn't exist if it weren't for the ones of past generations. Originating in Russia, roller coasters were as basic as they come – a simple ramp. Russia had the climate for sledding, but its flat plains offered few natural inclines. To solve this problem they built frozen slides where slopes didn't naturally exist. This worked well for the Russians, but other countries didn't have such cold winters to maintain the ice on the slide. French inventors desperately wanted a slide of their own, so they came up with wheels. These wheels would sit in the carved grooves of wooden ramps, allowing for year-round fun. Eventually this Russian-invented, French-evolved contraption grew in popularity and proved that people craved the speed, the height, and the sense of daring that have resulted in the roller coasters of today and those yet to come.
The Physics of Roller Coasters
All roller coasters rely on the same physical principles to move – potential energy, kinetic energy, gravity, and momentum. Roller coasters use the energy gained from getting to the top of the lift hill or from a powerful...
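To make the energy argument concrete, here is a minimal sketch (with assumed drop heights, not figures from the text) that converts the potential energy gained on the lift hill into speed at the bottom of a drop, ignoring friction and air resistance.

```python
# Energy conservation for a frictionless coaster: m*g*h = (1/2)*m*v^2, so v = sqrt(2*g*h).
# The mass cancels, which is why the drop height, not the train weight, sets the speed.
import math

g = 9.81  # m/s^2

def speed_at_bottom(drop_height_m):
    return math.sqrt(2 * g * drop_height_m)  # m/s, ignoring friction and drag

for h in (30, 60, 90):  # assumed drop heights in meters
    v = speed_at_bottom(h)
    print(f"{h} m drop -> {v:.1f} m/s ({v * 3.6:.0f} km/h)")

# A 60 m drop alone yields ~34 m/s (~123 km/h), faster than most highway speed limits,
# with no onboard motor -- gravity does the work.
```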
Days 6-7 What do plants really need?
Lesson 13 of 19
Objective: SWBAT support an argument that plants get the materials they need to grow, develop, and survive from the sun, air, water, and soil by collecting evidence through multiple experiments
5e Lesson Plan Model
Many of my science lessons are based upon and taught using the 5E lesson plan model: Engage, Explore, Explain, Elaborate, and Evaluate. This lesson plan model allows me to incorporate a variety of learning opportunities and strategies for students. With multiple learning experiences, students can gain new ideas, demonstrate thinking, draw conclusions, develop critical thinking skills, and interact with peers through discussions and hands-on activities. With each stage in this lesson model, I select strategies that will serve students best for the concepts and content being delivered to them. These strategies were selected for this lesson to facilitate peer discussions, participation in a group activity, reflective learning practices, and accountability for learning.
The Ecosystems and Interactions unit focuses on students recognizing the interrelationship between organisms and their ecosystems. It engages students in understanding that organisms have observable characteristics that are fully inherited and can be affected by the climate and/or environment. Students distinguish structures that define classes of animals and plants, and develop an understanding that all organisms go through predictable life cycles. They learn that organisms depend upon one another for growth and development and discover that plants use the sun's energy to produce food for themselves. They observe how the sun's energy is transferred within a food chain from producers to consumers to decomposers.
In this lesson, Days 6-7: What Do Plants Really Need?, students wrap up their observations of each plant's growth and development. Then, students write evidence-based conclusions, which are used as part of a whole-class discussion on developing reasons to support the outcomes they saw with each plant model. This discussion prepares them to write an argument that plants get what they need to grow from air, water, and sunlight. I collect this written piece and use it as an assessment.
Next Generation Science Standards
This lesson will address and support future lessons on the following NGSS Standard(s):
5-LS1-1. Support an argument that plants get the materials they need for growth chiefly from air and water.
5-PS3-1. Use models to describe that energy in animals’ food (used for body repair, growth, motion, and to maintain body warmth) was once energy from the sun.
Students are engaged in the following scientific and engineering practices
4.) Analyzing and Interpreting Data- Students analyze and interpret the data they have collected throughout the investigation. They compare and contrast in order to discuss and make sense of each plant's growth and development.
7.) Engaging in Argument from Evidence- Students construct a written scientific explanation that supports the statement Plants only get materials they need to grow from sunlight, air, and water. They use their evidence based conclusions and reasons from our discussions to write it.
The Days 6-7 What Do Plants Really Need lesson will correlate to other interdisciplinary areas. These Crosscutting Concepts include:
2) Cause and Effect-Students make and defend an argument based on evidence that Plants only get materials they need to grow from sunlight, air, and water.
6.) Structure and Function- Students use each plant variable investigated to describe that plants only need sunlight, air, and water to grow and function.
Disciplinary Core Ideas within this lesson include:
LS1.C Organization for Matter and Energy Flow in Organisms
LS2.A Interdependent Relationships in Ecosystems
Importance of Modeling to Develop Student Responsibility, Accountability, and Independence
Depending upon the time of year, this lesson is taught, teachers should consider modeling how groups should work together; establish group norms for activities, class discussions, and partner talks. In addition, it is important to model think aloud strategies. This sets up students to be more expressive and develop thinking skills during an activity. The first half of the year, I model what group work and/or talks “look like and sound like.” I intervene the moment students are off task with reminders and redirecting. By the second and last half of the year, I am able to ask students, “Who can give of three reminders for group activities to be successful?” Who can tell us two reminders for partner talks?” Students take responsibility for becoming successful learners. Again before teaching this lesson, consider the time of year, it may be necessary to do a lot of front loading to get students to eventually become more independent and transition through the lessons in a timely manner.
Noting Final Observations
Using Evidence to Support a Conclusion
- Does air affect plant's growth?
- Does water affect a plant's growth?
- Does sunlight affect a plant's growth?
- Does soil affect a plant's growth?
I ask them to review their data tables, looking for structures that appeared during the week, color changes that took place throughout the week, and how tall the plant grew, and to examine their illustrations showing these variations and differences. As they analyze each data table separately, the students draw an evidence-based conclusion by writing a statement that begins with the sentence starter: Based on my data, I conclude ____________________________________. I instruct students to continue the statement based on the information and data they collected throughout the investigation. Students write a concluding sentence that relates to their original test question.
This process continues for each independent variable set up at the start of the multi-day lesson. After writing a concluding sentence for each investigation the students planned and performed, I tell them we are reconvening as a whole class to share the conclusions made by one another.
Sharing Evidence Based Conclusions and Developing Reasons
I call on each group to share a conclusion aloud.
Then, as a class, we enter into a discussion. I use questions like:
- Did removing a plant from the light stop it from growing? Why or Why not?
- Did keeping a plant in a bag so it did not get air, affect the plant's growth? Explain
- Did not giving a plant water, affect the plant's growth? Explain
- Did not having soil prevent the plant from growing? Why or Why not?
By asking these questions, I am leading them into developing reasons as whether or not a plant needs water, air, sunlight, soil, to obtain nutrients to grow. I want them start rationalizing their thinking so they can prepare a written argument that plants get their materials to grow mostly from air, water, and sunlight. (not soil).
Our discussion focused on the fact that most plants grew even with a missing material. We talked about our control plant, which received all ingredients. Students quickly noted this plant had the most growth and the most developed structures. Then we discussed the plant that received no sunlight. All the students were amazed at several things regarding this plant. First, its size: even without sunlight, this plant grew extremely tall. One group noted its height to be 12 inches. We also noted that while the plant grew very tall, it did not have color; its stem was solid white. We had learned during our photosynthesis lesson that sunlight helps a plant make its chlorophyll, which explains the white color. I asked them, "If it didn't have sunlight to perform photosynthesis to make food for itself, then how did the plant grow?" Many pondered over this question, so I asked them to think back to our seed dissection and consider the parts we found inside. We came to the conclusion that the plant was able to live off of the stored food from the seed. Other students also noted that this plant did not have other structural parts, like the multiple leaves on our control plant. We inferred that it grew so tall because it was trying to seek out light.
Next, we talked about the plant that did not receive soil. Students pointed out that the seed in the water, sun, and air still germinated. They shared how the seeds in the water showed tiny roots popping out, which indicated water does make a difference for a growing plant. This led into our conversation about the plant that did not receive water while the others did. Students quickly recognized that the seed that did not receive water never germinated and that nothing happened while it sat in the soil. We determined that soil is not a necessity for plants to grow.
Finally, we talked about the plant sealed in the bag with no air. There was a lot of conversation centered on this plant because it grew! Many students identified that the bag became an ecosystem and that the water cycle took place inside, allowing the plant to grow. Students shared that the water cycle and photosynthesis helped the plant survive and still grow: the water cycle let it keep receiving water, which was used for photosynthesis.
At the end of our discussion, we concluded that plants need air, sunlight, and water to grow, and that soil is not necessary.
Elaborate / Evaluate
Applying the Data to Construct a Written Argument
I hand out lined paper and direct students to the board. I write the statement:
Plants only get materials they need to grow from sunlight, air, and water.
and ask them to write it at the top of their paper. I explain to them they are writing a scientific explanation that supports this statement posted. I remind them a good scientific explanation includes data as evidence to validate this type of thinking; therefore, they need to consider data from the investigations they did.
Students work on their explanations for the remainder of class time. While they work, I move around the room monitoring their progress. At the end of class, students take the assignment home and complete it for homework. The next day I collect it for assessment.
This type of writing is still an ongoing practice for my students. I stair-cased this assignment by having them write evidence based conclusions before writing this essay. They have been working on claim, evidence writing all year which is how many of them wrote their argument. They still struggle with really developing the reasons which indicates I need to provide them with more guided opportunities and models. They have made progress since the start of the school year, so I am satisfied with their assignment overall. |
Marshmallow Constellations is one of the activities taken from our Space section. The purpose of the activity is for children to learn more about stars and constellations and develop their understanding of the solar system. In doing so, they will apply engineering and mathematics skills to create their constellations.
A great starting point for this activity is with a bit of star gazing! Perhaps your class will have already visited a planetarium or have some knowledge of different constellations. Unfortunately it’s rather difficult to participate in a spot of star gazing during the school day. Therefore we’ve found iPad apps to be the next best thing! Apps such as ‘Star Walk’, ‘Solar Walk’ and ‘Star Chart’ give the user the ability to scroll across the night sky, exploring the many different constellations as they go. In fact, some of these apps display a faint picture of the constellation beneath it. Another useful feature is the ability to type in your location in order to see an accurate representation of the current night sky in your area. Another starting point for this activity is with a book. See our book and app recommendations here.
Once the children have had a chance to explore the sky, bring the class back together for a discussion, giving them opportunities to share their observations and ask questions. It soon became clear that the children I was working with found it difficult to separate astronomy from astrology. This led to an interesting discussion about the differences between the two areas. An important element of all learning is to frame it within the context of the ‘real world’. We did this by having a quick look at the profiles of famous astronomers such as Nicolaus Copernicus and Galileo Galilei. For more information about space careers and to view these profiles click here.
A few facts worth mentioning:
- Astronomers divide the night sky into 88 constellations.
- The sun, moon and planets travel along the ecliptic path.
- The 13 constellations they pass through are the stars of the zodiac.
Then it was time to get started. I laid out the equipment on each table and provided the children with cards representing each constellation. Then I left them to it!
Although simple to resource, the children found it surprisingly difficult to form each constellation accurately. It was certainly a good test of their engineering skills! They took an interest in the shapes within each constellation. Some children challenged themselves to look at the picture and estimate how many cocktail sticks and marshmallows they thought they would need. This not only added an extra maths element to the activity but also helped to cut down on unnecessary wastage.
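As a small illustration of the estimating step (the star-to-star links below are an assumed sketch of the Big Dipper, not taken from the constellation cards), counting the distinct stars gives the marshmallows needed and counting the links gives the cocktail sticks.

```python
# Estimate materials for a marshmallow constellation from its star-to-star links.
# The link list is an assumed sketch of the Big Dipper (part of Ursa Major).
links = [
    ("Alkaid", "Mizar"), ("Mizar", "Alioth"), ("Alioth", "Megrez"),
    ("Megrez", "Phecda"), ("Phecda", "Merak"), ("Merak", "Dubhe"),
    ("Dubhe", "Megrez"),  # closes the "bowl"
]

stars = {star for link in links for star in link}
print(f"marshmallows needed: {len(stars)}")    # one per distinct star -> 7
print(f"cocktail sticks needed: {len(links)}") # one per link -> 7
```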
Finally we painted a large piece of card black and splattered white paint across it to represent the night sky. Then it was time to add our constellations to it and take photos of our work!
The evidence of water on Mars is not new, as NASA has found previous signs of water on the Red Planet. NASA's rover Spirit had discovered sulfate-rich soil beneath the Martian surface, which suggested an earlier presence of liquid water. Also, NASA's Mars Reconnaissance Orbiter probed and found water ice in areas far away from the Martian polar caps.
But Curiosity's images of rocks containing ancient streambed gravel on the Red Planet are the first of their kind. NPR reports that NASA's next step will be to find a good spot to drill into the rock. They will be looking at possible carbon deposits, hopefully to determine whether the water on Mars once supported life.
Curiosity's images of the gravel bed are being studied by the Jet Propulsion Laboratory/Caltech science team, which is looking at variously shaped stones cemented into a layer of conglomerate rock. "This is the first time we're actually seeing water-transported gravel on Mars. This is a transition from speculation about the size of streambed material to direct observation of it," said Curiosity science co-investigator William Dietrich of the University of California, Berkeley.
The stones' shapes and sizes are offering clues to the ancient stream's flow --- its speed and distance. "From the size of gravels it carried, we can interpret the water was moving about 3 feet per second, with a depth somewhere between ankle and hip deep," said Curiosity science co-investigator William Dietrich of the University of California, Berkeley.
Scientific studies have shown that the rounded shapes of the Martian rocks mean they were transported by a rapid flow of water. The grains are too large to have been moved by the wind. Water and sediment are thought to have flowed down the crater into a geological formation created by material transported by water. This appears to have been where Curiosity landed. "It's hard to say how long ago this water flowed -- an estimate would be 'thousands to millions of years'," Dietrich said.
"A long-flowing stream can be a habitable environment," said Grotzinger. "It is not our top choice as an environment preservation of organics, though. We're still going to Mount Sharp, but this is insurance that we have already found our first potentially habitable environment."
Finding water on Mars is vital, as it not only unlocks past climate history but also helps mankind understand the evolution of the planets. Because Mars has an atmosphere, finding water on our Red Planet is important; without water, there is no hope that Mars will someday sustain life.
Unfortunately, if the presence of water is found, NASA reports that it could be contaminated by Earth microbes from Curiosity's drill bits. This has been a huge controversy at NASA over the past six months; if the contaminated drill bits touched ground where water may have been, the Earth microbes could survive and taint the area.
According to the LA Times, NASA had known about the possible contamination of the drill bit box about six months before the rover launched. The bits had originally been sterilized inside a box, to be opened after Curiosity had landed on Mars. The box should never have been opened without knowledge from a NASA contamination scientist ... but it was.
"They shouldn't have done it without telling me. It is not responsible for us not to follow our own rules," reported NASA Planetary Protection Officer Catharine Conley.
On Nov. 1, after learning that the drill bit box had been opened, Conley said she had the mission reclassified to one in which Curiosity could touch the surface of Mars “as long as there is no ice or water.”
Who would have known?
The reason the box was opened in the first place and the drill bits contaminated is that the engineers were concerned a rough Mars landing might damage Curiosity and its drill mechanism. Therefore, the box was opened in violation of the mission's own rules and one bit was mounted in the rover's drill to ensure success. If Curiosity became damaged during its landing, at least one drill bit would be ready; an act done without Conley's knowledge until shortly before the launch. By then, it was too late.
Conley's rules for contamination prevention initially required sterilizing any part of Curiosity that would touch the surface of Mars. This included all the drill bits and the rover's six wheels, to preserve NASA's ability to explore water or ice on the planet --- no matter how remote the chance may be.
"The box containing the bits was unsealed in a near-sterile environment," said David Lavery, program executive solar system exploration at NASA headquarters. "Even so, the breach was enough to alter aspects of the mission and open a rift at NASA between engineers and planetary protection officials." |
The victors of the Great Panathenaia received as prizes beautifully painted pitchers filled with olive oil, the so-called Panathenaic amphoras. On one side of these amphoras, the goddess Athena was depicted between two Doric columns surmounted by a cock. Next to the picture was written that the vase was one of the prizes of the city of Athens. The other side pictured the event won by the athlete. The style of the vases was somewhat archaic: they were always in the black-figured style (black figures against a red background), long after other ceramics had gone over to the red-figured style (red figures against a black background).
The amphora was filled with almost 40 liters of first-class olive oil. This was the actual prize. The olive oil was very valuable, mainly because of the large quantity: the athletes received from 6 to 140 pitchers, the number depending on the event, the age category, and the place (first or second). An inscription from the early fourth century lists the number of pitchers for each victor, showing that the horse races were most appreciated and that adults always received more than boys.
For Athens the pitchers were an important form of publicity. As the athletes received far more oil than they needed for personal use, they sold part of their prize. In this way the amphoras became widely dispersed, and Athens was known everywhere as a prosperous city that could afford to give valuable prizes to athletes. |
Neuron types and their functionality
Image Caption: Neuron types and their functionality. The nerve cell is the hub for all of the activity that occurs in the brain, and the connections between neurons create a living, dynamic framework for everything that we see, hear, taste, smell, touch and experience.
How do neurons work?
Nerve cells talk to each other via a complex system of electrical impulses and chemical signals. They are supported by another type of cell, called glia, which help these signals to transfer smoothly from one nerve to the next. Their work is so important that glia outnumber neurons in the brain and spinal cord.
Types of Neurons
Neurons fall into one of three types. Sensory neurons are responsible for relaying information from the senses (eyes, ears, nose, tongue and skin) to the brain to register sight, sound, smell, taste and touch. Motor neurons link the brain and spinal cord to the various muscles throughout the body, including those in our fingers and toes. Interneurons are intermediaries that bridge sensory or motor neurons to their neighbors.
Neurons, or nerve cells, carry out the functions of the nervous system by conducting nerve impulses. They are highly specialized and amitotic. This means that if a neuron is destroyed, it cannot be replaced because neurons do not go through mitosis. The image below illustrates the structure of a typical neuron.
Each neuron has three basic parts: cell body (soma), one or more dendrites, and a single axon.
In many ways, the cell body is similar to other types of cells. It has a nucleus with at least one nucleolus and contains many of the typical cytoplasmic organelles. It lacks centrioles, however. Because centrioles function in cell division, the fact that neurons lack these organelles is consistent with the amitotic nature of the cell.
Dendrites and axons are cytoplasmic extensions, or processes, that project from the cell body. They are sometimes referred to as fibers. Dendrites are usually, but not always, short and branching, which increases their surface area to receive signals from other neurons. The number of dendrites on a neuron varies. They are called afferent processes because they transmit impulses to the neuron cell body. There is only one axon that projects from each cell body. It is usually elongated and because it carries impulses away from the cell body, it is called an efferent process.
An axon may have infrequent branches called axon collaterals. Axons and axon collaterals terminate in many short branches, or telodendria. The distal ends of the telodendria are slightly enlarged to form synaptic bulbs. Many axons are surrounded by a segmented, white, fatty substance called myelin or the myelin sheath. Myelinated fibers make up the white matter in the CNS, while cell bodies and unmyelinated fibers make up the gray matter. The unmyelinated regions between the myelin segments are called the nodes of Ranvier.
In the peripheral nervous system, the myelin is produced by Schwann cells. The cytoplasm, nucleus, and outer cell membrane of the Schwann cell form a tight covering around the myelin and around the axon itself at the nodes of Ranvier. This covering is the neurilemma, which plays an important role in the regeneration of nerve fibers. In the CNS, oligodendrocytes produce myelin, but there is no neurilemma, which is why fibers within the CNS do not regenerate.
Functionally, neurons are classified as afferent, efferent, or interneurons (association neurons) according to the direction in which they transmit impulses relative to the central nervous system. Afferent, or sensory, neurons carry impulses from peripheral sense receptors to the CNS. They usually have long dendrites and relatively short axons. Efferent, or motor, neurons transmit impulses from the CNS to effector organs such as muscles and glands. Efferent neurons usually have short dendrites and long axons. Interneurons, or association neurons, are located entirely within the CNS in which they form the connecting link between the afferent and efferent neurons. They have short dendrites and may have either a short or long axon.
Neuroglia cells do not conduct nerve impulses, but instead, they support, nourish, and protect the neurons. They are far more numerous than neurons and, unlike neurons, are capable of mitosis.
National Cancer Institute / NIH
Inside the Brain: Neurons & Neural Circuits
Neurons are the basic working unit of the brain and nervous system. These cells are highly specialized for the function of conducting messages.
A neuron has three basic parts:
- Cell body, which includes the nucleus, cytoplasm, and cell organelles. The nucleus contains DNA and information that the cell needs for growth, metabolism, and repair. Cytoplasm is the substance that fills a cell, including all the chemicals and parts needed for the cell to work properly, including small structures called cell organelles.
- Dendrites, which branch off from the cell body and act as a neuron's point of contact for receiving chemical and electrical signals, called impulses, from neighboring neurons.
- Axon, which sends impulses and extends from the cell body to meet and deliver impulses to another nerve cell. Axons can range in length from a fraction of an inch to several feet.
Each neuron is enclosed by a cell membrane, which separates the inside contents of the cell from its surrounding environment and controls what enters and leaves the cell, and responds to signals from the environment; this all helps the cell maintain its balance with the environment.
Synapses are tiny gaps between neurons, where messages move from one neuron to another as chemical or electrical signals.
The brain begins as a small group of cells in the outer layer of a developing embryo. As the cells grow and differentiate, neurons travel from a central "birthplace" to their final destination. Chemical signals from other cells guide neurons in forming various brain structures. Neighboring neurons make connections with each other and with distant nerve cells (via axons) to form brain circuits. These circuits control specific body functions such as sleep and speech.
The brain continues maturing well into a person's early 20s. Knowing how the brain is wired and how the normal brain's structure develops and matures helps scientists understand what goes wrong in mental illnesses.
Scientists have already begun to chart how the brain develops over time in healthy people and are working to compare that with brain development in people with mental disorders. Genes and environmental cues both help to direct this growth.
The National Institute of Mental Health (NIMH) / (NIH)
Last week we started talking about how our grandparents were able to pull off the impossible. They taught our parents and helped us learn respect for people and the things around us.
The times are changing, and now organic gardening, cooking, saving, repairing, reusing, recycling, conserving, and caring for our things and our surroundings are becoming increasingly necessary (and popular), both for our environment and for our finances.
We can’t go back to the old days, but there are some things we can do to help our children appreciate and respect the people and things around them. Here are a few ideas in no specific order:
- Try taking on a few projects with your children instead of paying to have them done for you. Tear apart and fix something together. Get dirty. It’s fun!
- Plant a vegetable garden together. Cook together!
- Consider what some of the things you commonly throw away can be reused for. Brainstorm together. Reuse a few things.
- Make family time a priority even if it’s difficult. Keep time set aside regularly. Eat meals together.
- If you can personally live with it, allow your children the privilege of owning and caring for a pet.
- Be a good listener. When a person is truly heard, it tells the person who is talking that they are important.
- Teach your children to value hard work. If they have a good work ethic they will always be able to take care of themselves.
- Grandparents have so much to share! Make sure to visit your grandparents and parents (their grandparents) often. They are learning from you when you do this. When your children grow up you are going to want them to visit you, right? Encourage outings with grandparents so your children can develop strong relationships and learn from them.
- Consider making birthday and Christmas gifts from things you already have around the house instead of buying them. Make craft time a family time.
- Try catching your dinner and preparing the meal together!
- Hike and walk together and enjoy this beautiful country. It’s hard for our children to respect and appreciate what they haven’t seen or interacted with.
- Explain why caring for other people is important. Visit the elderly together. Encourage them to engage in community service.
- Just as important, encourage them to volunteer in a state or national park and take the ranger classes. Our public lands and natural resources cannot be easily replaced once they are damaged.
- Mirror the importance of taking good care of your things and our public resources by doing it yourself and inviting your children to help.
I love the signs that we see in our National Parks and National Forests. “If You Pack It, Then You Pack It Out. Take Nothing but Pictures, Leave Nothing but Footprints.”
Our parents put in what they got out of everything: the things they owned, the land, and most importantly their relationships. Here’s the take away: Give your children yourself and your time. Do things with them and listen to them. Your children will learn from you – you are their window into understanding what’s important. Waste not, want not.
Put that fishing outing on your calendar and repeat it often. Yes, it is more work if you use live bait but worth it. Have fun and keep your bait wet! |
Researchers announced today that they have discovered a planet outside our solar system — known as an exoplanet — that is not only comparable in size to Earth but also has a mass and density similar to Earth.
The planet, Kepler-78b, orbits a sun-like star called Kepler-78 every 8.5 hours.
Thanks to NASA's Kepler telescope, astronomers have discovered thousands of exoplanets in our galaxy, many that are Earth-sized and circle stars like our sun. Although measuring the size of an exoplanet is relatively easy, determining its mass has proven more challenging. Mass is an important measurement because it's an indication of a planet's density, and therefore, what that planet is made of.
Kepler-78b is exciting because it is now the smallest exoplanet for which both radius and mass are known accurately, according to scientists.
Kepler-78b has a radius that is 20% greater than Earth's and a mass that is roughly 80% greater. By astronomical standards it is "a virtual twin of Earth," writes Drake Deming in a news article for Nature.
Scientists determine the size and orbital period of exoplanets by measuring how much light the planet blocks as it passes in front of its host star. After measuring the brightness of Kepler-78 for four years at 30-minute intervals, researchers found that the star's brightness declined by 0.02% every 8.5 hours as Kepler-78b passed in front of it.
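As a rough sketch of how a transit depth translates into a planet's size, the fractional dip in starlight is approximately the square of the planet-to-star radius ratio. The snippet below is a minimal illustration, not the researchers' actual pipeline, and the stellar radius it assumes for Kepler-78 is a round illustrative value rather than a figure from this article.

```python
import math

# Transit depth: fraction of starlight blocked when the planet crosses the star.
# depth ~= (R_planet / R_star)**2 for a dark planet and a uniform stellar disk.
depth = 0.0002          # the ~0.02% dip reported for Kepler-78b

# Assumed stellar radius for illustration (Kepler-78 is somewhat smaller than the Sun).
R_SUN_KM = 696_000
r_star_km = 0.74 * R_SUN_KM   # assumption for the sketch, not a value from this article

radius_ratio = math.sqrt(depth)          # R_planet / R_star
r_planet_km = radius_ratio * r_star_km

EARTH_RADIUS_KM = 6_371
print(f"planet/star radius ratio: {radius_ratio:.4f}")
print(f"estimated planet radius:  {r_planet_km:,.0f} km "
      f"(~{r_planet_km / EARTH_RADIUS_KM:.2f} Earth radii)")
```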
A planet's mass is traditionally found by measuring the velocity of its host star toward and away from us, which results from the planet's gravitational tug. Light waves from a star moving toward us are shifted toward the blue end of the spectrum (they become compressed), and light waves from a star moving away from us are shifted toward the red end of the spectrum (they are stretched out). The closer a planet is to its host star, the faster the star moves, resulting in a larger color shift. Although most exoplanets produce Doppler shifts that are too small to be measured, Kepler-78b is close enough to its parent star that shifts in wavelength — and therefore both mass and density — could be determined.
Two independent studies found the exoplanet’s density to be 5.3 and 5.57 grams per cubic centimeter. This is similar to Earth's density of 5.5 grams per cubic centimeter, meaning Kepler-78b is probably composed of iron and rock.
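As a back-of-the-envelope check on how the quoted mass and radius imply an Earth-like density, here is a minimal sketch using the roughly 20% larger radius and roughly 80% larger mass given above; it is an illustration, not the analysis performed in the two studies.

```python
EARTH_DENSITY_G_CM3 = 5.51   # mean density of Earth

# Values quoted in the article, expressed relative to Earth.
radius_rel = 1.20   # ~20% larger radius
mass_rel = 1.80     # ~80% larger mass

# For a sphere, density scales as mass / radius**3.
density_rel = mass_rel / radius_rel**3
density_g_cm3 = density_rel * EARTH_DENSITY_G_CM3

print(f"Kepler-78b density: ~{density_g_cm3:.1f} g/cm^3")
# ~5.7 g/cm^3, in the same ballpark as the 5.3-5.57 g/cm^3 reported by the two studies.
```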
However, because Kepler-78b orbits so close to its host star, it would be far too hot to support life. Still, being able to calculate the planet's density is a big step toward finding an Earth twin. |
The measurement of radiocarbon by mass spectrometry is very difficult because its concentration is less than one atom in 1,000,000,000,000.
The accelerator is used to help remove ions that might be confused with radiocarbon before the final detection.
The older a sample is, the less C-14 there is left to detect. Because the half-life of C-14 (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by this process date to around 50,000 years ago, although special preparation methods occasionally permit accurate analysis of older samples.
The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained.
Research has been ongoing since the 1960s to determine what the proportion of C-14 in the atmosphere has been over the past fifty thousand years.
The resulting data, in the form of a calibration curve, is now used to convert a given measurement of radiocarbon in a sample into an estimate of the sample's calendar age.
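As a sketch of how such a calibration curve is applied, the snippet below interpolates a measured radiocarbon age onto the calendar-age axis. The calibration points are hypothetical placeholders invented for illustration, not values from IntCal or any real curve.

```python
# Minimal sketch of a calibration-curve lookup. The (radiocarbon age, calendar age)
# pairs below are hypothetical placeholders, NOT real calibration data.
calibration = [
    (1000, 930),
    (2000, 1950),
    (3000, 3200),
    (4000, 4500),
]

def calibrate(radiocarbon_age):
    """Linearly interpolate a radiocarbon age onto the calendar-age axis."""
    for (rc_lo, cal_lo), (rc_hi, cal_hi) in zip(calibration, calibration[1:]):
        if rc_lo <= radiocarbon_age <= rc_hi:
            frac = (radiocarbon_age - rc_lo) / (rc_hi - rc_lo)
            return cal_lo + frac * (cal_hi - cal_lo)
    raise ValueError("age outside the illustrative calibration table")

print(calibrate(2500))  # -> 2575.0 calendar years in this toy table
```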
During the lifetime of an organism, the amount of c14 in the tissues remains at an equilibrium since the loss (through radioactive decay) is balanced by the gain (through uptake via photosynthesis or consumption of organically fixed carbon).
However, when the organism dies, the amount of c14 declines such that the longer the time since death the lower the levels of c14 in organic tissue.
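The uncalibrated radiocarbon age follows directly from this decline: given the fraction of C-14 remaining relative to the equilibrium level in living tissue, the age is t = (half-life / ln 2) × ln(1 / fraction). A minimal sketch, assuming the 5,730-year half-life quoted above:

```python
import math

HALF_LIFE_YEARS = 5730  # half-life of C-14 used in this document

def radiocarbon_age(fraction_remaining):
    """Years since death, given the ratio of C-14 in the sample to the
    equilibrium level in living tissue (uncalibrated age)."""
    return -(HALF_LIFE_YEARS / math.log(2)) * math.log(fraction_remaining)

print(round(radiocarbon_age(0.5)))    # one half-life  -> ~5730 years
print(round(radiocarbon_age(0.25)))   # two half-lives -> ~11460 years
print(round(radiocarbon_age(0.01)))   # ~38000 years, approaching the practical limit
```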
They are used for a wide variety of dating and tracing applications in the geological and planetary sciences, archaeology, and biomedicine.
The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen.
The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire C-14 by eating the plants. Measuring the amount of C-14 in a sample from a dead plant or animal, such as a piece of wood or a fragment of bone, provides information that can be used to calculate when the animal or plant died.
The ensuing atomic interactions create a steady supply of c14 that rapidly diffuses throughout the atmosphere.
Plants take up c14 along with other carbon isotopes during photosynthesis in the proportions that occur in the atmosphere; animals acquire c14 by eating the plants (or other animals). |
Your child’s aggressive outbursts, clumsiness, inability to dress herself, or constant meltdowns may be due to a condition called Sensory Processing Disorder (SPD). About 5-15% of school-aged children have it, and the rate, it’s believed, is higher for internationally adopted children. It can run in families, and there is also evidence that prenatal stressors can contribute. A lack of proper stimulation in a child’s first years may also impact the brain’s ability to process sensory information.
Sensory integration is a person’s ability to automatically sort and organize the multitude of sensory messages our brain constantly receives. In addition to the “five senses,” our proprioceptor system (the ability to distinguish where the various parts of your body are located in relation to each other) and vestibular system (movement and sense of balance) are involved. For children with SPD, the world may not be a place that makes sense. Their difficulties processing sensory information may even make the world a frightening place. Children with sensory integration disorders often respond by trying to control what’s happening. They would much prefer an environment that’s very predictable and consistent from day to day. They may want the TV so low no one else can hear it, they may spill things at the dinner table or not be accepting of hugs.
Three primary patterns of the disorder, with six total subtypes, have been proposed (Miller et al., 2007). These patterns are sensory modulation disorder, sensory discrimination disorder, and sensory-based motor disorder. This article will consider children with sensory modulation challenges. There are three subtypes: over-responsive, under-responsive and sensory-seeking.
The over-responsive type responds too much, for too long, or to stimuli of weak intensity. Children with the over-responsive type have more difficulty filtering out repeated or irrelevant sensory information. These children are easily overwhelmed by daily sensory experiences and display fight, flight, or freeze defensive responses. This can result in frequent meltdowns, withdrawal from others, or severe aggression after being touched. Something as simple as being jostled while standing in line at school can result in an aggressive outburst. They are overwhelmed in situations where sensation from multiple modalities is present (e.g. a mall or a school cafeteria). The effects of over-responsiveness can be profound, impacting a child and family’s quality of life, interfering with engagement in social interaction, participation in home and school routines, self-regulation, learning and self-esteem.
Another child may be what is considered under-responsive. They respond too little or need extremely strong stimulation to become aware of the stimulus. They tend to “tune-out” or not notice the same sensory experiences as others. This could be related to what some people see as a high tolerance to pain. This could be the kid that breaks a bone and doesn’t seem to feel any pain.
The sensory-seeking subtype craves and seeks out sensory experiences. They twirl on the tire swing long after other children have moved on to different activities. They run and crash into other kids for the sensation. They can listen and learn while active, but are not able to attend to visual clues while on the move. They are soothed by being held tight, accompanied by strong, rhythmic motions.
Numerous studies have shown that interactions among the senses profoundly influence behavior, perception, emotion and cognition (Kisley et al., 2004). It can even lead to difficulties in learning. Advances in brain imaging demonstrate that individuals with SPD use a different part of their brain when processing sensory input. One thing that often happens is that a child with sensory integration problems will put 100% effort into doing something and manage to get it done. However, no one can sustain that level of effort all the time. When a parent or a teacher sees a child accomplish something once in awhile, they think, “Oh, she can do it and maybe she’s just not motivated, or she’s just not trying”. Adults will then often attribute a behavioral reason when a task is hard or left undone. Another scenario is that a kid may focus all their energy at school, where they know their performance is really important and they want to fit in with the group. They wear themselves out getting it right at school then arrive home exhausted and fall apart. Most kids don’t realize what’s bothering them. There may be times when they cannot put into words what is happening to them because they are so overwhelmed or simply do not know the right words to use. The inability to verbally communicate also contributes to emotional outbursts. Some children might be aware that they are agitated or feel out of control. Once a child is properly assessed and diagnosed it is important for the adults in their lives to help them understand what is going on in their bodies. Understanding the behaviors a child uses to try to self-regulate is vital in addressing each child’s unique needs, and to be able to teach them appropriate coping skills.
If you think your child may have a sensory processing disorder, you can have an evaluation done by a trained occupational therapist. The therapist will then prescribe a “sensory diet” for your child to help them develop more typical reactions to sensory stimulation. For further information, please go to www.spdfoundation.net or www.SPDnetwork.org. |
how might the Pythagorean theorem be used in everyday life?
2 Answers
The Pythagorean theorem can be a great help in finding the distance between two specific locations using the distance formula:
`D^2 = (x_A - x_B)^2 + (y_A - y_B)^2`
`x_A, y_A` represent the x and y coordinates of location A
`x_B, y_B` represent the x and y coordinates of location B
D represents the distance between locations A and B
You may also use the Pythagorean theorem to evaluate the diagonal of your laptop screen.
Hence, these are just some of the many examples where the Pythagorean theorem is of great help.
I just used it last summer while helping to build a deck. We wanted to check whether two pieces of wood made a right angle so we made a pencil mark 3 feet from the corner on one board, 4 feet from the corner on the other, and then we measured the distance between the two pencil marks. It was almost exactly 5 feet, so we knew we had a nice right angle!
Actually, if you want to get technical, we used the converse of the Pythagorean Theorem, which tells us that if we have a triangle with side lengths 3, 4, and 5, it must be a right triangle. Close enough though.
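A short sketch tying the two answers together: the distance formula from the first answer and the 3-4-5 carpenter's check from the second. The coordinates are made-up example values.

```python
import math

def distance(a, b):
    """Straight-line distance between points a=(x, y) and b=(x, y),
    i.e. the Pythagorean theorem applied to the coordinate differences."""
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

# Example: two made-up map locations on a grid measured in miles.
print(distance((0, 0), (3, 4)))          # 5.0

# The deck-building check: marks 3 ft and 4 ft from the corner should be
# exactly 5 ft apart if (and only if) the corner is a right angle.
corner, mark_a, mark_b = (0, 0), (3, 0), (0, 4)
print(math.isclose(distance(mark_a, mark_b), 5.0))   # True -> square corner
```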
Created by Khoa Nguyen, Michal Brylinski, Benjamin Maas, Kristy Stensaas, Suniti Karunatillake, Achim Herrmann, and Wolfgang Kramer, this teachable unit aims to implicitly develop scientific modeling skills among students. With developing a conceptual model from a set of observations as the underlying goal, variations in atmospheric oxygen content provide context.
Accordingly, students would attempt to understand chemical, biological, and geological processes that affect oxygen content of Earth’s atmosphere over geologic time scales. We considered the period from the outgassing of Earth’s atmosphere >3 Ga ago to the present because those events had the most significant impact on atmospheric oxygen content. Sources and sinks for atmospheric O2 are emphasized. Students will be able to utilize the developed skills to predict the effect of a hypothetical event on atmospheric O2 levels.
(Text from the Yale Center for Scientific Teaching's Teachable Tidbits).
How the unit is designed to include participants with a variety of experiences, abilities, and characteristics
-Allow students to choose the medium or technology in which they research or find these events (internet, books, podcast, video, etc. to match the learning styles)
-As a diversity aspect, tidbits have been divided into four different aspects to accommodate different learning styles (lecture, group work, think-pair share, clickers)
-Separation of numerical event is math-friendly (no math bias)
-No grading bias
Activities in class/during tidbit:
-Estimate and graph the oxygen level for each event on the graph.
-Follow-up activities relate events to geological timescale in groups.
-Follow-up clicker questions evaluating at Bloom levels 3-5.
-Addressing the misconception about the importance of chemical and biological processes over geological time.
-Students will be able to analyze data and construct a graph.
-Students will be able to understand the importance of evolutionary events on atmospheric oxygen content.
-Given an event, students will be able to use the graph to predict the outcome.
-Students will be able to develop a response to changes in an environmental condition.
-Students will understand the co-evolution of chemical and biological systems on a geological timescale.
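For instructors who want a quick visual anchor for the graphing activity described above, here is a minimal plotting sketch. The oxygen values are rough, illustrative placeholders for a handful of commonly discussed events, not a vetted dataset, and matplotlib is assumed to be available.

```python
import matplotlib.pyplot as plt

# Rough, illustrative estimates of atmospheric O2 (% of atmosphere) at a few
# commonly discussed points in Earth history -- placeholder values for a class
# sketch, not a vetted dataset.
events = [
    ("Early atmosphere (~3.5 Ga)", 3500, 0.0),
    ("Great Oxidation Event (~2.4 Ga)", 2400, 2.0),
    ("Neoproterozoic rise (~0.6 Ga)", 600, 10.0),
    ("Carboniferous peak (~0.3 Ga)", 300, 30.0),
    ("Present", 0, 21.0),
]

ages = [age for _, age, _ in events]
o2 = [level for _, _, level in events]

plt.plot(ages, o2, marker="o")
plt.gca().invert_xaxis()                 # geologic time runs right to left
plt.xlabel("Millions of years before present")
plt.ylabel("Atmospheric O2 (%)")
plt.title("Sketch: atmospheric oxygen over geologic time")
plt.show()
```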
This activity was contributed by Yale University. |
National interest in K-12 engineering education has been growing. Yet, despite the growing prevalence of engineering standards in most states, it is estimated that only a small percentage of K–12 students are exposed to engineering-related coursework in school. Many students and even some teachers are confused about what exactly engineering is.
This Spotlight provides examples of NSF-funded programs that show promise for educating future generations of scientifically literate and engineering-talented adults.
DR K-12 WORK FOCUSED ON ENGINEERING
(sorted by grade level, with associated resources)
Readiness through Integrative Science and Engineering: Refining and Testing a Co-constructed Curriculum Approach with Head Start Partners
PI: Christine McWayne, Tufts University
Building upon prior research on Head Start curriculum, this phase of Readiness through Integrative Science and Engineering (RISE) will expand to include classroom coaches and community experts to enable implementation and assessment of RISE in a larger sample of classrooms. The goal is to improve school readiness for culturally and linguistically diverse, urban-residing children from low-income families, and the focus on science, technology, and engineering will address a gap in early STEM education.
Recent Publication: Family-school partnerships in a context of urgent engagement: Rethinking models, measurement, and meaningfulness
Elementary School Level
CAREER: Community-Based Engineering as a Learning and Teaching Strategy for Pre-service Urban Elementary Teachers
PI: Kristen Wendell, University of Massachusetts, Boston (UMass Boston)
This is a Faculty Early Career Development project aimed at developing, implementing, and assessing a model that introduces novice elementary school teachers (grades 1-6) to community-based engineering design as a strategy for teaching and learning in urban schools. Reflective of the new Framework for K-12 Science Education (NRC, 2012), the model addresses key crosscutting concepts, disciplinary core ideas, and scientific and engineering practices.
Design Technology and Engineering Education for English Learner Students: Project DTEEL
PI: Rebecca Callahan, University of Texas Austin
One significant challenge facing elementary STEM education is the varied preparation of English-language learners. This project addresses this with an innovative use of engineering curriculum to build on the English-language learners' prior experiences. This project supports teachers' learning about strategies for teaching English-language learners and using engineering design tasks as learning opportunities for mathematics, science and communication skills.
PI: Hasan Deniz, University of Nevada, Las Vegas (UNLV)
This project is conducting a study to develop and field-test curricula integrating science, engineering, and language arts at the elementary level which is aligned with the Next Generation Science Standards (NGSS).
PI: Christine Cunningham, Museum of Science
This project is developing evidence about the efficacy of the Engineering is Elementary curriculum under ideal conditions by studying the student and teacher-level effects of implementation. The project seeks to determine the core elements of the curriculum that support successful use. The findings from this study have broad implications for how engineering design curricula can be developed and implemented at the elementary level.
Resources: Poster | Project Website | Video
PIs: Patricia Paugh, University of Massachusetts Boston and Christopher Wright, University of Tennessee Knoxville
This collaborative, exploratory, learning strand project focuses on improving reflective decision-making among elementary school students during the planning and re-design activities of the engineering design process. Five teacher researchers in three elementary schools provide the classroom laboratories for the study. Specified units from Engineering is Elementary, a well-studied curriculum, provide the engineering content.
Integrating Engineering and Literacy
PI: Chris Rogers, Tufts University
This project is developing and testing curriculum materials and a professional development model designed to explore the potential for introducing engineering concepts in grades 3 - 5 through design challenges based on stories in popular children's literature. The research team hypothesizes that professional development for elementary teachers using an interdisciplinary method for combining literature with engineering design challenges will increase the implementation of engineering in 3-5 classrooms and have positive impacts on students.
Resources: Poster | Project Website | Video
Recent Publications: Stable Beginnings in Engineering Design | A Novel Way to Teach Kids About Engineering
Middle School Level
DIMEs: Immersing Teachers and Students in Virtual Engineering Internships
PI: Jacqueline Barber, University of California Berkeley
This project provides curricular and pedagogical support by developing and evaluating teacher-ready curricular Digital Internship Modules for Engineering (DIMEs). DIMES will be designed to support middle school science teachers in providing students with experiences that require students to use engineering design practices and science understanding to solve a real-world problem, thereby promoting a robust understanding of science and engineering, and motivating students to increased interest in science and engineering.
Resources: Poster | Video
PI: Michael Hacker, Hofstra University
This project creates, tests and revises two-six week prototypical modules for middle school technology education classes, using the unifying themes and important social contexts of food and water. The modules employ engineering design as the core pedagogy and integrate content and practices from the standards for college and career readiness.
PI: Vikram Kapila, New York University
Resources: Poster | Project Website
Recent Publications: Towards teleoperation-based interactive learning of robot kinematics using a mobile augmented reality interface on a tablet | Interactive mobile interface with augmented reality for learning digital control concepts
PI: Angela Calabrese Barton, Michigan State University
Identifying with engineering is critical to help students pursue engineering careers. This project responds to this persistent large-scale problem. The I-Engineering framework and tools address both the learning problem (supporting students in learning engineering design) and the identity problem (supporting students in recognizing that they belong in engineering).
Middle, High & Post-Secondary Levels
CAREER: Scaffolding Engineering Design to Develop Integrated STEM Understanding with WISEngineering
PI: Jennifer Chiu, University of Virginia
The development of six curricular projects that integrate mathematics based on the Common Core Mathematics Standards with science concepts from the Next Generation Science Standards combined with an engineering design pedagogy is the focus of this CAREER project.
PIs: Christian Schunn, University of Pittsburgh and Robin Shoop, Carnegie Mellon University
Computational and algorithmic thinking are new basic skills for the 21st century. Unfortunately, few K-12 schools in the United States offer significant courses that address learning these skills. However, many schools do offer robotics courses. These courses can incorporate computational thinking instruction but frequently do not. This research project aims to address this problem by developing a comprehensive set of resources designed to address teacher preparation, course content, and access to resources.
Resources: Project Website
Recent Publication: Case studies of a robot-based game to shape interests and hone proportional reasoning skills
PI: Ethan Danahy, Tufts University
This project designs, constructs, and field-tests a web-based, online collaborative environment for supporting the teaching and learning of inquiry-based high school physics. Based on an interactive digital workbook environment, the team is customizing the platform to include scaffolds and other supports for learning physics, fostering interaction and collaboration within the classroom, and facilitating a design-based approach to scientific experiments.
Resources: Project Website
An Examination of Science and Technology Teachers' Conceptual Learning Through Concept-Based Engineering Professional Development
PI: Rodney Custer, Black Hills State University
This project will determine the viability of an engineering concept-based approach to teacher professional development for secondary school science teachers in life science and in physical science. The project refines the conceptual base for engineering at the secondary level learning to increase the understanding of engineering concepts by the science teachers. The hypothesis is that when teachers and students engage with engineering design activities their understanding of science concepts and inquiry are also enhanced.
Engineering Teacher Pedagogy: Using INSPIRES to Support Integration of Engineering Design in Science and Technology Classrooms
PI: Julia Ross, University of Maryland, Baltimore County (UMBC)
This Engineering Teacher Pedagogy project implements and assesses the promise of an extended professional development model coupled with curriculum enactment to develop teacher pedagogical skills for integrating engineering design into high school biology and technology education classrooms.
SmartCAD: Guiding Engineering Design with Science Simulations (Collaborative Research: Chiu | Magana-de-Leon | Xie)
PIs: Jennifer Chiu, University of Virginia; Alejandra Magana-de-Leon, Purdue University; and Qian Xie, Concord Consortium
This project investigates how real time formative feedback can be automatically composed from the results of computational analysis of student design artifacts and processes with the envisioned SmartCAD software. The project conducts design-based research on SmartCAD, which supports secondary science and engineering with three embedded computational engines capable of simulating the mechanical, thermal, and solar performance of the built environment. |
What’s 100 times stronger than steel, weighs one-sixth as much and can be snapped like a twig by a tiny air bubble? The answer is a carbon nanotube — and a new study by Rice University scientists details exactly how the much-studied nanomaterials snap when subjected to ultrasonic vibrations in a liquid.
“We find that the old saying ‘I will break but not bend’ does not hold at the micro- and nanoscale,” said Rice engineering researcher Matteo Pasquali, the lead scientist on the study, which appears this month in the Proceedings of the National Academy of Sciences.
Carbon nanotubes — hollow tubes of pure carbon about as wide as a strand of DNA — are one of the most-studied materials in nanotechnology. For well over a decade, scientists have used ultrasonic vibrations to separate and prepare nanotubes in the lab. In the new study, Pasquali and colleagues show how this process works — and why it’s a detriment to long nanotubes. That’s important for researchers who want to make and study long nanotubes.
“We found that long and short nanotubes behave very differently when they are sonicated,” said Pasquali, professor of chemical and biomolecular engineering and of chemistry at Rice. “Shorter nanotubes get stretched while longer nanotubes bend. Both mechanisms can lead to breaking.”
Discovered more than 20 years ago, carbon nanotubes are one of the original wonder materials of nanotechnology. They are close cousins of the buckyball, the particle whose 1985 discovery at Rice helped kick off the nanotechnology revolution.
Nanotubes can be used in paintable batteries and sensors, to diagnose and treat disease, and for next-generation power cables in electrical grids. Many of the optical and material properties of nanotubes were discovered at Rice’s Smalley Institute for Nanoscale Science and Technology, and the first large-scale production method for making single-wall nanotubes was discovered at Rice by the institute’s namesake, the late Richard Smalley.
“Processing nanotubes in liquids is industrially important but it’s quite difficult because they tend to clump together,” co-author Micah Green said. “These nanotube clumps won’t dissolve in common solvents, but sonication can break these clumps apart in order to separate, i.e., disperse, the nanotubes.”
Newly grown nanotubes can be a thousand times longer than they are wide, and although sonication is very effective at breaking up the clumps, it also makes the nanotubes shorter. In fact, researchers have developed an equation called a “power law” that describes how dramatic this shortening will be. Scientists input the sonication power and the amount of time the sample will be sonicated, and the power law tells them the average length of the nanotubes that will be produced. The nanotubes get shorter as power and exposure time increase. |
Imagine an overheated Earth where the oceans have become steaming pools of acid and most plants and animals are extinct.
This is no doomsday vision of the future. Our planet went through this exact scenario 250 million years ago during a time that scientists call the Great Dying.
The world’s greatest extinction event wiped out 90 percent of life in the oceans and about 70 percent on land.
Earth did recover, but it took about 5 million years, according to a team of earth scientists, including Ohio State University geologist Matthew Saltzman.
“That’s a relatively long amount of time,” he said. “We see mass extinctions throughout Earth’s history and, in most cases, the recovery took place in about 1 million years or so.”
The researchers say they have unraveled the mystery of this 5-million-year hangover. The answer, they say, is climate change.
The same phenomenon that climatologists today link to the burning of fossil fuels played an integral role in the extended recovery from the Great Dying.
Researchers say the mass extinction was triggered by a series of severe volcanic eruptions in a region called the Siberian Traps. After 1 million years of heavy volcanic activity, an area larger than Europe was covered in a layer of once-molten igneous rock 1 mile to 3 miles thick.
But that alone likely wasn’t enough to kill off nearly everything and delay the eventual recovery. Researchers theorize that magma from the initial eruptions burned through an ancient coal bed. That event, in essence, unleashed hell.
Thomas Algeo, a University of Cincinnati geologist, said huge amounts of carbon dioxide and methane were released, killing off most remaining species. (Those species that survived, including dinosaurs, later grew and diversified.)
Algeo leads the team that includes Saltzman and OSU doctoral student Alexa Sedlacek. Their work is supported by the National Science Foundation.
Scientists say carbon dioxide is the main culprit behind modern climate change, trapping heat in the atmosphere. Add methane to the mix, and you have even more trouble. Methane is 20 times more effective than carbon dioxide at trapping that heat.
After the Great Dying, increases in global temperatures made life nearly impossible for plants and animals on land and heated the oceans to an average 100 degrees Fahrenheit. In the air, carbon dioxide and methane mixed with water and formed acid rain, which turned the world’s oceans acidic.
The acid rain also eroded rock and sent tons of sediment into the oceans, where it clogged gills on remaining aquatic animals and buried plants. The loss of life on land also aided the erosion, Sedlacek said.
“Because of the mass extinction on land, there were less forest ecosystems, and that left more exposed rocks for weathering,” she said.
Sedlacek and Saltzman analyzed limestone rock that formed in the oceans from those eroded sediments and turned up in a gorge in northern Iran. By measuring the amount of carbon in the rock, collaborating Austrian researchers were able to show that Earth’s climate was altered for about 5 million years after the extinction event.
Powdered rock was then sent to Saltzman and Sedlacek, who measured the ratio of two isotopes of strontium to show how much bedrock eroded from the land to the oceans. A change in that ratio indicated a rapid pace of erosion.
The findings on the ocean’s increased temperatures came from a separate study by researchers at China University of Geosciences, Wuhan, which was published recently in the journal Science. Saltzman and Sedlacek presented their erosion findings on Nov. 4 in Charlotte, N.C., at the Geological Society of America’s annual meeting.
Saltzman said the Great Dying offers a window on the effects of climate change. He and Algeo cautioned that the current predictions for climate change are far from the global catastrophe that occurred 250 million years ago.
Algeo estimates the average temperature increase then was two to three times higher than the increase climatologists are forecasting.
Still, Saltzman said reactions to climate change can be severe.
“Life is quite sensitive to the temperature changes,” he said. “It’s pretty clear from the geologic record that the more severe the episode of global warming the more difficult it is for species to re-establish themselves.
“To me, the lesson is if you add CO2 to the atmosphere you increase global temperatures, and this has a negative impact on the diversity of life.” |
Summary of Mathematical Processes (Learning Outcome 9)
One of the ways in which college level mathematics courses differ from high school mathematics courses is in the expectation that students will grow and develop in their mathematical sophistication. In particular, students will develop new ways of acquiring and using mathematical knowledge. These are goals that go above and beyond the learning of specific mathematics topics and are often not explicitly stated in textbooks or by college mathematics professors.
1. Understand understanding
• Recognize the validity of different approaches
• Recognize the equivalence of different answers
• Analyze errors to identify misunderstandings
• Analyze levels of understanding
• Explain multiple ways of understanding the same idea
• Recognize when language use is ambiguous, well-defined, or meaningless
• Recognize examples and non-examples
2. Utilize representations and connections
• Identify situations that can be modeled using mathematics
Area Moment of Inertia
Knowing the area moment of inertia is a critical part of being able to calculate stress on a beam. It is determined from the cross-sectional area of the beam and the centroidal axis for the direction of interest. For basic shapes, there are tables of standard area moment of inertia equations. However, there are certain cases where the area moment of inertia will have to be calculated either through calculus or by combining the tabulated equations.
To calculate the area moment of inertia through calculus, the general form of equation 1, I = ∫ y² dA, would be used, where y is the distance from the axis of interest to each element of area dA.
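As a sketch of what that general form means in practice, the snippet below numerically integrates I = ∫ y² dA over a rectangular cross-section and compares the result with the familiar closed-form b·h³/12 about the centroidal axis; the dimensions are arbitrary example values.

```python
# Numerically evaluate I_x = integral of y^2 dA for a b x h rectangle about its
# centroidal x-axis, and compare with the closed-form result b*h^3/12.
b, h = 0.1, 0.3            # example width and height in meters (arbitrary values)

n = 100_000                # number of thin horizontal strips
dy = h / n
I_numeric = 0.0
for i in range(n):
    y = -h / 2 + (i + 0.5) * dy    # y measured from the centroid (midpoint rule)
    dA = b * dy                    # area of the thin strip
    I_numeric += y**2 * dA

I_exact = b * h**3 / 12
print(f"numeric: {I_numeric:.6e} m^4")
print(f"exact:   {I_exact:.6e} m^4")
```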
Area Moment of Inertia for Multiple Sectioned Beams
As mentioned earlier, in some cases, such as an I-beam, the equations above have to be combined to calculate the area moment of inertia for that shape. Each section's moment of inertia about its own centroid is shifted to the overall centroidal axis with the parallel axis theorem and the results are summed (equations 2 and 3): I_total = Σ(I_i + A_i·d_i²), where d_i is the distance from section i's centroid to the overall centroid. Refer to the example below for a better understanding.
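Below is a minimal worked sketch of that procedure for a symmetric I-beam built from three rectangles: each part's own centroidal inertia (b·h³/12) is shifted to the overall centroidal axis with the parallel axis theorem (I + A·d²) and the results are summed. The dimensions are arbitrary example values.

```python
# Composite area moment of inertia of a symmetric I-beam about its horizontal
# centroidal axis, built from three rectangles: top flange, web, bottom flange.
# Each part: (width b, height h, distance d from part centroid to overall centroid).
# Dimensions in meters -- arbitrary example values.
parts = [
    (0.20, 0.02, 0.14),   # top flange: 200 x 20 mm, centroid 140 mm above the axis
    (0.02, 0.26, 0.00),   # web: 20 x 260 mm, centered on the axis
    (0.20, 0.02, 0.14),   # bottom flange (mirror of the top; d taken as a magnitude)
]

I_total = 0.0
for b, h, d in parts:
    I_own = b * h**3 / 12          # rectangle about its own centroid
    A = b * h
    I_total += I_own + A * d**2    # parallel axis theorem

print(f"I about the centroidal axis: {I_total:.3e} m^4")
```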
Finding the Centroid of a Beam
On the figures above you may have noticed the letter C next to a dot. This is the centroid of the part. The centroid is important in determining the area moment of inertia because, as seen in the previous example, the sections are located relative to the centroid. The centroid is the point that balances the area of the cross-section about both the x and y axes. To determine the centroid, equations 4 and 5 would be used, x̄ = Σ(A_i·x_i)/Σ(A_i) and ȳ = Σ(A_i·y_i)/Σ(A_i), or you could take symmetry into consideration if it applies. Refer to the example below.
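For a section where symmetry cannot be relied on, the centroid location comes from the area-weighted average of the parts' centroids, as in this brief sketch for a T-section made of two rectangles; the dimensions are arbitrary example values.

```python
# Centroid height of a T-section built from two rectangles (flange on top of a stem).
# Each part: (area A, centroid height y measured from the bottom of the section).
# Dimensions in meters -- arbitrary example values.
flange_b, flange_h = 0.20, 0.02
stem_b, stem_h = 0.02, 0.18

parts = [
    (flange_b * flange_h, stem_h + flange_h / 2),  # flange sits on top of the stem
    (stem_b * stem_h, stem_h / 2),                 # stem starts at the bottom
]

A_total = sum(A for A, _ in parts)
y_bar = sum(A * y for A, y in parts) / A_total     # y_bar = sum(A*y) / sum(A)

print(f"total area: {A_total:.4f} m^2")
print(f"centroid height from the bottom: {y_bar:.4f} m")
```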
According to Tumin, the nature of Social Stratification becomes clear from its following features:
1. Social Stratification is a Social Phenomenon:
Stratification is social because it does not represent biologically caused inequalities. It is true that biological factors like age, sex and strength can also serve as bases on which statuses or strata are distinguished.
These factors do not determine social superiority or inferiority until they are recognised by society. For example, a man attains the position of chief executive officer in a company not on the basis of his sex or age but by reason of his education, training and skill, which are socially recognised and more important than biological traits.
Tumin observes that the stratification system is:
(a) Governed by social norms and sanctions,
(b) Liable to be influenced or disturbed by different factors, leading to instability, and
(c) Intimately connected with other systems of society such as the political, religious, family, economic, educational and others.
2. It is Very Ancient:
History tells us that social stratification is a very ancient phenomenon. Right from the early stages of their evolution, societies, including small societies and wandering tribes, were stratified into a hierarchy of social classes.
The works of ancient Greek philosophers like Plato and Aristotle contain accounts of social stratification in the Greek city-states. Stratification on the basis of factors such as age, sex, physique and economic position was present in ancient societies. Ancient Indian, Chinese and European thinkers were deeply concerned with economic, social and political inequalities.
3. Social Stratification is in Diverse Forms:
The stratification system has never been uniform in all societies. For example, ancient Roman society was divided into Patricians and Plebeians, Aryan society was divided into four varnas (Brahmins, Kshatriyas, Vaishyas and Shudras), and Greek society was divided into freemen and slaves.
Ancient Chinese society was divided into mandarins, merchants, farmers and soldiers. Thus each ancient society was stratified, but on different bases. This was also true of medieval societies, and it has been true of all modern societies. In all contemporary societies, caste, class and estate have been the three general bases of stratification.
4. Stratification is Universal:
Social stratification is a universal phenomenon. It characterizes every society in every nook and corner of the world. In all societies there are rich and poor, haves and have-nots, educated and uneducated, higher castes and lower castes, nobility and commonality. It is present in all Asian, African and European societies. Stratification is universal, and we can study it everywhere in the world.
5. Stratification is Consequential:
Consequences of social stratification can be described in two parts: Life Chances and Life Style. Life Chances refers to inequality in the rates or incidence of infant mortality, longevity, physical and mental illness, childlessness, marital conflict, separation and divorce.
Life Style refers to the distinctive features of status groups: for example, the life styles of those who share common economic and socio-economic positions, and the attributes common to those who share a similar life style.
All social classes living in the common framework of culture, however, have different styles of life. Living within the framework of Indian Culture various Indian communities and classes have different styles of life. Social Stratification acts as a source of somewhat different life styles and life chances. |
The reality that Indigenous people were enslaved in large numbers was new to me when I first learned about it, and it may be to you, too. This is understandable, since it is a topic that is not really taught at all in secondary schools and even many college level classes. Most people know something about African slavery and the trans-Atlantic slave trade, although even then, specific details are fuzzy. But Indigenous enslavement? That is news to most people. Even if the scholarship on Indigenous enslavement is growing rapidly, academic research on any topic takes time – years, a generation, at times – to filter into textbooks and classrooms in ways that shifts the conversation (please see our growing bibliography for a larger listing of suggested resources; we’ve also listed a few resources as a starting place at the end of this post). That is one of the goals of this project: to bring visibility to a history that has for too long been overlooked.
Scholars now estimate that between 2.5 and 5 million Indigenous people were enslaved in the Americas between 1492 and 1900. (Andrés Reséndez, in his terrific book The Other Slavery, has some excellent educated estimates in the appendices for those who are interested in seeing where those numbers come from region by region.) This is an enormous figure, by any measure. Although the comparison is imperfect, it is helpful to realize that between 10.5 and 12 million enslaved Africans were forcibly brought to the Americas as a whole. So, no, Indigenous enslavement was never as large in scale as African slavery, but it was equally important in terms of the processes and ideologies of settler colonialism.
As recent scholarship has decisively shown, coerced labor and knowledge of Indigenous people were central to the colonization of the Americas. Europeans met, traded with, fought, and enslaved Native people everywhere they stepped foot in the Americas. Indigenous labor and land were made to be essential components of European colonial pursuits in every colony. By the time English merchants dropped anchor in a land they called Virginia in 1607, the Spanish, Portuguese, French, Dutch, and English had been crisscrossing the Atlantic for over a century. They routinely stole Indigenous people off of the coasts in some locales, and—in the case of the Spanish and Portuguese—decimated entire populations and enslaved hundreds of thousands of Native people to perform the hard work of mining silver and gold and working sugar plantations and large farms.
Each European colonizing power in the Americas enslaved Natives or in other ways coerced their labor over time. In Brazil, Indigenous people were enslaved by the tens of thousands and forced to work on sugar plantations, as scholar John Monteiro has shown. In the Spanish Caribbean, Central, and South America, conquistadores and colonial administrators commandeered the labor of Native Americans to work the land (on encomiendas, for example, but also larger plantations) as well as to work in the dangerous gold and silver mines. French colonists used enslaved Natives in households in New France and on plantations in the Caribbean. The same was true for the English and the Dutch, who had colonies in North America and the Caribbean.
Each colonial context had a different set of laws that either licensed or tried to regulate Indigenous slavery over time. In some cases, this was in conjunction with African slavery; in other cases, it was handled separately. The most famous debates about the enslavement of Natives was in Spain, where officials, jurists, and priests debated how the Indigenous populations of the Americas should be treated, and whether enslavement or coerced labor should be permitted. This led to a series of New Laws in 1542, but as Andrés Reséndez and others have shown, these laws were not effective over time.
Indigenous peoples were enslaved through a variety of processes – outright warfare against them; slave raids; incentivizing other Indigenous nations to slave raid for Europeans by offering high prices or desired trade goods; slavery as a punishment for crimes; slavery as a means to pay off debt; bounties in wartime; and, in some notable cases, receiving enslaved Indigenous peoples as a sign of an alliance with other Native groups.
Indigenous slavery did not perfectly map onto African slavery; it had its own origins, rising and falling, and resurfacing over time. In North America, for example, although the outright and active enslavement of Native Americans had mostly ceased on the East Coast, the ongoing westward colonialism of the United States brought a whole new round of Native coerced labor in New Mexico, Utah, and California, as Americans enslaved, bought, and sold Indigenous populations, even in supposedly free states. Although the Thirteenth Amendment ended legal Black slavery in the United States, it was not until a separate act of the U.S. Congress in 1867 that the various forms of Indigenous servitude and slavery were officially outlawed as well.
Even so, some Indigenous scholars and elders have noted that the various forms of coerced labor continued on well into the twentieth century in other forms, including the forced residential schooling system.
One of the most important aspects of this history, and one that we hope to highlight with this project, are the perspectives and experiences of Indigenous nations. For them, enslavement was integral to the process of the invasion of Europeans. Whenever Europeans stole or captured individuals or whole groups from their homes, it had a devastating effect on individuals, families, and communities. When Europeans enslaved and stole Indigenous people, they were chipping away at the sovereignty and political autonomy of those same tribes. Enslavement led to smaller populations over time, and land loss, which made it harder for Native nations to adequately stand their ground in the face of settler colonialism. All of this has directly contributed to the historical trauma that so many Indigenous people and communities experience today. And yet, despite all of this, Indigenous peoples today have survived; they remain culturally vibrant, and are still here, fighting for their sovereignty and rightful place on these continents.
In this way, then, these stories that we are recovering in this project are not just about the past; they are also about the present, now, and about the kinds of futures that are possible when we together start fully acknowledging the past.
For further reading:
Gallay, Alan, ed. Indian Slavery in Colonial America. Lincoln: University of Nebraska Press, 2009.
Monteiro, John M. Blacks of the Land: Indian Slavery, Settler Society, and the Portuguese Colonial Enterprise in South America. Translated by James Woodard and Barbara Weinstein. Cambridge, United Kingdom ; New York, NY: Cambridge University Press, 2018.
Newell, Margaret Ellen. Brethren by Nature: New England Indians, Colonists, and the Origins of American Slavery. Ithaca: Cornell University Press, 2015.
Reséndez, Andrés. The Other Slavery: The Uncovered Story of Indian Enslavement in America. Boston: Houghton Mifflin Harcourt, 2016.
Rushforth, Brett. Bonds of Alliance: Indigenous and Atlantic Slaveries in New France. Chapel Hill: The University of North Carolina Press, 2012. |
In this blog post, Sreeraj K.V., a student of Government Law College, Ernakulam, Kerala, writes about Articles 14, 19 and 21 of the Indian Constitution. The article covers the Constitution and its various principles catering to the needs of the public at large, and also gives a brief idea of why Articles 14, 19 and 21 are considered the Golden Triangle of the Constitution.
We all know that our Constitution is the longest written Constitution of any sovereign country in the world. A nation is governed by its Constitution; it is the supreme law of the country. The Constitution declares India a sovereign, socialist, secular, democratic republic, assuring its citizens justice, equality and liberty, and endeavors to promote fraternity among them. Looking at the fundamental rights enumerated in the Constitution, it becomes clear that the framers designed them to act as a pillar of the national security and integrity of the country. The fundamental rights, embodied in Part III of the Constitution, provide civil rights to all citizens of India, protect them from encroachment by society, and ensure their protection. Seven rights were originally enumerated as fundamental rights:
- Right to equality
- Right to freedom
- Right against exploitation
- Right to freedom of religion
- Cultural and educational rights
- Right to property
- Right to constitutional remedies
Later on, the Right to property was removed from Part III by the 44th Amendment in 1978. These fundamental rights extend to every person living in India irrespective of race, caste, religion, gender or place of birth, and they are enforceable by the courts, subject to specific restrictions. Turning to the topic in detail, Articles 14, 19 and 21 are popularly known as the ‘golden triangle’ of the Indian Constitution.
The Golden Triangle
- Article 14 – Equality before the law: the State shall not deny to any person equality before the law or the equal protection of the laws within the territory of India. (Discrimination on the grounds of religion, race, caste, sex or place of birth is separately prohibited under Article 15.)
- Article 19 – Protection of certain rights regarding freedom of speech and expression. All citizens shall have the right:
- To freedom of speech and expression
- To assemble peacefully and without arms
- To form associations or unions
- To move freely throughout the territory of India
- To reside and settle in any part of the territory of India, and
- To practice any profession or to carry on any occupation, trade or business
- Article 21 – Protection of life and personal liberty: no person shall be deprived of his life or personal liberty except according to procedure established by law.
It is now clear why these provisions of the Constitution are regarded as the ‘golden triangle’. These rights are regarded as the basic principles for the smooth running of the lives of the citizens of our country, and together they protect individuals from encroachment upon their rights by the State and by society. Article 14 provides for equality before the law and equal protection of the laws, which means that no person may be denied equality with other citizens of our country. The provision also gains importance because its enforcement has led to the abolition of certain inhuman customary practices. It further envisages equal protection of the law, meaning that the law should be the same for every person, subject to some necessary exceptions.
Article 19 guarantees certain fundamental freedoms, such as freedom of speech and expression, freedom of movement, and freedom to form associations and unions. This Article has brought about important changes in society by securing rights that allow people to live in harmony with one another. Even though it covers a vast area of operation, it does not give a person the freedom to do anything and everything as per his whims and fancies. The Article itself permits reasonable restrictions on grounds affecting public tranquillity and security, including:
- Security of the State
- Friendly relation with foreign states
- Public order
- Decency and morality
- Contempt of court
- Incitement to an offence
- Sovereignty and integrity of India.
Article 21, on the other hand, provides for the protection of life and personal liberty. This provision is one of the most frequently invoked and most widely interpreted in the field of law enforcement, as it covers the most sensitive area: protecting and securing the life and liberty of a person. It is perhaps also the most frequently violated provision of our Constitution. Courts across the country have interpreted the scope of Article 21 in the life of the common man. Important among these decisions is Maneka Gandhi v. Union of India, in which the court examined not only Article 21 but Articles 14 and 19 as well. The court held that the respondents' action was arbitrary and therefore violated the petitioner's right to equality under Article 14, and that Article 21 was violated because the petitioner was restrained from going abroad. The judgment remains one of the landmark cases on the violation of fundamental rights under Articles 14, 19 and 21.
Article 21 is relevant even at the time of elections, when people have the right to elect the representative of their choice; no one may compel a person to vote against his or her wishes. Even though voting is not a fundamental right but a statutory right, the court, in PUCL v. Union of India, distinguished the "right to vote" from the "freedom of voting as a species of the freedom of expression" under Article 19 of the Constitution. There are various other major judgments on the enforcement of fundamental rights. For example, Kesavananda Bharati v. State of Kerala is considered a landmark on the enforceability of constitutional rights in favour of citizens; the judgment makes it clear that even the Central and State Governments face limits when encroaching upon a person's rights, above all the fundamental rights.
The drafters of the Constitution deserve credit for framing it in a way that balances the rights guaranteed to citizens against the fundamental duties that every citizen of the country must follow. They also looked deeply into the socio-economic scenario of India so that no right or duty would be left out. Apart from the fundamental rights, the Constitution provides other rights and obligations in Part IV, known as the ‘Directive Principles of State Policy’. These provisions recognise that the needs of individuals change over time; such rights are not enforceable as fundamental rights but are to be implemented by the State. One of the merits of our Constitution is that it neither restricts a person from enforcing his fundamental rights nor gives him such unchecked freedom that he may exploit or violate those rights, whether against himself or against society. Perhaps this feature makes it different from the other major Constitutions of the world.
Retrieved on: https://en.wikipedia.org/wiki/Constitution_of_India
Retrieved on: https://en.wikipedia.org/wiki/Fundamental_Rights,_Directive_Principles_and_Fundamental_Duties_of_India
Retrieved on: https://indiankanoon.org/doc/1218090/
Retrieved on: https://indiankanoon.org/doc/1199182/
Retrieved on: http://www.gktoday.in/article-19-of-constitution-of-india-and-freedom-of-speech/
Maneka Gandhi v. Union of India, 1978 AIR 597. Retrieved on: http://lawfarm.in/a-case-analysis-of-the-maneka-gandhi-case/
PUCL v. Union of India, WP (C) 490 of 2002. Retrieved on: https://indconlawphil.wordpress.com/2013/09/28/pucl-v-union-of-india-the-supreme-court-and-negative-voting/
His Holiness Kesavananda Bharati Sripadagalvaru and Ors. v. State of Kerala and Anr., (1973) 4 SCC 225.
Retrieved on: http://lawnn.com/case-study-kesavananda-bharati-vs-state-of-kerala/
Yes, this cephalopod is looking at you funny. It’s a kind of cockeyed squid—an animal that looks like some jokester misassembled a Mr. Potato Head. One of the cockeyed squid’s eyes is big, bulging and yellow. The other is flat and beady. After studying more than 25 years’ worth of undersea video footage, scientists think they know why.
The Monterey Bay Aquarium Research Institute (MBARI) in California has been dropping robotic submarines into the ocean for decades. The footage from those remotely operated vehicles gets annotated and compiled into a database that’s been used for hundreds of research projects, says Katie Thomas, a graduate student at Duke University. She decided to delve into this database to figure out why the cockeyed squid is so symmetrically challenged.
In footage that dated back to 1989, Thomas and her coauthors found videotaped encounters with 152 Histioteuthis heteropsis and 6 Stigmatoteuthis dofleini. These are two of the 18 species of cockeyed squid. (They didn’t try to identify individual animals—due to “the vastness of the habitat,” they write, it’s unlikely that their robots ran into the same squid twice.)
Both species, Thomas saw, swam with their big left eyes angled upward and their tiny right eyes angled slightly down. All the animals kept their arms and heads pointed toward the seafloor and their mantles upright or at a tilt, as in the image on the left above.
In most adult squid, the left eye wasn’t just big and bulging—it was also yellow. Young squid didn’t have this yellow pigmentation, which means the left eye probably turns yellow as a squid grows up. Human eyes carry pigment in the iris. But, Thomas points out, squid eyes have their yellow pigment right in the lens. “These deep-sea squids don’t have corneas like us,” she says, “so the lens sticks directly out of the eye and is the part of the eye you see in the images.”
The lens of a squid’s smaller eye is “perfectly clear,” Thomas says. “This makes it look clear from the side but black when you’re looking straight down it.”
Yellow lenses are common in deep-sea fish that face upward. This may help them see prey swimming overhead. Some ocean animals produce light as a kind of camouflage, so that predators underneath them can’t easily spot their silhouettes against the sunlight. But yellow lenses may help predators like the squid see through this camouflage. The researchers found that a cockeyed squid’s pigment filtered light with a similar wavelength to that of sunlight shining down into the ocean.
They used computer modeling to learn more about how these animals see. “Squid eyes are optically very similar to cameras,” Thomas says. The researchers wanted to know why the two eyes had such a dramatic difference in size, which is like the aperture of a camera. They used a simple simulation to model how different eye sizes and different amounts of light would change what a squid saw. Since all the light in the ocean comes from straight overhead, how much light reaches an animal’s eye depends on the eye’s angle.
“What we found was that an upward-facing eye has high gains for fairly small increases in eye size,” Thomas says. “A downward-facing eye gets relatively little improvement from similar increases in eye size.” In other words, it’s worthwhile for a sea creature to put its resources into growing a bigger eye, if that eye points up toward the sun. But it’s not worthwhile for an eye that points down. “This helps us to understand how this difference in eye size might have evolved,” Thomas says.
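To make that angle-dependence concrete, here is a minimal toy sketch in Python. It is not the optical model used by Thomas and colleagues: it simply assumes placeholder values for the ambient light arriving from above versus below and multiplies each by pupil area, so every number and name in it is an illustrative assumption.

```python
import math

# Toy numbers only: relative ambient light reaching the eye from each viewing
# direction in the deep sea. Downwelling sunlight is far brighter than the
# upwelling background; these values are hypothetical placeholders, not
# measurements from the study.
AMBIENT = {"upward": 1.0, "downward": 1e-4}

def relative_photon_catch(pupil_diameter_mm: float, direction: str) -> float:
    """Photon catch scales with pupil area times ambient light from that direction."""
    pupil_area = math.pi * (pupil_diameter_mm / 2) ** 2
    return pupil_area * AMBIENT[direction]

for direction in ("upward", "downward"):
    small = relative_photon_catch(5.0, direction)   # hypothetical small pupil
    large = relative_photon_catch(10.0, direction)  # hypothetical doubled pupil
    print(f"{direction:8s} eye: doubling the pupil adds {large - small:.4f} units "
          f"of light ({large / small:.0f}x relative gain)")
```

In this toy version both eyes get the same fourfold relative improvement from a doubled pupil, but the absolute amount of extra light gained by the downward-facing eye is negligible, which captures the intuition behind the diminishing returns Thomas describes.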
With its big, yellow, upward-facing eye, a cockeyed squid can scan for prey that light up to camouflage themselves against the sunlight—such as shrimp, lanternfish, and other cephalopods. Its beady, downward-facing eye is good enough to see creatures lighting up against the blackness below.
Thomas says she isn’t sure why the eyes are angled, rather than pointing straight up and straight down. It may have to do with the vertical position the squid like to swim in, as well as that they probably evolved from squid with normal, horizontally oriented eyes. But, she adds, each eye has a broad enough field of view that a cockeyed squid can see fully above and below itself even while swimming upright.
Thomas notes that understanding the cocked eyes of the cockeyed squid wouldn’t have been possible without the decades of scientific monitoring that came before her. “It’s fun to think that while I was still crawling around in diapers, MBARI scientists were out collecting video data that I would someday use for my PhD research,” she says.
Images: top, Kate Thomas; bottom, Thomas et al.
Thomas KN, Robison BH, & Johnsen S (2017). Two eyes for two purposes: in situ evidence for asymmetric vision in the cockeyed squids Histioteuthis heteropsis and Stigmatoteuthis dofleini. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1717). PMID: 28193814
Adenovirus Disease: Symptoms, Causes, Risk Factors, Spread, Epidemiology, and Diagnosis – Adenoviruses are a family of viruses that can cause a wide variety of illnesses in humans, ranging from the common cold to gastrointestinal infections and conjunctivitis (red eye).
Scientists have also used the virus as a base ingredient, or vector, in several Covid-19 vaccines, including the Johnson & Johnson vaccine and the AstraZeneca vaccine.
According to a 2019 report in the journal Scientific Reports, there are 88 types of adenoviruses that can infect humans and are grouped into 7 different species, from A to G.
The virus circulates throughout the year rather than seasonally, unlike influenza viruses. Adenoviruses can also infect a variety of vertebrate animals, including mammals, birds, reptiles and fish.
Adenovirus Disease Symptoms
Infection with this virus usually causes only mild symptoms and resolves on its own within a few days. But the virus can cause serious problems in people with weakened immune systems, especially children. Adenoviruses can also cause epidemic keratoconjunctivitis (EKC) and are thought to have been responsible for an outbreak of respiratory disease in 1997. Like influenza viruses, adenoviruses are often found as latent infections in healthy people.
Symptoms of respiratory disease caused by adenovirus infection range from the common cold syndrome to pneumonia, croup, and bronchitis. Patients with weakened immune systems can experience severe complications from adenovirus infection. Acute respiratory disease, first recognized among military recruits during World War II, can be caused by adenovirus infection under conditions of crowding and stress.
Adenovirus Disease Causes
Adenoviruses most commonly cause cold-like illness. They also cause a number of other types of infections.
Adenovirus Disease Risk Factors
Factors, which increase the risk of adenovirus infection:
- Exposure to coughing or sneezing from an infected person;
- Exposure to fecal contamination (e.g., contaminated water, poor hygiene);
- Consumption of food contaminated by flies;
- Person-to-person transmission, especially in close contact with others (for example, in military units);
- Swimming in lakes or ponds.
How Adenovirus Disease Spread
Like colds and flu, adenovirus infections usually spread through respiratory secretions when a person coughs or sneezes.
But these viruses are hardier than flu and cold viruses: they can survive for a long time on surfaces such as doorknobs or towels, and they are resistant to many common disinfectants. This makes them very easy to spread from one person to another.
Adenoviruses can also be spread through fecal contamination, for example, when diapers are changed or in swimming pools.
Adenovirus outbreaks tend to be more common in enclosed settings such as healthcare facilities, schools, and military barracks.
Adenovirus Disease Epidemiology
The source of infection is a sick person, who sheds the virus into the environment throughout the course of the disease, as well as asymptomatic carriers. The virus is shed from the upper respiratory tract and in feces and tears. "Healthy" carriers play a significant role in transmission. Virus shedding can continue for as long as 40-50 days.
Adenoviral conjunctivitis can be acquired as a nosocomial (hospital-acquired) infection. Transmission is airborne and fecal-oral; routes include respiratory droplets, food, and household contact. Intrauterine infection of the fetus is also possible. Susceptibility is high, and children and teenagers are affected most often. Seasonality is not pronounced, although incidence rises in winter, with the exception of pharyngoconjunctival fever, which is diagnosed mainly in summer.
The nature of the epidemic process is largely determined by the adenovirus type. Epidemics caused by adenovirus types 1, 2 and 5 are rare; types 3 and 7 are more common. After the illness, type-specific immunity is formed.
Adenovirus Disease Diagnosis
The doctor makes a diagnosis based on a physical examination. If necessary, the diagnosis may be confirmed with the results of laboratory tests: