Florence Nightingale's Birthday
Themes: Florence Nightingale’s Birthday; International Nurses Day; caring for others; making a difference; significant individuals
Summary: This assembly includes information about Florence Nightingale (1820 - 1910) and reflections about her significance from KS2 pupils. It would be most suitable for use around the time of International Nurses Day - 12th May each year - chosen because it is Florence Nightingale's birthday.
- Florence Nightingale was named after the city of her birth - Florence in Italy.
- During the Crimean War (1853 - 1856) her hospital was across the Black Sea in Scutari - part of modern-day Istanbul.
The video begins with a KS2 pupil describing the work her parents do as nurses, how she feels about this and how it impacts their family life. She is then joined by other pupils who respond to a variety of questions and reveal why Nightingale remains an inspiration to them and to others.
The video ends with a short summary of Nightingale's achievements and the information that Nightingale's birthday - 12th May - has since become International Nurses Day.
Duration: 3' 48"
Last words: '...inspired to be a nurse when I am older.'
During the video the pupils consider the following questions:
- What do nurses do?
- Have you ever been treated by a nurse?
- Why is Florence Nightingale still famous today?
- Where did Florence Nightingale go?
- Why is Florence Nightingale still an inspiration today?
1. Entry music
In her book Notes on Nursing, Nightingale describes how the aria 'Assisa a piè d'un salice' from Rossini's Otello may be used to soothe the sick.
2. Introduction

Ask pupils to think about a time when they might have met or been treated by a nurse. You could show them a picture of a modern-day nurse in uniform. Remind them of the nurses who come in to give them their flu vaccination every year, as well as nurses in their local GP surgery and hospital. Perhaps take a few examples from pupils of times a nurse has looked after them. Show a picture of Florence Nightingale and ask pupils to name her. Explain that you're now going to watch a short video about Florence Nightingale and her importance.
3. The video
Play the video. The duration is 3' 48" and the final words are: '...inspired to be a nurse when I am older.'
4. After the video
Ask the children to spend a few moments thinking about Florence Nightingale and all the people she helped. You could go back over the key events in her life using the BBC Bitesize video in the link below.
5. Time to talk
Ask: How did Florence Nightingale make a difference? Try to draw out that not only did she make a difference to the soldiers that she treated, but she also made a difference to us because she was the person who invented modern nursing. Without her we might not have the amazing nurses we have today.
6. Opportunity to sing
'Be the change' from Sing Up. Suggestions from BBC collections below.
7. Opportunity to reflect
Ask the children to close their eyes and imagine Florence Nightingale working hard in the hospital in Scutari. She must have found it very difficult but she didn’t give up because she knew how important it was to help the soldiers, and she knew she was making a difference to those soldiers. How can we make a difference in our lives? Who could we help?
8. Opportunity for prayer
Thank you for all those people who care for us when we are sick.
Thank you for people who spend their lives caring for people in hospital.
Help us to find ways to help and care for other people, so that we can make a difference to others like Florence Nightingale did. |
Researchers from Harvard University and MIT have demonstrated that graphene, a surprisingly robust planar sheet of carbon just one-atom thick, can act as an artificial membrane separating two liquid reservoirs.
Their findings were reported this month in Nature.
By drilling a tiny pore just a few nanometers in diameter, called a nanopore, in the graphene membrane, the researchers were able to measure exchange of ions through the pore and demonstrate that a long DNA molecule can be pulled through the graphene nanopore just as a thread is pulled through the eye of a needle.
“By measuring the flow of ions passing through a nanopore drilled in graphene we have demonstrated that the thickness of graphene immersed in liquid is less than 1 nm, or many times thinner than the very thin membrane which separates a single animal or human cell from its surrounding environment,” says lead author Slaven Garaj, a physics research associate at Harvard. “This makes graphene the thinnest membrane able to separate two liquid compartments from each other. The thickness of the membrane was determined by its interaction with water molecules and ions.”
Graphene, the strongest material known, has other advantages. Most importantly, it is electrically conductive. (Update: On Oct. 5, the Russian-born scientists Andre Geim and Konstantin Novoselov received the Nobel Prize in Physics for their work investigating the properties of graphene. “Carbon, the basis of all known life on earth, has surprised us once again,” the Royal Swedish Academy of Sciences said in its announcement.)
“Although the membrane prevents ions and water from flowing through it, the graphene membrane can attract different ions and other chemicals to its two atomically close surfaces. This affects graphene’s electrical conductivity and could be used for chemical sensing,” says co-author Jene Golovchenko, the Rumford Professor of Physics and Gordon McKay Professor of Applied Physics at Harvard, whose pioneering work started the field of artificial nanopores in solid-state membranes. “I believe the atomic thickness of the graphene makes it a novel electrical device that will offer new insights into the physics of surface processes and lead to a wide range of practical applications, including chemical sensing and detection of single molecules.”
In recent years graphene has astonished the scientific community with its many unique properties and potential applications, ranging from electronics and solar energy research to medical applications.
Jing Kong, also a co-author on the paper, and her colleagues at MIT first developed a method for the large-scale growth of graphene films that was used in the work.
The graphene was stretched over a silicon-based frame, and inserted between two separate liquid reservoirs. An electrical voltage applied between the reservoirs pushed the ions towards the graphene membrane. When a nanopore was drilled through the membrane, this voltage channeled the flow of ions through the pore and registered as an electrical current signal.
When the researchers added long DNA chains in the liquid, they were electrically pulled one by one through the graphene nanopore. As the DNA molecule threaded the nanopore, it blocked the flow of ions, resulting in a characteristic electrical signal that reflects the size and conformation of the DNA molecule.
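The article describes this measurement only in prose, so the following is not the authors' analysis code. It is a minimal, hypothetical sketch of the detection principle just described: monitor the ionic current and flag stretches where it drops well below the open-pore baseline, which is how a translocation event announces itself. The thresholds and the toy trace are made-up placeholder values.

```python
import numpy as np

def find_blockades(current_pA, open_pore_pA, drop_fraction=0.2, min_samples=5):
    """Flag contiguous stretches where the ionic current dips well below the
    open-pore baseline, as happens while a molecule occupies the pore.
    drop_fraction and min_samples are arbitrary analysis choices, not values
    taken from the paper."""
    blocked = current_pA < open_pore_pA * (1.0 - drop_fraction)
    events, start = [], None
    for i, is_blocked in enumerate(blocked):
        if is_blocked and start is None:
            start = i                      # a dip begins
        elif not is_blocked and start is not None:
            if i - start >= min_samples:   # keep only dips of meaningful duration
                events.append((start, i))
            start = None
    if start is not None and len(blocked) - start >= min_samples:
        events.append((start, len(blocked)))
    return events

# Toy trace: a flat 10,000 pA baseline with one artificial 30% blockade.
trace = np.full(1000, 10_000.0)
trace[400:450] *= 0.7
print(find_blockades(trace, open_pore_pA=10_000.0))   # -> [(400, 450)]
```

In a real experiment the depth and duration of each dip would then be used to infer the size and conformation of the translocating molecule, as the paragraph above notes.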
Co-author Daniel Branton, the Higgins Professor of Biology Emeritus at Harvard, is one of the researchers who, more than a decade ago, initiated the use of nanopores in artificial membranes to detect and characterize single molecules of DNA.
Together with his colleague David Deamer at the University of California, Branton suggested that nanopores might be used to quickly read the genetic code, much as one reads the data from a ticker-tape machine.
As a DNA chain passes through the nanopore, the nucleobases, which are the letters of the genetic code, can be identified. But a nanopore in graphene is the first nanopore short enough to distinguish between two closely neighboring nucleobases.
Several challenges still remain before a nanopore can do such reading, including controlling the speed with which DNA threads through the nanopore. When achieved, nanopore sequencing could lead to very inexpensive and rapid DNA sequencing.
“We were the first to demonstrate DNA translocation through a truly atomically thin membrane. The unique thickness of the graphene might bring the dream of truly inexpensive sequencing closer to reality. The research to come will be very exciting,” concludes Branton.
Additional co-authors on the Nature paper were W. Hubbard of the Harvard School of Engineering and Applied Sciences and A. Reina of the Department of Materials Science and Engineering, MIT. The research was funded by the National Human Genome Research Institute and the National Institutes of Health. |
The central issue, say the researchers, is that the Arabicas grown in the world’s coffee plantations are from very limited genetic stock and are unlikely to have the flexibility required to cope with climate change and other threats, such as pests and diseases. Recent studies have confirmed the climate sensitivity of Arabica and support the widely reported assumption that climate change will have a damaging impact on commercial coffee production worldwide.
The study, which uses computer modeling, represents the first of its kind for wild Arabica coffee. Surprisingly, modeling the influence of climate change on naturally occurring populations of any coffee species has never been undertaken before.
The researchers used field study and “museum” data (including herbarium specimens) to run bioclimatic models for wild Arabica coffee, in order to deduce the actual (recorded) and predicted geographical distribution for the species. The distribution was then modeled through time until 2080 using three different emission scenarios. Ultimately, the models showed a profoundly negative influence on the number and extent of wild Arabica populations. The worst case scenario, drawn from the analyses, is that wild Arabica will be extinct by 2080.
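The study's own projections were produced with established bioclimatic modeling tools applied to real locality and climate data, none of which are reproduced here. Purely as an illustration of the general idea (derive a climate "envelope" from known wild Arabica localities, then ask whether the climate at each site still falls inside it under a future scenario), here is a toy sketch in which every number is invented:

```python
import numpy as np

def envelope_from_presences(presence_climate):
    """Crude bioclimatic 'envelope': the 5th-95th percentile range of each
    climate variable (columns) across known presence localities (rows)."""
    lo = np.percentile(presence_climate, 5, axis=0)
    hi = np.percentile(presence_climate, 95, axis=0)
    return lo, hi

def suitable(sites_climate, lo, hi):
    """A site counts as suitable if every climate variable lies inside the envelope."""
    return np.all((sites_climate >= lo) & (sites_climate <= hi), axis=1)

# Invented data: columns = annual mean temperature (C), annual precipitation (mm).
presences = np.array([[19.5, 1600], [20.1, 1750], [18.9, 1500], [20.8, 1700]])
lo, hi = envelope_from_presences(presences)

current_sites = np.array([[19.8, 1650], [20.3, 1550], [19.2, 1700]])
future_sites = current_sites + np.array([3.0, -200.0])  # placeholder warming/drying shift

print("fraction suitable now:   ", suitable(current_sites, lo, hi).mean())
print("fraction suitable future:", suitable(future_sites, lo, hi).mean())
```

Real species distribution models are far more sophisticated (they weight variables, use presence/absence statistics and multiple emission scenarios), but the underlying question is the one the paragraph above describes.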
The researchers say the predicted reduction in the number of wild Arabica localities, between 65 percent and 100 percent, can be taken as a general assessment of the species’ survival as a whole, given the scope and coverage of the data and analyses used in the study. They stress, however, that the predictions are regarded as conservative, as the modeling does not factor in the large-scale deforestation that has occurred in the highland forests of Ethiopia and South Sudan.
Other factors, such as pests and diseases, changes in flowering times, and a possible reduction in the number of birds (which disperse the coffee seeds), are not included in the modeling, and these, suggests the study, are likely to have a compounding negative influence.
The outcome of climate change in Ethiopia for cultivated Arabica, the only coffee grown in the country, is also assumed to be profoundly negative, as natural populations, forest coffee (semi-domesticated) and some plantations occur in the same general bioclimatic area as indigenous Arabica. Generally, the results of the study indicate that Arabica is a climate sensitive species, which supports previously recorded data, various reports, and anecdotal information from coffee farmers. The logical conclusion is that Arabica coffee production is, and will continue to be, strongly influenced by accelerated climate change, and that in most cases the outcome will be negative for the coffee industry.
The study notes that in many areas of Ethiopia loss of habitat due to deforestation might pose a more serious threat to the survival of Arabica, although the findings indicate that even if a forest area is well protected, climate change alone could lead to extinction in certain locations.
The researchers hope their work will form the basis for developing strategies for the survival of Arabica in the wild. They identify a number of core sites which might be able to sustain wild populations of Arabica throughout this century, serving as long-term in situ storehouses for coffee genetic resources. The study also identifies populations that require immediate conservation action, including collection and storage at more favorable sites (for example in seed banks and living collections).
“Coffee plays an important role in supporting livelihoods and generating income, and has become part of our modern society and culture. The extinction of Arabica coffee is a startling and worrying prospect… The scale of the predictions is certainly cause for concern,” concluded Aaron Davis, the Head of Coffee Research at the Royal Botanic Gardens (Kew, UK). |
Your guide to equality
Although the word equality itself derives from the Latin aequalis, meaning 'level' or 'even', a related idea can be seen in the Roman equites, or knights – propertied men who, unlike common foot soldiers, had sufficient wealth to ride into battle on a horse. They considered themselves to be equals among their peers but of a distinct social class from the common soldiery. As such, the modern concept of equality never really means a level playing field for all – such ideas are more associated with the concept of egalitarianism. Equality can, therefore, mean different things to different people and have distinct meanings in different contexts around the world. Many people who campaign for greater equality want outcomes to be more equal and for people to have a fair chance in life, rather than insisting that everyone receives an exactly shared-out amount of resources and wealth.
Why are equality rights important?
In the past, many people were treated in an unequal way – something that continues in certain societies. By enshrining some human rights into law, usually in the form of a state constitution, people have gained equality rights which, for instance, meant that their rights to protest, to employment and to worship were protected. A good example is when women won the right to vote in Western countries. A previously disenfranchised group won the equal right to vote and to stand in elections meaning that their right to equality was established in law.
Are equality and fairness the same thing?
Most people feel that their sense of fairness is tied up with ideas surrounding equality. That said, the two concepts are different. People who are paid on an unequal basis, for example, may think that the difference is entirely fair given that they perform differently or work in slightly different ways even though there is no exact parity in their pay. In some cases, of course, fairness and equality coincide with one another, but it is not always so.
What is political equality?
It is usually defined as a group or society in which all members have the same standing, meaning that their ability to speak and to vote is equally respected. It has been bound up with gender and racial equality because women and certain ethnic groups were historically disenfranchised. Political equality is one of the founding principles behind liberal democracy. |
Main Difference – Golgi Bodies vs Mitochondria
Golgi bodies and mitochondria are vital organelles found in eukaryotic cells. Golgi bodies are made up of a series of folded membranes and form part of the endomembrane system of the cell. Mitochondria are bean-shaped organelles surrounded by double membranes; the surface of the inner membrane is increased by membrane folds known as cristae. The main difference between Golgi bodies and mitochondria is that Golgi bodies direct the flow of substances such as proteins to their destinations, whereas mitochondria provide the location for the final stages of aerobic respiration.
Key Areas Covered
1. What are Golgi Bodies
– Definition, Structure, Function
2. What are Mitochondria
– Definition, Structure, Function
3. What are the Similarities Between Golgi Bodies and Mitochondria
– Outline of Common Features
4. What is the Difference Between Golgi Bodies and Mitochondria
– Comparison of Key Differences
Key Terms: Aerobic Respiration, Cristae, Eukaryotic Cells, Golgi Bodies, Membrane-Bound Organelles, MAM, Mitochondria, Porins, Translocase
What are Golgi Bodies
Golgi bodies, or the Golgi apparatus, refer to a complex of vesicles and folded membranes inside eukaryotic cells. The flattened, membrane-enclosed sacs are called cisternae. Golgi bodies provide a site for the synthesis of carbohydrates such as pectin and hemicellulose. Glycosaminoglycans, which are found in the extracellular matrix of animal cells, are also synthesized in the Golgi bodies. Golgi bodies receive proteins from the endoplasmic reticulum; further processing and sorting of proteins take place inside them. These proteins are then transported to their destinations: the plasma membrane, lysosomes, or secretion from the cell. The structure of a Golgi body is shown in figure 1.
The most significant feature of the Golgi body is its distinct polarity. Two faces can be identified: the cis face and the trans face. Proteins enter the Golgi at the cis face and exit at the trans face. The cis face is the convex side of the Golgi body and is oriented towards the nucleus; the concave side is the trans face.
What are Mitochondria
Mitochondria are organelles, found in large numbers in most cells, in which the biochemical processes of aerobic respiration take place. The number of mitochondria present in a particular cell depends on the cell type, tissue, and organism. The citric acid cycle, the second step of cellular respiration, occurs in the mitochondrial matrix. ATP is produced by oxidative phosphorylation, which occurs at the inner membrane of mitochondria. The structure of a mitochondrion is shown in figure 2.
Mitochondria are bean-shaped organelles separated from the cytoplasm by two membranes: the inner and outer mitochondrial membranes. The inner mitochondrial membrane forms folds into the matrix, called cristae, which increase the surface area of the inner membrane. The inner membrane contains more than 151 different protein types, functioning in many ways; it lacks porins, and its translocase is the TIM complex. The intermembrane space is situated between the inner and outer mitochondrial membranes. The outer mitochondrial membrane contains a large number of integral membrane proteins called porins, along with a translocase (the TOM complex). When the N-terminal signal sequence of a large protein is bound by this translocase, the protein can enter the mitochondrion. The association of the mitochondrial outer membrane with the endoplasmic reticulum forms a structure called the MAM (mitochondria-associated ER membrane), which allows the transfer of lipids between mitochondria and the ER and takes part in calcium signalling.
The space enclosed by the inner mitochondrial membrane is called the matrix. Mitochondrial DNA and ribosomes, together with numerous enzymes, are suspended in the matrix. Mitochondrial DNA is a circular molecule of around 16 kb, encoding 37 genes. A mitochondrion may contain 2-10 copies of this DNA.
Similarities Between Golgi Bodies and Mitochondria
- Both Golgi bodies and mitochondria are membrane-bound organelles found in eukaryotic cells.
- Both Golgi bodies and mitochondria play a vital role in the cell.
Difference Between Golgi Bodies and Mitochondria
Golgi Bodies: Golgi bodies refer to a complex of vesicles and folded membranes inside the eukaryotic cells.
Mitochondria: Mitochondria are a type of organelles in which the biochemical processes of respiration and energy production occur.
Golgi Bodies: Animal cells contain a few large Golgi bodies while plant cells contain many small Golgi bodies inside cells.
Mitochondria: The number of mitochondria present in a particular cell depends on the cell type, tissue, and organism. Human liver cells contain 1000-2000 mitochondria.
Golgi Bodies: Golgi bodies are made up of a series of membranous stacks.
Mitochondria: Mitochondria are bean-shaped organelles surrounded by double membranes.
Golgi Bodies: Golgi bodies are composed of cisternae, tubules, vesicles, and Golgian vacuoles.
Mitochondria: The inner membrane of the mitochondria consists of finger-like inward projections called cristae.
Golgi Bodies: A Golgi body is enclosed by a single membrane.
Mitochondria: A mitochondrion is enclosed by double membranes.
Golgi Bodies: Golgi bodies direct the flow of substances such as proteins to their destinations.
Mitochondria: Mitochondria provide the location for the final stages of aerobic respiration.
Golgi Bodies: Golgi bodies are the secretory organelles of the cell.
Mitochondria: Mitochondria are the powerhouse of the cell.
Golgi Bodies: Golgi bodies lack their own DNA.
Mitochondria: Mitochondria consist of a circular DNA molecule inside the organelle.
Golgi Bodies: Golgi bodies lack ribosomes inside the organelle.
Mitochondria: Mitochondria contain ribosomes.
Golgi bodies and mitochondria are vital organelles of eukaryotic cells. The maturation and transport of molecules to their destinations take place inside the Golgi bodies, while mitochondria are the powerhouses of the cell, carrying out the biochemical reactions of aerobic respiration and producing ATP. The main difference between Golgi bodies and mitochondria is the role each plays in the cell.
1. Cooper, Geoffrey M. “The Golgi Apparatus.” The Cell: A Molecular Approach. 2nd edition. Sunderland (MA): Sinauer Associates, 2000. Available via the NCBI Bookshelf, U.S. National Library of Medicine.
2. Cooper, Geoffrey M. “Mitochondria.” The Cell: A Molecular Approach. 2nd edition. Sunderland (MA): Sinauer Associates, 2000. Available via the NCBI Bookshelf, U.S. National Library of Medicine.
1. “0314 Golgi Apparatus a en” by OpenStax – Version 8.25 from the textbook OpenStax Anatomy and Physiology, published May 18, 2016 (CC BY 3.0) via Commons Wikimedia
2. “Mitochondrion structure” by Kelvinsong; modified by Sowlos – own work based on Mitochondrion mini.svg (CC BY-SA 3.0) via Commons Wikimedia |
— By Tanya Petersen
In just over thirty years there may be almost 10 billion people on Earth, 2.5 billion more than now.
This population growth makes the global food security challenge seem quite straightforward: the UN’s Food and Agriculture Organisation estimates that by 2050 food production will need to increase by around 70% in order to feed the world’s people.
But underneath these seemingly simple numbers are layers of complexity, as outlined by Sustainable Development Goal 2 to “End hunger, achieve food security and improved nutrition and promote sustainable agriculture”. This SDG recognizes the inter-linkages between so many of the critical factors that will help us to achieve food security – increased agricultural efficiency, empowering small farmers, ending rural poverty, tackling the impacts of climate change and a critical drive towards overall sustainable agriculture.
Now, a fascinating new collection of research is tackling another of the complex layers in the global food security challenge: understanding the relationships between plants and microbes and how microbes and fungal parasites fool plants’ immune systems, sometimes causing devastating crop failures.
Led by Dr Ralph Panstruga of RWTH Aachen University in Germany and Dr Pietro Daniele Spanu of Imperial College London, more than 200 researchers contributed to 34 research articles investigating Biotrophic Plant-Microbe Interactions with the hope of helping to improve agriculture for the benefit of future generations. This research, says Dr Panstruga, “will help us to develop new and novel strategies to protect crops from microbial invaders, reduce crop losses and contribute to increased agricultural efficiency”.
It’s well known that animals and plants live closely with microbes. All plants are colonised on their surface above and below ground, as well as within their bodies, by bacteria and fungi. Some microbes appear to bring benefits, for example by providing greater access to soil nutrients or elements (such as nitrogen) that are abundant in the air but not readily available to plants: these are mutualistic symbionts. Others are at least potentially harmful and can have a negative impact on the plants’ life to a greater or lesser extent: these are considered pathogens.
When microbes coexist intimately with plants, exchanging nutrients without causing the direct death of the host's cells and tissues, the interactions are called biotrophic, as opposed to necrotrophic ones, in which the microbes kill and feed off the remains. In between these two extremes lie the hemibiotrophic interactions, which are characterized by an initial biotrophic and a later necrotrophic phase. For plants, there is evidence that these associations are extremely ancient and have existed as long as the organisms themselves, and they are thought to have influenced evolution for hundreds of millions of years.
Plants have evolved highly complex molecular detection and defence systems to control microbial colonisation of all these types, and interacting microbes have had to adapt and develop equally complex avoidance and counter-defence measures. Until now, agriculturalists have based their attempts to breed disease-resistant crops on the existence of genetically determined plant immune mechanisms, but the capacity of microbes to evolve rapidly and effectively to overcome these resistances has led to catastrophic epidemics that threaten our core food security.
In one paper, Analysis of Cryptic, Systemic Botrytis Infections in Symptomless Hosts, Dr Michael W. Shaw and his colleagues investigate the Botrytis fungus that causes grey mould. This fungus was generally thought to be necrotrophic – one that eventually kills its plant hosts – but the research questioned this common understanding. It found that some microbes thought to be epitomes of necrotrophic parasitism are in some instances capable of living as symptomless “endophytes” – that is, they do not, in fact, kill their hosts.
The authors hope that such research developments may challenge our view that biotrophs and necrotrophs are fundamentally different pathogens, underpinning the emerging concept of a much more nuanced separation between these different forms of interaction. They also hope that the results of this study will encourage further research into other plant pathogens that cause diseases in agricultural crops, such as the fungal rusts of soybean and cereals. This may uncover new developments in plant disease resistance, helping to drive more sustainable, efficient and affordable farm practices by offering alternative approaches to the pesticides and chemicals currently used in large-scale agriculture.
Another paper, an opinion article by Dr Andrea Genre and Dr Giulia Russo, Does a Common Pathway Transduce Symbiotic Signals in Plant–Microbe Interactions?, explores a long-standing dogma in the molecular analysis of symbiotic interactions: the existence of a common symbiosis pathway. The authors challenge the current view, arguing that past research may have been biased by the use of legumes as model plants for studying plant-symbiont interactions, and suggest that legumes may in fact represent the exception rather than the rule.
Topic editor Dr Panstruga is passionate about how this seminal collection of papers may impact the future of agriculture, “Global food security is a key problem of this and the forthcoming centuries and this research helps to address some of the fundamental questions regarding crop losses caused by microbial phytopathogens. Results of this research will help to develop novel effective strategies to protect crops from microbial invaders and sharing these ideas and the results of my research with my fellow scientists is key to promoting science as a whole”.
Co-editor Dr Pietro Daniele Spanu also believes that without sharing ideas and results there will be little scientific progress: “The main driver of my research is firstly, an insatiable curiosity about how nature works, and in particular how certain organisms “talk” to each other across huge taxonomic boundaries to establish disease or protect themselves from it. But also for me there is little or no point in finding things out if these discoveries are not then communicated to anyone else who cares to listen”.
We all depend directly or indirectly on plants for our nutrition, health and well-being. The ability to grow healthy, productive plants in sustainable and efficient agricultural systems is critical for our future existence on this planet. And essential to the future of humanity is the need to do so in ways that minimize impacts on the broader environment, both in terms of respecting the biosphere and in terms of physical resources such as water and nutrients. Scientific research like that compiled in this Research Topic will help to improve agriculture further in this sense for the benefit of generations to come. |
Imagine that you are on the train. You’re sitting down and trying to read an Alliance Française blog post on your phone but you can’t help but get distracted by those around you. The man across from you is loudly eating a ham sandwich. The woman next to you yawns and suddenly it strikes you how you also need to yawn.
Some theories suggest that you are feeling the urge to yawn because of your mirror neurons. On a mechanical level, mirror neurons are neurons that have been observed to fire both when you do something and when you see someone else do the same thing. They are known to exist in humans and other primates (and maybe more) and while they are thought to originally be a simple survival mechanism, they have developed into a facilitator of culture, language learning and empathy.
Mouse spinal cord neurons. These neurons are not necessarily mirror neurons.
Mirror neurons are a big part of what allows babies to copy facial expressions and later replicate sounds and language and all of the cultural norms that go along with learning. Babies gather endless things to mimic and later cut some behaviors and sounds to most effectively fit into the culture(s) they are being raised in.
In some ways, adults learning a new language are not too different. When you learn a new language you aren’t just learning the words and grammar structures in a vacuum. If you are learning a language fully, you are learning about the cultures related to the language and the tiny mannerisms that make up the meat of interaction and understanding the similarities and differences between that culture and others.
When you first try to replicate the sounds of a foreign language and the mannerisms of a different culture, it is very possible that you will make a few mistakes along the way. While this might make you cringe in the moment, it is actually essential to learning effectively. By making these mistakes you learn what not to do, and you hone your skills on a specific culture, much like the babies described above. Making mistakes and learning from them in language learning develops a higher tolerance of ambiguity, and this in turn helps make you a more empathetic person in general.
Increase your empathy and sign up for a French class at the Alliance today! Just in time for the 2nd four-week session of the season!
We’ve discussed just a few of the major theories of mirror neurons here and there is still a ton to learn about the nervous system and how it relates to language learning and empathy. Here are our sources if you want to read about this topic more in depth: |
Scientists working in teams developed and used standardized methods to assess the health effects of commonly used engineered nanomaterials (ENMs).
Papers & Resources:
Nanotechnology Notable Papers and Advances
What are nanomaterials?
Scientists have not unanimously settled on a precise definition of nanomaterials, but agree that they are partially characterized by their tiny size, measured in nanometers. A nanometer is one millionth of a millimeter - approximately 100,000 times smaller than the diameter of a human hair.
Nano-sized particles exist in nature and can be created from a variety of products, such as carbon or minerals like silver, but nanomaterials by definition must have at least one dimension that is less than approximately 100 nanometers. Most nanoscale materials are too small to be seen with the naked eye and even with conventional lab microscopes.
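A quick back-of-the-envelope check of the scale comparison above; the hair diameter used here is an assumed typical value (real human hairs range from roughly 20 to 180 micrometers):

```python
nanometer_m  = 1e-9      # 1 nm in meters
millimeter_m = 1e-3      # 1 mm in meters
hair_diameter_m = 80e-6  # assumed typical human hair, ~80 micrometers

print(millimeter_m / nanometer_m)     # 1,000,000 -> a nanometer is one millionth of a millimeter
print(hair_diameter_m / nanometer_m)  # 80,000    -> a hair is on the order of 100,000 nm across
print(100e-9 / nanometer_m)           # 100       -> the ~100 nm upper bound for a "nano" dimension
```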
Materials engineered to such a small scale are often referred to as engineered nanomaterials (ENMs), which can take on unique optical, magnetic, electrical, and other properties. These emergent properties have the potential for great impacts in electronics, medicine, and other fields. For example,
- Nanotechnology can be used to design pharmaceuticals that can target specific organs or cells in the body such as cancer cells, and enhance the effectiveness of therapy.
- Nanomaterials can also be added to cement, cloth and other materials to make them stronger and yet lighter.
- Their size makes them extremely useful in electronics, and they can also be used in environmental remediation or clean-up to bind with and neutralize toxins.
However, while engineered nanomaterials provide great benefits, we know very little about the potential effects on human health and the environment. Even well-known materials, such as silver for example, may pose a hazard when engineered to nano size.
Nano-sized particles can enter the human body through inhalation and ingestion and through the skin. Fibrous nanomaterials made of carbon have been shown to induce inflammation in the lungs in ways that are similar to asbestos.
Where are nanomaterials found?
Some nanomaterials can occur naturally, such as blood-borne proteins essential for life and lipids found in the blood and body fat. Scientists, however, are particularly interested in engineered nanomaterials (ENMs), which are designed for use in many commercial materials, devices and structures. Already, thousands of common products, including sunscreens, cosmetics, sporting goods, stain-resistant clothing, tires, and electronics, are manufactured using ENMs. They are also used in medical diagnosis, imaging and drug delivery, and in environmental remediation.
What are some of the main take-home points that NIEHS and NTP want people to know about nanomaterials?
There are three main take-home points:
- There is no single type of nanomaterial. Nanoscale materials can in theory be engineered from minerals and nearly any chemical substance, and they can differ with respect to composition, primary particle size, shape, surface coatings and strength of particle bonds. A few of the many examples include quantum dots, which are nanocrystals made of semiconductor materials; nano-scale silver; dendrimers, which are repetitively branched molecules; and fullerenes, which are carbon molecules in the form of a hollow sphere, ellipsoid or tube.
- The small size makes the material both promising and challenging. To researchers, nanomaterials are often seen as a "two-edged sword." The properties that make nanomaterials potentially beneficial in product development and drug delivery, such as their size, shape, high reactivity and other unique characteristics, are the same properties that cause concern about the nature of their interaction with biological systems and potential effects in the environment. For example, nanotechnology can enable sensors to detect very small amounts of chemical vapors, yet often there are no means to detect levels of nanoparticles in the air—a particular concern in workplaces where nanomaterials are being used.
- Research focused on the potential health effects of manufactured nano-scale materials is being developed, but much is not known yet. NIEHS is committed to developing novel applications within the environmental health sciences, while also investigating the potential risks of these materials to human health.
Why is NIEHS involved in nanotechnology?
NIEHS has two primary interests in the field of nanotechnology: harnessing the power of engineered nanomaterials to improve public health, while at the same time understanding the potential risks associated with exposure to the materials.
What is NIEHS Doing?
Currently, very little is known about nanoscale materials and how they affect human health and the environment. NIEHS is committed to supporting the development of nanotechnologies that can be used to improve products and solve global problems in areas such as energy, water, medicine and environmental remediation, while also investigating the potential risks these materials pose to human health and the environment. NIEHS researchers are committed to prevention through design, a phrase which embodies the effort to avoid any potential hazards in the production, use, or disposal of nanoscale products and devices by anticipating them in advance.
NIEHS has developed an integrated, strategic research program that includes grantee support, use of our in-house research expertise, investment in the development of nano-based applications that benefit the environment and public health, and the world-class toxicity testing capabilities of the National Toxicology Program (NTP), all aimed at understanding the impacts of engineered nanomaterials on human health and supporting the goals of the National Nanotechnology Initiative.
One of the key ways NIEHS is supporting research on the health impacts of engineered nanomaterials is through the NIEHS Centers for Nanotechnology Health Implications Research (NCNHIR) consortium. The NCNHIR is an interdisciplinary program consisting of eight Cooperative Centers and other active grantees. Established in 2010, the consortium brings together researchers working to understand how engineered nanomaterials interact with biological systems and how these effects may impact human health.
NIEHS also established contractual agreements for nanomaterial characterization and an informational database to support this consortium. The overarching goal of these efforts is to gain a fundamental understanding of the interactions of ENMs with biological systems and so better assess the potential health risks associated with ENM exposure. These findings will also guide the safe development and use of nanotechnology.
The consortium grew out of work that began by grantees supported through the Engineered Nanomaterials Grand Opportunity (Nano GO) grant program funded through the American Recovery and Reinvestment Act. The NCNHIR consortium continues to build upon the research protocols and lessons learned through Nano GO.
Additionally, NIEHS has formed partnerships with other federal agencies to support grantees across the country as part of its Environmental Health and Safety program. For example, NIEHS has joined with the Environmental Protection Agency (EPA), National Science Foundation (NSF), National Institute for Occupational Safety and Health (NIOSH) and other NIH institutes and centers (ICs) over the years to support research strategies addressing environmental health and safety aspects of engineered materials.
Visit NIEHS Nano Environmental Health and Safety (Nano EHS) website for additional information on NIEHS' involvement in the field of nanotechnology.
Visit the "Who We Fund" website for a complete list of NIEHS-supported grants. The “Who We Fund: Application of Technology to Disease – Nanotechnology” list identifies NIEHS grantees working on nanotechnology.
NIH Research Portfolio Online Reporting Tools (REPORT) provides access to reports, data, and analysis of NIH research activities.
NIH Funding Opportunities and Notices are online.
Nanotechnology Notable Papers and Advances - A searchable list of 401 Nanotechnology papers supported by NIEHS and the American Recovery and Reinvestment Act of 2009 (ARRA) grants from 2010 – July 12, 2017
NanoHealth and Safety - NIEHS encourages and supports research into the underlying properties of engineered nanomaterials (ENM) to determine their potential biocompatibility or toxicity to human health. Consortiums established by NIEHS are fostering collaboration aimed at building a foundation of understanding on how the unique chemical and physical properties that emerge at the nanoscale may affect interactions between environmental exposures and the body.
Can you provide examples of the type of work NIEHS has funded?
NIEHS researchers have produced hundreds of papers advancing our knowledge of nanomaterials and their potential impact on the environment. A small sampling demonstrates the depth and breadth of the work:
- Researchers were concerned about whether inhaled carbon nanotubes could lead to certain lung diseases, including pleural fibrosis, which results in the hardening and thickening of the tissue that covers the lungs, impairing breathing. Testing this hypothesis, they exposed laboratory mice to varied doses of pollutants and nanoparticles. Mice exposed to certain doses of carbon nanotubes developed subpleural fibrosis just two to six weeks after inhaling them. The work suggests that minimizing inhalation of nanotubes is prudent until further long-term assessments are conducted.[i]
- Low concentrations of carbon nanoparticles had profound effects on cells lining renal tubules—a critical structure in the kidneys. Both barrier cell function and protein expression were affected. The results indicate that carbon nanoparticles impact renal cells at concentrations lower than previously known and suggest caution with regard to increasing levels of carbon nanoparticles entering the food chain.[ii]
- Nanoscale materials are being used in many cosmetics, sunscreens and other consumer products. Possible absorption of the materials through the skin, and its potential consequences, have not been determined. NIEHS-funded scientists applied nanosized particles of cadmium selenide, a known carcinogen, to hairless laboratory mice. They found that when the mice's skin had been abraded to remove upper skin layers before the solution was applied, elevated cadmium was detected in the mice's lymph nodes and liver. When the quantum dots of cadmium selenide were applied to the undisturbed skin of mice, no consistent cadmium elevation was detected in the organs. The study concluded that skin absorption of nanomaterials depends on the quality of the skin barrier and that future risk assessments should consider key barrier aspects of skin and its overall integrity.[iii]
- Nanosized materials show great promise in drug delivery, with the potential of targeting cancerous cells with a drug but avoiding an attack on healthy cells. One NIEHS-funded study demonstrated that the ability of two lines of cancer cells to absorb nanosized, rod-shaped particles differed depending on the aspect ratio of the nanoparticles—meaning the proportions between the particles’ height and width. The finding could help in achieving more efficient drug delivery.[iv]
[i] Ryman-Rasmussen JP, MF Cesta, AR Brody, JK Shipley-Phillips, JI Everitt, EW Tewksbury, OR Moss, BA Wong, DE Dodd, ME Andersen, JC Bonner. Inhaled carbon nanotubes reach the subpleural tissue in mice. Nature Nanotechnology (2009) v. 4 (11): 747-51.
[ii] Blazer-Yost BL, A Banga, A Amos, E Chernoff, X Lai, C Li, S Mitra, FA Witzmann. Effect of carbon nanoparticles on renal epithelial cell structure, barrier function and protein expression. Nanotoxicology (2011) v. 5 (3): 354-71.
[iii] Gopee N, D Roberts, P Webb, C Cozart, P Siitonen, J Latendresse, A Warbritton, W Yu, V Colvin, N Walker, P Howard. Quantitative Determination of Skin Penetration of PEG-Coated CdSe Quantum Dots in Dermabraded but not Intact SKH-1 Hairless Mouse Skin. Toxicological Sciences (2009) v. 111 (1): 37-48.
What is NIEHS doing to advance the development and application of nanomaterials to be used in environmental health research?
Much of NIEHS’s effort is focused on the potential toxicity of engineered materials. However, NIEHS has also developed a nanotechnology application program through a range of funding efforts, including multi-institute bioengineering research opportunities, the NIH Genes, Environment and Health Initiative, and the Small Business Innovation Research (SBIR) program. NIEHS-funded grantees are working to develop nanotechnology-based sensors to detect exposure to toxic pollutants, which will help increase our understanding of the biological consequences of exposure, and to develop strategies to reduce the toxicity of environmental factors. Several investigator-initiated grants are being supported.
Specifically, how is the Superfund Research Program involved in nanotechnology-related issues?
The Superfund Research Program is supporting grantees that are developing new or improved technologies and methods, including the promising field of nanotechnology, to help monitor and remediate, or clean up, contamination around Superfund sites. Nanomaterials offer some distinct advantages for remediation technologies, such as a large surface-area-to-volume ratio and high chemical reactivity. Superfund researchers are also looking at how nanomaterials behave in the environment as they are used for remediation.
For more information specifically related to nanotechnology, visit the SRP Search webpage and enter the search term "nano*". The SRP is a network of university grants designed to seek solutions to the complex health and environmental issues associated with the nation's hazardous waste sites. The research conducted by the SRP is a coordinated effort with the Environmental Protection Agency, the federal entity charged with cleaning up the worst hazardous waste sites in the country.
The SRP also collaborates with other agencies to conduct interactive web-based "Risk e-Learning" seminars that provide information about innovative treatment and site characterization technologies to the hazardous waste remediation community. Visit the Nanotechnology - Applications and Implications for Superfund webpage for a list of some of the seminars related to nanotechnology.
What is the National Toxicology Program (NTP) doing to assess the health risks associated with nanotechnology?
The National Toxicology Program is engaged in a broad-based research program to address the potential human health hazards associated with the manufacture and use of nanomaterials.
Scientists at the three core agencies that comprise the NTP - NIEHS, the National Center for Toxicological Research at the U.S. Food and Drug Administration, and the National Institute for Occupational Safety and Health of the Centers for Disease Control and Prevention - are working to evaluate the toxicological properties of a representative cross-section of several different classes of nanoscale materials, including (1) metal oxides, (2) fluorescent crystalline semiconductors (quantum dots), (3) carbon fullerenes (buckyballs), and (4) carbon nanotubes, through the NTP Nanotechnology and Safety Initiative. Key parameters of greatest concern relative to their potential toxicity are size, shape, surface chemistry and composition. Researchers are using studies in laboratory animals and cells, as well as mathematical models, to evaluate and predict where these materials go in the body and what potential health effects they may cause.
What is NIEHS doing to help protect workers exposed to nanomaterials?
The NIEHS Worker Education and Training Program (WETP) supports workers engaged in activities related to hazardous materials and waste generation, removal, containment, transportation and emergency response. As part of this effort, the National Clearinghouse is the primary national source for hazardous waste worker curricula, technical reports, and weekly news. The Clearinghouse provides a number of safety-related resources in the expanding field of nanotechnology. NIEHS WETP also supported the development of the publication Training Workers on Risks of Nanotechnology, which addresses how workers who create and handle nanomaterials should be trained about the hazards they face – in laboratories, manufacturing facilities, at hazardous waste cleanup sites and during emergency responses.
Cross-Agency Nanotechnology Initiatives
What cross-agency initiatives is NIEHS involved in?
NIEHS is involved in the following cross-agency initiatives:
- The National Nanotechnology Initiative (NNI), a federal, multi-agency program dedicated to expediting world class nanotechnology research and development, to foster the transfer of new technology for commercial and public benefits, to develop and sustain a skilled workforce and to support responsible development of nanotechnology.
- The Nanoscale Science, Engineering and Technology (NSET) subcommittee of NNI has four working groups that coordinate the planning, budget, program implementation and review of the nanotechnology initiative.
- The Nanotechnology Environmental and Health Implications (NEHI) subcommittee is a working group that supports Federal activities focused on the health and safety implications of nanotechnologies.
- NIEHS partnered with two other NIH institutes, the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and the National Cancer Institute (NCI), to develop the NanoRegistry. The registry provides a central repository for published findings related to nanotechnology.
- Developed an interagency agreement with NCI’s Nanotechnology Characterization Laboratory to provide NIEHS grantees with common engineered nanomaterials (ENMs) and to characterize their physical and chemical properties. This allows the grantees to have standardized characterization of the materials they are using, so they can more easily compare results across studies.
- The NIH Nanomedicine Initiative is a cross-institute effort to understand and develop nanoscale technologies that could be applied to treating disease and repairing damaged tissue.
- The NIH Nano Task Force, coordinated by NIEHS, represents the interests of NIH institutes and centers that are working with nanomaterials to understand medical uses and to assess the safety and toxicology associated with these materials.
Are nanomaterials regulated?
NIEHS is not a regulatory agency and, therefore, does not enforce statutes associated with nanomaterials or other hazardous substances. For regulatory questions, or information on what other federal agencies are doing regarding nanotechnology, please visit the appropriate agency. An abbreviated listing is provided below.
- The U.S. Food and Drug Administration (FDA) regulates a wide range of products, including foods, cosmetics, drugs, devices, and veterinary products, some of which may utilize nanotechnology or contain nanomaterials.
- At the U.S. Environmental Protection Agency (EPA) , many nanomaterials are regarded as "chemical substances" under the Toxic Substances Control Act (TSCA).
- The U.S. Consumer Product Safety Commission (CPSC) is an independent federal regulatory agency that was created in 1972 by Congress in the Consumer Product Safety Act. In that law, Congress directed the CPSC to "protect the public against unreasonable risks of injuries and deaths associated with consumer products."
- ONE Nano: NIEHS's Strategic Initiative on the Health and Safety Effects of Engineered Nanomaterials - As part of its role in supporting the National Nanotechnology Initiative, the National Institute of Environmental Health Sciences (NIEHS) has developed an integrated, strategic research program—“ONE Nano”—to increase our fundamental understanding of how ENMs interact with living systems, to develop predictive models for quantifying ENM exposure and assessing ENM health impacts, and to guide the design of second-generation ENMs to minimize adverse health effects.
Stories from the Environmental Factor (NIEHS Newsletter)
- Blocking Mosquitoes with a Graphene Shield (September 2019)
- Nanoparticles Offer Low-cost, Reusable Way to Clean Up Drinking Water (December 2018)
- Indian Scholar Offers Global Perspective on Fiber Nanotoxicology (August 2013)
- Challenges Persist in the Critical Task of Determining Safety of Nanomaterials (October 2012)
- Miller Promotes Prevention by Design at Nano Meeting (September 2012)
- NIH-Funded Nanomaterial Registry Now Available Online (August 2012)
- Holian Discusses Lung Inflammation Caused by Nanoparticles (January 2012)
- Nano Grand Opportunities Researchers Share Findings (January 2012)
- Nanotechnology - Information from the Occupational Safety and Health Administration, part of the US Department of Labor.
- Nanotechnology (NIOSH) - Information from The National Institute for Occupational Safety and Health (NIOSH), part of the CDC.
- Nanotechnology Programs at FDA - Nanotechnology allows scientists to create, explore, and manipulate materials measured in nanometers (billionths of a meter). Such materials can have chemical, physical, and biological properties that differ from those of their larger counterparts.
- National Nanotechnology Initiative - Official website of the United States National Nanotechnology Initiative.
- Research on Nanomaterials - EPA scientists research the most prevalent nanomaterials that may have human and environmental health implications.
- Nanomaterial Registry - The Nanomaterial Registry compiles data from multiple databases into a single resource.
|
Bonneville Flood Walters Bar Area
suggested grade levels: 9-12
view Idaho achievement standards for this lesson
View the map and answer the following questions:
1. Determine the maximum water level height of the Bonneville Flood at high water stand.
2. Locate the pendant bar on Walters Bar. How did the pendant bar, and the resulting Jensen Lake, form?
3. Provide an explanation for the thick deposits of sand on the north side of Walters Bar.
4. Use the following photographs (wb#4wabu.jpg, wb#5wabu.jpg, wb#9wabu.jpg) to determine the height of the boulders that were deposited on Walters Bar by the Bonneville Flood. (Use Sara for scale; her height is 5'5". Note that you will need to convert feet to meters; an example of this conversion is included in the sketch after question 8.) Click here for some useful conversion tables.
5. Use the Hjulström diagram to determine the velocity of water needed to move the boulders that were deposited on Walters Bar. Note: you may have to extrapolate on the Hjulström diagram to determine the velocity (the sketch after question 8 shows one way to do this extrapolation).
6. Conduct a survey of boulder size on Walters Bar to demonstrate that water velocity decreased from the proximal to the distal portion of the bar. (Note: you may want to incorporate the boulder size database.)
7. Measure the width of the canyon at the Can-Ada Counties border. Measure the width of the canyon on the westernmost portion of the map near Walters Ferry. What is the difference between these two measurements? How did the canyon width control flood water flow, and how does this relate to the production of the resultant features? (A simple continuity-based estimate appears in the sketch after question 8.)
8. See figures wb#2wabu and wb#3wabu. Define festoon crossbedding. How is festoon cross stratification formed? Determine the relative thickness of the entire gravel unit and also the thickness of one cross bed. (You can use Sara once again for scale.) Describe the flood mechanism that formed this deposit. Determine the velocity of the flood water that deposited such a thick series of cross stratification.
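The questions above are meant to be answered from the map and photographs, which are not reproduced here, so every number in the sketch below is a placeholder to be replaced with your own measurements. It only illustrates the arithmetic behind questions 4, 5 and 7: converting the 5'5" scale figure to meters, extrapolating a Hjulström-style threshold curve on log-log axes, and making a simple conservation-of-discharge argument about canyon width.

```python
import math

# --- Question 4: convert the scale figure's height and estimate boulder size ---
SARA_HEIGHT_M = (5 * 12 + 5) * 0.0254        # 5'5" = 65 inches = about 1.65 m
boulder_to_sara_ratio = 1.8                  # placeholder: read this off a photograph
boulder_diameter_m = boulder_to_sara_ratio * SARA_HEIGHT_M
print(f"Boulder diameter ~ {boulder_diameter_m:.2f} m")

# --- Question 5: extrapolate an erosion-threshold curve on log-log axes ---
# Two placeholder points "read" from the coarse-grain limb of a Hjulström-type
# diagram, as (grain size in mm, threshold velocity in cm/s). Use your own readings.
d1, v1 = 10.0, 100.0
d2, v2 = 100.0, 300.0
slope = (math.log10(v2) - math.log10(v1)) / (math.log10(d2) - math.log10(d1))

def threshold_velocity_cm_s(d_mm):
    """Straight-line extrapolation in log-log space through (d1, v1) and (d2, v2)."""
    return 10 ** (math.log10(v1) + slope * (math.log10(d_mm) - math.log10(d1)))

v_boulder = threshold_velocity_cm_s(boulder_diameter_m * 1000) / 100   # cm/s -> m/s
print(f"Extrapolated threshold velocity ~ {v_boulder:.1f} m/s")

# --- Question 7: narrower canyon -> faster flow, by conservation of discharge ---
# Q = width * depth * velocity; if discharge and depth stay roughly constant,
# velocity scales inversely with width (a deliberate simplification).
width_canada_border_m = 2000.0               # placeholder map measurement
width_walters_ferry_m = 800.0                # placeholder map measurement
print(f"Velocity increase factor ~ {width_canada_border_m / width_walters_ferry_m:.1f}x")
```

The two Hjulström points are meant to be read off whichever version of the diagram you are using, and the canyon widths are stand-ins for the two map measurements; the continuity argument in the last step ignores changes in flow depth, which in reality also adjusted during the flood.

|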
Molecules, from simple diatomic ones to macromolecules consisting of hundreds of atoms or more, come in many shapes and sizes. The term "molecular geometry" is used to describe the shape of a molecule or polyatomic ion as it would appear to the eye (if we could actually see one). For this discussion, the terms "molecule" and "molecular geometry" pertain to polyatomic ions as well as molecules.
When two or more atoms approach each other closely enough, pairs of valence shell electrons frequently fall under the influence of two, and sometimes more, nuclei. Electrons move to occupy new regions of space (new orbitals—molecular orbitals) that allow them to "see" the nuclear charge of multiple nuclei. When this activity results in a lower overall energy for all involved atoms, the atoms remain attached and a molecule has been formed. In such cases, we refer to the interatomic attractions holding the atoms together as covalent bonds. These molecular orbitals may be classified according to strict mathematical (probabilistic) determinations of atomic behaviors. For this discussion, the two most important classifications of this kind are sigma (σ) and pi (π). Though we may be oversimplifying a highly complex mathematics, it may help one to visualize sigma molecular orbitals as those that build up electron density along the (internuclear) axis connecting bonded nuclei, and pi molecular orbitals as those that build up electron density above and below the internuclear axis.
This discussion will examine two approaches chemists have used to explain bonding and the formation of molecules, the molecular orbital (MO) theory and the valence bond (VB) theory. At their simplest levels, both approaches ignore nonvalence shell electrons, treating them as occupants of molecular orbitals so similar to the original (premolecular formation) atomic orbitals that they are localized around the original nuclei and do not participate in bonding. The two approaches diverge mainly with respect to how they treat the electrons that are extensively influenced by two or more nuclei. Though the approaches differ, they must ultimately converge because they describe the same physical reality: the same nuclei, the same electrons.
Molecular orbital theory. In MO theory, there are three types of molecular orbitals that electrons may occupy.
1. Nonbonding molecular orbitals. Nonbonding molecular orbitals closely resemble atomic orbitals localized around a single nucleus. They are called nonbonding because their occupation by electrons confers no net advantage toward keeping the atoms together.
2. Bonding molecular orbitals. Bonding molecular orbitals correspond to regions where electron density builds up between two, sometimes more, nuclei. When these orbitals are occupied by electrons, the electrons "see" more positive nuclear charge than they would if the atoms had not come together. In addition, with increased electron density in the spaces between the nuclei, nucleus-nucleus repulsions are minimized. Bonding orbitals allow for increased electron-nucleus attraction and decreased nucleus-nucleus repulsion, therefore electrons in such orbitals tend to draw atoms together and bond them to each other.
3. Antibonding molecular orbitals. One antibonding molecular orbital is formed for each bonding molecular orbital that is formed. Antibonding orbitals tend to localize electrons outside the regions between nuclei, resulting in significant nucleus-nucleus repulsion—with little, if any, improvement in electron-nucleus attraction. Electrons in antibonding orbitals work against the formation of bonds, which is why they are called antibonding.
According to MO theory, atoms remain close to one another (forming molecules) when there are more electrons occupying lower-energy sigma and/or pi bonding orbitals than occupying higher-energy antibonding orbitals; such atoms have a lower overall energy than if they had not come together. However, when the number of bonding electrons is matched by the number of antibonding electrons, there is actually a disadvantage to having the atoms stay together, and therefore no molecule forms.
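This bookkeeping is often condensed into a bond order: half the excess of bonding over antibonding electrons, with a bond order of zero corresponding to the "no molecule forms" case above. The snippet below is a small illustration of that definition (the He2 example is added here for contrast and is not discussed in the text).

```python
def bond_order(n_bonding, n_antibonding):
    """MO-theory bond order: (bonding electrons - antibonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2

print(bond_order(2, 0))  # H2: two sigma-bonding electrons -> bond order 1.0
print(bond_order(2, 2))  # He2: bonding and antibonding cancel -> 0.0, no molecule
```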
Valence bond theory. Valence bond (VB) theory assumes that atoms form covalent bonds as they share pairs of electrons via overlapping valence shell orbitals. A single covalent bond forms when two atoms share a pair of electrons via the sigma overlap of two atomic orbitals—a valence orbital from each atom. A double bond forms when two atoms share two pairs of electrons, one pair via a sigma overlap of two atomic orbitals and one via a pi overlap. A triple bond forms by three sets of orbital overlap, one of the sigma type and two of the pi type, accompanied by the sharing of three pairs of electrons via those overlaps. (When a pair of valence shell electrons is localized at only one atom, that is, when the pair is not shared between atoms, it is called a lone or nonbonding pair.)
Let us apply this greatly simplified picture of VB theory to three diatomic molecules: H2, F2, and HF. VB theory says that an H2 molecule forms when a 1s orbital containing an electron that belongs to one atom overlaps a 1s orbital with an electron of opposite spin belonging to the other, creating a sigma molecular orbital containing two electrons. The two nuclei share the pair of electrons and draw together, giving both electrons access to the positive charge of both nuclei. Diatomic fluorine, F2, forms similarly, via the sigma overlap of singly occupied 2p orbitals. The HF molecule results from the sharing of a pair of electrons whereby an electron in a hydrogen 1s orbital experiences sigma overlap with an electron in a fluorine 2p orbital.
This VB approach allows us to return to the focus of our discussion. The geometry of a molecule or polyatomic ion is determined by the positions of individual atoms and their positions relative to one another. It can get very complicated. However, let us start with some simple examples and your imagination will help you to extend this discussion to more complicated ones. What happens when two atoms are bonded together in a diatomic molecule? The only possible geometry is a straight line. Hence, such a molecular geometry (or shape) is called "linear." When we have three bonded atoms (in a triatomic molecule), the three atoms may form either a straight line, creating a linear molecule, or a bent line (similar to the letter V), creating a "bent," "angular," "nonlinear," or "V-shaped" molecule. When four atoms bond together, they may form a straight or a zigzag line, a square or other two-dimensional shape in which all four atoms occupy the same flat plane, or they may take on one of several three-dimensional geometries (such as a pyramid, with one atom sitting atop a base formed by the other three atoms). With so many possibilities, it may come as a surprise that we can "predict" the shape of a molecule (or polyatomic ion) using some basic assumptions about electron-electron repulsions.
We start by recognizing that, ultimately, the shape of a molecule is the equilibrium geometry that gives us the lowest possible energy for the system. Such a geometry comes about as the electrons and nuclei settle into positions that minimize nucleus-nucleus and electron-electron repulsions, and maximize electron-nucleus attractions.
Modern computer programs allow us to perform complex mathematical calculations for multiatomic systems with high predictive accuracy. However, without doing all the mathematics, we may "predict" molecular geometries quite well using VB theory.
Valence shell electron pair repulsion approach. In the valence shell electron pair repulsion (VSEPR) approach to molecular geometry, we begin by seeing the valence shell of a bonded atom as a spherical surface. Repulsions among pairs of valence electrons force the pairs to locate on this surface as far from each other as possible. Based on such considerations, somewhat simplified herein, we determine where all the electron pairs on the spherical surface of the atom "settle down," and identify which of those pairs correspond to bonds. Once we know which pairs of electrons bond (or glue) atoms together, we can more easily picture the shape of the corresponding (simple) molecule.
However, in using VSEPR, we must realize that in a double or triple bond, the sigma and pi orbital overlaps, and the electrons contained therein, are located in the same basic region between the two atoms. Thus, the four electrons of a double bond or the six electrons of a triple bond are not independent of one another, but form coordinated "sets" of four or six electrons that try to get as far away from other sets of electrons as possible. In an atom's valence shell, a lone pair of electrons or, collectively, the two, four, or six electrons of a single, double, or triple bond each form a set of electrons. It is repulsions among sets of valence shell electrons that determine the geometry around an atom.
Consider the two molecules carbon dioxide (CO2) and formaldehyde (H2CO). Their Lewis structures are O=C=O and H2C=O, respectively.
In CO2, the double bonds group the carbon atom's eight valence electrons into two sets. The two sets get as far as possible from each other by residing on opposite sides of the carbon atom, creating a straight line extending from one set of electrons through the carbon nucleus to the other. With oxygen atoms bonded to these sets of electrons, the oxygen–carbon–oxygen axis is a straight line, making the molecular geometry a straight line. Carbon dioxide is a linear molecule.
In H2CO, the carbon atom's eight valence electrons are grouped into three sets, corresponding to the two single bonds and the one double bond. These sets minimize the repulsions among themselves by becoming as distant from one another as possible—each set pointing at a vertex of a triangle surrounding the carbon atom in the center. Attaching the oxygen and hydrogen atoms to their bonding electrons has them forming the triangle with the carbon remaining in the center; all four atoms are in the same plane. Formaldehyde has the geometry of a trigonal (or triangular) planar molecule, "planar" emphasizing that the carbon occupies the same plane as the three peripheral atoms.
|COMMONLY ENCOUNTERED ELECTRON GEOMETRIES (TABLE 1)|
|Number of Sets||Most Common "Set" Geometry|
|2||Linear|
|3||Trigonal (Triangular) Planar|
|4||Tetrahedral|
|5||Trigonal Bipyramidal|
|6||Octahedral|
We may extend this approach to central atoms with four, five, six, or even more sets of valence shell electrons. The most common geometries found in small molecules appear in Table 1.
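The correspondence between the number of electron sets and the electron geometry is essentially a lookup, and a compact sketch of it (standing in for the graphical "Appearance" column of the original table) is shown below.

```python
# Electron-set geometries around a central atom, keyed by number of sets (VSEPR).
ELECTRON_GEOMETRY = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
}

print(ELECTRON_GEOMETRY[2])  # CO2: two double-bond sets -> linear
print(ELECTRON_GEOMETRY[3])  # H2CO: two single bonds + one double bond -> trigonal planar
```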
Until now, this article has focused on all the electrons in a central atom's valence shell, including sets not engaged in bonding. Though all such sets must be included in the conceptualization of the electron-electron repulsions, a molecule's geometry is determined solely by where its atoms are: A molecule's geometry is identified by what people would see if they could see atoms. In the carbon dioxide and formaldehyde examples, the molecules have the same overall geometries as the electron sets, because in both cases all sets are attached to peripheral atoms: Carbon dioxide is a linear molecule and formaldehyde is a trigonal (or triangular) planar one.
On the other hand, a water molecule (H2O)
has four sets of electrons around the O atom (two lone pairs and those making up two sigma bonds) that assume a tetrahedral arrangement, but the molecular geometry as determined by the positions of the three atoms is a bent, or V-shaped, molecule, with a H–O–H angle approaching the tetrahedral angle of 109.5°.
Similarly, a hydronium ion (H3O+)
has four sets of electrons around the central O atom (one lone pair and those making up three sigma bonds) in a tetrahedral arrangement, but the molecular geometry as determined by the four atoms is a trigonal (three-pointed base) pyramidal ion with the O atom "sitting" atop the three H atoms. The hydronium ion also has a H–O–H angle approaching the tetrahedral angle of 109.5°.
Table 2 outlines the most common molecular geometries for different combinations of lone pairs and up to four total sets of electrons that have assumed positions around a central atom, and the hybridizations (see below) required on the central atom.
Hybridization. Finally, what does valence bond theory say about the atomic orbitals demanded by VSEPR? For example, though the regions occupied by sets of electrons having a tetrahedral arrangement around a central atom make angles of 109.5° to one another, valence p-orbitals are at 90° angles.
To reduce the complex task of finding orbitals that "fit" VSEPR, we base their descriptions on mathematical combinations of "standard" atomic orbitals, a process called hybridization; the orbitals thus "formed" are hybrid orbitals. The number of hybrid orbitals is equal to the number of "standard" valence atomic orbitals used in the mathematics. For example, combining two p-orbitals with one s-orbital creates three unique and equivalent sp2 (s-p-two) hybrid orbitals pointing toward the vertices of a triangle surrounding the atom.
|ELECTRON SETS, HYBRIDIZATION AND MOLECULAR GEOMETRIES (TABLE 2)|
|Electron Sets||Electron "Set" Geometry||Number of Lone Pairs||Molecular Geometry||Hybridization|
|2||Linear||0||Linear||sp|
|3||Trigonal (Triangular) Planar||0||Trigonal (Triangular) Planar||sp2|
|3||Trigonal (Triangular) Planar||1||Bent or V-shaped||sp2|
|4||Tetrahedral||0||Tetrahedral||sp3|
|4||Tetrahedral||1||Trigonal Pyramidal||sp3|
|4||Tetrahedral||2||Bent or V-shaped||sp3|
Valence electron sets (lone pairs and electrons in sigma bonds) are "housed," at least in part, in hybrid orbitals. This means that an atom surrounded by three electron sets uses three hybrid orbitals, as in formaldehyde. There, the central carbon atom uses hybrid orbitals in forming the C–H single bonds and the sigma portion of the C=O double bond. The carbon's remaining unhybridized p-orbital overlaps a p-orbital on the oxygen, creating the pi bond that completes the carbon–oxygen double bond. The H–C–O and H–C–H angles are 120°, as is found among sp2-hybridized orbitals in general. The hybridizations required for two, three, and four electron sets are given in Table 2, along with their corresponding electron geometries.
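For the cases covered in this discussion (up to four electron sets), the combination of total sets and lone pairs fixes both the molecular geometry and the hybridization, so Table 2 can be expressed as a small lookup; the sketch below simply restates those standard assignments.

```python
# (total electron sets, lone pairs) -> (molecular geometry, hybridization),
# for up to four sets as in Table 2.
MOLECULAR_GEOMETRY = {
    (2, 0): ("linear", "sp"),
    (3, 0): ("trigonal planar", "sp2"),
    (3, 1): ("bent / V-shaped", "sp2"),
    (4, 0): ("tetrahedral", "sp3"),
    (4, 1): ("trigonal pyramidal", "sp3"),
    (4, 2): ("bent / V-shaped", "sp3"),
}

print(MOLECULAR_GEOMETRY[(4, 2)])  # H2O: two lone pairs -> bent, sp3 oxygen
print(MOLECULAR_GEOMETRY[(4, 1)])  # H3O+: one lone pair -> trigonal pyramidal, sp3
print(MOLECULAR_GEOMETRY[(3, 0)])  # H2CO carbon: no lone pairs -> trigonal planar, sp2
```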
Black racers are a common species of snake that lives in North and Central America. They are also known as “eastern racers,” though they do not only live in eastern North America.
These snakes live throughout the United States, with the exception of large portions of the Midwest. There are several different subspecies of black racer, and not all subspecies are black! Read on to learn about the black racer.
Description of the Black Racer
The appearance of this snake species varies based on the region. All of the subspecies have rather shiny scales, but their color varies from black and gray, to bluish and brown.
The young, juvenile snakes have patterns across their scales, usually in red or brown. Once the young snakes reach 30 in. long they usually lose these patterns and assume a more uniform color. Adults grow up to 5 feet long, and weigh a little over a pound.
Interesting Facts About the Black Racer
Because these snakes are so common, they are relatively well-known creatures. While many people can identify black racers, far fewer know much more about these snakes.
- Coluber constrictor – Despite their species name of “constrictor,” these snakes are not particularly constrictor-like. Other constrictors wrap their muscular bodies around their prey, and squeeze to suffocate them. Black racers do not constrict their prey.
- Nonvenomous – While they do not constrict their prey, they also do not use venom to subdue potential meals. Black racers are not venomous, and while their sharp teeth might hurt, they do not have the dangerous venom of some other species.
- Group Egg Laying – Most racers find a nice secluded space to lay their eggs, like an old log or a pile of rotting leaves. However, some racers lay eggs in large, communal nests. In fact, researchers found a single communal nest on one occasion with over 300 eggs inside!
- Spook Snake – Like many other species of snakes, black racers use a number of different methods while attempting to escape predators. If they cannot swiftly escape into a nearby bush or rock pile, they will coil their bodies and rattle their tails. They do not have rattles, like the venomous rattlesnake does, but if they shake their tail in dry leaves or branches the sound is quite similar and scares away predators.
Habitat of the Black Racer
Black racers live in a variety of different ecosystems within their range. Because they live in so many different areas, they must be flexible to survive. Some of their favorite habitats are grasslands and fields, open woodlands, and the edges of forests.
They also live in swamps, marshes, along the edges of lakes, and more. Because they live in northern portions of the United States, they frequently encounter cold seasons. During these times they retreat to the safety of the dense vegetation and go into a hibernation-like state.
Distribution of the Black Racer
This species of snake has a relatively large range. They live as far north as southern Canada, and as far south as Guatemala. In the United States, they live along both the East Coast and the West Coast, with their only absence in the Midwest. They also spread down from Texas into portions of Mexico and into Guatemala. Different regions commonly have different subspecies of racers.
Diet of the Black Racer
Like most snakes, black racers are carnivorous, which means they eat meat. Their diet depends on their age and location. Younger snakes frequently prey on lizards, small snakes, frogs, insects, and small rodents.
Adults eat all of the above, and also feed on small birds, eggs, squirrels, rats, mice, small rabbits, and more. Instead of squeezing their food like a constrictor, these snakes simply hold their prey down so that they can swallow it whole.
Black Racer and Human Interaction
Because racers live in urban and agricultural areas, they run across the path of humans pretty frequently. However, black racers are not dangerous to humans in the slightest. In fact, they pose no threat to humans in any way, and usually attempt to flee if confronted. Additionally, these snakes hunt pesky animals like mice and rats, making them beneficial to people.
Humans have not domesticated racer snakes in any way.
Does the Black Racer Make a Good Pet
Unlike some other species of snakes, racers do not make good pets. They are not particularly docile animals, and rarely become accustomed to handling. These snakes are accustomed to traveling long distances, so they do not thrive in small habitats.
Black Racer Care
In a zoological setting, zookeepers handle racers infrequently to reduce stress. They keep them in relatively large habitats for their size, because they are active snakes. Feeding them is relatively simple, because they eat small rodents in the wild. Like any other reptile, racers also need a light source and a heat source to thrive.
Behavior of the Black Racer
These snakes usually cruise along at approximately 4 mph, and search for potential prey. They commonly hide in burrows, rotten logs, rock crevices, and other hidden areas. These snakes use their eyesight and sense of smell to search for prey and watch for predators.
When threatened, these snakes flee quickly to cover. If cornered, they coil up, rattle their tails, and strike. Also, they writhe around if grabbed.
Reproduction of the Black Racer
Black racers breed in spring and lay their eggs in summer. A typical batch of eggs, called a “clutch,” can contain anywhere from 3 to 32 eggs. The sun incubates the eggs, and they usually hatch within a month or two.
Young snakes are up to 2 feet long at hatching, and do not receive any maternal care. They reach sexual maturity anywhere between 1 and 3 years of age.
Boreal forests stretch across the high northern latitudes of Europe, Asia, and North America. These forests are major carbon sinks, sequestering carbon in their above-ground biomass and below-ground in the forest soils. The boreal ecosystem also contains vast areas of peatlands. The ability of boreal ecosystems to sequester carbon is very dependent on retaining high levels of moisture within the plants and soils. The moisture deters or retards fires that release carbon back into the atmosphere. But peatlands and forests have very different moisture retention capacities, and the peatlands are more vulnerable to water loss and damage from global warming.
Boreal forests exist where freezing temperatures occur for six to eight months of the year. These forests are defined by trees that reach a minimum height of five meters and develop a canopy cover that is at least ten percent of the forest area. Peatlands, however, are dominated by lakes, bogs, and fens. These wetlands contain thick layers of living and dead moss, thus making them excellent carbon sinks.
However, warming associated with climate change poses a variety of threats to the boreal ecosystems. Warmer climates bring drier air that sucks more moisture from the forests and peatlands. The ability of each local environment (forest or peatland) to retain moisture is dependent on the plants growing there.
Vascular versus nonvascular
Moss, liverwort, and hornwort are all nonvascular plants. These were the first species of plants to creep out of the ocean 470 million years ago during the Ordovician period and carve out an ecological niche on land. It was not until the ensuing Silurian period that vascular, tree-like plants arrived on the scene.
The nonvascular plants provide low creeping vegetation. They are small in size since their growth is limited by poor internal transport of water, gases, and nutrients. The needs of these nonvascular plants are well met by the wet conditions of the peatlands. Vascular plants, however, are needed to develop forested landscapes. The vascular plants are much more efficient at transporting and storing water and nutrients. This efficiency, therefore, allows them to grow taller and live longer.
This difference between vascular and nonvascular plants dramatically affects their response to warmer, drier conditions. Trees use microscopic pores in their leaves to take in carbon dioxide and breathe out water and oxygen. Their vascular system lets them store lots of water, and when dry conditions arrive, they close their leaf pores to reduce water loss. Moss, however, is nonvascular and small in size. Its water storage capacity is minimal, and thus the plant quickly desiccates under dry conditions.
Natural firebreaks disappear
Boreal forests are not exempt from wildfires. But, peatlands interspersed among the woodlands provide a level of protection. Because these wetlands serve as natural firebreaks, they retard the spread of wildfires, thus limiting the damage. However, this advantage may disappear as the climate warms.
As the peatlands dry out, decomposition of their stored carbon will accelerate. The resulting larger volumes of dried organic material in the peatlands will alter them from firebreaks into zones that propagate the spread of wildfires. The natural balance of the boreal ecosystem will change as water loss accelerates, and its ability to act as a carbon sink will be diminished.
Forests as a pathway for terrestrial carbon sequestration (Source: ArcheanWeb) – https://archeanweb.com/2020/02/06/forests-as-a-pathway-for-terrestrial-carbon-sequestration/
Tongass National Forest: The good, the better, and the beautiful (Source: ArcheanWeb) – https://archeanweb.com/2020/02/24/tongass-national-forest-the-good-the-better-and-the-beautiful/
Water loss in northern peatlands threatens to intensify fires, global warming (Source: McMaster University) – https://www.sciencedaily.com/releases/2020/05/200511112557.htm
About Boreal Forests (Source: International Boreal Forest Research Association) – http://ibfra.org/about-boreal-forests/
Feature image: Impressions Of A Boreal Forest (modified) – By Kerbla Edzerdla, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=71412440
Reach for a tall glass of iced tea. Don't drink. Look at the glass instead.
The glass is an amorphous solid, consisting of molecules jumbled in disarray. It's the complete opposite of the ice in your drink. Ice is a crystalline solid made up of water molecules arranged in a repeat pattern.
In the world of science, glasses are solids that have a non-crystalline structure.
Figure: The viscoelasticity of glycerol can be seen by comparing the low-frequency limit of sound speed measured at different pressures by ultrasound (US) techniques (dashed line), the high-frequency limit measured by IXS (solid squares), and the high-frequency extrapolation of US results (solid line).
"The structure of a glass is frozen," said Alessandro Cunsolo, physicist in the Photon Sciences Directorate. "You can also think of glass as a type of fluid in which viscosity is so high that orderly arrangements of molecules are stopped."
Another property of glass is that it undergoes a so-called glass-liquid transition. Cunsolo explained that the transition from liquid to solid in a material that forms glass is exceptional – unlike water freezing as crystalline ice, a glass former remains amorphous as it cools to solid. That physical change of a liquid making a transition to an amorphous solid has puzzled scientists for a long time.
"You would not expect any global rearrangement of the molecules in a glass. But still, you observe a relaxation process," said Cunsolo. "This means that if you perturb the glass equilibrium, for instance, with a sound wave, the latter decays within a finite time. Something mysterious is happening since the decay has seemingly nothing to do with molecular movement, as would be the case with most liquids. Instead, it is connected to a static property: the lack of order in the structure."
Cunsolo uses spectroscopy to study fluids. He has published several papers on glass-forming systems, more recently a publication he co-authored with Bogdan Leu, Ayman Said and Yong Cai in the Journal of Chemical Physics (2011), and a solo paper Cunsolo published in the Journal of Physics: Condensed Matter (2012). That latest paper describes the results of inelastic X-ray scattering (IXS) measurements on a sample of glycerol at various pressures. Glycerol is a glass-forming liquid.
Cunsolo determined the ability of glycerol to support the propagation of high-frequency acoustic waves by comparing IXS spectra measured at different pressures with ultrasound absorption data. He was able to infer the presence of two distinct relaxation processes: a slow one and a fast one. The slow relaxation triggers an increase of the frequency-dependent sound velocity. The fast relaxation induces no visible dispersive effects on sound propagation. According to Cunsolo, these apparently opposite behaviors are likely to coexist in all glass formers near the temperatures at which they melt into liquids.
"This is a novel result because it provides a quantitative understanding of the measured sound dispersion using recently developed theory," said Cunsolo. "Thus, it potentially solves a long-standing debate about the dynamics of glass formers."
Cunsolo joined the Photon Sciences Directorate in 2009 to help develop the Inelastic X-ray Scattering (IXS) beamline at the National Synchrotron Light Source II (NSLS-II). He has done his research using the Advanced Photon Source at Argonne National Laboratory. When NSLS-II comes on line in 2015 for early science experiments, Cunsolo is looking forward to using the IXS beamline's new high-resolution and high-contrast spectrometer to continue his work.
"One of the most controversial issues revolves around the hypothesis that high-frequency excitations remain trapped inside local regions of the sample, instead of propagating throughout it," said Cunsolo. "Improved resolution will provide a more accurate measurement of the spectral shape, which will help settle many open issues on high-frequency excitations in glass formers."
HIV, or human immunodeficiency virus, weakens the body’s immune system by damaging and destroying the crucial cells that guard against diseases and fight off infections. If uncontrolled, HIV will most often progress to AIDS, which is fatal without treatment.
Once a veritable death sentence, today HIV and AIDS, while not exactly curable, can often be successfully treated and made into chronic conditions that don’t lessen a person’s lifespan. Medications can even make it so that an HIV-positive person can completely eliminate their risk of transmitting the virus.
Learn more about HIV and AIDS, including what the virus and disease do to the body, how prevalent the virus is and how advances in medical science have completely changed the picture of HIV and AIDS.
In This Section
- HIV Symptoms
- HIV Transmission
- HIV Prevalence
- At-Risk Groups
- HIV Complications
- HIV & Pregnancy
- HIV Testing
- HIV Treatment
- HIV Prevention
Many people who contract HIV are unaware they have become infected because they often do not feel sick or mistake the brief illness that they’re experiencing for something else. For many people, within a couple of weeks of infection, they begin to notice flu-like symptoms that can last for a few weeks. During this period, newly infected people are highly contagious, as the amount of virus in their bloodstream is quite high. But this is just the first stage of an HIV infection.
In the second stage, infected individuals will likely have no symptoms at all. This stage is often referred to as chronic HIV infection, and without treatment, infected people can remain in this second, dormant stage for a decade or more. Many people who take medication to suppress their viral load can remain in this phase for the remainder of their lives.
In HIV’s third phase, the virus depletes the body’s supply of CD4 cells, often called T-cells, and the immune system is so badly damaged that the infected person is diagnosed with AIDS, acquired immune deficiency syndrome. AIDS patients are susceptible to many other infections, called opportunistic illnesses, and symptoms of AIDS include chills, fever, swollen glands, weight loss and weakness.
HIV cannot be transmitted through casual contact, like holding hands, sharing food or drinks or even kissing. The virus is carried in bodily fluids, including blood, semen, vaginal fluids and breast milk, and for the virus to be transmitted, those fluids must be introduced to the bloodstream or come into contact with certain areas of the body like the mouth, penis, vagina or rectum.
It’s also possible to spread HIV from mother to child during pregnancy or birth or while breastfeeding, though pregnant women with HIV who are on medication regimens can essentially eliminate this risk.
People who work in healthcare settings also could be at risk of exposure through a stick from a contaminated needle or other sharp object, and it’s possible, though less common, for HIV to be transmitted during oral sex.
The number and rate of new HIV infections have consistently fallen in the U.S. over the past decade. According to CDC data, HIV infections occurred at a rate of 13.6 per 100,000 people in 2018, which represents a decline of more than 25% since 2008.
Gonorrhea is among the most common sexually transmitted infections that are tracked nationally by federal health officials, impacting hundreds of thousands of Americans per year. While chlamydia, a disease that often has similar symptoms to gonorrhea, is more common, the clap has seen the number of cases and the population-adjusted rate rise dramatically in recent decades.
In 2018, the Centers for Disease Control and Prevention estimate that nearly 600,000 gonorrhea infections took place in the U.S., compared to 115,000 syphilis cases and 1.8 million cases of chlamydia.
Cases of gonorrhea in the U.S. have climbed every year since 2013, though numbers are down from the historic high recorded in 1978. Since 1994, the population-adjusted gonorrhea rate is up by only about 10%, but it’s climbed more than 80% over just the past 10 years.
New HIV infections and rate per 100,000 by year
Outside of the District of Columbia, which has the highest population-adjusted rate of new HIV infections in the U.S., Georgia has the highest rate among the states at 24.3 new infections per 100,000 people. On the other end of the spectrum, HIV infections are least common in Wyoming, Maine and Idaho.
New HIV infections per 100,000 people by state
|District of Columbia||29.6|
Anybody who is sexually active could potentially be at risk of exposure to HIV, but the virus is more common among a few groups, particularly those who engage in anal sex as well as injection drug users.
Still, sex is the primary risk behavior associated with HIV. Male-to-male sexual contact accounted for about two-thirds of all new infections in 2018, while heterosexual contact accounted for another nearly 25%.
Overall, those between the ages of 25 and 34 have the highest rates of HIV, accounting for more than 1 in 3 new infections in 2018. Across all age groups, men consistently have higher HIV rates than women, with the overall gap between men and women standing at about 40%.
HIV infections by age at diagnosis and sex per 100,000 people
African-Americans have the highest rate of new HIV infections among all ethnic groups, with Hispanics and those who are multiracial in a distant second and third place. Asians and whites have the lowest rates of HIV infections.
New HIV infections per 100,000 people by race or ethnicity
|Native Hawaiians/Pacific Islanders||11.8|
|Native Americans/Alaska Natives||7.8|
For individuals with untreated HIV or AIDS, the health consequences can be dire. In most cases, untreated AIDS will lead to death. HIV-positive people need to closely monitor their viral load and CD4 cell count to ensure they don’t begin developing the opportunistic infections common in AIDS patients. Several of these infections are common in those with AIDS:
- Candidiasis of bronchi, trachea, esophagus or lungs
- Invasive cervical cancer
- Cytomegalovirus diseases
- Herpes simplex virus
- Kaposi’s sarcoma
- Mycobacterium avium complex
- Pneumocystis carinii pneumonia
- Progressive multifocal leukoencephalopathy
- Salmonella septicemia
- Toxoplasmosis of brain
- Wasting syndrome
In addition to opportunistic infections that become a major concern with AIDS, HIV also can raise a person’s risk of contracting other STDs, such as gonorrhea, chlamydia and syphilis, and HIV-positive individuals face enormous societal stigma and have unique mental health challenges.
Discrimination against those with a positive HIV diagnosis or who have AIDS remains rampant, both in the U.S. and around the world, despite decades of research and expanding awareness about the virus and how it is transmitted. HIV-positive people have a homelessness rate that is about three times that of the overall population.
Pregnant women who are HIV-positive can take medicines that prevent transmission of the virus to their babies as well as protecting the woman’s health. Even without treatment, the risk of passing HIV to a baby during pregnancy is about 25%; treatment lowers that to less than 1%. The number of mother-to-baby HIV infections has declined by more than 95% since the early 1990s.
All women who are pregnant or considering pregnancy should be screened for several diseases and STDs, including HIV. The sooner a diagnosis is made, the more precautions can be taken to reduce the risk of transmission to the baby.
There is no way to diagnose HIV on sight, and the only way to know your status is to get tested. Everyone between the ages of 13 and 64 should get at least one screening for HIV in their lifetimes, but an estimated 1 in 7 Americans with HIV are unaware they have the virus.
In addition to getting tested after a known exposure to HIV, the federal government recommends several groups get tested, whether once or regularly:
- Everyone: Almost all people, those between 13 and 64, should be screened at least once in their lives for HIV.
- Pregnant women: All should be screened for HIV early in their pregnancies. For many women, one test will suffice, but those in high-risk groups may need repeated screenings.
- Injection drug users and those who engage in unsafe sex: Everyone who falls into this group should be tested for HIV at least annually, and perhaps more frequently depending on other risk factors.
- Gay and bisexual men: Men who have sex with men should have an HIV test at least annually, and those who have multiple partners may benefit from testing every three to six months.
Testing for HIV is readily available at thousands of locations, and many sites offer this testing free of charge. Many places also have rapid HIV testing that can provide results in just a few minutes, though this varies depending on the testing provider.
The resources available near you will vary, but here are some of the most commonly available sources of HIV screenings:
- Doctor’s office
- Planned Parenthood
- Pharmacy clinic
- Urgent care center
- Community health clinics
- Campus health clinic
- At-home test kits
It is not technically possible to completely cure someone of HIV or AIDS. However, medication has made it possible for HIV-positive people to reduce their viral load to an undetectable level. If a patient reaches this state, it’s no longer possible for them to transmit the virus to others, though their bodies still are infected.
Treatment for HIV consists of medications called antiretrovirals that reduce the amount of the virus present in the body, reducing the ability of the virus to attack the immune system. It’s usually a combination of three or more drugs, though some people are able to take just one pill that combines the necessary medications. Common brand names of these drugs include Ziagen, Emtriva, Epivir and Retrovir.
An estimated 75% of HIV-positive people in states where data is available are undergoing medical treatment for HIV, ranging from 90% in Montana to 58% in South Dakota.
Percentage of HIV-positive individuals receiving treatment for HIV
|District of Columbia||68.5%|
For individuals who have developed AIDS, doctors will first focus on treating any opportunistic infections that may have developed, often through antibiotics and antifungal drugs. People with AIDS still should take antiretroviral drugs to lower their viral load and strengthen their immune systems.
No vaccine for HIV exists, but in addition to smart sexual and other health practices, medications called PrEP (pre-exposure prophylaxis) can reduce the risk of contracting HIV through sex.
These medications, when taken every day, can reduce the risk of getting HIV through sex by up to 99% for people who are HIV-negative, and for people who use injection drugs, daily PrEP can lower their HIV risk by more than 70%. There currently are two federally approved PrEP medications, Truvada and Descovy.
It also may be possible to reduce the risk of contracting HIV by taking antiretrovirals after being exposed. These PEP, post-exposure prophylaxis, drugs are only approved for emergency use within 72 hours of a possible HIV exposure in a person who is HIV-negative.
With PrEP, PEP and antiretrovirals, the risk of passing along HIV has declined considerably, and thousands of people who are HIV-positive have successfully repressed their viral loads. In the states where data is available, an average of about 63% of HIV-positive individuals have achieved viral suppression.
A few decades ago, a diagnosis of HIV was little more than a death sentence. But as infection rates have fallen and medications have made it possible for HIV-positive people to live for decades without developing AIDS, that’s all changing. In future years, it may be possible for HIV to go into remission, and science is advancing seemingly by the day. Still, for people who contract HIV today, getting tested so you can begin treatment is crucial.
- Centers for Disease Control and Prevention, HIV Basics. (2019.) Retrieved from https://www.cdc.gov/hiv/basics/index.html
- Centers for Disease Control and Prevention, 2015 STD Treatment Guidelines, Screening Recommendations and Considerations Referenced in Treatment Guidelines and Original Sources. (2015.) Retrieved from https://www.cdc.gov/std/tg2015/screening-recommendations.htm
- Centers for Disease Control and Prevention, AtlasPlus, HIV, custom tables. Accessed here https://www.cdc.gov/nchhstp/atlas/index.htm
- Doorways Housing.org, Homelessness and HIV. (Undated.) Retrieved from https://www.doorwayshousing.org/about-housing-hiv/homelessness-and-hiv/
- U.S. Department of Health and Human Services, HIV Treatment, FDA-Approved HIV Medicines. (2019.) Retrieved from https://aidsinfo.nih.gov/understanding-hiv-aids/fact-sheets/21/58/fda-approved-hiv-medicines |
Astronomers address impact of the climate crisis on astronomy, and of astronomy on the climate crisis
September 10, 2020
The "pale blue dot" image, Earth as photographed by the Voyager spacecraft in 1990, highlights the unique astronomical perspective on Earth as a comparatively small habitable planet in the hostile environment of space. Earth is visible as a tiny dot within one of the stripes. The stripes are caused by stray sunlight entering the camera.[less]
The "pale blue dot" image, Earth as photographed by the Voyager spacecraft in 1990, highlights the unique astronomical perspective on Earth as a comparatively small habitable planet in the hostile environment of space. Earth is visible as a tiny dot within one of the stripes. The stripes are caused by stray sunlight entering the camera.
Astronomers are no strangers to climate change. Our sister planet Venus is a poignant example of an extremely strong greenhouse effect, with hostile surface temperatures of more than 460 degrees Celsius. And the ongoing search for planets orbiting stars other than the Sun, in combination with the immensity of astronomical distances, gives astronomers a unique perspective that underscores the statement that "there is no planet B".
But in a much more immediate sense, astronomers themselves interact with climate change, here on Earth: their observations are affected by climate change, and astronomers in turn are responsible for specific emissions of carbon dioxide, and thus contribute to climate change themselves.
Now, astronomers from around the world have applied their analytical skills to their own challenging relationship with the climate crisis. Their results have now been published in six articles in the journal Nature Astronomy. The article series developed out of the special session "Astronomy for Future" at the 2020 (virtual) conference of the European Astronomical Society.
The carbon footprint of an institute
The first step in reducing emissions is to assess the carbon footprint of an institution. In one of the articles, a team of astronomers from the Max Planck Institute for Astronomy (MPIA) in Heidelberg has done just that for their own institute. Adding up the CO2 emissions for the year 2018, the astronomers found that the emissions are dominated by intercontinental flights – to attend conferences or for in-person observing at observatories in North and South America – and by electricity consumption at supercomputing facilities. Astronomers rely on supercomputers both for simulations and for data analysis.
All in all, this adds up to 18 tons of carbon dioxide per scientist, for research activities alone. For comparison, that is almost twice as much as the carbon dioxide emissions per person in Germany.
Knud Jahnke, a group leader at MPIA and lead author of the article, says: "We astronomers are responsible for our fossil fuel emissions. But reduction is rarely a question of personal choice. We need an analysis of where those emissions come from, and then figure out whether we need to take action at the institute level, at the level of the whole astronomical community, or even at the level of society as a whole in order to effect a major reduction."
The article makes several recommendations for how astronomical institutes like the MPIA could reduce their emissions. One is to move supercomputing facilities to locations where electricity is predominantly produced from renewable sources and where cooling is easier – Iceland being a possible choice. The other is a drastic reduction of research-related flights.
Virtual meetings vs. in-person meetings
The question of astronomical conferences, traditionally held as in-person meetings, with many participants travelling to the event location by plane, is addressed in another one of the six articles, in which Jahnke is a co-author.
The article compares the last two annual meetings of the European Astronomical Society: The 2019 meeting held in Lyon, France, a face-to-face conference with more than 1200 attendees, and the 2020 meeting, held as a virtual event due to the world-wide pandemic, with nearly 1800 participants.
While the mere fact that the online meeting's footprint is much smaller will not come as a surprise, the number itself might: the astronomers found that the online meeting was responsible for less than a thousandth of the carbon dioxide emissions of the face-to-face meeting.
Just like for the rest of us, the pandemic is currently forcing astronomers to experiment with online formats. And while some formats, such as plenary talks, can readily move online, there is as yet no effective virtual version of the face-to-face networking, the personal contacts that a traditional conference allows. Leonard Burtscher of the University of Leiden, first author of the paper, says: "From a climate perspective, the solution could be face-to-face conferences happening at several locations at once, allowing participants to travel by train. Plenary talks would be online, while the gathering of scientists at each `conference hub' would allow for personal interactions."
The influence of climate change on astronomy
While these two articles focus on the impact of astronomical research activities on climate change, a third article provides a complementary perspective: There, the astronomers assessed the extent to which climate change is affecting astronomy, more specifically the quality of astronomical observations.
For their analysis, they focused on one of the most productive modern observation sites: the Paranal Observatory of the European Southern Observatory (ESO) in Chile, for which there is an exhaustive data set collected by environmental sensors over the last three decades. The Paranal site has seen an increase of the average temperature by 1.5° C over the past four decades, slightly above the world-averaged value of 1° C since the pre-industrial era.
On an engineering level, this creates difficulties with telescope cooling. The enclosure of the Very Large Telescope (VLT) on Paranal is cooled during the day to night-time temperatures in order to avoid internal turbulence when opening the dome at sunset, which would degrade the observations. For sunset temperatures warmer than 16° C, complete cooling is impossible, as the cooling system reaches its limitations, resulting in some blurring of observations. Such warmer days have become more frequent with increased average temperature.
Last but not least, the cutting-edge instruments installed at the telescopes of the VLT are sensitive to specific properties of the atmosphere. Low water vapor content is crucial for infrared observations. Paranal is currently one of the driest places on Earth. For specific types of observations, air turbulence affecting the passage of light plays a role. Paranal is located below a strong jet stream layer whose strength is linked to the amplitude of El Niño events. While the available data shows no significant trends so far, El Niño events are expected to increase in amplitude over the next decades as the climate crisis progresses.
The four 8-m telescopes and four auxiliary telescopes at ESO's Paranal observatory in Chile. In planning for the future, astronomers will need to take into account the adverse effects of the climate crisis on their observations.
Acting on the information
For future telescopes, such as the Extremely Large Telescope (ELT) with its 39-meter mirror currently under construction in sight of Paranal, astronomers will have to take these and other effects into account – cautiously evaluating how observation conditions might change under the most pessimistic projections of an increase of about 4° C over the next century and the changes foreseen in global circulation, with, for instance, stronger El Niño events and more frequent high-humidity events.
With their articles, the astronomers are hoping to facilitate change within their community. MPIA's Faustine Cantalloube, lead author of the article on how the climate crisis affects observations, says: "As astronomers, we are immeasurably lucky to work in a fascinating field. With our unique perspective on the universe, it is our responsibility to communicate, inside and outside our community, about the disastrous consequences of anthropogenic climate change on our planet and our society."
Now it is up to the scientific community, as well as to the authorities that create the environment for scientific research, to act on this information. The newly published articles show a way forward – continuing astronomy research, with its unique capability of putting planet Earth and its environment in a broader perspective, but reducing the carbon footprint of that research.
The results reported here have been published as K. Jahnke et al., “An astronomical institute’s perspective on meeting the challenges of the climate crisis”; L. Burtscher et al., “The carbon footprint of large astronomy meetings” and F. Cantalloube et al., “The impact of climate change on astronomical observations” in the September 10 edition of the journal Nature Astronomy.
The MPIA researchers involved are Knud Jahnke, Faustine Cantalloube, Christian Fendt, Morgan Fouesneau, Iskren Georgiev, Tom Herbst, Melanie Kaasinen, Diana Kossakowski, Jan Rybizki, Martin Schlecker, Gregor Seidel, Thomas Henning, Laura Kreidberg and Hans-Walter Rix
in collaboration with
Leonard Burtscher (Leiden Observatory), Didier Barret (CNRS), Abhijeet P. Borkar (Czech Academy of Sciences), Victoria Grinberg (Universität Tübingen), Sarah Kendrew (ESA), Gina Maffey (JIVE), Mark J. McCaughrean (ESA), Julien Milli (CNRS and ESO), Christoph Böhm and Susanne Crewell (both Universität Köln), Julio Navarrete (ESO), Kira Rehfeld (Universität Heidelberg), Marc Sarazin (ESO) and Anna Sommani (Universität Heidelberg). |
Benefits of Ionisation
An ion is an atom that has lost or gained one or more electrons and therefore carries an electrical charge. This charge makes ions highly reactive: they are in an unstable state and will react with the first ion of opposite charge they encounter, in an attempt to return to a neutral, uncharged state.
Ions of silver and copper are positively charged, having lost electrons, and will therefore react with atoms or molecules that carry a negative charge.
The outer membrane of most bacteria is negatively charged and ions of silver and copper are therefore preferentially attracted to bacteria, where they react and cause cell death.
Silver-copper ionisation works when a low-voltage DC current is passed between a pair of electrodes, each composed of an alloy of copper and silver.
This flow of current causes some of the outermost atoms of the positively charged anode to lose electrons, thus forming positive ions, which are then attracted to the negatively charged cathode.
However, since the electrodes are placed in a stream of running water, some of these ions are carried away by the flow and released into the bulk of the water.
Both copper and silver are active against a wide range of bacteria, fungi, algae and viruses, including the pathogenic organisms found as pollutants of surface water systems.
The positively charged ions form electrostatic bonds with negatively charged sites on the organisms' cell walls, creating stresses that lead to increased cell wall permeability. This results in entry of the ions into the cells, where they interfere with enzymes responsible for cellular respiration and bind at specific sites to DNA, thereby killing the cells.
In the presence of 0.4 ppm chlorine, copper and silver ions have been shown to effect a 3.7 log reduction of Legionella pneumophila in 90 seconds in laboratory trials.
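A "log reduction" is a factor-of-ten decrease in viable organisms, so a 3.7-log reduction leaves a surviving fraction of 10^-3.7, i.e. roughly 99.98% of the bacteria are killed. A minimal sketch of that arithmetic (the function name is just for illustration):

```python
def surviving_fraction(log_reduction):
    """Fraction of organisms remaining after a given base-10 log reduction."""
    return 10 ** (-log_reduction)

frac = surviving_fraction(3.7)
print(f"3.7-log reduction -> {frac:.2e} surviving ({(1 - frac) * 100:.2f}% killed)")
# 3.7-log reduction -> 2.00e-04 surviving (99.98% killed)
```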
In large complex systems with long standing biofilm contamination it may take up to six months for ionisation to remove all traces of biofilm, but once this has been achieved the system will remain free of contamination indefinitely.
Cattle - Botulism
Botulism is a disease caused by toxins from the bacterium Clostridium botulinum; it can cause fatal paralysis. It can be vaccinated against.
There are a number of potential sources of botulism poisoning:
2) Contaminated Water Sources
3) Fertilizing Pasture With Chicken Manure
All farmers who feed out silage should consider using this vaccine. We do see outbreaks of botulism in this area. There is no treatment for botulism, and it will often affect large numbers in the herd. The bacteria is often found in dead carcasses, but can also be found in some birds and around watering holes.
Cattle - Bovine Ephemeral Fever (Three day sickness)
This is a virus spread by mosquitoes. It commonly causes problems in summer. Animals that are affected will commonly seek shade and become stiff in the joints. They can become so ill that they won't get up, and some animals can die. Most animals that recover from the disease have long-lasting immunity. Animals worth considering vaccinating are valuable animals and bulls. Vaccination must be carried out yearly to stay effective.
Cattle - Buffalo Fly
Is a common parasite in this area. It can cause production loss through fly worry, and it can cause severe damage to animals that are sensitive to it, which spend large amounts of time scratching themselves. It can also spread diseases like pink eye.
Treatments that can be used are sprays, back rubs and ear tags.
It is also important to keep a healthy population of dung beetles on your farm as this will reduce the number of flies that hatch out.
This is the link to the DPI fact sheet on buffalo fly control.
Cattle - E Coli
Is a bacteria that will often cause calves to present with an extremely watery, yellow-to-white diarrhoea that is malodorous. Dehydration must be managed in these calves. It can also cause disease in other parts of the body. Vaccination is available and recommended when this bacteria has been confirmed to be a problem on your farm. Confirmation can be made through laboratory testing.
Cattle - Feeding
Supplemental nutrition can be very important in certain times of the year. It is important to maintain cows in good body condition to avoid many of the diseases that we will commonly see. An animal with good nutrition is more likely to have a good immune system and more likely to recover from diseases without intervention.
Cattle - Flood Mud Scours (Yersinia)
Cattle - Intestinal Worms
This includes Barbers pole worm, Cooperia, Nematodirus, Ostertagia.
It is really important to have a good worming protocol to control these worms.
These are the NSW DPI recommendations for cattle worm control
Basics Of Cattle Worm Control
Cattle - Liver Fluke
Is a common parasite that we see, especially in the low flat countryside where you can get pooling of water and the cattle can access creeks and rivers.
The most important time to drench is in late autumn, when there are both immature and mature worms around. Most drenches containing triclabendazole will kill liver fluke down to about 2 weeks of age.
Cattle - Manheimia
Is a bacteria that causes respiratory disease. This vaccine should be considered for use in feedlot situations or where there has been a confirmed diagnosis of Manheimia on the farm.
Cattle - Paralysis Tick (Ixodes Holycyclus)
The Paralysis tick can cause signs in young animals and in animals that are newly introduced to this area from an area that does not have the Paralysis tick. Animals that are affected may initially appear slow, and calves may stop feeding. Most animals start with weakness in the hind legs, with the paralysis quickly affecting the animal's front legs and possibly their breathing muscles. The Paralysis tick can be fatal in calves and non-immune older cattle.
Cattle - Pestivirus
Is an extremely common virus in Australian herds. It is a disease that can manifest in many different ways, including abortions, dummy calves, deformed calves and ill-thrifty calves that may suffer from pneumonia and diarrhoea. It is estimated that 80% of Australian herds have this disease present. It often becomes endemic in a herd. Whether to institute a vaccination protocol is a difficult question to answer; it depends on your individual circumstances.
Cattle - Pink Eye
Pink eye is commonly caused by a bacteria called Moraxella bovis. There is currently a trivalent vaccine that has been released to help prevent disease. The bacteria commonly invades the eye and causes ulceration of the eye and without treatment cows can become completely blind. The bacteria is spread easily by Flies, dust, grass and close contact.
Prevention: Preventative tactics can include isolating affected animals, maintaining shorter pastures to avoid damage to the eye from grass seeds, limiting yarding during very dry dusty periods and implementing vaccination.
Cattle - Poisonous Plants
Plant poisonings occur relatively commonly in our area.
Peak times that plant poisonings occur are during or just following drought when available pasture becomes scarce and animals are forced to eat whatever vegetation is available.
In order to prevent poisonings, supplementary feeding should be undertaken during these times of limited pasture availability, and animals should be restricted from grazing areas with known poisonous plants.
Cattle - Salmonella
Is a bacteria that can cause severe diarrhoea and septicaemia (blood infection). It is also a zoonotic disease, this means that people are able to contract it from infected animals. Vaccination is recommended when you have this disease diagnosed on your farm.
Cattle - Vaccinations
In this area, all farmers should vaccinate with '5 in 1' at least. This protects against the five fatal clostridial diseases that we see. The vaccine is not particularly expensive, and well worth using.
New York, Jan 30 (IANS) US researchers have found the first reliable evidence that humans played a substantial role in the extinction of Australian mega fauna — large animal species in Australia with body mass estimates of more than 45 kilograms.
Studies have shown that more than 85 percent of Australia’s mammals, birds and reptiles weighing over 100 pounds went extinct shortly after the arrival of the first humans.
Researchers from the University of Colorado Boulder in the US discovered the first direct evidence that humans were preying on the now extinct huge, wondrous beasts — in this case a 500-pound bird.
The flightless bird, known as Genyornis newtoni, was nearly 7 feet tall and appears to have lived in much of the continent prior to the advent of humans 50,000 years ago, the study revealed.
The evidence consists of diagnostic burn patterns on Genyornis eggshell fragments that indicated that humans were collecting and cooking its eggs, thereby reducing the birds’ reproductive success, the researchers said.
“We consider this the first and only secure evidence that humans were directly preying on now extinct Australian mega fauna,” said Gifford Miller, professor at the University of Colorado Boulder.
The Genyornis eggs are thought to have been roughly the size of a cantaloupe – a fruit also called muskmelon, and weighed about 3.5 pounds, Miller said.
In analysing unburned Genyornis eggshells from more than 2,000 localities across Australia, primarily from sand dunes where the ancient birds nested, researchers used several dating methods to determine that none were younger than 45,000 years old.
Burned eggshell fragments from more than 200 of those sites suggested that they were exposed to a wide range of temperatures, Miller said.
It was likely that the blackened fragments were burned in transient, human fires — presumably to cook the eggs — rather than in wild fires, Miller explained in the paper published online in Nature Communications.
The researchers also found many of the burnt Genyornis eggshell fragments in tight clusters less than 10 feet in diameter, with no other eggshell fragments nearby.
“The conditions are consistent with early humans harvesting Genyornis eggs, cooking them over fires, and then randomly discarding the eggshell fragments in and around their cooking fires,” Miller noted.
Another line of evidence for early human predation on Genyornis eggs is the presence of ancient, burned eggshells of emus — flightless birds weighing only about 100 pounds that still exist in Australia — in the sand dunes.
Emu eggshells exhibiting burn patterns similar to Genyornis eggshells signalled that they were most likely scorched after humans arrived in Australia, Miller said.
The extinct mega fauna included a 1,000-pound kangaroo, a two-ton wombat, a 25-foot-long lizard, a 300-pound marsupial lion and a Volkswagen-sized tortoise.
To say that there is widespread acceptance of the principle of human rights is not to say that there is complete agreement about the nature and scope of such rights or, indeed, their definition. Among the basic questions that have yet to receive conclusive answers are the following: whether human rights are to be viewed as divine, moral, or legal entitlements; whether they are to be validated by intuition, culture, custom, social contract, principles of distributive justice, or as prerequisites for happiness or the achievement of human dignity; whether they are to be understood as irrevocable or partially revocable; and whether they are to be broad or limited in number and content. Even when the principle of human rights is accepted, there are controversies: whether human rights are a way of privileging narrowly conceived special interests over the common interest; whether they are the political tools of predominantly progressive elites; whether they are a stalking horse for Western economic imperialism; and so forth. It is thus sometimes claimed that there exists no universally agreed upon theory or even understanding of human rights.
The nature of human rights: commonly accepted postulates
Despite this lack of consensus, a number of widely accepted (and interrelated) postulates can assist in the task of defining human rights. Five in particular stand out, though not even these are without controversy.
First, regardless of their ultimate origin or justification, human rights are understood to represent both individual and group demands for political power, wealth, enlightenment, and other cherished values or capabilities, the most fundamental of which is respect and its constituent elements of reciprocal tolerance and mutual forbearance in the pursuit of all other such values or capabilities. Consequently, human rights imply both claims against persons and institutions impeding the realization of these values or capabilities and standards for judging the legitimacy of laws and traditions. At bottom, human rights qualify state sovereignty and power, sometimes expanding the latter even while circumscribing the former (as in the case of certain economic and social rights, for example). Increasingly, human rights are said also to qualify “private sovereignty” (as in the case, for example, of challenging the impunity of overbearing business enterprises, protecting family members from domestic violence, and holding non-state terrorist actors to account).
Second, human rights are commonly assumed to refer, in some vague sense, to “fundamental,” as distinct from “nonessential,” claims or “goods.” In fact, some theorists go so far as to limit human rights to a single core right or two—for example, the right to life or the right to equal opportunity. The tendency is to emphasize “basic needs” and to rule out “mere wants.”
Third, reflecting varying environmental circumstances, differing worldviews, and inescapable interdependencies within and between different value or capability systems, human rights refer to a wide continuum of claims, ranging from the most justiciable (or enforceable) to the most aspirational. Human rights partake of both the legal and the moral orders, sometimes indistinguishably. They are expressive of both the “is” and the “ought” in human affairs.
Fourth, most assertions of human rights—though arguably not all (freedom from slavery, genocide, or torture are notable exceptions)—are qualified by the limitation that the rights of individuals or groups in particular instances are restricted as much as is necessary to secure the comparable rights of others and the aggregate common interest. Given this limitation, which connects rights to duties, human rights are sometimes designated “prima facie rights,” so that ordinarily it makes little or no sense to think or talk of them in absolutist terms.
Finally, if a right is determined to be a human right, it is understood to be quintessentially general or universal in character, in some sense equally possessed by all human beings everywhere, including in certain instances even the unborn. In stark contrast to the divine right of kings and other such conceptions of privilege, human rights extend in theory to every person on Earth, without regard to merit or need, simply for being human or because they mitigate inherent human vulnerability or are requisite to social justice.
In several critical respects, however, all these postulates raise more questions than they answer. For instance, if, as is increasingly asserted, human rights qualify private power, precisely when and how do they do so? What does it mean to say that a right is fundamental, and according to what standards of importance or urgency is it so judged? What is the value of embracing moral as distinct from legal rights as part of the jurisprudence of human rights? Do nonjusticiable rights harbour more than rhetorical significance? If so, how? When and according to what criteria does the right of one person or group of people give way to the right of another? What happens when individual and group rights collide? How are universal human rights determined? Are they a function of culture or ideology, or are they determined according to some transnational consensus of merit or value? If the latter, is the consensus in question regional or global? How exactly would such a consensus be ascertained, and how would it be reconciled with the right of nations and peoples to self-determination? Is the existence of universal human rights incompatible with the notion of national sovereignty? Should supranational norms, institutions, and procedures have the power to nullify local, regional, and national laws on capital punishment, corporal punishment of children, “honour killing,” veil wearing, female genital cutting, male circumcision, the claimed right to bear arms, and other practices? For some in the human rights debate, this raises a further controversy concerning how such situations comport with Western conceptions of democracy and representative government.
In other words, though accurate, the five foregoing postulates are fraught with questions about the content and legitimate scope of human rights and about the priorities, if any, that exist among them. Like the issue of the origin and justification of human rights, all five are controversial.
The content of human rights: three “generations” of rights
Like all normative traditions, the human rights tradition is a product of its time. Therefore, to understand better the debate over the content and legitimate scope of human rights and the priorities claimed among them, it is useful to note the dominant schools of thought and action that have informed the human rights tradition since the beginning of modern times.
Particularly helpful in this regard is the notion of three “generations” of human rights advanced by the French jurist Karel Vasak. Inspired by the three themes of the French Revolution, they are: the first generation, composed of civil and political rights (liberté); the second generation of economic, social, and cultural rights (égalité); and the third generation of solidarity or group rights (fraternité). Vasak’s model is, of course, a simplified expression of an extremely complex historical record, and it is not intended to suggest a linear process in which each generation gives birth to the next and then dies away. Nor is it to imply that one generation is more important than another, or that the generations (and their categories of rights) are ultimately separable. The three generations are understood to be cumulative, overlapping, and, it is important to emphasize, interdependent and interpenetrating.
Liberté: civil and political rights
The first generation, civil and political rights, derives primarily from the 17th- and 18th-century reformist theories noted above (i.e., those associated with the English, American, and French revolutions). Infused with the political philosophy of liberal individualism and the related economic and social doctrine of laissez-faire, the first generation conceives of human rights more in negative terms (“freedoms from”) than positive ones (“rights to”); it favours the abstention over the intervention of government in the quest for human dignity. Belonging to this first generation, thus, are rights such as those set forth in Articles 2–21 of the Universal Declaration of Human Rights, including freedom from gender, racial, and equivalent forms of discrimination; the right to life, liberty, and security of the person; freedom from slavery or involuntary servitude; freedom from torture and from cruel, inhuman, or degrading treatment or punishment; freedom from arbitrary arrest, detention, or exile; the right to a fair and public trial; freedom from interference in privacy and correspondence; freedom of movement and residence; the right to asylum from persecution; freedom of thought, conscience, and religion; freedom of opinion and expression; freedom of peaceful assembly and association; and the right to participate in government, directly or through free elections. Also included are the right to own property and the right not to be deprived of it arbitrarily—rights that were fundamental to the interests fought for in the American and French revolutions and to the rise of capitalism.
Yet it would be wrong to assert that these and other first-generation rights correspond completely to the idea of “negative” as opposed to “positive” rights. The right to security of the person, to a fair and public trial, to asylum from persecution, or to free elections, for example, manifestly cannot be assured without some affirmative government action. What is constant in this first-generation conception is the notion of liberty, a shield that safeguards the individual—alone and in association with others—against the abuse of political authority. This is the core value. Featured in the constitution of almost every country in the world and dominating the majority of international declarations and covenants adopted since World War II (in large measure due to the brutal denial of the fundamentals of civic belonging and democratic inclusion during the Nazi era), this essentially Western liberal conception of human rights is sometimes romanticized as a triumph of the individualism of Hobbes and Locke over Hegel’s glorification of the state.
Égalité: economic, social, and cultural rights
The second generation, composed of economic, social, and cultural rights, originated primarily in the socialist tradition, which was foreshadowed among adherents of the Saint-Simonian movement of early 19th-century France and variously promoted by revolutionary struggles and welfare movements that have taken place since. In large part, it is a response to the abuses of capitalist development and its underlying and essentially uncritical conception of individual liberty, which tolerated, and even legitimized, the exploitation of working classes and colonial peoples. Historically, economic, social, and cultural rights are a counterpoint to the first generation, civil and political rights, and are conceived more in positive terms (“rights to”) than in negative ones (“freedoms from”); they also require more the intervention than the abstention of the state for the purpose of assuring the equitable production and distribution of the values or capabilities involved. Illustrative are some of the rights set forth in Articles 22–27 of the Universal Declaration of Human Rights, such as the right to social security; the right to work and to protection against unemployment; the right to rest and leisure, including periodic holidays with pay; the right to a standard of living adequate for the health and well-being of self and family; the right to education; and the right to the protection of one’s scientific, literary, and artistic production.
But in the same way that not all the rights embraced by the first generation (civil and political rights) can be designated as “negative rights,” so not all the rights embraced by the second generation (economic, social, and cultural rights) can be labeled as “positive rights.” For example, the right to free choice of employment, the right to form and to join trade unions, and the right to participate freely in the cultural life of the community (Articles 23 and 27) do not inherently require affirmative state action to ensure their enjoyment. Nevertheless, most of the second-generation rights do necessitate state intervention, because they subsume demands more for material than for intangible goods according to some criterion of distributive justice. Second-generation rights are, fundamentally, claims to social equality. However, in part because of the comparatively late arrival of socialist-communist and compatible “Third World” influence in international affairs, but more recently because of the ascendency of laissez-faire capitalism and the globalization of neoliberal, free-market economics since the end of the Cold War, the internationalization of these “equality rights” has been relatively slow in coming and is unlikely to truly come of age any time soon. On the other hand, as the social inequities created by unregulated national and transnational capitalism become more and more evident over time and are not directly accounted for by explanations based on gender or race, it is probable that the demand for second-generation rights will grow and mature, and in some instances even lead to violence. Indeed, this tendency was apparent already at the beginning of the 2010s, most notably in the widespread protests against austerity measures in Europe as the euro-zone debt crisis unfolded and in wider efforts (including social movements such as the “Occupy” movement) to regulate intergovernmental financial institutions and transnational corporations to protect the public interest.
Fraternité: solidarity or group rights
Finally, the third generation, composed of solidarity or group rights, while drawing upon and reconceptualizing the demands associated with the first two generations of rights, is best understood as a product of both the rise and the decline of the state since the mid-20th century. Foreshadowed in Article 28 of the Universal Declaration of Human Rights, which proclaims that “everyone is entitled to a social and international order in which the rights set forth in this declaration can be fully realized,” this generation appears so far to embrace six claimed rights (although events of the early 21st century arguably suggest that a seventh claimed right—a right to democracy—may be in the process of emerging). Three of the claimed rights reflect the emergence of nationalism in the developing world in the 1960s and ’70s and the “revolution of rising expectations” (i.e., its demand for a global redistribution of power, wealth, and other important values or capabilities): the right to political, economic, social, and cultural self-determination; the right to economic and social development; and the right to participate in and benefit from “the common heritage of mankind” (shared Earth and space resources; scientific, technical, and other information and progress; and cultural traditions, sites, and monuments). The three remaining claimed solidarity or group rights—the right to peace, the right to a clean and healthy environment, and the right to humanitarian disaster relief—suggest the impotence or inefficiency of the state in certain critical respects.
All of these claimed rights tend to be posed as collective rights, requiring the concerted efforts of all social forces, to a substantial degree on a planetary scale. However, each of them also manifests an individual dimension. For example, while it may be said to be the collective right of all countries and peoples (especially developing countries and non-self-governing peoples) to secure a “new international economic order” that would eliminate obstacles to their economic and social development, so also may it be said to be the individual right of every person to benefit from a developmental policy that is based on the satisfaction of material and nonmaterial human needs. It is important to note, too, that the majority of these solidarity rights are more aspirational than justiciable in character and that their status as international human rights norms remains somewhat ambiguous.
Thus, at various stages of modern history, the content of human rights has been broadly defined, not with any expectation that the rights associated with one generation would or should become outdated upon the ascendancy of another, but expansively or supplementally. The history of the content of human rights reflects evolving and conflicting perceptions of which values or capabilities stand, at different times and through differing lenses, most in need of responsible attention and, simultaneously, humankind’s recurring demands for continuity and stability. Such dynamics are reflected, for example, in a rising consensus that human rights extend to the private as well as to the public sector—i.e., that non-state as well as state actors must account for their violations of human rights. Similarly reflecting the continuing pressure for human rights evolution is a current suggestion that there exists a “fourth generation” of human rights consisting of women’s and intergenerational rights (i.e., the rights of future generations, including existing children) among others.
Legitimacy and priority
Liberté versus égalité
The fact that the content of human rights has been broadly defined should not be taken to imply that the three generations of rights are equally accepted by everyone. Nor should broad acceptance of the idea of human rights suggest that their generations or their separate elements have been greeted with equal urgency. The ongoing debate about the nature and content of human rights reflects, after all, a struggle for power and for favoured conceptions of the “good society.”
First-generation proponents, for example, are inclined to exclude second- and third-generation rights from their definition of human rights altogether or, at best, to regard them as “derivative.” In part this is because of the complexities involved in putting these rights into operation. The suggestion that first-generation rights are more feasible than other generations because they stress the absence over the presence of government is somehow transformed into a prerequisite of a comprehensive definition of human rights, such that aspirational claims to entitlement are deemed not to be rights at all. The most-compelling explanation for such exclusions, however, has more to do with ideology or politics than with operational concerns. Persuaded that egalitarian claims against the rich, particularly where collectively espoused, are unworkable without a severe decline in liberty, first-generation proponents, inspired by the natural law and laissez-faire traditions, are committed to the view that human rights are inherently independent of organized society and are necessarily individualistic.
Conversely, second- and third-generation defenders often look upon first-generation rights, at least as commonly practiced, as insufficiently attentive to material—especially “basic”—human needs and, indeed, as being instruments in service to unjust social orders, hence constituting a “bourgeois illusion.” Accordingly, if they do not place first-generation rights outside their definition of human rights, these partisans tend to assign such rights a low status and to treat them as long-term goals that will come to pass only after the imperatives of economic and social development have been met, to be realized gradually and fully achieved only sometime vaguely in the future.
This liberty-equality and individualist-collectivist debate was especially evident during the period of the Cold War, reflecting the extreme tensions that then existed between liberal and Hegelian-Marxist conceptions of sovereign public order. Although Western social democrats during this period, particularly in Scandinavia, occupied a position midway between the two sides, pursuing both liberty and equality—in many respects successfully—it remains true that the different conceptions of rights contain the potential for challenging the legitimacy and supremacy not only of one another but, more importantly, of the sociopolitical systems with which they are most intimately associated.
The relevance of custom and tradition: the universalist-relativist debate
With the end of the Cold War, however, the debate took on a more North-South character and was supplemented and intensified by a cultural-relativist critique that eschewed the universality of human rights doctrines, principles, and rules on the grounds that they are Western in origin and therefore of limited relevance in non-Western settings. The viewpoint underlying this assertion—that the scope of human rights in any given society should be determined fundamentally by local, national, or regional customs and traditions—may seem problematic, especially when one considers that the idea of human rights and many of its precepts are found in all the great philosophical and religious traditions. Nevertheless, the historical development of human rights demonstrates that the relativist critique cannot be wholly or axiomatically dismissed. Nor is it surprising that it should emerge soon after the end of the Cold War. First prominently expressed in the declaration that emerged from the Bangkok meeting held in preparation for the second UN World Conference on Human Rights convened in Vienna in June 1993 (which qualified a reaffirmation of the universality of human rights by stating that human rights “must be considered in the context of…national and regional particularities and various historical, cultural and religious backgrounds”), the relativist critique reflects the end of a bipolar system of alliances that had discouraged independent foreign policies and minimized cultural and political differences in favour of undivided Cold War loyalties.
Against the backdrop of increasing human rights interventionism on the part of the UN and by regional organizations and deputized coalitions of states (as in Bosnia and Herzegovina, Somalia, Liberia, Rwanda, Haiti, Serbia and Kosovo, Libya, and Mali, for example), the relativist viewpoint serves also as a functional equivalent of the doctrine of respect for national sovereignty and territorial integrity, which had been declining in influence not only in the human rights context but also in the contexts of national security, economics, and the environment. As a consequence, there remains sharp political and theoretical disagreement about the legitimate scope of human rights and about the priorities that are claimed among them.
Inherent risks in the debate
In the final analysis, however, this legitimacy-priority debate can be dangerously misleading. Although useful for pointing out how notions of liberty and individualism have been used to rationalize the abuses of capitalism and Western expansionism and for exposing the ways in which notions of equality, collectivism, and culture have been alibis for authoritarian governance, in the end the debate risks obscuring at least three essential truths that must be taken into account if the contemporary worldwide human rights movement is to be understood objectively.
First, one-sided characterizations of legitimacy and priority are very likely, at least over the long term, to undermine the political credibility of their proponents and the defensibility of the rights they regard as preeminently important. In an increasingly interdependent global community, any human rights orientation that does not support the widest possible shaping and sharing of values or capabilities among all human beings is likely to provoke widespread skepticism. The period since the mid-20th century is replete with examples, among them the official U.S. position that only civil and political rights—including the rights to own property and to invest in processes of production and exchange—can be deemed legally recognizable rights.
Second, such characterizations do not accurately reflect reality. In the real world, virtually all societies, whether individualistic or collectivist in essential character, at least consent to, and most even promote, a mixture of all basic values or capabilities. U.S. President Franklin Delano Roosevelt’s Four Freedoms (freedom of speech and expression, freedom of worship, freedom from want, and freedom from fear) is an early case in point. A later demonstration is found in the Vienna Declaration and Programme of Action of the 1993 conference mentioned above, adopted by representatives of 171 states. It proclaims that
[w]hile the significance of national and regional particularities and various historical, cultural and religious backgrounds must be borne in mind, it is the duty of States, regardless of their political, economic and cultural systems, to promote and protect all human rights and fundamental freedoms.
Finally, in the early 21st century, none of the international human rights instruments in force or proposed said anything about the legitimacy or priority of the rights it addresses, save possibly in the case of rights that by international covenant are stipulated to be “nonderogable” and therefore, arguably, more fundamental than others (e.g., freedom from arbitrary or unlawful deprivation of life, freedom from torture and from inhuman or degrading treatment and punishment, freedom from slavery, and freedom from imprisonment for debt). To be sure, some disagreements about legitimacy and priority can derive from differences of definition (e.g., what is “torture” or “inhuman treatment” to one may not be so to another, as in the case of punishment by caning or waterboarding or by death). Similarly, disagreements can arise also when treating the problem of implementation. For instance, some insist first on certain civil and political guarantees, whereas others defer initially to conditions of material well-being. Such disagreements, however, reflect differences in political agendas and have little if any conceptual utility. As confirmed by numerous resolutions of the UN General Assembly and reaffirmed in the Vienna Declaration and Programme of Action, there is a wide consensus that all human rights form an indivisible whole and that the protection of human rights is not and should not be a matter of purely national jurisdiction. The extent to which the international community actually protects the human rights it prescribes is, on the other hand, a different matter. |
This summer, the coast of the Carolinas has seen more than its share of shark attacks. Although shark attacks are rare, most people have run afoul of some critter at one point or another. In the United States, there are millions of animal bites every year resulting in hundreds of thousands of ER visits. In this article, we’ll talk about the furry kind, but here are my articles on other types of bites:
INSECT BITES
SPIDER BITES
Wild animals will bite when threatened, ill, or to protect their territory and offspring. Most, however, avoid humans if at all possible. In the grand majority of cases, pets like cats, dogs, and rodents are the perpetrators. Most animal bites affect the hands (in adults) and the face, head, and neck (in children).
Dog bites are responsible for 1,000 emergency care visits every day in the U.S. According to a 1994 study, dog bites are:
• 6.2 times more likely to be incurred by male dogs.
• 2.6 times more likely by dogs that haven’t been neutered.
• 2.8 times more likely if the dog is chained or otherwise restrained.
• More commonly seen in children 14 years and younger than any other age group. Boys are much more likely to be the victims.
Although more common, dog bites are usually more superficial than cat bites; a dog’s teeth are relatively dull compared to felines’. Despite this, their jaws are powerful and can inflict crush injuries to soft tissues.
Cats’ teeth are thin and sharp, and puncture wounds tend to be deeper. Any bite can lead to infection if ignored, but cat bites inject bacteria into deeper tissues and become contaminated more often. Rabies and Tetanus are just some of the infections that can be passed through a bite wound.
9 THINGS TO DO WHEN AN ANIMAL BITES
Whenever a person has been bitten, there are several important actions that should be taken:
- Control bleeding with direct pressure using gloves and a bandage or other barrier.
- Clean the wound thoroughly with soap and water. Flushing the wound aggressively with a 60-100 cc irrigation syringe filled with clean water will help remove embedded dirt and bacteria-containing saliva.
- Use an antiseptic to decrease the chance of infection. Betadine (2% povidone-iodine solution) or Benzalkonium Chloride (BZK) are good choices.
- When off-grid, don’t close the wound if at all possible. Many animal bite wounds are stitched closed in a modern medical facility, but this may be inadvisable in a survival setting. Any animal bite should be considered a “dirty” wound; closing the wound may lock in dangerous bacteria.
- Remove any rings or bracelets if the bite wound is on the hand. If swelling occurs, they may be very difficult to remove afterwards.
- Use an ice pack to decrease swelling, bruising, and pain.
- Frequently clean and cover a recovering bite wound. Clean, drinkable water or a dilute antiseptic solution will suffice.
- Apply antibiotic ointment to the area and be sure to watch for signs of infection. These may include redness, swelling or oozing. In many instances, the site might feel unusually warm to the touch. Warm moist compresses to the area will help an infected wound drain. Learn more about infected wounds in our video on the subject.
- Consider oral antibiotics as a precaution if off-grid (especially after a cat bite). Although Amoxicillin with Clavulanic acid 500mg every 8 hours for a week is a good first line therapy, Clindamycin (veterinary equivalent: Fish-Cin) 300mg orally every 6 hours and Ciprofloxacin (Fish-Flox) 500 mg every 12 hours in combination are also good choices. Azithromycin, Metronidazole (Fish-Zole) and Ampicillin-Sulbactam have been used as alternatives.
Children who suffer animal bites may become traumatized by the experience and benefit from counseling. Youngsters should be informed about the risks of animal bites and taught to avoid stray dogs, cats, and wild animals. Never leave a small child unattended around animals: Without an able-bodied person to intervene, the outcome may be tragic.
It is important to remember that humans are animals, too. In rare cases, you might see bites from this source as well. Approximately 10-15% of human bites become infected, due to the fact that there are over 100 million bacteria per milliliter in human saliva. Treat as you would any contaminated wound.
Disclaimer: All content in this article is meant to be informational in nature, and does not constitute medical advice. It is not meant to serve as a replacement for evaluation and treatment by a qualified medical professional.
Joe Alton, MD
Learn more about animal bites and many other medical issues you’d encounter off-the-grid in The Survival Medicine Handbook, with over 200 5-star reviews on Amazon!
Fascinating Education is a full online science curriculum created by Dr. Margulies. Dr. Margulies wanted to encourage a love of science, and he has found a way to take the technology that kids want to use and bring science to life. Fascinating Education offers online science for middle and high school.
In Fascinating Education, the teaching process approaches science through the “right hemisphere” of the brain. Instead of just looking at textbooks to explain science, he uses simple, colorful slides as the teaching tool. Audio helps students to understand the text included.
Fascinating Education offers classes in Chemistry, Biology and Physics. The audio-visual technique used by Fascinating Education allows for clear instruction and a “right-hemispheric” learning approach that takes advantage of the brain’s ability to process images more efficiently and effectively than reading a textbook alone.
Fascinating Education starts by teaching students the basics of chemistry, biology or physics in plain English and works up to the advanced areas of each subject. Instead of just learning vocabulary, students learn that science is part of real life and can be seen all around them.
Fascinating Chemistry allows students to learn about how atoms bond to each other to create molecules, and how each bond helps determine the properties of the resulting molecule. Learn how these special molecular properties explain our everyday world – from water freezing to nuclear energy to food to metals to weather, and more.
Fascinating Biology teaches the basic principles of biology, including the components of life: cell membranes, taking in nutrients, creating chemical energy, growing and repairing, reproducing, maintaining a stable internal environment, and adapting to a changing external environment.
Fascinating Physics studies the laws of nature governing movement, energy and sound. Students learn about the forces of electricity, magnetism, gravity, and the atomic nucleus, and look at mathematical formulas and how they predict events around us.
Parents will like that they can easily follow what their student is learning, and there are also periodic tests available throughout the lessons. The easy-to-use format allows students to work independently or along with an instructor.
Fascinating Education has created an online learning environment that will bring out your child’s scientific curiosity and not just have them memorizing facts. Fascinating Education can be used by any student whether they are homeschooled, in public school, private school or just want to take extra science courses.
A big thank you to Renita Kuehner of Krazy Kuehner Days for writing this introductory post. |
Level: ECE, Primary, Junior
Grades: PreK-5 | Age: 2-11yrs | Written by: Andrea Mulder-Slater
[Andrea is one of the creators of KinderArt.com]
Students will use leaves and twigs to create figures.
What You Need:
- Leaves (all shapes, all sizes, all colors).
- Small twigs.
- Construction paper (different colors).
- Glue and scissors.

What You Do:
- Have a look at a small pile of leaves to see if their shapes suggest heads, arms, bodies etc.
- Choose some leaves that resemble people parts and glue the shapes down on construction paper. You may need to do some cutting and rearranging to come up with a pleasing shape.
- If you have the leaves (and the time) you could create a huge leaf person by drawing out a body shape and gluing leaves all over to fill in the shape.
- You can use twigs and construction paper scraps to add details to your leaf person.
Since cosine is not a one-to-one function, the domain must be limited to 0 to pi, which is called the restricted cosine function. The inverse cosine function is written as cos^-1(x) or arccos(x). Inverse functions swap x- and y-values, so the range of inverse cosine is 0 to pi and the domain is -1 to 1. When evaluating problems, use identities or start from the inside function.
I want to talk about the inverse cosine function. We start with the function y = cos x. I have a graph here, and you can see that y = cos x is very much not a one-to-one function, and we can only find the inverses of one-to-one functions. So we have to restrict the domain of the cosine function, and the convention is to restrict it to the interval from 0 to pi. Let me draw the restricted cosine function: just this piece of the cosine graph, up to and including pi and down to and including 0. So y = cos x for x between 0 and pi is the restricted cosine function; it is one-to-one, and so we can invert it. We call this inverse y = cos^-1(x). That's how this is read; the superscript negative 1 is not an exponent, it means the inverse of cosine. This function is also called y = arccos x. Now I want to graph arccosine, or inverse cosine, so I start with key points of the cosine curve: (0, 1), (pi/2, 0) and (pi, -1). These are the three key points, and remember that when you're graphing an inverse function you just interchange the x- and y-coordinates. So the point (0, 1) becomes (1, 0), the point (pi/2, 0) becomes (0, pi/2), and the point (pi, -1) becomes (-1, pi), which is going to be somewhere here. Let me connect these, keeping in mind that the graph of a function and its inverse have to be symmetric about the line y = x, so this is a pretty good graph. Now, very important, the domain. I'll mark negative 1 here: the domain of the inverse cosine function is between -1 and 1. And think about it: the cosine function can only output numbers between -1 and 1, so it makes sense that the domain of the inverse cosine function is this interval, and the range is going to be between 0 and pi, because that was the domain of the restricted cosine function. And that's it. This is the graph of the inverse cosine: domain between -1 and 1, range between 0 and pi, and it has these three key points.
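As a quick check of these key points and the domain restriction, here is a minimal Python sketch using the standard-library math.acos function (Python is used purely for illustration here; it is not part of the original lesson):

import math

# Key points of y = arccos(x), found by swapping the (x, y) coordinates
# of the restricted cosine curve:
#   cos(0) = 1      ->  arccos(1)  = 0
#   cos(pi/2) = 0   ->  arccos(0)  = pi/2
#   cos(pi) = -1    ->  arccos(-1) = pi
for x in (1, 0, -1):
    print(f"arccos({x}) = {math.acos(x):.4f} radians")

# Inputs outside the domain [-1, 1] are rejected, because cosine never
# outputs values outside that interval.
try:
    math.acos(2)
except ValueError as error:
    print("arccos(2) is undefined:", error)

Every value printed lies between 0 and pi, which matches the range of the inverse cosine function described above.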
Styles of handwriting
Until 1875 the so-called "German" or "Gothic" style of handwriting was commonly used in Denmark. This was also the style of script that children were taught in school. Practically all the older documents in the Archives are written in this style - it is necessary to learn how to read it if you wish to use the records at The Danish National Archives.
There are some letters that you should pay particular attention to. The letters “f”, “h” and the extended “s” are quite similar and could easily be mistaken for each other. The short “s” might resemble the Latin “r” as it is used in present-day handwriting. The letters “v” and “r” also appear to be similar.
Spelling from bygone ages
When using records from an earlier historical period, do not expect to find the same spelling of a word that might be found in a modern dictionary. You must not expect the spelling to be correct, or that the same word will be spelled the same way each time it is written. Often a word will be spelled two or three different ways – on the same page, or even within the same sentence!
In older texts, from around 1600-1680, you might find some words spelled with an “i” where contemporary Danish would require an “e”. For instance, the word “her” (in Danish, “hendes”) would be spelled “hendis”; or “their” (in Danish, “deres”) would be spelled “deris”. Similarly the letter “g” could be substituted by “ck” or “ch”. Thus, the word “and” (in Danish, “og”) could be spelled “ock” or “och”. Another difficulty could arise if the writer spoke a dialect, which would often be reflected in the spelling and usage of certain words.
The plant that doesn’t feel the cold
7 January 2010
Scientists at the John Innes Centre, an institute of BBSRC, have discovered that plants have a built-in thermometer that they use to control their development.
Plants are exposed to huge variations in temperature through the seasons as well as big differences between night and day. To cope with this, they sense the temperature around them and adjust their growth accordingly. Publishing in the journal Cell, the researchers have now identified a thermometer gene, which could be crucial for breeding crops able to cope with the effects of climate change.
Plants can sense differences of just 1ºC, and climate change has already had significant effects, bringing forward when some plants flower and changing global distributions of species. While the effect of temperature on plants has been known for hundreds of years, it has been a mystery until now how temperature is sensed.
To solve this problem, Vinod Kumar and Phil Wigge at the John Innes Centre looked at all of the genes in the model plant Arabidopsis to see which are switched on by warmer temperature. They connected one of these genes to a luminescent gene to create plants that give off light when the temperature is increased. In this way, the team could screen for mutants that could no longer sense the proper temperature. One mutant was particularly interesting, since it had lost the ability to sense temperature correctly: the plant behaved as though it was hot all the time, and the scientists could see this because the plant was luminescent whether it was warm or cold.
“It was amazing to see the plants,” said Dr Vinod Kumar, who discovered the mutant plant. “They grew like plants at high temperature even when we turned the temperature right down.”
This plant has a single defect that affects how a special version of a histone protein works. Histone proteins bind to DNA and wrap it around them, and so control which genes are switched on. Remarkably, when this specialised histone is no longer incorporated into DNA, plants express all their genes as if they are at a high temperature, even when it is cold. This told the scientists that this specialised histone is a key regulator of temperature responses.
The histone variant works as a thermometer by binding to the plant’s DNA more tightly at lower temperatures, blocking the gene from being switched on. As the temperature increases, the histone loses its grip and starts to drop off the DNA, allowing the gene to be switched on.
The temperature sensing histone variant was found to control a gene that has helped some plant species adapt to climate change by rapidly accelerating their flowering. Species that do not adjust their flowering time are going locally extinct at a high rate. Plants must continually adapt to their environment as they are unable to move around, and understanding how plants use temperature sensing will enable scientists to examine how different species will respond to further increases in global temperatures.
“We may be able to use these genes to change how crops sense temperature,” said Dr Wigge. “If we can do that then we may be able to breed crops that are resistant to climate change.”
Notes to editors
A video interview with Dr Phil Wigge is available at www.jic.ac.uk
Reference: H2A.Z-Containing Nucleosomes Mediate the Thermosensory Response in Arabidopsis, Vinod Kumar and Philip Wigge, Cell 2010, 140(1), to be published 8 January 2010
About the John Innes Centre
The John Innes Centre, www.jic.ac.uk, is an independent, world-leading research centre in plant and microbial sciences with over 800 staff. JIC is based on Norwich Research Park and carries out high quality fundamental, strategic and applied research to understand how plants and microbes work at the molecular, cellular and genetic levels. The JIC also trains scientists and students, collaborates with many other research laboratories and communicates its science to end-users and the general public. The JIC is grant-aided by the Biotechnology and Biological Sciences Research Council
The Biotechnology and Biological Sciences Research Council (BBSRC) is the UK funding agency for research in the life sciences. Sponsored by Government, BBSRC annually invests around £450M in a wide range of research that makes a significant contribution to the quality of life for UK citizens and supports a number of important industrial stakeholders including the agriculture, food, chemical, healthcare and pharmaceutical sectors. BBSRC carries out its mission by funding internationally competitive research, providing training in the biosciences, fostering opportunities for knowledge transfer and innovation and promoting interaction with the public and other stakeholders on issues of scientific interest in universities, centres and institutes.
The Babraham Institute, Institute for Animal Health, Institute of Food Research, John Innes Centre and Rothamsted Research are Institutes of BBSRC. The Institutes conduct long-term, mission-oriented research using specialist facilities. They have strong interactions with industry, Government departments and other end-users of their research.
Andrew Chapple, JIC Press office
tel: 01603 251490
Zoe Dunford, JIC Press office
tel: 01603 255111
Matt Goode, Head of External Relations
tel: 01793 413299
Tracey Jewitt, Media Officer
tel: 01793 414694
fax: 01793 413382 |
Three wide angle views taken by the Mars Orbiter Camera on NASA's Mars Global Surveyor at intervals approximately one Mars year apart show similar spiral dust clouds over a volcano named Arsia Mons. The upper-left image (figure 1) was taken on June 19, 2001, the first day of southern winter on Mars. The upper-right image (figure 2) was taken on April 24, 2003, in late southern autumn on Mars. The lower image was taken on Feb. 25, 2005, slightly earlier in late southern autumn on Mars.
Figure 1: 19 June 2001, Ls 180°
Figure 2: 24 April 2003, Ls 173°
Some parts of Mars experience weather phenomena that repeat each year at about the same time. In some regions, the repeated event may be a dust storm that appears every year, like clockwork, in such a way that we can only wish the weather were so predictable on Earth. One of the repeated weather phenomena occurs each year near the start of southern winter over Arsia Mons, which is located near 9 degrees south latitude, 121 degrees west longitude. Just before southern winter begins, sunlight warms the air on the slopes of the volcano. This air rises, bringing small amounts of dust with it. Eventually, the rising air converges over the volcano's caldera, the large, circular depression at its summit. The fine sediment blown up from the volcano's slopes coalesces into a spiraling cloud of dust that is thick enough to actually observe from orbit.
The spiral dust cloud over Arsia Mons repeats each year, but observations and computer calculations indicate it can only form during a short period of time each year. Similar spiral clouds have not been seen over the other large Tharsis volcanoes, but other types of clouds have been seen.
The spiral dust cloud over Arsia Mons can tower 15 to 30 kilometers (9 to 19 miles) above the volcano. The white and bluish areas in the images are thin clouds of water ice. In the 2005 case, more water ice was present than in the previous years at the time the pictures were obtained. For scale, the caldera of Arsia Mons is about 110 kilometers (68 miles) across, and the summit of the volcano stands about 10 kilometers (6 miles) above its surrounding plains.
The Mars Orbiter Camera was built and is operated by Malin Space Science Systems, San Diego, Calif. Mars Global Surveyor left Earth on Nov. 7, 1996, and began orbiting Mars on Sept. 12, 1997. JPL, a division of the California Institute of Technology, Pasadena, manages Mars Global Surveyor for NASA's Science Mission Directorate, Washington. |
Comb Jelly DNA Studies Are Changing How Scientists Think Animals Evolved
Comb jellies are these beautiful, otherworldly creatures that sparkle gently in the sea. And now, if a study in the journal Science and another one in the journal Nature hold up, they may not be so gentle on evolution or the tree of life. These “aliens of the sea” are fundamentally changing how we think about both.
The standard line for evolution has been that all the complicated stuff evolved once: way back when, some common ancestor evolved a nervous system, muscles and so on, and all of our systems are built on those first ones.
Seems reasonable given how hard it probably was to cobble together all the components to get these systems to work. And there was a lot of evidence to support this idea too. For example, it looked like a subset of parts of the nervous system were shared by all the animals that have a nervous system.
This no longer seems to be the case. Back in December, a group of researchers took a close look at the DNA of the sea walnut (Mnemiopsis leidyi) and found that it lacked the usual set of genes animals have to make a nervous system. They also found that this comb jelly lacked almost all of the genes needed to make muscles. This was even though this comb jelly has both muscles and a nervous system.
This result has now been confirmed in a study out on May 21 on a second comb jelly, the Pacific sea gooseberry (Pleurobrachia bachei). The researchers not only found that this comb jelly lacks the same set of genes, but they also showed that its nervous system works in a unique way too.
Most animals use a very similar set of chemicals to communicate from one nerve cell to another. The authors found that the Pacific sea gooseberry uses hardly any of these shared neurotransmitters. No dopamine, serotonin or any of the other common ones you may have heard of.
Instead, this comb jelly appears to have its own unique set of neurotransmitters. And because these signaling chemicals are captured by their own specific set of receptors, this means that the Pacific sea gooseberry has its own set of unique receptors too. The DNA confirms this result.
The easiest explanation for this is that the comb jelly nervous system evolved independently of every other animal’s nervous system. The other explanation that it had one like ours, lost it, and then invented a new one seems way less likely. Something similar probably happened with comb jelly muscles too.
All animals to date that have muscles use the same subset of genes to make them. As these studies show, comb jelly (or ctenophore) DNA has almost none of these genes. It looks like these beautiful sea creatures have reinvented the wheel on this one as well. They have their own set of genes that cause their muscles to develop.
So complicated systems can evolve more than once. This is mind blowing stuff that reshapes how we think about evolution.
Apparently evolving a nervous system isn’t so hard that there is only one way to do it. It also isn’t so hard that once something does it, that animal outcompetes everyone else before they can make their own nervous system. There is (or was) room in nature for many paths to complicated systems.
These findings have also caused scientists to remake the tree of life. Comb jellies now have their own branch, separate from all other animals. In other words, our common ancestor split into a group that led to comb jellies and another group that led to all other animals.
Looking at the DNA of lots of different beasts is causing us to rethink how evolution happens. Results like this make it imperative that we sequence as many living things as we can get our hands on, especially since looking at DNA has become so cheap and easy. Of course this all depends on the government giving scientists the money they need to do these studies.
Although a lightning strike may be the direct cause of the runaway blaze that's burned more than 47,800 acres of Southern California since Sunday, a look at the bigger picture shows there may be a more insidious culprit as well: global warming.
Between 1970 and 2003, the average length of the active wildfire season (from the start of the first reported fire to the day the last reported fire is controlled) increased by 64 percent, or 78 days. Wildfires between 1987 and 2003 burned for an average of 37.1 days before being controlled—almost five times longer than the 7.5 day average between 1970 and 1986. Recent fires also burned more than six and a half times the total area charred by fires in the earlier time period. This research, published in the journal Science by researchers at Scripps Institute of Oceanography, the US Geological Survey, the University of Arizona and the University of California, examined 34 years of hydro-climatic and land-surface data of large wildfires (more than 988 acres) in the western United States. The most dramatic changes were seen beginning in the mid-1980s.
Why the change? Simply put, warmer temperatures. The study showed the average spring and summer temperature between 1987 and 2003 increased 1.6°F from the average temperatures between 1970 and 1986. Mountain snow-cover is also melting one to four weeks earlier than it did 50 years ago, lengthening the window in which wildfires are easily triggered and spread.
During their lifetime, trees and other plants absorb carbon dioxide, a greenhouse gas, from the atmosphere, countering the effect that's linked to warming temperatures on Earth while leaving oxygen for humans to enjoy. When a tree dies, however, its CO2 is gradually re-released into the atmosphere. (See The Tree Solution for a refresher about CO2 production in forests.) But when a tree burns, all of its stored CO2 is released at once. Meghan Salmon, a physical scientist at the United States Department of Agriculture Missoula Fire Sciences Lab, estimates that burning one acre of forest releases 36 tons of CO2 into the atmosphere—that includes CO emissions that are converted to CO2 in the atmosphere by gas phase reaction. In 2002, CO2 emissions due to wildfires in Colorado equaled an entire year's worth of the state's transportation emissions, according to the National Center for Atmospheric Research.
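As a rough, back-of-the-envelope illustration of that per-acre figure, the short Python sketch below multiplies it by the roughly 47,800 acres burned in the Southern California fire mentioned above. This is an assumption-laden estimate only; actual emissions depend on fuel load and how completely an area burns.

# Rough CO2 estimate for the Southern California fire, using the
# ~36 tons-per-acre figure cited above. Illustrative only: real
# emissions vary with fuel type and burn completeness.
CO2_TONS_PER_ACRE = 36      # estimate quoted from the Missoula Fire Sciences Lab
acres_burned = 47_800       # acreage reported burned as of this writing

total_co2_tons = CO2_TONS_PER_ACRE * acres_burned
print(f"Estimated CO2 released: {total_co2_tons:,} tons")
# Prints an estimate of roughly 1.7 million tons of CO2.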
It doesn't look like California firefighters can count on the weather to help contain the fire still burning in Yucca Valley; forecasted highs are more than 100°F and there's no rain in sight. As of Thursday afternoon, 20 percent of the fire, called the Sawtooth Complex, had been contained, but officials are concerned that the blaze could meet with other, smaller fires also triggered by Sunday's lightning and spread west into the San Bernardino National Forest. Droughts there have weakened the trees' natural defenses allowing for an onslaught of bark beetles to move in and further debilitate the forest, creating a supply of dry fuel that could keep the Sawtooth burning strong. — Erin Scottberg
Respiratory Syncytial Virus Infection
A subgroup of the myxoviruses resembling paramyxovirus causes respiratory syncytial virus (RSV) infection. RSV is the leading cause of lower respiratory tract infections in infants and young children; it's the major cause of pneumonia, tracheobronchitis, and bronchiolitis in this age-group and a suspected cause of the fatal respiratory diseases of infancy.
Causes and incidence
Antibody titers seem to indicate that few children under age 4 escape contracting some form of RSV, even if it's mild. In fact, RSV is the only viral disease that has its maximum impact during the first few months of life (incidence of RSV bronchiolitis peaks at age 2 months).
This virus creates annual epidemics that occur during the late winter and early spring in temperate climates and during the rainy season in the tropics. The organism is transmitted from person to person by respiratory secretions and has an incubation period of 4 to 5 days.
Re-infection is common, producing milder symptoms than the primary infection. School-age children, adolescents, and young adults with mild re-infections are probably the source of infection for infants and young children.
Signs and symptoms
The following are the most common symptoms of RSV. However, each baby may experience symptoms differently. Symptoms may include:
The symptoms of RSV may resemble other conditions or medical problems. Always consult your baby's physician for a diagnosis.
Rapid tests for this virus can be performed at many hospitals on fluid obtained from the nose. Listening to the chest with a stethoscope (auscultation) may reveal wheezes or other abnormal lung sounds.
Tests used in the diagnosis of RSV include:
Treatment
Among the goals of treatment are support of respiratory function, maintenance of fluid balance, and relief of symptoms.
Because RSV spreads in fluids from the nose and throat of an infected person, it's best to wash your hands after touching anyone who has either a cold or a known RSV infection. Also, it's wise not to touch your nose or eyes after contact with someone with RSV as the virus could enter your body through either of these two areas. And keep your school-age child with a cold away from an infant brother or sister until the symptoms pass.
Treatments can be given to protect infants who are at highest risk for severe illnesses if they are infected with RSV, such as those who were born prematurely or those with chronic heart and lung disease. These treatments provide temporary immunity against RSV. One treatment, palivizumab, is given as monthly intramuscular injections during the autumn months and provides protection throughout the typical RSV season. Unlike a vaccine, its protection is short-lived and has to be repeated in following years, until the child is no longer at severe risk from RSV infection.
According to researcher Dr. Lars Olson at the Karolinska Institute in Sweden, the team succeeded in creating a mouse model that mimics Parkinson's disease.
The experiments were carried out in mice, in which mitochondrial dysfunction was targeted specifically to the nerve cells that produce the transmitter substance dopamine. In the mouse model generated by the research team, a gene called TFAM is automatically deleted from the genome in dopamine nerve cells only. Without TFAM, mitochondria cannot function normally.
The so-called respiratory chain is compromised and energy production decreases severely in the dopamine cells. The new mice are born healthy to healthy but genetically modified parents and develop the disease spontaneously. Previous studies in the field have been based on researchers delivering neurotoxic substances to kill the dopamine neurons.
The new mice, however, develop the disease slowly in adulthood, like humans with Parkinson's disease, which may facilitate research aimed at finding novel medical treatments and other therapies.
'We see that the dopamine-producing nerve cells in the brain stem slowly degenerate', says Dr. Nils-Göran Larsson. 'In the microscope we can see that the mitochondria are swollen and that aggregates of a protein, probably alpha-synuclein, start to accumulate in the nerve cell bodies. Inclusions of alpha-synuclein-rich so-called Lewy bodies are typical for the human disease.' The causes of Parkinson's disease have long remained a mystery.
Genes and environment are both implicated, but recently there has been an increased focus on the roles of genetic factors. It has been found that mutations in a number of genes can lead directly to disease, while other mutations may be susceptibility factors, so that carriers have an increased risk of becoming ill. A common denominator for some of the implicated genes is their suggested role for the normal functioning of mitochondria. 'Like patients, the mice can be treated with levo-Dopa, a precursor of the lost substance dopamine', says Dr. Nils-Göran Larsson. 'The course of the disease as well as the brain changes in this mouse are more similar to Parkinson's disease than most other models.
This supports the notion that genetic risk factors are important.' 'Like in patients, the dopamine nerve cells in the new mouse model die in a specific order', says Dr. Lars Olson. 'We hope the mouse will help us understand why certain dopamine nerve cells are more sensitive than others, so that we can develop drugs that delay, or even stop, the nerve cell death.' |
Mars Phoenix Lander
Overview
A Mars lander, launched on August 4, 2007, which successfully landed on the Martian northern plains on May 25, 2008. Phoenix is the first mission in NASA's Scout Program and the sixth successful Mars lander. It is specifically designed to measure volatiles (especially water) and complex organic molecules in the arctic plains of Mars, where the Mars Odyssey orbiter has discovered evidence of ice-rich soil very near the surface.
Similar to its mythical namesake, Phoenix has risen from the ashes of an earlier version of itself – in fact, from the instruments and other hardware of two previous unsuccessful attempts to explore Mars. Phoenix uses a lander that was intended for use by 2001's Mars Surveyor lander prior to its cancellation and carries a complex suite of instruments that are improved variations of those that flew on the lost Mars Polar Lander.
In the continuing pursuit of water on Mars, the Martian poles are a good place to investigate, as water ice is found there. Phoenix landed in the icy far-northern plains of Mars near 68° north latitude, 127° west longitude. During the course of the 150-Martian-day mission, Phoenix will deploy its robotic arm (see below) and dig trenches up to half a meter (1.6 ft) deep into the layers of water ice. These layers, thought to be affected by seasonal climate changes, could contain organic compounds that are necessary for life.
Having arrived on the surface of Mars, Phoenix began to take detailed photos of its new surroundings. Imaging technology inherited from both the Pathfinder and Mars Exploration Rover missions has been implemented in Phoenix's stereo camera, located on its 2-meter (6.6-ft) mast. The camera's two stereoscopic eyes can provide a high-resolution perspective of the landing site's geology, and also provide range maps that will enable the mission's science team to choose ideal digging locations. Multi-spectral capability will enable the identification of local minerals.
To analyze soil samples collected by the robotic arm, Phoenix carries a miniature oven and a portable laboratory. Selected samples will be heated to release volatiles that can be examined for their chemical composition and other characteristics.
To update our understanding of Martian atmospheric processes, Phoenix will scan the Martian atmosphere up to 20 km (12.4 miles) in altitude, obtaining data about the formation, duration and movement of clouds, fog, and dust plumes. It will also carry temperature and pressure sensors.
To see photos taken by the Phoenix Lander, go here.
Robotic Arm (RA)
The Robotic Arm (RA) is intended to dig trenches, scoop up soil and water ice samples, and deliver these samples to the TEGA and MECA instruments (see below) for detailed chemical and geological analysis. Designed like a backhoe, the RA can operate with four degrees of freedom: (1) up and down, (2) side to side, (3) back and forth, and (4) rotate around.
The RA is 2.35 meters (just under 8 ft) long with an elbow joint in the middle, allowing the arm to trench about 0.5 m (1.6 ft) below the Martian surface, deep enough to reach where scientists believe the interface between water ice and soil lies. At the end of the RA is a moveable scoop, which includes ripper tines (sharp prongs) and serrated blades. Once icy soil is encountered, the ripper tines will be used to first tear the exposed materials, followed by applying the serrated blades to scrape the fractured soil. The scoop will then be run through the furrows to capture the fragmented samples, ensuring enough sample mass for scientific study on the lander platform.
Robotic Arm Camera (RAC)
Built for the Mars Surveyor 2001 Lander, the RAC is attached to the Robotic Arm (RA) just above the scoop. The instrument provides close-up, full-color images of (1) the Martian surface in the vicinity of the lander, (2) prospective soil and water ice samples in the trench dug by the RA, (3) verification of collected samples in the scoop prior to analysis by the MECA and TEGA instruments, and (4) the floor and side-walls of the trench to examine fine-scale texturing and layering.
By examining the color and grain size of scoop samples, scientists will better understand the nature of the soil and water-ice in the trench being dug by the RA. Additionally, floor and side-walls images of the trench may help determine the presence of any fine-scale layering that may result from changes in Martian climate.
The RAC is a box-shaped imager with a double Gauss lens system, commonly found in many 35 mm cameras, and a charge-coupled device similar to those found on many consumer digital cameras. Two lighting assemblies provide illumination of the target area. The upper assembly contains 36 blue, 18 green, and 18 red lamps and the lower assembly contains 16 blue, 8 green, and 8 red lamps. The RAC has two motors: one sets the lens focus from 11 mm to infinity and the other opens and closes a transparent dust cover. The instrument's magnification is 1:1 at closest focus, providing image resolutions of 23 microns per pixel.
Microscopy, Electrochemistry, and Conductivity Analyzer (MECA)
MECA is designed to characterize the soil of Mars much like a gardener would test the soil in his or her yard. By dissolving small amounts of soil in water, the wet chemistry lab (WCL) determines the pH, the abundance of minerals such as magnesium and sodium cations or chloride, bromide and sulfate anions, as well as the conductivity and redox potential. Looking through a microscope, MECA examines the soil grains to help determine their origin and mineralogy. Needles stuck into the soil determine the water and ice content, and the ability of both heat and water vapor to penetrate the soil.
MECA contains four single wet chemistry labs, each of which can accept one sample of martian soil. Phoenix's RA will initiate each experiment by delivering a small soil sample to a beaker, which is ready and waiting with a pre-warmed and calibrated soaking solution. Alternating soaking, stirring, and measuring, the experiment continues until the end of the day. After freezing overnight and thawing the next morning, the experiment continues with the addition of four crucibles containing solid reagents. The first contains an acid to tease out carbonates and other constituents that are better dissolved in an acidic solution. The other three crucibles contain a reagent to test for sulfate.
The optical and atomic-force microscopes complement MECA's wet chemistry experiments. With images from these microscopes, scientists will examine the fine detail structure of soil and water ice samples. Detection of hydrous and clay minerals by these microscopes may indicate past liquid water in the martian arctic. The optical microscope will have a resolution of 4 microns per pixel, allowing detection of particles ranging from about 10 micrometers up to the size of the field of view (about 1 mm by 2 mm). Red, green, blue, and ultraviolet LEDs will illuminate samples in differing color combinations to enhance the soil and water-ice structure and texture at these scales. The atomic force microscope will provide sample images down to 10 nanometers – the smallest scale ever examined on Mars. Using its sensors, the AFM creates a very small-scale topographic map showing the detailed structure of soil and ice grains.
Prior to observation by each of the microscopes, samples are delivered by the RA to a wheel containing sixty-nine different substrates. The substrates are designed to distinguish between different adhesion mechanisms and include magnets, sticky polymers, and "buckets" for bulk sampling. The wheel is rotated allowing different substrate-sample interactions to be examined by the microscopes.
MECA's final instrument, the thermal and electrical conductivity probe, will be attached at the "knuckle" of the RA. The probe consists of three small spikes that will be inserted into the ends of an excavated trench. In addition to measuring temperature, the probe will measure thermal properties of the soil that affect how heat is transferred, providing scientists with better understanding of surface and atmospheric interactions. Using the same spikes, the electrical conductivity will be measured to indicate any transient wetness that might result from the excavation. Most likely, the thermal measurement will reflect ice content and the electrical, unfrozen water content.
Surface Stereo Imager (SSI)
SSI will serve as Phoenix's eyes for the mission, providing high-resolution, stereo, panoramic images of the Martian arctic. Using an advanced optical system, SSI will survey the arctic landing site for geological context, provide range maps in support of digging operations, and make atmospheric dust and cloud measurements.
Situated on top of an extended mast, SSI will provide images at a height two meters above the ground, roughly the height of a tall person. SSI simulates the human eye with its two-lens optical system that will give three-dimensional views of the arctic plains. The instrument will also simulate the resolution of human eyesight using a charge-coupled device that produces high-density 1024 × 1024 pixel images. But SSI exceeds the capabilities of the human eye by using optical and infrared filters, allowing multispectral imaging at 12 wavelengths of geological and atmospheric interest.
Looking downward, stereo data from SSI will support robotic arm operations by producing digital elevation models of the surrounding terrain. With these data, scientists and engineers will have three-dimensional virtual views of the digging area. Along with data from the TEGA and the MECA, scientists will use the three-dimensional views to better understand the geomorphology and mineralogy of the site. Engineers will also use these three-dimensional views to command the trenching operations of the robotic arm. SSI will also be used to provide multispectral images of samples delivered to the lander deck to support results from the other scientific instruments.
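For intuition about how a pair of stereo images yields range information, here is a minimal sketch of the standard stereo-triangulation relation; the focal length and eye separation below are illustrative assumptions, not SSI's actual specifications.

```python
# Standard stereo triangulation: range = focal_length * baseline / disparity.
# The parameter values are illustrative assumptions, not SSI's real numbers.
def stereo_range_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a feature seen in both images, from its pixel disparity."""
    return focal_length_px * baseline_m / disparity_px

# Example: with an assumed 1500-pixel focal length and 0.15 m between the two "eyes",
# a feature shifted by 90 pixels between the images would be about 2.5 m away.
print(f"{stereo_range_m(1500.0, 0.15, 90.0):.1f} m")
```

Nearby features produce large disparities and precise ranges; the precision falls off with distance, which is why such range maps are most useful for planning digs within the arm's reach.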
Looking upward, SSI will be used to estimate the optical properties of the Martian atmosphere around the landing site. Using narrow-band imaging of the Sun, the imager will estimate the density of atmospheric dust, the optical depth of airborne aerosols, and the abundance of atmospheric water vapor. SSI will also look at the lander itself to assess the amount of wind-blown dust deposited on the spacecraft. Deposition rates provide important information for scientists to understand erosional and atmospheric processes, but are also critical for engineers who are concerned about the amount of deposited dust on the solar panels and associated power degradation.
Thermal and Evolved Gas Analyzer (TEGA)
TEGA is a combination high-temperature furnace and mass spectrometer instrument that scientists will use to analyze Martian ice and soil samples. The robotic arm will deliver samples to a hopper designed to feed a small amount of soil and ice into eight tiny ovens about the size of an ink cartridge in a ballpoint pen. Each of these ovens will be used only once to analyze eight unique ice and soil samples.
Once a sample is successfully received and sealed in an oven, the temperature is slowly increased at a constant rate, and the power required for heating is carefully and continuously monitored. This process, called scanning calorimetry, shows the transitions from solid to liquid to gas of the different materials in the sample: important information needed by scientists to understand the chemical character of the soil and ice.
As the temperature of the furnace increases up to 1000°C (1800°F), the ice and other volatile materials in the sample are vaporized into a stream of gases. These are called evolved gases and are transported via an inert carrier to a mass spectrometer, a device used to measure the mass and concentrations of specific molecules and atoms in a sample. The mass spectrometer is sensitive to detection levels down to 10 parts per billion, a level that may detect minute quantities of organic molecules potentially existing in the ice and soil.
With these precise measurement capabilities, scientists will be able to determine ratios of various isotopes of hydrogen, oxygen, carbon, and nitrogen, providing clues to origin of the volatile molecules, and possibly, biological processes that occurred in the past.
Meteorological Station (MET)
Throughout the course of Phoenix surface operations, MET will record the daily weather of the Martian northern plains using temperature and pressure sensors, as well as a light detection and ranging (LIDAR) instrument. With these instruments, MET will play an important role by providing information on the current state of the polar atmosphere and how water is cycled between the solid and gas phases in the Martian arctic.
The MET's lidar is an instrument that operates on the same basic principle as radar, using powerful laser light pulses rather than radio waves. The lidar transmits light vertically into the atmosphere, which is reflected off dust and ice particles. These reflected light pulses and their time of return to the lidar instrument are analyzed, revealing information about the size of atmospheric particles and their location.
From this distribution of dust and ice particles, scientists can make important inferences about how energy flows within the polar atmosphere, important information for understanding martian weather. These particles also reveal the formation, duration, and movement of clouds, fog, and dust plumes, improving scientific understanding of Mars' atmospheric processes.
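The ranging step itself is simple in principle. Below is a minimal sketch of the timing relation (not Phoenix's actual data pipeline); the sample return time is invented for illustration.

```python
# Lidar ranging principle: height of the reflecting layer = speed_of_light * round_trip_time / 2.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def layer_height_m(round_trip_time_s: float) -> float:
    """Height of a reflecting dust or ice layer, given the pulse's round-trip travel time."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A pulse returning after about 67 microseconds corresponds to a layer roughly 10 km up.
print(f"{layer_height_m(67e-6) / 1000:.1f} km")
```

The strength of each return carries the rest of the information: denser concentrations of dust or ice reflect more light, so the profile of return intensity versus height maps out where the particles are.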
The very cold temperatures of the martian arctic will be measured with thin wire thermocouples, a technology that has been used successfully on meteorological stations for both the Viking and Pathfinder missions. In a thermocouple, electric current flows in a closed circuit of two dissimilar metals (chromel and constantan in the case of the MET) when one of the two junctions is at a different temperature. Three of these thermocouple sensors will be located on a 1.2 meter vertical mast to provide a profile of how the temperature changes with height near the surface.
Atmospheric pressure on Mars is very low and requires a sensitive sensor for measurement. Pressure sensors similar to those used on the Viking and Pathfinder missions will be part of the MET.
Landing site
The Phoenix Lander came to rest in the northern polar region of Vastitas Borealis at 68.2°N 234.3°E. Images from the spacecraft revealed a flat landscape with a strange "quilted" appearance. The polygonal shapes, defined by trough-like boundaries, had been seen from orbit and were likely created by the repeated expansion and contraction of subsurface ice.
Did Phoenix see life on Mars?
Phoenix was not a life-detection mission, as Viking was. However, it did apparently detect movement in the martian soil which might, just possibly, be due to microorganisms. Below is a YouTube video showing what have been called "unidentified moving objects." It is not clear what these objects are or why they are moving.
External site
Phoenix lander home page (University of Arizona)
Source: NASA / University of Arizona
What Is Angina?
The term "angina" usually refers to "angina pectoris," or chest pain that occurs when the heart doesn't get enough oxygen. This lack of oxygen is called myocardial ischemia. It is generally caused by a buildup of plaque that partially clogs the coronary arteries, reducing blood flow to the heart. The pain can be severe or mild, and often follows exertion or stress.
There are three types of angina:
- Stable angina - The most common type, it occurs when the heart is exerted and usually resolves quickly after resting or taking medication.
- Unstable angina - Unlike stable angina, this can occur without physical exertion and is not relieved with rest or medication. Emergency treatment is required.
- Variant angina (also known as Prinzmetal's angina) - A rare condition, this usually occurs at rest, but is relieved by medication.
Your physician may suspect angina if you have any or all of the following symptoms:
- A tightness or heaviness in the chest
- Difficulty breathing
- Pressure, squeezing or burning in the chest
- Discomfort that spreads to the arm, neck, jaw or back
- Numbness or tingling in the shoulders, arms or wrists
Not all chest pain or discomfort is angina. Similar symptoms may occur with other conditions, including indigestion, panic attack or heart attack.
Chest pain that lasts longer than a few minutes and is not relieved by rest or angina medicine may mean you are having a heart attack. Don't wait. Call 911 to get emergency help right away.
Your physician may recommend any or all of the following diagnostic tests:
- Blood tests
- Fasting lipoprotein profile checks the cholesterol level
- Fasting glucose test checks blood glucose level
- C-reactive protein (CRP) test can indicate an inflammation (a risk factor for coronary artery disease)
- Cardiac enzyme tests look for markers such as troponin, which is released with severe ischemia or injury.
- A routine check for low hemoglobin (the part of a red blood cell that carries oxygen throughout the body) can help rule out other possible causes for the chest pain.
- Electrocardiogram (EKG or ECG) - This test involves attaching (with adhesive material) small electrodes to the arm, leg and chest to measure the rate and regularity of a heartbeat and to check for heart muscle damage.
- Exercise stress test - The patient is asked to perform exercise, such as walking on a treadmill, and EKG and blood pressure readings are taken before, during and after exercise to measure changes in heartbeat and blood pressure.
- Cardiac catheterization - A thin flexible tube (catheter) is passed through an artery in the groin or arm into the coronary arteries, to examine the coronary arteries and monitor blood flow.
If angina is diagnosed, your doctor may give you medications including:
- Nitroglycerin - relaxes blood vessels, allowing more blood to flow to the heart
- Beta blocker - decreases the heart's need for blood and oxygen by slowing the heart rate and lowering blood pressure; decreases abnormal heart rhythms
- Calcium channel blocker - relaxes blood vessels, allowing more blood to flow to the heart
- ACE inhibitor - reduces blood pressure, reducing strain on the heart
Your physician may also recommend lifestyle changes to reduce your chance of having angina attacks, including one or all of the following:
- Quitting smoking
- Losing weight
- Eating a heart-healthy diet to prevent or reduce high blood pressure and high cholesterol (medications may also be prescribed)
- Practicing relaxation and stress management techniques
- Avoiding extreme temperatures, which can bring on angina attacks
- Avoiding strenuous activities
MidMichigan offers cardiac rehabilitation services, including supervised, prescribed aerobic training on exercise machines, to help you safely build a stronger cardiovascular system.
If medicines and lifestyle changes do not control your angina, your physician may recommend additional measures, including surgery. |
The Warnabi, Warnavi, Warnahi, Wranovi, Wranefzi, Wrani, Varnes, or Warnower were a West Slavic tribe of the Obotrite confederation in the ninth through eleventh centuries. They were one of the minor tribes of the confederation living in the Billung Mark on the eastern frontier of the Holy Roman Empire. They were first mentioned by Adam of Bremen.
Etymologically their name is related to the river, the Warnow (also Warnof, Wrana, or Wranava), along which they settled in the region of Mecklenburg. It may have meant "crow river" or "black river" in their Slavic language, or been derived from the name of the Warni (from earlier warjan), a Germanic people who had previously lived in the same area. The name Warnabi may be a combination of Warni and Abodriti.
In the second half of the ninth century the chief town of the Warnabi was on an island in Lake Sternberg at the site of the castle of Gross Raden. The centre of their culture was near the present towns Sternberg and Malchow. From 1171, 1185, and 1186 there are references to the land of the Warnabi: the Warnowe. In 1189 it is called the Warnonwe and by 1222 this was called the Wornawe.
Observation is at the base of all intervention. Thanks to the observations of the educators (usually in daycares), questions emerge, and these questions motivate them to ask for professional help.
Following the observations, we screen for signs of delays in the different areas of development.
The developmental assessment provides information on the child's overall development, specifying his developmental level in terms of achieved versus unachieved skills. It can be used:
- To determine appropriate educational objectives.
- To develop an individualized educational program.
- To monitor and document your child's progress (evaluating periodically, e.g., every 6 months or yearly).
Specialized interventions are recommended in order to achieve the educational objectives chosen. Support may be offered to educators in order to implement the methods in the environment and to coach them while they apply the recommendations. If needed, workshops are also offered.
In order to evaluate whether or not the recommended interventions are providing the desired results, it is important to assess the child's progress periodically, so that the intervention methods or learning objectives can be modified when required. |
Students will be able to review letter combinations and sounds of the Italian alphabet prior to assessment.
Standards: Communication, Comparisons
SL1: Participate effectively in a range of conversations demonstrating the ability to understand the ideas of others and clearly express your own.
SL2: Evaluate and use information from diverse media to understand and communicate.
Materials Used: anticipatory set, guided notes, guided and independent practice, listening script
Estimated Time: 45 minutes
1. Students review Italian sounds and letters.
2. Students self-evaluate strengths and weaknesses prior to assessment. |
Designing Experiments Using the Scientific Method
How do the scientists know what they know? When it comes to gathering information, scientists usually rely on the scientific method.
The scientific method is a plan that is followed in performing a scientific experiment and writing up the results. It is not a set of instructions for just one experiment, nor was it designed by just one person. The scientific method has evolved over time after many scientists performed experiments and wanted to communicate their results to other scientists. The scientific method allows experiments to be duplicated and results to be communicated uniformly.
As you're about to see, the format of the scientific method is very logical. Really, many people solve problems and answer questions every day in the same way that experiments are designed.
When preparing to do research, a scientist must form a hypothesis, which is an educated guess about a particular problem or idea, and then work to support it and prove that it is correct, or refute it and prove that it is wrong.
Whether the scientist is right or wrong is not as important as whether he or she sets up an experiment that can be repeated by other scientists, who expect to reach the same conclusion.
The value of variables
Experiments must have the ability to be duplicated because the "answers" the scientist comes up with (whether it supports or refutes the original hypothesis) cannot become part of the knowledge base unless other scientists can perform the exact same experiment(s) and achieve the same result; otherwise, the experiment is useless.
"Why is it useless," you ask? Well, there are things called variables. Variables vary: They change, they differ, and they are not the same. A well-designed experiment needs to have an independent variable and a dependent variable. The independent variable is what the scientist manipulates in the experiment. The dependent variable changes based on how the independent variable is manipulated. Therefore, the dependent variable provides the data for the experiment.
Experiments must contain the following steps to be considered "good science."
1. A scientist must keep track of the information by recording the data.
The data should be presented visually, if possible, such as through a graph or table.
2. A control must be used.
That way, results can be compared to something.
3. Conclusions must be drawn from the results.
4. Errors must be reported.
Suppose that you wonder whether you can run a marathon faster when you eat pasta the night before or when you drink coffee the morning of the race. Your hunch is that loading up on pasta will give you the energy to run faster the next day. A proper hypothesis would be something like, "The time it takes to run a marathon is improved by consuming large quantities of carbohydrates pre-race." The independent variable is the consumption of pasta, and the dependent variable is how fast you run the race.
Think of it this way: How fast you run depends on the pasta, so how fast you run is the dependent variable. Now, if you eat several plates of spaghetti the night before you race, but then get up the next morning and drink two cups of coffee before you head to the start line, your experiment is useless.
Why is it useless? By drinking the coffee, you introduce a second independent variable, so you will not know whether the faster race time is due to the pasta or the coffee. Experiments can have only one independent variable. If you want to know the effect of caffeine (or extra sleep or improved training) on your race time, you would have to design a second (or third or fourth) experiment.
Checking your stats
Of course these experiments would have to be performed many times by many different runners to demonstrate any valid statistical significance. Statistical significance is a mathematical measure of the validity of an experiment. If an experiment is performed repeatedly and the results are within a narrow margin, the results are said to be significant when measured using the branch of mathematics called statistics. If results are all over the board, they are not that significant because one definite conclusion cannot be drawn from the data.
Tracking the information
Once an experiment is designed properly, you can begin keeping track of the information you gather through the experiment. In an experiment testing whether eating pasta the night before a marathon improves the running time, suppose that you eat a plate of noodles the night before and then drink only water the morning of the race. You could record your times at each mile along the 26-mile route to keep track of information. Then, for the next marathon you run (boy, you must be in great shape), you eat only meat the night before the race, and you down three espressos on race morning. Again, you would record your times at each mile along the route.
What do you do with the information you gather during experiments? Well, you can graph it for a visual comparison of results from two or more experiments. The independent variable from each experiment is plotted on the x-axis (the one that runs horizontally), and the dependent variable is plotted on the y-axis (the one that runs vertically). In experiments comparing the time it took to run a marathon after eating pasta the night before, getting extra sleep, drinking coffee, or whatever other independent variable you may want to try, miles 1 to 26 would be labeled up the y-axis. The factor that does not change in all the experiments is that a marathon is 26 miles long. The time it took to reach each mile would be plotted along the x-axis. This data might vary based on what the runner changed before the race, such as diet, sleep, or training. You can plot several independent variables on the same graph by using different colors or different styles of lines. Your graph might look something like the one in Figure 1.
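If you wanted to produce such a graph yourself, a minimal sketch (using Python with matplotlib, and invented split times) might look like the following; the pace numbers are assumptions for illustration only.

```python
# Sketch of the graph described above: mile markers on the y-axis, elapsed time on
# the x-axis, one line per pre-race condition. All split times are invented.
import matplotlib.pyplot as plt

miles = list(range(1, 27))
pasta_times   = [9.0 * m for m in miles]   # hypothetical ~9.0 min/mile after pasta
control_times = [9.6 * m for m in miles]   # hypothetical ~9.6 min/mile control pace

plt.plot(pasta_times, miles, label="Pasta the night before")
plt.plot(control_times, miles, linestyle="--", label="Control (no pasta, no coffee)")
plt.xlabel("Elapsed time (minutes)")
plt.ylabel("Mile marker")
plt.title("Marathon progress under different pre-race conditions")
plt.legend()
plt.show()
```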
Taking control of your experiment
How would you know if your race times were improved either by eating pasta or drinking coffee? You would have to run a marathon without eating pasta the night before or drinking coffee the morning of the race. (Exhausted yet?) This marathon would be your control. A control is a set of base values against which you compare the data from your experiments. Otherwise, you would have no idea if your results were better, worse, or the same.
So, maybe it took you less time to reach each mile along the marathon route after the night of pasta eating, but your race times after drinking the coffee matched those of the control. That would support your initial hypothesis, but it would refute your second hypothesis. There's nothing wrong with being wrong, as long as the information is useful. Knowing what doesn't work is just as important as knowing what does.
Your conclusion to these two experiments would be something like: "Consuming pasta the night before a 26-mile marathon improves race time, but consuming caffeine has no effect."
However, in scientific experiments you have to confess your mistakes. This confession lets other scientists know what could be affecting your results. Then, if they choose to repeat the experiment, they can correct for those mistakes and provide additional beneficial information to the knowledge base. In the pasta-caffeine-race experiment, if you had consumed the pasta the night before and then the caffeine the morning of the race, your major error would be that of including more than one independent variable.
Another error would be having too small of a sample. A more accurate determination could be made by recording the race times at each mile for many runners under the same conditions (i.e., having them eat the same amount of pasta the night before a race or consuming the same amount of caffeine the morning of a race). Of course, their individual control times without those variables would have to be taken into account. Science. It's all in the details. |
We all know how important bees are for pollination, and without them we would be in real trouble.
Scientists in Australia have been trying to determine what is causing “colony collapse disorder” (CCD), and they may have found a reason.
This video discusses some of the reasons behind CCD:
CCD is when bee hives have only a few young bees and a queen bee inside. There are no worker bees, nor are there any worker bee bodies. Without the worker bees, the entire colony collapses, with no evidence of why this has happened.
How external conditions can trigger CCD
Studies on external conditions, such as pests, pesticides, and food quality, have been conducted before without finding any explanation as to why CCD occurs.
Most researchers believed that a new disease was infecting the hives.
The team of Australian scientists exposed worker bees to stressors, and when they did, the workers often died prematurely. This then triggered the younger bees to leave the hive to forage; the team monitored the young forager bees with radio tag tracking to find out the consequences. What they found was that young bees completed fewer foraging trips in their lives and had a higher risk of death on their first time out.
Lead scientist Dr. Barron said: “Bees who start to forage when they’ve been adults for less than two weeks are just not good at it. They take longer, and they complete fewer trips. Our model suggests bees are very good at buffering against stress, but there’s a tipping point, and then you see this rapid transition into complete societal failure.”
This study shows the processes of rapid depopulation of a colony, and is the first study to explain at least one pathway for how CCD happens. Now, they just have to find ways to prevent this. |
The Opposable Thumb As A Human Adaptation : Thumb-Taping Lab
In this biology activity, students will first brainstorm and observe how humans use their hands. They will then conduct a short experiment to determine the importance of the opposable thumb to humans. This experiment involves students taping their thumb to render it useless while they proceed to do an everyday activity. In the end students will have the opportunity to reflect on why the opposable thumb is an adaptation important to humans.
Context for Use
Resource Type: Activities:Classroom Activity
Grade Level: High School (9-12)
Description and Teaching Materials
1. Divide students into small groups. They should brainstorm ideas about how humans use their hands versus other animals and especially other primates.
2. Discuss as a class their ideas. Create a class discussion summary on the board or overhead. The term "fully opposable thumbs" should be introduced if it hasn't already.
3. Next they will test a question. This will also allow them to experience the use of their own thumbs, possibly something they never think about, as well as appreciate the shape and design of their own hands.
Question: "Can I tie my shoes faster with or without my thumbs?"
Students should write a lab report that will contain information similar to the following.
TITLE / QUESTION: "Can I tie my shoes faster with or without my thumbs?"
MATERIALS: masking tape, timer or clock, shoe with laces
1. Put students into groups of two. Collect baseline data by having each student tie their shoe while being timed by a partner. This should be done three times to get an average time. Students should put their data in a data table in their lab books.
2. Next, using masking tape, tape the thumb down on each hand. For best results, a partner can tape it to the pointer finger to render it useless.
3. Repeat step 1 with the thumbs taped. (NOTE: It is important to follow the same procedure as done in step 1 so that data is consistent.)
4. Have students average their data.
5. Contribute data averages to a class chart.
6. After viewing/discussing the class data, write a conclusion.
1. What is special about the hand of humans?
2. The design of the human hand is an adaptation. How does our hand's design allow us to do so many tasks such as writing, texting, and tying shoes? |
Mother knows best – even how to improve crop yield
30 July 2007
Scientists at the University of Oxford have paved the way for bigger and better quality maize crops by identifying the genetic processes that determine seed development.
Plant scientists have known for some time that genes from the maternal plant control seed development, but they have not known quite how. The Oxford research, supported by the Biotechnology & Biological Sciences Research Council (BBSRC) and highlighted in the new issue of BBSRC Business, has found at least part of the answer.
Working in collaboration with researchers in Germany and France, Professor Hugh Dickinson’s team found that only the maternal copy of a key gene responsible for delivering nutrients is active. The copy derived from the paternal plant is switched off. This gene encodes a potential signalling molecule found in the endosperm – a placenta-like layer that nourishes the developing grain. The molecule is involved in ‘calling’ for nutrients from the mother plant, and so triggers an increased flow of resources. Similar mechanisms can almost certainly be expected in other cereals, and with cereal grain being a staple food across the world, the potential to harness this science to improve yields is clear.
Prof. Dickinson explains: “By understanding the complex level of gene control in the developing grain, we have opened up opportunities in improving crop yield.
“The knowledge and molecular tools needed to harness these natural genetic processes are now available to plant breeders and could help them improve commercial varieties further. For example, they can better understand how to successfully cross-breed to produce higher quality crops. The cereal grain is a staple food of the world’s population: with the changing climate and growing population, the need for sustainable agriculture is increasingly pressing.”
The mechanism used to switch off paternal genes ensures supremacy of maternally-derived genes. This process is known as ‘imprinting’ and is achieved mainly through ‘methylation’ – a naturally occurring chemical change in the DNA. A very similar mechanism takes place in animal embryos. However, unlike the animal imprinting systems where genes are often grouped in the chromosomal DNA, in maize imprinted genes are ‘solitary’ and independently regulated.
Notes to editors
This project was a collaboration between the University of Oxford’s Department of Plant Sciences, researchers at the University of Hamburg and Biogemma, a French biotech company.
It was funded initially through the EC Framework Programme V, and then under BBSRC’s initiative on Integrated Epigenetics.
The Biotechnology and Biological Sciences Research Council (BBSRC) is the UK funding agency for research in the life sciences. Sponsored by Government, BBSRC annually invests around £380 million in a wide range of research that makes a significant contribution to the quality of life for UK citizens and supports a number of important industrial stakeholders including the agriculture, food, chemical, healthcare and pharmaceutical sectors. http://www.bbsrc.ac.uk
Matt Goode, Head of External Relations
tel: 01793 413299
fax: 01793 413382
Tracey Jewitt, Media Officer
tel: 01793 414694
fax: 01793 413382 |
The latter half of the nineteenth century witnessed great scientific advances in a number of fields, including biology. Charles Darwin’s On the Origins of the Species by Means of Natural Selection and Gregor Mendel’s experiments shed light on the hereditary transmission of characteristics, prompting consideration of how to use this knowledge to aid humanity. In 1883, the English scientist Sir Francis Galton was the first to coin the phrase ‘‘eugenics,’’ which means ‘‘good in birth’’ in Greek. Eugenics was concerned with applying principles of animal husbandry to humans, encouraging positive genetic traits and eliminating negative genetic traits by controlling who could breed and pass on their genes.
The scientific approach to human breeding found particular resonance in the Progressive movement in the United States. That movement was concerned with improving the conditions of the worst-off in society. Eugenics offered the promise that through the application of scientific principles, future generations of such unfortunates could avoid being born, improving the lot of all society. The rudimentary understanding of genetics at the time conceptualized such ills as pauperism, criminality, insanity, immorality, and low intelligence as negative hereditary characteristics that could be ‘‘bred out’’ of the bloodline. Progressive eugenicists sought to ensure that those possessing these characteristics would decline as a proportion of the population by encouraging or mandating the sterilization of people who displayed those characteristics.
While the ideas behind eugenics found favor across the world, the United States led the way in their practical application with Indiana becoming the first state to pass a law enabling the sterilization of persons for eugenic purposes in 1907. A further twenty-two states passed similar laws by 1926, and by 1940, thirty states had passed eugenic sterilization laws, with California and Virginia being particularly strong proponents. Most states provided for involuntary eugenic sterilization, whereby the state could forcibly sterilize a person found to be unfit to have children, because such offspring would be a similar burden on society.
The Supreme Court ruled involuntary eugenic sterilization laws constitutional in the case of Buck v. Bell (1927). The eight-to-one decision was announced with an opinion by Oliver Wendell Holmes declaring, ‘‘It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind . . . . Three generations of imbeciles are enough.’’ In Skinner v. Oklahoma (1942), the Supreme Court limited the use of involuntary eugenic sterilization for habitual criminals.
For a number of years eugenics was widely accepted; it was taught in many of the nation’s colleges, eugenicists took part in state fairs across the nation and a number of famous people supported eugenics, including Presidents Theodore Roosevelt and Calvin Coolidge; John Maynard Keynes, the noted economist; and Margaret Sanger, the founder of Planned Parenthood. The popularity of eugenics declined in the late 1930s, but involuntary eugenic sterilizations continued in the United States until 1979, by which time over 60,000 Americans had been sterilized.
The concern for the gene pool in the United States that was expressed through eugenic sterilization laws can also be seen in its miscegenation laws and the curtailment of immigration from Southern and Eastern Europe (whose immigrants were seen as particularly degenerate and a threat to future generations of Americans) in the Immigration Act of 1924.
The American eugenic sterilization laws formed the basis for similar laws across the world, starting in the Swiss canton of Vaud in 1928. Denmark, Sweden, Finland, Belgium, Austria, Norway, and Germany were among those countries to embrace eugenic sterilization laws a way of dealing with hereditary defects and controlling the growth of ethnic minorities. Despite the United Kingdom’s early association with the eugenics movement, political opposition prevented the adoption of eugenic sterilization laws.
Eugenic sterilization was carried out most vigorously in Nazi Germany. Adolf Hitler cited the United States and its laws aimed at genetic purity as an example for the world in his book Mein Kampf, and there was much interaction between eugenicists in the United States and Germany until the late 1930s. Nazi Germany sterilized hundreds of thousands of people between 1933 and 1945. Between 1939 and 1941, the logic of eugenic sterilization was carried to its conclusion and over 70,000 people deemed to be a burden on the state were forcibly euthanized.
Despite the association of eugenics and eugenic sterilization with the Nazi regime, the United States and much of Northern Europe continued to carry out eugenic sterilizations for the remainder of the twentieth century, although many countries repealed these laws by the 1980s and 1990s. Japan did not begin its program of eugenic sterilization until after World War II, starting in 1948 and continuing until 1996.
State-mandated eugenic sterilization is no longer a current concern in the field of American civil rights and civil liberties. Many states have apologized for their prior actions and a return to eugenic sterilization programs seems unlikely. Instead, the concern is more focused on similar results being achieved through coercion.
One fear is that advances in DNA testing raise the possibility that genetic defects could be accurately identified and that information used to affect decisions about reproduction to ensure ‘‘designer babies.’’ Fetuses possessing unfavorable genetic material could be aborted, or in vitro fertilization techniques used to ensure that only children with the couple’s favored genetic characteristics are born.
State coercion is seen as a bigger concern for the reproductive freedom of individuals. Some have called for the state to restrict access to welfare payments for those unwilling to be sterilized or have other long-term birth control procedures such as implants. Others see the existence of federally funded voluntary sterilizations as promoting a program of eugenic sterilization. These sterilizations, in addition to those provided by a number of charities, are performed on the poor with disproportionately large numbers from ethnic minorities. The argument is that by funding sterilization instead of other means of birth control the state encourages individuals to undergo sterilization simply because they cannot afford other options.
GAVIN J. REDDICK
Dr. Martin Luther King had a dream for the children of our nation. Here are some ideas for studying this 20th-century hero in our 21st-century classrooms.
- All our students need to learn how to do online research effectively. Make it easier for younger students by starting them out at a safe list of links on Dr. Martin Luther King. As a class or in small groups, develop KWL charts for Dr. King. Then send students to the list of links. Have students click to the articles, skim to determine whether they can find the answers they seek, and return to the list. Have individuals or groups of students report back to the class with the information they’ve learned. For older students, add the requirement to cite the sources correctly.
- Another great place for online research on Dr. King is The King Center. This is an excellent multimedia website. Explore it together as a class or have students visit at the computer lab and search for the answers to questions developed with KWL charts.
- For older students, use the website of Stanford’s Martin Luther King, Jr. Institute in the same way.
An email exchange
Third grade classes in Birmingham, Alabama, and in Kent, Washington, used an email exchange to explore the differences and similarities between their classrooms, as well as a study on Dr. King and civil rights. The Seattle Times shares their email archive. You can study this exchange in your classroom, and use it as inspiration to plan your own email exchange with classrooms in other places or circumstances.
In addition to seeking information, we now use the internet as a primary source of entertainment and learning in a variety of media. Use some of these online resources to deepen students’ understanding of Dr. King:
- National Parks Service Virtual Tour of Dr. King’s childhood home.
- Join a visiting class by video and use the same lesson plans and materials they did.
- Listen to Dr. King’s famous speech, I Have a Dream.
- Make a virtual visit to the Martin Luther King Jr. Memorial.
- Learn Trout Fishing In America's song, Martin Luther King & Rosa Parks, from their album, Big Round World. The song grew out of a songwriting workshop the duo conducted at an elementary school in California. The kids wanted to write a song about celebrities, and these were the celebrities they chose. Warms your heart, doesn't it? Download Big Round World at Amazon to have the whole album, or you can download it for free from FreshPlans by gracious permission of Trout Fishing.
Create digital media
Use a simple timeline of Dr. King’s life as a starting point, and create a class project with the software you have available. A PowerPoint presentation, a video of a dramatic presentation of the timeline’s events, or a class book created with MSWord or online storytelling software will give students lots of practice in using the computers as well as cementing learning about Dr. King. |
Kids start understanding prejudice by the time they’re three years old. They can distinguish between physical traits—hair color, height, weight, etc.—even earlier. But by the time children enter preschool, they can already tell how certain characteristics, like skin color or gender, affect how people see them and their peers.
As kids get older, this can lead to intolerance and discrimination in schools. A California Student Survey found that nearly one-fourth of students across grades report being harassed or bullied on school property because of their race, ethnicity, gender, religion, sexual orientation, or disability. A survey in the UK found that 75 percent of girls aged 11 to 21 feel that sexism affects their confidence and goals. According to the Gay Lesbian and Straight Education Network, 90 percent of LGBT youth report being verbally harassed at school, 44 percent physically harassed, and 22 percent physically assaulted.
Schools and other organizations have struggled to find an effective way to confront diversity issues. Although research has shown that talking about race, gender, and sexuality decreases prejudice—and avoiding those conversations encourages stereotyping—many people remain skeptical that diversity education can actually work.
My first experience with diversity education was with a program called Anytown, a week-long camp for teenagers that provides workshops and activities about prejudice and discrimination. The experiences I had there convinced me that, when it's done the right way, diversity education can work.
Anytown currently operates in more than 20 cities across the U.S. and has existed in my hometown of Tampa, Florida, for two decades. Its mission is to “empower diverse groups of young people to create more inclusive and just schools and communities, where all individuals are treated with respect and understanding.”
“I think youth as well as adults don’t realize how many discriminatory messages infiltrate our everyday lives,” says Jessica Estevez, the Tampa program’s director. “We want our students to leave with the knowledge of what happens when prejudices go on unchecked, when we choose to interact based on our stereotypes and we create systems that discriminate [against] whole groups of people.”
“Empathy and respect is developed through genuine dialogue about these issues,” says Estevez. “But there has to be a safety created in the space that respects each person’s perspective. ”
Research also shows that the more meaningful, face-to-face contact people have with other racial groups, the less likely they are to be prejudiced. That's why Anytown is residential, requiring participants to eat, sleep, and shower in the same space for a week.
“I’ve had students tell me, ‘Before Anytown, I would have never talked to so and so and thought they would have shared any kind of experience with me,’” says Estevez. “I think the residential experience really allows those kinds of changes to happen.”
But even in such a positive environment, diversity workshops can become hostile when the focus shifts from sharing experiences to dictating what people should believe and how they should behave.
“The way we’ve pitched diversity in the past was all about what not to say, how not to discriminate,” says DeEtta Jones, a senior member of the consulting team for Diversity Best Practices. “But it shouldn’t be about learning exactly what to say and what not to say. The goal is to put people in a learning space, not a scary place, and make everyone feel that this an exploratory, energizing discussion.” |
Posted: February 11, 2010
For the first time, 'ion traps' are used to measure super heavy elements
(Nanowerk News) Besides the 92 elements that occur naturally, scientists were able to create 20 additional chemical elements, six of which were discovered at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt. These new elements were produced artificially with particle accelerators and are all very short-lived: they decay in a matter of a split second. However, scientists predict the existence of even heavier elements with an extreme longevity, which would only decay after years. These elements form an island of stability.
The picture shows the Penning trap, which is part of the Shiptrap experiment. A magnetic field parallel to the tube forces the arriving ions onto a spiral course inside the tube. The ions’ spiraling frequency is used to directly calculate their atomic mass.
An international team of scientists headed by Michael Block was able to trap atoms of the element 102, nobelium, in an ion trap. This is the first time that a so-called super heavy element has been trapped. Trapping the element allowed the research team to measure the atomic mass of nobelium with unprecedented accuracy. The atomic mass is one of the most essential characteristics of an atom. It is used to calculate the atom’s binding energy, which is what keeps the atom together. The binding energy determines the stability of an atom. With the help of the new measuring apparatus, scientists will be able to identify long-lived elements on the so-called island of stability that can no longer be identified by their radioactive decay. The island of stability is predicted to be located in the vicinity of elements 114 to 120.
"Precisely measuring the mass of nobelium with our Shiptrap device was a successful first step. Now, our goal is to improve the measuring apparatus so that we can extend our method to heavier and heavier elements and, one day, may reach the island of stability," says Michael Block, head of the research team at the GSI Helmholtz Centre.
For their measurements, Michael Block and his team built a highly complex apparatus, the ion trap “Shiptrap”, and combined it with “Ship”, the velocity filter which was already used in the discovery of six short-lived elements at GSI. To produce nobelium, the research team used the GSI accelerator to fire calcium ions onto a lead foil. With the help of Ship, they then separated the freshly produced nobelium from the projectile atoms. Inside the Shiptrap apparatus, the nobelium was first decelerated in a gas-filled cell, then the slow ions were trapped in a so-called Penning trap.
Held inside the trap by electric and magnetic fields, the nobelium ion spun on a minuscule spiral course at a specific frequency. This frequency was used to calculate the atomic mass. With an uncertainty of merely 0.000005 percent, this new technique allows the atomic mass and binding energy to be determined with unprecedented precision and, for the first time, directly, without the help of theoretical assumptions.
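For context (the article does not spell out the relation involved): in a Penning trap the quantity being measured is essentially the ion's cyclotron frequency, which for an ion of charge q and mass m in a magnetic field B is f_c = q·B / (2π·m). Once q and B are known, measuring f_c therefore gives the mass m directly.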
The experiment was a collaboration between GSI, the Max-Planck-Institut für Kernphysik Heidelberg, the Universities Gießen, Greifswald, Heidelberg, Mainz, Munich, Padua (Italy), Jyväskylä (Finland) and Granada (Spain) as well as the PNPI (Petersburg Nuclear Physics Institute) and the JINR (Joint Institute for Nuclear Research) in Russia.
Source: GSI Helmholtzzentrum für Schwerionenforschung
Magma Flow: How Does Temperature Affect the Movement of Magma?
How does temperature affect the movement of magma?
- Teaspoon (5-ml spoon)
- Soft margarine
- Small baby-food jar
- Cereal bowl
- Warm tap water
- Fill the teaspoon (5-ml spoon) with margarine.
- Using your finger, push the margarine out of the spoon and into the baby-food jar so that the glob of margarine is centered in the bottom of the jar.
- Hold the jar in your hand and turn it on its side.
- Observe any movement of the margarine.
- Fill the bowl halfway with warm (slightly hotter than room temperature) tap water.
- Set the baby-food jar in the warm water.
- After three minutes, pick up the jar and turn it on its side.
- Again, observe any movement made by the glob of margarine.
At first the margarine inside the tilted jar does not move much, but heating the margarine causes it to move more freely.
As the temperature of the margarine increased, it became thinner and moved more easily. Molecules in colder materials have less energy, are closer together, and move more slowly than warmer molecules with more energy. These warm, energized molecules move away from each other, causing solids to melt and liquids to thin. Just as the temperature of the margarine affected the way it moved across the surface of the jar, the temperature of magma affects the way it moves up the volcano's vent (the channel of a volcano that connects the source of magma to the volcano's opening). Hot magma is thin and moves easily and quickly up the vent, while cooler magma is thick and sluggish.
- Would a different heating time affect the results? Repeat the experiment, checking the movement of the margarine every minute for six minutes. Use a thermometer to keep the temperature of the water as constant as possible, and replace the warm water each time you make an observation. Quickly replace the jar in the warm water after each testing.
- Does the composition of the material being heated affect the results? Repeat the original experiment using other solids such as butter or chocolate candy with and without nuts.
- Thick liquids are said to have a high viscosity (the measurement of a liquid's ability to flow). Viscous liquids flow slowly, and particles dropped into the liquid fall slowly as well. Liquids such as water, honey, and shampoo can be used to simulate magma of various viscosities. Test the viscosity of each liquid by dropping a marble into a tall, slender glass filled with the liquid at room temperature. The slower the marble falls, the more viscous the liquid is.
- Demonstrate the effect of temperature on the viscosity of the liquids by repeating the experiment above twice. First, raise the temperature of the samples by standing each glass in a jar of warm water. After three minutes, stir the liquids and use a thermometer to determine the temperature of each. Second, lower the temperature of the samples to about 50° Fahrenheit (10°C) by inserting a thermometer in each liquid sample and placing them in a refrigerator. Diagrams and the results of each experiment can be used as part of a project display.
CHECK IT OUT!
The three distinct types of magma—andesitic, basaltic, and rhyolitic—harden into different kinds of rock. Find out more about the characteristics of these magma types. What is their chemical composition? Where are they formed? How does the temperature of each differ? Which is the most common? For information about magma see pages 74-79 in The Dynamic Earth, Second Edition (New York: Wiley, 1992), by Brian J. Skinner and Stephen C. Porter.
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety. |
Maths Zone: Measurement, Space and Data contains sections covering the basic measurement concepts of length, area, volume and time as well as exploring spatial concepts such as polygon attributes, 2D shapes and manipulation of shapes. Chance and data activities have a strong language focus and require students to comprehend tables, graphs, data and texts.
The Maths Zone series is designed to add some fun to the traditional strands of the mathematics curriculum by presenting challenging activities and humorous problems to students in a meaningful context. The worksheets are a mix of traditional algorithms and puzzle sheets and are completely original in nature, offering the novelty of a new approach. Classroom teachers and remedial teachers will find this book to be a valuable mathematical resource. Activities can be used as early finisher tasks, whole class lessons, practice and consolidation tasks or as small group challenges. All activities are linked to relevant mathematics outcomes.
Author: Edward Connor |
If you think of an electrical device as a piece of plumbing, voltage is the pressure pushing water through the pipe, resistance is the pipe's relative narrowness, and current is the rate at which the water flows. Power is the rate at which the moving water delivers energy -- how much work it can do each second. You relate all these values to one another using a common set of physics equations known as Ohm's law. If you need to calculate electricity's current flow, you'll need to have at least two of the three values -- voltage, resistance or power -- listed above.
Calculate current flow using voltage and resistance. According to Ohm's law, you can express electricity's current in amps as a ratio of its voltage in volts to the resistance of the device it's flowing through in ohms -- I = E/R, respectively. For example, if you want to know the current flow of 220 V of electricity as it flows through a laptop computer with 80 ohms of resistance, you would simply plug these values into the equation as follows: I = 220/80 = 2.75 amps.
Calculate current flow using power and resistance. Ohm's law also states that electrical current, "I," is equal to the square root of the power dissipated as it travels through the device divided by that device's resistance. If a light bulb dissipates 80 watts of power and has a resistance of 55 ohms, you can calculate the electricity's current as follows: I = sqrt(80/55) = sqrt(1.4545) = 1.21 amps.
Calculate current flow using power and voltage. If you have a space heater which dissipates 420 watts of power when it takes in 120 V of electricity, Ohm's law states you can calculate this electricity's current using the equation "I = P/E." For this example, compute current like so: I =420/120 = 3.5 amps.
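For readers who want to check the arithmetic, here is a short C++ sketch of the three calculations above; the function names are illustrative and not part of the original article:

#include <cmath>
#include <cstdio>

// I = E / R : current from voltage and resistance
double currentFromVoltageResistance(double volts, double ohms) { return volts / ohms; }
// I = sqrt(P / R) : current from power and resistance
double currentFromPowerResistance(double watts, double ohms) { return std::sqrt(watts / ohms); }
// I = P / E : current from power and voltage
double currentFromPowerVoltage(double watts, double volts) { return watts / volts; }

int main() {
    // Values taken from the worked examples in the article.
    std::printf("%.2f amps\n", currentFromVoltageResistance(220.0, 80.0)); // 2.75
    std::printf("%.2f amps\n", currentFromPowerResistance(80.0, 55.0));    // about 1.21
    std::printf("%.2f amps\n", currentFromPowerVoltage(420.0, 120.0));     // 3.50
    return 0;
}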
You will be much better off not drinking any sodas at all. The results of the study connecting artificial sweeteners with metabolic syndrome, a collection of conditions that together dramatically increase the risk of heart disease, stroke and diabetes, by no means vindicate sugar. Instead, they suggest that artificial sweeteners are as bad for health as too much sugar.
The study, from Israel, showed that artificial sweeteners altered the collection of bacteria (known as the microbiome) in the digestive tract in a way that caused blood glucose levels to rise higher than expected and to fall more slowly than they otherwise would. This finding may solve the longstanding mystery of why drinking artificially sweetened diet sodas doesn’t lead to weight loss. It also strongly suggests that the use of artificial sweeteners has been contributing to the worldwide obesity epidemic and rising rates of type 2 diabetes.
To arrive at their conclusions, the Israeli researchers gave 10-week-old mice water sweetened with saccharin, sucralose or aspartame, plain water, or sugar-sweetened water. After one week, the mice that received the artificially sweetened water had developed glucose intolerance, the first step on the path to metabolic syndrome and type 2 diabetes. With glucose intolerance, the body cannot easily handle large amounts of sugar.
The researchers next gave the mice antibiotics, which killed the bacteria in the animals’ digestive systems. The glucose intolerance disappeared, supporting the hypothesis that this condition is caused by a change in the microbiome. The investigators were able to confirm this by injecting bacteria from the mice that had been drinking water sweetened with saccharin into “sterile” mice that had never been given artificial sweeteners. The result: glucose intolerance developed in the sterile mice. Examination of the microbiome from these mice showed bacterial changes associated with a propensity to obesity and diabetes.
Looking at data from a human trial exploring the link between nutrition and the microbiome, the researchers found a “significant association” between consumption of artificial sweeteners (as reported by the study participants), configurations of gut bacteria, and the propensity for glucose intolerance. And when the investigators asked seven study participants who generally didn’t consume artificially sweetened drinks or foods to eat and drink them for a week, tests showed that many (but not all) of the volunteers had begun to develop glucose intolerance.
These results aren’t the final word on the effect of artificial sweeteners, but they do suggest that these chemicals aren’t doing us any good. As the Israeli team leader noted, rather than protecting us from obesity, metabolic syndrome and diabetes, artificially sweetened food and drink are now linked to a tendency to develop “the very disorders they were designed to prevent.”
We’ve known for some time that the more artificially sweetened food and drink we consume, the fatter we get. This new study sheds light on how and why this happens.
Andrew Weil, M.D. |
Fluoride, a naturally occurring mineral, is essential for proper tooth development and the prevention of tooth decay. In communities throughout the United States, tooth decay may still be a significant problem — but it is far less prevalent than it would have been, if not for the fluoridation of public water supplies. That's why the major associations of pediatric dentists and doctors support water fluoridation to the current recommended levels of 0.70 parts per million (ppm). It's also why the federal Centers for Disease Control and Prevention (CDC) has called fluoridated water one of the most significant health achievements of the 20th century.
Of course, not everyone has access to fluoridated water. That's one reason why a fluoride supplement is often recommended for your child and/or the use of toothpastes and other products that contain this important mineral. Because it is possible for children to get too much fluoride, it is best to seek professional advice on the use of any fluoride-containing product.
How Fluoride Helps
The protective outer layer of teeth, called enamel, is often subject to attacks from acids. These can come directly from acidic foods and beverages, such as sodas and citrus fruits — or sometimes through a middleman: the decay-causing bacteria already in the mouth that create acid from sugar. These bacteria congregate in dental plaque and feed on sugar that is not cleansed from your child's mouth. In metabolizing (breaking down) sugar, the bacteria produce acids that can eat through tooth enamel. This is how cavities are formed. When fluoride is present, it becomes part of the crystalline structure of tooth enamel, hardening it and making it more resistant to acid attack. Fluoride can even help repair small cavities that are already forming.
Delivering Fluoride to the Teeth
Fluoride ingested by children in drinking water or supplements can be taken up by their developing permanent teeth. Once a tooth has erupted, it can be strengthened by fluoride topically (on the surface). Using a fluoride-containing toothpaste is one way to make sure your children's teeth receive helpful fluoride exposure daily. We recommend using only a pea-sized amount for children ages 2-6 and just a tiny smear for kids under two. Fluoride should not be used on children younger than six months. A very beneficial way to deliver fluoride to the teeth is with topical fluoride applications painted right onto your child's teeth and allowed to sit for a few minutes for maximum effectiveness.
How Much Is Too Much?
Teeth that are over-exposed to fluoride as they are forming beneath the gum line can develop a condition called enamel fluorosis, which is characterized by a streaked or mottled appearance. Mild fluorosis takes the form of white spots that are hard to see. In more severe cases (which are rare), the discoloration can be darker, with a pitted texture. The condition is not harmful, but may eventually require cosmetic dental treatment. Tooth decay, on the other hand, is harmful to your child's health and can also be quite painful in severe cases.
The risk for fluorosis ends by the time a child is about 9 and all the permanent teeth have fully formed. Since fluoride use is cumulative, all the sources your child comes in contact with — including powdered infant formula mixed with fluoridated tap water — need to be evaluated. While caution is advised, however, it would be a mistake to forgo the benefits that this important mineral can bring to your child's teeth — and his or her overall health.
Fluoride and Fluoridation in Dentistry The Centers for Disease Control and Prevention (CDC) says that water fluoridation is "one of the ten most important public health measures of the 20th century." Extensive systematic reviews of the evidence conclusively show that water fluoridation and fluoride toothpastes both substantially reduce dental decay. Learn why through the amazing fluoride story... Read Article
Topical Fluoride Fluoride has a unique ability to strengthen tooth enamel and make it more resistant to decay. That's why dentists often apply it directly to the surfaces of children's teeth after routine dental cleanings. This surface (topical) application can continue to leach fluoride into the tooth surface for a month or more... Read Article
Tooth Decay — A Preventable Disease Tooth decay is the number one reason children and adults lose teeth during their lifetime. Yet many people don't realize that it is a preventable infection. This article explores the causes of tooth decay, its prevention, and the relationship to bacteria, sugars, and acids... Read Article |
How and where you use a layer depends on the type of layer and what you need to do with it. The following are examples:
- Use layers in maps and scenes to visually convey spatial information.
- You can use layers, or the maps and scenes that contain the layers, in apps.
- Use feature layers in analysis tools to discover patterns in the data.
- Editable feature and table layers can be added to Map Viewer or apps to allow you to update feature and attribute data. In some apps, editable feature and table layers are added via maps.
- For hosted feature layers configured to allow it, you can obtain a copy of the underlying data by exporting the data from the hosted feature layer to a CSV file, shapefile, GeoJSON file, file geodatabase, or Microsoft Excel file.
Use layers in maps and scenes
You build a map or scene by adding data layers to them and configuring how the layers look and behave in the map or scene. You can add layers you published and layers from other providers, such as ArcGIS Living Atlas of the World, to the maps and scenes. You can use Open in Map Viewer or Open in Scene Viewer in a layer's item page to open it in Map Viewer or Scene Viewer, respectively, or you can start in Map Viewer or Scene Viewer and add layers there. See Get started with maps and Get started with scenes for overviews of the process to create the maps and scenes you and others can use to interact with your layers.
Feature layers can be used in analysis tools—in Map Viewer and ArcGIS Pro—and custom apps to answer spatial questions, discover patterns, and identify trends.
Use layers in apps
Apps are similar to tools, in that many provide focused functionality that allows you to interact with the layers in your portal.
You need to choose the app that meets the needs of the app users. Sometimes, you'll add a layer directly to an app, such as ArcGIS Pro, to use the layer as a basemap, provide reference information in your map, or edit or analyze features. In many other cases, you'll create and configure a map or scene containing the layers people need and add that map or scene to an app that provides specific functionality. You can create apps for that purpose, or use out-of-the-box apps such as ArcGIS Dashboards or ArcGIS GeoPlanner.
Edit features and tables
Editable feature and table layers can be edited in ArcGIS Pro or ArcMap. They can also be added to maps that you subsequently include in apps.
The owner of the feature layer or an administrator can also configure maps containing editable feature layers for offline use. The feature layers and the maps must be enabled for offline use. You could then load the map containing the editable, offline-enabled layers into ArcGIS Collector and collect and edit data while disconnected from the Internet.
Export data from hosted feature layers
You can export data from a hosted feature layer if one of the following is true:
- You own the features.
- You are a portal administrator.
- You aren't the hosted feature layer owner or the administrator, but the owner or administrator has configured the hosted feature layer to allow others to export the data.
This setting can be changed on the item page Settings tab by checking the Allow others to export to different formats check box under Export Data.
When you export from a hosted feature layer, ArcGIS Enterprise creates one of the following items on the My Content tab of the Content page:
- CSV files—When you export from a point layer, latitude and longitude values for the points are exported to the CSV file. When you export a line or polygon layer, only nonspatial attributes are exported.
- Microsoft Excel files—When you export from a point layer, latitude and longitude values for the points are exported to the Excel file. When you export a line or polygon layer, only nonspatial attributes are exported.
- File geodatabases
- GeoJSON files
- Feature collections
Once the item is created, you can download the file.
If the layers in the hosted feature layer contain metadata, the metadata is included if you export to a shapefile or file geodatabase.
When you export from a hosted feature layer view, only the data included in the view definition is included in the exported file.
Follow these steps to export data from the details page of a hosted feature layer or hosted feature layer view.
- Sign in and open the item page for the features you want to export.
- If you own the feature layer, click Content > My Content and click the item title.
- If you do not own the feature layer, search for the layer, and click the feature layer name in the search results list.
- To export individual layers, go to the Layers section of the Overview tab, click Export To under the layer you want to export, and choose the format you want to export. To export all the layers in the hosted feature layer, click the Export Data button on the Overview tab and choose the format you want to export.
If the layer you export contains attachments and you want to export the attachments, export to a file geodatabase. When you export the layer to any of the other formats listed below, attachments are not included.
- Export to Shapefile—Creates a compressed file (.zip file) containing a shapefile for each layer and its associated metadata (if present) that you export. You can download the file and save it to your computer.
- Export to CSV file—Creates a comma-separated values file when you export from a layer. You can open the file or save it to your computer. If you export all layers to a CSV file, a CSV collection is created, which is a .zip file containing one CSV file per layer. You can download the .zip file and save it to your computer.
- Export to Excel—Creates a Microsoft Excel spreadsheet. You can open the file or save it to your computer. If you export all layers to Excel, each layer will be a separate sheet in the spreadsheet.
- Export to FGDB—Creates a .zip file containing a file geodatabase. The file geodatabase contains a feature class and its associated metadata and attachments (if present). You can download the .zip file and save it to your computer. Note that the .zip file uses the name you specify for the Title, but the geodatabase name is randomly generated, and the feature class has the same name as the layer you exported.
- Export to GeoJSON—Creates a GeoJSON file containing definitions for all layers you export. You can download the file and save it to your computer.
- Export to Feature Collection—Creates a feature collection item you can open in Map Viewer.
Choose Generalize features for web display to optimize the layer for web apps. You can only generalize features from layers published in the WGS 1984 Web Mercator (Auxiliary Sphere) coordinate system. Note that exported feature collections that are generalized for web display do not work in desktop and mobile apps.
Alternatively choose Keep original features if you need to maintain all the precision in your data, or if you intend to use the feature collection in desktop or mobile apps.
If you have privileges to perform spatial analysis, you can also export data from a hosted feature layer or hosted feature layer view that has export enabled, or export data from a feature collection using the Extract Data tool in Map Viewer. |
As we did in our previous post, we are going to share with you a tutorial to learn about LTI Systems in the time domain and filtering. Therefore, we are going to see some exercises with Matlab code and solutions.
Before we start, make sure you know what the convolution is. Then, we are going to create the following sequence:
and plot it. To do that, simply write and run in Matlab the following code:
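The original listing is not reproduced in this post, and the exact h[n] is not shown either, so the short causal sequence below is only an assumed example used to make the later steps concrete:

n = 0:9;                      % sample indices of a short causal sequence
h = 0.8.^n;                   % assumed example impulse response h[n]
stem(n, h)                    % discrete-time plot of h[n]
xlabel('n'); ylabel('h[n]'); title('Impulse response h[n]')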
The function h[n] will be used in the activities as the system to process the different input signals. Its representation, in the discrete time domain is:
Figure 1. LTI System
Now that we have all the tools we need to complete the tutorial, let’s get started!
Exercise 1: Linearity property of LTI system demonstration
In this activity we are going to study the linearity property of LTI systems. The first thing we are going to do is defining two signals, s1 and s2, that will be our inputs to the system:
Before running the previous code, notice that our input signals are sines that we are going to create using this function:
n=[start:1:L-1]; %this is the sampling interval%
sines= sin(omega*n + phase); %sinusoidal signal%
This is the representation of both signals:
Figure 2. Sine input signal
Figure 3. Second sine input signal
When we convolve each signal with h[n] separately and we add the outputs, we obtain the same output as if we first add s1 + s2 and then convolve with h[n]. The code to convolve each signal and add the outputs is:
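The listing is missing here; a minimal sketch of what it would contain, assuming s1, s2 and h are the vectors defined earlier:

y1 = conv(s1, h);             % response to s1 alone
y2 = conv(s2, h);             % response to s2 alone
y3 = y1 + y2;                 % sum of the two individual outputs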
Now, if we add s1+s2, we get:
Figure 4. System’s output
The code to get the previous signal and the system output is:
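Again, the original lines are not shown; a sketch:

s12 = s1 + s2;                % add the inputs first
y4  = conv(s12, h);           % then pass the sum through the system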
Now, we can plot both outputs, y3 and y4 in the same graph by using the following code:
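A possible version of that plotting code (the exact original is not included in the post):

stem(y3, 'b'); hold on
stem(y4, 'r--'); hold off
legend('y3 = conv(s1,h) + conv(s2,h)', 'y4 = conv(s1+s2,h)')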
And finally, as we see in the following image, both y3 and y4 are the same, therefore the system is linear:
Figure 5. Outputs comparison: linear property
Exercise 2: Time invariance property of LTI systems demonstration
In this example, we are going to see that if we process the input signal through the system and then delay the output, we obtain the same result as if we process a shifted version of the input through it. This will happen whenever the system is LTI; this is the time-invariance property.
Our input signal will be s1, created before. Now, in order to obtain a shifted version of it, we are going to use the function delay that we created in our previous tutorial. In order to shift the signal, for example, 10 samples, we write the following code:
Now, by doing:
y2=delay(y1,10,-100,200);%first output delayed
s2=delay(s1,10,-100,200);%second input as a delayed version of the first input
stem((-100:398),s2)%code needed to plot the second input
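The convolutions that produce y1 (used in the delay call above) and y3 (compared with y2 below) are not shown in the post; a sketch, assuming the same h as before:

y1 = conv(s1, h);             % output for the original input, computed before the delay step
y3 = conv(s2, h);             % output for the delayed version of the input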
We will obtain the following outputs, y2 and y3:
Figure 6. Outputs comparison: time invariance property
which, as you can observe, they are identical.
Exercise 3: LTI systems and filtering
For this exercise we are going to create an additional filter. We will work with two systems: the h[n] defined at the beginning of the tutorial and a second filter
h1[n] = (u[n] - u[n-10]) · cos(πn) · e^(-0.9n)
We will use them as filters. In order to define and plot them in Matlab, write the following code:
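The original definition-and-plot code is not included here; a sketch that reuses the h from the first exercise and defines h1 from the formula above:

n  = 0:9;                       % u[n]-u[n-10] limits the support to n = 0..9
h1 = cos(pi*n) .* exp(-0.9*n);  % h1[n] = cos(pi*n)*e^(-0.9n) on that support
subplot(1,2,1); stem(0:9, h);  title('h[n]')
subplot(1,2,2); stem(n, h1);   title('h1[n]')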
These are the two filters representation:
Figure 7. LTI filters
Now, how do we know if these filters are low, high or band pass if they are represented in the time domain? First, you can demonstrate that both filters are LTI systems by applying the properties explained above and the rest of them.
The input signal (the signal that we will filter through both filters) is going to be a periodic square signal that we are going to create using the function that we used in our previous post:
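That helper function is not reproduced here; a simple stand-in that produces a comparable periodic square signal:

n = 0:199;
s = double(mod(n, 20) < 10);    % periodic square signal: 10 samples high, 10 samples low
stem(n, s)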
This signal will look like the following one:
Figure 8. Periodic square pulses
In order to process it through h[n] and h1[n], we need to apply the convolution:
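A sketch of the two convolutions (the original lines are not shown):

y1 = conv(s, h);                % output of the first filter, h[n]
y2 = conv(s, h1);               % output of the second filter, h1[n]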
And we will obtain the following outputs, y1 and y2:
Figure 9. Filters’ outputs
As we can observe, the output from h[n] shows slow oscillations, so h[n] behaves as a low-pass filter. The output from h1[n], however, shows much faster oscillations, so h1[n] behaves as a high-pass filter. You can observe this effect better if you represent the previous outputs, y1 and y2, by using the function plot:
Figure 10. Low pass filter output
Figure 11.High pass filter output
Exercise 4: Rough estimation of the frequency response
In this exercise, we are going to use our low pass filter, h[n], to process different sines signals through it in order to observe the change in amplitude and phase.
By representing the amplitude change of each sinusoidal, we would be representing an estimation of the frequency response of h[n].
Actually, we are using an interesting property of LTI systems for complex exponential inputs (sines and cosines, according to the Euler formula): when the input signal produces an output that is a constant (real or complex) multiplied by this input, this signal is an eigenvector of the system and its amplitude is an eigenvalue of the system.
Now, what about the phase change? The phase change is equivalent to the delay experienced by the input when it is processed by the system; in fact, another way to represent the frequency response would be to plot the phase of the output signal against the selected frequencies.
For the first case, in order to plot the amplitude against the different frequencies, we use the following code:
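The original loop is not included in the post; one way to build such an estimate, using the variable names ampH and omegas_vector that are compared later:

omegas_vector = linspace(0, pi, 50);   % test frequencies in rad/sample
ampH = zeros(size(omegas_vector));
n = 0:199;
for k = 1:length(omegas_vector)
    x = sin(omegas_vector(k)*n);       % sinusoidal input at one frequency
    y = conv(x, h);
    ampH(k) = max(abs(y(100:150)));    % rough steady-state output amplitude
end
plot(omegas_vector, ampH)
xlabel('\omega (rad/sample)'); ylabel('output amplitude')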
Remember that this is a system’s frequency response estimation. Matlab provides functions that allow to study the frequency response in a more accurate way. For example, by using the function freqz, we simply write the following code:
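For an FIR filter stored in the vector h, that call would look something like this:

[H, w] = freqz(h, 1, 512);      % frequency response evaluated at 512 points
plot(w, abs(H))
xlabel('\omega (rad/sample)'); ylabel('|H(\omega)|')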
Now, let’s compare ampH vs. omegas_vector and H vs omegas_vector:
Figure 12. Filter frequency response
As we can see, when using the function freqz (right graph), the shape is smoother than in our approximation, but in both cases we can observe that the cutoff frequency is around 1.5 and both plots represent the same filter: a Butterworth filter.
We hope this tutorial was useful for you, and don’t worry if you didn’t understand some of the processes or steps: leave a comment below or send us an email to [email protected] and we will reply to you asap 🙂
When was the Korean War?
The Korean War was a military conflict that began on 25 June 1950 between North and South Korea, in which a United Nations force fought for the South and China fought on the side of North Korea, with the Soviet Union also assisting the North. The war arose from the division of Korea at the end of World War II and from the global tensions of the Cold War that developed immediately afterwards. About 5 million people died during the war, which came to an end in 1953. Compared with World War II and the Vietnam War, the Korean War received relatively little attention in American mass media, which is why it is sometimes called "the Forgotten War."
Author: Elaine Kirn-Rubin
Suitable for: K-8, Secondary, Young Adult, Adult
What They Are: 34 paper-and-pencil (or onscreen) letter and word puzzles of many types that focus on basic phonics patterns rather than "just answers," with pedagogical explanations before them + solutions after.
Why You Need Them: Even pre-literate readers and novice language learners are likely to recognize what puzzles are and enjoy trying to solve them! They'll be acquiring and practicing Basic literacy and language skills without even noticing that they're learning!
What They Do:
get learners to practice the names and order of the 26 alphabet letters with activities called "Letter Lines," "Letter Shapes," "Letter Spaces," "Letter Finds & Counts," "Letter Words," "Letter Cards," and "Letter Formation," starting them out right away learning by doing
introduce the concept of sounds spelled by letters with elemental info + charts of Initial & Final Consonants & Blends + Simple ("Short") & Complex ("Long") Vowel Sounds & Spellings
"teach" very basic Sound-Letter Correlations through Letter & Word Play of 12 Puzzle Types:
• Word Find • Criss-Cross • Linked Words • Letter Choices • Word Maze • Switched Letters • So What's Different? • Meaning Categories • Letter Connect • Letter Jumble • Letter Blocks • Rebus Crossword
involve learners in choosing, connecting, sequencing, and correcting letters in pleasing visual contexts--all with illustrated, everyday, one-syllable vocabulary words; have them compare their answers with those in reduced-sized Solutions |
- Do the following:
- With your parent’s permission, use the internet to find a blog, podcast, website, or article on the use or conservation of energy. Discuss with your counselor what details in the article were interesting to you, the questions it raises, and what ideas it addresses that you do not understand.
- After you have completed requirements 2 through 8, revisit your source for requirement 1a. Explain to your counselor what you have learned in completing the requirements that helps you better understand the article.
- Show you understand energy forms and conversions by doing the following:
- Explain how THREE of the following devices use energy, and explain their energy conversions: toaster, greenhouse, lightbulb, bow drill, cell phone, nuclear reactor, sweat lodge.
- Construct a system that makes at least two energy conversions and explain this to your counselor.
- Show you understand energy efficiency by explaining to your counselor a common example of a situation where energy moves through a system to produce a useful result. Do the following:
- Identify the parts of the system that are affected by the energy movement.
- Name the system's primary source of energy.
- Identify the useful outcomes of the system.
- Identify the energy losses of the system.
- Conduct an energy audit of your home. Keep a 14-day log that records what you and your family did to reduce energy use. Include the following in your report and, after the 14-day period, discuss what you have learned with your counselor.
- List the types of energy used in your home such as electricity, wood, oil, liquid petroleum, and natural gas, and tell how each is delivered and measured, and the current cost; OR record the transportation fuel used, miles driven, miles per gallon, and trips using your family car or another vehicle.
- Describe ways you and your family can use energy resources more wisely. In preparing your discussion, consider the energy required for the things you do and use on a daily basis (cooking, showering, using lights, driving, watching TV, using the computer). Explain what is meant by sustainable energy sources. Explain how you can change your energy use through reuse and recycling.
- In a notebook, identify and describe five examples of energy waste in your school or community. Suggest in each case possible ways to reduce this waste. Describe the idea of trade-offs in energy use. In your response, do the following:
- Explain how the changes you suggest would lower costs, reduce pollution, or otherwise improve your community.
- Explain what changes to routines, habits, or convenience are necessary to reduce energy waste. Tell why people might resist the changes you suggest.
- Prepare pie charts showing the following information, and explain to your counselor the important ideas each chart reveals. Tell where you got your information. Explain how cost affects the use of a nonrenewable energy resource and makes alternatives practical.
- The energy resources that supply the United States with most of its energy
- The share of energy resources used by the United States that comes from other countries
- The proportion of energy resources used by homes, businesses, industry, and transportation
- The fuels used to generate America's electricity
- The world's known and estimated primary energy resource reserves
- Tell what is being done to make FIVE of the following energy systems produce more usable energy. In your explanation, describe the technology, cost, environmental impacts, and safety concerns.
- Biomass digesters or waste to energy plants
- Cogeneration plants
- Fossil fuel power plants
- Fuel cells
- Geothermal power plants
- Nuclear power plants
- Solar power systems
- Tidal energy, wave energy, or ocean thermal energy conversion devices
- Wind turbines
- Find out what opportunities are available for a career in energy. Choose one position that interests you and describe the education and training required.
For Requirement 4 you may wish to use this checklist.
(The checklist is already included in the worksheets below, which are available in Word and PDF formats.)
BSA Advancement ID#:
Scoutbook ID#: 43
Requirements last updated in: 2018
Pamphlet Publication Number: 35889
Pamphlet Stock (SKU) Number: 619599
Pamphlet Revision Date: 2014
Page updated on: May 08, 2022 |
11/4/2016· The Lewis structure of the nitrogen atom can be drawn if one knows the number of valence electrons of nitrogen. The electronic configuration of nitrogen is 1s^2 2s^2 2p^3. The nitrogen atom has five electrons present in the 2s and 2p subshells, and these electrons are called valence electrons. The nitrogen atom has 5 valence electrons, so its Lewis dot symbol for N is … This video shows how to use the …
Lewis Research Center SUMMARY Silicon carbide is a wide band gap semiconductor with much potential for high-temperature application. Because of high strength in whisker form and excellent chemical stability, it is also a potential fiber-reinforcement
26/2/2018· Although we demonstrate the concept using silicon membranes for embedded cooling of silicon substrates, our membrane structure can also be microfabricated in silicon carbide…
Annealed silicon rich carbide (SRC), owing to its electrical conductivity, thermal stability and energy band gap compatible with Si QD cell fabrication, has the potential to overcome this problem. Further, this quasi-transparent thin-film can be used as either substrate or superstrate of a Si QD solar cell and therefore provides flexibility in cell structure design.
Here, the term silicon-oxycarbide refers specifically to a carbon-containing silicate glass wherein oxygen and carbon atoms share bonds with silicon in the amorphous, network structure. Thus, there is a distinction between black glass, which contains only a second-phase dispersion of elemental carbon, and oxycarbide glasses which usually contain both network carbon and elemental carbon.
Lewis symbols can also be used to illustrate the formation of ions from atoms, as shown here for sodium and calcium: Likewise, they can be used to show the formation of anions from atoms, as shown here for chlorine and sulfur: Figure 7.10 demonstrates the use of Lewis symbols to show the transfer of electrons during the formation of ionic compounds.
In almost all cases, chemical bonds are formed by interactions of valence electrons in atoms. To facilitate our understanding of how valence electrons interact, a simple way of representing those valence electrons would be useful. A Lewis electron dot diagram (or electron dot diagram or a Lewis diagram or a Lewis structure) is a representation of the valence electrons of an atom that uses dots
Silicon oxyfluoride (SiOF2) Properties Structure Search Silicon oxyfluoride (SiOF2) CAS No.: 14041-22-6 Formula: F2OSi Molecular Weight: 82.08170 Synonyms: Silicon oxyfluoride (SiOF2);
Lewis Structure: Diagram representing the arrangement of valence electrons in a molecule. Most atoms need 8 valence electrons to become stable; the exceptions are H and He, which need only 2. Lewis Structure for Cl 2: Each Cl atom has 7 valence electrons, giving a total of 14 valence electrons to work with.
Silicon Properties Silicon is a Block P, Group 14, Period 3 element. The number of electrons in each of silicon's shells is 2, 8, 4 and its electron configuration is [Ne] 3s^2 3p^2. The silicon atom has a radius of 111 pm and its Van der Waals radius is 210 pm. In its
20/7/2011· The key difference between silicon and carbon is that carbon is a nonmetal whereas silicon is a metalloid. Carbon and silicon are both in the same group (group 14) of the periodic table. Hence, they have four electrons in the outer energy level. They occur in
Silicon as a semiconductor: Silicon carbide would be much more efficient Date: September 5, 2019 Source: University of Basel Summary: In power electronics, semiconductors are based on the element
Dot cross representation Otherwise known as Lewis structures, dot-cross representations show the electrons in the outer shell (valence electrons) of all of the atoms involved in a molecule. Covalent bonds consist of electron pairs and there are also lone, or non-bonding, pairs
Intrinsic semimetallicity of graphene and silicene largely limits their applications in functional devices. Mixing carbon and silicon atoms to form two-dimensional (2D) silicon carbide (SixC1–x) sheets is promising to overcome this issue. Using first-principles calculations combined with the cluster expansion method, we perform a comprehensive study on the thermodynamic stability and
Lewis structures show each atom in the structure of the molecule using its chemical syol. Lines are drawn between atoms that are bonded to one another (rarely, pairs of dots are used instead of
8/4/2012· I don't think 12345 understands that silicon dioxide has two oxygen atoms for every silicon, nor does she understand that silicon dioxide is a network solid with no discrete molecules and therefore, no Lewis structure.
Silicon carbide (SiC) is a particularly interesting species of presolar grain because it is known to form on the order of a hundred different polytypes in the laboratory, and the formation of a particular polytype is sensitive to growth conditions. Astronomical evidence for the formation of SiC in expanding circumstellar atmospheres of asymptotic giant branch (AGB) carbon stars is provided by
Electron Dot Notation / Line-Bond Notation. Critical Analysis Questions: 1. a. Draw the Lewis dot structure for an atom of silicon and an atom of iodine. b. Draw the "Electron Dot Notation" for a molecule of SiI 4. c. Draw the "Line-Bond Notation" for SiI 4.
The inferred crystallographic class of circumstellar silicon carbide based on astronomical infrared spectra is controversial. We have directly determined the polytype distribution of circumstellar SiC from transmission electron microscopy of presolar silicon carbide from the Murchison carbonaceous meteorite. Only two polytypes (of a possible several hundred) were observed: cubic 3C and
In drawing Lewis structures for relatively small molecules and polyatomic ions, the structures tend to be more stable when they are compact and symmetrical rather than extended chains of atoms. EXAMPLE: Write the Lewis structure for CH 2 O where carbon is the central atom.
Structure, properties, spectra, suppliers and links for: Disilane, 1590-87-0. Predicted data is generated using the US Environmental Protection Agency’s EPISuite Log Octanol-Water Partition Coef (SRC): Log Kow (KOWWIN v1.67 estimate) = 0.83 Boiling Pt
Silicon quantum dot (Si-QD) embedded in amorphous silicon oxide is used for p-i-n solar cell on quartz substrate as a photogeneration layer. To suppress diffusion of phosphorus from an n-type
So far, devices using silicon nanocrystals have been realized either on silicon wafers, or using in-situ doping in the superlattice deposition which may hinder the nanocrystal formation. In this paper, a vertical PIN device is presented which allows the electrical and photovoltaic properties of nanocrystal quantum dot layers to be investigated.
the Lewis structure for sulfur tetrafluoride (SF 4) which contains 34 valence electrons. SF 4: 6 + 4(7) = 34 There are four covalent bonds in the skeleton structure for SF 4. Because this requires using eight valence electrons to form the covalent bonds
Silicon carbide is usually divided into two categories, the black SiC and the green SiC, both having a hexagonal crystal structure, a density of 3.2-3.25 g/cm³ and microhardness of 2840-3320 kg/mm2. The black SiC is manufactured with silica sand, tar and high quality silica as main materials in an electric resistance furnace at a high temperature.
The Lewis structure for the SiS2 molecule is shown below. 2. The central atom in the SiS2 molecule is silicon (Si). The number of electron groups around silicon (Si) is two. The hybridization of silicon (Si) is sp. Therefore, the electronic geometry of the SiS2 molecule is linear.
14/8/2020· Starting with a structure indicating only atom connections (single bonds), you can practice constructing a Lewis dot structure. Just click on the atom or bond you wish to modify. Nonzero formal charges are indicated for each atom in the structure once the total number of electrons is correct.
A child has fever, chills, muscle aches, and conjunctivitis. She is also developing a rash caused by subcutaneous hemorrhaging and complains about exposure to sunlight. With which of the following viruses might she be infected?
A scientist discovers a new virus affecting birds. After isolation, the virus is characterized as having single- strand RNA in an icosahedral capsid and an envelope. To which of the following virus families might this new virus belong?
Public health scientists discover and become concerned about a new strain of RNA virus among farm animals, especially geese and pigs, in the Midwest. Each virion is composed of lipid, helical proteins, and multiple pieces of RNA. This new virus may be |
OOP and UML
The purpose of this document is to provide you with information about UML and how to use it.
What is UML?
UML, Unified Modeling Language, is a standard notation for the modeling of real-world objects as a first step in developing an object oriented program. It describes one consistent language for specifying, visualizing, constructing and documenting the artifacts of software systems.
Developing a model for a software system before you actually start programming the software can be seen as having a blueprint for a large building you want to build. It is essential to have one. However, this doesn’t mean that you have to draw a model every time a simple class is introduced in your software; you have to think for yourself whether you want a model or not.
A few notation-rules
Graphs and their contents
Most UML diagrams are graphs containing nodes connected by paths. The information is mostly in the typology, not in the size or placement of the symbols. There are three kinds of visual relationships that are important:
- Connection (usually lines)
- Containment (2D shapes with boundaries)
- Visual attachment (one object being near another)
UML notation is intended to be drawn on a 2D surface.
There are basically four kinds of graphical constructs used in the UML notation:
Icons – an icon is a graphical figure of a fixed size and shape. It does not expand to hold contents. Icons may appear within area symbols, as terminators on paths or as standalone symbols that may or may not be connected to paths
2D symbols – Two dimensional symbols have variable length and width and they can expand to hold other things, such as lists, strings or other symbols. Many of them are divided into compartments of similar or different kinds. Dragging or deleting a 2D-symbol affects its contents and any paths connected to it.
Paths – Sequences of line segments whose endpoints are attached. Conceptually, a path is a single topological entity, although its segments may be manipulated graphically. A segment may not exist apart from its path. Paths are always attached to other graphic symbols at both ends (no dangling lines). Paths may have terminators, which are icons that appear in some sequence on the end of the path and that qualify the meaning of the path symbol.
Strings – Present various kinds of information in an "unparsed" form. UML assumes that each usage of a string in the notation has a syntax by which it can be parsed into underlying model information. For example, syntaxes are given for attributes, operations and transitions. Strings may exist as singular elements of symbols or compartments of symbols, as elements in a list as labels attached to symbols or paths, or as stand-alone elements in a diagram.
Attributes and behavior
Each object has various attributes. An attribute is a name/value pair. For example, my age is 22. The attribute name is "age", the value is "22". Objects also have behavior. I can stand, walk, sleep etc.
There are different kinds of relationships:
Dependency is where one object must know about another.
In a simple dependency relationship, all we know is that one object has knowledge of another. For example: if one class requires the inclusion of the header for another, that establishes a dependency.
We draw a dependency using a dashed arrow. If a depends on b be sure the arrow points at b.
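In code terms (an illustrative C++ sketch, not part of the original diagrams), a dependency is often nothing more than using another class inside a function:

#include <iostream>
#include <string>

// Report depends on Printer: it only uses a Printer inside one member function;
// a Printer is not stored as part of Report's state.
class Printer {
public:
    void print(const std::string& text) { std::cout << text << '\n'; }
};

class Report {
public:
    void publish(Printer& p) { p.print("report body"); }  // dependency only
};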
In association, the state of the object depends on another object.
In an association we say that, as part of understanding the state of one object, you must understand the relationship with the second object. There are many types of association which model real-world relationships, such as owns (Arjen owns this bike), works for (Raymond works for Harry) and so forth.
In an association the two objects have a strong connection, but neither one is a part of the other. The relationship is stronger that dependency; there is an association affecting both sides of the relationship.
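As a hypothetical C++ sketch of an association ("Raymond works for Harry"), the link is typically held as a pointer or reference to an object whose lifetime is independent:

class Employer { /* ... */ };

class Employee {
public:
    explicit Employee(Employer* worksFor) : employer(worksFor) {}
private:
    Employer* employer;  // association: a link to an independent object, not a part of this one
};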
An association between a and b is shown by a line joining the two classes:
If there is no arrow on the line, the association is taken to be bi-directional. A unidirectional association is indicated like this:
To improve the clarity of a class diagram, the association between two objects may be named:
Aggregation models the whole/part relation
Objects are often made up of other objects. Cars are made up of steering wheels, engines, transmissions and so forth. Each of these components may be an object in its own right. The special association of a car to its component parts is known as aggregation.
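A hypothetical C++ sketch of aggregation, where the parts may exist before the whole and survive after it:

#include <vector>

class Wheel { /* ... */ };

// A Car aggregates Wheels: they are parts of the car,
// but their lifetimes are not controlled by the car.
class Car {
public:
    void attach(Wheel* w) { wheels.push_back(w); }
private:
    std::vector<Wheel*> wheels;  // non-owning pointers: independent lifetimes
};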
An aggregation relationship is indicated by placing a white diamond at the end of the association next to the aggregate class. If b aggregates a, then a is a part of b, but their lifetimes are independent:
Composition models a relationship in which one object is an integral part of another.
Often, the component parts of an object spring into existence only with the entire object. For example, a person may consist of a number of parts including the heart, lungs, etc. If you were modeling a person, the lifetime of the heart and lungs would be directly controlled by the lifetime of the containing person. We call this special relationship composition.
In aggregation, the parts may live independently. While my car consists of its wheels and tires and radio, each of those components may have existed before the car was created. In composition, the lifetime of the contained object is tied to the lifetime of the containing object.
Composition is shown by a black diamond on the end of association next to the composite class. If b is composed of a, then b controls the lifetime of a.
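A hypothetical C++ sketch of composition, where the parts live and die with the whole:

class Heart { /* ... */ };
class Lungs { /* ... */ };

// A Person is composed of a Heart and Lungs: the parts are created and
// destroyed together with the Person that contains them.
class Person {
private:
    Heart heart;  // members held by value: their lifetime is the Person's lifetime
    Lungs lungs;
};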
Inheritance is a specialization/generalization relationship between objects.
We (humans) have inherited the ability to create categories based on the behavior and characteristics of the things in our environment. This is best shown with an example: If something breathes and moves, we say it’s an animal. If one of those things that move and breathe also has live young and nurses them, we say it’s a mammal. We know that mammals are kinds of animals, and so we can predict that if we see a mammal, it will in all likelihood breathe and move around.
If a mammal barks and wags its tail, we say it’s a dog. If it won’t stop barking and runs about our feet demanding attention, we figure it’s a terrier. Each of these classifications gives us additional information. When we’re done, we have created a hierarchy of types.
Some animals are mammals and some are reptiles. Some mammals are dogs and some are horses. Each type will share certain characteristics, and that helps us understand them and predict their behavior and attributes.
There is only one right way to draw this:
Once we have this categorization, we can see that reading up the animal hierarchy reveals the generalization of shared characteristics.
In the same way, we could create a model of a car. To do so, we must ask ourself some questions:
What is a car? What makes a car different from a truck, from a person, from a rock? One of the delights of object oriented programming is that these questions become relevant to us; understanding how we perceive and think about the objects in the real world directly relates to how we design these objects in our model.
From one perspective, a car is the sum of its parts: steering wheel, brakes, seats, headlights. Here, we are thinking in terms of aggregation. From a second perspective, one that is equally true, a car is a type of vehicle.
Because a car is a vehicle, it moves and carries things. That is the essence of being a vehicle. Cars inherit the characteristics moves and carries things from their "parent" type, which is "vehicle".
We also know that car specializes vehicles. They are a special kind of vehicle, one which meets the federal specifications for automobiles.
We can model this relationship with inheritance. We say that the car type inherits publicly from the vehicle-type; that a car is-a vehicle.
Public inheritance establishes an is-a relationship. It creates a parent class (vehicle) and a derived class (car) and implies that the car is a specialization of the type vehicle. Everything true about a vehicle should be true about a car, but the converse is not true. The car may specialize how it moves, but it ought to move.
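A minimal C++ sketch of this is-a relationship (the member functions are illustrative):

class Vehicle {
public:
    virtual void move()  { /* default way of moving */ }
    virtual void carry() { /* default way of carrying things */ }
    virtual ~Vehicle() = default;
};

// Public inheritance: "a Car is-a Vehicle".
class Car : public Vehicle {
public:
    void move() override { /* a car specializes how it moves, but it still moves */ }
};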
What is a motor vehicle? This is a different specialization, at a different level of abstraction. A motor vehicle is any vehicle driven by a motor. A car is one such type, a truck is another. We can model these more complex relationships with inheritance as well.
Which model is better? Depends on what you’re modeling! How do you decide which model you want to use? Ask yourself questions. Is there something about "Motor Vehicle" I want to model? Am I going to model other, non-motorized vehicles? If you do, you should use the second model. To show this with an example: Suppose you want to create two classes for vehicles which are horse-drawn.
A critical aspect of public inheritance is that it should model specialization/generalization, and nothing else! If you want to inherit implementation, but are not establishing an is-a relationship, you should use private inheritance.
Private inheritance establishes an implemented-in-terms-of rather than an is-a relationship.
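A hypothetical C++ sketch: a Stopwatch implemented in terms of a Timer, without claiming that a Stopwatch is a Timer:

class Timer {
public:
    void start() { /* ... */ }
    void stop()  { /* ... */ }
};

// Private inheritance: clients of Stopwatch cannot treat it as a Timer.
class Stopwatch : private Timer {
public:
    void begin() { start(); }
    void end()   { stop(); }
};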
One of the capabilities available in C++ is multiple inheritance. Multiple inheritance allows a class to inherit from more than one base class, bringing in the members and methods of two or more classes.
In simple multiple inheritance, the two base classes are unrelated. An example of multiple inheritance is shown below. Also notice how the functions are displayed in this model.
In this rather simple model, the Griffin class inherits from both Lion and Eagle. This means a Griffin can eatMeat(), roar(), squawk() and fly(). A problem arises when both Lion and Eagle share a common base class, for example Animal.
This common base class, Animal, may have methods or member variables which Griffin will now inherit twice. When you call Griffin’s Sleep() method, the compiler will not know which Sleep() you wish to invoke. As the designer of the Griffin class, you must remain aware of these relationships and be prepared to solve the ambiguities they create. C++ facilitates this by providing virtual inheritance.
Without virtual inheritance
With virtual inheritance
With virtual inheritance, Griffin inherits just one copy of the members of Animal, and the ambiguity is solved. The problem is that both Lion and Eagle classes must know that they may be involved in a multiple inheritance relationship; the virtual keyword must be on their declaration of inheritance, not that of Griffin.
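A C++ sketch of the Griffin example (the method names follow the diagram described above). Note that the virtual keyword appears on Lion and Eagle; without it, Griffin would contain two copies of Animal and the call g.sleep() would be ambiguous:

class Animal {
public:
    void sleep() { /* ... */ }
};

class Lion : virtual public Animal {   // "virtual" must appear here...
public:
    void eatMeat() {}
    void roar()    {}
};

class Eagle : virtual public Animal {  // ...and here, not on Griffin
public:
    void squawk() {}
    void fly()    {}
};

class Griffin : public Lion, public Eagle {};

int main() {
    Griffin g;
    g.sleep();  // unambiguous only because Animal is a virtual base
    g.roar();
    g.fly();
    return 0;
}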
Using multiple inheritance when you need aggregation
How do you know when to use multiple inheritance and when to avoid it? Should a car inherit from steering wheel, tire and doors? Should a police car inherit from municipal property and vehicle?
The first guideline is that public inheritance should always model specialization. The common expression for this is that inheritance should model is-a relationships and aggregation should model has-a relationships.
Is a car a steering wheel? Clearly not. You might argue that a car is a combination of a steering wheel, a tire and a set of doors, but this is not modeled in inheritance. A car is not a specialization of these things; it’s an aggregation of these things. A car has a steering wheel, it has doors and it has tires. Another good reason why you should not inherit car from door is the fact that a car usually has more than one door. This is not a relationship that can be modeled with inheritance.
Is a police car both a vehicle and a municipal property? Clearly it is both. In fact, it specializes both. As such, multiple inheritance makes a lot of sense here:
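A minimal sketch of that model (class bodies omitted):

```cpp
// Sketch only: a PoliceCar specializes both of its base classes,
// so multiple public inheritance models the relationship naturally.
class Vehicle           { /* moves and carries things */ };
class MunicipalProperty { /* owned and maintained by the city */ };

class PoliceCar : public Vehicle, public MunicipalProperty
{
    // is-a Vehicle and is-a MunicipalProperty
};
```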
Base classes and derived classes
Derived classes should know who their base class is, and they depend on their base classes. Base classes, on the other hand, should know nothing about their derived classes. Do not put the header for derived classes into your base class files.
You want to be very suspicious of any design that calls for casting down the inheritance hierarchy. You cast down when you ask a pointer for its "real" (run-time) class and then cast that pointer to the derived type. In theory, base pointers should be polymorphic, and figuring out the "real" type of the pointer and calling the "right" method should be left to the compiler.
The most common use of casting down is to invoke a method that doesn’t exist in the base class. The question you should be asking yourself is why you are in a situation where you need to do this. If knowledge of the run-time type is supposed to be hidden, then why are you casting down?
Single instance classes
You also want to be very aware of derived classes for which there is always only one instance. Don’t confuse it with a singleton, in which the application only needs a single instance of a type, for example only one document or only one database.
Drawing a class
Suppose you want to create a class CFloatPoint, which has two members, x and y, both of type float, and a function Empty() which resets both members to the value 0.0.
First of all, you draw the class itself:
Now, we want the members x and y to be visible in the model:
As you can see, x and y are both private (lock-sign) and have the type "float".
Now, we want to make the function Empty() visible in the model:
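The Visual Modeler diagrams are not reproduced here, but in code the class being drawn would look roughly like this:

```cpp
// CFloatPoint as described above: two private float members and a public
// Empty() that resets both of them to 0.0.
class CFloatPoint
{
private:
    float x;
    float y;

public:
    void Empty()
    {
        x = 0.0f;   // reset the first member
        y = 0.0f;   // reset the second member
    }
};
```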
Suppose you need to give some additional information about your class. You can easily do this by adding a note, which looks like this:
Keep in mind that these models are created with Visual Modeler, which is delivered with Visual Studio Enterprise Edition, so you can try to draw them yourself. I am not going to explain how Visual Modeler works here; look in the manual or the MSDN Library for more information.
Future versions of this document
I think this document is a good first step into OOP and UML. Together with the document "Different styles of Programming", it provides a good first-step tutorial in the world of Object Oriented Programming.
There are many features of UML which are not covered in this document. One of them is the so-called "use case", which is a story of its own. It will be explained in a new document which has yet to be written.
noun, plural: gene pools
(population genetics) The total number of genes of every individual in an interbreeding population
Gene pool refers to the total number of genes of every individual in a population. It usually involves a particular species within a population. Determining the gene pool is important in analyzing the genetic diversity of a population. The more genetically diverse a population is, the better its chances of acquiring traits that boost biological fitness and survival.
A large gene pool indicates high genetic diversity, increased chances of biological fitness, and survival. A small gene pool indicates low genetic diversity, reduced chances of acquiring biological fitness, and an increased possibility of extinction. Genetic equilibrium is a condition in which a gene pool is not changing in frequency because the evolutionary forces acting upon its alleles are absent or balanced. As a result, the population does not evolve even after several generations.
A gene pool increases when a mutation occurs and survives. A gene pool decreases when the population size is significantly reduced (e.g. by famine, genetic disease, etc.). Some of the consequences of a small gene pool are low fertility and an increased probability of acquiring genetic diseases and deformities.
Gene pool gives an idea of the number of genes, the variety of genes and the type of genes existing in a population. It can be used to help determine gene frequencies or the ratio between different types of genes in a population. |
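To make the idea of gene (allele) frequencies concrete, here is a minimal sketch for a single gene with two alleles in a diploid population; the genotype counts are invented purely for illustration.

```cpp
#include <iostream>

int main()
{
    // Hypothetical genotype counts for one gene with alleles A and a.
    int count_AA = 360;
    int count_Aa = 480;
    int count_aa = 160;

    int individuals   = count_AA + count_Aa + count_aa;  // population size
    double geneCopies = 2.0 * individuals;               // diploid: 2 copies each

    // Each AA individual carries two A alleles, each Aa carries one.
    double freqA = (2.0 * count_AA + count_Aa) / geneCopies;
    double freqa = (2.0 * count_aa + count_Aa) / geneCopies;

    std::cout << "Frequency of allele A: " << freqA << "\n";   // 0.6 here
    std::cout << "Frequency of allele a: " << freqa << "\n";   // 0.4 here
    return 0;
}
```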
When students build their own Sky Eagle Plane, a rubber motor plane, they are encouraged to learn about the scientific principles of gravity, lift, drag, potential and kinetic energies, and more.
After students construct the balsa wood frame, cover it with the printed paper cover, and add the motor and propeller, it’s time for some classroom fun. Wind it up and let it fly outside or in the gymnasium. Students can experiment with the tabs and number of motor winds to learn how different variables affect the flight.
Looking for standards-based activities to accompany these materials? The Model Airplanes Teacher’s Guide is available for print here, or you can download a digital copy for free here. |
Gallium is the chemical element with the atomic number 31 and symbol Ga on the periodic table. It is in the boron family (group 13) and in period 4. Gallium was discovered in 1875 by Paul-Emile Lecoq de Boisbaudran, who named the new element from the Latin word "Gallia", meaning "Gaul" (France). Elemental gallium does not exist in nature, but gallium(III) salts can be extracted in small amounts from bauxite and zinc ores. Gallium is also known for liquefying at temperatures just above room temperature.
Gallium is one of the elements originally predicted by Mendeleev in 1871 when he published the first form of the periodic table. He dubbed it eka-aluminum, indicating that it should have chemical properties similar to aluminum. The actual metal was isolated and named (from the Latin Gallia, for France) by Paul-Emile Lecoq de Boisbaudran in 1875.
The detective work behind the isolation of gallium depended on the recognition of unexpected lines in the emission spectrum of a zinc mineral, sphalerite. Eventual extraction and characterization followed. Today, most gallium is still extracted from this zinc mineral.
Although once considered fairly obscure, gallium became an important commercial item in the '70s with the advent of gallium arsenide LEDs and laser diodes. At room temperature gallium is as soft as lead and can be cut with a knife. Its melting point is abnormally low and it will begin to melt in the palm of a warm hand. Gallium is one of a small number of metals that expands when freezing.
Basic Chemical and Physical Properties
| Property | Value |
| --- | --- |
| Atomic mass | 69.723 g/mol |
| Element category | Post-transition metal |
| Electronegativity | 1.6 (Pauling scale) |
| Density (at 0 °C) | 5.91 g/cm3 |
| Atomic radius | 135 pm |
| Ionic radius | 62 pm |
| Isotopes | 2 (69Ga: 60.11%; 71Ga: 39.89%) |
| 1st ionization energy | 578.8 kJ/mol |
| Electrode potential (E°) | -0.56 V |
| Oxidation states | +3, +2, +1 |
| Hardness | 1.5 (Mohs); 60 MPa (Brinell) |
| Specific heat | 25.86 J/(mol·K) |
| Heat of fusion | 5.59 kJ/mol |
| Heat of vaporization | 254 kJ/mol |
Gallium has a few notable characteristics which are summarized below:
- In its solid phase, Gallium is blue-grey in color
- It melts at temperatures just above room temperature; therefore, if you were to hold a chunk of gallium in your hand, it would start to liquefy.
- Solid gallium is soft and can easily be cut with a knife.
- It is stable in air and water, but reacts and dissolves in acids and alkalis.
- When solidifying, gallium expands by 3.1 percent, and thus storage in glass or metal containers is avoided.
- It also easily forms alloys with many metals and has been used in nuclear bombs to stabilize the crystal structure.
- Gallium is one of the few metals that can replace mercury in thermometers because its melting point is close to room temperature.
Video 1: The video depicts liquid gallium solidifying at 10x speed. The density of solid gallium is lower than that of the liquid, so the metal expands as it solidifies and breaks the bottle.
Video 2: The video shows gallium melting in your hand due to its low melting point.
Gallium is not usually found as a free element in nature. It exists in the earth's crust, where its abundance is about 16.9 ppm. It is extracted from bauxite and sometimes sphalerite. Gallium can also be found in coal, diaspore and germanite.
Health: While gallium can be found in the human body in very small amounts, there is no evidence of it harming the body. In fact, gallium(III) salts are used in many pharmaceuticals, for example as a treatment for hypercalcemia, which is associated with tumor growth on bones. It has even been suggested that gallium can be used to treat cancer, infectious disease, and inflammatory disease. However, exposure to large amounts of gallium can cause irritation in the throat and chest pains, and the fumes it produces can lead to very serious conditions.
Semiconductors: Roughly 90-95% of gallium consumption is in the electronics industry. In the United States, Gallium arsenide (GaAs) and gallium nitride (GaN) represent approximately 98% of the gallium consumption. Gallium arsenide (GaAs) can convert light directly into electricity. Further, gallium arsenide is also used in LEDs and transistors.
Other applications of Gallium deal with wetting and alloy improvement:
Gallium has the property of wetting porcelain and even glass surfaces. As a result, gallium can be used to create dazzling mirrors. Scientists employ a gallium alloy in the plutonium pits of nuclear weapons to stabilize the allotropes of plutonium. As a result, some people take issue with the element's use.
- What is the electronic configuration of Gallium?
- What do you think is one of the issues that people might have with usage of gallium?
- Gallium is part of which group and period?
- What are some applications of Gallium?
- Name three properties of Gallium that make it different from any other element.
Answers:
1. The electronic configuration of gallium is [Ar] 3d10 4s2 4p1.
2. Its use in nuclear bombs.
3. Gallium is in group 13 (the boron family) and in period 4.
4. Semiconductors; cancer treatment; hypercalcemia treatment; stabilization in nuclear bombs. See the section above on applications for more detail.
5. See the section above on properties and characteristics for more detail. For example:
- Gallium is blue-grey in color in its solid phase.
- It melts at temperatures just above room temperature.
- It is stable in air and water, but reacts with and dissolves in acids and alkalis.
Contributors and Attributions
- Angela Tang, Sarang Dave
Stephen R. Marsden |
After a female mosquito receives a blood meal, it will look for a body of calm, still water to lay its eggs. It will either lay its eggs on the surface of the water or on the surface surrounding the water, depending on the species.
Once the eggs hatch, the mosquitoes go through four aquatic larval stages, then molt into the pupal stage and emerge from the water as adult mosquitoes. This process can take as little as 5 days.
Larvae grow from about 1/16 of an inch in the first stage to about 1/4 of an inch in the fourth stage. The larvae rest at the surface of the water, breathe through a siphon tube located at the end of their abdomen and dive to the bottom to feed on algae. Larvae will also dive to the bottom when disturbed and appear to wiggle.
During the pupal stage, no feeding takes place, but the pupa still rests at the water's surface to breathe. The siphon tube is now located on the top of the pupa.
Once an adult mosquito emerges, the average life span is about a month, but depending on the species it can be as short as a week or as long as 9 months. |
Increases natural carbon sequestration & storage
Reduces harmful carbon in atmosphere
What is rewilding?
Rewilding is simple! It’s the process of helping nature heal by restoring the species and lands it needs to thrive. When the web of life is endowed with an abundance of species and ecological interactions, it can manage itself without human interference, creating the most large-scale and efficient carbon sequestration system possible.
How does rewilding help the climate?
Rewilding works with nature to efficiently and effectively capture and store carbon dioxide in vegetation, soils, and marine sediments. When we restore nature, we have the potential to remove over ⅓ of the excess carbon in the atmosphere causing global climate change, giving us more time to reduce emissions in other parts of the economy.
What are some examples of rewilding actions?
Reintroducing lost species, such as jaguars, bison, and sea otters so that they can fulfill their critical role in shaping forests, grasslands, wetlands, and coastal kelp forests.
Excluding damaging fishing from marine areas so fish, other wildlife, and plants can bounce back to a healthy abundance.
Removing dams, dikes, and other infrastructure from rivers and coastal areas so fish can again migrate and free-flowing water and its sediments can help animal and plant species to recolonize lost territory.
Who can join the Global Rewilding Alliance? Where do I sign up?
If you or your organization is involved in restoring wildlands and are interested in joining an international alliance committed to helping nature heal, please contact Magnus Sylvén to learn more about the Global Rewilding Alliance.
Plastics comes from the Greek word plastikos, which means to form or to mold. However, plastics do not have a uniformly agreed definition. Some people define plastics in a narrower sense that focuses on specific properties; others take a broader approach that includes various manufacturing processes.

Let's start by looking at plastics as a material. Plastics materials are composed of large molecules (polymers) that are synthetically made or modified. First, take a moment to think about all the materials available in this world. You will soon realize these materials fall into one of three categories: liquids, gases and solids. While noting that most materials can be converted from one state to another through heating or cooling, let's narrow our scope to materials that remain solid at room temperature. You are left with metals (iron, copper, gold, etc.), ceramics (stones, clay, etc.) and polymers.

A polymer is a large molecule made up of smaller molecules that are joined together by chemical bonding. Polymers can be divided into natural polymers and synthetic polymers. In natural polymers, the selection of molecules and the process of chemical bonding occur naturally, giving us materials like wood, leather, cotton, rubber and hair. In synthetic polymers, the selection of molecules and the chemical bonding process are man-made. This gives us materials like nylon, polyester and polyethylene. It is worth noting that some synthetic polymers can be "manufactured copies" of naturally occurring materials, e.g. synthetic rubber. Modifying the chemical bonding process is another route: celluloid and cellophane, for example, are synthetically modified to the extent that the natural polymer characteristics of cellulose are altered. These are the ways synthetic polymers are formed:

- they do not occur naturally
- they occur naturally but are made by non-natural processes
- they are modified natural polymers

In short, all synthetic polymers are considered plastics. The industrial term for plastics raw material is plastics resins.
“Intelligence is the passenger, Biology is the driver.” – Karl Schroeder
Remarkably soon after the Earth formed 4.5 billion years ago, life emerged. Though the details are fuzzy, it seems as though a burst of energy struck a pond of primordial chemical soup which energized molecules that clumped together to form RNA.
Soon afterwards, RNA created a more stable version of itself that could more accurately pass information from one generation to the next: DNA. Because of this advantage, DNA would supplant RNA as the driving force of life, and RNA was forced to cede control to it. But RNA was better at quickly passing messages around, so it stuck around by being useful to DNA.
DNA’s genius, and the reason why it has persisted to this day, is in its ability to replicate and pass information on from one generation to the next. All life on Earth can trace its origins to this flash of brilliance.
With the help of a new friend called ribosomes, DNA used proteins as tools that enabled it to extend its reach and branch out into the world. With an array of various proteins it constructed cell bodies that acted as both houses and transport vehicles for itself. These cells offered better protection and more mobility which ultimately enabled DNA to create on a larger scale.
Encased in these cells, DNA explored even further and encountered organelles, other living things that each had their own unique skills, like mitochondria, which were very good at making energy. It absorbed those useful organelles into its cell body, and with a bunch of different organelles now living with it, DNA had a huge advantage over everything else that was around, allowing it to spread wider and faster than anything else on Earth.
(The DNA is wound up in the nucleus and constantly directs RNA to create and distribute proteins to the various parts of its cell body that need them)
It built ever more elaborate structures to house itself in and experimented with a number of different kinds of cell bodies, each with different arrangements and quantities of parts. Eventually it had the pieces needed for the next big leap when it figured out how to join more than one cell together, creating multi-cellular life.
In its fancy new multi-cellular bodies, DNA continued to tinker. For billions of years it experimented with different parts to put into these creatures, like veins and bones and eyes and lungs and limbs, new tools that could give it more information about the world and further improve its mobility. The by-product of all this, coupled with the properties of evolution by natural selection, was the creation of all of the plants and trees and animals that inhabit the Earth.
But this was a slow, tedious process, and countless mistakes were made along the way, producing stress and suffering for the billions of hosts that had to be sacrificed in the name of progress. It would all be worth it when one day DNA came up with its crowning achievement, the human brain.
Prior to this, information was still being passed slowly down the line from one generation to another, making any kind of change very time-consuming. After billions of years of tinkering, DNA stumbled upon the human brain, a tool that would allow the host to build up and pass on information to other hosts during its own lifetime.
With that came the arrival of a new species, Homo sapiens, endowed with a novel form of intelligence that could pass intricate messages to one another and build their own array of tools. Today these apes are on the verge of creating a tool that might be able to create even better tools than they ever could: Artificial Intelligence. And much like how RNA created DNA, and DNA created the human brain, allowing each to reach new places and make new discoveries that they could not achieve on their own, so too will AI allow human beings to reach even loftier heights.
Artificial Intelligence is just the next step in what life does. Hopefully, just as RNA survived by being useful to DNA, we can figure out a way to be useful to AI.
(Note: Most of this is an over-simplification of the incredible complexity of biology and a lot of important steps are skipped, for more read this piece from BBC and watch this four part series from SciShow on the history of life on earth)
(For a different take and a quicker version of the story watch this from Crash Course) |
It is necessary to establish some form of communication between members belonging to a society.
Language helps us to express thoughts, feelings and emotions. In India, though English is considered an acquired language or L2, there are many challenges in teaching it.
In India, teaching English is not an easy task. Some of the reasons for this are as given below:
► There is too much emphasis on writing at the cost of other more necessary skills like listening, speaking and reading, even at a very early stage of schooling.
► There are comparatively few people who use English as their preferred language for communication. This problem is more acute in rural areas.
► The majority of people lack the ability to express their views or thoughts clearly through spoken or written English.
► Ambiguities in comprehension exist at the phonological, lexical and structural levels.
► Appropriate words are not available for translating English to Hindi or vice versa.
► English is at times neither heard nor spoken in the environment in which the child lives.
English language is learnt or developed in a social context. Meeting and interacting with people is one way of learning a language. In the classroom, the teacher should make an effort to encourage the students to learn English. He/she can do this in the following ways:
► By encouraging creative efforts of the students to speak using the language.
► By organising regular debates and discussions in the class.
► By creating situations and contexts where language can be used for various purposes.
► By providing a relaxed environment for free expression of ideas, thoughts and feelings.
► By helping students to develop early reading habits.
Following are some of the problems faced by teachers in teaching English in a classroom:
► Overcrowded Class: An overcrowded class generally reduces the teacher’s ability to teach effectively. The Right of Children to Free and Compulsory Education Act prescribes a Pupil-Teacher Ratio (PTR) of 30:1 and 35:1 at the primary and upper-primary levels, respectively, in every school.
► Poor Vision: Weak or poor vision of learners is also one of the main causes of lack of concentration.
When students hold books too close to their eyes, rub their eyes or move their body forward to see the written matter clearly, or are unable to see or read from the blackboard, they may have a problem with their eyes or vision. In such cases, the teacher should tell the students’ parents to get their eyes checked by a doctor. The teacher should also allow such students to sit on the front bench so that they would be able to see the blackboard clearly.
► Faulty Reading Habits: This is also a big problem that is often found in children who acquire or learn English as a second language (L2). The reason for this is that the student is not familiar with the nuances of English. A teacher should check such faulty reading habits of the students and correct them as soon as possible. Moreover, there are some techniques that students adopt inadvertently in the course of their learning, which tend to slow down their reading speed.
Some of them are as follows:
a. Sub-vocalisation: It means reading in a low murmuring sound. Students who have this habit tend to read word by word.
b. Regression: It refers to the tendency of the eyes to move backwards over printed material instead of moving forward. Some readers develop this habit of checking the already read information to confirm their conclusion.
c. Finger Pointing: This becomes a hindrance in reading with speed as students put their finger on the words while reading to improve their concentration.
► Biological or Neurological Problems: Sometimes, the problems are biological or neurological in nature. For example, a student may not be able to read properly due to a condition called alexia. Alexia occurs due to the impairment of the Central Nervous System (CNS), and may be caused by a lesion in a particular region of the brain.
Children suffering from alexia are able to recognise the meaning of the word but cannot read them aloud. Such learners tend to put the words together letter by letter.
► Difficulties in Learning Words in English: Sometimes learning English vocabulary can be a problem for a student. One of the reasons for this is that English has a vast vocabulary. In addition, there are numerous homophones, phrasal verbs, etc. that create confusion.
Apart from this, the rules of writing and speaking English (that is, English grammar) are difficult for learners to understand.
► Incorrect Spelling or Pronunciation: While teaching English in a class, correcting spellings and pronunciation of the words is a big challenge for a teacher. Many words are often not pronounced the way they are written.
Therefore, a teacher needs to make sure that the students are taught correct spelling and pronunciation from the very beginning to form a strong foundation of the language.
► Challenges Related to the Exposition of Words: While teaching, a teacher may come across difficult words or phrases from a passage of a chapter he/she is teaching.
Explaining the meaning of such words can be challenging.
So, the teacher should employ various methods and techniques to make the meanings of the words clear to the students. Some of these are using of synonyms and antonyms of the words, explaining the usage of the words, using the reference method or translation method, etc.
► Lack of Practice: Some students may not practise or revise the study material or topic taught in class. This is a challenge for a teacher as it hinders the teaching-learning process of the whole class.
► Lack of Study Material: This type of challenge is primarily faced by schools in backward areas or villages where proper technology and, at times, course-related books are not available. This may greatly affect the teaching-learning process.
► Lack of Appropriate Environment: Environmental challenges are those that arise due to the location of the school. For example, if a school is situated near a market or crowded locality, the students will get disturbed in their studies due to increased noise pollution.
► Lack of Good Infrastructure: Factors such as lack of electricity or toilets may also have a negative impact on the learning process. Schools must ensure proper seating plan, adequate size of classrooms, appropriate height of chair and desk, etc.
Scan this QR code to watch a video on the problems of teaching English
Let us now look at a few suggestions to make teaching more effective in classrooms. These are as given below:
► Provide libraries in schools.
► Motivate students to do their best. Motivation will also help create interest for English and provide an impetus to learn.
► Provide grade-sensitive (or level-sensitive for students with learning difficulties) study material.
► Use extensive reading material. This improves reading habit, builds a big vocabulary pool, helps gain knowledge of varied topics and familiarises students with the general usage of language.
► Adopt correct posture as it greatly affects the learner’s reading and writing capabilities. Having the correct posture makes the learner comfortable and tends to enhance learning.
► Correct problems related to spelling by different methods, such as the drill method, the incidental method, the playway method and the transcription method.
► Play vocabulary games such as word-finder, word chains, semantic mapping and word association to enhance the vocabulary of students.
► Conduct a lot of group activities as students learn better when they interact in society.
► Use play-way methods to give real-life situations for students to interact in English.
Answer the following questions by selecting the most appropriate option.
25. Use of grammar, punctuation and spelling pertains to
(1) text production while writing.
(2) formal speech.
(3) listening to lecture.
(4) informal conversation.
1. One of the reasons for the occurrence of a faulty reading habit is
(2) finger pointing, or using the finger to guide the eye while reading.
(3) overcrowded class.
(4) vision problem.
2. The ‘interactional routine’ during speaking assessment includes
(1) negotiating meanings, taking turns and allowing others to take turns.
(2) describing one’s school or its environs informally.
(3) ‘telephone’ conversation with another.
(4) comparing two or more objects/places/events to the assessor.
3. Which of the following is an instance of non-formal learning?
(1) Children learning through correspondence lessons.
(2) Children learning to draw from their art teacher.
(3) Children learning to cook from their parents.
(4) Children learning a new game from friends.
4. Which of the following will help learners take greater responsibility for their own learning?
(1) Summative assessment
(2) Supervised reading sessions
(3) Controlled writing tasks
(4) Peer assessment
5. Instead of asking questions and getting answers from her learners, a teacher gives some short texts and asks her learners to frame questions. Her primary objective is to
(1) make the learners realise the difficulties faced by teachers in preparing question papers.
(2) enhance the learners’ analytical and critical thinking.
(3) train the learners as good question paper setters.
(4) take their help during examinations.
6. Which of the following is an effective method in learning L2 (Language 2)/English?
(1) Theoretical reading
(2) Watching related videos on YouTube
(3) Performing tasks
(4) Reading motivational books
7. If a teacher is rude in class, it reflects his/her
(1) personality and attitude.
(2) behaviour towards the students.
(3) knowledge about the subject he/she is teaching.
(4) inability to teach.
8. A ‘special needs language classroom’ ideally
(1) is exclusively furnished.
(2) is located separately.
(3) integrates all types of learners.
(4) has extra teachers to help regular
9. The learning experiences that offer a vicarious experience to learners in a classroom are
(1) real objects and specimens.
(2) abstract words, case study.
(3) display boards, film clips.
(4) field trips, observations.
10. To inculcate a ‘Never Give Up Attitude’, a suitable activity is the one where students
(1) sing two popular songs and exhibit some of their art and craft works during the parent-teacher meet.
(2) make modifications to their paper planes and test them again, experimenting with the best way to get them fly a long distance and share their findings.
(3) in groups create graphs about the difficult situations that they have faced in life.
(4) manage to get the Principal’s permission to go out and play during the English period.
11. A student’s learning is more effective if he/she is
(2) forced to learn.
(3) seated in the first row of the class.
(4) seated in the last row of the class.
12. Divide your class into two groups and have one person from each group come to the front board. Read a sentence which uses one of a pair of homophones. The first student who correctly writes that homophone on the board scores a point for his team. In this speaking game, students learn by
(1) consciously focussing on the meanings and usage of words.
(2) collaboratively playing the game, where the teacher facilitates.
(3) ensuring no one is the winner, with everyone getting an opportunity to excel.
(4) being active as they practise the sounds.
13. After reading a poem, a teacher involves the learners in group work. One group writes the summary of the poem, another draws a picture to depict the main theme and yet another sets the poem to music. This activity
(1) caters to diverse abilities and interests.
(2) is aimed at preparing the learners for assessment.
(3) will distract the learners from the lesson.
(4) is a sheer waste of time.
14. In a diverse classroom, learners find it difficult to speak and write good English and often lapse into their mother-tongue because
(1) they are not motivated to learn.
(2) they lack enough competence and the structures of the two languages are different.
(3) they do not have the ability to learn English.
(4) they are slow learners. |
Skills & Connections:
– Measurements and length
– Basic forces
– Testing and data gathering
– Engineering design process
Project 1: Lighting Solutions
Students from an urban school became aware that a corner of their classroom is darker than they’d like.
Student design goals:
Students aimed to brighten the dark corner so that the light could be maintained at the same brightness for a period of time, with an easily changeable power source and a working switch at a height that a student could access. Furthermore, only battery power was feasible, as there was no electrical outlet available, and the students could not use anything bigger than a small bulb or an LED.
A group of four students experimented with different combinations of batteries and bulbs making a number of working circuits. After a working circuit was created, they worked on other smaller problems such as how to easily change the battery, how to attach the circuit to the wall, how to keep all the pieces of the circuit securely attached together, and how to create a switch all students could access. The team connected three small light bulb holders in a parallel circuit. The switch was created by connecting two alligator clips together.
Project 2: Storage Solutions
Students from an urban school realized that the backpack and jacket storage area was too cramped and crowded.
Student design goals:
Students aimed to create a storage solution that allows for more space and different sized bags and jackets. Existing hooks assume that everyone needs the same amount of space. This project required that not only would the new racks allow for adaptable and easily accessible space for bags of varying size, but they would use pre-existing mounting points in the classroom without drilling into the brick.
A group of three students cleared clutter and installed a more adaptable storage unit comprised of a PVC pipe, 90 degree elbows, and S-hooks. The students debated amongst each other how to best attach the PVC pipe to the existing structure. The original plan, constructed with the aid of the teacher, did not work. Students were integral in the making of the device, using hand tools with the aid of the teacher. |
With CO2 emissions on the rise, there is increasing pressure to safeguard our ecosystems and livelihoods from the devastating effects of Climate Change.
COP24, the informal name for the 24th Conference of the Parties to the UN’s Framework Convention on Climate Change, was a summit intended to finalise the Paris Agreement implementation guidelines. The original agreement, made back in 2015, was to limit the global temperature rise to well below 2 degrees Celsius, a target which is yet to be achieved. Slow progress towards that target has contributed to global warming approaching a dangerous tipping point, and as concerns mount over the narrowing window of opportunity for securing a sustainable future across the nations, so too does the pressure grow on individual governments to ensure that they do not default on their end of the deal.
The rapid rise in global temperatures has signalled the necessity of a global intervention, which is why COP24 was so critical. The 2018 summit was hosted in Katowice, Poland, the heart of the country’s coal-mining region, and its attendees included some 200 nations. Although it has been reported that the negotiations were not entirely promising, some progress has been made in outlining how efforts to lower carbon emissions will be documented across the governing bodies which have opted into COP24.
These efforts are stipulated in the form of a “Common Rulebook” that applies to all countries. It is also one that ensures flexibility to account for poorer nations. This full disclosure is a significant element of the negotiations because it creates transparency across the board. Not only does this create the sense of a global cooperative, but it also encourages individual countries to ensure that they take accountability and continue to meet the lower emissions target by 2020. Following this key deadline, nations will be expected to have cut their emissions significantly, at which point, new, and much more ambitious, targets will be affirmed for 2030.
What were some of the key issues highlighted during the summit?
- Firstly, the technicalities of the “Common Rulebook” make it difficult to implement. This is because countries have taken different practical approaches in terms of outlining their carbon-cutting efforts.
- Secondly, poorer nations expressed their concerns over being inundated with regulations they would inevitably struggle to meet.
- Legal liability for climate change was vetoed by richer nations amid concerns of raking up a huge bill.
- Friction was caused between Brazil and other countries over the carbon credit system; credits are awarded to countries based on their efforts to reduce their carbon footprint. Brazil hopes to capitalise on its large rainforests, though this has been heavily contested, and discussions have been postponed until 2019.
What could an increasing failure to cut carbon emissions actually look like?
Findings by the Intergovernmental Panel on Climate Change (IPCC) have warned that we have 12 years to save the planet. They have also warned that if global warming were to reach 1.5 degrees above the pre-industrial level it could result in an increased threat to our ecosystems and Arctic region, extreme weather events, coastal flooding, heat-related morbidity and mortality, coral dying off, crop yields destroyed, and many more devastating effects. Global environment editor for The Guardian, Jonathan Watts, insists that:
…time and carbon budgets are running out. By mid-century, a shift to the lower goal would require supercharged roll-back of emissions sources that have built up over the past 250 years.
Will countries be prepared to put their emissions where their mouths are?
The USA, Russia, Saudi Arabia and Kuwait have refused to embrace the IPCC’s findings. Instead, they merely commended the effort of the research. In addition, the US delegation has expressed indifference towards rising emissions, concluding that there will be no change to its climate policy. The USA is the only state in the world which has moved to withdraw from the 2015 Paris Agreement, though a number of other states, including Russia, have yet to formally ratify the agreement.
On Saturday, there was deadlock at #COP24 when the United States, Saudi Arabia, Russia and Kuwait objected to a proposal to “welcome” a major climate change report that was released in October. Here’s why that matters: https://t.co/eG487rOVZF
— CNN (@CNN) December 10, 2018
An international divide has been signalled further by Brazil who withdrew their offer to host the talks in 2019. Instead, the talks are set to take place in Chile to discuss the final elements of the “Common Rulebook”.
Despite this overwhelming sense of division, the EU has pledged to further cut emissions, as well as assisting poorer nations in achieving their low emission targets.
Has the deal fallen short?
There have been calls for climate goals to happen much faster and to be more ambitious, switching to renewable energy sources and ditching fossil fuels altogether. The cost of clean energy has also reduced significantly but while it has experienced exponential growth in popularity, there are calls for this to happen quicker. In spite of some of the biggest key players opting out of the agreement, increasing pressure on the governments through lobbying and protests are stepping in the right direction. For there to be any real success, however, countries must ensure that they act quickly. |
Your garden isn’t growing as well as it used to and some of the plants in the garden are starting to look a little yellow. You suspect a nitrogen deficiency in the soil, but you are unsure how to correct it. “Why do plants need nitrogen anyway?” you may be wondering. Nitrogen as a plant fertilizer is essential to proper plant growth. Let’s look at why do plants need nitrogen and how to correct a nitrogen deficiency in the soil.
Why Do Plants Need Nitrogen?
To put it in simple terms, plants need nitrogen to make themselves. Without nitrogen, a plant cannot make the proteins, amino acids and even its very DNA. This is why when there is a nitrogen deficiency in the soil, plants are stunted. They simply cannot make their own cells.
Nitrogen is all around us; it makes up 78 percent of the air we breathe. So you may also wonder why plants need added nitrogen if it is everywhere. How is nitrogen made accessible to plants? In order for plants to use the nitrogen in the air, it must be converted in some way to nitrogen in the soil. This can happen through nitrogen fixation, or nitrogen can be “recycled” by composting plants and manure.
How to Test Nitrogen of Soil
There is no reliable homemade way to test the nitrogen content of soil. You will either have to have your soil tested or purchase a soil testing kit. Typically, your local extension office will gladly test your soil for a small fee or even for free, depending on where you live. When you have your soil tested at the extension office, they will also be able to tell you about any other deficiencies you may have.
You can also purchase a kit to test the nitrogen content of your soil. These can be found at most hardware stores and plant nurseries. Most are easy and quick to use and can give you a good idea of the nitrogen content of your soil.
Fixing a Nitrogen Deficiency in the Soil
There are two routes to go when fixing a nitrogen deficiency in the soil, either organic or non-organic.
To correct a nitrogen deficiency using organic methods requires time, but will result in a more even distribution of the added nitrogen over time. Some organic methods of adding nitrogen to the soil include:
- Adding composted manure to the soil
- Planting a green manure crop, such as borage
- Planting nitrogen fixing plants like peas or beans
- Adding coffee grounds to the soil
Nitrogen as a plant fertilizer is common when purchasing chemical fertilizers. When looking to specifically add nitrogen to your garden, choose a fertilizer that has a high first number in the NPK ratio. The NPK ratio will look something like 10-10-10 and the first number tells you the amount of nitrogen. Using a nitrogen fertilizer to fix a nitrogen deficiency in the soil will give a big, fast boost of nitrogen to the soil, but will fade quickly. |
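As a quick worked example (the bag size is made up for illustration): the numbers in an NPK ratio are percentages by weight, so a 10 kg bag of 10-10-10 fertilizer contains roughly 1 kg of actual nitrogen.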
Pneumonia is an inflammation of your lungs, usually caused by infection. Bacteria, viruses, fungi or parasites can cause pneumonia. Pneumonia is a particular concern if you're older than 65 or have a chronic illness or impaired immune system. It can also occur in young, healthy people.
Pneumonia can range in seriousness from mild to life-threatening. Pneumonia often is a complication of another condition, such as the flu. Antibiotics can treat most common forms of bacterial pneumonias, but antibiotic-resistant strains are a growing problem. The best approach is to try to prevent infection. |
Planning the first day of school can be overwhelming for the teacher. With so much to tell the students, it can be hard to know where to start. The trick is to determine what is most important and blend that with ice breakers and fun activities designed to relax the students and help them become familiar with you and the classroom.
Students and Teacher
An immediate objective on the first day of school is to calm the nerves of your students and introduce yourself to them. Games designed to help your students get to know each other can break the ice. Or, pass out a "pop quiz" with true and false answers to questions about you, their teacher. This will help them get to know you and may spark some discussions about your teaching experience and why you became a teacher.
One of your first goals should be to go over the rules with your students. Don't bombard them with the rules right when they walk through the door, however. Start with an icebreaker to make the students feel more comfortable, then move into a discussion of the rules. Some teachers like to let the students have a say in developing the classroom rules. Others set their own rules which may be fairly inflexible. Some schools even have the same rules in each classroom for consistency. Discuss why rules are important. Talk about what the classroom environment feels like when students are following the rules (a safe place), and what it feels like when the students are not following the rules. Ask the students which environment is best for learning.
Familiarity with procedures is important for a smoothly running classroom. One of your objectives on the first day of school should be to introduce classroom procedures to your students. Think about what you want a typical school day to look like, then plan the procedures. Tell the students the procedures for daily activities, such as turning in homework, doing attendance and lunch count, going to the bathroom and lining up for recess. Practice these procedures, either as a whole class or with individual volunteers. Finally, your goal should be to make sure your students understand behavior policies, including what happens when the students choose to disobey.
Classroom, Curriculum and School
Another goal for the first day of school should be to help the students become familiar with the layout of the classroom and the school. They should also be introduced to the curriculum. Show the students around your classroom. Point out where they can look at books, where the pencil sharpeners are and where the computers, art supplies and other items are. Show them where they will keep their books, hang up their coats and store their backpacks. Take the students on a tour of the school as well. This is especially important if you have new students or if you have a class of students who have moved up to a new school. Finally, talk about the curriculum. Let them look through the textbooks they will be using during the upcoming year. Ask them what they would like to learn about and what they are looking forward to doing as a class this year.
Skin Protection
The skin is the single largest organ of the body. The skin, when healthy, protects us from chemical, physical, and biological hazards. Skin weighs about 10% of our total body weight and is approximately one-eighth of an inch thick. The skin is made up of two layers, the epidermis (outer layer) and the dermis (inner layer). The outer layer of skin is only 1/250th of an inch thick and is the part of our skin that forms the protective barrier.
There are many skin irritants that employees may be exposed to in the workplace. One out of every four workers may be exposed to something that will irritate the skin. Many different things may cause skin damage. When something penetrates through the outer layer, the inner layer of skin reacts to it. Strong, or regularly repeated irritations of the skin may lead to skin diseases.
The skin contains oil glands, hair follicles, and sweat glands. These are like tiny holes. So the skin can be like a sponge when it contacts something. Skin also contains blood vessels, and some chemicals can penetrate the outer layer and enter the blood stream.
The environment you are in can cause skin problems directly, or it can work with other factors to increase skin problems. These factors include:
- Heat – causes sweating. Sweating may dissolve chemicals and bring them into closer contact with the skin. Heat increases the blood flow at the skin surface and may increase the absorption of substances into the body.
- Cold – dries the skin and causes microscopic cracking. This cracking allows substances to cross the outer layer of the skin, thus entering the body.
- Sun – burns and damages the skin. Sun can increase absorption of chemicals. Sun reacts with some chemicals to enhance their negative effects on the body.
How to Protect Your Skin
- Wear long sleeve shirts and pants, to minimize the amount of skin exposed.
- When working outdoors, wear a hat with a brim.
- Use a high sun protection factor (SPF) sunscreen and reapply often.
- Wash your hands regularly during and after work.
- Wear gloves when handling chemicals.
- Where possible, use tools to handle hazardous substances instead of your hands.
When using gloves or clothing to protect yourself and your skin, you should be careful when removing contaminated clothing, so as not to contaminate yourself.
If a worker is exposed or thinks he/she may have been exposed to a hazardous substance, the area should be rinsed for at least 15 minutes. If a worker is accidentally contaminated, he or she should get under a shower immediately and remove the clothing while showering. Certain substances can be absorbed quickly across the skin. Time is critical. Medical help should be obtained immediately.
For more detailed information visit the website maintained by the Occupational Safety & Health Administration at http://www.osha.gov/SLTC/dermalexposure/index.html. |
Anatomy of a Glacier
Click on the feature labels to learn more about the parts of a glacier.
Glaciers have tremendous power to drastically transform landscapes over relatively short periods of time. Water, ice, and wind sculpted soil and rock into landforms that we can see today in northern Washington.
Lidar enables us to see these landforms in fine detail. Click on the features on the map below to see what kinds of features the ice left behind.
Get the Poster
Glacial Landforms of the Puget Lowland
Daniel E. Coe
The Cordilleran Ice Sheet
- The Vashon Stade
- Glacial Lakes of the Puget Lobe
Over the last several million years, glaciers have repeatedly inundated northern Washington. The last glacial climatic interval was called the Fraser glaciation, which sculpted much of the topography we see today in the Puget Lowland and northern Washington. These glacial periods were interrupted by warmer nonglacial climatic periods.
The figure above shows how local glaciations correlate with the climate changes during the last 2.4 million years.
How Do Scientists Determine Past Climate?
Scientists are able to figure out what the climate was like so long ago by drilling really deep cores into thick Antarctic ice and the sea floor all over the world. After removing these cores, they study thin slices under microscopes, examining microfossils, microbes, minerals, magnetic orientation, and rock type.
Paleontologists look for microfossils called foraminifera to tell us more about past ocean temperatures and climate. These creatures prefer warmer oceans and create their shells out of calcite. Scientists measure the ratio of oxygen-18 to oxygen-16 in these calcite shells, and can use these ratios to estimate past climate conditions.
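For reference, this notation is not spelled out in the text above, but such measurements are conventionally reported in "delta" notation, comparing the sample's ratio to that of a reference standard and expressing the result in parts per thousand (‰):

δ18O = [ (18O/16O)sample / (18O/16O)standard - 1 ] × 1000

Broadly speaking, higher δ18O values in foraminifera shells point to colder conditions and larger ice sheets.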
Ice and Seafloor Cores
Scientists study ice cores by analyzing the light and dark rings on the cores. These rings are created by snow capturing dust and dirt as it falls to Earth. Snowfall is higher in the winter than in the summer, resulting in thicker rings. In the summer, the snow begins to melt, creating a different composition and texture before refreezing again in winter. As the ice freezes, it traps air bubbles, which scientists also study to determine prehistoric climates using oxygen ratios, just as with sediment core samples.
Photo of ice core, by Janine and Jim Eden, Flickr Creative Commons license.
Understanding Earth's Secrets, by Kevin Kurtz and Alice Feagan is a great resource for young scientists to learn about microfossils and ice core drilling. Click the image to download the pdf!
The Vashon Stade, part of the Fraser Glaciation, was the latest major incursion of ice into the Puget Lowland. Ice advanced as far south as Tenino, WA, and was upward of 4,200 feet thick in the northern Puget Lowland.
Contours show approximate ice thickness in feet at glacial maximum during the Vashon Stade. Map derived from ice thickness contours by R. M. Thorsen (1980).
The Whens, Wheres, and Hows of the Vashon Stade
So how do geologists determine when the glacier arrived at or left a certain spot? Use the interactive graphic below to learn more.
The Missoula Floods
- Ice-Age Flood Story Map
- More Resources
Click on the image to open our Story Map about Washington's Ice Age Floods. This Story Map is best viewed on a desktop or laptop computer. Mobile devices will not show all of the content.
Get the Posters
Ice Age Floods National Geologic Trail, Washington Section
Daniel E. Coe and Ashley A. Cabibbo
DNR Earth Science Week 2016
The Cheney-Palouse Tract of Washington's Channeled Scablands
Daniel E. Coe
Lidar composite image of Mount Rainier's glaciers.
How do glaciers shape the landscape? from Oxford Education
14 insane glaciers you won't believe from Talltanic |
Brazil had the largest net forest loss of any country between 1990 and 2010. Perhaps for this reason, in 2008 Brazil pledged to reduce the rate of deforestation in Brazil's Legal Amazon to 80 percent of historical rates by 2020 as part of its National Climate Action Plan. Since an enormous stock of carbon remains trapped in Brazil's Amazonian forests, saving these lands could dramatically reduce global greenhouse gas emissions.
New economic models suggest that deforestation could be greatly reduced by 2030 simply by changing the economics of raising cattle. You can get a reduction in forest clearance by taxing land on which cattle are pastured conventionally or by subsidizing land on which cattle are pastured semi-intensively. Both work, but the tax saves slightly more forest, and thus offers more greenhouse gas abatement.
And cattle ranching is the leading cause of deforestation in the Brazilian Amazon. Seventy percent of formerly forested land, and 91 percent of land deforested since 1970, is used for livestock pasture. It is possible, however, to graze cattle in what’s called a semi-intensive system, in which livestock feeding is based on a combination of pasture grazing and harvested forages. When utilized properly, this system can double the productivity of pasturelands compared to conventional land management practices.
The new paper models the impacts of two policies: a tax on land-intensive grazing, and a subsidy for semi-intensive grazing. The models revealed how each affected greenhouse gas emissions, land use, agriculture, and commodity markets. The authors also incorporated different trade scenarios to examine how international movement of cattle products and beef consumption would be affected by the tax or the subsidy.
The tax ended up raising the price of beef, which discouraged all livestock raising, including the adoption of the semi-intensive system. But the tax did save forest because the higher beef prices discouraged both exports and domestic consumption.
Under the subsidy, the decreased cost of beef production in Brazil led to increased exports and beef consumption. Even though less forest was cleared for each unit of beef produced, this ultimately caused continued deforestation and greenhouse gas emissions. Even so, the pace went down, so either policy would achieve more than half of Brazil's deforestation policy targets.
Land use is a major source of greenhouse gas emissions in Brazil and other emerging economies. This model suggests that a tax, a subsidy, or a combination of the two methods can encourage intensive pasturing and land sparing. They might be an effective way for these countries to balance agricultural development with forest protection. |
ICS 53, Winter 2017
Lab 3: A Memory Allocator
You will write a program which maintains a heap which you will organize as an
implicit free list. Your program will allow a user to allocate memory, free memory,
and see the current state of the heap. Your program will accept user commands and
execute them. Your program should provide a prompt to the user (“>”) and accept
the following 7 commands.
1. allocate - This function allows the user to allocate a block of memory from
your heap. This function should take one argument, the number of bytes
which the user wants in the payload of the allocated block. This function
should print out a unique block number which is associated with the block of
memory which has just been allocated. The block numbers should increment
each time a new block is allocated. So the first allocated block should be block
number 1, the second is block number 2, etc. Notice that only the allocated
blocks receive block numbers.
> allocate 10
> allocate 5
2. free - This function allows the user to free a block of memory. This function
takes one argument, the block number associated with the previously
allocated block of memory.
> allocate 10
> free 10
When a block is freed its block number is no longer valid. The block number should
not be reused to number any newly allocated block in the future.
3. blocklist - This command prints out information about all of the blocks in your
heap. It takes no arguments. The following information should be printed
about each block:
II. Allocated (yes or no)
III. Start address
IV. End address
Addresses should be printed in hexadecimal. The blocks should be printed in the
order in which they are found in the heap.
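To make the data structure concrete, here is a minimal sketch of an implicit free list in C. It is an illustration only: the header layout, field names and the fixed-size simulated heap are our assumptions, not part of the assignment hand-out, and it omits allocation, freeing, alignment padding and coalescing.

#include <stdio.h>
#include <stdint.h>

#define HEAP_SIZE 1024                 /* bytes managed by the allocator     */

/* Each block begins with a header recording its payload size and a flag.   */
typedef struct {
    uint32_t size;                     /* payload size in bytes              */
    uint32_t alloc;                    /* 1 = allocated, 0 = free            */
} header_t;

static _Alignas(8) uint8_t heap[HEAP_SIZE];   /* simulated heap              */

/* Initialise the heap as one big free block. */
static void heap_init(void)
{
    header_t *first = (header_t *)heap;
    first->size  = HEAP_SIZE - sizeof(header_t);
    first->alloc = 0;
}

/* Walk the implicit list and print every block, as "blocklist" would. */
static void block_list(void)
{
    uint8_t *p = heap;
    while (p < heap + HEAP_SIZE) {
        header_t *h = (header_t *)p;
        uint8_t *start = p + sizeof(header_t);      /* payload start        */
        uint8_t *end   = start + h->size - 1;       /* payload end          */
        printf("allocated: %s  start: %p  end: %p\n",
               h->alloc ? "yes" : "no", (void *)start, (void *)end);
        p = start + h->size;           /* skip header + payload to the next  */
    }
}

int main(void)
{
    heap_init();
    block_list();
    return 0;
}

The list is "implicit" because no pointers are stored between blocks: the next header is always found by skipping over the current header plus its payload. |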
University and Elementary School Perspectives of Ideal Elementary Science Teacher Knowledge, Skills, and Dispositions
Type of Degree: dissertation
Department: Curriculum and Teaching
Teacher education knowledge, skills, and dispositions have recently become a well-discussed topic among education scholars around the nation, mainly due to the attention given to them by the National Council for Accreditation of Teacher Education (NCATE) over the past few years. Accrediting agencies, such as NCATE and the Interstate New Teacher Assessment and Support Consortium (INTASC), have sought to improve the quality of teacher education programs by examining knowledge, skills, and dispositions as factors in preparing highly qualified teachers. There is a paucity of research examining these factors for elementary science teachers. Because these factors influence instruction, and students are behind in scientific and mathematical knowledge, elementary science teachers should be studied. Teacher knowledge, skills, and dispositions should be further researched in order to ultimately increase the quality of teachers and teacher education programs. In this particular case, by determining what schools of education and public schools deem important knowledge, skills, and dispositions needed to teach science, higher education institutions and schools can collaborate to further educate these students and foster the necessary qualities needed to teach effectively. The study of knowledge, skills, and dispositions is crucial to nurturing effective teaching within the classroom. Results from this study demonstrated that there were prominent knowledge, skills, and dispositions identified by teachers, administrators, and science teacher educators as important for effective teaching of elementary science. These characteristics included: a willingness to learn, or open-mindedness; content knowledge; planning, organization, and preparation; the significance of teaching science; and science-related assessment strategies. Interestingly, administrators in the study responded differently than their counterparts in the following areas: their self-evaluation of teacher effectiveness; how the teaching of science is valued; the best approach to science teaching; and planning for science instruction. When asked about their teaching effectiveness while teaching science, principals referred to enjoying science teaching and improving their practice, while teachers and science teacher educators discussed content knowledge. Administrators valued conducting experiments and hands-on science while teaching science, while their educational counterparts valued creating student connections and providing real-life applications of science for students. In their professional opinions, administrators preferred a hands-on approach to science teaching. Teachers and science teacher educators stated that they view scientific inquiry, exploration, and discovery as effective approaches to teaching within their classrooms. Administrators predicted that teachers would state that a lack of resources affects their lesson planning in science. However, teachers and science teacher educators asserted that taking time to plan for science instruction was most important. |
Deaf Education Teachers - Help ensure that the classroom environment can be interpreted
The deaf educator is typically sensitive to the challenges that classrooms can pose for students who are deaf or hard of hearing. There can be problems with an interpreted education that do not have to do with the interpreter’s skills. Some classrooms and some teachers can be more challenging to interpret than others. The deaf educator can help the classroom interpreter and the classroom teacher evaluate the accessibility of the learning environment, the unique challenges that may be created by the teacher’s instructional style, and the appropriate strategies to support effective interpretation. Hearing teachers who have limited experience teaching students who are deaf or hard of hearing may not naturally understand how to adjust their teaching styles to allow the interpreter to perform well. |
Neanderthals reached full maturity faster than humans do today, suggests a new examination of teeth from 11 Neanderthal and early human fossils. The findings, detailed in the latest Proceedings of the National Academy of Sciences, portray Neanderthals as a live fast and die young species.
Our characteristically slow development and long childhood therefore appear to be recent and unique to Homo sapiens. These traits may have given our early modern human ancestors an evolutionary advantage over Neanderthals.
"I think Neanderthals retain a more primitive developmental condition that seems to be shared with earlier fossil humans," lead author Tanya Smith told Discovery News. "We know from other studies of dental and cranial development that australopithecines (early hominids from Africa) and Homo erectus did not show long or slow developmental periods like our own."
Smith, an assistant professor in the Department of Human Evolutionary Biology at Harvard University, and her colleagues made the determination after using a high-tech method called synchrotron micro-computed tomography to virtually count growth lines in teeth. These lines, like rings in trees, reveal yearly growth progress.
"Even more impressive is the fact that our first molars contain a tiny 'birth certificate,' and finding this birth line allows scientists to calculate exactly how old a juvenile was when it died," she said. In one instance, a juvenile Neanderthal was determined to be only three years old when it died, as opposed to age four or five as had previously been suspected.
She and her team also discovered that anatomically modern human groups that left Africa some 100,000 years ago experienced an elongation of their childhood, which has been with our species ever since. All other primates have shorter gestation, faster childhood maturation, younger age at first reproduction, and a shorter overall lifespan.
While delaying reproduction poses a risk that individuals may not live long enough to reproduce, it could facilitate learning, social development and complex cognition. Neanderthals are known to have large brains, as well as large bodies. Without much time for learning, however, those big brains might not have been much of a match for our own impressively large-brained species.
Smith said some researchers also suggest that slowing down childhood "may have allowed for conservation of energy, and this may have accompanied decreased mortality rates and/or more favorable environmental conditions."
Even today, traditional human populations show variation in rates of growth and development, likely due to selective pressures and environmental constraints. These probably affected the different hominid groups as they evolved in either Africa or, in the case of Neanderthals, in Europe and other northern regions.
"There are more than 100 known Neanderthal fossil juveniles," Smith noted, "a relatively large number when compared with all known Neanderthal individuals, which may imply that childhood mortality was high."
The findings, part of a five-year project exploring the development of Neanderthals, add to the debate on what differences existed between these archaic humans and our own species, as well as what happened to the Neanderthals.
Debbie Guatelli-Steinberg, an associate professor of anthropology at Ohio State, and her team previously concluded that Neanderthal teeth grew no faster than those of modern humans. But she and her colleagues left open whether a Neanderthal childhood was equal, at least in length, to that of our species.
Erik Trinkaus, a Neanderthal expert who is a professor of physical anthropology at Washington University in St. Louis, believes that the modern humans who came from Africa had no real edge over Neanderthals when they first spread across Eurasia.
At this time in our history, "archaic humans remained across the more northern areas, and even displaced the modern humans in Southwest Asia for an additional 50,000 to 70,000 years," Trinkaus told Discovery News. "It argues for very little adaptive advantage on the part of these modern humans."
He and some other anthropologists think Neanderthals and modern humans mated, so the Neanderthals may have simply been absorbed into our own species over time.
Smith and her team, however, hint that forthcoming new studies reveal genetic and brain differences that existed between Neanderthals and members of our species, further heating up the scientific debate.
© 2012 Discovery Channel |
by Kärt Tomberg.
Humans, like all vertebrates, have a closed cardiovascular system where blood circulates under pressure. To maintain the integrity of this system in the case of injuries, vertebrates have developed a complex blood clotting mechanism capable of closing wounds. Without such a capability, humans would literally bleed to death from even a small paper cut. Although blood clotting is an important part of our physiology, the occurrence and the extent of this process has to be accurately controlled. Too much clotting interferes with blood flow and can lead to serious medical conditions like stroke, heart attack and venous thrombosis, a common condition where a blood clot forms within a vein.
Venous thrombosis (VT) is the cause of major health problems in western societies. Every year in the United States, it affects 1 in every thousand individuals and is responsible for around 300,000 hospital admissions. Venous thrombosis is a complex disease, primarily because it is determined by both our genetic makeup (60%) and our environment and lifestyle (40%). One gene mutation associated with venous thrombosis is called Factor V Leiden (FVL). 25-30% of those who suffer venous thrombosis carry this mutation, and the 3-7% of Europeans who carry it have a 10% risk of developing venous thrombosis in their lifetime. Clearly, there are other genes influencing the development of venous thrombosis, and understanding which genes is critical to prevention and improved treatment. Despite recent efforts, most of the genes that contribute to venous thrombosis risk are still unknown. The complexity of the disease, along with highly variable populations and living environments, has made it difficult for the critical genes to be identified. While each human has a unique and complex genetic make-up, laboratory mice of the same strain all have identical genetics (just like identical twins), and so it is simpler for us to study mice to understand gene mutations.
The control of the delicate balance between clotting and bleeding is achieved by multiple factors in our body, which we can broadly categorize into two groups: pro-clotting and pro-bleeding. These factors are in constant dialogue to allow for efficient clotting, but only in specific regions and when needed. In a healthy individual these factors are in perfect balance. If this balance is severely disrupted by a mutation in one of the factors, it will result in either extensive bleeding or clotting.
Although mutations in either pro-bleeding or pro-clotting factors alone may lead to severe disorders, having two of these mutations simultaneously can balance out the detrimental effects and result in a healthy individual. In other words, one mutation could 'rescue' the other from the disease. This 'rescue' phenomenon can be applied to screen for unknown clotting genes using a mouse with a disturbed bleeding-clotting balance due to a known mutation. For example, there are mice with the same mutation in the FVL gene that is also present in humans. The offspring of these mice that also inherit a mutation in another clotting gene suffer extensive blood clotting and die at birth – as illustrated by the crossed-out red mice in the inset figure. In these mice the bleeding-clotting scale is heavily tilted towards clotting.
The idea of screening is to go through thousands of chemically introduced random mutations (illustrated by the white diamonds in the inset figure) in all genes in order to find the ones capable of re-balancing the scale in that mouse model. Only a mutation in a factor on the opposite side of the scales is expected to have an effect. Other mutations will either have no effect on the bleeding-clotting balance, or will make the condition worse. As the mice with severe clotting are expected to die at an early age, the presence of survivors is taken to indicate the presence of a balancing gene. Survivors, or ‘rescued’ mice, are expected to carry a mutation in a gene that plays a role in controlling blood clotting. In our laboratory, we have identified multiple such survivors, each with a different mutation.
Having identified ‘rescued’ mice, the next challenge is to identify where the randomly created mutation is located in their DNA. With new next-generation sequencing technologies it is now possible to search through mouse genes to find mutations and thus genes involved in rescue. Although such sequencing technologies are complex, they all rely on the identification of small pieces of DNA, followed by the reassembling of the whole DNA sequence from these small pieces using a reference sequence. All positions that differ from the reference are potential mutations.
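As a toy illustration of that final comparison step (written in C; the two sequences here are invented, and real pipelines align millions of short reads before any such comparison is possible):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *reference = "ACGTTGCAACGT";   /* made-up reference sequence   */
    const char *sample    = "ACGTTGGAACGT";   /* reassembled sample sequence; */
                                              /* one base differs, position 7 */
    size_t len = strlen(reference);

    for (size_t i = 0; i < len; i++) {
        if (sample[i] != reference[i]) {
            /* every mismatch is only a candidate mutation */
            printf("position %zu: reference %c, sample %c\n",
                   i + 1, reference[i], sample[i]);
        }
    }
    return 0;
}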
The mutations found in these studies could potentially be an important link for understanding the mechanisms of blood clotting, and could serve as a potential target for new pharmaceutical treatments. To establish such potential, many years of research lie ahead. There remain a significant number of questions to be answered before the identification of a gene can lead to the development of a drug. Scientists are currently asking what other possible biological pathways such a gene might affect, whether the function of this gene might be the same in humans as in mice, what effect the drug treatment might have in animals and whether the drug would eventually be effective in humans.
Meanwhile, since many of the identified genes may not allow the development of an associated drug, it is important to identify as many potential genes as possible. One could well succeed, and if so, end up helping hundreds of thousands of venous thrombosis patients annually in the U.S.A. alone.
Kärt Tomberg is a 2010 fellow of the Fulbright Science & Technology Award Program, from Estonia, and a PhD candidate in Human Genetics at the University of Michigan, Ann Arbor. |
Today is World Health Day, which celebrates the founding of the World Health Organization in 1948. The theme of World Health Day for 2015 is Food Safety.
The World Health Organization reports that an estimated 2 million people die annually due to unsafe food, and America is not immune to this. CNN says that the number of Americans becoming sick or dying from contaminated foods has increased 44% since last year. CNN cites "tainted cantaloupe, unsafe mangoes, meat, and the peanut butter recall" as examples of unsafe foods that have made their way into our diets in the past.
And while popular belief says otherwise, the leading cause of food-borne illness is the mishandling of food at home - not eating out!
Here are some helpful tips on World Health Day to avoid dealing with unsafe food:
1. Wash your hands as often as you can when preparing food!
Be sure to wash your hands before you work with any food or cooking utensils, as well as in between handling raw meat and fresh vegetables. And while we all slack on correct hand washing from time to time, it's important to wash all areas of your hands and wrists for at least 20 seconds.
By the same token, examine your hands before you cook. If you have any sores or cuts on your hands, wear gloves for cooking.
2. Avoid cross contamination
Make sure that you keep your different foods separate when cooking - especially raw meats! Take measures such as using different cutting boards, plates, and cooking utensils when preparing different foods.
3. Cook your food to the proper temperatures
Dangerous bacteria are killed when we cook our food to the correct temperature, but the correct temperature varies for each food. NutritionBite provides a good guide on the correct temperatures to cook at. Also be sure to refrigerate foods quickly in order to prevent the growth of bacteria. |
So far we've presented a straightforward view of covalent bonding as the sharing of electrons between two atoms. However, we have yet to answer questions such as these: How are electrons shared? What orbitals do shared electrons reside in? Can we say anything about the energies of these shared electrons? Our task now is to extend the orbital scheme that we've developed for atoms to describe bonding in molecules.
Valence bond theory (VB) is a straightforward extension of Lewis structures. Valence bond theory says that electrons in a covalent bond reside in a region that is the overlap of individual atomic orbitals. For example, the covalent bond in molecular hydrogen can be thought of as the result of the overlap of two hydrogen 1s orbitals.
In order to understand the limitations of valence bond theory, first we must digress to discuss molecular geometry, which is the spatial arrangement of covalent bonds around an atom. A very simple and intuitive approach, the Valence Shell Electron Pair Repulsion (VSEPR) model, is used to explain molecular geometry. VSEPR states that electron pairs tend to arrange themselves around an atom in such a way that repulsions between pairs are minimized.
For instance, VSEPR predicts that carbon, which has a valence of four, should have a tetrahedral geometry. This is the observed geometry of methane (CH4). In such an arrangement, each bond about carbon points to the vertices of an imaginary tetrahedron, with bond angles of 109.5 degrees, which is the largest bond angle that can be attained between all four bonding pairs at once. Similarly, the best arrangement for three electron pairs is a trigonal planar geometry with bond angles of 120 degrees. The best arrangement for two pairs is a linear geometry with a bond angle of 180 degrees.
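As an aside not found in the original text, the 109.5-degree figure can be checked with a little vector geometry: place the four bonding pairs along directions toward alternating corners of a cube centred on the carbon atom, for example (1,1,1), (1,-1,-1), (-1,1,-1) and (-1,-1,1), and compute the angle between any two of them:

\[
\cos\theta = \frac{(1,1,1)\cdot(1,-1,-1)}{\lvert(1,1,1)\rvert\,\lvert(1,-1,-1)\rvert} = \frac{1-1-1}{\sqrt{3}\sqrt{3}} = -\frac{1}{3},
\qquad
\theta = \arccos\!\left(-\tfrac{1}{3}\right) \approx 109.47^{\circ}
\]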
The VSEPR scheme includes lone pairs as well as bonded pairs. Since lone pairs are closer to the atom, they actually take up slightly more space than bonded pairs. However, lone pairs are "invisible" as far as the geometry of the atom is concerned. For instance, ammonia (NH3) has three N-H bonded pairs and one lone pair. These four electron pairs will, like those of methane, occupy a tetrahedral arrangement. Since lone pairs take more space, the H-N-H bond angle is reduced from 109.5 degrees to about 107 degrees. The geometry of ammonia is trigonal pyramidal rather than tetrahedral since the lone pair is not included. By similar reasoning, water has a bent geometry with a bond angle of about 105 degrees.
Note that multiple bonds don't affect the VSEPR scheme. A double or triple bond is considered no more repulsive than a single bond.
The Valence Bond model runs into problems as soon as we try to take molecular geometries into account. The tetrahedral geometry of methane is clearly impossible if carbon uses its 2s and 2p orbitals to form the C-H bonds, which should yield bond angles of 90 degrees. |
In this tutorial the instructor shows how to find the x- and y-intercepts of rational functions. Finding the intercepts of a rational function is similar to finding the intercepts of other equations: you can find the x-intercept by setting the value of y to zero and solving the equation, and the y-intercept by setting the value of x to zero and solving. While solving a rational function for its intercepts, if the denominator evaluates to zero there is no intercept at that point, because division by zero is undefined. This video shows how to solve for the x- and y-intercepts of rational functions.
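As a worked illustration (the function below is our own example, not one used in the video), take f(x) = (x - 2)/(x + 3):

\[
\text{x-intercept: } \frac{x-2}{x+3}=0 \;\Rightarrow\; x-2=0 \;\Rightarrow\; x=2
\quad\text{(the denominator is } 2+3=5\neq 0\text{, so the intercept is valid),}
\]
\[
\text{y-intercept: } f(0)=\frac{0-2}{0+3}=-\frac{2}{3}.
\]

By contrast, a function such as g(x) = (x - 2)/x has no y-intercept, because its denominator is zero at x = 0. |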
powered cars, wind-propelled skimmers). The testing involves measuring speed, distance, direction, and duration in conjunction with the systematic manipulation of key variables that affect vehicle performance (e.g., balloon inflation, sail size and shape, gear ratios, wing placement, nose weight). The data are organized into tables or graphs to see if they reveal patterns and relationships among the variables. The conclusions based on the data are then used to inform the design of subsequent vehicles.
Similar instances of gathering and using data for vehicle design were found in the Models and Designs unit in the “Full Option Science System” and the Gateway to Technology unit of “Project Lead the Way.” Other materials engage students in counting and measuring, completing tables, drawing graphs, and making inferences, such as evaluating pump dispensers, conducting surveys, and testing materials.
Engineers often use mathematical equations and formulas to solve for unknowns. Young people can learn about the utility of this application of math in various ways, such as by calculating the amount of current in a circuit based on known values for voltage and resistance or determining the output force of a mechanism based on a given input force and a known gear ratio. Several instances of this kind were found in the “Engineering the Future” curriculum. In one activity, students calculate the weight of a proposed product (an organizer) based on three different materials prior to prototyping. Another requires that students calculate the mechanical advantage of a lever to determine how much force is required to test the strength of concrete.
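For instance (the numbers below are invented for illustration and are not taken from any of the curricula described), Ohm's law turns known voltage and resistance directly into current, and a known mechanical advantage scales an input force in the same solve-for-the-unknown way:

\[
I=\frac{V}{R}=\frac{12\ \text{V}}{60\ \Omega}=0.2\ \text{A},
\qquad
F_{\text{out}}=F_{\text{in}}\times\text{mechanical advantage}=50\ \text{N}\times 4=200\ \text{N (friction ignored)}.
\]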
However, most of the mathematics in the “Engineering the Future” curriculum is used to teach science concepts by illustrating relationships between variables, rather than to assist in solving design problems. For example, simple algebraic equations are used to represent the relationship between the cross-section of a pipe and its resistance to fluid flow, to calculate the output pressure of a hydraulic pump, and to determine the power produced by an electrical circuit. In these cases, mathematics is used to build domain knowledge in much the same way mathematics is used in science classes.
Several projects (e.g., “A World in Motion,” “Building Math,” Gateway to Technology, “Design and Discovery,” “Designing for Tomorrow”) introduce and require the application of basic geometry principles in conjunction with the development of technical drawings. For example, “Engineering the Future” includes lessons dealing with the concepts of scale and X, Y, and Z axes in the context of making orthographic, isometric, oblique, and perspective drawings. Introduction to Engineering Design, a unit in “Project |
Understanding What Your Child is Saying
Learning to talk is a gradual process. It’s common for a child’s speech to become less clear as she tries to use more words with more difficult sounds, because these require more effort and motor control.
Your child may in fact end up saying as little as possible during different stages of learning to talk, or they may begin to act up out of frustration at not being able to communicate the way they would like.
It is very important for parents to pay close attention to their child’s attempts to communicate, and to encourage these attempts.
Here are some tips to use if you’re having trouble understanding what your child is trying to say:
- If you don’t understand what your child is saying, encourage them to repeat it by saying things like “Tell me again” or “Tell me more.”
- If you got part of what your child said, repeat the part that you understood, and ask them to fill in the missing parts.
Watch your child closely.
- Watch for eye movements or gestures that might give you a hint about what your child is trying to say.
Ask your child for help.
- Make it appear like you’re having trouble hearing by saying things like “I didn’t quite hear that” and ask your child to say it again.
If after all of your attempts, you still can’t understand what your child is trying to tell you, you may have to apologetically say that you do not understand.
Usually children’s speech improves over time. If you are concerned that your child’s speech isn’t improving or if your child keeps acting up out of frustration over not being able to be understood, you may want to discuss this with your child’s doctor. You can also call the Canadian Association of Speech-Language Pathologists and Audiologists at 1-800-259-8519, and they will guide you to an appropriate referral if needed. |
Growing Space - Let's Get Familiar With It!
Students study agricultural terms. They categorize words and provide justification for word categories and write and state aloud 6-8 new sentences, in context, from the new terms.
Transportation and Space: Reuse and Recycle
What can I use in space? The three-lesson unit has groups research what man-made or natural resources would be available during space exploration or habitation. Team members think of ways that resources can be reclaimed or reused in...
10th - 12th Science CCSS: Adaptable
Get Connected with Ohm's Law
Ideal for your electricity unit, especially with middle schoolers, this lesson gets engineers using multimeters in electrical circuits to explore the relationships among voltage, current, and resistance. Older learners may even plot data...
5th - 12th Science CCSS: Designed
Space Station Research Explorer
Take a trip into outer space from the safety of your classroom. A great addition to the digital library of any science teacher, this reference offers a behind-the-scenes look at the research going at the International Space Station.
4th - Higher Ed Science CCSS: Adaptable
Will Future Spacecraft Fit in Our Pockets?
Say goodbye to giant rocket ships and hello to micro-spacecraft. Taking a look at the future of space exploration, this video explores the development of tiny, expendable space probes that can investigate the far reaches of space and...
5 mins 7th - 12th Science CCSS: Adaptable
Who Won the Space Race?
Modern animation presents an overview of the history of space exploration. Beginning with Sputnik in 1957, the international space race was on. Eventually, space exploration became, not a competition, but rather a collaboration. Also,...
5 mins 6th - 12th Science CCSS: Adaptable |
Constitution of the United States
The Constitution has undergone gradual alteration with the growth of the country. Some of the 27 amendments were brought on by Supreme Court decisions. However, the first 10 amendments, which constitute the Bill of Rights, were added within two years of the signing of the federal Constitution in order to ensure sufficient guarantees of individual liberties. The Bill of Rights applied only to the federal government. But since the passage of the Fourteenth Amendment (1868), many of the guarantees contained in the Bill of Rights have been extended to the states through the "due process" clause of the Fourteenth Amendment.
The Bill of Rights
The First Amendment guarantees the freedom of worship, of speech, of the press, of assembly, and of petition to the government for redress of grievances. This amendment has been the center of controversy in recent years in the areas of free speech and religion. The Supreme Court has held that freedom of speech does not include the right to refuse to testify before a Congressional investigating committee and that most organized prayer in the public schools violates the First Amendment.
The right to keep and bear arms—adopted with reference to state militias but interpreted (2008) by the Supreme Court as essentially an individual right—is guaranteed by the Second Amendment, while freedom from quartering soldiers in a house without the owner's consent is guaranteed by the Third Amendment. The Fourth Amendment protects people against unreasonable search and seizure, a safeguard only more recently extended to the states.
The Fifth Amendment provides that no person shall be held for "a capital or otherwise infamous crime" without indictment, be twice put in "jeopardy of life or limb" for the same offense, be compelled to testify against himself, or "be deprived of life, liberty, or property without due process of law." The privilege against self-incrimination has been the center of a great deal of controversy as a result of the growth of Congressional investigations. The phrase "due process of law," which appears in the Fifth Amendment, is also included in the Fourteenth Amendment. As a result there has been much debate as to whether both amendments guarantee the same rights. Those in favor of what is termed fixed due process claim that all the safeguards applied against the federal government should be also applied against the states through the Fourteenth Amendment. The supporters of the concept of flexible due process are only willing to impose those guarantees on the states that "are implicit in the concept of ordered liberty."
The Sixth Amendment guarantees the right of speedy and public trial by an impartial jury in all criminal proceedings, while the Seventh Amendment guarantees the right of trial by jury in almost all common-law suits. Excessive bail, fines and "cruel and unusual" punishment are prohibited by the Eighth Amendment. The Ninth Amendment states that "The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people."
By the Tenth Amendment "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people." Powers reserved to the states are often termed "residual powers." This amendment, like the commerce clause, has been a battleground in the struggle over states' rights and federal supremacy.
Of the succeeding seventeen amendments, the Eleventh, Seventeenth, Twenty-second and Twenty-third Amendments have already been discussed under Articles 1, 2, and 3. The Twelfth (1804) revised the method of electing President and Vice President. The Thirteenth (1865), Fourteenth (1868), and Fifteenth (1870) are the Civil War and Reconstruction amendments; they abolish slavery, while guaranteeing civil rights and suffrage to U.S. citizens, including former slaves. The Sixteenth Amendment (1913) authorizes the income tax. Prohibition was established by the Eighteenth Amendment (1919) and repealed by the Twenty-first (1933). The Nineteenth (1920) grants woman suffrage. The Twentieth (1933) abolishes the so-called lame-duck Congress and alters the date of the presidential inauguration. The poll tax and any other tax made a requirement for voting in primaries and elections for federal office were outlawed by the Twenty-fourth Amendment (1964). The Twenty-fifth (1967) establishes the procedure for filling the office of Vice President between elections and for governing in the event of presidential disability. The Twenty-sixth Amendment (1971) lowers the voting age in all elections to 18. The Twenty-seventh Amendment (1992), first proposed in 1789, establishes procedures for Congressional pay increases.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
Homographs are words that are written the same way but have different meanings and often different pronunciations:
'Wind' can mean the movement of air when talking about the weather. It can also mean to follow a course or way that is not straight: 'the road winds through the mountains.' These are different words with different pronunciations although they are written the same way.
|
The Genetic Code educational game with two related readings, are based on the 1968 Nobel Prize in Physiology or Medicine awarded for work on the genetic code and its role in protein production. The work revealed how the four-letter code of DNA can be translated into the 20-letter alphabet of amino acids, the building blocks that make up proteins.
- What are proteins made up of?
- How many different building blocks are used to produce proteins?
- How is the information in the DNA strand translated into a recipe of a certain protein?
- What language codes for the various amino acids in humans?
Get cracking with the code! Within DNA lies the genetic information needed to produce the proteins that carry out all the vital functions in our cells -- but how do the four bases in DNA produce the 20 amino acids that are the building blocks of proteins? In each gene a sequence of three DNA bases in a specific order, known as a codon, acts as the blueprint for the manufacture of each of the 20 amino acids. Discover how this code works by getting the “Book of Life” -- which contains all the codes for translating DNA into amino acids -- either through using up some of your points or through carrying out an experiment. Armed with the Book of Life, you can take on the computer or challenge a friend to a genetic version of 'five in a row'.
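A quick count, added here for clarity rather than taken from the game itself, shows why the code must read the bases three at a time:

\[
4^{1}=4<20,\qquad 4^{2}=16<20,\qquad 4^{3}=64\geq 20.
\]

Single bases or pairs of bases cannot distinguish 20 amino acids, while triplets give 64 codons – enough for all 20 amino acids plus the redundancy and stop signals found in the real genetic code.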
For instructions on how to play the game, click on the HELP button found at the bottom of the game window.
- DNA – the Blueprint of Life
- RNA – a Blueprint copy
- Amino Acids Make Up the Protein
- The RNA Message
- Interpreting the Message
- Visualizing the Code
- What code?
- Making protein from DNA
- Attempts to Decipher the Code
- Early Ideas Sprung from the "RNA Tie club"
- Not a Member of the Club
- A Clever Experiment
- Solving the Rest of the Puzzle
- Using the Code Today |
By Opal Dunn, author and educational consultant
For the most part, it is parents who teach their young children to speak their home language. Throughout the first two years of life, it is often the mother’s voice and her special way of talking, called ‘parentese’, that teaches young children about language and how to talk.
Parents, even with a basic knowledge of English, can successfully support their young child learning English by re-using and adjusting many of these same parentese techniques.
Parents may worry about their accent in English. Young children have a remarkable ability to alter their accent to match the English of their surroundings. Young children need to feel ‘I can speak English’ and ‘I like English’ and their parents’ support can help them achieve this from their first lessons.
Read the notes below on speaking English at home. You can also download these notes as a printable booklet.
Why parents’ help is best
- Parents can focus on their child, spending some one-to-one time with them.
- Parents can fit English sessions into any part of their day to suit their child and themselves.
- Parents can regulate the length of an English session and select activities to fit their child’s needs, interests and ability to concentrate.
- Parents know their child intimately and can intuitively judge the type of English talking suitable for their individual ways of picking up language.
- Parents can best interpret their child’s moods and respond to them. Children have days when they eagerly absorb language and others when they find it difficult to concentrate.
- Parents can introduce more fun, as they are working with an individual, not a class.
- Parents can introduce English culture into family life, so broadening their child’s outlook and understanding of their own culture as well as things English.
What is parentese language?
‘Parentese’ is a form of talking that tunes into and adjusts to a young child’s language, providing dialogue with the child and shepherding them to their next level of competence. Women appear to be innate users of parentese; some men seem to find it more difficult unless they can centre their talk around specific objects – a picture book or a game. However, children – especially boys – need male role models as men use language differently. Men tend to take a more technical approach to using language and ‘chatter’ less.
Parents, using a softer, caring voice and simpler language, unconsciously shepherd their young child through an activity by:
- a running commentary (talking aloud) on what is going on: ‘Let’s put it here.’ ‘There.’ ‘Look. I’ve put it on the table.’ ‘Which one do you like?’ [pause] ‘Oh, I like this one.’ ‘The red one’
- repeating useful language more often than in adult talk: repetition introduced naturally helps the child to confirm what they are picking up – it is not boring for the child, even if it is for the parent
- reflecting back what their child has said and enlarging it: Child: ‘Yellow’; Parent: ‘You like the yellow one.’ ‘Here it is.’ ‘Here’s the yellow one.’ ‘Let’s see. yellow, red and here’s the brown one.’ ‘I like the brown one, do you?’ [pause]
- talking more slowly and stressing new words naturally without altering the melody of the language. ‘Which rhyme shall we say today?’ ‘ You choose.’ [pause for child to select]
- using the same phrases each time to manage English sessions as well as activities and games. As children’s understanding increases, these basic phrases are enlarged: ‘Let’s play Simon says.’ ‘Stand there.’ ‘In front of me.’ ‘That’s right.’ ‘Are you ready?’
- adding facial expression and gesture to aid understanding
- using eye contact in one-to-one exchanges to reassure and also to encourage a hesitant child to speak
- pausing for a longer time as children need to think about what they hear before they are ready to reply. When speaking is still limited, exaggerated pauses can add fun or hold interest in a game.
Some parents find it embarrassing to dramatise and use parentese. However, for the child, it makes picking up English easier as they are familiar with these natural ‘mini-lessons’ in their home language. Once young children begin to speak, parents innately feel less need to use parentese, except when introducing new language or activities.
By using simple English with plenty of repetition, parents help their child to begin thinking in English during activities where they feel secure and can predict what is going to happen, like games or ‘rhyme times’.
Young children want to be able to talk in English about:
• themselves and what they like: ‘I like; I don’t like… yuk’
• what they have done: ‘I went to; I saw…; I ate…’
• how they and others feel: ‘I am sad; she’s cross …’
Parents can help by sharing picture books or making their own books using drawings or photographs.
Young children learning their home language become skilled in transferring a little language to many situations: ‘All gone.’ If adults transfer English phrases in the same way, young children soon copy them.
When children need to practise school English, use phrases like ‘What’s your name?’ ‘How old are you?’ ‘What’s this?’ ‘That’s a pencil.’ Parents can turn this into a fun activity by using a toy that speaks only English, asking it the questions and pretending to make it answer.
As young children become more competent speakers, they may include a word in their home language within an English phrase ‘He’s eating a (…)’ because they do not yet know the English word. If the adult repeats the phrase back using only English, the child can pick up the English word. ‘He’s eating a plum.’ ‘A plum.’
When to translate
Young children’s ability to understand should not be underestimated; they understand much more than they can say in English. In their home language young children are used to understanding only some of the words they hear and filling in the rest from the speaker’s body language and clues around them to get meaning. Where parentese is used, they appear to transfer these skills to working out the meaning in English.
When both new concepts and new language are introduced at the same time, it may be necessary to give a quick translation once, using a whisper, followed directly by the English. If translation is given more than once and again in following sessions, a child may get used to waiting for the translation instead of using his or her own clues to understand the English.
English sessions may last from just a few minutes up to about ten and can take place once or twice a day, depending on circumstances. The more frequently English is used, the quicker it is absorbed.
During English sessions parents need to focus on their child without any interruptions. Young children come to love English sessions, because for them English is a special time with their parent’s undivided attention.
Young children are logical thinkers: they need to have a reason for speaking English, since both they and their parents can speak the home language.
They may find it difficult to switch from their home language into English, so it is important to set the scene: ‘In three minutes we are going to have our English time.’ Setting the scene for English time might involve moving to a special place in the room: ‘Let’s sit on the sofa. Now, let’s talk in English.’ Warming up in English by counting or saying a familiar rhyme also helps to switch into English before introducing some new activity.
Children pick up language when the talk is based around an activity in which they are physically involved. If they have already been introduced to the activity in their home language and understood the content, they feel more secure and can concentrate on understanding and picking up the accompanying English.
Where sessions are in only English, activities need to be shorter since children’s attention span is generally not as long as in the home language. Listening only to English can be tiring.
Encouragement and praise
Young children look for their parents’ praise. They need to feel good, and know they are making progress in English. Continuous positive support, encouragement and praise from both mother and father, as well as the extended family, helps to build up self-confidence and motivate. In the early stages of learning, encouragement is especially important and praise for any small success motivates. ‘That’s good.’ ‘I like that.’ ‘Well done!’
Starting off in English is the time when young children need parents’ support the most. Once they are able to speak, recite rhymes and have memorised some stories, the support need no longer be so intensive. By this stage, English phrases, rhymes and stories are likely to have been playfully transferred into family life. In-family English can be bonding and is likely to stay. This can be the beginning of positive lifelong attitudes to English and other cultures. It is now generally accepted that lifelong attitudes are laid down in early childhood before the age of eight or nine. |
Biologists are able to identify this creatively named mollusk primarily by the knobbed and ridged shell. With a good dash of imagination, this thick brown shell is said to resemble the face of a monkey when viewed from the edge. Other characteristics include a square shell marked with small, dark green triangles.
Northeastern Oklahoma is on the southwestern edge of the monkeyface’s range; the bulk of the range is in the United States’ Midwest Region. Monkeyface mussels, or their remnants, have been found in the Verdigris, Neosho and Caney rivers of Oklahoma’s Green Country. These mussels typically prefer clear, swift moving water and are most often found in riffles and graveled areas of streams and rivers.
Monkeyface mussels are filter feeders; as they draw water through their valves, food particles (algae, bacteria, protozoans and other organic matter) are captured on the gills and carried to the mouth by hair-like structures found on the gills. Once the eggs are fertilized by upstream males, they are brooded on the female mussel’s gills. After three months, the eggs have developed into small larvae, or glochidia. These larvae, less than one-tenth the size of a mustard seed, then clasp onto the gills or fins of a nearby fish. But any nearby fish won’t do. Different species of mussels have different host fish. For monkeyface mussel larvae to develop into juvenile mussels, they must clasp on to the gills or fins of bluegill or other sunfish, or sauger. After the larvae have developed into mussels, they detach from the fish, slowly sink to the bottom of the river, and begin their life as mostly-stationary adults.
Up to 5 inches.
Species of Greatest Conservation Need |
This packet contains an overview, for both teachers and parents, of the common core standards for Kindergarten Math. Ten student-friendly posters describe the standards in easy to understand terms. 4 student checklists are included for students to track their mastery of each standard.
Common Core Math: K.CC.1- K.CC.7
Easy Addition problems with room to draw a picture to help determine the answer. Sample pictures help students understand the concept. Common Core - Operations and Algebraic Thinking - K.OA.1
Three colorful pages of materials for a geometry-themed mini office: types of triangles; determining angles; defining polygon (regular and irregular) and polyhedron; perimeter, area, and volume for basic shapes. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1
All 20 of our shape posters in one easy download: quadrilaterals (square, rectangle, rhombus, parallelogram), triangles (equilateral, isosceles, scalene, right, obtuse, acute), curved shapes (circle, oval, crescent), other polygons (pentagon, hexagon, octagon); one per page, each with a picture and a definition. Common Core: Geometry K.3.1, 1.G.1 2.G.1, 3.G.1 |
It was in June 1994 that Hewlett-Packard and Intel announced their joint research-and-development project aimed at providing advanced technologies for end-of-the-millennium workstation, server and enterprise-computing products, and October 1997 that they revealed the first details of their 64-bit computing architecture. At that time the first member of Intel's new family of 64-bit processors – codenamed Merced, after a Californian river – was slated for production in 1999, using Intel's 0.18-micron technology. In the event the Merced development programme slipped badly and was estimated at still nearly a year from completion when Intel announced the selection of the brand name Itanium at the October 1999 Intel Developer Forum.
A major benefit of a 64-bit computer architecture is the amount of memory that can be addressed. In the mid-1980s, the 4GB addressable memory of 32-bit platforms was more than sufficient. However, by the end of the millennium large databases exceeded this limit. The time taken to access storage devices and load data into virtual memory has a significant impact on performance. 64-bit platforms are capable of addressing an enormous 16 exabytes of memory – 4 billion times more than 32-bit platforms are capable of handling. In real terms this means that whilst a 32-bit platform can handle a database large enough to contain the name of every inhabitant of the USA since 1977, a 64-bit one is sufficiently powerful to store the name of every person who's lived since the beginning of time! However, notwithstanding the impact that its increased memory addressing will have, it is its Explicitly Parallel Instruction Computing (EPIC) technology – the foundation for a new 64-bit Instruction Set Architecture (ISA) – that represents Itanium's biggest technological advance.
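The figures quoted above follow directly from the width of the address (the arithmetic is our own illustration, not part of the original announcement):

\[
2^{32}\ \text{bytes}=4\ \text{GB},\qquad
2^{64}\ \text{bytes}=16\ \text{EB},\qquad
\frac{2^{64}}{2^{32}}=2^{32}\approx 4.3\ \text{billion}.
\]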
EPIC, incorporating an innovative combination of speculation, predication and explicit parallelism, advances the state of the art in processor technologies, specifically addressing the performance limitations found in RISC and CISC technologies. Whilst both of these architectures already use various internal techniques to try to process more than one instruction at once where possible, the degree of parallelism in the code is only determined at run-time by parts of the processor that attempt to analyse and re-order instructions on the fly. This approach takes time and wastes die space that could be devoted to executing, rather than organising, instructions. EPIC breaks through the sequential nature of conventional processor architectures by allowing software to communicate explicitly to the processor when operations can be performed in parallel.
The result is that the processor can simply grab as large a chunk of instructions as possible and execute them simultaneously, with minimal pre-processing. Increased performance is realised by reducing the number of branches and branch mis-predicts, and reducing the effects of memory-to-processor latency. The IA-64 Instruction Set Architecture – published in May 1999 – applies EPIC technology to deliver massive resources with inherent scaleability not possible with previous processor architectures. For example, systems can be designed to slot in new execution units whenever an upgrade is required, similar to plugging in more memory modules on existing systems. According to Intel the IA-64 ISA represents the most significant advancement in microprocessor architecture since the introduction of its 386 chip in 1985.
IA-64 processors will have massive computing resources including 128 integer registers, 128 floating-point registers, and 64 predicate registers along with a number of special-purpose registers. Instructions will be bundled in groups for parallel execution by the various functional units. The instruction set has been optimised to address the needs of cryptography, video encoding and other functions that will be increasingly needed by the next generation of servers and workstations. Support for Intel's MMX technology and Internet Streaming SIMD Extensions is maintained and extended in IA-64 processors.
Whilst IA-64 is emphatically not a 64-bit version of Intel’s 32-bit x86 architecture nor an adaption of HP’s 64-bit PA-RISC architecture, it does provide investment protection for today’s existing applications and software infrastructure by maintaining compatibility with the former in processor hardware and with the latter through software translation. However, one implication of ISA is the extent to which compilers will be expected to optimise instruction streams – and a consequence of this is that older software will not run at optimal speed unless it’s recompiled. IA-64’s handling of 32-bit software has drawn criticism from AMD whose own proposals for providing support for 64-bit code and memory addressing, codenamed Sledgehammer, imposes no such penalties on older software.
The following diagrams illustrate the greater burden placed on compiler optimisation for two of IA-64’s innovative features:
- Predication, which replaces branch prediction by allowing the processor to execute all possible branch paths in parallel, and
- Speculative loading, which allows IA-64 processors to fetch data before the program needs it, even beyond a branch that hasn’t executed
Predication is central to IA-64’s branch elimination and parallel instruction scheduling. Normally, a compiler turns a source-code branch statement (such as IF-THEN-ELSE) into alternate blocks of machine code arranged in a sequential stream. Depending on the outcome of the branch, the CPU will execute one of those basic blocks by jumping over the others. Modern CPUs try to predict the outcome and speculatively execute the target block, paying a heavy penalty in lost cycles if they mispredict. The basic blocks are small, often two or three instructions, and branches occur about every six instructions. The sequential, choppy nature of this code makes parallel execution difficult.
When an IA-64 compiler finds a branch statement in the source code, it analyses the branch to see if it’s a candidate for predication, marking all the instructions that represent each path of the branch with a unique identifier called a predicate for suitable instances. After tagging the instructions with predicates, the compiler determines which instructions the CPU can execute in parallel – for example, by pairing instructions from different branch outcomes because they represent independent paths through the program.
The compiler then assembles the machine-code instructions into 128-bit bundles of three instructions each. The bundle’s template field not only identifies which instructions in the bundle can execute independently but also which instructions in the following bundles are independent. So if the compiler finds 16 instructions that have no mutual dependencies, it could package them into six different bundles (three in each of the first five bundles, and one in the sixth) and flag them in the templates. At run time, the CPU scans the templates, picks out the instructions that do not have mutual dependencies, and then dispatches them in parallel to the functional units. The CPU then schedules instructions that are dependent according to their requirements.
When the CPU finds a predicated branch, it doesn’t try to predict which way the branch will fork, and it doesn’t jump over blocks of code to speculatively execute a predicted path. Instead, the CPU begins executing the code for every possible branch outcome. In effect, there is no branch at the machine level. There is just one unbroken stream of code that the compiler has rearranged in the most parallel order.
At some point, of course, the CPU will eventually evaluate the compare operation that corresponds to the IF-THEN statement. By this time, the CPU has probably executed some instructions from both possible paths – but it hasn’t stored the results yet. It is only at this point that the CPU does this, storing the results from the correct path, and discarding the results from the invalid path.
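A rough C-level picture of the idea (a conceptual sketch only – real IA-64 predication happens in the instruction encoding, not in source code): the first function below contains a branch that a conventional CPU must predict, while the second computes both candidate results and then keeps one, which is roughly what predicate registers let the hardware do without a jump.

#include <stdio.h>

/* Branchy form: one of the two assignments is jumped over. */
int pick(int a, int b)
{
    int result;
    if (a > b)
        result = a * 2;    /* THEN path */
    else
        result = b + 1;    /* ELSE path */
    return result;
}

/* Branch-free form in the spirit of predication: both paths are
 * computed, and the comparison outcome selects which result to keep. */
int pick_predicated(int a, int b)
{
    int p  = (a > b);      /* the "predicate"          */
    int r1 = a * 2;        /* THEN path, always run    */
    int r2 = b + 1;        /* ELSE path, always run    */
    return p ? r1 : r2;    /* keep the valid result    */
}

int main(void)
{
    printf("%d %d\n", pick(7, 3), pick_predicated(7, 3));
    return 0;
}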
Speculative loading seeks to separate the loading of data from the use of that data, and in so doing avoid situations where the processor has to wait for data to arrive before being able to operate on it. Like prediction, it’s a combination of compile-time and run-time optimisations.
First, the compiler analyses the program code, looking for any operations that will require data from memory. Whenever possible, the compiler inserts a speculative load instruction at an earlier point in the instruction stream, well ahead of the operation that actually needs the data. It also inserts a matching speculative check instruction immediately before the operation in question. At the same time the compiler rearranges the surrounding instructions so that the CPU can despatch them in parallel.
At run time, the CPU encounters the speculative load instruction first and tries to retrieve the data from memory. Here’s where an IA-64 processor differs from a conventional processor. Sometimes the load will be invalid – it might belong to a block of code beyond a branch that has not executed yet. A traditional CPU would immediately trigger an exception – and if the program could not handle the exception, it would likely crash. An IA-64 processor, however, won’t immediately report an exception if the load is invalid. Instead, the CPU postpones the exception until it encounters the speculative check instruction that matches the speculative load. Only then does the CPU report the exception. By then, however, the CPU has resolved the branch that led to the exception in the first place. If the path to which the load belongs turns out to be invalid, then the load is also invalid, so the CPU goes ahead and reports the exception. But if the load is valid, it’s as if the exception never happened.
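Keeping to the same C-level analogy (illustrative only – the hoisted schedule is described in comments because issuing the load before the bounds check is not legal portable C, and deferring that fault is exactly what the speculative load and its matching check make safe on IA-64):

#include <stdio.h>

int lookup(const int *table, int i, int n)
{
    int value = 0;
    /* Conventional schedule: the load of table[i] cannot begin until
     * the branch below is resolved, so the CPU may sit idle waiting
     * for memory.
     *
     * An IA-64 compiler may instead emit, in effect:
     *   1. a speculative load of table[i], hoisted well above the
     *      branch (any fault is recorded rather than raised);
     *   2. other independent work scheduled while the load completes;
     *   3. inside the taken branch, a check that reports the deferred
     *      fault only if the loaded value is actually needed.
     */
    if (i < n)
        value = table[i];
    return value;
}

int main(void)
{
    int t[4] = { 10, 20, 30, 40 };
    printf("%d\n", lookup(t, 2, 4));
    return 0;
}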
An important milestone was reached in August 1999 when a prototype 0.18-micron Merced CPU, running at 800MHz, was demonstrated running an early version of Microsoft’s 64-bit Windows operating system. Production Itaniums will use a three-level cache architecture, with two levels on chip and a Level 3 cache which is off-chip but connected by a full-speed bus. The first production models – currently expected in the second half of year 2000 – will come in two versions, with either 2MB or 4MB of L3 cache. Initial clock frequencies will be 800MHz – eventually rising to well beyond 1GHz.
- Principles of CPU architecture – logic gates, MOSFETS and voltage
- Basic structure of a Pentium microprocessor
- Microprocessor Evolution
- IA-32 (Intel Architecture 32) – base instruction set for 32 bit processors
- Pentium P5 microarchitecture – superscalar and 64 bit data
- Pentium Pro (P6) 6th generation x86 microarchitecture
- Dual Independent Bus (DIB) – frontside and backside data bus CPU architecture
- NetBurst – Pentium 4 7th generation x86 CPU microarchitecture
- Intel Core – 8th generation CPU architecture
- Moore’s Law in IT Architecture
- Architecture Manufacturing Process
- Copper Interconnect Architecture
- TeraHertz Technology
- Software Compatibility
- IA-64 Architecture
- Illustrated guide to high-k dielectrics and metal gate electrodes |
Brief Summary
A large (18 inches) grebe, the Red-necked Grebe in summer is most easily identified by its dark back and head, brown neck, and conspicuous white chin patch. In winter, this species becomes dark gray above and light gray below, retaining some white on its chin. Male and female Red-necked Grebes are similar to one another in all seasons. The Red-necked Grebe occurs across a wide area of the Northern Hemisphere. In North America, this species breeds across central Alaska, western Canada, and locally in the western United States, wintering along the Pacific coast from Alaska to California, along the Atlantic coast from Newfoundland to North Carolina, and locally in the Great Lakes. In the Old World, this species breeds in Northern Europe and East Asia, wintering along the coast as far south as the Mediterranean Sea, south China, and India. Red-necked Grebes breed in ponds, lakes, and shallow marshes, preferring areas with thick vegetation to more open water. In winter, this species may be found in shallow marine environments near the coast. Red-necked Grebes primarily eat small insects in summer, switching to small fish during the winter. In appropriate habitat, Red-necked Grebes may be observed floating low in the water, periodically diving down to capture prey. Like most grebes, this species must run and flap along the surface of the water in order to become airborne, subsequently flying swiftly low over the water. Also like most grebes, this species’ legs are positioned at the far end of its body, making it an adept swimmer but rendering it almost entirely unable to move on land. Red-necked Grebes are most active during the day.
There is a thin line between calculus and the calibration of roundness measuring systems – not many people understand either, and even the few who do make mistakes much of the time. Take the word ‘calibrate’, for example. Most precision experts use it all the time in the world of roundness. The word means to determine what one has compared with a given reference value; the right word to use instead of ‘calibrate’ is ‘adjust’ or ‘correct’. So how is calibrating a roundness system done?
Understand How Your Measuring Tools Work
The roundness or spherical measuring sensor is made up of two main parts. There is the electronic part, designed to sense motion. This part is sometimes referred to as the ‘probe’. Then there is the shaft otherwise referred to as the ‘stylus’. These two parts work with the spindle so as to make a roundness measurement.
Understand Everything That Comes With The Roundness Gage
A typical roundness system comes with a few extra tools designed to help with adjustment or calibration. Understand how the extra tools work. They include:
- A precision hemisphere/sphere
- An optical flat complete with gage blocks
- A dynamic/flick calibration standard
The Precision Hemisphere/Sphere
Its round shape is designed to test the spindle. It features a ball which serves as the ‘zero reference’. The ball is considered perfect relative to the measuring instrument, but what most people do not know is that there is still some error in the ball. The ‘certificate of calibration’ supplied with it states the actual error in the ball. The calibration lab arrives at the ball’s certified value using extremely accurate methods, involving the ‘reversal’ technique – yet another complex aspect of calibrating a roundness system.
Calibrating With The Ball
This is ‘the mother of all errors’. It is impossible to calibrate with the ball. People use the certified value of the hemisphere/sphere, a practice that is dangerous. Note that in roundness systems the word ‘calibrate’ means ‘to adjust’, as already explained. Calibrating with the ball actually leads to inaccurate measurements full of errors, throwing readings off by microns every time.
Using a precision hemisphere/sphere to adjust a roundness measuring system is more or less the same as using an optical flat so as to set the gain on an electronic indicator. You won’t get enough deflection to set the gain. That is because hemispheres and spheres are ‘zero references’. They are not adjusting tools.
Verifying The Probe Gain
The ‘gain’ factor is one that must be set and controlled when it comes to any roundness measuring system. For example, there will be lower sensitivity if a long stylus shaft is used. The sensitivity will be represented by a ‘gain’ value inside the measuring instrument. The value must then be set through a process that the system identifies as ‘calibration’. To verify and adjust the probe gain, you have to exercise more of the probe’s motion. You can handle this through the Flick Standard, also referred to as the Dynamic Standard. Gage blocks can also do the same thing. |
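As a rough illustration of what ‘setting the gain’ amounts to, the sketch below assumes the instrument compares the certified value of the flick (dynamic) standard with what the probe actually reads and applies the ratio as a correction. The numbers are invented, and the exact procedure varies by instrument.

```c
#include <stdio.h>

/* Hypothetical numbers: a flick (dynamic) standard certified at 10.0 um
 * of indicator movement, and the value the probe actually reported with
 * the current stylus and gain setting. */
int main(void)
{
    double certified_um = 10.0;   /* value on the standard's certificate */
    double measured_um  = 9.6;    /* what the instrument read            */

    /* Assumed correction: the ratio of what should have been read to
     * what was read, applied to subsequent raw readings. */
    double gain_correction = certified_um / measured_um;

    printf("gain correction factor: %.4f\n", gain_correction);
    printf("a raw reading of 4.80 um becomes %.2f um\n",
           4.80 * gain_correction);
    return 0;
}
```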
Activity 5: Discussion - Option 1
Activity time: 25 minutes
Materials for Activity
- Newsprint, markers, and tape
Preparation for Activity
- Write these questions on newsprint and post:
- Why is this story preserved?
- What were they trying to teach with this story?
- What is it that we can learn?
Description of Activity
Invite participants to discuss the questions posted on newsprint. Use some of these questions to provoke, guide or further the discussion, as needed:
- Who is the God that appears in this story, and how is God different from the God in other stories we have explored?
- What does it mean if we are all related?
- Unitarian Universalists reject on scientific grounds the idea that there is a single creator and a single act of creation. Compare and contrast our seventh principle, respect for the interdependent web of existence of which we are a part, with the understanding of the universe as a single creation.
- How is the scientific understanding that humanity is one (we are all descended from the same set of ancestors) in harmony with the idea of humanity in this biblical creation story? How is it different?
- What wisdom is there for Unitarian Universalists in honoring Sabbath?
- Are there ways in which you honor Sabbath in your life, or long to do so? |
The outcrop of Pennsylvanian strata, shown on the geologic map, defines the limits of the Eastern and Western Kentucky Coal Fields (shown in gray on the physiographic map). The Western Kentucky Coal Field is smaller than its eastern counterpart. It comprises the southern edge of a larger geologic feature called the Illinois or Eastern Interior Basin, which includes the coal fields in Indiana and Illinois.
As in eastern Kentucky, the border of the Western Kentucky Coal Field and the Mississippi Plateau is commonly marked by an escarpment because thick Pennsylvanian-age sandstones are resistant to erosion. However, because this coal field is not adjacent to the Appalachian Mountains, and the sandstones are less continuous, the escarpment is not as dramatic as along the Cumberland Escarpment of the Eastern Kentucky Coal Field. Pennyroyal State Park (pictured above) is located along the southern edge of the escarpment. |
Biological nutrient removal (BNR) removes total nitrogen (TN) and total phosphorus (TP) from wastewater through the use of microorganisms under different environmental conditions in the treatment process. Effluent nitrogen and phosphorus are the primary causes of cultural eutrophication (i.e., nutrient enrichment due to human activities) in surface waters. In BNR systems, nitrification is the controlling reaction because ammonia-oxidizing bacteria lack functional diversity, have stringent growth requirements and are sensitive to environmental conditions.
Nitrification by itself does not remove nitrogen from wastewater. Denitrification is needed to convert the oxidized form of nitrogen (nitrate) to nitrogen gas. Total effluent phosphorus comprises soluble and particulate phosphorus. Particulate phosphorus can be removed from wastewater through solids removal. To achieve low effluent concentrations, the soluble fraction of phosphorus also must be targeted. Here we examine the impact of microbiology on BNR, focusing primarily on nitrogen removal.
Biological Process Description
Nitrogen removal is a two-step process that relies on microbiology, with each step performed by different types of bacteria in different environments. Within nitrification, nitroso bacteria (Nitrosomonas) oxidize ammonia to nitrite, and nitro bacteria (Nitrobacter) then oxidize nitrite to nitrate.
Nitrification occurs in the presence of oxygen under aerobic conditions. Alkalinity is required for the nitrification process, as carbon dioxide (CO2) is created and dissolves into the water to form carbonic acid, which lowers the pH. The synthesis, or creation, of biomass draws on inorganic carbon sources, combining CO2, alkalinity, ammonia and water into bacterial cells and oxygen. Nitrifying bacteria carry out respiration and synthesis reactions classified as dissimilatory (oxidation-reduction that produces energy) and assimilatory (in which biomass is created).
Nitrification at the wastewater treatment plant (WWTP) requires significant solids retention time (SRT) for the traditional nitrifiers. As a result, the reaction kinetics are slow and temperature sensitive. At 5°C, the nitrification process essentially ends, which is problematic for colder-weather plants. Biochemical oxygen demand (BOD) removal requires a five-day SRT, whereas nitrification requires four times as long at a 20-day SRT.
Denitrification occurs in the absence of oxygen under anoxic conditions. This second step in the nitrogen removal process is carried out through the consumption of nitrate by facultative heterotrophic bacteria. Nitrate is used as an electron acceptor (reduced) during the oxidation of organic carbon. The complete reaction, neglecting biomass synthesis, converts the nitrates in the wastewater into nitrogen gas, CO2 and water. The biomass synthesis reaction, occurring simultaneously, incorporates nitrate into bacterial cells while producing water and CO2.
Denitrification requires biodegradable soluble chemical oxygen demand (COD) provided by the influent wastewater, endogenous decay of the biomass or an additional external carbon source (methanol or acetate). Mirroring the dissimilatory and assimilatory respiration and synthesis reactions, denitrifying bacteria use the same organic carbon for both, producing nitrogen gas, CO2, water and more cellular biomass. It is important that BOD is present to allow the nitrate reduction that completes the removal of nitrogen from wastewater.
The stoichiometry for denitrification requires 4 grams of BOD (6.6 grams COD) to remove 1 gram of nitrogen, and during denitrification 3.5 grams of alkalinity are returned to the water per 1 gram of nitrogen removed. Plants use this relationship to decide whether to raise the BOD-to-nitrogen ratio and so improve nitrogen removal: a 4:1 ratio of BOD to nitrogen achieves 4 to 7 mg/L of nitrate-nitrogen in typical effluent wastewater, while ratios of less than 4:1 indicate that supplemental carbon is required, as the sketch below illustrates.
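A minimal, hypothetical example follows. The influent figures are invented for illustration; only the 4:1 BOD-to-nitrogen target and the 3.5 g of alkalinity returned per gram of nitrogen come from the figures above.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative influent figures (mg/L) -- not taken from the article. */
    double influent_bod = 180.0;
    double influent_tn  = 55.0;

    /* Rules of thumb from the discussion: ~4 g BOD consumed and ~3.5 g
     * alkalinity returned per 1 g of nitrogen denitrified. */
    const double bod_per_n = 4.0;
    const double alk_per_n = 3.5;

    double ratio = influent_bod / influent_tn;
    printf("BOD:N ratio = %.1f:1\n", ratio);

    if (ratio < bod_per_n) {
        double extra_bod = bod_per_n * influent_tn - influent_bod;
        printf("supplemental carbon needed: ~%.0f mg/L as BOD\n", extra_bod);
    } else {
        printf("no supplemental carbon indicated (ratio >= 4:1)\n");
    }

    printf("alkalinity returned if all N is denitrified: ~%.0f mg/L\n",
           alk_per_n * influent_tn);
    return 0;
}
```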
How Do Plants Accomplish BNR?
In efforts to reduce the number of nutrient impairments, many point source dischargers have received more stringent effluent limits for nitrogen and phosphorus. Meeting stringent effluent TN limits requires weighing several process considerations against the needs of the treatment facility. Choosing which system is most appropriate for a particular facility depends primarily on the target effluent concentrations and on whether the facility will be built new or retrofitted with BNR to achieve more stringent effluent limits.
New plants have more flexibility and options when deciding which BNR configuration to implement because they are not constrained by existing treatment units and sludge handling procedures. Retrofitting an existing plant with BNR capabilities should involve consideration of the following factors: aeration basin size and configuration, clarifier capacity, type of aeration system, sludge processing units and operator skills.
BNR costs differ for new plants and retrofits. New plants’ BNR costs are based on estimated influent quality, target effluent quality and available funding. Retrofit costs, on the other hand, are more site specific and vary considerably for any given size category. Retrofit costs are based on the same factors as new plants, in addition to the layout and design of the existing treatment processes. Despite this variability in costs, unit costs generally decrease as the size of the plant increases due to economies of scale. The U.S. Environmental Protection Agency (EPA) illustrates this relationship for facility upgrades in three system size categories, outlined below:
To achieve new, lower effluent limits, facilities have begun to look beyond traditional treatment technologies. One technology provides a biological alternative to nitrogen removal without capital expansion of the existing facility. It includes regular additions of a highly concentrated formulation of facultative soil bacteria at multiple strategic locations throughout the entire collection system in accordance with an engineered treatment plan.
In-Pipe Technology Co. Inc. uses heterotrophic bacteria that thrive under anoxic and anaerobic conditions—the conditions required for the conversion of nitrate to nitrogen gas in the wastewater environment. The heterotrophs provide the traditional nitrifiers with a carbon source as a byproduct of their nitrification and denitrification activities. The intersection between carbon and nitrogen metabolism is regulated by at least six proteins (GltC, TnrA, RocG, RocR, CcpA and CodY) that respond to the In-Pipe bacteria’s branched-chain amino acid pathway.
The heterotrophs also were shown to reduce the amount of excretion products that can inhibit the growth of nitrosomonas bacteria while they make use of the organic excretion products for their own energy in carbon-depleted environments. By reviewing the nitrogen profile, unit processes and operations at the plant, In-Pipe works with plant staff to maximize nitrogen removal. This process allows the company to engineer a custom plan for biological treatment that complements existing infrastructure.
Since the Connecticut Department of Environmental Protection (DEP) established the Nitrogen Credit Exchange program in 2002 to reduce nutrient loads entering Long Island Sound, municipalities in the region have been pressured to reduce nitrogen discharged from the plant effluent into the receiving streams. In 2009, according to the Connecticut DEP, the value of credits purchased by the Nitrogen Credit Exchange was $4,384,688 and the value of those sold was $2,838,546. Forty-five facilities were required to purchase credits to meet their permit limits, while 34 facilities had credits to sell. The DEP states that the key to the program’s success is the implementation of nitrogen removal projects.
After an alternative analysis to upgrade the WWTP, a 4-million-gal-per-day (mgd) facility in Farmington, Conn., began utilizing In-Pipe to effectively lower effluent TN. After 10 months of treatment, it decreased effluent TN 14% (from 255 lb per day to 218 lb per day). During this period, influent ammonia load increased by 7% (from 568 lb per day to 606 lb per day). No significant change occurred in the BOD to nitrogen ratio (21.7 in 2009 to 19.2 in 2010).
Using the EPA average capital costs for BNR upgrades, the facility would spend in the range of $2.4 million to $6 million to achieve similar results. The nitrogen credit savings are forecast to reach $20,000 in 12 months with In-Pipe.
A significantly smaller 0.2-mgd facility in Corum, N.Y., failed to meet existing TN limits for several years as a result of high nitrate levels. The effluent discharge enters traditional seepage beds that were discovered to contain nitrogen levels potentially harmful to the groundwater supply. Therefore, In-Pipe was installed in December 2010. After 12 weeks, effluent nitrate decreased 67% (from 17.4 to 5.79 mg/L) and effluent TN decreased 37% (from 18.75 to 11.75 mg/L). Recent values determined by an outside lab reached 3.6 mg/L. No changes occurred to the existing process or plant operations as a result of the installation.
The town of Orange Park, Fla., faced the same total maximum daily load compliance deadline as other municipalities across Florida regarding effluent TN discharged to receiving streams. The target permit limit of total load discharged is less than or equal to 21,998 lb per year effluent TN, with a daily limit of 60.3 lb per day. The town’s wastewater treatment process, which contained three contact stabilization units operated in parallel and designed for hydraulic capacity of 2.5 mgd, was operating at 1 mgd and producing effluent TN at 76,100 lb per year. Since October 2009, effluent TN has decreased by 60% to 25,000 lb per year.
It is essential that BNR upgrades reduce the effluent nitrogen and phosphorus entering surface waters, because these nutrients are the primary causes of cultural eutrophication. Approximately 25% of all water body impairments are the result of nutrient-related causes (e.g., nutrients, oxygen depletion, algal growth, ammonia, harmful algal blooms, biological integrity and turbidity). New technologies are available to ease the burden of protecting the environment at a fraction of the cost of traditional capital projects.
I am always fascinated to see how students manage to incorrectly answer exam and quiz questions. Not only does this provide a great insight into my own deficiencies as an instructor, but it also gives me some idea about how my students are thinking about and analyzing problems.
Take the quiz question that I gave yesterday in my remedial algebra classes:
Solve the following equation. Give your answer using set notation, and check your solution.
\(6(x-3) = 2x + 2(5+2x)\)
The correct answer is that there is no solution. The equation ultimately reduces to something like \(-18 = 10\), which is a contradiction. The correct interpretation of this result is that no matter what value is substituted for \(x\), the equation will never yield a true statement, which means that there is no solution.
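For reference, the simplification runs as follows:

\[
\begin{aligned}
6(x-3) &= 2x + 2(5+2x) \\
6x - 18 &= 2x + 10 + 4x \\
6x - 18 &= 6x + 10 \\
-18 &= 10
\end{aligned}
\]

Since the variable terms cancel and what remains is false, the solution set is the empty set, \(\varnothing\).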
I freely admit that this is a difficult question for the students that are being asked to answer it. In order to find a solution, they need to be able to apply properties of real numbers (such as distributivity, commutativity, and associativity), correctly manipulate variable and constant terms, recognize a contradiction, and remember the correct notation for dealing with said contradiction.
What surprised me was how many of my students came very, very close to getting the correct answer, yet stumbled on the last little leap of logic. Nearly every student managed the algebraic manipulations, and ended up with the result \(-18 = 10\). It was the last little step from there that proved to be the most difficult part of the question.
In general, I saw two types of wrong answers: some students reported that the solution was \((-10,18)\), while others gave me solutions of the form \(x = 28\) or \(x=-9/5\). Both of these answers are understandable, and I think that they each give some insight into how the students were thinking.
The first incorrect result can be addressed by giving a little context. The lecture material that went over solving equations was from last week. My students did have the weekend to work on homework problems related to solving linear equations in a single variable, but the last time they saw the material in lecture was last Thursday. By contrast, the lecture material leading up to the quiz was on linear equations in two variables. Solutions to such equations are ordered pairs of numbers, with one number representing an \(x\)-value, and the other number representing a \(y\)-value.
When the students who gave ordered pairs as solutions were confronted by a contradiction, they knew that something special needed to happen, but they couldn’t dredge up last week’s lecture. Very understandably, they groped for the best they could come up with, and created an ordered pair. My hope is that this mistake will be a very easy one to correct—a few examples in class before moving onto the next section should clear up that confusion.
The other answer, I think, stems from an over-adherence to a procedure that the book emphasizes, and which I taught in lecture. The strategy is to first simplify each side of the equation independently, then collect terms, and finally to perform a division to get a solution.
This works well when there is a single solution, but, as I attempted to point out in class, there are pitfalls. Specifically, if the equation has either no solutions or infinite solutions, it breaks down somewhere. I think that many students simply applied the procedure without thought, and tried to either collect the constant terms, or divide the constant terms. In either event, they took the result to be their answer.
In order to correct these kinds of errors, I think I need to make two points to my students. The first is simply one of notation: generally, when I talk about solving equations, I talk about terms “canceling” each other. For instance, in the equation \(4x+1 = 3x-1\), I might subtract \(3x\) from each side. Instead of explicitly writing that \(3x-3x = 0\) on the right side, I will just cross out the terms. I know that there is an “invisible zero” there, but it seems that my students need the explicit reminder.
Second, I need to emphasize the special cases of identity and contradiction a bit more. These cases disrupt the algorithm that the students are attempting to apply, and need to be recognized. If the variable terms cancel each other out, we’re done! What remains is either a contradiction or an identity.
Hopefully, it will be possible to correct those mistakes, and to emphasize that the goal of solving equations is not to just follow a procedure to an answer (both mistakes stem, I think, from blindly following a list of steps), but to find values for the variable which make the equation true. The procedure is just a means to an end, and we have to understand what the various kinds of results that we get mean. |
Learning in Giraffes class is based around an approach that involves both adult-led teaching and child-initiated learning.
We believe children learn best when they are allowed to discover and apply their growing skills within a range of contexts.
That is why we create learning opportunities that have real life contexts and that are fun and engaging for children.
A typical day within Giraffes class will involve phonics, maths or literacy. We also theme our learning via topic and it is here that we cover our science, history or geography learning.
We also have weekly PE lessons, Forest School and music.
This term Giraffes class have PE on Tuesday and Forest School on Friday.
Our topic for the Spring term is ‘What to do with an idea?’ with children exploring the concepts of creativity and perseverance.
Our main text for the first half of the spring term will be The Lighthouse Keeper’s Lunch.
Our literacy learning will continue to cover the correct use of punctuation when writing sentences, specifically full stops, capital letters, question and exclamation marks. Over the spring term we will be focusing upon the development of the children’s vocabulary, use of adjectives and conjunctions, as well as prefixes and suffixes.
Our literacy learning is supported by the continuing focus on phonics. We are now looking at the alternative ways of spelling phonemes alongside spelling tricky and high frequency words.
Children in Giraffes class change their reading books three times per week and read to an adult within school every week. We ask parents/carers to support their children by reading with them at least three times per week.
In our maths learning for the spring term we will be looking closely at addition and subtraction of one- and two-digit numbers, working with numbers to 40, as well as measuring, time and multiplication and division.
Problem solving will feature heavily across the year with children able to apply their mathematical skill to real life contexts. This real life context not only demonstrates to children the importance and relevance of maths within everyday life, but through specific questioning, will enable them to reason their conclusions, providing opportunities to deepen their understanding of key mathematical concepts. |
Africa is the world’s second largest continent, by both area and population, after Asia. Its landmass holds 54 countries and nine territories. Over 1.11 billion people call Africa home and call themselves African. Africa, and being African, has become a core part of the identity of millions of the continent's inhabitants, but where does this word, to which so many have such an emotional connection, come from?
The exact origins of the word ‘Africa’ are contentious, but there is much about its history that is known. We know that the word ‘Africa’ was first used by the Romans to describe that part of the Carthaginian Empire which lies in present day Tunisia. When the Romans conquered Carthage in the second century BCE, giving them jurisdiction over most of North Africa, they divided North Africa into multiple provinces, amongst these there were Africa Pronconsularis (northern Tunisia) and Africa Nova (much of present-day Algeria, also called Numidia).
All historians agree that it was the Roman use of the term ‘Africa’ for parts of Tunisia and Northern Algeria which ultimately, almost 2000 years later, gave the continent its name. There is, however, no consensus amongst scholars as to why the Romans decided to call these provinces ‘Africa’. Over the years a small number of theories have gained traction.
One of the most popular suggestions for the origins of the term 'Africa' is that it is derived from the Roman name for a tribe living in the northern reaches of Tunisia, believed to possibly be the Berber people. The Romans variously named these people ‘Afri’, ‘Afer’ and ‘Ifir’. Some believe that ‘Africa’ is a contraction of ‘Africa terra’, meaning ‘the land of the Afri’. There is, however, no evidence in the primary sources that the term ‘Africa terra’ was used to describe the region, nor is there direct evidence that it is from the name ‘Afri’ that the Romans derived the term ‘Africa’.
In the early sixteenth century the famous medieval traveller and scholar Leo Africanus (al-Hasan ibn Muhammad al-Wazan), who had travelled across most of North Africa giving detailed accounts of all that he saw there, suggested that the name ‘Africa’ was derived from the Greek word ‘a-phrike’, meaning ‘without cold’, or ‘without horror’. In a similar line of thought, other historians have suggested that the Romans may have derived the name from the Latin word for sunny or hot, namely ‘aprica’. Where exactly the Romans got the name ‘Africa’ from is however still in dispute.
For most of its history, the Roman word ‘Africa’ was not used to describe the continent as a whole, but rather only a very small section of North Africa, what today constitutes the northern parts of Tunisia. Prior to the late sixteenth century, there were a variety of names used to describe the various constituent parts of the north half of Africa, with ‘Libya’, ‘Aethiopia’, ‘Sudan’ and ‘Guinea’ being by far the most common names used.
For the ancient Greeks, almost everything south of the Mediterranean Sea and west of the Nile was referred to as ‘Libya’. This was also the name given by the ancient Greeks to the Berber people who occupied most of that land. The ancient Greeks believed their world was divided into three greater ‘regions’, Europa, Asia and Libya, all centred around the Aegean Sea. They also believed that the dividing line between Libya and Asia was the Nile River, placing half of Egypt in Asia and the other half in Libya. For many centuries, even into the late medieval period, cartographers followed the Greek example, placing the Nile as the dividing line between the landmasses.
Early Arabic cartographers tended to follow the Greeks in the usage of their term ‘Libya’ for vast swathes of North Africa beyond Egypt. Later Arabic cartographers began to call the areas south of the Sahara, extending from the Senegal River to the Red Sea, Bilad al-Sudan, ‘the land of the blacks’, which is where the contemporary nation of Sudan gets its name from.
Some European cartographers, who tended to privilege Latinate derivations above others, preferred the Roman terms for the region, using the word Africa to designate the northern landmass, rather than the Greek term Libya, although Libya was still used by many. During the Middle Ages, the Christian cartographers adopted the Greek division of the world into three greater regions surrounding the Aegean Sea. The medieval cartographers, however, moved this understanding away from a geographical conception of the world into a more metaphorical understanding. They aligned each region to one of Abraham’s descendants, giving Asia to Shem, Europe to Japheth and Africa to Ham. This tripartite conception of the world was represented in extremely abstract and symbolic, rather than geographical, form, as is shown in the image above.
In the fifteenth century, after the Portuguese had rounded the Cape and made contact with the Christian Empire of Abyssinia, the Greek term Aethiopia, meaning ‘land of the dark-skinned or burnt’, which was used by Greek cartographers to designate the land below Egypt and the Sahara, was revived again to loosely describe all of the tropical area of the continent below the Sahara.
At roughly the same time the Portuguese began to refer to all of West Africa below the Sahara as Guiné, the land inhabited by the ‘Guineus’, a generic term used for black people, in contrast to the browner peoples of much of North Africa. The term Guiné, or the English Guinea, became ever more popular to describe the region of West Africa accessible from the Gulf of Guinea.
Until the mid-seventeenth century, the terms Guinea and Aethiopia, or Ethiopia, were popularly used to describe the lands south of the Sahara. Libya was commonly used for the North-West of Africa above the Sahara. Africa was used, sometimes instead of Libya, to denote the North-West of Africa, and sometimes alongside Libya to denote the central and northern region of Africa which today is predominantly occupied by Tunisia.
Before the late sixteenth century ‘Africa’ was used only to denote one part of the larger landmass that makes up the continent, primarily that area occupied by Tunisia and Morocco. For most of its history, the continent of Africa as we know it today had many names for all its various constituents, none of which was used to describe the landmass as a whole.
It was only in the sixteenth and seventeenth century, with the European age of exploration, that the concept of continents as contiguous landmasses bordered, and separated, by oceans, began to take shape. As European exploration opened up the idea of continents, so cartographers began to give single geographical names to entire continents. By the end of the seventeenth century, the name ‘Africa’ had won out over the others, beating names such as Guinea, Libya or Aethiopia to become the name for the entire continent as we know it today. Why Africa won out over all the other, often more popular, names is unclear, although some historians have argued that it is due to the seventeenth and eighteenth-century preference for latinate terms above all others.
The word ‘Africa’ has been a marker on the continent for millennia, but its dominance over the continent and its place as the name for all the people who live on it is only a very recent phenomenon.
Adults and Children as Learners
Teaching adults should be different if adults learn differently than children do. Theories or perspectives on adult learning, such as andragogy, make a number of assertions about the characteristics of adults as learners: adults need learning to be meaningful; they are autonomous, independent, and self-directed; prior experiences are a rich learning resource; their readiness to learn is associated with a transition point or a need to perform a task; their orientation is centered on problems, not content; they are intrinsically motivated; their participation in learning is voluntary (Draper 1998; Sipe 2001; Tice 1997; Titmus 1999). For some, "the major difference between adults and younger learners is the wealth of their experience" (Taylor, Marienau, and Fiddler 2000, p. 7). For others, the capacity for critical thinking or transformative learning is what distinguishes adults (Vaske 2001). In contrast, pedagogy assumes that the child learner is a dependent personality, has limited experience, is ready to learn based on age level, is oriented to learning a particular subject matter, and is motivated by external rewards and punishment (Guffey and Rampp 1997; Sipe 2001).
If there are indeed "distinctive characteristics of adults, on which claims for the uniqueness and coherence of adult education are based, then one might expect them to be taken into account in all organized education for adults" (Titmus 1999, p. 347). However, each of these characteristics is contested. Courtney et al. (1999) assert that "characteristics of adult learners" refers to a small number of identified factors with little empirical evidence to support them. Andragogy has been criticized for characterizing adults as we expect them to be rather than as they really are (Sipe 2001). Both andragogical and pedagogical models assume a "generic" adult and child learner (Tice 1997).
Some question the extent to which these assumptions are characteristic of adults only, pointing out that some adults are highly dependent, some children independent; some adults are externally motivated, some children intrinsically; adults' life experience can be barriers to learning; some children's experiences can be qualitatively rich (Merriam 2001; Vaske 2001). The emphasis on autonomy and self-direction is criticized for ignoring context. Adults in higher education can be marginalized and deprived of voice and power (Sissel, Hansman, and Kasworm 2001). Power differences based on race, gender, class, sexual orientation, and disability can limit adults' autonomy and ability to be self-directed (Johnson-Bailey and Cervero 1997; Leach 2001; Sheared and Sissel 2001). Lifelong learning can be coercive and mandatory, contradicting the assumption that adult participation is voluntary (Leach 2001). Adults do not automatically become self-directed upon achieving adulthood. Some are not psychologically equipped for it and need a great deal of help to direct their... |
A genetic disorder is an illness caused by abnormalities in the genome. They are heritable, and are passed down from the parents' genes. If a genetic disorder is present from birth, it is a type of congenital defect. Some only show up in later life.
Most genetic disorders are quite rare, affecting one person in every several thousand or million. Sometimes, however, they are relatively frequent in a population. This can happen when a recessive gene disorder gives an advantage in certain environments to people carrying only one copy of the gene. Sickle cell anaemia is an example.
The same disease, such as some forms of cancer, may be caused by an inherited genetic condition in some people, by new mutations in other people, and by nongenetic causes in still other people. A disease is only called a genetic disease when it is inherited.