The chaotic steps that give rise to microscopic particles in the atmosphere called aerosols were witnessed for the first time in a verdant forest in Finland, an important step in understanding how the particles affect Earth's climate.
Aerosols are solid particles and liquid droplets tiny enough to float in the air. They can come from soot, dust and chemicals produced by cars, factories and farming, or from natural sources like deserts, sea spray and plants. The particles are a major pollution source, and can affect human health.
How aerosols form, and their role in climate, remains poorly understood, but scientists would like to know more so they can better understand the implications for future climate change. Aerosols seed cloud formation and can reflect the sun's heat, cooling the Earth, said Markku Kulmala, an aerosol physicist at the University of Helsinki in Finland and lead author of the study of aerosol formation. The study is detailed in today's (Feb. 21) issue of the journal Science.
In the Hyytiälä forest in Finland, set aside decades ago to monitor nuclear fallout from the 1986 Chernobyl disaster, Kulmala and his colleagues built the world's most sensitive aerosol-particle detector. The instrument helped them watch the smallest aerosol precursors in the atmosphere, which had never been seen before.
How aerosols form
The instrument saw that as gas molecules of sulfuric acid smashed together with organic molecules, they formed incredibly small clusters, less than two nanometers in diameter. Lined up side-by-side, about 25,000 of these clusters would still be smaller than the width of a human hair.
The neutrally charged clusters grew slowly at first, until they reached a critical size (about 3 nanometers), the study found. Then, in a burst of activity, the neutral clusters quickly added a heavy coat of organic molecules. "What is most exciting is that the growth of small clusters [is] size-dependent," Kulmala told OurAmazingPlanet in an email interview. "This means that the formation of new aerosol particles is limited by the vapors participating on the growth of 1.5- to 3-nanometer particles."
Understanding the buildup of clusters, and how they grow, is key to predicting aerosol formation and their effect on climate. "The importance of neutral clusters and their growth has significant effect on [the] global aerosol load, and also to global cloud droplet concentrations," Kulmala said.
Impacts on climate
The study site is boreal forest, which covers about 8 percent of the planet's northern latitudes and is expected to expand with global warming. [Top 10 Surprising Results of Global Warming]
The aerosol-forming processes in the planet's tropical forests and urban regions may be different. "We still have to see if the results can be generalized to other places," said atmospheric chemist Meinrat Andreae of the Max Planck Institute for Chemistry in Germany, who was not involved in the study.
Andreae also cautioned that the small particles analyzed in the study must grow bigger before they can affect health or climate. "The particles that are formed at this step (a few nanometers in size) are still a long way away from the size range where they have climate or health relevance," he told OurAmazingPlanet in an email interview.
Generate a timestamp
Common date formats:
- ISO 8601: 2051-08-24T05:00:18+00:00
- RFC 2822: Thu, 24 Aug 2051 05:00:18 +0000
Why would I need to convert a date to a timestamp?
Timestamps are commonly used in computer systems and databases to record and track events with a high degree of accuracy. A timestamp represents a specific point in time, typically measured as the number of seconds that have elapsed since a specific starting point (the "epoch").
Converting a date to a timestamp can be useful in various scenarios. For example, if you need to compare two events that occurred at different times, converting their respective dates to timestamps can help you determine which event occurred first. Timestamps can also be used to generate unique identifiers or to sort and filter data in a database based on time-related criteria. Additionally, timestamps are commonly used in programming and web development to record the time a specific action occurred or to track how long a process took to execute.
Overall, converting a date to a timestamp is a simple but powerful technique for accurately recording and tracking events over time, making it essential for developers, data analysts, and anyone who works with time-related data.
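As an illustration, the conversions described above can be sketched in Python using only the standard library. The example date is the RFC 2822 sample shown earlier, and the epoch is the usual Unix epoch (1970-01-01 00:00:00 UTC):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# Parse an RFC 2822 date string into a timezone-aware datetime
dt = parsedate_to_datetime("Thu, 24 Aug 2051 05:00:18 +0000")

# Convert to a Unix timestamp: whole seconds elapsed since the epoch
ts = int(dt.timestamp())
print(ts)  # 2576466018

# The same instant rendered in ISO 8601 form
print(dt.isoformat())  # 2051-08-24T05:00:18+00:00

# Timestamps reduce "which event came first?" to a numeric comparison
earlier = datetime(2051, 8, 24, 4, 0, tzinfo=timezone.utc)
print(earlier.timestamp() < ts)  # True
```

Because a timestamp is just a number, sorting or filtering database rows by time becomes ordinary integer comparison, which is exactly why the format is so common in computer systems.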
The oceans help to limit global warming by soaking up carbon dioxide emissions. But scientists have discovered that intense warming in the future could lessen that ability, leading to even more severe warming.
The discovery comes from a study led by The University of Texas at Austin in which researchers analyzed a climate simulation configured to a worst-case emissions scenario and found that the oceans' ability to soak up carbon dioxide (CO2) would peak by 2100, becoming only half as efficient at absorbing the greenhouse gas by 2300.
The decline happens because of the emergence of a surface layer of low-alkalinity water that hinders the ability of the oceans to absorb CO2. Alkalinity is a chemical property that affects how much CO2 can dissolve in seawater.
Although the emissions scenario used in the study is unlikely because of global efforts to limit greenhouse gas emissions, the findings reveal a previously unknown tipping point that if activated would release an important brake on global warming, the authors said.
"We need to think about these worst-case scenarios to understand how our CO2 emissions might affect the oceans not just this century, but next century and the following century," said Megumi Chikamoto, who led the research as a research fellow at the University of Texas Institute for Geophysics.
The study was published in the journal Geophysical Research Letters.
Today, the oceans soak up about a third of the CO2 emissions generated by humans. Climate simulations had previously shown that the oceans slow their absorption of CO2 over time, but none had considered alkalinity as an explanation. To reach their conclusion, the researchers recalculated pieces of a 450-year simulation until they hit on alkalinity as a key cause of the slowing.
According to the findings, the effect begins with extreme climate change, which supercharges rainfall and slows ocean currents. This leaves the surface of the oceans covered in a warm layer of fresh water that won't mix easily with the cooler, more alkaline waters below it. As this surface layer becomes more saturated with CO2, its alkalinity falls and with it, its ability to absorb CO2.
The end result is a surface layer that acts like a barrier for CO2 absorption. That means less of the greenhouse gas goes into the ocean and more of it is left behind in the atmosphere. This in turn produces faster warming, which sustains and strengthens the low-alkalinity surface layer.
Co-author Pedro DiNezio, an affiliate researcher at the University of Texas Institute for Geophysics and associate professor at the University of Colorado, said that the discovery was a powerful reminder that the world needs to reduce its CO2 emissions to avoid crossing this and other tipping points.
"Whether it's this or the collapse of the ice sheets, there's potentially a series of connected crises lurking in our future that we need to avoid at all costs," he said. The next step, he said, is to figure out whether the alkalinity mechanism is triggered under more moderate emissions scenarios.
Co-author Nikki Lovenduski, a professor at the University of Colorado who contributed to the Intergovernmental Panel on Climate Change 2021 climate report, said that the study's findings would help scientists make better projections about future climate change.
With the help of a rock-zapping laser, NASA's Mars rover Curiosity has detected Red Planet rocks similar to Earth's oldest continental crust, researchers say.
This discovery suggests that ancient Mars may have been more similar to ancient Earth than previously thought, scientists added.
Earth is currently the only known planet whose surface is divided into continents and oceans. The continents are composed of a thick, buoyant crust rich in silica, whereas the seafloor is made up of comparatively thin, dense crust of silica-poor basaltic rock. [7 Biggest Mysteries of Mars]
Previously, scientists had suggested that the continental crust may be unique to Earth. The silica-rich rock, the idea goes, resulted from complex activity in the planet's interior potentially related to the onset of plate tectonics — when the plates of rock making up Earth's exterior began drifting over the planet's mantle layer.
In contrast, analyses of images snapped by Mars-orbiting spacecraft and studies of meteorites from the Red Planet previously suggested that the Martian crust was made up primarily of basaltic rock.
Now researchers have found that silica-rich rock much like the continental crust on Earth may be widespread at the site where Curiosity landed on Mars in August 2012.
"Mars is supposed to be a basalt-covered world," study lead author Violaine Sautter, a planetary scientist at France's Museum of Natural History in Paris, told Space.com. The findings are "quite a surprise," she added.
Sautter and her colleagues analyzed data from 22 rocks probed by Curiosity as the six-wheeled robot wandered ancient terrain near Gale Crater. This 96-mile-wide (154 kilometers) pit formed about 3.6 billion years ago when a meteor slammed into Mars, and the age of the rocks from this area suggests they could help shed light on the earliest period of the Red Planet, scientists said.
The 22 rocks the researchers investigated were light-colored, contrasting with the darker basaltic rock found in younger regions on Mars. They probed these rocks using Curiosity's rock-zapping laser instrument, called ChemCam, which analyzes the light emitted by zapped materials to determine the chemistry of Martian rocks.
The scientists found these light-colored rocks were rich in silica. A number of these were similar in composition to some of Earth's oldest preserved continental crust.
Sautter noted that recent orbiter and rover missions had also spotted isolated occurrences of silica-rich rock. The researchers suggest these silica-rich rocks might be widespread remnants of an ancient crust on Mars that was analogous to Earth's early continental crust and is now mostly buried under basalt.
The researchers added that the early geological history of Mars might be much more similar to that of Earth than previously thought. Future research could investigate whether the marked differences between Mars' smooth northern hemisphere and rough, heavily cratered southern hemisphere might be due to plate tectonics, Sautter said.
The scientists detailed their findings online Monday (July 13) in the journal Nature Geoscience.
Computer simulation, a revolutionary research approach first conceived by mathematicians during the Second World War for military studies, has become an indispensable tool for researchers, academics and developers everywhere. It helps them advance scientific knowledge and develop technologies. Computer-aided design is cost-effective, risk-free and flexible—and avoids the classical hit-and-miss R&D process. It increases production capacity, reduces development costs and shortens time to market.
The emerging realm of miniature quantum devices, including lasers and semiconductor-based transistors, will rely even more heavily on simulation software in the design and production stages. Quantum technology will revolutionize research again by peering into the tiniest features of the world more deeply than anything before. The devices could be key to treating blindness, cancer or dementia. They could build resilience to cyber attacks, transmit electrical power from plants without loss and replace GPS technology with small devices that work everywhere and at all times.
Quantum simulators are radically different from traditional computer simulators because they take into account nanoscale interactions of quantum systems that behave in unexpected ways. Quantum simulation software predicts that behaviour, helping researchers understand and consider those differences at the design stage before sensors are fabricated in the lab. Until now, a comprehensive quantum simulation method for solid-state systems has eluded developers.
A small Canadian company, Nanoacademic Technologies Inc., has tackled this challenge with an effective new method: solid-state quantum device simulation. This method can model semiconductor-based quantum device properties over a unique spectrum of features while being agnostic about the geometry and considered materials.
With the help of the quantum research team at the National Research Council of Canada (NRC) and contribution funding from the Collaborative Science Technology and Innovation Program, Nanoacademic enhanced its software code and has taken the product from the lab to the market much more quickly than originally anticipated.
Nanoacademic's Quantum Technology Computer-Aided Design (QTCAD) calculates a variety of properties in almost any geometry of semiconductor-based spin-qubit device. In addition to electron simulation, a new feature simulates quasiparticles such as holes, offering physical insights into devices that do not yet exist by modeling the technological advantages specific to hole-based quantum devices.
"QTCAD makes it possible to finely model semiconductor devices by meshing them in specific ways at a very small (nanometric) scale to design basic functional units of future quantum computers," says Jeremy F. Garaffa, Nanoacademic's Director of Sales and Marketing. "And we are actively developing additional software features and modules that are also pretty unique." These include bridging the quantum modeling features of QTCAD with their set of density-functional theory codes—something no other software on the market can do.
To help Nanoacademic take QTCAD to the next level and prepare the product for market launch, their contacts at the Institut quantique in Sherbrooke, Quebec, introduced them to the quantum research team at the NRC in 2020.
"Supported by the NRC's scientific expertise, we crafted a project to anchor our tool with the most advanced theoretical physics as well as experiments that allowed us to compare and validate the performance of our software," adds Garaffa.
Getting to the finish line
According to Dr. Marek Korkusinski, Senior Research Officer with the Quantum Physics Group, the NRC's role has been to conduct experiments with Nanoacademic's software, communicate the results and work with the company to verify the data. "They use these data to calibrate their calculations and determine whether their simulation models will work in the real world," he says.
The NRC meets frequently with the Nanoacademic team, led by Dr. Félix Beaudoin, Director of Quantum Technology, to discuss challenges, solutions and results. With each partner bringing different yet critical expertise to the table, the collaboration has led to a version of QTCAD that is now commercially viable with customers in several countries.
Further commercialization will see the software used widely in academia, where it will speed up learning, reduce costs and train highly qualified personnel. In industry and government, it will take fewer resources to reach goals and provide safe alternatives to traditional on-the-ground experimentation.
This project is supported by the NRC's Quantum Sensors Challenge program and under the commercialization pillar of Canada's National Quantum Strategy. "It particularly embodies the importance of materials research and development," says the NRC's Dr. Aimee K. Gunther, Deputy Director of the Quantum Sensors Challenge program. "As it has for previous revolutionary technologies like computers and smartphones, our success depends on the quality of the engineering tools used to design fundamental components and building blocks."
Researchers have uncovered a previously unknown biological process involving vitamin B12 and taurine that regulates the production of new bone cells. This pathway could be a potential new target for osteoporosis treatment.
In humans it is well known that vitamin deficiencies lead to stunted growth, but the underlying mechanisms have long been a mystery. In this study, the team was able to piece together the biological process that leads to the production of new bone by studying the offspring of mice lacking the Gastric Intrinsic Factor gene, which is active in the stomach and allows the gut to absorb vitamin B12.
“Bone cells aren’t solely studied in isolation in the lab as both local and systemic factors play an important role in their function, so it’s important to unpick the multitude of biological factors that can affect their proliferation,” says Dr Pablo Roman-Garcia, a first author from the Wellcome Trust Sanger Institute. “We were amazed to find a new system that controls bone mass through a protein expressed, of all the places, in the stomach.”
The researchers found that bone mass was severely reduced at eight weeks of age in the offspring of mice with vitamin B12 deficiency. Giving the mother a single injection of vitamin B12 during pregnancy was enough to prevent stunted growth and the onset of osteoporosis in the offspring. The team was surprised to find that B12-deficient mice had only one-third of the normal number of bone-creating osteoblast cells, but had no change in bone-degrading osteoclast cells.
Reducing vitamin B12 levels in bone cells in the laboratory did not affect the function of the bone-forming cells directly, while under the same conditions it affected liver cell functions profoundly. These findings suggested to researchers that the liver has an important role to play. This was confirmed when they showed that liver cells from the offspring of B12-deficient mothers were unable to produce taurine. When these mice were fed regular doses of taurine at three weeks of age, they recovered bone mass and grew normally.
“While the importance of taurine is yet to be fully understood, this research shows that vitamin B12 plays a role in regulating taurine production and that taurine plays an important role in bone formation,” says Dr Vidya Velagapudi, Head of the Metabolomics Unit at the Institute for Molecular Medicine Finland. “To date we have focussed only on vitamin B12-deficient populations, but the next stages of this research will need to confirm the connection between vitamin B12, taurine and bone formation in general populations.”
While the focus of this study was the impact of maternal vitamin B12 deficiency on offspring in mouse models, there are promising parallels between these findings and data from human patients. Samples collected by Kocaeli University Hospital, Turkey, from children born to nutritionally vitamin B12-deficient mothers also showed a significant decrease in levels of vitamin B12 and taurine. In addition, older patients with vitamin B12 deficiency from a study by the Institute for Molecular Medicine, Finland displayed a statistically significant positive correlation, suggesting that vitamin B12 plays a key role in regulating taurine synthesis and bone formation in humans of all ages.
“The discovery of this unanticipated pathway between gut, liver and bone would not have been possible without the use of mouse molecular genetics and studies in the clinic that allowed us to understand interactions between these organs,” says Dr Vijay K Yadav, a senior author from the Sanger Institute. “The fact that the vitamin B12-taurine-bone pathway affects only bone formation and appears to play the same role in mice and human beings raises the prospect that targeting this pathway through pharmacological means could be a novel approach toward an anabolic treatment of osteoporosis.”
Source: Materials provided by the Wellcome Trust Sanger Institute.
By Dr. Anbarrasu N.D.
What is Hip Fracture Surgery?
Surgical correction of a hip fracture is known as hip fracture surgery.
Hip fractures involve a break that occurs near the hip in the upper part of the femur or thigh bone. The thigh bone has two bony processes on its upper part - the greater and lesser trochanters. The lesser trochanter projects from the base of the femoral neck on the back of the thigh bone. Hip fractures can occur in the femoral neck, in the area between the greater and lesser trochanters, or below the lesser trochanter.
The hip joint is a “ball and socket” joint. The “ball” is the head of the femur or thigh bone, and the “socket” is the cup-shaped acetabulum. The hip joint enables the upper leg to bend and rotate at the pelvis. The joint surface is covered by a smooth articular surface that allows pain-free movement in the joint.
Causes of Hip Fractures
Hip fractures are most frequently caused by minor trauma in elderly patients with weak bones, and by high-energy trauma or serious injury in younger people. Long-term use of certain medicines, such as bisphosphonates to treat osteoporosis (a disease causing weak bones) and other bone diseases, can increase the risk of hip fractures.
Signs and Symptoms of Hip Fractures
Signs and symptoms of hip fractures include:
- Pain in the groin or outer upper thigh
- Swelling and tenderness
- Discomfort while rotating the hip
- Shortening of the injured leg
- Outward or inward turning of the foot and knee of the injured leg
Diagnosis of Hip Fractures
Your doctor is able to diagnose a hip fracture based on your symptoms, abnormal posture of your leg and hip, and a thorough physical examination. Your doctor may also order imaging tests, such as X-rays, MRI scan, or bone scan to confirm and view the hip fracture.
A preoperative assessment will be made before surgery to check your overall health and make sure you are ready for the procedure. You will be asked about any medications you are taking and whether any need to be stopped. You will have an anesthetic assessment to decide what type of anesthesia will be used during surgery. You will be given antibiotics to reduce the risk of wound infection after surgery. An anticoagulant such as heparin may be given, since the surgery carries a risk of blood clots. Blood tests, urine samples, chest X-rays, and electrocardiograms will be checked for any irregularities.
Surgical Treatment of Hip Fractures
Surgery is employed when conservative approaches such as medications, injections, and physiotherapy fail to provide satisfactory results.
Hip fracture surgery is performed under anesthesia either arthroscopically or through open surgery. Your surgeon will decide which approach is the best for your condition.
In general, an incision is made at the top of your thigh to expose the bones of the hip joint. The fractured or damaged joint is replaced with a prosthesis. The leg is moved to check for a satisfactory range of motion once the prosthesis is placed. The surgical incision is then closed with sutures and dressings to complete the operation.
Different surgical procedures are used for the treatment of hip fractures, and the type of surgery normally depends upon the severity and location of the fracture.
- Total Hip Replacement: This is an operation to replace both the natural socket in the hip and femoral head with prostheses. The upper femur and the socket in your pelvic bone are replaced with a prosthetic implant.
- Partial Hip Replacement: This is an operation to replace the damaged femoral head with a prosthesis. The traumatically fractured or damaged ball-like head of the thigh bone (the femoral head) is replaced with a prosthetic implant.
- Internal Fixation: This is an operation to hold the bone in place with screws, pins, rods, or plates while it heals. Your fracture may be corrected by placing a sliding hip screw into the head of the thigh bone (femur), secured by a plate to the upper thigh bone to hold the fracture together.
Postoperative Care Instructions
Instructions for postoperative care include:
- Use of assistive devices such as splints and crutches for walking
- Keep your leg elevated to decrease swelling
- Rest the hip as much as possible
- Medications to control pain and swelling
- Limited weight-bearing activities
- Follow a balanced, varied diet
- Physiotherapy to improve flexibility, range of motion, and strengthen muscles
- Adhere to follow-up appointments
What are the Risks Associated with Hip Fracture Surgery?
As with any surgery, some of the potential risks associated with hip fracture surgery include:
- Improper or non-union of bone
- Infection and wound complications
- Damage to nerves and blood vessels
- Leg-length discrepancy
- Deep vein thrombosis (blood clot)
- Bedsores due to lack of movement post surgery
- Muscle atrophy
- Deterioration of mental health in old patients
- Avascular necrosis
What are the Benefits of Hip Fracture Surgery?
Some of the benefits associated with successful hip surgery include:
- Reduced pain
- Decreased stiffness
- Improved mobility, strength, and coordination
- Ability to maintain an active lifestyle
Note: Although the first edition of Israel and Palestine: A Common Historical Narrative has been published, the document undergoes continuous refinement through ongoing stylistic and scholarly review. Text on this page may be changed from time to time as a result of this process.
Israel and Palestine:
A Common Historical Narrative
©2020, The Israel Palestine Project
All rights reserved
Chapter 3 - Aliya: The evolution of Zionism, Jewish immigration, and early impacts on indigenous Palestinians
The lives of most nineteenth century European Jews were challenging. Never truly accepted, they lived on the margins of society, relentlessly targeted through discrimination and exclusion. While some achieved a measure of assimilation, others lived in perpetual fear of cycles of virulent anti-Semitism and fierce “pogroms” (organized acts of persecution and expulsion), especially in the Russian Empire and other Eastern European countries. For some, the solution was emigration, and many eventually departed for America. Others began to dream of safety and security in a land or country of their own. Although not the only possibility considered, Palestine was the most appealing destination for many religious Jews in particular (while many others wished to emigrate to a more modern world due to their ideological convictions) because of the historical presence of Jewish life in the Holy Land prior to the Roman occupation and its religious and national significance.
The organization Hovevei Zion (the Lovers of Zion) was started in 1881 in a number of Russian cities, as a response to ongoing persecution. This group (and others like it) promoted immigration and settlement in the Holy Land. The first wave of Russian-Jewish immigrants (50,000 to 60,000) that arrived in Palestine between 1882 and 1903 is referred to historically in Zionism as the first Aliya (Hebrew for “ascent”). Once arrived, they began purchasing land from Arab owners although, due to limited funds, the land was sometimes poor and swampy. Not all remained: approximately half left Eretz Yisrael, the Land of Israel, before the end of this period (see Note 6).
A second wave of immigration began in 1905 (the second Aliya) as a response to a number of extensive and violent pogroms in Russia and the Ukraine, and 40,000 more settlers arrived in Palestine, “… of whom fewer than half were more or less permanently absorbed in Eretz Yisrael" (see Note 7). This wave of immigrants, many of them young and idealistic, was imbued with the fervor of creating a socialist society. They characterized themselves as pioneers and were intent on creating, in the Holy Land, a new kind of Jewish existence.
As a political movement, Zionism was born at the first Zionist Congress, held in Basel, Switzerland, in 1897. Zionism was influenced by political trends of modernism, perceived idealistically, sweeping through Europe in the late 19th Century, including nationalism, secularism, socialism and colonialism. As it developed, Zionism increasingly focused on gaining support for the creation of a Jewish homeland from the great world powers.
Although not historically proven, a story is sometimes quoted and told that after the first Zionist Congress, the rabbis of Vienna decided to explore the ideas that Theodor Herzl, the founder of political Zionism, had expounded at that Congress and in his book, Der Judenstaat (The Jewish State). While subsequent Zionist Congresses explored the possibility of creating a Jewish homeland in other parts of the world (Argentina, Uganda, even Texas!), none of these explorations bore fruit and the focus on settlement in Palestine continued.
Over this time, a famous saying of the Zionist movement arose: “A land without a people for a people without a land", based on the popular and literary impressions of Palestine as being a barren and empty land, suited to be given to a landless people. These impressions were based on the writings of famous travelers such as the American author Mark Twain, who described Palestine in this way in his widely-read 1869 book, The Innocents Abroad. This saying was repeated with such fervor that it grew into an enduring myth: many settlers, upon their first arrival in the Holy Land, were surprised to find the land already inhabited by Arab people, a land with a vibrant culture, replete with fields and orchards carefully tended and well-cultivated. Most of the Palestinian people the newcomers encountered lived in and tilled fields and gardens in hundreds of cities, towns and villages that dotted the Holy Land and where they developed the terrace system of irrigation. The export of agricultural products to Europe was common, particularly oranges (see Note 8). Arab landowners had created the orange groves of Jaffa and, by the 1860s, Jaffa oranges were famous throughout Europe. Non-agricultural industry was also developing.
The Zionist settlers purchased land from absentee landowners, mostly in the valleys and coastal regions where there were few, if any, Arab small landholding farmers (known as “Fellaheen”). In contrast to Ottoman rights of land usage, which were diffuse but no less extensive, the purchases followed European land ownership law, with its absolute right of private ownership, and Palestinian tenant farmers living on these lands were evicted.
During this period, conflict between the Zionists and the Arab Palestinian population intensified, the result of accelerating evictions of tenant farmers as Jewish land purchases increased. In response, leading Zionist bodies adopted policies and took actions to improve relations by providing compensation for tenants and reaching out to the young Arab Palestinian national movement (see Note 9). By 1929, the British, noting the rise of landlessness among Palestinians and the resultant increase in tensions, enacted legislation to protect the rights of the tenant farmers evicted by sales from large landowners. Yet through all this time, some Palestinians and immigrant Jews lived side-by-side as amicable neighbors.
While the political leaders of the Zionist movement in Palestine were aware of the large numbers of Arab peoples already present, their focus was on meeting the needs of arriving Jewish settlers. As an outcome of this focus, they adopted policies that conflicted with the needs and interests of the indigenous population. During the earlier period of immigration, given the strong socialist orientation of the settlers, it was felt that the benefits of a successful, prosperous Jewish society would accrue to all, Arab and Jew alike. Indeed, Palestinians did benefit from the development associated with the Yishuv (as the new Jewish community was termed) and the later British Mandate, although it is not certain which had the greater impact. However, the net effect was the exclusion of native Palestinians from the economic benefits the Jewish settlers were bringing to the land.
As development continued and living conditions improved, Arab immigrants were drawn to Palestine, attracted by the opportunity for a better life. However, claims that Arab immigration swelled the Palestinian population significantly during the rapid rise in the period of 1930-1945 (e.g., as asserted by Joan Peters in the 1984 book, From Time Immemorial: The Origins of the Arab–Jewish Conflict over Palestine) have been discredited by Israeli and Palestinian scholars alike as having no substantive basis (see Note 10).
The clashes that planted the seeds for the Israeli-Palestinian conflict occurred in the first difficult encounters between the indigenous peoples and the zealous among the newcomers. Confrontation became increasingly common during the first decades of the twentieth century, with parallel experiences of separation, resentment and rejection. Palestinians voiced their fears about possible loss of their land and way of life, through the media of the day and other venues. From 1891 on, voices of protest were raised to an unresponsive Ottoman government and were later directed to the British government. Nothing seemed to stem the tide of ever more settlers arriving. The two peoples were separated by culture, religion, education and worldview. They didn’t have a common language to speak with each other about the grave conflict growing daily in their midst, and even the small population of Arabic-speaking Jews who had lived in the land for centuries were marginalized and forgotten. Violent eruptions began to break out sporadically – a sign, perhaps, of what was to come.
6. Ran Aaronsohn, in Shared Histories: A Palestinian-Israeli Dialogue, ed. Paul Scham, Walid Salem, and Benjamin Pogrund, Left Coast Press, 2005, p. 62.
7. Ibid., p. 64.
8. Adel Manna, in Shared Histories, p. 25.
9. Ran Aaronsohn, in Shared Histories, p. 65.
10. Gilbar, Gad. “Megamot ba-hitpathut ha-demografit shel ha-Falastinim, 1870–1987” (Trends in the Demographic Development of the Palestinians, 1870–1987), in Hatenuah ha-leumit ha-Falastinit: Me-imut le-hashalamh? (The Palestinian National Movement: From Confrontation to Reconciliation?), ed. Moshe Maʻoz and B.Z. Kedar (Tel Aviv: The Ministry of Defense, 1996). Gilbar, an Israeli scholar relying on an abundance of British and Zionist sources, argues convincingly that immigration since the 19th century was responsible for at most a tenth of the number of Arabs in Palestine on the eve of 1948, with the remainder a result of natural growth. More detailed information on this analysis, including references to work by Roberto Bachi, Fred Gottheil, Yehoshua Porath, Edward Said, Christopher Hitchens, and Norman Finkelstein, is included in Notes in the full Narrative.
October is national Farm-to-School Month in the United States. It’s a time to celebrate the connections between schools, farmers, and locally and regionally produced foods. Each year, millions of students, farmers, and communities across North America celebrate the movement that’s improving child nutrition, supporting local farmers and economies, and increasing food and nutrition education.
Farm-to-school programs encourage schools to buy farm-fresh foods from their local community to serve in the cafeteria and feature educational activities in the classroom. Students gain access to healthy, nutritious food as well as educational opportunities such as farm field trips, garden-based learning, cooking lessons, and recycling programs. The farm-to-school approach helps children and families understand where their food comes from and how their food choices can impact their health, the environment, and their community. Specific benefits for children, according to the U.S. National Farm to School Network, include an increased knowledge and awareness about gardening, agriculture, healthy eating, local foods and seasonality, greater fruit and vegetable consumption both at school and at home, and enhanced overall academic achievement.
Farm-to-school programs also provide benefits to local and regional farmers and the broader community. They serve as an additional financial opportunity for farmers, fishers, ranchers, food processors, and food manufacturers, spurring economic activity and job creation within the community and the state. These programs also lessen the schools’ environmental impact by reducing student food waste and food miles. Across the globe, farm-to-school programs are fostering connections between students, teachers, parents, farmers, and policymakers in activities that support health, nutrition, agriculture, and local economies.
Food Tank is celebrating farm-to-school month by featuring 19 inspiring and innovative farm-to-school programs from around the world. These programs are making a demonstrated difference in child health, school attendance rates, food security, and farmer livelihoods in many communities.
1. Ghana School Feeding Programme, Ghana
Launched in 10 pilot schools by the former Ghanaian government in 2005, the Ghana School Feeding Programme (GSFP) has grown to feed more than 1.4 million children across 4,500 schools in Ghana. The program has helped increase school attendance, domestic food production, farm and household incomes, and food security in many communities. Active across all 170 districts, the GSFP is helping to reduce child hunger in some of Ghana’s most isolated communities.
2. Purchase from Africans for Africa Program, multiple countries
The Purchase from Africans for Africa Program (PAA) links smallholder farmers with local schools in five countries—Ethiopia, Malawi, Mozambique, Niger, and Senegal. Its pilot phase resulted in more than 1,000 metric tons of locally procured food serving 128,456 pupils in 420 schools. Family farmers’ productivity rates have increased by more than 100 percent, with schools feeding activities guaranteeing a market for an average of 40 percent of the food they produce. PAA is a partnership between the Government of Brazil, the Government of the United Kingdom, the U.N. Food and Agriculture Organization (FAO), and the World Food Program’s Purchase for Progress initiative.
3. Food for Life, England
A collaboration between food activist Jeanette Orrey, the U.K. Soil Association, and celebrity chef Jamie Oliver, Food For Life works to change food culture in nurseries, schools, hospitals, and care homes. Its “whole setting approach” works to provide nutritious, sustainably produced food, promote healthy food behaviors, and educate and engage pupils, patients, residents, and their families. The whole-school approach ensures that lessons about food and healthy eating are reflected and reinforced in the daily life of the school. Students, teachers, and organizers grow their own food, organize trips to farms, source food from local producers, set up school farmers’ markets, hold community food events, and serve freshly prepared meals made from scratch at school lunchtimes.
4. Farming and Countryside Education, England and Wales
Farming and Countryside Education (FACE) works across England and Wales to mobilize farmers and farming businesses to engage with classroom education. FACE provides training and resources to farmers to equip them with the knowledge and confidence to deliver on-farm and in-classroom education to inspire children. FACE also has a range of classroom resources across all key stages and subjects to help teachers build curriculum-based lessons about food, farming, and the countryside. Their online portal Countryside Classroom provides teachers with access to a database of learning resources, places to visit, and food and farming-related organizations.
5. Model Vihti, Finland
Model Vihti is a development project in Vihti, Finland, seeking to create sustainable, nature-based learning environments. The model involves garden-based learning where children plan the next season, grow seedlings indoors, prepare the soil, and plant, sow, and harvest edible crops. It also includes farm visits where pupils and teachers are assigned everyday tasks of a farmer, from cleaning horse stables to stacking firewood. Children also learn in nearby forests about forestry, water systems, and climate change, as well as basic survival skills such as first aid and making a safe fire in the forest. The three aspects of the program are designed to help children understand the interconnected natural and physical processes involved in food production.
6. Agri Aware, Ireland
Agri Aware provides educational and public awareness initiatives for both farming and non-farming communities across Ireland. One initiative is the Mobile Farm, an outdoor classroom that safely and humanely transports animals to educate children and adults via a hands-on learning experience. The Mobile Farm is accompanied by trained farmers who teach students about each animal, including facts about their natural habitat and their role in food production. In conjunction with the Dublin Zoo, Agri Aware also established Family Farm, an educational, interactive acre that represents modern Irish farm life. Each year, up to 1 million visitors learn about farm animals and Ireland’s agricultural history through the farmyard and its heritage exhibition.
7. School Meals Program, Italy
Since 2001, the city of Rome has gradually made its School Meals Program more sustainable, innovative, and culturally appropriate. Today, more than 144,000 meals are served daily in Rome across 550 nurseries, primary schools, and secondary schools; 92 percent of the meals are made from scratch on site; and 69 percent of them include organic food. Rome’s school meal program has a number of additional criteria such as a “guaranteed freshness” standard for fruit and vegetables—with no more than three days between harvest and intake—and a seasonal focus for designing recipes and menu planning. Children use ceramic and stainless steel tableware and all single-use items, such as napkins, must be recyclable and biodegradable.
8. Farm to Cafeteria Canada, Canada
Farm to Cafeteria Canada (F2CC) leads the Canadian farm-to-school movement, working with partners to influence policy to bring local, healthy, and sustainable foods into all public institutions. In 2016, grants were delivered to 50 schools across British Columbia and Ontario, providing approximately 20,000 students with opportunities to grow, harvest, cook, preserve, and eat healthy, local foods. F2CC’s Nourishing School Communities program has resulted in more than 250 policy and behavioral changes at the local and provincial levels, transforming school and policy environments. These include the development of policies or guidelines to facilitate food safety, food preparation, local food procurement and purchasing, and student involvement in menu planning, as well as field trips to local farms and food literacy activities.
9. Fresh Roots, Canada
Fresh Roots and the Vancouver School Board have partnered to develop outdoor, hands-on community learning classrooms called Schoolyard Market Gardens. The first of their kind in Canada, Schoolyard Market Gardens are a place of interaction for people of all ages, backgrounds, and cultures to explore food production, cooking, and eating. Produce grown in the gardens is distributed through a weekly Salad Box program as well as served in school cafeterias and local restaurants. Fresh Roots also hosts annual all-staff professional development days to help teachers learn how to use the garden as an outdoor classroom and achieve their specific curriculum objectives outside.
10. Ecotrust, United States
Ecotrust is a nonprofit organization based in Portland, Oregon, promoting projects that advance social equity, economic opportunity, and environmental well-being. In their farm-to-school work, Ecotrust focuses on low-income schools and preschools to ensure the most vulnerable children have access to fresh, healthy food. Ecotrust has launched how-to guides and resources, including the Farm to School Showcase Toolkit, a guide for connecting local food suppliers with school food buyers, and oregonfarmtoschool.org, a living guide to current information, studies, and data on farm-to-school outcomes in the state. Their online platform FoodHub connects more than 6,000 farmers, ranchers, fishermen, and specialty producers with wholesale food buyers in their region.
11. National Farm to School Network, United States
The National Farm to School Network (NFSN) provides vision, leadership, and support at the state, regional, and national levels to connect and expand the farm-to-school movement across the United States. More than 15,000 farm-to-school practitioners and supporters have joined the network and support NFSN by advocating for supportive policies, volunteering in their communities, and fostering connections and partnerships. Every year, NFSN establishes new initiatives and partnerships, develops new resources, toolkits, and reports, pushes for and achieves policy changes, and holds hundreds of presentations and events. Online, NFSN provides an extensive range of resources covering a range of topics and settings, opportunities to advocate for supportive policies, and information on establishing a farm-to-school program.
12. Seven Generations Ahead, United States
Seven Generations Ahead (SGA) works with the local government, community, and private sectors over a broad range of sustainability topic areas, including healthy community development, local food procurement, healthy eating, and sustainability education, among others. With their Fresh From the Farm Program (FFF), SGA directly implements healthy eating curriculum modules in limited-resource schools. FFF educators support the planning, design, and implementation of organic school gardens that introduce children to varieties of fresh fruits and vegetables, their cultivation methods, and nutritional value. Students can also engage in local organic farm tours, chef cooking demonstrations, and school-based composting that demonstrates the natural cycle of growing and harvesting food, preparing and eating food, and converting waste into fertilizer for new food.
13. The Kitchen Community, United States
Established in 2011 by Hugo Matheson and Kimbal Musk, The Kitchen Community (TKC) was founded on the belief that every child should have the opportunity to play, learn, and grow in healthy communities. To create healthier environments in underserved schools, TKC builds Learning Gardens, thriving vegetable gardens, and hands-on outdoor classrooms seeking to increase academic engagement and achievement, strengthen the bond between schools and their communities, and increase kids’ knowledge of and preference for fresh fruits and vegetables. TKC is the largest school garden organization in North America, impacting 250,000 kids across six major metropolitan regions with nearly 450 outdoor Learning Garden classrooms in schools nationwide.
14. Vermont Feed, United States
Vermont Feed (VT Feed) provides training, mentoring, and technical assistance to schools, food service staff, farmers, and nonprofit organizations working to build strong farm-to-school programs. VT Feed initiates projects that advocate for stronger food, farm, and nutrition policy, and develops innovative tools and evidence-based best practices for farm-to-school programs. Current projects include Jr Iron Chef TV, a statewide culinary competition for students to create healthy, local dishes, and the Farmer Correspondence Program, pairing farmers with classrooms based on students’ interests and grade levels. VT Feed also coordinates the Vermont Farm to School Network to facilitate local connections, foster local engagement, and work to increase farm-to-school initiatives in the state.
15. National School Feeding Programme, Brazil
Brazil’s National School Feeding Programme has been in operation since the 1950s but has transformed and expanded in recent decades. In 2009, the Brazilian government made it a legal requirement to source at least 30 percent of school meal produce from rural, family farms, and access to school meals has become a universal right under Brazilian law. The program is considered one of the largest and most comprehensive school nutrition programs in the world, supplying approximately 43 million pupils with one or more servings of healthy, culturally appropriate food per day in almost 250,000 schools across the country. According to the FAO, the program is improving the health of millions of young people, reducing school absenteeism, and guaranteeing a market for 120,000 family farmers across Brazil.
16. Sustainable Schools Program, multiple countries
Following the success of Brazil’s school feeding program, the FAO has partnered with the Brazilian government to replicate and adapt the program to 13 countries in Latin America and the Caribbean. Called the Sustainable Schools Program, it involves the adoption of healthy and adequate school meals, the implementation of educational school gardens, infrastructure improvements made to kitchens, dining halls, and storage rooms, and the direct purchase of local family farming products. For each country, a nutritional plan is developed based on students’ nutritional status, socioeconomic situation, and the knowledge and practices of household food consumption. Food is purchased from local family farmers to ensure dietary diversity and respect for cultural food preferences while promoting local economic development.
17. Australian Organic Schools, Australia
Australian Organic (formerly Biological Farmers of Australia) is the largest organic industry body in Australia. Their Organic Schools program helps schools start and maintain an organic school garden with seven educational units of work, including garden planning, soil health, planting, mulching, watering, and harvesting. Another three units are based on good nutrition, the benefits of consuming organic produce, and the process of becoming a certified organic producer. The program provides background information, lesson plans, activity sheets, case studies, and extra resources for use in primary and middle years.
18. Stephanie Alexander Kitchen Garden Foundation, Australia
The Stephanie Alexander Kitchen Garden Foundation, initiated by Australian celebrity chef, restaurateur, and food writer Stephanie Alexander, provides Pleasurable Food Education for primary school children. The program aims to provide a pleasurable learning experience that will positively influence children’s food choices, attitudes towards environmental sustainability, and working relationships with other children and adults. Based on the idea that fun is integral to learning, Pleasurable Food Education encourages critical thinking, teamwork, and increased levels of observation among students.
19. Garden to Table, New Zealand
The Garden to Table program aims to build skills through practical, hands-on, child-centric classes: not only growing and cooking skills but also awareness of individual and collective responsibility for the environment, healthy eating, and community connectedness. In partnership with T&G (formerly Turners & Growers), Garden to Table has launched the Young Gardener Awards, recognizing the most passionate young gardeners in the program. Winners receive vouchers to buy a glasshouse or garden bed, plus all the necessary tools and gardening resources to start their own productive home garden. The Garden to Table program began in 2010 with just three schools in Auckland and has now grown to a nationwide program with more than 60 participating schools.
Simple instruction on how to draw a penguin is a fun drawing exercise for kids and adults. Thanks to the step by step drawings, you can make a penguin drawing quickly and easily. A picture just in time for winter holidays, during which it is worth practicing your hobby which is drawing. If you are just starting your adventure with drawing, a penguin will be the perfect starting point. Over time, you can move on to more complicated drawings and learn How to draw a lion.
Penguin drawing – instruction
A penguin is a bird that does not fly, but it swims and dives very well. Penguins live in the far south, in Antarctica, where it is very cold. The thick feathers covering their entire body are tight and waterproof, which means that penguins do not freeze even in the most extreme weather conditions. Their shape resembles a black and white bowling pin. On land, they move awkwardly and slowly, all because of their short legs. However, as soon as they enter the water, they are in their element. They are excellent divers, and their streamlined bowling-pin shape makes them very fast and agile underwater.
The penguin is black and white, but have other crayons ready as well: yellow and orange to color the beak and feet. Start drawing with a pencil sketch, and use an eraser if you make a mistake. Once you have all the necessary utensils ready, you can proceed to the instructions.
How to draw a penguin
Time needed: 5 minutes
Draw a penguin step by step
- Draw a small circle in the center of the sheet and another, larger oval underneath.
- Now connect both circles with two lines. Then draw the wings and mark the legs of the penguin.
- Draw the eyes, beak and fins for the penguin.
- The penguin picture is almost finished. You only need to mark with a line where his black tailcoat ends.
- The drawing of the penguin is now finished. If you want, you can go over its contours with a black felt-tip pen.
- It is true that the penguin is not very colorful, but it does have some colors. Color his tailcoat and head black. Then take an orange crayon and color the feet and beak orange. You can also add some yellow-orange on the belly and neck.
The hustle and bustle of the holiday season is in full swing, and as we head into the end of what has been a challenging year, we can’t help but be filled with an array of emotions. Since March, we have felt all the feels—joy, sadness, anxiety, fear, elation, etc. You name it; we have felt it, and sorting through those emotions is not always easy—especially for children.
One of the great lessons learned over this past year is how to cultivate a culture of empathy and grow in our ability to understand and share another’s feelings. Empathy became the foundation upon which all the other emotions were built, and empathy is what can help you sort through the complexities of 2020 with your children.
Here are five ways you can help your children practice empathy:
1. Talk about and name feelings
Help your children validate and name their feelings. Avoid saying things like, “You shouldn’t be angry about e-learning,” and instead let your children feel their feelings. If your child is having trouble naming how they feel, look for picture books or videos or play “emotion charades” where children act out emotions and discuss times when they have also felt like that. You can also try the emotional snowman project referenced below.
2. Point out other people’s feelings
Children may not automatically notice how other people feel, so it is also important to validate and name others’ feelings. For example, you could say, “Your sister is feeling sad that she can’t see her friends. Maybe you could ask her to color with you.” Make a habit of regularly asking your child about friends and classmates, encouraging them to notice how others may be feeling in a situation.
3. Talk to your child about your feelings
Show your child that adults have emotions, too, and learning how to handle them is a part of life. For example, you could say, “I’m feeling frustrated that I can’t finish my project right now. I think I will go for a walk to make myself feel better.” Kids need to learn healthy ways to deal with tough emotions.
4. Provide opportunities for children to practice empathy
Empathy needs to be nurtured and requires practice and guidance. Give your child small tasks to do around the house, emphasize using social skills like sharing with a sibling, practice random acts of kindness in your neighborhood, or send cards to those in the military. Facilitating activities like this will have a big impact on your child—and the world around them.
5. Create a Kindness Jar
Instead of making resolutions this year, create a jar where you can capture all those moments when family members show kindness, compassion, and empathy. Start with an empty jar and have everyone write on slips of paper when someone is doing something good that needs to be recognized. On New Year’s Eve 2021, read through all the slips of paper and celebrate all you have accomplished.
A great variation of this project is to place a coin in a jar every time someone in your family demonstrates kindness, empathy, or other positive emotions and behaviors. When the jar is full, donate all of the money in the jar to charity. Directions for making this jar are included in the resources below.
Although this has been a tough year for many, we can still end it positively. Be thankful for those opportunities we have had to cultivate empathy with our families.
Happy New Year!
Project Resources for Practicing Empathy
Safe drinking water is essential for human life, and water purification is an important, complementary part of securing it. However, polluted, dirty water was found in Batang Kuis, Deli Serdang district, North Sumatra: the groundwater is yellow and has an odor, so it is not fit for consumption. Students from the Faculty of Biology, University Medan Area, therefore conducted research on a clean-water filter. The filter is built from cheap, simple materials: fibers, activated carbon, sand, gravel, and bricks. The purification process is also quite simple. The materials are arranged in layers inside a paralon (PVC) pipe, 1 meter long and 4 centimeters in diameter. Purification takes place inside this pipe, and the clear water is then channeled out through other pipes.
After going through the purification process, the water becomes clear and is drinkable once boiled. Mufti Sudibyo, the project mentor, said that this water filter has been introduced to the community through outreach and counseling, and is expected to be used sustainably.
1. Nitrogen (N)
Nitrogen is the first, and to some degree the major, nutrient for strong, vigorous growth, dark green leaf color, and photosynthesis. Plants that are almost all leaf, such as lawn grasses, wheat, oats, small grain crops, and golf course grasses, need plenty of nitrogen. The first number (N) in fertilizers for these crops should be especially high, particularly for grass, since it must continually renew itself after frequent mowing.
The opposite problem is nitrogen deficiency, which shows as pale, yellowing leaves. Conversely, if you take the “normal” appearance and darken it, you have a classic symptom of nitrogen toxicity.
Nitrogen deficiency symptoms in plants.
When buying fertilizers for grasses, look for an analysis that starts with a very high “First number” in the N – P – K numbers. 30 – 0 – 0 is often used, but any combination with a high “first number” can be used.
Just remember, 100 pounds of 30-0-0 is exactly the same as 200 pounds of 15-0-0. Even if you chose 10-10-10, you could get the same 30 pounds of actual nitrogen by applying 300 pounds. And, with the 10-10-10, you’d also be applying 30 pounds of phosphorus and 30 pounds of potassium. That would probably be overkill for grass.
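The bag arithmetic above can be sketched as a short calculation. This is a minimal illustration, not part of the original article: the helper name is my own, and it follows the article’s convention of reading the three analysis numbers as straight percentages of the bag weight.

```python
# Hypothetical helper illustrating the N-P-K bag math described above.
# A fertilizer analysis like 30-0-0 means 30% nitrogen, 0% phosphorus,
# 0% potassium by weight, so: pounds of nutrient = bag weight * (number / 100).

def nutrient_pounds(bag_weight_lb, analysis):
    """Return (N, P, K) pounds delivered by a bag with the given analysis."""
    return tuple(bag_weight_lb * pct / 100 for pct in analysis)

# 100 lb of 30-0-0 and 200 lb of 15-0-0 both deliver 30 lb of actual nitrogen:
print(nutrient_pounds(100, (30, 0, 0)))    # (30.0, 0.0, 0.0)
print(nutrient_pounds(200, (15, 0, 0)))    # (30.0, 0.0, 0.0)

# 300 lb of 10-10-10 delivers the same 30 lb of N, plus 30 lb each of P and K:
print(nutrient_pounds(300, (10, 10, 10)))  # (30.0, 30.0, 30.0)
```

The extra phosphorus and potassium in the last case is the “overkill for grass” the text warns about.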
2. Phosphorous (P)
Phosphorous is used by plants largely for root growth and development. Flowers that are well fed with phosphorus will have more blooms, and fruits ripen better and faster. Phosphorus is important to flower bulbs, as well as to perennials and recently established trees and shrubs. Since trees and shrubs do not need as much nitrogen as grasses and leafy vegetable crops, a small first number and a larger second number is often seen in fertilizers intended for these plants, shrubs, and bushes.
Phosphorous deficiency in tomato leaf.
3. Potassium (K)
Potassium is a general nutrient for all plants, improving the overall health and strength of the plant. It improves the plant’s ability to withstand temperature extremes, and to a lesser degree, stress from drought. Potassium also helps plants resist diseases.
Because most soils have some available potassium, the third number is sometimes smaller than the first two. However, if the soil lacks available potassium, as some soils do, a smaller third number may not be desirable.
Potassium deficiency symptoms.
4. Calcium (Ca)
Calcium is important for general plant vigor and promotes good growth of young roots and shoots. Calcium also helps to build cell walls. As cells weaken, the vascular system of the plant starts to collapse, reducing the uptake of all of the major elements. The symptoms show up first at the growing tips of both the shoots and the roots.
Calcium is an immobile element, meaning that when there is a deficiency, the plant can’t translocate calcium from the older leaves to the younger leaves. New growth at the leaf tips and margins begins to wither and die back, and the new leaves are often deformed.
Calcium deficiency on new leaves.
5. Magnesium (Mg)
Magnesium helps regulate uptake of other plant foods and aids in seed formation. As it is contained in chlorophyll, it is also important in the dark green color of plants and for the ability of a plant to manufacture food from sunlight.
Magnesium is necessary for formation of sugars, proteins, oils, and fats, regulates the uptake of other nutrients (especially phosphorous), is a component of chlorophyll, and is a phosphorus carrier.
Magnesium deficiency symptoms.
Deficiency symptoms include mottled yellowing between veins of older leaves while veins remain green. Yellow areas may turn brown and die. Yellowing may also occur on older leaves. Leaves may turn reddish purple due to low P metabolism, and decreased seed production often occurs.
Deficiencies most likely on leached sandy soils and where high levels of N and K have been applied.
Turf: Green or yellow-green stripes, changing to cherry red. Older leaves affected first. Increased winter injury.
Broadleaf: Leaves are thin, brittle, and drop early. Older leaves may show interveinal and marginal chlorosis, reddening of older leaves, with interveinal necrosis late in the season followed by shedding of leaves. Shoot growth is not reduced until deficiency is severe. Fruit yield is reduced in severe deficiencies; apples may drop prematurely.
Conifer: Needle tips are orange-yellow and sometimes red. Primary needles remain blue-green in young seedlings, but in older plants, older needles and the lower crown show symptoms first. In affected needles, the transition to green may be sharp.
6. Sulfur (S)
Sulfur helps maintain a dark green color while encouraging more vigorous plant growth. Sulfur is needed to manufacture chlorophyll. Sulfur is as necessary as phosphorus and is considered an essential mineral.
What does sulfur do for plants? Sulfur in plants helps form important enzymes and assists in the formation of plant proteins. It is needed in very low amounts, but deficiencies can cause serious plant health problems and loss of vitality. Plants only need 10 to 30 pounds of sulfur per acre. Sulfur also acts as a soil conditioner and helps reduce the sodium content of soils.
Sulfur deficiency in corn.
Sulfur in plants is a component of some vitamins and is important in helping give flavor to mustard, onions, and garlic. Sulfur borne in fertilizer assists in seed oil production, but the mineral can accumulate in sandy or overworked soil layers. Sulfur deficiencies in soil are rare, but do tend to occur where fertilizer applications are routine and soils do not percolate adequately.
We have now covered primary and secondary elements that plants require for healthy growth. However, do not make the mistake of thinking the other elements needed are to be taken for granted. Au contraire! The so-called “trace elements” can have a far more exaggerated effect on plant growth than just “a trace effect.”
When I ran a 2,000-acre farm, there were a few spots on one field with extremely low manganese. Until the problem was remedied, soybeans completely died in those spots! That's hardly a "trace" problem when you're depending on the soybean crop for your income. Let's examine the remaining elements needed to provide everything a plant needs.
7. Boron (B)
Boron helps in cell development and helps to regulate plant metabolism. It's a micronutrient required in very small amounts, and there is a narrow margin of safety when applying it: toxicities can occur if too much is applied.
Boron has important roles in vegetable plants. It is needed for protein synthesis, development of cell walls, carbohydrate metabolism, sugar translocation, hormone regulation, pollen grain germination and pollen tube growth, fruit set, and seed development. Boron is mobile and readily leached in sandy soils and regular additions are necessary for many vegetables, but only in small amounts. Boron toxicity will occur if this element is overly applied.
How boron deficiency affects a plant over time.
8. Chlorine (Cl)
Chlorine is involved in photosynthesis. Chloride is necessary for gas exchange, photosynthesis and protection against diseases in plants. When a plant’s leaf pores, called stomata, open and close to allow the exchange of gases, the plant sees an increase in potassium. A subsequent increase in chloride balances out the positive charge of the potassium to prevent plant damage. The exchange of gases between the plant and the air around it is critical for photosynthesis; a deficiency of chloride inhibits photosynthesis, threatening plant health.
Chlorine deficiency symptoms.
9. Copper (Cu)
Copper is extremely important in plant nutrition if only for the fact that it aids in forming chlorophyll. Plants don’t need much copper, but if they don’t get any, results can be disastrous.
It activates enzymes in your plants that help synthesize lignin. It's also part of the photosynthesis process. On top of that, it's key to flavor in certain vegetables and to color in certain flowers.
Copper is immobile in plants, so a deficiency will likely show up in newer growth first. New leaves will begin to cup, and you'll notice chlorosis between the veins. If the deficiency is serious, small spots on the leaves will die off, and leaves may wilt and fall off.
Leaf nodes will start growing closer and closer together, creating a squat look to your plant.
Copper deficiency in canola plants.
10. Iron (Fe)
Iron assists in the manufacture of chlorophyll and other biochemical processes. Iron is a nutrient that all plants need to function. Many of the vital functions of the plant, like enzyme and chlorophyll production, nitrogen fixing, and development and metabolism are all dependent on iron.
Without iron, the plant simply cannot function as well as it should.
Example of leaf chlorosis as a result of iron deficiency.
The most obvious symptom of iron deficiency in plants is commonly called leaf chlorosis. This is where the leaves of the plant turn yellow, but the veins of the leaves stay green.
Typically, leaf chlorosis will start at the tips of new growth in the plant and will eventually work its way to older leaves on the plant as the deficiency gets worse.
Other signs can include poor growth and leaf loss, but these symptoms will always be coupled with the leaf chlorosis.
11. Manganese (Mn)
Manganese is needed for chlorophyll production.
Manganese and Magnesium
It’s necessary to note the difference between magnesium and manganese, as some people tend to get them confused. While both magnesium and manganese are essential minerals, they have very different properties.
Magnesium is a part of the chlorophyll molecule. Plants that are lacking in magnesium will become pale green or yellow. A plant with a magnesium deficiency will show signs of yellowing first on the older leaves near the bottom of the plant.
Manganese is not a part of chlorophyll. The symptoms of manganese deficiency are remarkably similar to those of magnesium deficiency because manganese is also involved in photosynthesis. Leaves become yellow, with interveinal chlorosis.
However, manganese is less mobile in a plant than magnesium, so the symptoms of deficiency appear first on young leaves. It's always best to have a sample tested to determine the exact cause of the symptoms.
Other problems such as iron deficiency, nematodes, and herbicide injury may also cause leaves to yellow.
Manganese deficiency symptoms.
12. Molybdenum (Mo)
Molybdenum helps plants to use nitrogen. In non-legumes (such as cauliflowers, tomatoes, lettuce, sunflowers and maize), molybdenum enables the plant to use the nitrates taken up from the soil.
Molybdenum deficiency symptoms in a cauliflower leaf.
Where the plant has insufficient molybdenum, the nitrates accumulate in the leaves and the plant cannot use them to make proteins. The result is that the plant becomes stunted, with symptoms similar to those of nitrogen deficiency. At the same time, the edges of the leaves may become scorched by the accumulation of unused nitrates.
In legumes such as clovers, beans and peas, molybdenum serves two functions:
- The plant needs it to break down any nitrates taken up from the soil—in the same way as non-legumes use molybdenum.
- It helps in the fixation of atmospheric nitrogen by the root nodule bacteria. Legumes need more molybdenum to fix nitrogen than to utilize nitrates.
13. Zinc (Zn)
Zinc is used in the development of enzymes and hormones. It is used by the leaves and is needed by legumes to form seeds. Zinc also helps the plant produce chlorophyll.
Leaves discolor when the soil is deficient in zinc and plant growth is stunted. Zinc deficiency causes a type of leaf discoloration called chlorosis, which causes the tissue between the veins to turn yellow while the veins remain green. Chlorosis in zinc deficiency usually affects the base of the leaf near the stem. Chlorosis appears on the lower leaves first, and then gradually moves up the plant.
In severe cases, the upper leaves become chlorotic and the lower leaves turn brown or purple and die. When plants show symptoms this severe, it’s best to pull them up and treat the soil before replanting.
Zinc deficiency symptoms.
It’s hard to tell the difference between zinc deficiency and other trace element or micronutrient deficiencies by looking at the plant because they all have similar symptoms.
The main difference is that chlorosis due to zinc deficiency begins on the lower leaves, while chlorosis due to a shortage of iron, manganese or molybdenum begins on the upper leaves.
The only way to confirm your suspicion of a zinc deficiency is to have your soil tested. Your cooperative extension agent can tell you how to collect a soil sample and where to send it for testing. |
Learned optimism involves developing the ability to view the world from a positive point of view. It is often contrasted with learned helplessness, writes Kendra Cherry. By challenging negative self-talk and replacing pessimistic thoughts with more positive ones, people can learn to become more optimistic.
There are a number of benefits to becoming a more optimistic person. Some of the many advantages of optimism that researchers have discovered include:
- Better health outcomes: One study found that people who were more optimistic at age 25 were much healthier between the ages of 45 and 60 than their more pessimistic counterparts.
- Longer lifespan: Studies have shown that optimistic people tend to live longer than pessimists.
- Lower stress levels: Optimists not only experience less stress, they also cope with it better. They tend to be more resilient and recover from setbacks more quickly. Rather than becoming overwhelmed and discouraged by negative events, they focus on making positive changes that will improve their lives.
- Higher motivation: Becoming more optimistic can also help you maintain motivation when pursuing goals. When trying to lose weight, for example, pessimists might give up because they believe diets never work. Optimists, on the other hand, are more likely to focus on positive changes they can make that will help them reach their goals.
- Better mental health: Optimists report higher levels of well-being than pessimists. Research also suggests that teaching learned optimism techniques can significantly reduce depression.
In one study, children with risk factors for depression were placed in a training program where they were taught skills related to learned optimism. The results of the study revealed that children with the risk factors were much more likely to show symptoms of moderate to severe depression at a two-year follow-up. However, those who had received training in learned optimism and anti-depression skills were half as likely to develop such symptoms of depression.
Optimism vs. Pessimism
Pessimists tend to believe that bad things are simply bound to happen, that they are at fault, and that negative outcomes will be permanent. Optimists, on the other hand, expect that good things will happen to them. They tend to see setbacks as temporary events caused by circumstances. Rather than giving up or feeling helpless in the face of failure, optimists view it as a challenge that can be overcome or fixed.
Optimists and pessimists tend to differ in terms of explanatory style, or how they go about explaining the events that take place in their lives. Key differences in these explanatory styles tend to be centered on:
- Personalisation: When things go wrong, optimists tend to lay the blame on external forces or circumstances. Pessimists, on the other hand, are more likely to blame themselves for the unfortunate events in their lives. At the same time, optimists tend to view good events as being a result of their own efforts, while pessimists link good outcomes to external influences.
- Permanence: Optimists tend to view bad times as temporary. Because of this, they also tend to be better able to bounce back after failures or setbacks. Pessimists are more likely to see negative events as permanent and unchangeable. This is why they are often more likely to give up when things get tough.
- Pervasiveness: When optimists experience failure in one area, they do not let it influence their beliefs about their abilities in other areas. Pessimists, however, view setbacks as more pervasive. In other words, if they fail at one thing, they believe they will fail at everything.
Research has found that pessimists tend to be in the minority. Most people (estimates range from 60 to 80 percent) tend to be optimists to varying degrees.
Learned optimism is a concept that emerged out of the relatively young branch of psychology known as positive psychology. Learned optimism was introduced by the psychologist Martin Seligman, who is considered the father of the positive psychology movement. According to Seligman, the process of learning to be optimistic is an important way to help people maximise their mental health and live better lives.
Seligman himself has suggested that his work initially focused on pessimism. As a clinical psychologist, he tended to look for problems and how to fix them. It wasn't until a friend pointed out that his work was really about optimism that he began to focus on how to take what was good and make it even better.
Seligman’s work early in his career was centered on what is known as learned helplessness, which involves giving up when you believe that nothing you do will make any difference.
Explanatory styles play a role in this learned helplessness. How people explain the things that happen to them, whether they view them as being caused by outside or internal forces, contributes to whether people experience this helplessness or not.
A New Direction in Psychology
As a result of this paradigm shift, Seligman wrote a book focused on the psychology of learned optimism. His work helped inspire the rise of positive psychology. Seligman went on to become the president of the American Psychological Association, elected by the largest vote in the APA’s history. His theme for the year centered on the subject of positive psychology.
Psychology was only half-formed, he believed. While there was a solid body of research and practice on how to treat mental illness, trauma, and psychological suffering, the other side, focused on how to be happy and how to live a good life, was only in its infancy. He believed that if people could learn how to become optimistic, they could lead healthier and happier lives.
Can You Learn Optimism?
While it may be clear that optimism can be beneficial, it then becomes a question of whether or not people can learn to take a more positive perspective. Can even the most pessimistic of people adjust their worldview?
Are people born optimists, or is it a skill that can be learned?
Researchers suggest that in addition to being partially hereditary, optimism levels are also influenced by childhood experiences, including parental warmth and financial stability.
Seligman’s work, however, suggests that it’s possible to learn the skills that can help you become a more optimistic person. Anyone can learn these skills, no matter how pessimistic they are to begin with.
Is there an optimal time to develop this optimism?
Seligman’s research suggests that it may be beneficial to teach kids optimism skills late enough in childhood so that kids have the metacognitive skills to think about their own thoughts, but prior to the onset of puberty. Teaching such skills during this critical period might be the key to helping kids ward off a number of psychological maladies, including depression.
The ABCDE Model
Seligman believes that anyone can learn how to become more optimistic. He developed a learned optimism test designed to help people discover how optimistic they are. People who start out more optimistic can further improve their own emotional health, while those who are more pessimistic can benefit by lowering their chances of experiencing symptoms of depression.
Seligman’s approach to learning optimism is based upon the cognitive-behavioural techniques developed by Aaron Beck and the rational emotive behavioral therapy created by Albert Ellis. Both approaches are focused on identifying the underlying thoughts that influence behaviors and then actively challenging such beliefs.
Seligman’s approach is known as the “ABCDE” model of learned optimism:
- Adversity is the situation that calls for a response
- Belief is how we interpret the event
- Consequence is the way that we behave, respond, or feel
- Disputation is the effort we expend to argue or dispute the belief
- Energisation is the outcome that emerges from trying to challenge our beliefs
To use this model to learn to be more optimistic:
Think about a recent adversity you have faced. It might be something related to your health, your family, your relationships, your job, or any other sort of challenge you might experience.
For example, imagine that you recently started a new exercise plan but you are having trouble sticking with it.
Make a note of the type of thoughts that are running through your mind when you think about this adversity. Be as honest as you can and do not try to sugarcoat or edit your feelings.
In the previous example, you might think things such as “I’m no good at following my workout plan,” “I’ll never be able to reach my goals,” or “Maybe I’m not strong enough to reach my goals.”
Consider what sort of consequences and behaviours emerged from the beliefs you recorded in step 2. Did such beliefs result in positive actions, or did they keep you from reaching your goals?
In our example, you might quickly realise that the negative beliefs you expressed made it more difficult to stick with your workout plan. Perhaps you started skipping workouts more or put in less of an effort when you went to the gym.
Dispute your beliefs. Think about your beliefs from step 2 and look for examples that prove those beliefs wrong. Look for any example that challenges your assumptions.
For example, you might consider all of the times that you did successfully finish your workout. Or even other times that you have set a goal, worked towards it, and finally reached it.
Consider how you feel now that you have challenged your beliefs. How did disputing your earlier beliefs make you feel?
After thinking of times you have worked hard toward your goal, you may be left feeling more energized and motivated. Now that you have seen that it isn’t as hopeless as you previously believed, you may be more inspired to keep working on your goals.
Learning Optimism May Take Time
Remember, this is an ongoing process that you may need to repeat often. When you find yourself facing a challenge, make an effort to follow these steps. Eventually, you will find it easier to identify pessimistic beliefs and to challenge your negative thoughts. This process may also eventually help you replace your negative thoughts and approach challenges with greater optimism.
Criticisms & Potential Pitfalls
Some critics have argued that some learned optimism training programs are less about teaching people to become more optimistic and more about reducing pessimism. Other researchers believe that explanatory styles may actually have less to do with optimism than previously believed.
Other research has also suggested that optimism might also have a negative side. People who are overly and perhaps unrealistically optimistic may be prone to narcissism. Having an optimism bias can also lead people to take health risks and engage in risky behaviors because they underestimate their own level of danger.
While some research has pointed to potential pitfalls of being overly or unrealistically optimistic, most studies have supported the idea that there is a positive connection between optimism and overall health. Optimism, for example, is a predictor for better physical health as people grow older.
Mar 10, 2015
Superconducting "Cooper pairs" of electrons have been split to create entangled pairs of electrons in a new device built by physicists in Finland and Russia. The device employs two quantum dots made of graphene. Although other types of quantum dots have been used for this purpose, the latest research suggests that graphene quantum dots should deliver long-lived entangled electron pairs that could be used in quantum computers.
Entanglement is a quantum-mechanical phenomenon in which properties of fundamental particles are correlated so that making a measurement on one particle can instantaneously affect another particle – even across very large distances. In principle, a quantum computer can use this connectedness to perform certain calculations much faster than a conventional computer. Although practical quantum computers do not exist today, some potential designs involve using the intrinsic angular momenta, or "spin", of electrons as quantum bits (qubits) of information that can be entangled.
Superconductors provide a ready source of entangled electrons because the Cooper pairs that allow these materials to conduct electricity with little or no resistance are in fact entangled pairs of electrons with opposite spin. Splitting the pairs while preserving the electrons' entanglement can be done simply by connecting ordinary metal wires to either end of the superconductor. If the set-up is just right, each wire will carry away one electron from a pair. However, it is more often the case that both electrons will end up going down the same wire.
Boosting the odds
One way to boost the odds in favour of separation is to replace the wires with tiny blobs of semiconductor containing just several thousand atoms. These quantum dots have electron energy levels that can be set precisely by carefully adjusting their size. The two electrons from each Cooper pair can be guided to different resonant energy levels and separated as a result. This approach has already been exploited using quantum dots made from indium arsenide and, with greater efficiency, using carbon nanotubes.
The latest work, carried out by Pertti Hakonen and colleagues at Aalto University in Finland together with Gordey Lesovik of the Landau Institute for Theoretical Physics near Moscow, instead uses quantum dots made from graphene. Graphene should be able to preserve the entanglement of the separated electron pair for longer, thanks to the fact that it consists of a single layer of carbon atoms, which constrains the electrons to move in a straight line and so avoids the emission of electromagnetic radiation that interferes with the spin state.
The team used electron-beam lithography to carve out two rectangular quantum dots (each 200 × 150 nm) from a layer of graphene deposited on a silicon-dioxide substrate. The dots were positioned 180 nm apart, covered by a superconductor made from a thin sandwich of titanium and aluminium, and connected to two metal contacts.
Aligning energy levels
To split the entangled electrons from the superconductor, the researchers first set the resonant energy level of the quantum dots to equal the energy possessed by the Cooper pairs. They then varied the gate voltage across one of the dots and monitored the current flowing through the other. They found that across most of the voltage range there was no current, but that at certain voltages the current would suddenly increase, drop below zero and then return to the zero mark. The rise, they explain, occurs because at that voltage the energy in one dot increases very slightly, while that in the other drops by the same small amount, causing the electrons to separate and so register a current (unseparated pairs register as zero current). The negative current, meanwhile, is caused by electrons "elastic co-tunnelling" through the superconductor. "It is like having a switch where you reverse the current by aligning the energy levels either symmetrically or antisymmetrically," says Hakonen.
Venkat Chandrasekhar of Northwestern University in the US praises the team's ability to "independently control the energy levels of the two quantum dots", and so neatly distinguish Cooper-pair splitting from elastic co-tunnelling. Detlef Beckmann of the Karlsruhe Institute of Technology in Germany agrees, arguing that the group can "probe the mechanism of Cooper-pair splitting more clearly" than has been possible to date. "This is really a beautiful experiment," he says.
There is, however, still room for improvement. Hakonen and colleagues are working to increase the device's efficiency – it currently splits just 10% of electrons passing through it – by better controlling the quantum dots' energy levels. They also aim to show that the device not only splits Cooper pairs, but that it does in fact preserve entanglement. They plan to do this by recording the spin of the separated electrons using contacts made from the nickel–iron magnetic alloy dubbed permalloy.
The research is described in Physical Review Letters. |
Using Children’s Books to Target Tier 2 Vocabulary
A couple of years ago I attended a presentation at the ASHA convention given by Dawna Duff entitled Robust Vocabulary Instruction. The presentation was all about targeting tier 2 vocabulary using children’s books. If you are unfamiliar with tiered vocabulary check out my blog post here. If you are interested in learning how to select and target vocabulary while reading with your preschooler, read on!
Why Teach Vocabulary?
- Vocabulary is a critical piece of reading comprehension
- Vocabulary intervention benefits all children but is most beneficial for kids with language difficulties
- Effective vocabulary instruction is explicit, intentionally designed and involves careful target selection
Step-by-Step Vocabulary Selection
- Pull out your book of choice and flip through the pages. Create a list of all the words you think the child does not already know.
- Cross out any words that are not considered tier 2. Ask yourself these questions: Can the word be used in different contexts? Does the word have different meanings? Does the word have a high frequency synonym that the child understands? If the answers are yes, it is likely a tier 2 word.
- Narrow your list to 5-6 target words, determining which will be the most beneficial to the child in multiple contexts.
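The selection steps above amount to a simple filter over your candidate list. As a rough illustration (not part of Duff's presentation), the three checklist questions can be expressed in a few lines of Python; the candidate words and yes/no answers below are hypothetical examples:

```python
# Illustrative sketch of the tier 2 selection checklist above.
# The candidate words and the yes/no judgments are hypothetical,
# not taken from the presentation.

def is_tier_two(multiple_contexts: bool, multiple_meanings: bool,
                has_known_synonym: bool) -> bool:
    """A word is likely tier 2 if all three checklist questions are 'yes'."""
    return multiple_contexts and multiple_meanings and has_known_synonym

# Candidate words from a hypothetical read-through, with checklist answers:
# (used in different contexts?, different meanings?, known high-frequency synonym?)
candidates = {
    "forget": (True, True, True),     # synonym the child knows: "not remember"
    "burrow": (False, False, False),  # domain-specific -> likely tier 3, cross it out
    "gather": (True, True, True),     # synonym the child knows: "collect"
}

targets = [word for word, answers in candidates.items() if is_tier_two(*answers)]
print(targets)  # ['forget', 'gather']
```

From a filtered list like this, you would then narrow down to the 5-6 words most useful to the child across contexts.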
Step-by-Step Vocabulary Instruction
- Contextualize: How was the word used in the story? Talk about the word in context of the story.
- Define: Provide a definition of the word using words the child already understands.
- Different example: Provide an example of the word in a context different from the story.
- Interact with examples: Have the child tell you about something related to the word. Relate the word to something familiar to the child.
- Phonological rehearsal: Ask the child, “What word are we talking about?” so the child can practice producing and recalling the word.
Using the book The House at Pooh Corner by A.A. Milne, one of the 5-6 words you could select is forget. First, read the story and target your selected words by repeating the sentences containing the words and providing a brief definition using everyday language. For example, when describing the word forget you could say, "When you forget something, you don't remember it." After reading the story, review the words and teach them in more depth with the following procedure. Keep in mind instruction can span a few days or sessions.
- Contextualize: Christopher Robin wanted to be remembered, so he told Pooh not to forget him. He wanted Pooh to think about and remember him. If Pooh forgot about him, he would not think about him.
- Define: Forget means you don’t remember.
- Different example: You can forget people like in the story but you can also forget things. Maybe you go outside but you forget your shoes! You didn’t remember them so you have to go back inside and get them.
- Interact with examples: Tell me about a time you forgot something. Fill in the blank: "If you don't forget something, you _________ it." What kinds of things might you forget about? How might you feel when you forget something: happy or sad?
- Phonological rehearsal: “What word are we talking about?”
Duff, Dawna. Robust Vocabulary Instruction. ASHA Convention, Philadelphia, PA, November 2016. |
An abstract of the World Health Organization’s definition and programmatic focus on approaches to equity.
According to the WHO, equity is “the absence of avoidable or remediable differences among populations or groups defined socially, economically, demographically, or geographically” (WHO Glossary). WHO’s 2008 report, Closing the Gap in a Generation: Health Equity through Action on the Social Determinants of Health, emphasizes the way that social factors concerned with health manifest in health consequences. These factors go beyond health issues and include “conditions in which people are born, grow, live, work and age”. WHO’s perspective addresses discriminatory health practices and all types of discrimination. WHO believes that exclusion ultimately correlates with and reinforces exclusionary health outcomes.
WHO cites unequal distribution of power, income and access to goods and services as underlying determinants of inequity. Determinant factors of equity, they believe, involve the confluence of overlapping deprivations stemming from gender, age, circumstances under which people work, the physical nature of the place people live and disproportionate vulnerability to the natural environment and the systems put in place to deal with illness. With respect to the health sector, Closing the Health Equity Gap notes that the way health services are administered also impact equity. Impeded access to information and services, including barriers of cost linked to that, availability, accessibility, contact and effective coverage of health services all affect universality of care. Equity outcomes are also influenced by the way that the health sector coordinates with other social services, the way in which health inequities are measured and the involvement of marginalized groups in decision-making.
At its core, WHO's central concerns are life expectancy at birth and vulnerability to illness. The determinant factors manifest as impeded access to healthcare, education, material conditions, conditions of work and leisure, security and the chance of leading a flourishing life. This is reflected in households, communities, towns and cities.
Characteristics of the WHO Equity Approach
The WHO Constitution avers its commitment to equity. The Constitution calls for the enjoyment of the highest attainable standard of health as a fundamental right of every human being, without distinction of race, religion, political belief, or economic or social condition.
WHO’s concerns for equity and social justice are expressed in its emphasis on universal health coverage, which helps to reinforce the links between health, social protection and economic policy. In practical terms, WHO focuses on responding to the groundswell of demand from countries worldwide that seek to advance this agenda in their own nations.
WHO promotes a holistic view involving the whole government and focuses not just on the health sector. It advises that a supporting health ministry can champion the social determinants of health approach by modeling and promoting integrated campaigns. The 2008 World Health Report emphasizing primary health care reiterates the health sector's charge to pursue social justice values that recognize the right of universal primary health care and participation. The WHO's endorsement of The Rio Declaration on Social Determinants of Health and its main action areas reflect WHO's chief aims in reorienting health systems for greater equity. These areas are: "to improve daily living conditions, to tackle the inequitable distribution of power, money and resources, and to measure and understand the problem and assess the impact of action". Programs and policies aimed at improving access to quality housing and shelter, clean water and sanitation, and achieving basic needs for healthy living are deemed human rights that are essential aspects of the WHO approach.
Universal Health Coverage (UHC) is designed to ensure that all people have access to preventive, curative, rehabilitative and palliative health services. This access must be of sufficient quality and must not expose the user to financial hardship. WHO highlights the importance of early childhood development in its social determinants approach and strives to improve the conditions into which a child is born, creating "equity from the start" (Closing the Gap 4). It also addresses gender equity as a socially determined dynamic and calls for comprehensive interventions in social, political, legal and economic realms.
Theory & Justification for the Equity Approach
WHO holds that the daily conditions that people live in have a strong influence on health equity and, further, that inequities in these circumstances are unfair and avoidable (Social Determinants of Health 6). Efforts by WHO to reorient the public health sector widely and influence policymakers beyond healthcare, so that health and equity are universal considerations, are embodied in WHO’s Priority Public Health Conditions Network. This network aims to re-frame the way public health entities define health intervention, to include social determinants and an equity lens. Ultimately, WHO argues that social policies can change poor health outcomes, and its production of practical tools and manuals reinforces this belief.
- Reference 1: Closing the Gap in a Generation: Health Equity Through Action on the Social Determinants of Health. WHO, 2008.
- Reference 2: Closing the Health Equity Gap: Policy Options and Opportunities for Action. WHO, 2013.
- Reference 3: WHO Glossary: Equity.
- Reference 4: Tanahashi, T. "Health Service Coverage and Its Evaluation." Bulletin of the WHO, 1978.
- Reference 5: Rio Political Declaration on Social Determinants of Health, 2011.
- Reference 6: Knowledge Network for Early Childhood: International Perspectives on Early Childhood Development, 2005.
- Reference 7: Priority Public Health Conditions Network, 2007.
- Reference 8: Gender Mainstreaming for Health Managers: A Practical Approach. Facilitator's Guide. WHO, 2011.
- General considerations
- Islamic literatures
- Nature and scope
- The range of Islamic literatures
- External characteristics
- Early Islamic literature
- Achievements in the western Muslim world
- Middle Period: the rise of Persian and Turkish poetry
- The new Persian style
- Persian literature: 1300–1500
- The period from 1500 to 1800
- European and colonial influences: emergence of Western forms
- Nature and elements of Islamic music
- Dance and theatre
- Types and social functions of dance and theatre
- Visual arts
- Early period: the Umayyad and ʿAbbāsid dynasties
- Middle period
- Seljuq art
In order to answer whether there is an aesthetic, iconographic, or stylistic unity to the visually perceptible arts of Islamic peoples, it is first essential to realize that no ethnic or geographical entity was Muslim from the beginning. There is no Islamic art, therefore, in the way there is a Chinese art or a French art. Nor is it simply a period art, like Gothic art or Baroque art, for once a land or an ethnic entity became Muslim, it remained Muslim, a small number of exceptions such as Spain or Sicily notwithstanding. Political and social events transformed a number of lands with a variety of earlier histories into Muslim lands. But because early Islam as such did not possess or propagate an art of its own, each area could continue, in fact often did continue, whatever modes of creativity it had acquired. It may then not be appropriate at all to talk about the visual arts of Islamic peoples, and one should instead consider separately each of the areas that became Muslim: Spain, North Africa, Egypt, Syria, Mesopotamia, Iran, Anatolia, and India. Such, in fact, has been the direction taken by some scholarship. Even though tainted at times with parochial nationalism, that approach has been useful in that it has focused attention on a number of permanent features in different regions of Islamic lands that are older than and independent from the faith itself and from the political entity created by it. Iranian art, in particular, exhibits a number of features (certain themes such as the representation of birds or an epic tradition in painting) that owe little to its Islamic character since the 7th century. Ottoman art shares a Mediterranean tradition of architectural conception with Italy rather than with the rest of the Muslim world.
Such examples can easily be multiplied, but it is probably wrong to overstate their importance. For if one looks at the art of Islamic lands from a different perspective, a totally different picture emerges. The perspective is that of the lands that surround the Muslim world or of the times that preceded its formation. For even if there are ambiguous examples, most observers can recognize a flavour, a mood in Islamic visual arts that is distinguishable from what is known in East Asia (China, Korea, and Japan) or in the Christian West. This mood or flavour has been called decorative, for it seems at first glance to emphasize an immense complexity of surface effects without apparent meanings attached to the visible motifs. But it has other characteristics as well: it is often colourful, both in architecture and in objects; it avoids representations of living things; it gives much prominence to the work of artisans and counts among its masterpieces not merely works of architecture or of painting but also the creations of weavers, potters, and metalworkers. The problem is whether these distinctive features of Islamic art, when compared with other artistic traditions, are the result of the nature of Islam or of some other factor or series of factors.
These preliminary remarks suggest at the very outset the main epistemological peculiarity of Islamic art: it consists of a large number of quite disparate traditions that, when seen all together, appear distinguishable from what surrounded them and from what preceded them through a series of stylistic and thematic characteristics. The key question is how this was possible, but no answer can be given before the tradition itself has been properly defined.
Such a definition can be provided only in history, through an examination of the formation and development of the arts through the centuries, for a static sudden phenomenon is not being dealt with, but rather a slow building up of a visual language of forms with many dialects and with many changes. Whether these complexities of growth and development subsumed a common structure is the challenging question facing the historian of this artistic tradition. What makes the question particularly difficult to answer is that the study of Islamic art is still so new. Many monuments are unpublished or at least insufficiently known, and only a handful of scientific excavations have investigated the physical setting of the culture and of its art. Much, therefore, remains tentative in the knowledge and appreciation of works of Islamic art, and what follows is primarily an outline of what is known, with a number of suggestions for further work into insufficiently investigated areas.
Each artistic tradition has tended to develop its own favourite mediums and techniques. Some, of course, such as architecture, are automatic needs of every culture; and, for reasons to be developed later, it is in the medium of architecture that some of the most characteristically Islamic works of art are found. Other techniques, on the other hand, acquire varying forms and emphases. Sculpture in the round hardly existed as a major art form, and, although such was also the case of all Mediterranean arts at the time of Islam’s growth, one does not encounter the astounding rebirth of sculpture that occurred in the West. Wall painting existed but has generally been poorly preserved; the great Islamic art of painting was limited to the illustration of books. The unique feature of Islamic techniques is the astounding development taken by the so-called decorative arts—e.g., woodwork, glass, ceramics, metalwork, and textiles. New techniques were invented and spread throughout the Muslim world—at times even beyond its frontiers. In dealing with Islam, therefore, it is quite incorrect to think of those techniques as the “minor” arts, for the amount and intensity of creative energies spent on the decorative arts transformed them into major artistic forms, and their significance in defining a profile of the aesthetic and visual language of Islamic peoples is far greater than in the instances of many other cultures. Furthermore, because, for a variety of reasons to be discussed later, the Muslim world did not develop until quite late the notion of “noble” arts, the decorative arts have reflected far better the needs and ambitions of the culture as a whole. The kind of conclusion that can be reached about Islamic civilization through its visual arts thus extends far deeper than is usual in the study of an artistic tradition, and it requires a combination of archaeological, art-historical, and textual information.
An example may suffice to demonstrate the point. Among all the techniques of Islamic visual arts, the most important one was the art of textiles. Textiles, of course, were used for daily wear at all social levels and for all occasions. But clothes were also the main indicators of rank, and they were given as rewards or as souvenirs by princes, high and low. They were a major status symbol, and their manufacture and distribution were carefully controlled through a complicated institution known as the ṭirāz. Major events were at times celebrated by being depicted on silks. Many texts have been identified that describe the hundreds of different kinds of textiles that existed. Because textiles could easily be moved, they became a vehicle for the transmission of artistic themes within the Muslim world and beyond its frontiers. In the case of this one technique, therefore, one is dealing not simply with a medium of the decorative arts but with a key medium in the definition of a given time’s taste, of its practical functions, and of the ways in which its ideas were distributed. The more unfortunate point is that the thousands of fragments that have remained have not yet been studied in a sufficiently systematic way, and in only a handful of instances has it been possible to relate individual fragments to known texts. When more work has been completed, however, a study of this one medium should contribute significantly to the commercial, social, and aesthetic history of Islam, as well as explain much of the impact that Islamic art had beyond the frontiers of the Muslim world.
The following survey of Islamic visual arts, therefore, will be primarily a historical one, for it is in development through time that the main achievements of Islamic art can best be understood. At the same time, other features peculiar to this tradition will be kept in mind: the varying importance of different lands, each of which had identifiable artistic features of its own, and the uniqueness of certain creative techniques.
Earlier artistic traditions
Islamic visual arts were created by the confluence of two entirely separate kinds of phenomena: a number of earlier artistic traditions and a new faith. The arts inherited by Islam were of extraordinary technical virtuosity and stylistic or iconographic variety. All the developments of arcuated and vaulted architecture that had taken place in Iran and in the Roman Empire were available in their countless local variants. Stone, baked brick, mud brick, and wood existed as mediums of construction, and all the complicated engineering systems developed particularly in the Roman Empire were still utilized from Spain to the Euphrates. All the major techniques of decoration were still used, except for monumental sculpture. In secular and in religious art, a more or less formally accepted equivalence between representation and represented subject had been established. Technically, therefore, as well as ideologically, the Muslim world took over an extremely sophisticated system of visual forms; and, because the Muslim conquest was accompanied by a minimum of destruction, all the monuments, and especially the attitudes attached to them, were passed on to the new culture.
The second point about the pre-Islamic traditions is the almost total absence of anything from Arabia itself. While archaeological work in the peninsula may modify this conclusion in part, it does seem that Islamic art formed itself entirely in some sort of relationship to non-Arab traditions. Even the rather sophisticated art created in earlier times by the Palmyrenes or by the Nabataeans had almost no impact on Islamic art, and the primitively conceived ḥaram in Mecca, the only pre-Islamic sanctuary maintained by the new faith, remained as a unique monument that was almost never copied or imitated despite its immense religious significance. The pre-Islamic sources of Islamic art are thus entirely extraneous to the milieu in which the new faith was created. In this respect the visual arts differ considerably from most other aspects of Islamic culture.
This is not to say that there was no impact of the new faith on the arts, but to a large extent it was an incidental impact, the result of the existence of a new social and political entity rather than of a doctrine. Earliest Islam, as seen in the Qurʾān or in the more verifiable accounts of the Prophet's life, simply does not deal with the arts, either on the practical level of requiring or suggesting forms as expressions of the culture or on the ideological level of defining a Muslim attitude toward images. In every instance, the concrete Qurʾānic passages later applied to the arts had their visual significance extrapolated after the fact.
There is no prohibition against representations of living things, and not a single Qurʾānic passage refers clearly to the mosque, eventually to become the most characteristically Muslim religious building. In the simple, practical, and puritanical milieu of early Islam, aesthetic or visual questions simply did not arise.
Learn VBA Part 24 – Do While Loop in VBA Hindi
Use the Do Until loop in Excel VBA: code placed between Do Until and Loop is repeated until the condition after Do Until becomes True.

The Do Until loop is very similar to the Do While loop. A Do While loop repeats while its condition is True, whereas a Do Until loop repeats until its condition becomes True.

On some occasions you may want the loop body to run at least once, regardless of the initial condition. There are two ways a Do loop can be structured in Excel VBA macro code: you can tell Excel to check the condition before entering the loop, or to enter the loop first and then check the condition at the end. To stop an endless loop, press ESC or CTRL+BREAK.

To force an exit from a Do Until loop, use the line Exit Do, e.g. If lNum = 7 Then Exit Do.

Looping is one of the most powerful programming techniques. A loop in Excel VBA enables you to work through a range of cells with just a few lines of code.
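The two entry styles can be sketched in a short macro. This is an illustrative sketch, not code from the lesson itself; the sub and variable names (FillCells, CountUp, lNum) are made up for the example:

```vba
' Do While: the condition is checked BEFORE the body runs,
' so the loop may execute zero times.
Sub FillCells()
    Dim i As Integer
    i = 1
    Do While i <= 5
        Cells(i, 1).Value = i * 10   ' writes 10, 20, 30, 40, 50 into A1:A5
        i = i + 1
    Loop
End Sub

' Do ... Loop Until: the condition is checked AFTER the body,
' so the loop always runs at least once. Exit Do forces an early exit.
Sub CountUp()
    Dim lNum As Integer
    lNum = 1
    Do
        lNum = lNum + 1
        If lNum = 7 Then Exit Do     ' leaves the loop before the Until test
    Loop Until lNum > 10
    MsgBox lNum                      ' shows 7, because Exit Do fired first
End Sub
```

Run either sub from the VBA editor with F5, or step through with F8 to watch the condition being tested at the top (Do While) versus the bottom (Loop Until).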
If your youngster has the misfortune of a cavity, it is probably on the chewing surface of a back tooth. The AAPD reports that 80–90% of cavities in permanent teeth occur on the chewing surfaces, as do 44% of cavities in baby teeth. But this kind of cavity – occasionally referred to as pit and fissure decay – can be prevented in children, adolescents and adults with dental sealant placement.
How do Sealants Prevent Decay?
The grooves and depressions on kid’s back teeth assist them in chewing and grinding food. But, the deep crannies also can trap debris and food where it is hard to keep clean, which makes them prime areas for decay to begin. With dental sealants, the dentist applies a plastic, thin material to the molar’s chewing surfaces, permitting the enamel to become smooth, as well as protected from this bacteria source.
Sealants may last as many as ten years, according to the NIDCR. However, if the dental hygienist or dentist sees any worn areas or chips in your youngster's sealants, he or she can repair them simply by adding more of the transparent material.
Which Teeth Ought to Be Sealed and When?
All permanent or baby teeth that have deep fissures or pits are at risk for decay, and are thereby candidates for dental sealants, according to the AAPD. Most dental professionals suggest sealing baby molars because those teeth play a vital part in holding space for your permanent teeth. Keeping those teeth cavity-free may prevent your youngster from losing them early on.
The National Institute of Dental and Craniofacial Research recommends sealing molars as soon as they come in – before they have an opportunity to decay. The first permanent molars erupt at about six years of age, according to the ADA eruption chart; next in line for dental sealants are the second molars, which typically erupt between 11 and 13 years of age. The dentist may also suggest sealing your youngster's premolars if they have any deep grooves. The premolars come in between ages 10 and 11 and replace the baby molars.
Sealants and Wisdom Teeth
The final teeth to come into your youngster's mouth are the third molars – called wisdom teeth – usually between ages 17 and 21. Wisdom teeth often aren't shaped like the other molars and, in most cases, don't have enough space to erupt correctly, as observed by the AAOMS.
In fact, 9 out of 10 people have at least one impacted wisdom tooth. Many wisdom teeth are therefore extracted while the individual is still a young adult, before they cause problems and before the roots are fully developed. This is the main reason sealants aren't suggested for wisdom teeth.
However, occasionally someone’s mouth is big enough to accommodate all 4 wisdom teeth in their correct position. In that case, if the dental professional feels that these teeth are going to be a functional part of your youngster’s dentition, dental sealants might be an excellent choice to prevent them from decaying.
For more information on sealants contact the recognized dental office of Markham Dental today!
Tourette (too-RET) syndrome is a disorder that involves repetitive movements or unwanted sounds (tics) that can't be easily controlled. For instance, you might repeatedly blink your eyes, shrug your shoulders or blurt out unusual sounds or offensive words.
Tics typically show up between ages 2 and 15, with the average being around 6 years of age. Males are about three to four times more likely than females to develop Tourette syndrome.
Although there's no cure for Tourette syndrome, treatments are available. Many people with Tourette syndrome don't need treatment when symptoms aren't troublesome. Tics often lessen or become controlled after the teen years.
Tics — sudden, brief, intermittent movements or sounds — are the hallmark sign of Tourette syndrome. They can range from mild to severe. Severe symptoms might significantly interfere with communication, daily functioning and quality of life.
Tics are classified as:
- Simple tics. These sudden, brief and repetitive tics involve a limited number of muscle groups.
- Complex tics. These distinct, coordinated patterns of movements involve several muscle groups.
Tics can also involve movement (motor tics) or sounds (vocal tics). Motor tics usually begin before vocal tics do. But the spectrum of tics that people experience is diverse.
Common motor tics seen in Tourette syndrome include:
- Touching or smelling objects
- Repeating observed movements
- Stepping in a certain pattern
- Bending or twisting
Common vocal tics seen in Tourette syndrome include:
- Repeating one's own words or phrases
- Repeating others' words or phrases
- Using vulgar, obscene or swear words
In addition, tics can:
- Vary in type, frequency and severity
- Worsen if you're ill, stressed, anxious, tired or excited
- Occur during sleep
- Change over time
- Worsen in the early teenage years and improve during the transition into adulthood
Before the onset of motor or vocal tics, you'll likely experience an uncomfortable bodily sensation (premonitory urge) such as an itch, a tingle or tension. Expression of the tic brings relief. With great effort, some people with Tourette syndrome can temporarily stop or hold back a tic.
When to see a doctor
See your child's pediatrician if you notice your child displaying involuntary movements or sounds.
Not all tics indicate Tourette syndrome. Many children develop tics that go away on their own after a few weeks or months. But whenever a child shows unusual behavior, it's important to identify the cause and rule out serious health problems.
The exact cause of Tourette syndrome isn't known. It's a complex disorder likely caused by a combination of inherited (genetic) and environmental factors. Chemicals in the brain that transmit nerve impulses (neurotransmitters), including dopamine and serotonin, might play a role.
Risk factors for Tourette syndrome include:
- Family history. Having a family history of Tourette syndrome or other tic disorders might increase the risk of developing Tourette syndrome.
- Sex. Males are about three to four times more likely than females to develop Tourette syndrome.
People with Tourette syndrome often lead healthy, active lives. However, Tourette syndrome frequently involves behavioral and social challenges that can harm your self-image.
Conditions often associated with Tourette syndrome include:
- Attention-deficit/hyperactivity disorder (ADHD)
- Obsessive-compulsive disorder (OCD)
- Autism spectrum disorder
- Learning disabilities
- Sleep disorders
- Anxiety disorders
- Pain related to tics, especially headaches
- Anger-management problems
There's no specific test that can diagnose Tourette syndrome. The diagnosis is based on the history of your signs and symptoms.
The criteria used to diagnose Tourette syndrome include:
- Both motor tics and vocal tics are present, although not necessarily at the same time
- Tics occur several times a day, nearly every day or intermittently, for more than a year
- Tics begin before age 18
- Tics aren't caused by medications, other substances or another medical condition
- Tics must change over time in location, frequency, type, complexity or severity
A diagnosis of Tourette syndrome might be overlooked because the signs can mimic other conditions. Eye blinking might be initially associated with vision problems, or sniffling attributed to allergies.
Both motor and vocal tics can be caused by conditions other than Tourette syndrome. To rule out other causes of tics, your doctor might recommend:
- Blood tests
- Imaging studies such as an MRI
There's no cure for Tourette syndrome. Treatment is aimed at controlling tics that interfere with everyday activities and functioning. When tics aren't severe, treatment might not be necessary.
Medications to help control tics or reduce symptoms of related conditions include:
- Medications that block or lessen dopamine. Fluphenazine, haloperidol (Haldol), risperidone (Risperdal) and pimozide (Orap) can help control tics. Possible side effects include weight gain and involuntary repetitive movements. Tetrabenazine (Xenazine) might be recommended, although it may cause severe depression.
- Botulinum (Botox) injections. An injection into the affected muscle might help relieve a simple or vocal tic.
- ADHD medications. Stimulants such as methylphenidate (Metadate CD, Ritalin LA, others) and medications containing dextroamphetamine (Adderall XR, Dexedrine, others) can help increase attention and concentration. However, for some people with Tourette syndrome, medications for ADHD can exacerbate tics.
- Central adrenergic inhibitors. Medications such as clonidine (Catapres, Kapvay) and guanfacine (Intuniv) — typically prescribed for high blood pressure — might help control behavioral symptoms such as impulse control problems and rage attacks. Side effects may include sleepiness.
- Antidepressants. Fluoxetine (Prozac, Sarafem, others) might help control symptoms of sadness, anxiety and OCD.
- Antiseizure medications. Recent studies suggest that some people with Tourette syndrome respond to topiramate (Topamax), which is used to treat epilepsy.
- Behavior therapy. Cognitive Behavioral Interventions for Tics, including habit-reversal training, can help you monitor tics, identify premonitory urges and learn to voluntarily move in a way that's incompatible with the tic.
- Psychotherapy. In addition to helping you cope with Tourette syndrome, psychotherapy can help with accompanying problems, such as ADHD, obsessions, depression or anxiety.
- Deep brain stimulation (DBS). For severe tics that don't respond to other treatment, DBS might help. DBS involves implanting a battery-operated medical device in the brain to deliver electrical stimulation to targeted areas that control movement. However, this treatment is still in the early research stages and needs more research to determine if it's a safe and effective treatment for Tourette syndrome.
Coping and support
Your self-esteem may suffer as a result of Tourette syndrome. You may be embarrassed about your tics and hesitate to engage in social activities, such as dating or going out in public. As a result, you're at increased risk of depression and substance abuse.
To cope with Tourette syndrome:
- Remember that tics usually reach their peak in the early teens and improve as you get older.
- Reach out to others dealing with Tourette syndrome for information, coping tips and support.
Children with Tourette syndrome
School may pose special challenges for children with Tourette syndrome.
To help your child:
- Be your child's advocate. Help educate teachers, school bus drivers and others with whom your child interacts regularly. An educational setting that meets your child's needs — such as tutoring, untimed testing to reduce stress, and smaller classes — can help.
- Nurture your child's self-esteem. Support your child's personal interests and friendships — both can help build self-esteem.
- Find a support group. To help you cope, seek out a local Tourette syndrome support group. If there aren't any, consider starting one.
Preparing for an appointment
If you or your child has been diagnosed with Tourette syndrome, you may be referred to specialists, such as:
- Doctors who specialize in brain disorders (neurologists)
- Psychiatrists or psychologists
It's a good idea to be well-prepared for your appointment. Here's some information to help you get ready, and what to expect from your doctor.
What you can do
- Be aware of any pre-appointment restrictions. At the time you make the appointment, be sure to ask if there's anything you need to do in advance, such as restrict your diet.
- Write down any symptoms you or your child is experiencing, including any that may seem unrelated to the reason for which you scheduled the appointment.
- Write down key personal information, including any major stresses or recent life changes.
- Make a list of all medications, vitamins or supplements that you or your child is taking.
- Make a video recording, if possible, of a typical tic to show the doctor.
- Write down questions to ask your doctor.
Your time with your doctor is limited, so preparing a list of questions can help ensure the best use of time. List your questions from most important to least important in case time runs out. For Tourette syndrome, some basic questions to ask your doctor include:
- What treatment, if any, is needed?
- If medication is recommended, what are the options?
- What types of behavior therapy might help?
Don't hesitate to ask other questions during your appointment anytime you don't understand something or need more information.
What to expect from your doctor
Your doctor is likely to ask you a number of questions. Being ready to answer them may allow time later to cover other points you want to address. Your doctor may ask:
- When did the symptoms begin?
- Have the symptoms been continuous or occasional?
- How severe are the symptoms?
- What, if anything, seems to improve the symptoms?
- What, if anything, appears to worsen the symptoms?
Standard Form - Free maths worksheets and other resources
Some resources all about standard form and how to calculate with numbers written in standard form.
Converting into standard form - DoingMaths YouTube Channel
Multiplying and dividing in standard form worksheet
This worksheet is all about multiplying and dividing numbers that are already in standard form, giving answers that are also in standard form.
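As a quick illustration of the method on that worksheet (multiply or divide the number parts, add or subtract the powers of ten, then adjust so the number part lies between 1 and 10):

```latex
(3 \times 10^4) \times (2 \times 10^5) = 6 \times 10^9
(5 \times 10^3) \times (4 \times 10^2) = 20 \times 10^5 = 2 \times 10^6
(8 \times 10^7) \div (4 \times 10^2) = 2 \times 10^5
```

The middle example shows the adjustment step: 20 is not between 1 and 10, so one power of ten is moved across to give 2 × 10⁶.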
Adding and subtracting in standard form worksheet
A maths worksheet containing questions on adding and subtracting numbers given in standard form. This involves converting standard form numbers so that their powers of ten match up.
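For example, one number is rewritten so that the powers of ten match before the number parts are added or subtracted:

```latex
3 \times 10^4 + 2 \times 10^3 = 30 \times 10^3 + 2 \times 10^3 = 32 \times 10^3 = 3.2 \times 10^4
7 \times 10^5 - 4 \times 10^4 = 70 \times 10^4 - 4 \times 10^4 = 66 \times 10^4 = 6.6 \times 10^5
```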
Rashed Alshamsi
Jarad Fennel
ENC1102
12/3/2017

Self-driving Cars

In the recent past, in-depth research has been ongoing into self-driving cars because of the level of fatalities resulting from road accidents. In the US, for instance, about 33,000 road users, including commuters, pedestrians, and drivers, are killed or injured in road accidents. Moreover, based on statistical data from the Centers for Disease Control and Prevention (CDC), the skyrocketing number of distracted drivers on the roads in the wake of the global economic meltdown has rendered drivers unfocused and careless, causing accidents. Laws and traffic regulations have been published to punish erring drivers, but many do not respect or adhere to them, resulting in continued road accidents. Advances in technology by tech companies and automobile manufacturers could avert the menace of accidents through the manufacture of smart vehicles containing warning systems that caution drivers when an accident is imminent, making roads safer for everyone. Nevertheless, amid the use of these technological methods to curb fatalities from road carnage, much has been left to be desired. The way forward, according to various researchers' standpoints backed by the study, is the adoption of self-driving cars that will prevent loss of life (Holstein 4).

Self-driving cars primarily rely on electronic sensors, GPS and other pertinent technology that enable the automobile to find its way within a distance of 15 miles. The cars also have inbuilt computer monitors and systems, which are operated by assigned tech-savvy attendants. In other instances, these cars are driven by a human with the help of computer technology and robots.
Hence, the level of accuracy brought about by the use of computers cannot be matched by humans. The above assertion shows that various factors can cause road accidents and that avoiding them requires drivers who are alert and focused on the road. By contrast, self-driving cars eliminate the human errors that result in over eighty percent of the road accidents that happen in the United States. Furthermore, Bonnefon, Shariff, and Rahwan explain that the computers designed to control autonomous vehicles are effective at eliminating the dangers that arise from the operation of vehicles, including human error and fatigue, thereby enhancing safety (1574). The researchers further note that a sophisticated algorithm is used to create a system that determines the safest distance between the self-driving car and other, human-driven vehicles. Their processors are also designed to compute all probable situations and consequences within seconds before the self-driving car executes an action or a chain of commands.

According to McBride, self-driving cars employ systems and designs that operate platooning, which permits them to communicate with one another about traffic conditions and about factors that can lead to an accident (184). Consequently, self-driving cars are efficient to the extent that they may not need anyone on board. These cars can estimate mileage since they act according to computer commands. Therefore, all controls are guaranteed, including overtaking and emergency braking. This will help avert congestion: because the cars are computer controlled, observance of traffic rules is safeguarded in its entirety. In essence, self-driving cars can determine in advance the safest and least congested route to take in case of a traffic jam.
Thus, commuters get value for money since they spend less time commuting (Surden and Williams 11).

Self-driving cars also emit less toxic carbon monoxide and carbon dioxide, which are not environmentally friendly, because most of them use green energy such as solar and wind power. Self-driving cars reduce overreliance on petroleum products, which are being depleted. Moreover, the technology used by self-driving cars optimizes fuel consumption and has a low impact on the environment. These reasons are considered contributing factors to the increased clamor to adopt self-driving vehicles (Surden and Williams 43). Self-driving cars are also cost-effective, a benefit that should increase people's preference for them over human-driven cars. As Holstein explains, the cost-effectiveness argument can be supported by examining the time drivers save when computers drive their vehicles (3). While some might argue that this line of argument is flawed for lack of supporting data, this is untrue given the information on the assigned value of each human life in the United States. Rather than spending several hours on the road driving, individuals can use that time to engage in other productive ventures that aid in building the nation, which in the end improves the quality of human life.

Self-driving cars, however, are of limited use in third-world countries due to the poor condition of the roads and of the supporting technology, which limits the cars' sensors' ability to perform efficiently. The technology used to produce self-driving cars makes them efficient and safe, as well as a proactive remedy to the series of problems that result in roadway carnage and injuries.
Self-driving cars use technology controlled by computers, and therefore they are susceptible to cyber-attacks such as hacking or malicious programs aimed at corrupting their systems. A cyber-attack on these cars can cause them to malfunction, and the results can be catastrophic. However, if these concerns are addressed, current evidence shows that self-driving cars are the best alternative for saving lives, preventing the loss of economic gains to society, and reducing the cost of accident management.

Works Cited

Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. "The Social Dilemma of Autonomous Vehicles." Science, vol. 352, 2016, pp. 1573-1576. Accessed 31 Oct 2017.

Holstein, Tobias. "The Misconception of Ethical Dilemmas in Self-Driving Cars." Multidisciplinary Digital Publishing Institute Proceedings, vol. 1, no. 3, 2017. Accessed 31 Oct 2017.

McBride, Neil. "The Ethics of Driverless Cars." ACM SIGCAS Computers and Society, vol. 45, no. 3, 2016, pp. 179-184. Accessed 31 Oct 2017.

Surden, Harry, and Mary-Anne Williams. "Technological Opacity, Predictability, and Self-Driving Cars." SSRN Electronic Journal, vol. 38, no. 1, 2016, pp. 1-62.
East Antarctica has long been hailed as a bastion of continuity in the rapidly unraveling Antarctic. West Antarctica is the landscape where ice goes to die, while the higher-elevation, colder eastern portion of the continent has been viewed as a stable landscape largely insulated from rising temperatures and warm ocean currents.
But new research shows that the few soft spots in the otherwise impervious ice sheet are unmistakably losing ice. While it’s not a full-blown meltdown like the one that could be playing out in West Antarctica, it’s a reminder that no corner of the planet—not even the coldest one—is immune to the impacts of climate change.
The study released last week in Geophysical Research Letters chronicles the past 15 years at two glaciers in East Antarctica. The Moscow University and Totten glaciers descend from the East Antarctic ice sheet down to the sea. They act as dams to ice on land; if that ice all melted into the ocean, it would raise sea levels more than 16 feet (five meters). The rest of East Antarctica holds an even bigger stash of ice, one we really need to learn more about.
“The East Antarctic ice sheet contains much more ice and sea level potential than any other ice sheet by far, making it of crucial global significance,” Yara Mohajerani, a PhD candidate at the University of California, Irvine who led the study, told Earther.
Scientists have suspected these two glaciers have been losing mass for a while, driven by warm ocean water circulating under them, similar to the processes driving huge losses in West Antarctica. But it’s been hard to get a handle on just how much mass, because the glaciers drain a huge basin, and the mass losses are still small relative to the size of that basin. To overcome this, the researchers developed a novel technique using the Gravity Recovery and Climate Experiment (GRACE), a NASA satellite mission that measures small changes in gravity.
The thing about Earth is it’s not a nice, spherical marble. It’s a lumpy-ass gourd of a planet and its shape is ever changing. All those shifting lumps mean that gravity also changes in tiny ways. Not in ways that you and I can feel, mind you, but in ways that satellites can precisely measure from space.
That’s the whole point of GRACE, which lasted from 2002 to 2017, and the follow-up mission that launched earlier this year. The new study used the 15 years of GRACE data and a nifty set of calculations to get a finely calibrated set of results showing that the glaciers are unequivocally losing mass.
Over the 15 years of GRACE measurements, the glaciers shed 18.5 gigatons of ice annually. That’s enough ice to fill 7.4 million Olympic-sized pools each year.
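That pool figure is just arithmetic; here’s a quick sketch of the conversion (assuming a standard 50 m × 25 m × 2 m Olympic pool and treating a gigaton as 10⁹ tonnes of ice):

```python
# Convert an annual ice-mass loss into Olympic-sized swimming pools.
# Assumptions: 1 gigaton = 1e9 tonnes = 1e12 kg; a standard Olympic
# pool is 50 m x 25 m x 2 m = 2,500 cubic metres of water.

ICE_LOSS_GT_PER_YEAR = 18.5          # from the GRACE analysis
KG_PER_GT = 1e12                     # kilograms per gigaton
POOL_VOLUME_M3 = 50 * 25 * 2         # 2,500 cubic metres
KG_PER_POOL = POOL_VOLUME_M3 * 1000  # water density ~1000 kg/m^3

pools = ICE_LOSS_GT_PER_YEAR * KG_PER_GT / KG_PER_POOL
print(f"{pools:.1e} pools per year")  # → 7.4e+06, i.e. 7.4 million
```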
“This multisensor study provides multiple lines of evidence that the changes taking place in that part of East Antarctica are real and significant,” Eric Rignot, a NASA Jet Propulsion Laboratory climate scientist involved with the research, told Earther. “This study is part of a growing body of evidence that East Antarctica is an important part of the evolution of Antarctica in a warmer climate.”
Overall, Antarctica has shed 3 trillion (yes, trillion) tons of ice since 1992. The losses in this corner of Antarctica are dwarfed by losses elsewhere. There’s also evidence that East Antarctica as a whole may be gaining mass through snowfall, given its high, frigid nature.
But the few glaciers that do descend from it are still worthy of scrutiny. Previous research has shown that East Antarctic glaciers have been wiped out in the past during times when atmospheric carbon dioxide levels were roughly on par with the ones we’ve built up today through human activities.
Chad Greene, a University of Texas researcher who has studied the water undercutting Totten Glacier’s floating ice shelf, said the new research shows how the melt there is affecting the rest of the glacier.
“They’re showing the long-term response of the ice sheet to the types of changes I was observing in Totten Ice Shelf, and we see once again that the Aurora Subglacial Basin is losing ice, driven by changes in ocean temperature and circulation at the coast,” he told Earther. “It’s basically the same picture we’ve been seeing for a decade now, but Mohajerani et al. bring it much more clearly into focus: Totten Glacier and the Aurora Subglacial Basin it drains are losing ice, no matter how you look at it.”
小王子 (Xiǎo Wáng Zǐ, “The Little Prince”) — in the Mandarin language. Mandarin / 官话 / 官話 is a group of related varieties of Chinese spoken across most of northern and southwestern China. The group includes the Beijing dialect, the basis of Standard Mandarin or Standard Chinese. Because Mandarin originated in North China and most Mandarin dialects are found in the north, the group is sometimes referred to as the Northern dialects (北方话). Many local Mandarin varieties are not mutually intelligible. Nevertheless, Mandarin is often placed first in lists of languages by number of native speakers (with nearly a billion).
Twenty-eight years ago today, one-pound notes were discontinued after more than 150 years of circulation. The former chancellor Lord Lawson delivered the news, claiming that the change from note to coin would add longevity and durability to sterling. A look back at the history of money shows how banknotes originated and whether their time as a means of currency is coming to an end.
The Bank of England first issued a banknote in 1694 to help King William III raise money to fight a war against the French. British citizens were given paper receipts from the bank as proof and representation of the gold they had stored. These receipts have since developed into our modern-day banknotes. Today there are just under £55 billion worth of notes in circulation in the UK.
The Chinese invented the first type of paper note back in 118 BC. Unable to find enough copper to make the coins needed, they used pieces of deerskin, each standing in for 40,000 coins.
And yet, despite such a deep-rooted history, the banknote has struggled in recent years to justify itself as a primary means of currency. With risks of counterfeiting, not to mention a short lifespan, is it time for a more modern equivalent to be introduced?
Such a decision has already been made in certain countries, with the polymer note the popular choice.
Australia was the first country to issue the plastic currency in 1988, and has gone 100% polymer. At the moment, 23 countries use polymer banknotes, with Brunei, New Zealand, Papua New Guinea, Romania and Vietnam among those using purely polymer notes as their currency.
More recently, in November 2011, the Bank of Canada began circulating $100 polymer banknotes in an effort to combat counterfeiting and reduce costs. The country claimed its $100 bill was the world’s most advanced banknote, as it includes a hologram within the note’s transparent window. It also shows a circle of numbers that match the value of the denomination when held up to light.
A $50 polymer denomination followed in March this year, with a $20 bill expected soon. By the end of 2013, new $10 and $5 bills will have been introduced, and all Canadian currency will be printed on polymer.
Why plastic notes?
One of the major reasons for converting to polymer currency is the extended lifetime. Polymer notes last at least two-and-a-half times longer than paper notes, reducing processing and replacement costs, as well as the environmental impact. The introduction of plastic currency saves production costs because fewer bills are produced, as well as the fact that polymer notes are recyclable at the end of their lifetime, compared to current cotton currency, which often ends up in landfill.
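As a rough illustration of why the longer lifetime matters, consider the annualized cost per note. The unit costs and the paper lifetime below are hypothetical round numbers chosen purely for illustration, not Bank of England figures; only the 2.5× lifetime ratio comes from the article:

```python
# Back-of-the-envelope annualized cost per banknote.
# HYPOTHETICAL inputs: unit production costs and the paper lifetime
# are illustrative round numbers; only the 2.5x polymer lifetime
# ratio is taken from the article.

paper_unit_cost = 0.03    # pounds per note (assumed)
polymer_unit_cost = 0.06  # pounds per note (assumed ~2x paper)
paper_lifetime_years = 2.0                            # assumed
polymer_lifetime_years = 2.5 * paper_lifetime_years   # article's ratio

paper_annual = paper_unit_cost / paper_lifetime_years
polymer_annual = polymer_unit_cost / polymer_lifetime_years

print(f"paper:   £{paper_annual:.3f} per note per year")    # £0.015
print(f"polymer: £{polymer_annual:.3f} per note per year")  # £0.012
```

Under these assumptions the polymer note is cheaper per year of service even at twice the unit cost; in general, polymer breaks even whenever its unit-cost premium stays below its lifetime advantage.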
Another advantage is that the notes stay cleaner for longer. Indeed, in the UK, many notes are taken out of circulation and destroyed each year because of deteriorating quality, frequently from contamination or damage. In 2009, 830 million notes, worth £11.4 billion, were taken out of circulation and destroyed due to their worsening quality.
Cleanliness is a particular advantage in countries with warmer climates, where humidity is much less likely to affect polymer currency than paper.
“The tropical climate is a challenging environment for banknotes, especially because of high humidity and high temperatures”, polymer researcher Stane Straus told the BBC. “This causes paper notes to absorb moisture, thus becoming dirty. Polymer notes, on the other hand, do not absorb moisture.”
The technology has not been without its issues, however. Haiti and Costa Rica were the first to trial polymer banknotes during the 1980s, yet the quality of the ink fell below what was thought necessary. A plastic note was also introduced in the Isle of Man in 1983, using British technology, but was withdrawn in 1988, again with ink problems suspected as the cause.
There are lingering questions over the necessity of polymer notes, though, with some concerned with their lack of practicality, as well as the fact that the security risks with traditional cotton banknotes are decreasing.
“Paper is much more secure than it used to be and the new £50 note, for example, has features that are extremely hard to counterfeit”, said Tom Hockenhull, curator of the Modern Money exhibition at the British Museum.
Hockenhull refers to new cotton notes, which contain transparent polymer windows to minimise the risk of counterfeiting. It is thought that polymer notes are not as secure as this modern cotton note, as they are considered easier to replicate.
A further problem, particularly for developing countries, is the initial cost of producing polymer notes. They are more expensive to make, and their longevity can only be maximised with the right recycling facilities.
Nevertheless, over a longer period, the benefits of polymer notes outweigh those of conventional paper.
There have recently been developments in the UK regarding currency change. Bank of England officials have expressed concern that the £5 note, which as a low denomination changes hands quickly, is not durable enough. A source close to the Bank of England told The Independent, “This is at the evaluation stage. A decision won’t be made for the next year or two and production a little while after that, but a plastic £5 note is a possibility.”
It is clear that traditional banknotes – regardless of their 300-year history and heritage – have their drawbacks, whether in security, reusability or longevity, and many countries have already looked at alternative forms of currency. Whilst polymer currency might present a large initial financial outlay, its practical advantages cannot be ignored.
When early rocket pioneers started strapping volunteers to missiles, how did they know those volunteers would need spacesuits? Of course, in both the Soviet Union and the US, animal subjects were tested first, but in fact we already had aircraft flying almost to the edge of space before that – and aircraft flew higher than humans can survive even during World War II. How did we prepare for this environment? How did we know we needed to?
Well, in fact, people had been flying in balloons since the 18th century, and some later balloon flights led to deaths. But long before this, in 1644, Torricelli described the first mercury barometer, writing, “We live submerged at the bottom of an ocean of the element air.”
Torricelli and his mentor Galileo knew full well that we live on a spherical planet. Yet Galileo gave an erroneous explanation for the difficulty of using a suction pump to draw water up a deep well, even though Aristotle had known that air has weight.
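Torricelli’s insight can be put into numbers: the atmosphere’s weight supports a column of liquid of height h = P/(ρg). A short sketch using standard sea-level pressure:

```python
# Height of a liquid column supported by atmospheric pressure:
#   h = P / (rho * g)
P = 101_325.0   # standard sea-level atmospheric pressure, Pa
g = 9.81        # gravitational acceleration, m/s^2

for liquid, rho in [("water", 1000.0), ("mercury", 13_534.0)]:
    h = P / (rho * g)
    print(f"{liquid}: {h:.2f} m")
# water:   10.33 m -- why a suction pump can't lift water much past ~10 m
# mercury: 0.76 m  -- Torricelli's ~760 mm mercury column
```

The ~10-metre limit for water is exactly the well-pump puzzle that stumped Galileo; mercury, being about 13.5 times denser, needs only the 76-centimetre column Torricelli observed.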
Draft a Family Constitution
Delve into U.S. history and the concept of the Constitution by inviting your child to create her own Family Constitution! What are the rules of your home? She may surprise you with her knowledge of your family's rules and understanding of why each rule is in place. She'll draft a written Family Constitution and even give it an aged look using soaked tea bags.
What You Need:
- Drawing paper
- Several black tea bags
- Lined or scratch paper
- A pen
- Hot water
What You Do:
- Discuss the U.S. Constitution with your child. Explain that the Constitution is the highest law in the United States and that we all live by its rules. Inform her that the Constitution can be changed, or amended, and that the first ten amendments, known as the Bill of Rights, protect the rights of the people. Extend this discussion further by talking about how each principle is important to our freedom.
- Next, have her list the rules or “laws” each member of the family is expected to follow. Have a conversation with her about why these rules are important. Do your family's rules fall under the U.S. Constitution?
- Have her list the rules on lined or scratch paper. Explain that these rules may be changed or amended (like the U.S. Constitution) as your child grows older. For example, her bedtime may be an hour later a few years from now.
- Next, have her take a piece of drawing paper and tear off about a 1/2-inch margin from each side. The tearing will give the paper an old, tattered look.
- Ask your child to write the rules of the house in cursive or print them neatly onto the drawing paper. Drawing paper works well because it absorbs the tea without tearing. Make sure the ink is dry before moving on to the next step.
- Next, invite her to add tea bags to hot water and let them soak for a few minutes.
- When the water has cooled just enough to touch, remove the tea bags and squeeze out excess water.
- Encourage her to gently press the tea bags onto her Family Constitution. She can continue to apply pressure with the tea bag until the entire paper is covered. When she's done, the paper should appear to have an aged look to it.
Let the paper dry and ask her to hang it in a visible place in your home; you may even choose to frame it. Your family can refer to its very own Constitution at any time.
The Kepler-186 system consists of 5 known planets circling a red dwarf star. The 5 planets have sizes ranging from 1.0 - 1.5 Earth-radius and orbital periods of 3.9 - 130 days. All 5 planets are probably rocky, since planets with large hydrogen-helium gas envelopes tend to be larger than 1.5 - 2.0 Earth-radius. Of particular interest is Kepler-186f, the fifth planet in the system. Kepler-186f is the first confirmed Earth-sized planet in the habitable zone around another star. Its detection was reported by Quintana et al. (2014) in a paper published in the April 18 issue of the journal Science. Another paper by Bolmont et al. (2014) evaluates the habitability of the Kepler-186 system and, in particular, the habitability of Kepler-186f. Additionally, the paper also investigates the formation and tidal evolution of the Kepler-186 planetary system.
Figure 1: Artist’s impression of a habitable planet.
Figure 2: Orbital configuration of the Kepler-186 planetary system. The shaded regions denote the habitable zone. The bottom part of the plot shows a comparison between 4 different planetary systems that contain planets in the habitable zone: the Solar System, Kepler-62, Kepler-186 and GJ 581. Source: Bolmont et al. (2014).
Being situated in the habitable zone does not necessarily imply that Kepler-186f is habitable. Habitability also depends on the planet’s atmospheric characteristics. In the study, simple climate models are used to assess the habitability of Kepler-186f. The model atmospheres are assumed to be composed of carbon dioxide (CO2), nitrogen (N2) and water (H2O) only. A key criterion for habitability is the ability of a planet to sustain liquid water on its surface. In the case of Kepler-186f, to keep mean surface temperatures above 273 K - the freezing point of water - the models show that modest amounts of CO2 are needed in most cases. For large amounts of atmospheric N2 (~10 bars), 200 - 500 mbar of CO2 is all that is required to keep surface temperatures above 273 K.
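One way to see why CO2 is needed at such low insolation is the standard greenhouse-free equilibrium temperature, T_eq = [S(1 − A)/4σ]^(1/4). The sketch below uses the study's 0.32 relative insolation; the Earth-like Bond albedo of 0.3 is an assumed value for illustration only:

```python
# Greenhouse-free equilibrium temperature of a planet:
#   T_eq = [ S * (1 - A) / (4 * sigma) ]^(1/4)
# The 0.32x insolation is from the study; the albedo of 0.3 is an
# assumed, Earth-like value used purely for illustration.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_EARTH = 1361.0   # solar constant at Earth, W/m^2

def t_eq(insolation_rel, albedo=0.3):
    s = insolation_rel * S_EARTH
    return (s * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(f"Earth:       {t_eq(1.0):.0f} K")   # ~255 K
print(f"Kepler-186f: {t_eq(0.32):.0f} K")  # ~191 K, far below 273 K
```

Even Earth sits well below freezing without a greenhouse effect; at 32 percent insolation the shortfall is much larger, which is why the models call for hundreds of millibars of CO2 to lift the surface above the water triple point.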
Figure 3: Surface temperature as a function of CO2 partial pressure, for different N2 partial pressures. Water triple point temperature of 273 K is indicated by the horizontal dashed line. Top to bottom rows: decreasing insolation - 0.32, 0.29 and 0.27 (Earth = 1.0). Left to right columns: increasing gravity. Source: Bolmont et al. (2014).
Given such favourable prospects for habitability, it is worth considering the possibility of photosynthesis occurring on Kepler-186f. The amount of insolation Kepler-186f receives from its host star is estimated to be 32 percent the intensity of insolation Earth receives from the Sun. An atmosphere containing 5 bar of CO2 and 1 bar of N2 would support a surface temperature of 285 K on Kepler-186f, close to Earth’s current mean surface temperature. Due to atmospheric absorption, such an atmosphere would further depress the amount of insolation reaching the surface of Kepler-186f, to a factor of seven less than what Earth’s surface gets. At wavelengths of 500 - 700 nm, corresponding to plant chlorophyll, the difference becomes even larger, with Earth getting 10 - 20 times more flux than Kepler-186f. Although such a low level of insolation does not preclude photosynthesis, it does suggest that photosynthesis on Kepler-186f would occur at a much slower rate than on Earth.
Figure 4: Net stellar insolation received at the top of atmosphere (TOA) and at the surface for Kepler-186f, assuming an atmosphere containing 5 bar of CO2 and 1 bar of N2. This is shown in comparison to modern Earth. Source: Bolmont et al. (2014).
- Quintana et al., “An Earth-Sized Planet in the Habitable Zone of a Cool Star”, Science 18 April 2014: Vol. 344 no. 6181 pp. 277-280.
- Bolmont et al. (2014), “Formation, tidal evolution and habitability of the Kepler-186 system”, arXiv:1404.4368 [astro-ph.EP]
What sparked the recent surge of the Ebola virus, and could it have been predicted?
Ebola outbreaks among human populations usually result from handling infected wild animals. Although the virus reservoir has not yet been identified with certainty, fruit bats are believed to be the natural hosts for the virus in Africa. It is therefore impossible to predict the start of an outbreak, although it is possible to project its unfolding if containment and mitigation policies are not implemented in a timely manner. Human-to-human transmission mostly occurs through blood or bodily fluids from an infected person, thus mostly affecting caregivers in the family or in healthcare settings where proper precautions aren’t taken. Isolation of cases in well-equipped healthcare settings and the use of rigid protection protocols for handling burial procedures are crucial for the containment of outbreaks.
How is this outbreak different from those that have occurred in the past?
Since March, the World Health Organization has reported more than 1,000 cases of Ebola with a fatality rate of about 60 percent, depending on the location. Although previous outbreaks recorded fatality rates of up to 90 percent, this current outbreak is the worst in terms of the number of infected people. This outbreak is somewhat unique also because it has hit major urban areas such as Conakry, the capital city of Guinea. In the past, Ebola has usually emerged in less populated rural regions. Isolation and control in large cities is obviously more challenging. Capital cities are also major transportation hubs for travelers who could potentially spread the outbreak to other geographical regions.
Does this outbreak present an international concern and if so, how great is that concern?
The risk of infection for travelers is minimal because infection results from direct contact with sick individuals. However, the presence of the disease in major cities with airports introduces the possibility that infected people not yet in the acute stage of the disease will board a plane and spread the virus internationally. This global spreading can be modeled using human mobility network data. Although we cannot rule out the possibility of cases reaching major European or American airport hubs, the probability of such events is quite small because the major airports in the region have limited traffic to international destinations. On the other hand, the persistence of the outbreak and the growing number of cases are increasing the probability that we might see it spread internationally. This makes it imperative to win the battle of containing the outbreak in the region as soon as possible.
Test your students’ vocabulary knowledge with this word search. It will help your students focus on spelling and reinforce their knowledge of the words.
-Use as an in class assignment, warm up, or homework assignment.
-Challenge your higher level students to a “race the clock” activity.
-Project it in class and complete as a whole class or have two students race to find the word.
- Have your students translate the words to English or create a drawing for each word
Both chapters and ANSWER KEYS are included.
PEOPLE who develop schizophrenia may start life with differently structured brains. The finding adds support to the idea that genetics can play a crucial role in the condition.
Probing the biology of schizophrenia is difficult – brain tissue from people who have it is rarely available to study. Kristen Brennand of the Icahn School of Medicine at Mount Sinai in New York and her colleagues got around this by taking skin cells from 14 people with schizophrenia, and reprogramming them into stem cells and then neurons.
They found that, on average, these neurons had lower levels of a signalling molecule called miR-9 when compared with similar cells developed from people without schizophrenia.
The team also found that the “schizophrenic” nerve cells could not migrate as far in a dish. This discrepancy vanished if levels of miR-9 were artificially restored. The molecule seems to be a master switch for many genes affecting cell migration (Cell Reports, DOI: 10.1016/j.celrep.2016.03.090).
Schizophrenia symptoms don’t usually begin until adolescence, but the suspicion is that the condition is caused by problems that begin in the womb but stay silent through childhood.
“Even before your child is born the genetics have already started to do their work,” says Brennand.
This article appeared in print under the headline “Schizophrenia’s foundations may be laid down in the womb”
You should have a working knowledge of the following terms:
Introduction and Goals
Bacteria are everywhere. They have been found at the deepest depths of the oceans and high above in the atmosphere. Based on sheer numbers and species diversity, they are the most successful group of life on the planet.
This tutorial is the first in a three-part series discussing prokaryotes. By the end of this first tutorial you should have a basic understanding of:
- The basic features of all prokaryotes
- The diverse lifestyles of prokaryotes
- Why prokaryotes can undergo rapid evolutionary change
- The different nutritional modes of prokaryotes
- The basic genetic organization of prokaryotes
- Asexual reproduction as a means for acquiring new genetic information
Figure 1. Colorized Image of a Bacterial Colony. (Click to enlarge)
A defining feature of prokaryotes is their lack of membrane-bound nuclei. This is not to say they lack subcellular specialization because some prokaryotes have very elaborate internal membranes. However, they generally have less subcellular specialization than eukaryotes (organisms with membrane-bound nuclei and organelles). Also, prokaryotes are usually much smaller than eukaryotic cells (1-5 microns compared to 10-100 microns). They are often described as single-celled organisms, but they can form colonies that show a remarkable level of complexity (as depicted in this colorized image of a bacterial colony). The shape of individual cells is used to classify prokaryotes; they can be either spherical (coccus), rod-shaped (bacillus), or helical (spirillum).
This animation is a simple game to test your understanding of the basic features of a prokaryote.
In the last few decades, several taxonomic schemes have been used to describe life. One of the simplest divided life into prokaryotes and eukaryotes; that is, those organisms without nuclei went into one group and those with nuclei went into another, respectively. Another commonly used scheme divided life into five kingdoms: Monera (prokaryotes), Protista, Plantae, Fungi, and Animalia. Keep in mind that classification schemes strive to show the evolutionary relationships between groups, and in recent years it has become apparent that the evolutionary relationships of prokaryotes are quite complex. One prokaryotic group, the Archaea, have some features that are more eukaryotic than prokaryotic. Although the Archaea lack a nucleus, their genetic organization is more like that of a eukaryote.
Figure 2. The Three-Domain System of Classification. (Click to enlarge)
To reconcile new data, the taxonomic scheme of life has been revised. The most current scheme proposes that life be divided into three domains. In this scheme prokaryotic organisms can belong to the domain Archaea or the domain Bacteria, while those organisms that have a nucleus comprise the third domain, Eukarya. Organisms that make up these three domains are sometimes referred to as archaebacteria, eubacteria and eukaryotes, respectively. This figure illustrates the relationship between the three domains. Keep in mind that the common ancestor of the prokaryotes most likely arose about 3 billion years ago.

Archaea are sometimes referred to as extremophiles, inhabiting extreme environments (e.g., hot springs, salt ponds, Arctic ice, deep oil wells, acidic ponds that form near mines, and hydrothermal vents); however, these environments are not extreme to the archaea. In fact, many extremophiles die when moved to our environment. Life is relative.
The rest of the prokaryotes are classified as bacteria. Some textbooks and articles still refer to all prokaryotes as bacteria, but there is an increasing tendency to make a distinction between archaebacteria and bacteria.
Figure 3. Thermophilic (heat-loving) bacteria. (Click to enlarge). Limestone terraces, formed by precipitation from calcium-rich water flowing from a raised hotpool. Pink, green, and brown-colored archaebacteria occupy the thermal gradients in the flowing water (60-100°C).
Prokaryotes by the Numbers
In terms of metabolic impact and numbers, prokaryotes dominate the biosphere. They outnumber all eukaryotes combined. They live in a myriad of environments and even a teaspoon of common dirt can harbor 100 million or more bacteria. Not only are bacteria plentiful in total numbers, but the species diversity may be quite high as well. Although recognizing distinct species of bacteria is a challenge for microbiologists, modern approaches using DNA diversity analysis suggest that bacteria spawn new species quite rapidly. Recent studies suggest that in that same teaspoon of soil there could reside up to 1 million different species.
Fast Growth and High Rates of Evolution
In some cases, prokaryotes can divide in as little as 20 minutes (although much slower rates are also observed). Generally, prokaryotes have three factors that enable them to grow rapidly. First, prokaryotes have a small genome (genetic material). Second, prokaryotes have simple morphologies (structural features). Third, prokaryotes replicate via binary fission (cell division in which a prokaryotic chromosome replicates and the mother cell pinches in half to form two new daughter cells). These three factors allow for a short generation time. This short generation time means that evolutionary changes occur relatively quickly when compared to longer-lived species.
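A 20-minute doubling time compounds quickly. A minimal sketch of ideal (unchecked) exponential growth by binary fission, N(t) = N0 · 2^(t/t_double):

```python
# Ideal exponential growth by binary fission: N(t) = N0 * 2^(t / t_double).
# Real cultures slow down long before this as nutrients are exhausted.

def population(n0, hours, doubling_minutes=20):
    generations = hours * 60 / doubling_minutes
    return n0 * 2 ** generations

# A single cell dividing every 20 minutes for 8 hours = 24 generations.
print(f"{population(1, 8):.1e}")  # → 1.7e+07, about 17 million cells
```

Twenty-four generations in a single workday is why evolutionary change in such populations can be observed on human timescales.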
Genetic Organization Aids Fast Generation Times
Compared to eukaryotes, prokaryotes usually have much smaller genomes. On average, a eukaryotic cell has 1000 times more DNA than a prokaryote. This means that less DNA must be replicated with each division in prokaryotes.
The DNA in prokaryotes is concentrated in the nucleoid. The prokaryotic chromosome is a double-stranded DNA molecule arranged as a single large ring.
Prokaryotes often have smaller rings of extrachromosomal DNA termed plasmids. Most plasmids consist of only a few genes. Plasmids are not required for survival in most environments because the prokaryotic chromosome programs all of the cell's essential functions. However, plasmids may contain genes that provide resistance to antibiotics, metabolism of unusual nutrients, and other special functions. Plasmids replicate independently of the main chromosome, and many can be readily transferred between prokaryotic cells.
Prokaryotes replicate via binary fission. Binary fission is simply cell division whereby two identical offspring each receive a copy of the original, single, parental chromosome. Binary fission is a type of asexual reproduction (reproduction that does not require the union of two reproductive cells, and that produces offspring genetically identical to the parent cell). A population of rapidly growing prokaryotes can synthesize their DNA almost continuously, which aids in their fast generation times. Even as a cell is physically separating, its DNA can be replicating for the next round of cell division.
Figure 4. Binary division in a bacterium. (Click to enlarge)
This simple animation shows binary fission in a prokaryote.
Asexual Reproduction and the Transfer of Genetic Information Between Prokaryotes
Prokaryotes do not alternate between the haploid and diploid states, hence meiosis and fertilization are not components of their life cycles. Rather, binary fission is the main method of reproduction in prokaryotes. This form of asexual reproduction means that the genetic variation afforded by meiosis/fertilization does not occur in prokaryotes. Nonetheless, genetic variation does occur in prokaryotes, and mutations (coupled with short generation times) are one source of variation in the population. Remember that genetic variation, within a population, can be beneficial because it provides the raw materials for a population to adapt to a changing environment. Greater diversity in the gene pool increases the likelihood that at least some of the organisms in a population will have the right alleles to survive if environmental conditions change.
One way that genetic material can be moved between bacteria is transformation. Transformation occurs when prokaryotes acquire genes from their surrounding environment. This DNA might have been left behind by other bacteria (from the same or different species) when they died. The foreign DNA is directly taken up by the cell and expressed. If the DNA contains a beneficial gene (e.g., one encoding for antibiotic resistance), then the individuals harboring that gene will have a selective advantage over their non-transformed counterparts. As long as individuals with this gene reproduce more successfully, compared to those lacking the gene, they will be more fit and the gene will increase in frequency (i.e., microevolution, via natural selection, will occur).
Other examples include transformation of nonpathogenic bacteria into pathogenic (harmful) strains. When harmless Streptococcus pneumoniae bacteria are placed in a medium containing dead cells of the pathogenic strain, they can take up the DNA from the dead pathogenic cells. If the formerly harmless bacteria pick up the gene for pathogenicity, they will become pathogenic themselves. It is important to point out that pathogenicity may not confer a long-term increase in fitness; if the host dies, the microsymbiont is left in a cold house.
Transformation is commonly used by genetic engineers to relocate bacterial genes.
Genetic material can also be moved between bacteria by conjugation. The mechanism of conjugation requires that two living prokaryotic cells physically join with one another. Typically DNA transfer goes only one way, with the "male" donor using an appendage called a pilus (plural, pili). In order to produce pili, a prokaryote must carry a plasmid termed the F factor (fertility factor plasmid). When a cell has the F factor plasmid, it is said to be F+. This F+ condition is heritable: if an F+ cell divides, both of the resulting cells will be F+. The condition is also "contagious": after an F+ cell conjugates with a "female" cell that lacks the F factor, the "female" cell obtains the F factor plasmid and becomes F+ ("male").
Genetic material can also be moved between bacteria by transduction. In this event, the exchange of DNA between prokaryotes is made possible by phages (viruses that infect bacteria). Phages reproduce by injecting their genetic material inside the bacterial cell, then multiplying, and eventually bursting from the cell. In a mechanism referred to as specialized transduction, the phage DNA inserts somewhat benignly into the bacterial host chromosome. Here it can lie dormant for many generations. However, under certain conditions, the phage DNA excises itself from the bacterial chromosome (usually carrying pieces of the chromosome with it), then replicates and forms new phages that burst out of the cell. These phages can reinfect other bacteria and thereby transfer not only their own DNA, but pieces of the former host's DNA into the newly infected bacterium.
Figure 5. Overview of Transduction.
Genetic Variation and Evolution
The short generation time associated with binary fission was pointed out earlier in this tutorial. We also know that mutations add new and different alleles to populations. These two factors (short generation times and mutations), combined with the processes of transformation, conjugation, and transduction, help prokaryotic populations achieve vast genetic variation (without the alternation of haploid/diploid states seen in many eukaryotes). Because generation times range from minutes to hours, a beneficial mutation can be heavily favored by selection and passed on to a great number of offspring in a very short period of time. Once again, a short generation span enables prokaryotic populations to adapt very rapidly to environmental change. This adaptive evolution is as important now to prokaryotes as it was when prokaryotic life began to diversify a few billion years ago.
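To see why short generation times matter so much, consider the arithmetic of doubling. The sketch below (in Python; the 30-minute doubling time is an assumption chosen for illustration, not a figure from this tutorial) shows how quickly a single cell carrying a beneficial allele can give rise to an enormous number of copies:

```python
# Illustrative sketch: exponential growth by binary fission.
# Assumptions (ours, not the tutorial's): a fixed 30-minute doubling time
# and unlimited resources, so every cell divides each generation.

def descendants(hours, doubling_time_minutes=30):
    """Cells descended from a single cell after `hours` of growth."""
    generations = (hours * 60) // doubling_time_minutes
    return 2 ** generations

print(descendants(1))   # 4 cells after one hour (2 generations)
print(descendants(24))  # 2**48 cells after a day -- roughly 2.8e14
```

At that rate, an allele that arises in a single cell can, under ideal conditions, be carried by trillions of cells within a day, which is why selection can act so quickly on prokaryotic populations.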
During the course of evolution, prokaryotes have adapted to a myriad of environments. Part of this adaptation involves different ways of obtaining energy and carbon. In looking at the diversity of prokaryotes, one observes many different nutritional modes. When considering nutritional modes, there are some general features that are commonly used to categorize the nutritional state of any life form.
All life can be categorized nutritionally, according to how an organism obtains its energy and from where it gets its carbon. The prefixes "chemo" and "photo" are used to describe whether the energy comes from a high-energy molecule (e.g., glucose) or from light, respectively. "Auto" and "hetero" are used to describe whether carbon dioxide or a more complex form of carbon is used as a carbon source, respectively. The prefixes are then affixed to the suffix "troph," meaning nourishment.
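The prefix scheme can be summarized in a few lines of code. The function below is purely illustrative; its name and dictionary keys are our own, not standard terminology:

```python
# Sketch of the naming scheme described above; names are ours.
def nutritional_mode(energy_source, carbon_source):
    """Combine an energy prefix and a carbon prefix with the suffix 'troph'."""
    energy = {"light": "photo", "chemical": "chemo"}[energy_source]
    carbon = {"CO2": "auto", "organic": "hetero"}[carbon_source]
    return energy + carbon + "troph"

print(nutritional_mode("light", "CO2"))         # photoautotroph
print(nutritional_mode("chemical", "organic"))  # chemoheterotroph
```

For example, a cyanobacterium using light energy and carbon dioxide is a photoautotroph, while a bacterium living on glucose is a chemoheterotroph.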
This tutorial introduced you to the prokaryotes. They are a very diverse group of organisms that are commonly referred to as bacteria; however, they actually comprise two different domains. One domain, the Archaea, usually grow in the most extreme environments. Their ability to occupy extreme habitats is mirrored by their flexibility in utilizing resources; some species are photosynthetic, whereas others can live on oil or hydrogen sulfide. The other domain, the Bacteria, is much more abundant. Although diverse, members of both domains share some common features. Prokaryotes lack membrane-bound nuclei, they are generally single-celled or colonial, and they are very small. The genetic organization of prokaryotes and binary fission as a means of replication aid in their fast generation times, which contributes to relatively quick evolutionary changes. We will continue our discussion of prokaryotes in the next tutorial by exploring their morphologies and by describing some of their interactions with other life forms.
Structure of Neuron
The neuron is the working cell of the brain. A neuron is made up of several components, including the cell body, the dendrites, and the axon. The axon is the portion most exposed to injury in a shear injury, because this long protrusion can extend substantial distances across differentially moving layers of the brain. A typical neuron somewhat resembles a dinner fork: the handle of the fork represents the axon, the tines are the dendrites, the butt of the handle is the synaptic terminal, and the area where the tines branch off is the cell body, or soma.
The Structure of Neurons and How They Work
A neuron's job is information transfer: from one end of a given neuron to the other, and from one cell to another cell. Information transfer within a cell is called intracellular signaling; information transfer between cells, intercellular signaling. The intracellular signal begins at the soma and runs down the axon to the terminal end. The intercellular transfer occurs across the synapse, where the signal jumps from one neuron to the next.
The Role of the Axon in Intracellular Signaling.
The role that the axon plays in intracellular signaling is roughly equivalent to the wire that connects a switch to an electrical fixture. Like the wires behind your drywall, the axon may run a substantial distance, as much as a meter. Like an electrical wire, it is thin. And like an electrical wire, it transfers electrical impulses.
Axons Are Vulnerable to Shearing Forces.
The axon is vulnerable to injury when the brain begins to move as a result of rotational forces. When traumatic acceleration/deceleration forces are placed upon the brain, the layers, at progressively greater distances from the fulcrum of the motion, move at differential speeds, creating a sliding of these different layers over one another.
The impact of this sliding is that the axon is quickly stressed beyond its tolerance. Even if the axon is not severed as a result of such force, it may be significantly damaged.
As an electrical wire has insulation, so does an axon. To protect the axon from damage and to ensure that its electrical impulses do not stray, the axon is covered by insulation called the myelin sheath.
Unlike the insulation we see on wires, the myelin sheath is not a continuous covering. The myelin sheath is made up of glial cells laid end to end along the axon. Each glial cell can be thought of as a roll of paper towels, with the material wrapped many times around the core of the axon. Just as if you were to line up a row of paper-towel rolls across a room, there are gaps between each glial cell. At each space between one glial cell and the next, there is a small gap in the myelin.
These gaps between glial cells are called the Nodes of Ranvier. The Nodes of Ranvier serve an important purpose, providing channels through which charged particles can enter the axon and boost the signal as it travels down the length of the cell.
When the axon's insulation, the myelin sheath, is damaged, the speed of information processing within the brain can be profoundly impaired. Processing-speed problems, and the associated attention and concentration problems, are among the most common deficits after an MTBI or concussion, and this diffuse injury to the axons is likely the culprit.
Source: Levitan, The Neuron, ©1997, Oxford University Press.
Technology’s Touch on a Time-Tested Teaching Tool
Flashcards have been around since the stone age, but now let's take a look at a technology tool that will help ELLs learn metacognitive skills while practicing vocabulary. We recommend that ESL teachers use electronic flashcards on iPods to help reinforce content and, at the same time, teach students valuable study skills.
Every day, teachers and students are discovering that iPods have uses beyond downloading music, movies, and entertainment. When used creatively, iPods can bridge the classroom with the outside world. This has tremendous appeal to today's tech-savvy students, aptly named Digital Natives. 1
There are several ways to create flashcards that can be used on your iPod. One quick and easy way is to use digital photos (jpeg, gif, or png) and create a photo album in your iTunes library. Take pictures of labeled objects in your classroom. After selecting the photo album, view the photo album as a slideshow by simply hitting the play button on your iPod. Go to the settings menu to add music from your iPod and to adjust the timing and transitions.
Through the use of iPod technology, English Language Learners can increase and reinforce academic language proficiency and content area knowledge. The capability, versatility, and popularity of iPods among the school-age demographic make it the perfect crossover teaching tool between learning in the classroom and embracing the outside world.
1 Prensky, M. (2001). Digital Natives, Digital Immigrants. On the Horizon, 9 (5).
We've all been taught since grade school that Earth has seven continents, along with the theory of how they split apart from the supercontinent Pangea. Recently, however, a group of researchers identified a hidden expanse of continental crust around New Zealand, a neighbor of the continent of Australia. They argue that this hidden region should be designated as a new continent, which they have dubbed Zealandia.
The idea of this potential continent was first put forward by geophysicist Bruce Luyendyk, who coined the name in 1995. Researchers have since dedicated 10 years of study to proving it.
Specifically, sea-floor samples showed that Zealandia consists of light continental crust rather than the dark volcanic rock that makes up nearby underwater plateaus. Moreover, rather than a group of continental islands and fragments, the researchers found the area to be a structurally intact expanse of continental crust large enough to be separated out and officially called a continent. Complicating the argument, there is actually no widely accepted definition of a continent; geographers and geologists give different answers. For instance, geographers consider Europe and Asia separate continents, whereas geologists treat them as the single landmass of Eurasia.
“One of the main benefits of this article is that it draws attention to the arbitrary and inconsistent use of such a fundamental term as continent,” says Brendan Murphy, a geologist at St. Francis Xavier University in Antigonish, Canada.
This new continent stretches from near Australia's northeastern coast past the islands of New Zealand, encompassing a land mass of about 1.8 million square miles. It also includes New Caledonia, along with several other territories and island groups.
Whatever the outcome of the argument, this study could help biogeographers better understand the endemic plants and animals of New Zealand. It is also remarkable how mysterious this planet remains despite years of research and our living on it.
Seeing a double image can happen when using stereo microscopes or compound microscopes with a binocular head. This can be due to several reasons.
- Make sure that the two eyepieces are the correct distance from each other. Adjust the distance to suit your eyes.
- Make sure that the diopter compensation is properly set. One of the two eyepieces can be rotated to adjust for differences in eyesight between your two eyes. If you wear glasses, these already correct for the difference; in that case, make sure both diopter adjustments are set the same.
- Stereo microscopes contain prisms that turn an inverted image right side up. If these prisms have shifted (for example, because the microscope took a heavy bump), it is likely that the two images no longer overlap. In this case the microscope has to be taken apart and the prisms adjusted.
By far the most important skill students need for future success is the ability to solve complex and challenging problems. This sentiment is shared by practically every educator we’ve asked across the globe. With Solution Fluency you have a powerful tool to give to your students. It’s a process that will serve them well in school and in life. Let’s turn on the Solution Fluency spotlight and shine it for better understanding.
The question educators ask most often is: What does Solution Fluency look like in action in the classroom? This Solution Fluency spotlight will help. It has simple tools for helping you understand how to apply the 6Ds process in your teaching.
Solution Fluency in the Classroom
When educators first see Solution Fluency, it looks familiar to them. They say “we do that already.” That’s because the 6Ds process of Solution Fluency mirrors other processes they are already familiar with.
This chart shows how Solution Fluency’s 6Ds echo the stages of other common learning processes. These include the scientific method, the writing process, media production, and design thinking.
These methodologies are all engineered to solve problems effectively and efficiently. So the idea of Solution Fluency being a fluid and unconscious process in students’ minds isn’t a stretch.
What Solution Fluency Is and Isn’t
Let’s challenge some common assumptions about Solution Fluency. Here’s a look at both what it is and is not.
Solution Fluency is not:
- A linear process. Solution Fluency isn't bound by the limitations and strictness of linear processes. It has a different, more intuitive flow.
- A long complicated process. Some teachers assume Solution Fluency takes days or weeks to implement. It can be used this way, but that’s not the only way.
- Exclusive to a classroom. Solution Fluency isn't strictly a process for the modern classroom. True, its purpose is for learning, but its power extends beyond school.
- An advanced thinking model. To some, Solution Fluency looks too complicated to work with every student. As such, teachers often have doubts about its viability.
Solution Fluency is:
- A cyclical process. All the phases of Solution Fluency can be revisited in any learning journey. Revisiting them isn't always necessary, but it can happen.
- A versatile process. Solution Fluency can be applied to any task of any scale. It doesn’t matter if you’re making a grocery list or redesigning the universe!
- A skill for life. Solution Fluency teaches us crucial problem-solving, critical thinking, and analytical skills. It’s a formula for success in every aspect of life.
- A skill for everyone. An eye-opening moment for us was when we saw 4-year-old children explaining the 6Ds to their parents. Solution Fluency is that simple!
Some Guiding Questions
Understanding how to use Solution Fluency in a classroom setting means asking guiding questions at every phase. Suggestions for each phase are listed below.
Define: What are the details of the challenge we face? What do we want to overcome specifically? What do we want to solve?
Discover: What do I need to know and what do I need to be able to do? Why do we need this to happen? Why hasn’t it been done previously? If it has, why wasn’t it successful? What can we change?
Dream: What do we truly want to create? How will it function? What will it look like? What’s our best-case scenario for the end goal?
Design: What does it look like “on paper”? How will we create and implement it? What are the steps we must take? What are the milestones and guidelines we will set for ourselves? How will we ensure everything is being done right and on time? How will we deal with problems?
Deliver: How do we bring this idea into functional reality? How do we practically apply what we’ve done? How will we present this to people? How will we know it’s working?
Debrief: What were the results of our efforts? How did we succeed or fall short of accomplishing our goal? What went well, and what didn’t? How can we improve our efforts and outcome in the future? How can we apply what we’ve done to similar problems?
The Ultimate Solution Fluency Tool
There is a perfect way to put Solution Fluency to practical use in your classroom. The Solution Fluency Activity Planner uses Solution Fluency as the guided process for creating your best project-based learning lessons.
It features thousands of plans to view and share, and plenty of room to create your own. Join a global network of educators now and start bringing Solution Fluency into your classroom! |
Message on the Observance of National Afro-American (Black) History Month, February 1988
February traditionally has been our National Black History Month. In our celebration of this period, all Americans should reflect on the theme, ``The Constitutional Status of Afro-Americans into the Twenty-first Century.''
Americans' mighty contributions to the greatness of this land we call
Our Founding Fathers were the architects of the greatest political document ever written. In its preamble, they recorded their dream of securing ``the Blessings of Liberty to ourselves and our Posterity. . . .'' The dream of liberty for black Americans found many courageous champions before and during the bloody years of the Civil War, in the Jim Crow era, and in the modern civil rights movement. They saw that the bell of liberty rings hollow unless applied equally to Americans of every race, creed, and color.
The issues of freedom and equality are at the very core of National Afro-American (Black) History Month. This month offers all Americans the chance to learn more about a vital part of our history. But as we learn, we must remember that the battle against the disease known as prejudice cannot be waged and won in one era and forgotten in another. Every generation must renew the fight against injustice. |
Despite differences in approach, Endemic Bird Areas, Terrestrial Biodiversity Hotspots and Global 200 Ecoregions overlap extensively, helping to focus attention on the world’s most important places for biodiversity conservation.
Responding to the need to focus effort and investment, many conservation organisations have carried out priority-setting exercises. However, these differ extensively in their targets, scale (both ‘grain’, i.e. size of the unit of analysis, and extent), and whether they tackle questions of where or how to do conservation (Redford et al. 2003). Given these disparities, argument about the ‘right’ way to set priorities is not surprising. Different approaches are often trying to achieve different things, or are nested within one another in terms of scale. When approaches of similar grain, extent and purpose are compared, there are often reassuring levels of agreement.
Well-established, large-grain, global priority analyses that ask ‘where’ conservation should be done include Endemic Bird Areas (EBAs) (BirdLife International, Stattersfield et al. 1998), Terrestrial Biodiversity Hotspots (Conservation International, Mittermeier et al. 1998) and the Global 200 Ecoregions (WWF, Olson and Dinerstein 1998). EBAs (numbering 218) are areas where two or more bird species with ranges of less than 50,000 km2 co-occur. Hotspots (25) are biogeographic regions with high levels of plant endemism (at least 1,500 endemic plants, corresponding to 0.5% of the global vascular plant flora) and where less than 30% of the original natural habitat remains. Global 200 Ecoregions (200) are considered the most biologically valuable ecoregions, containing outstanding examples of each of the world’s habitat types.
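The Hotspot criteria stated above (at least 1,500 endemic plants and less than 30% of original habitat remaining) can be written as a simple predicate. This is only an illustration of the definition, not a tool used in the analyses themselves, and the example figures below are invented:

```python
def is_biodiversity_hotspot(endemic_plant_species, habitat_remaining_pct):
    """Hotspot criteria as stated in the text (Mittermeier et al. 1998):
    at least 1,500 endemic vascular plants AND under 30% of original
    natural habitat remaining."""
    return endemic_plant_species >= 1500 and habitat_remaining_pct < 30

print(is_biodiversity_hotspot(2000, 25))  # True: both criteria met
print(is_biodiversity_hotspot(2000, 50))  # False: too much habitat still intact
```

Note that the definition combines a biological-value criterion (endemism) with a threat criterion (habitat loss), which is why Hotspots differ from frameworks such as EBAs that are defined by endemism alone.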
These priority areas have been identified using different approaches and criteria, including different taxonomic coverage at different scales, yet they show considerable geographic overlap and similarity (see figure). For example, EBAs, Terrestrial Biodiversity Hotspots and Global 200 Ecoregions all encompass the Atlantic Forest of Brazil, the Philippines, large parts of Madagascar and the tropical Andes. These priority-setting frameworks are valuable for focusing global-scale attention and funding on the world’s most important places for biodiversity conservation.
BirdLife International (2004) Different broad-scale conservation priorities overlap extensively. Presented as part of the BirdLife State of the world's birds website. Available from: http://www.birdlife.org/datazone/sowb/casestudy/211. Checked: 17/03/2014
Key message: Priorities must be set to target scarce resources
Concept of Digital Preservation
Digital preservation consists of the processes aimed at ensuring the continued accessibility of digital materials. To do this involves finding ways to re-present what was originally presented to users by a combination of software and hardware tools acting on data.
To achieve this requires digital objects to be understood and managed at four levels: as physical phenomena; as logical encodings; as conceptual objects that have meaning to humans; and as sets of essential elements that must be preserved in order to offer future users the essence of the object.
Digital preservation can be seen as all those processes aimed at ensuring the continuity of digital heritage materials for as long as they are needed.
The most significant threats to digital continuity concern loss of the means of access. Digital materials cannot be said to be preserved if the means of access have been lost and access becomes impossible. The purpose of preserving digital materials is to maintain accessibility: the ability to access their essential, authentic message or purpose.
Digital preservation involves choosing and implementing an evolving range of strategies to achieve the kind of accessibility discussed above, addressing the preservation needs of the different layers of digital objects. The strategies include:
- Working with producers (creators and distributors) to apply standards that will prolong the effective life of the available means of access and reduce the range of unknown problems that must be managed
- Recognising that it is not practical to try to preserve everything, selecting what material should be preserved
- Placing the material in a safe place
- Controlling material, using structured metadata and other documentation to facilitate access and to support all preservation processes
- Protecting the integrity and identity of data
- Choosing appropriate means of providing access in the face of technological change
- Managing preservation programmes to achieve their goals in cost-effective, timely, holistic, proactive and accountable ways. |
History of Bulgaria – part 1 – The Balkans before Bulgaria
Long before the Bulgarians came to the Balkans, the region was inhabited by Thracians – as early as 3500 BC. More than 50 different Thracian tribes lived on the territories of today's Bulgaria, Serbia, Macedonia, and Greece. Although they shared the same ethnicity, they did not live in peace, and each tribe fought for its independence and the establishment of its own state. In the 5th century BC one of the tribes – the Odrysians – became powerful enough to take over most of the Balkans and establish the largest Thracian kingdom. Hundreds of Thracian tombs, temples, and treasures are spread across the territory of Bulgaria today. Notable examples of this Thracian legacy are the tombs in the Valley of the Thracian Kings in southern Bulgaria and the gold treasure from Panagyurishte.
The Odrysian kings were Hellenized and had adopted the Greek alphabet, but they kept their sovereignty until 46 AD, when Thrace was conquered and became a Roman province.
The Roman Empire and Byzantium
The history of the Byzantine empire begins when Constantine the Great decided to move the seat of the Roman empire to Constantinople (today Istanbul), with the goal of improving the defense of the endangered provinces – in effect founding a second Rome. By the time Thrace became part of the Roman empire, the population of the Balkans was already Hellenized, having embraced Greek culture and language. Although Latin was an official language for a while, it never established itself well in the eastern provinces, and this was one of the main reasons for the division of the empire in two – the Western (Latin) Roman empire and the Eastern (Greek) empire, Byzantium – in 395 AD.
After the dethronement of the last Western Roman emperor, Romulus Augustulus, the authorities in Constantinople considered themselves the only legitimate heirs of the Roman empire. Byzantium managed to hold back barbarian tribes until the 5th and 6th centuries, when the Slavic tribes and the Bulgars slowly began to conquer its territories on the Balkans.
Confronting the Bulgars and Slavic tribes for the first time.
The Bulgars started their journey to Europe from Central Asia, leaving their ancient homeland during the Migration Period in the 1st century AD. A Byzantine chronicler first mentions the Bulgars in the 4th century AD as inhabitants of the lands around the Caspian, Azov, and Black seas, with territories reaching to the foothills of the Caucasus. During the same Migration Period, South Slavic tribes migrated to Eastern Europe, settled along the Danube and the territories north of the Black sea, and slowly began to conquer the northern territories of Byzantium.
The South Slavic tribes united with the Bulgars, and together they became strong enough to defeat the Eastern Roman empire; this soon led to the founding of Old Great Bulgaria on the territory of modern Ukraine in 631. This first state later moved further west, to the territories north of the Danube.
An ultrasound exam is a procedure that uses high-frequency sound waves to scan a woman’s abdomen and pelvic cavity, creating a picture (sonogram) of the baby and placenta. Although the terms ultrasound and sonogram are technically different, they are used interchangeably and reference the same exam.
What types of ultrasound are there?
There are basically seven different types of ultrasound exams, but the basic process is the same.
The different types of procedures include:
Transvaginal Scans – Specially designed probe transducers are used inside the vagina to generate sonogram images. Most often used during the early stages of pregnancy.
Standard Ultrasound – Traditional ultrasound exam which uses a transducer over the abdomen to generate 2-D images of the developing fetus.
Advanced Ultrasound – This exam is similar to the standard ultrasound, but the exam targets a suspected problem and uses more sophisticated equipment.
Doppler Ultrasound – This imaging procedure measures slight changes in the frequency of the ultrasound waves as they bounce off moving objects, such as blood cells.
3-D Ultrasound – Uses specially designed probes and software to generate 3-D images of the developing fetus.
4-D or Dynamic 3-D Ultrasound – Uses specially designed scanners to look at the face and movements of the baby prior to delivery.
Fetal Echocardiography – Uses ultrasound waves to assess the baby’s heart anatomy and function. This is used to help assess suspected congenital heart defects.
How is an ultrasound performed?
The traditional ultrasound procedure involves placing gel on your abdomen to work as a conductor for the sound waves. Your healthcare provider uses a transducer to send sound waves into the uterus. The sound waves bounce off bones and tissue and return to the transducer, generating black and white images of the fetus.
When are ultrasounds performed?
Ultrasounds may be performed at any point during pregnancy, and the results are seen immediately on a monitor during the procedure. Transvaginal scans may be used early in pregnancy to diagnose potential ectopic or molar pregnancies.
There is not a recommended number of ultrasounds that should be performed during routine prenatal care. Because ultrasound should only be used when medically indicated, many healthy pregnancies will not require ultrasound. The average number of ultrasounds varies with each healthcare provider.
Additional ultrasounds might be ordered separately if your healthcare provider suspects a complication or problem related to your pregnancy.
What does the ultrasound look for?
Ultrasounds are diagnostic procedures that detect or aid in the detection of abnormalities and conditions related to pregnancy. Ultrasounds are usually combined with other tests, such as triple tests, amniocentesis, or chorionic villus sampling, to validate a diagnosis.
An ultrasound exam is medically indicated throughout pregnancy for the following reasons:
First trimester:
- Confirm viable pregnancy
- Confirm heartbeat
- Measure the crown-rump length or gestational age
- Confirm molar or ectopic pregnancies
- Assess abnormal gestation

Second trimester:
- Diagnose fetal malformation
- Weeks 13-14 for characteristics of potential Down syndrome
- Weeks 18-20 for congenital malformations
- Structural abnormalities
- Confirm multiples pregnancy
- Verify dates and growth
- Confirm intrauterine death
- Identify hydramnios or oligohydramnios – excessive or reduced levels of amniotic fluid
- Evaluation of fetal well-being

Third trimester:
- Identify placental location
- Confirm intrauterine death
- Observe fetal presentation
- Observe fetal movements
- Identify uterine and pelvic abnormalities of the mother
What are the risks and side effects to the mother or baby?
The ultrasound is a noninvasive procedure which, when used properly, has not demonstrated fetal harm. The long term effects of repeated ultrasound exposures on the fetus are not fully known. It is recommended that ultrasound only be used if medically indicated.
Answers to common questions related to an ultrasound/ sonogram exam:
If an ultrasound is done at 6 to 7 weeks and a heartbeat is not detected, does that mean there is a problem?
No, it does not mean there is a problem. The heartbeat may not be detected for reasons that include: tipped uterus, larger abdomen, or inaccurate dating with last menstrual period. Heartbeats are best detected with transvaginal ultrasounds early in pregnancy.
Concern typically develops if there is no fetal heart activity in an embryo with a crown-rump length greater than 5mm. Likewise, if you receive an ultrasound exam after week 6 and no gestational sac is seen, your healthcare provider will begin to be concerned.
How accurate are ultrasounds in calculating gestational age?
Your healthcare provider will use hormone levels in your blood, the date of your last menstrual period and, in some cases, results from an ultrasound to generate an estimated gestational age. However, variations in each woman’s cycle and each pregnancy may hinder the accuracy of the gestational age calculation.
If your healthcare provider uses an ultrasound to get an estimated delivery date to base the timing of your prenatal care, the original estimated gestational age will not be changed.
Why do some healthcare providers schedule ultrasounds differently?
If there are any questions regarding gestational age, placenta location, or possible complications then more ultrasounds may be scheduled. Because ultrasound should only be used when medically indicated, many healthy pregnancies will not require ultrasound. The average number of ultrasounds varies with each healthcare provider.
How accurate are ultrasounds in determining the conception date to determine paternity?
Your healthcare provider will use hormone levels in your blood, the date of your last menstrual period and, in some cases, results from an ultrasound to generate an expected date of conception. However, many differences in each woman’s cycle may hinder the accuracy of the conception date calculation.
The viability of sperm varies as well, which means intercourse three to five days prior to ovulation may result in conception. Ultrasound dating of conception is not reliable for determining paternity because the ultrasound can be off by five to seven days or more in early pregnancy.
When can an ultrasound determine the sex of the baby?
You may have an ultrasound between 18 to 20 weeks to evaluate dates, a multiples pregnancy, placenta location or complications. It may also be possible to determine the gender of your baby during this ultrasound. Several factors, such as the stage of pregnancy and position of fetus, will influence the accuracy of the gender prediction.
To be 100% sure you will have an anxious wait until the birth!
Are ultrasounds a necessary part of prenatal care?
Ultrasounds are only necessary if there is a medical concern. As noted above, ultrasounds enable your healthcare provider to evaluate the baby’s well being as well as diagnose potential problems. For women with an uncomplicated pregnancy, an ultrasound is not a necessary part of prenatal care.
Last Updated: 07/2015
Compiled using information from the following sources: Williams Obstetrics, Twenty-Second Ed., Cunningham, F. Gary, et al., Ch. 16; American Institute of Ultrasound in Medicine, http://www.aium.org/
Calcium, an important mineral for keeping bones healthy, is transferred from the expectant mother to the fetus during the last stages of pregnancy. Therefore, babies who are born prematurely have an increased risk of having weaker bones.
Adults who were born full term but were small for their gestational age also had lower bone mass. These findings are important since peak bone mass is a major determinant of future osteoporosis.
‘Babies who are born prematurely fail to get adequate calcium from the mother, resulting in lower bone mass and increased risk of osteoporosis.’
"Few studies to date have addressed bone mass in adults who were born with low birth weight, and there are conflicting findings," said Chandima Balasuriya, the first author of the study. Balasuriya is a medical doctor and PhD candidate at the Norwegian University of Science and Technology (NTNU) and St Olavs University Hospital. "Our study shows that both those born prematurely with a very low birth weight and those who were born full term, but small for their gestational age, had lower bone mass than the control group, who were born full term with normal weights."
The study was conducted by the Endocrinology and Bone Group, headed by NTNU Professor Unni Syversen, and looked at 186 adults who were 26-28 years old.
Fifty-two of the participants were very low birth weight babies, with a mean birth weight of 1.2 kg and a mean gestational age of 29 weeks. Another 59 participants had been born at term but were considered "small for gestational age", with a mean birth weight of just under 3 kg. The researchers also had a control group of 77 adults who were born at term with normal weight.
For all three groups, researchers measured bone mineral content and density in the spine, neck, hip and the whole body, and looked at current height and weight, smoking, level of physical activity and a variety of other measures.
When the researchers looked at the data from adults who were born small for their gestational age at term, they found that this group had lower bone mass than adults who were born with normal weight at term.
But when the researchers corrected the bone mass measurement for the heights of this group, who tended to be shorter, they found that the low bone mass was partly due to their smaller body size. In contrast, body size alone did not account for the lower bone mass the researchers found in adults who had been pre-term babies.
The good news is that parents and doctors can put this information to use, by helping low-birth weight children build as much bone mass as possible as they grow and develop, through diet and exercise.
"Ensuring that children with low birth weights have a diet rich in calcium, vitamin D and protein, in combination with exercise that involves weight-bearing physical activities may help reduce risk of bone fractures later in life," Balasuriya said.
Communicable diseases and crises
WHO Guidelines for Epidemic Preparedness and Response to Measles Outbreaks
Measles ranks as one of the leading causes of childhood mortality in the world. Before measles vaccine became available, virtually all individuals contracted measles, with an estimated 130 million cases each year. Humans are the only natural host. Measles is a highly communicable infection. Despite the remarkable progress made in measles control with the introduction of measles vaccination, it is estimated that in 1997 nearly one million deaths from measles still occurred, half of them in Africa. Outbreaks of measles continue to occur even in highly vaccinated populations.
Just like the manufacturers of silicon electronics, a team of Penn State chemical engineers wants to assemble circuit boards in place, but these circuits are made of conducting organic polymers that pose major fabrication roadblocks.
"We want to build electronic devices like transistors and flexible circuits," says Dr. Seong Kim, assistant professor of chemical engineering.
Kim and Sudarshan Natarajan, graduate student in chemical engineering, looked at fabricating circuits from polythiophene. This conjugate conducting organic polymer is easily made in a beaker, but once the polymer is created from chaining together a series of identical smaller molecules – monomers – it is a powder that cannot be molded or used for film coating.
"Conjugate conducting polymers are neither soluble nor meltable," Kim told attendees at the annual meeting of the American Chemical Society today (Sept. 8) in New York. "Some researchers have made them soluble by adding elements to the polymer backbone, but making circuit boards with these is difficult and requires high energy."
Kim and Natarajan solve the fabrication problem by combining the synthesis and processing steps, which are done separately in conventional methods, into a single step.
"We bypass the problem," says Kim. "We make the material at the site of application."
The researchers use a prepared substrate and deposit the monomer – the small molecule that chains to make the polymer – using standard physical vapor deposition. Once they have a thin film of the monomer on the substrate, they apply a mask, similar to those used in standard silicon electronics manufacture, to the surface. The masked monomer film is then exposed to ultraviolet light.
The light causes two monomers to join, forming a dimer; a third monomer then joins to form a trimer, and so on. Dimers and trimers then combine to begin forming much longer polymer chains, until all of the monomer exposed to light is polymerized.
This reaction differs from normal polymerization where the chain begins at one point and grows by adding individual monomers. In this new process, monomers are joining each other wherever they are struck by the photons in the ultraviolet light. The process takes about seven minutes to complete. The researchers then wash off the soluble, uncoupled monomers, leaving only the pattern of conducting polymers indicated by the mask.
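The difference between the two growth modes can be illustrated with a toy simulation. This is a sketch for intuition only, not the researchers' chemistry: in the step-growth case any two existing species (monomer, dimer, trimer, ...) may couple, while in conventional chain growth a single active chain adds one monomer at a time. All function names and parameter values here are illustrative assumptions.

```python
import random

def step_growth(n_monomers, events, seed=0):
    """Toy photo-induced step growth: any two species may couple per event."""
    rng = random.Random(seed)
    chains = [1] * n_monomers                    # lengths of all species present
    for _ in range(events):
        if len(chains) < 2:
            break
        i, j = rng.sample(range(len(chains)), 2)  # pick two species at random
        merged = chains[i] + chains[j]
        chains = [c for k, c in enumerate(chains) if k not in (i, j)]
        chains.append(merged)                     # each event removes one species
    return chains

def chain_growth(n_monomers, events):
    """Conventional chain growth: one active chain adds one monomer per event."""
    active, free = 1, n_monomers - 1
    for _ in range(events):
        if free == 0:
            break
        active += 1
        free -= 1
    return [active] + [1] * free

# After the same number of coupling events, step growth leaves a broad mix of
# intermediate chain lengths, while chain growth gives one long chain plus
# a pool of untouched monomers.
step = step_growth(100, 60)
chain = chain_growth(100, 60)
print(sorted(step, reverse=True)[:5])
print(chain[0], len(chain) - 1)  # one chain of length 61, 39 free monomers
```

The contrast in the two final distributions is the point: photons striking monomers everywhere drive many simultaneous couplings, which is why the process can consume the whole illuminated film in minutes.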
Aiming to incorporate conjugate conducting organic materials into the current silicon-based microtechnology, the researchers tried a variety of inorganic substrates including copper, gold and silicon.
However, neither copper nor silicon will make a flexible circuit, so the researchers are also investigating other flexible substrates such as plastics. A circuit made on plastic would find applications where flexibility is critical; for example, flexible circuits would be ideal for the lightweight flexible-screen displays needed for electronic paper.
A variety of conducting polymers are also light-emitting; the proper combination of red, yellow and green can produce full-color images. Another example would be biomedical applications.
The researchers are looking at a variety of other organic conducting polymers for use in in-place fabrication of circuits and electronic devices.
A seed grant from Penn State's National Science Foundation-supported Materials Research Science and Engineering Center supported this work.
The above story is based on materials provided by Penn State. Note: Materials may be edited for content and length.
Keeping Kids Healthy
The word “cancer” certainly strikes a scary and emotional note in our hearts, and when attached to the word “childhood,” it can be especially frightening.
However, as with many things we fear, we can be empowered by understanding. This week, we explain just exactly what cancer really is.
Every part of the body (the brain, liver, heart, bones, fingernails, muscles and so on) is made up of hundreds of millions of microscopic cells that are specialized for that particular organ.
These cells follow a very complex and highly organized instruction set from their DNA to multiply, grow and eventually die and become replaced throughout our entire lifetimes.
Occasionally, however, the instruction set becomes damaged as it is copied into newly formed cells. Usually our bodies can recognize cells with damaged DNA and repair or destroy them.
But sometimes when the instruction to “stop multiplying” is damaged, cells can multiply and grow out of control faster than our bodies can repair the damage. This is how cancer begins.
If the out-of-control cells come from a solid organ like the liver, brain or a muscle, a cancerous tumor is formed. If the out-of-control cells originate from the blood, such as in leukemia, no tumor is usually formed, but the cancer cells are circulated throughout the body in the bloodstream.
When cancer cells break off from a solid tumor and travel through the blood to other parts of the body and start new tumors, this is called metastasis.
Cancer cells can be very aggressive and start to crowd out and steal energy and nutrients from normal cells so that healthy body parts can no longer function correctly.
In children, the DNA damage that starts the formation of a cancer is not typically linked to any identifiable cause or lifestyle habit such as smoking.
Rather, it is more often a random mistake in the DNA instructions of cells that are rapidly multiplying during the normal growth process of children.
This partly explains why some of the most common cancers in children are of the blood, brain and bones.
Next week we’ll discuss some of the different types and early warning signs of childhood cancers.
Sally Robinson is a clinical professor of pediatrics at UTMB Children’s Hospital, and Keith Bly is an associate professor of pediatrics and director of the UTMB Pediatric Urgent Care Clinics. This column isn’t intended to replace the advice of your child’s physician.
The essential feature of a dry climate is that annual losses of water through evaporation at the earth's surface exceed annual water gains from precipitation. Due to the resulting water deficiency, no permanent streams originate in dry climate zones. Because evaporation, which depends chiefly on temperature, varies greatly from one part of the earth to another, no specific value for precipitation can be used as the boundary for all dry climates. For example, 25 in (610 mm) of annual precipitation may produce a humid climate and forest cover in cool northwestern Europe, but the same amount in the hot tropics produces semiarid conditions.
Two divisions of dry climates are commonly recognized: the arid desert, and the semiarid steppe. Generally, the steppe is a transitional belt surrounding the desert and separating it from humid climates beyond. The boundary between arid and semiarid climates is arbitrary but commonly defined as one-half the amount of precipitation separating steppe from humid climates.
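The classification rule described above can be sketched in code. This is a minimal illustration, assuming a location's temperature-dependent precipitation threshold separating steppe from humid climates is already known; the function name and threshold values are hypothetical, not from any standard climate dataset.

```python
def classify_dry_climate(annual_precip_mm, humid_threshold_mm):
    """Classify a location using the rule in the text.

    humid_threshold_mm is the (temperature-dependent) annual precipitation
    separating steppe from humid climates at that location; by convention,
    the arid/semiarid boundary is half that amount.
    """
    if annual_precip_mm >= humid_threshold_mm:
        return "humid"
    elif annual_precip_mm >= humid_threshold_mm / 2:
        return "semiarid (steppe)"
    else:
        return "arid (desert)"

# The same 610 mm of rain gives different climates depending on evaporation:
print(classify_dry_climate(610, 500))    # cool region, low threshold -> humid
print(classify_dry_climate(610, 1000))   # hot tropics, high threshold -> semiarid (steppe)
```

The key design point is that the threshold is an input, not a constant, mirroring the article's observation that no single precipitation value bounds all dry climates.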
Of all the climatic groups, dry climates are the most extensive; they occupy a quarter or more of the earth's land surface.
The Global Positioning System has completely revolutionised how geologists study the deformation of the Earth. If you leave a GPS receiver in a fixed location for days, months and years, it is precise enough to measure motions on the millimetre scale, allowing us to track strain building up across active faults, and even the incremental drift of the tectonic plates themselves across the Earth’s surface. But on the 26th December 2004, stations across a sizeable slice of the Earth’s surface suddenly found themselves being jerked around a bit more rapidly. The plots below are from stations in southern India and northern Taiwan, respectively.
If you are thinking that date sounds a bit familiar, you’d be right: that jerk is the signal of the massive magnitude 9.3 earthquake that ruptured a 500 km length of the Sunda Trench off the coast of Indonesia on Boxing Day 2004, and unleashed a devastating tsunami.
What’s impressive is that we are seeing permanent deformation of the crust due to motion on a fault (what is known as coseismic deformation) an extremely long way away. As we can see on the map below, the Indian GPS station IISC is some 2,300 miles away from the Sunda Trench, and the Taiwanese station TNML is 3,600 miles away. And yet, even at that distance, the Sumatra-Andaman earthquake shifted the land beneath these points about a centimetre – a little less for Taiwan, a little more for India.
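A common way to pull a coseismic offset like this out of a continuous GPS record is to fit the position time series with a linear (secular) trend plus a step function at the known earthquake date. The sketch below uses synthetic data, not the actual IISC or TNML records; the rate, noise level and step size are made-up numbers chosen only to mimic the situation described above.

```python
import numpy as np

# Synthetic daily east-component positions (mm) for a hypothetical station:
# steady plate motion, white noise, and a 1 cm coseismic step on a known day.
rng = np.random.default_rng(0)
days = np.arange(730)                 # two years of daily solutions
event_day = 365                       # index of the earthquake date
pos = (0.1 * days                     # secular rate: 0.1 mm/day
       + 10.0 * (days >= event_day)   # true coseismic offset: 10 mm
       + rng.normal(0.0, 2.0, days.size))  # 2 mm daily scatter

def coseismic_offset(t, y, t_event):
    """Least-squares fit of y = a + b*t + c*H(t - t_event); returns the step c (mm)."""
    H = (t >= t_event).astype(float)
    A = np.column_stack([np.ones(t.size), t.astype(float), H])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs[2]

step = coseismic_offset(days, pos, event_day)
print(f"estimated coseismic offset: {step:.1f} mm")  # close to the true 10 mm
```

Because hundreds of daily solutions average down the noise on either side of the step, even a millimetre-scale jump at a station thousands of miles from the rupture stands out clearly against the trend.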
The figure above also compares the actual motion observed with GPS (black arrows) with predictions from a model of the Boxing Day rupture (grey arrows). What this figure doesn’t show is the predicted coseismic deformation at places not occupied by GPS stations. Fortunately, a paper just published in the Journal of Geophysical Research contains a much nicer visualisation of the output of a similar model. This model – rather mind-blowingly – indicates that the Sumatra-Andaman earthquake rupture directly deformed a sizeable fraction of the Earth’s surface, including Africa, Arabia, the eastern half of Asia, and most of the Americas.
Paul Tregoning and his co-authors have gone on to calculate the cumulative coseismic deformation resulting from all 15 earthquakes of magnitude 8 or greater that have occurred on the Earth’s surface since the turn of the millennium. Unsurprisingly, the big three earthquakes in this period – the Sumatra-Andaman, the magnitude 9.1 Tohoku earthquake in March 2011, and the magnitude 8.8 Chilean earthquake in February 2010 – are the major contributors, but the smaller ones fill in some gaps in the southwest Pacific.
Modelled global coseismic deformation due to all M 8+ earthquakes since 2000, from Tregoning et al., 2013
Basically, outside of western Europe and the Arctic Circle, pretty much the entire surface of the planet has been shifted at least a millimetre or two by an earthquake since the turn of the millennium. And this has real world consequences. The interiors of the Earth’s tectonic plates are generally assumed to be rigid and undeforming, and are used as a fixed reference point for measuring deformation at the plate boundaries. The red arrows in the figure above show exactly how much you’d be wrong if you are assuming that for a given point on the Earth’s surface. Even when you’re a long way from a plate boundary, coseismic deformation from distant, large earthquakes is causing your ‘fixed’ reference point to be not so fixed. Spooky tectonic action at a distance, indeed.
Kreemer, C., Blewitt, G., Hammond, W. C., & Plag, H.-P. (2006). Global deformation from the great 2004 Sumatra-Andaman Earthquake observed by GPS: Implications for rupture process and global reference frame. Earth, Planets, Space, 58(2), 141-148.
Tregoning, P., Burgette, R., McClusky, S., Lejeune, S., Watson, C., & McQueen, H. (2013). A decade of horizontal deformation from great earthquakes. Journal of Geophysical Research: Solid Earth.
A STUDY ON THE TECHNIQUES OF TEACHING VOCABULARY TO THE FIFTH YEAR STUDENTS OF SD MUHAMMADIYAH 1 BANCAR
IN THE 2007/2008 ACADEMIC YEAR
In this chapter, the researcher presents introduction which consists of background of the study, statement of the problems, purposes of the study, significance of the study, limitation of the study, and definition of the key terms.
1.1 Background of the Study
A language is a meaningful means of communication (Soekemi, 1995:4). That is, a language consists of produced sounds, and there is a connection between the kinds of sounds the speakers of a language make and their culture. (A language is an instrument for communication.)
English as a foreign language is taught in Indonesia from elementary school up to university level. There are several reasons why English is taught in elementary school: introducing English in the early years gives students a profitable basis for studying English later. Another important reason comes from the psychological and linguistic points of view: children aged ten and under have good memories.
In addition, English can be taught in elementary school if it is needed by society and supported by teachers who have the ability to teach it, so that by the end of elementary school the students have the capacity to read, write, listen, and speak in English. To support these language skills, one language component is needed: vocabulary.
Vocabulary, as one of the important aspects of the English language, makes it easy for students to communicate with each other and to master the other aspects of the language.
As we know, the English curriculum requires elementary students to master at least 500 words and to develop their vocabulary. The teacher must have teaching techniques so that the students can acquire English easily; however, the students must also develop their vocabulary themselves. Vocabulary is the foundation of learning English and one of the components of language; there is no language without vocabulary.
Vocabulary in English as a foreign language is taught at school for the purpose of providing the students with the four language skills: listening, speaking, reading and writing. Some general statements concern techniques of teaching reading comprehension and other skills (Mujiono, 1993).
This study is mainly concerned with the techniques of teaching English vocabulary at elementary school. Moreover, the teacher's competence in mastering the various techniques plays a very important role in enabling the students to achieve the aim of the instructional program on mastering vocabulary. Ur (1996) states, "It is better to teach vocabulary in separated, spaced sessions than to teach it all at once".
The writer may say that vocabulary is an important element of language and should be taught effectively and purposefully. To do so effectively, the teacher must have both theoretical knowledge and knowledge of the subject matter; the relevant theories are cognitive theory and creative construction theory. Ryan and Cooper (1984:302) state, "To do this effectively the teacher must have theoretical knowledge about learning and human behavior and knowledge about subject matter to be taught".
The writer believes that English is really important for beginners and that interaction between the teacher and the learners is needed in the teaching and learning process.
In this thesis, the writer tries to describe the Techniques of Teaching Vocabulary to the Fifth Year Students of SD MUHAMMADIYAH 1 BANCAR.
1.2 Statement of the Problems
The writer intends to investigate the following matters:
1. What kinds of techniques are used by the teacher in teaching English vocabulary?
2. How does the teacher apply the techniques in the classroom?
3. How are the students' attitudes towards the techniques applied by the teacher?
1.3 Purposes of the Study
Based on the statement of the problems above, the purposes of the study are:
1. To describe the kinds of techniques used by the teacher in teaching English vocabulary.
2. To describe the implementation of those techniques in the classroom.
3. To describe the students' responses to these techniques.
1.4 Significance of the Study
The findings of the study are expected to give valuable information that serves as feedback, contributing to improving the techniques used in the teaching and learning process, finding the best techniques of teaching English vocabulary, and improving knowledge of vocabulary items. The headmaster and the English teacher of SD MUHAMMADIYAH 1 BANCAR would then know how well the teaching and learning have fulfilled the needs of the pupils and achieved the goals of learning and teaching English.
1.5 Hypothesis
A hypothesis is a preliminary answer to a research problem, and it must be tested for its truth. It is usually used in research because the research will be directed toward the problem being researched.
Based on the opinion above, the writer feels it is necessary to state some hypotheses. Concerning the research problem, the writer formulates the hypotheses as follows:
If the students master English vocabulary, they will be more active in the teaching and learning process and can use and develop the four English skills.
The students are more interested in learning vocabulary with these techniques.
If the students master English vocabulary, they can be successful in their English study in elementary school.
1.6 Limitation of the Study
This study, which is carried out at SD MUHAMMADIYAH 1 BANCAR, focuses on aspects of the English language and on the techniques used. Because of limited time and energy, the writer limits the problem as follows:
1. The kinds of techniques used in teaching English vocabulary at SD MUHAMMADIYAH 1 BANCAR.
2. The implementation of those techniques with the fifth year students at SD MUHAMMADIYAH 1 BANCAR.
1.7 Definition of Key Terms
To avoid misunderstanding in the interpretation of the words used in this research, the terms used are as follows:
1. Study : The activity of learning or gaining knowledge from books (Hornby, 1995:187)
2. Techniques : The ways of presenting the material of vocabulary to the students that take place in the classroom (Prasasti, 2004:6)
3. Teaching : The activities done by the teacher in presenting English material (Hornby, 1974:150)
4. Vocabulary : A list of words with their meanings, especially at the back of a book used for teaching a foreign language (Hornby, 1992:46)
REVIEW OF RELATED LITERATURE
In this chapter, the researcher presents the definition of vocabulary, vocabulary as words and their meanings, the kinds of vocabulary, the techniques of teaching vocabulary, and the theory of teaching and learning language for young learners.
Definition of Vocabulary
There are many definitions of vocabulary from linguists; actually, one is not really different from another. Gairns and Redman (1992:44) state, "At a very basic level of survival in a foreign language, we can satisfy many of our needs with vocabulary and a bilingual dictionary". Hornby (1992:46) states that vocabulary means the total number of words in a language used by a person. Webster's (1982:53), as quoted by Widodo (2005:10), gives a better definition: vocabulary is "all the words of a language".
From the definitions above, the writer takes vocabulary as her field of study because it is regarded as the key to learning a language, especially English. People can express their ideas if they have enough vocabulary. On the other hand, if someone has very little vocabulary, they will have difficulty using English.
Vocabulary as Words and their Meaning
Finoechiaro (1989:68), as quoted by Widodo (2005:11), states, "Words become meaningful only when studied and considered in context, that is, with all the other words surrounding them and helping to give them their meanings".
In this section, the writer tries to describe words and their meanings, focusing on five items:
2.2.1 Polysemy, Homonym
According to Gairns and Redman (1992:14):
Polysemy : we use this term to describe a single word form with several different but closely related meanings. In English, for example, we can talk about the "head" of a person, the "head" of a pin, or the "head" of an organization.
Homonym : when a single word form has several different meanings which are not closely related, we use the term homonym, e.g. a file /fail/ may be used for keeping papers in, or may be a tool for cutting or smoothing hard substances.
2.2.2 Synonym, Antonym, Hyponym
Synonyms : items that mean the same, or nearly the same; for example, bright and smart may serve as synonyms of intelligent (Ur, 1996:62). Yule (1987:95), as quoted by Widodo (2005:12), states, "Synonyms are two or more terms with very closely related meanings which are often, but not always, intersubstitutable in sentences".
Antonyms : items that mean the opposite, e.g. rich is an antonym of poor (Ur, 1996:62). "An antonym is a word of opposite meaning" (Kustoyo, 1998:28), as quoted by Widodo (2005:13).
Hyponyms : items that serve as specific examples of a general concept; dog, lion, mouse are hyponyms of animal.
2.2.3 Multi-Word Verb
Gairns and Redman (1992:33) state that they use this term to describe the large number of English vocabulary items consisting of two, or sometimes three, parts:
A "base" verb + preposition, e.g. look into (investigate), get over (recover from)
A "base" verb + adverbial particle (phrasal verb), e.g. break down (collapse), call off (cancel)
A "base" verb + adverbial particle + preposition, e.g. put up with (tolerate)
As these examples illustrate, there are verb + preposition combinations whose meaning is not clear from the individual parts; this probably explains why certain grammar books and course writers include semantically opaque prepositional verbs in their treatment of phrasal verbs. In the writer's experience the distinction does not pose a significant teaching problem, but readers who wish to pursue the difference are referred to one of the grammar books listed in the bibliography. In what follows, the writer will use the term "phrasal verb" when referring specifically to verb + adverbial particle combinations, and "multi-word verb" to include semantically opaque prepositional verbs as well.
In some cases phrasal verbs retain the meaning of their individual verb and particle, e.g. sit down, while in others the meaning cannot be deduced from an understanding of the constituent parts, e.g. take in (deceive or cheat somebody). It is this latter category that creates most difficulty and contributes to the mystique which surrounds multi-word verbs for many foreign learners. Also contributing to the mystique is the fact that many phrasal verbs have multiple meanings, e.g. pick up can mean lift, acquire, collect, etc. Grammatically, students need to know whether a transitive multi-word verb is phrasal or prepositional. This is because phrasal verbs are separable:
e.g. take off your hat take it off
take your hat off (but not take off it)
while the prepositional verb are not:
e.g. look after the children
look after them.
(but not ‘look the children after or look them after’)
Finally, there is the question of style: some common phrasal verbs are informal and have one-word equivalents which are preferred in more formal contexts (e.g. put off/postpone; get along/manage). Students will need to be aware of restrictions of these kinds.
2.2.4 Idiom, Collocation
An idiom is a sequence of words which operates as a single semantic unit, and, like many multi-word verbs, the meaning of the whole cannot be deduced from an understanding of the parts, e.g. never mind, hang on, under the weather, etc. (Gairns and Redman, 1992:35)
Collocation, according to Ur (1996:61), refers to the typical combinations of particular items: another factor that makes a particular combination sound "right" or "wrong" in a given context.
2.2.5 Componential Analysis
According to Gairns and Redman (1992:40), componential analysis is a systematic means of examining sense relations. If we take items from the same semantic field (which therefore have some features in common with each other), we can, by breaking them down into their constituent parts, examine the similarities and differences between them.
Example : Boy = + human + male + child
Girl = + human – male + child
Kinds of Vocabulary
There are two kinds of vocabulary:
Active vocabulary is vocabulary that is often used by a person to express his or her ideas and feelings, for example: cry, laugh, and so on.
Passive vocabulary is vocabulary that a person recognizes but rarely uses. It is often best presented quite quickly with a simple example; if it appears as part of a text or dialogue, we can often leave the students to guess the word from the context (Cristina, 1998:43).
Gairns and Redman (1992) divide vocabulary into two kinds, namely receptive vocabulary and productive vocabulary.
Receptive vocabulary means language items which can only be recognized and comprehended in the context of reading and listening materials.
Productive vocabulary means language items which the learner can recall and use appropriately in speech and writing. (These terms are often called passive and active vocabulary.)
These opinions show that these kinds of vocabulary are almost the same as active and passive vocabulary.
The Techniques of Teaching Vocabulary
Before discussing the techniques of presenting materials to young learners, it is important to know that teaching English to elementary school students is different from teaching English to secondary or high school students. Kasbolah (1995:25), as quoted by Purwati (2003:13), points out that the goal of teaching English as the local content for elementary school students should be to build a positive attitude toward English. Furthermore, he says that the materials for beginners should consist mostly of activities such as singing, playing games, and reading poems. Brumfit (1991:5-6) says that in teaching a second language words are not enough, so we need a lot of objects to work with. Furthermore, he suggests letting the students play with the language and introducing variety into the classroom.
The techniques of teaching vocabulary are the procedures or the collection of ways used in classroom vocabulary teaching (Hubbard, Jones and Thornton: 31, as quoted by Prasasti, 2004:15). From this definition of techniques, the writer assumes that techniques are very important and much needed in the teaching and learning process.
In this study the writer can mention some techniques for teaching vocabulary, they are:
Say the word clearly and write it on the board.
Get the class to repeat the word in chorus.
Translate the word into the students own language.
Ask student to translate the word.
Draw a picture to show what the word means.
Give example to show how the word is used.
Ask question using the new word.
In showing the meaning of words, there are three ways to showing the meaning of new words:
By showing a real object
Anything that is already in the classroom: furniture, clothes, parts of the body. Also, many objects can be brought into class: food (grapes, mango, orange) or small objects from home (soap, cups, keys, a cupboard).
By showing a picture
This can be done in two ways:
By drawing a picture on the board
By showing a picture prepared before the lesson
From the above, we can combine different techniques:
Picture on the board (interesting, so the students remember it)
Facial expression (gives the meaning clearly), e.g. showing how “smile” is used as a verb
Translation (to make sure everyone understands)
Point out that each technique is very quick (a few seconds), and they all reinforce each other. For example:
Teacher : Look, they are smiling. Now look at me. I’m smiling (show by facial expression smile). We smile when we are happy. Smile (gesture)
Students : Smile
Teacher : Good. What does it mean? (student give translation)
Explaining the meaning of a vocabulary item can be very difficult, especially at beginner and elementary levels, but it can be done. It is worth remembering to explain only those aspects of word use which are relevant. For example, in explaining the meaning of “mate” (friend), we have to point out that it is a colloquial word used in informal contexts and that it is more often used for males than for females, according to Jeremy Harmer (1996: 162) as quoted by Widodo (2005: 20).
The Theory of Teaching and Learning Language for Young Learners
The last few years have seen a growing tendency among children in Indonesia to learn English. Although English has been taught to SD (Primary School) students in some private schools, most of these children learn English through non-formal education, a kind of education that makes use of and benefits from this trend. We at International Language Programs, for example, started this program in June 1989, and it has been a successful business.
It is since the new curriculum (1994) was introduced that schools could teach English formally at the primary level. This curriculum reflects the recognition of teaching English from an early age and will hopefully end the controversies on this matter.
This paper is intended as a basis for further study of teaching English to children, in particular those at the primary level or in the early years of SMP (Secondary School), and it searches for better approaches to this matter. It examines what teaching children is like and how to do it.
Some facts about children and adults
The differences between children and adults:
Children like playing and moving whereas adults seem reluctant to move and regard playing as childish.
Children can absorb new things easily whereas adults find it difficult to absorb new things.
Children feel at ease dealing with one thing at a time whereas adults are eager to know a lot of things at one time.
Children get bored easily whereas adults can spend a long time doing something especially if it is of their interest.
Children also differ from adults in their background knowledge; adults have gained some knowledge as they learned at school or from other sources.
Children differ from adults in many ways. Consequently, teaching them requires different approaches. The above facts about children have to be accommodated accordingly, and activities in class should be in line with these facts. The following are points to consider when teaching children:
Teach One Thing at a time
Activities Should Vary
Lessons Should be Interesting
Teaching/ Learning Stages
Harmer (1983) describes the following learning stages in English lessons:
Introducing a new language
Unlike adults, children cannot automatically write what they hear or say. This is, again, because they are new to the language and not familiar with it. Their writing skills are also shaky as they are still learning to write. This skill, therefore, has to be built up gradually. Special attention has to be paid to spelling. Any (speaking) practice should not involve much writing. If it does (and sometimes this cannot be avoided), the teacher has to make sure that his students are familiar with the spelling. If not, it will inhibit the smoothness and fluency of the activity. For children, writing can be copying, completion, answering questions, telegraph writing, and writing from dictation. The teacher should not be too strict about spelling mistakes.
This preparation often takes longer than the teaching itself and involves a lot of thought. Preparation includes: lesson planning; seeking, choosing, cutting, and sticking pictures; writing, developing, and typing material; photocopying; and so on. This certainly requires time, devotion, skill, and experience on the part of the teacher(s). The availability of aids, books, and resources is also essential.
Application at School
One of the most important requirements is qualified teachers. At least two qualifications are needed from the teacher:
A qualification in English is essential as the teacher is the model for his students. He relates to them and they learn to speak from him.
A teaching qualification includes a sound knowledge of how to teach children and the ability to implement it in class.
Stars are big and heavy, but they’re not solid. Because they’re made of gas, they’re just about as easy to stretch and squish as a blob of Silly Putty. Stars that spin very fast, for example, bulge out at the equator, so they look like flattened beachballs. And stars in certain stages of life puff in and out like a set of breathing lungs.
And astronomers using a space telescope have recently discovered binary stars that get stretched out, then wiggle like a bowl of Jell-O when they spring back to a rounder shape.
The Kepler satellite was built to look for planets in other star systems. But its observations also reveal details about the stars.
An example is “heartbeat” stars. They got the name because, when you plot how bright they are, the line looks like a chart of a beating heart, with regular peaks and valleys.
A recent study found that these systems consist of two stars in an elongated orbit around each other. At their closest, the stars may be separated by only a few times the size of the stars themselves. The gravity of each star tugs on the other, stretching the star out. The increased surface area of the two stars makes the system brighter. When the stars move away from each other, they snap back to a rounder shape, so they get fainter.
As they change to a more spherical shape, though, their surfaces jiggle. Astronomers may someday use the jiggles to probe conditions inside the stars — stars that are constantly changing shape.
Script by Damond Benningfield |
Kids don't have to pay bills, cook dinners, or manage carpools. But — just like adults — they have their share of daily demands and things that don't go smoothly. If frustrations and disappointments pile up, kids can get stressed or worried.
It's natural for all kids to worry at times, and because of personality and temperament differences, some may worry more than others. Luckily, parents can help kids learn to manage stress and tackle everyday problems with ease. Kids who can do that develop a sense of confidence and optimism that will help them master life's challenges, big and small.
What Do Kids Worry About?
What kids worry about is often related to the age and stage they're in.
Kids and preteens typically worry about things like grades, tests, their changing bodies, fitting in with friends, that goal they missed at the soccer game, or whether they'll make the team. They may feel stressed over social troubles like cliques, peer pressure, or whether they'll be bullied, teased, or left out.
Because they're beginning to feel more a part of the larger world around them, preteens also may worry about world events or issues they hear about on the news or at school. Things like terrorism, war, pollution, global warming, endangered animals, and natural disasters can become a source of worry.
Find out what's on their minds: Be available and take an interest in what's happening at school, on the team, and with your kids' friends. Take casual opportunities to ask how it's going. As you listen to stories of the day's events, be sure to ask about what your kids think and feel about what happened.
If your child seems to be worried about something, ask about it. Encourage kids to put what's bothering them into words. Ask for key details and listen attentively. Sometimes just sharing the story with you can help lighten their load.
Show you care and understand. Being interested in your child's concerns shows they're important to you, too, and helps kids feel supported and understood. Reassuring comments can help — but usually only after you've heard your child out. Say that you understand your child's feelings and the problem.
Guide kids to solutions. You can help reduce worries by helping kids learn to deal constructively with challenging situations. When your child tells you about a problem, offer to help come up with a solution together. If your son is worried about an upcoming math test, for example, offering to help him study will lessen his concern about it.
In most situations, resist the urge to jump in and fix a problem for your child — instead, think it through and come up with possible solutions together. Problem-solve with kids, rather than for them. By taking an active role, kids learn how to tackle a problem independently.
Keep things in perspective. Without minimizing a child's feelings, point out that many problems are temporary and solvable, and that there will be better days and other opportunities to try again. Teaching kids to keep problems in perspective can lessen their worry and help build strength, resilience, and the optimism to try again. Remind your kids that whatever happens, things will be OK.
So, for example, if your son is worried about whether he'll get the lead in the school play, remind him that there's a play every season — if he doesn't get the part he wants this time, he'll have other opportunities. Acknowledge how important this is to him and let him know that regardless of the outcome, you're proud that he tried out and gave it his best shot.
Make a difference. Sometimes kids worry about big stuff — like terrorism, war, or global warming — that they hear about at school or on the news. Parents can help by discussing these issues, offering accurate information, and correcting any misconceptions kids might have. Try to reassure kids by talking about what adults are doing to tackle the problem to keep them safe.
Be aware that your own reaction to global events affects kids, too. If you express anger and stress about a world event that's beyond your control, kids are likely to react that way too. But if you express your concern by taking a proactive approach to make a positive difference, your kids will feel more optimistic and empowered to do the same.
So look for things you can do with your kids to help all of you feel like you're making a positive difference. You can't stop a war, for example, but your family can contribute to an organization that works for peace or helps kids in war-torn countries. Or your family might perform community service to give your kids the experience of volunteering.
Offer reassurance and comfort. Sometimes when kids are worried, what they need most is a parent's reassurance and comfort. It might come in the form of a hug, some heartfelt words, or time spent together. It helps kids to know that, whatever happens, parents will be there with love and support.
Sometimes kids need parents to show them how to let go of worry rather than dwell on it. Know when it's time to move on, and help kids shift gears. Lead the way by introducing a topic that's more upbeat or an activity that will create a lighter mood.
Highlight the positive. Ask your kids what they enjoyed about their day, and listen attentively when they tell you about what goes great for them or what they had fun doing. Give plenty of airtime to the good things that happen. Let them tell you what they think and feel about their successes, achievements, and positive experiences — and what they did to help things turn out so well.
Schedules are busy, but make sure there's time for your kids to do little things they feel good doing. Daily doses of positive emotions and experiences — like enjoyment, gratitude, love, amusement, relaxation, fun, and interest — offset stress and help kids do well.
Be a good role model. The most powerful lessons we teach kids are the ones we demonstrate. Your response to your own worries, stress, and frustrations can go a long way toward teaching your kids how to deal with everyday challenges. If you're rattled or angry when dealing with a to-do list that's too long, your kids will learn that as the appropriate response to stress.
Instead, look on the bright side and voice optimistic thoughts about your own situations at least as often as you talk about what bothers or upsets you. Set a good example with your reactions to problems and setbacks. Responding with optimism and confidence teaches kids that problems are temporary and tomorrow's another day. Bouncing back with a can-do attitude will help your kids do the same. |
Vulnerability and resilience are two important concepts to understand child development better.
Vulnerabilities are various negative traits and weaknesses that a child may bring from his genetic background, or from a child’s personality:
- Difficult temperament, irritability
- Physical abnormality
The environment also heavily influences a child's development. This environment can be beneficial to the child's development, or it can be detrimental. For instance, a detrimental environment could involve:
- Inadequate nutrition
- Insalubrious environment
- Abusive environment
- Parents fighting, divorce
Resilience is the capacity to overcome weaknesses stemming from a child's vulnerabilities or from the child's environment. Children are incredibly resilient, and they can overcome most negative conditions, provided that the child's vulnerabilities are compensated for by a protective environment.
The real problems start when the child is growing up in a poor environment and also has high vulnerabilities, so neither can compensate for the other. Here are some examples of a protective environment compensating for a child's vulnerability:
- The child has a difficult temperament, yet is nurtured by patient and careful parenting
- The child has weak health, yet is growing up in a secure and safe environment with access to regular, high-quality health care
Conversely, here are examples of a poor environment that will aggravate a child's matching vulnerability. These combinations will yield the worst outcomes in the long run:
- The child has a difficult temperament, and is raised in an abusive context
- The child has weak health, and is growing up in an unhygienic environment
Stellar Snowflake Cluster
Newborn stars, hidden behind thick dust, are revealed in this image of a section of the Christmas Tree Cluster from NASA's Spitzer Space Telescope. The newly revealed infant stars appear as pink and red specks toward the center and appear to have formed in regularly spaced intervals along linear structures in a configuration that resembles the spokes of a wheel or the pattern of a snowflake. Hence, astronomers have nicknamed this the "Snowflake Cluster."
Star-forming clouds like this one are dynamic and evolving structures. Since the stars trace the straight line pattern of spokes of a wheel, scientists believe that these are newborn stars, or "protostars." At a mere 100,000 years old, these infant structures have yet to "crawl" away from their location of birth. Over time, the natural drifting motions of each star will break this order, and the snowflake design will be no more.
While most of the visible-light stars that give the Christmas Tree Cluster its name and triangular shape do not shine brightly in Spitzer's infrared eyes, all of the stars forming from this dusty cloud are considered part of the cluster.
Like a dusty cosmic finger pointing up to the newborn clusters, Spitzer also illuminates the optically dark and dense Cone Nebula, the tip of which can be seen towards the bottom left corner of the image.
Image Credit: NASA/JPL-Caltech/P.S. Teixeira (Center for Astrophysics) |
Concern about the environmental impact of forest fires in the Bukombe District in Tanzania has been growing in the last two decades. Most fires in the district are caused by human activities. The protection of the miombo woodlands is hampered by a lack of fire management policies and legal instruments to support fire prevention and suppression. Trained human resources are also limited. Local communities have their own management system and forest fire management that complement local ecology and traditions. It is therefore expedient to involve them in forest management. The HASHI (Soil Conservation and Afforestation) Project together with the Forest and Beekeeping Division have organized joint forest management in selected locations where the villagers are granted use rights to forest resources. The creation of local ownership has been a key to the success of fire management.
The Bukombe District is located in northern Tanzania, with an area of about 10,500 km². During the last two decades, concern about the environmental impact of forest fires in the district has been growing. The main natural vegetation in the district is composed of woodland and thick forest. More than 90 percent of all fires are caused by human activities such as:
agriculture, especially farm preparation and shifting cultivation;
hunting and collection of honey (smoke is used to drive animals from their hideouts and bees from their hives);
traditional tribal fire uses.
Forest fire in the miombo woodland has resulted in significant damage to property and many lives have been lost. Even though fire is important for the regeneration and growth of the miombo woodlands, the uses of fire will always be controversial.
The major problems facing forest fire protection are age-old traditional attitudes, socio-economic activities and, to some extent, past national forest policies that dissociated the local communities from their traditional access and utilisation of the forests. The government used to be the custodian of such forests while local communities were barred from the resources. For example, the local villagers were not allowed to collect even firewood from the forest. Such alienation induces local communities to be detached and indifferent to their environment. Hence they do not care too much about the forest.
Efforts to combat forest fires in Tanzania, in general, are hindered by a lack of fire management policies and legal instruments to support fire prevention and suppression. Furthermore, technical and professional human resources are also inadequate at all levels. It is for that reason that a collective effort involving local communities in fire management should be encouraged.
The local communities have their own management systems and forest fire management that complement local ecology and traditions. For example, the Sukuma people traditionally construct a Ngitili (a Sukuma term meaning enclosure). This area within the village is closed off at the beginning of the wet season and opened during the dry season for grazing cattle. This traditional practise has protected many areas from fires. Therefore, joint forest management (JFM) efforts and strategies need to be implemented, considering that the government does not have sufficient resources to combat forest fires alone.
Due to inadequate funds and staff resources to protect the forests, the HASHI (Soil Conservation and Afforestation) Project and the Forest and Beekeeping Division have established JFM in selected villages where local communities are granted use rights. This provides villagers an incentive to manage and protect the forest against encroachment, illegal harvesting and fires.
In trying to involve local communities in forest fire management, the project focuses on:
Education and publicity through interactive video shows, mobile extension teams, brochures, posters, calendars and radio programmes, to sensitise and empower the communities to prevent fires.
Seminars, workshops and meetings at different levels to disseminate information.
Formulation of by-laws on fire protection.
Collaboration with village committees in management, planning, monitoring and extension services.
The creation of local ownership at the village level has been a key to the success of fire management. To a large extent, JFM has changed the attitudes and behaviour of villagers regarding land use and fire management considerably. The number of forest fires is slowly decreasing with time.
The new forest policy encourages private ownership of land and forests through JFM and community-based fire management that increase land tenure security. It is hoped that this will significantly decrease the incidence of wildfires in the miombo woodlands and improve forest and fire management.
Head of Fire Protection,
Hashi Project, P.O. Box 797, Shinyanga, Tanzania, email: |
The Earth's inner core is thought to be slowly growing as the liquid outer core at the boundary with the inner core cools and solidifies due to the gradual cooling of the Earth's interior (about 100 degrees Celsius per billion years). Many scientists had initially expected that the inner core would be found to be homogeneous, because the solid inner core was originally formed by a gradual cooling of molten material, and continues to grow as a result of that same process. Even though it is growing into liquid, it is solid, due to the very high pressure that keeps it compacted together even if the temperature is extremely high. It was suggested that Earth's inner core might be a single crystal of iron.
CNN - July 1996... Deep inside the Earth, spinning in a watery pool of iron, the Earth's core is a giant iron crystal slightly smaller but more dense than the moon. Beyond that, the substance at the heart of our planet always has been a mystery. Although seismologist Xiaodong Song acknowledged the mystery is "really a complicated problem," he and fellow seismologist Paul Richards have managed to unravel it. They announced that their relatively superficial study of 28 years' worth of earthquake records at the Lamont-Doherty Earth Observatory shows the core is in motion, and going at a pretty good clip.
The Columbia scientists measured the underground effects of earthquakes, determining how quickly their movement travels through the center of the Earth to other places on the globe. The scientists have learned that the Earth's core is turning in an eastward direction and spinning faster than the Earth itself. Every 400 years, the core is a full turn ahead of the Earth.
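Taken at face value, the figure quoted above (the core gains one full turn on the Earth every 400 years) works out to a tiny annual lead. A quick back-of-the-envelope check in Python, using only the numbers in the article:

```python
# The core gains one full turn (360 degrees) on the surface every 400 years.
extra_turn_period_years = 400
lead_deg_per_year = 360 / extra_turn_period_years
print(f"core leads the surface by {lead_deg_per_year} degrees per year")

# Per day the lead is minuscule, which helps explain why it took
# 28 years of earthquake records to detect.
lead_deg_per_day = lead_deg_per_year / 365.25
print(f"about {lead_deg_per_day:.5f} degrees per day")
```

At under a degree per year, only the accumulated effect over decades of seismic records makes the motion measurable.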
"The really surprising thing is how fast the core is moving," Richards said. They estimate the core moves about 100,000 times faster than the movements of the Earth's tectonic plates. This information about the Earth's core may shed new light on how the Earth works. For starters, the core's motion could help explain why magnetic north and south periodically wander or reverse over Earth's history. "This rotation that has been found allows us to go forward in understanding the Earth's magnetic field. So we think we understand it better than we did before this observation," seismologist Paul Silver said. Gravitational pull and seismic activity also could be viewed differently because of this discovery.
A Seismic Adventure
There's a giant crystal buried deep within the Earth, at the very center, more than 3,000 miles down. It may sound like the latest fantasy adventure game or a new Indiana Jones movie, but it happens to be what scientists discovered in 1995 with a sophisticated computer model of Earth's inner core. This remarkable finding, which offers plausible solutions to some perplexing geophysical puzzles, is transforming what Earth scientists think about the most remote part of our planet.
"To understand what's deep in the Earth is a great challenge," says geophysicist Lars Stixrude. "Drill holes go down only 12 kilometers, about 0.2 percent of the Earth's radius. Most of the planet is totally inaccessible to direct observation." What scientists have pieced together comes primarily from seismic data. When shock waves from earthquakes ripple through the planet, they are detected by sensitive instruments at many locations on the surface. The record of these vibrations reveals variations in their path and speed to scientists who can then draw inferences about the planet's inner structure. This work has added much knowledge over the last ten years, including a puzzling observation: Seismic waves travel faster north-south than east-west, about four seconds faster pole-to-pole than through the equator.
This finding, confirmed only within the past two years, quickly led to the conclusion that Earth's solid-iron inner core is "anisotropic" -- it has a directional quality, a texture similar to the grain in wood, that allows sound waves to go faster when they travel in a certain direction. What, exactly, is the nature of this inner-core texture? To this question, the seismic data responds with sphinx-like silence. "The problem," says Ronald Cohen of the Carnegie Institution of Washington, "is then we're stymied. We know there's some kind of structure, the data tells us that, but we don't know what it is. If we knew the sound velocities in iron at the pressure and temperature of the inner core, we could get somewhere." To remedy this lack of information, Stixrude and Cohen turned to the CRAY C90 at Pittsburgh Supercomputing Center.
Earth's layered structure -- a relatively thin crust of mobile plates, a solid mantle with gradual overturning movement, and the outer and inner core of molten and solid iron.
Don't believe Jules Verne. The center of the Earth is not a nice place to visit, unless you like hanging out in a blast furnace. The outer core of the Earth, about two-thirds of the way to the center, is molten iron. Deeper yet, at the inner core, the pressure is so great - 3.5 million times surface pressure -- that iron solidifies, even though the temperature is believed to exceed 11,000 degrees Fahrenheit, hotter than the surface of the sun.
Despite rapid advances in high-pressure laboratory techniques, it's not yet possible to duplicate these conditions experimentally, and until Stixrude and Cohen's work, scientists could at best make educated guesses about iron's atom-to-atom architecture - its crystal structure - at the extremes that prevail in the inner core. Using a quantum-based approach called density-functional theory, Stixrude and Cohen set out to do better than an educated guess. With recent improvements in numerical techniques, density-functional theory had predicted iron's properties at low pressure with high accuracy, leading the researchers to believe that with supercomputing they could, in effect, reach 3,000 miles down into the inner core and pull out what they needed.
Three crystal structures of iron.
Yellow lines show bonds between iron atoms.
Rethinking Inner Earth
On Earth's surface, iron comes in three flavors, standard crystalline forms known to scientists as body-centered cubic (bcc), face-centered cubic (fcc) and hexagonal close-packed (hcp). Working with these three structures as their only input, Stixrude and Cohen carried out an extensive study - more than 200 separate calculations over two years - to determine iron's quantum-mechanical properties over a range of high pressures. "Without access to the C90," says Stixrude, "this work would have taken so long it wouldn't have been done."
Prevalent opinion before these calculations held that iron's crystal structure in the inner core was bcc. To the contrary, the calculations showed, bcc iron is unstable at high pressure and not likely to exist in the inner core. For the other two candidates, fcc and hcp, Stixrude and Cohen found that both can exist at high pressure and both would be directional (anisotropic) in how they transmit sound. Hcp iron, however, gives a better fit with the seismic data. All this was new information, but even more surprising was this: To fit the observed anisotropy, the grain-like texture of the inner core had to be much more pronounced than previously thought.
"Hexagonal crystals have a unique directionality," says Stixrude, "which must be aligned and oriented with Earth's spin axis for every crystal in the inner core." This led Stixrude and Cohen to try a computational experiment. If all the crystals must point in the same direction, why not one big crystal? The results, published in Science, offer the simplest, most convincing explanation yet put forward for the observed seismic data and have stirred new thinking about the inner core.
Could an iron ball 1,500 miles across be a single crystal? Unheard of until this work, the idea has prompted realization that the temperature-pressure extremes of the inner core offer ideal conditions for crystal growth. Several high-pressure laboratories have experiments planned to test these results. A strongly oriented inner core could also explain anomalies of Earth's magnetic field, such as tilted field lines near the equator. "To do these esoteric quantum calculations," says Stixrude, "solutions which you can get only with a supercomputer, and get results you can compare directly with messy observations of nature and help explain them -- this has been very exciting."
There are those who believe Earth has a core crystal beneath the Great Pyramid, functioning much like a central computer that runs the grid programs of our reality. It was allegedly programmed by Tehuti aka Thoth after the fall of the Atlantean Program ... and the Egyptian Program began. This crystal, may not be physical, but consciousness. It is referred to as the Hall of Records, Akashic Records, collective unconsciousness, or the grids/matrix that create the realities in which we virtually experience.
A molecule that contains only carbon and any of the following: hydrogen, oxygen, nitrogen, sulfur, and/or phosphorus
The process by which living organisms produce molecules
Two different molecules that have the same chemical formula
Simple carbohydrates that contain three to ten carbon atoms
Carbohydrates that are made up of two monosaccharides
Carbohydrates that are made up of more than two monosaccharides
A chemical reaction in which molecules combine by ejecting water
Breaking down complex molecules by the chemical addition of water
Lacking any affinity to water
A lipid made from fatty acids which have no double bonds between carbon atoms
A lipid made from fatty acids that have at least one double bond between carbon atoms
A bond that links amino acids together in a protein
A strong attraction between hydrogen atoms and certain other atoms (usually oxygen or nitrogen) in specific molecules
Anything that has mass and takes up space
An explanation or representation of something that can't be seen
All atoms that contain the same number of protons
Chemicals that result from atoms linking together
A change that affects the appearance but not the chemical makeup of a substance
A change that alters the makeup of the elements or molecules of a substance
One of three forms (solid, liquid, or gas) that every substance is capable of attaining
The random motion of molecules from an area of high concentration to an area of low concentration
A measurement of how much substance exists within a certain volume
A membrane that allows some molecules to pass through but does not let other molecules pass through
The tendency of a solvent to travel across a semipermeable membrane into areas of higher solute concentration
A substance that alters the speed of a chemical reaction but does not get used up in the process
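Several entries above (diffusion, concentration, osmosis) describe molecules drifting from regions of high concentration to regions of low concentration through random motion. A minimal sketch of that idea, with purely illustrative numbers: all particles start at one point, each takes random steps, and the crowd spreads out on its own.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# 100 particles start at position 0: a region of high concentration.
positions = [0] * 100

# Each step, every particle independently moves one unit left or right.
for _ in range(500):
    positions = [p + random.choice((-1, 1)) for p in positions]

# Random motion alone has spread the particles away from the
# crowded starting point: net movement down the concentration gradient.
print("spread after 500 steps:", max(positions) - min(positions))
```

No particle "knows" where the others are; the even spreading emerges purely from independent random steps, which is the essence of diffusion.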
In this lesson, students will need to work with a partner and construct one or more of the geometric solids out of coffee stirrers and twist ties, straws and pipe cleaners, toothpicks and gumdrops, or other available supplies. For example, a tetrahedron can be built out of straws.
Students should use the Geometric Solids Tool to help them build the physical model.
Each pair of students should record how they constructed each solid in a table, such as the one below.
This is a meaningful activity for students. You may wish to start with simple polyhedra, such as a cube or a tetrahedron. Students can rotate their solid, count the faces, corners, and edges, and compare their results with the ones in the table.
How did you construct the solid?
Draw attention to the fact that these constructions look like the transparent shapes in the computer program. Students may wish to look at their shape on the computer, using the transparent tool.
Early finishers can build shapes of their own. As in the previous part of the lesson, they should record the shape and information about the shape in their tables. |
Why Won't My Salt Melt?
Place a pat of butter in a skillet on a warm stove, and it will quickly melt into an oily liquid that can be poured over your baked potato. However, if you put a pile of salt in a skillet, it will not liquefy, even if you crank the heat way up. Why not? The difference lies in the types of solid that you are trying to melt. The types of bonds that must be broken in order to melt a solid have a strong influence on its melting point.
Why It Matters
- The nonpolar fat molecules in butter are held together in the solid state by relatively weak London dispersion forces. Only a little bit of energy is required to loosen these intermolecular interactions enough to allow the formation of a free-flowing liquid, so its melting point is relatively low.
- The ions in a salt crystal are held together by strong ionic bonds. Much more energy is required before the ions can break free of their positions within the solid crystal.
- Although NaCl cannot be melted on a common stove, molten (liquid) sodium chloride can be achieved at higher temperatures.
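To make the comparison concrete, here is a small Python sketch (not from the original article) contrasting rounded, approximate melting points for each class of solid. The 300 °C stove figure is a rough assumption about a typical burner, and the melting points are reference values rounded for illustration.

```python
# Approximate melting points (deg C) showing how bond type sets the
# energy needed to melt a solid; values are rounded reference figures.
melting_points = [
    ("butter (molecular solid, London dispersion forces)",   35),
    ("ice (molecular solid, hydrogen bonds)",                 0),
    ("sodium chloride (ionic solid)",                       801),
    ("copper (metallic solid)",                            1085),
    ("diamond (network covalent solid)",                   3550),
]

# Assumed upper bound for a kitchen stove burner (illustrative only).
STOVE_MAX_C = 300

for name, mp in melting_points:
    state = "melts" if mp <= STOVE_MAX_C else "stays solid"
    print(f"{name}: ~{mp} deg C -> {state} on a stove")
```

The ionic, metallic, and network covalent solids all sit far above the assumed stove limit, while the molecular solids sit far below it, which is exactly the butter-versus-salt contrast described above.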
With the links below, learn more about how the structures of different types of solids affect their physical properties, such as their melting points. Then answer the following questions.
- An unknown solid melts when heated to body temperature by holding it in your hand. The solid does not conduct electricity. What type of solid (network covalent, metallic, ionic, or molecular) is this substance likely to be?
- Solid mixtures of metals are known as alloys. Would brass (a copper-zinc alloy) still take the form of a metallic solid? What defining property of metallic solids can we look at to determine the answer?
- Butter is a molecular solid, but it is not crystalline. Are there examples of molecular solids that are also crystals?
- At low enough temperatures, the heavier noble gases can be solidified. Which type of solid (network covalent, metallic, ionic, or molecular) best describes solid xenon? (Hint: Think about the properties of the solid, and note that it may not perfectly fit into any of these categories.) |
Petrology is a branch of Geology which deals with the study of rocks, and includes:
(a) Petrogenesis, i.e., origin and mode of occurrence as well as natural history of rocks.
(b) Petrography, i.e., dealing with classification and description of rocks.
The branch of petrology dealing with the study of stones alone is called 'lithology'. Stones include the rocks that are necessarily hard, tough and compact.
As we know, rocks are the essential constituents of the earth's crust. Rocks are composed of minerals. Some rocks are monomineralic, composed of one mineral only, while most rocks are polymineralic, consisting of more than one mineral species as essential constituents.
Igneous and meta-igneous rocks constitute 95% of all the rocks of the earth's crust; sedimentary and metasedimentary rocks constitute the remaining 5%.
Classification of Rocks :
According to the mode of origin, all rocks are categorised into three major groups:
I. Igneous Rocks or Primary Rocks.
II. Sedimentary Rocks or Secondary Rocks.
III. Metamorphic Rocks.
I. Igneous rocks:
These are the rocks formed by the solidification of magma either underneath the surface or above it; accordingly they are divided into two groups:
(a) Intrusive bodies:
These are formed underneath the surface of the earth.
(b) Extrusive bodies:
These are due to the consolidation of magma above the surface of the earth. These are also known as Volcanic-rocks.
On the basis of the depth of formation, intrusive rocks are of two types:
(i) Plutonic rocks, which are formed at very great depths.
(ii) Hypabyssal rocks, which are formed at shallow depth.
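The classification above is essentially a small decision tree. A minimal Python sketch follows; the 2 km depth cutoff separating plutonic from hypabyssal rocks is an illustrative assumption, not a figure from the text.

```python
def classify_igneous(solidified_below_surface, depth_km=None):
    """Classify an igneous rock by where the magma solidified,
    following the text's scheme. The ~2 km cutoff between plutonic
    and hypabyssal is an assumed, illustrative value."""
    if not solidified_below_surface:
        return "extrusive (volcanic)"
    if depth_km is not None and depth_km >= 2:
        return "intrusive (plutonic)"
    # Shallow (or unspecified) intrusion -> hypabyssal in this sketch.
    return "intrusive (hypabyssal)"

print(classify_igneous(False))                # extrusive (volcanic)
print(classify_igneous(True, depth_km=10))    # intrusive (plutonic)
print(classify_igneous(True, depth_km=0.5))   # intrusive (hypabyssal)
```

In reality the plutonic/hypabyssal boundary is gradational rather than a sharp depth, so the cutoff here only mimics the "very great depth" versus "shallow depth" wording of the text.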
Important Features of Igneous Rocks:
1. Generally hard, massive, compact with interlocking grains.
2. Entire absence of fossils.
3. Absence of bedding planes.
4. Enclosing rocks are baked.
5. Usually contain much feldspar.
II. Sedimentary rocks:
These rocks have been derived from the pre-existing rocks, through the processes of erosion, transportation and deposition by various natural agencies like, wind, water, glacier, etc. The loose sediments, which are deposited, undergo the processes of compaction and the resulting products are known as sedimentary rocks.
On the basis of place of formation, sedimentary rocks are of two types:
(i) Sedentary rocks, which are the residual deposits formed at the site of the pre-existing rocks from which they have been derived. These are not formed by the process of transportation.
(ii) Transported, in which case the disintegrated and decomposed rock materials are transported from the place of their origin and get deposited at a suitable site. According to the mode of transportation of the deposits, these rocks are sub-divided into three types:
(a) Mechanically deposited: clastic rocks.
(b) Chemically precipitated: chemical deposits.
(c) Organically deposited: organic deposits.
Important Features of Sedimentary Rocks:
1. Generally soft, stratified, i.e., characteristically bedded.
2. Fossils common.
3. Stratification, lamination, cross-bedding, ripple marks, mud cracks, etc. are the usual structures.
4. No effect on the enclosing or the top and bottom rocks.
5. Quartz, clay minerals, calcite, dolomite, and hematite are the common minerals.
III. Metamorphic rocks:
These are formed by the alteration of pre-existing rocks by the action of temperature and pressure, aided by subterranean fluids (magmatic or non-magmatic).
Important Features of Metamorphic Rocks:
1. Generally hard, interlocking grains and bedded (if derived from stratified rocks).
2. Fossils are rarely preserved in rocks of sedimentary origin, except in slates.
3. Foliated, gneissose, schistose, granulose, slaty, etc., are the common structures.
4. Common minerals are andalusite, sillimanite, Kyanite, cordierite, wollastonite, garnet, graphite, etc.
1. Igneous Rocks Types:
(i) Granite-with its volcanic equivalent, i.e., Rhyolite.
(ii) Syenite-with its volcanic equivalent 'Trachyte',
(iii) Nepheline-Syenite (and phonolite).
(v) Granodiorite and Monzonite.
(vi) Gabbro, diorite and norite, with their volcanic equivalents basalt (Deccan Traps), andesite, etc.
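The plutonic-volcanic pairings listed above can be captured in a simple lookup table. A minimal Python sketch using only the equivalents named in the text:

```python
# Plutonic rock -> volcanic (extrusive) equivalent, as listed in the text.
volcanic_equivalent = {
    "granite": "rhyolite",
    "syenite": "trachyte",
    "nepheline-syenite": "phonolite",
    "gabbro": "basalt",
    "diorite": "andesite",
}

for plutonic, volcanic in volcanic_equivalent.items():
    print(f"{plutonic} -> {volcanic}")
```

Each pair shares roughly the same chemical composition; the difference is grain size, set by how quickly the magma cooled (slowly at depth for the plutonic member, rapidly at the surface for the volcanic one).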
2. Sedimentary Rocks:
(iii) Limestone and dolomite
(iv) Saline rocks
3. Metamorphic Rocks:
Mode of Occurrence or Forms of Igneous-Rocks:
The form, i.e., the size, shape of the igneous bodies, depends mostly on the following factors:
(i) Mode of formation.
(ii) Viscosity of magma, which in turn depends on the
(a) temperature, and
(b) composition of the magma.
(iii) Relation with the surrounding country-rocks, i.e.,
(a) physical characters of the invaded rocks,
(b) weight of the overlying rock mass in the case of intrusive bodies.
The intrusive and the extrusive rocks exhibit typical forms, which are characteristic to them.
The forms assumed by intrusive bodies depend upon major geological structures such as faults, folds, bedding planes, etc. Accordingly, there are two major categories of forms of the intrusive bodies:
1. Discordant bodies:
In this case an intrusive mass happens to cut across the structures of the pre-existing rocks of the country. There are different types of discordant forms in unfolded regions as well as in highly folded regions.
(a) In unfolded regions:
(i) Dykes:
These discordant igneous bodies exhibit a cross-cutting relationship with the country rocks. Dykes commonly occur in groups, and such a group may be of radiating, arcuate or any other pattern.
Since for the formation of dykes the magma has to be sufficiently mobile, the composition of dykes is mostly basic, i.e., doleritic. Dykes are evidence of regional tension in the crust within areas of igneous activity. Larger dykes produce a baking and hardening effect on either side.
(ii) Ring dyke:
A dyke of arcuate outcrop, occurring more or less in the form of a complete or nearly complete circle.
(iii) Cone sheets:
These are inwardly dipping dyke-like masses (in the form of inverted co-axial cones) with circular outcrops.
(b) In highly folded regions:
(i) Batholiths:
These are the largest intrusive bodies. Most batholiths are found in belts of deformation within the earth's crust and are granitic in composition. They widen downwards to unknown depths.
Batholiths of comparatively smaller dimensions are called 'stocks', and stocks of circular outcrop upon the surface are known as 'bosses'. The remnants of the country-rock occurring upon or near the top surface of such intrusive masses are known as 'roof-pendants'.
These are funnel-shaped basic bodies with circular outcrop.
Sickle-shaped basic bodies formed by stretching of the strata after or during injection.
Any irregular intrusive body.
Any deep-seated intrusive body, irrespective of its shape and size, is known as 'pluton'.
2. Concordant bodies:
These are intrusive bodies that run parallel to the structures of the country-rocks in which they occur.
(a) In unfolded regions:
These are thin, parallel-sided tabular sheets of magma that have penetrated along bedding planes, planes of schistosity, unconformities, etc. These are also doleritic in composition. They may attain any orientation in space depending upon the attitude of the rock beds in which they occur.
These intrusive bodies have a flat lower surface and a convex top. This is due to the accumulation of viscous magma, usually acidic in composition, which pushes the overlying rocks upwards to make room for the mass.
These are saucer-shaped bodies of large dimensions, basic to ultrabasic in composition.
Sometimes the magma breaks through the overlying rock beds, and the igneous mass after consolidation is known as a bysmalith.
(b) In highly folded regions:
These are crescent-shaped igneous bodies occurring along the crests and troughs of folds in the country rocks. They are basaltic in composition.
Forms of Extrusive Bodies:
Lava flows and pyroclastics, which are the products of volcanic activity, are the usual forms of extrusive igneous bodies.
Volcanic neck. It is a mass of igneous rock produced by the consolidation of lava and pyroclastic materials in the channel of eruption of an extinct volcano. |
|Neem: A Tree for Solving Global Problems (BOSTID, 1993)|
Native to India and Burma, neem is a botanical cousin of mahogany. It is tall and spreading like an oak and bears masses of honey-scented white flowers like a locust. Its complex foliage resembles that of walnut or ash, and its swollen fruits look much like olives. It is seldom leafless, and the shade it imparts throughout the year is a major reason why it is prized in India. The Subcontinent contains an estimated 18 million neem trees, most of them lined along roadsides or clustered around markets or backyards to provide relief from the sun.
Under normal circumstances neem's seeds are viable for only a few weeks, but earlier this century people somehow managed to introduce this Indian tree to West Africa, where it has since grown well. They probably expected neem to be useful only as a source of shade and medicinals - especially for malaria - but in Ghana it has become the leading producer of firewood for the densely populated Accra Plains, and in countries from Somalia to Mauritania it is a leading candidate for helping halt the southward spread of the Sahara Desert.
This century, people took neem seed to other parts of the world, where the tree has also performed well. Near Mecca, for example, a Saudi philanthropist planted a forest of 50,000 neems to shade and comfort the two million pilgrims who camp each year on the Plains of Arafat (a holy place where the prophet Muhammad is said to have bidden farewell to his followers). And in the last decade neem has been introduced into the Caribbean, where it is being used to help reforest several nations. Neem is already a major tree species in Haiti for instance.
But neem is far more than a tough tree that grows vigorously in difficult sites. Among its many benefits, the one that is most unusual and immediately practical is the control of farm and household pests. Some entomologists now conclude that neem has such remarkable powers for controlling insects that it will usher in a new era in safe, natural pesticides. (In this report we use the words "pesticide" and "insecticide" in the broad sense of pest- and insect-controlling agents. By strict definition, the words imply toxins that kill outright; neem compounds, however, usually leave the pests alive for some time, but so repelled, debilitated, or hormonally disrupted that crops, people, and animals are protected.)
Extracts from its extremely bitter seeds and leaves may, in fact, be the ideal insecticides: they attack many pestiferous species; they seem to leave people, animals, and beneficial insects unharmed; they are biodegradable; and they appear unlikely to quickly lose their potency to a buildup of genetic resistance in the pests. All in all, neem seems likely to provide nontoxic and long-lived replacements for some of today's most suspect synthetic pesticides.
That neem can foil certain insect pests is not news to Asians. For centuries, India's farmers have known that the trees withstand the periodic infestations of locusts. Indian scientists took up neem research as far back as the 1920s, but their work was little appreciated elsewhere until 1959, when a German entomologist witnessed a locust plague in the Sudan.
During this onslaught of billions of winged marauders, Heinrich Schmutterer noticed that neem trees were the only green things left standing. On closer investigation, he saw that although the locusts settled on the trees in swarms, they always left without feeding. To find out why, he and his students have studied the components of neem ever since.
Schmutterer's work (as well as a 1962 article by three Indian scientists showing that neem extracts applied to vegetable crops would repel locusts) spawned a growing amount of lively research. This, in turn, led to three international neem conferences, several neem workshops and symposia, a neem newsletter, and rising enthusiasm in the scientific community. By 1991, several hundred researchers in at least a dozen countries were studying various aspects of neem and its products.
Like most plants, neem deploys internal chemical defenses to protect itself against leaf-chewing insects. Its chemical weapons are extraordinary, however. In tests over the last decade, entomologists have found that neem materials can affect more than 200 insect species as well as some mites, nematodes, fungi, bacteria, and even a few viruses. The tests have included several dozen serious farm and household pests - Mexican bean beetles, Colorado potato beetles, locusts, grasshoppers, tobacco budworms, and six species of cockroaches, for example. Success has also been reported on cotton and tobacco pests in India, Israel, and the United States; on cabbage pests in Togo, Dominican Republic, and Mauritius; on rice pests in the Philippines; and on coffee bugs in Kenya. And it is not just the living plants that are shielded. Neem products have protected stored corn, sorghum, beans, and other foods against pests for up to 10 months in some very sophisticated controlled experiments and field trials.
Researchers at the U.S. Department of Agriculture have been studying neem since 1972. In laboratory experiments, they have found that the plant's ingredients foil even some of America's most voracious garden pests. For instance, in one trial one half of each of several soybean leaves was sprayed with neem extracts and placed in a container with Japanese beetles. The treated halves remained untouched, but within 48 hours the other halves were consumed right down to their woody veins. In fact, the Japanese beetles died rather than eat even tiny amounts of neem-treated leaf tissue. In field tests, neem materials have yielded similarly promising results. For instance, in one test in Ohio, soybeans sprayed with neem extract stayed untouched for up to 14 days, while untreated plants in the same field were chewed to pieces by various species of insects, seemingly overnight.
Neem contains several active ingredients, and they act in different ways under different circumstances. These compounds bear no resemblance to the chemicals in today's synthetic insecticides. Chemically, they are distant relatives of steroidal compounds, which include cortisone, birth-control pills, and many valuable pharmaceuticals. Composed only of carbon, hydrogen, and oxygen, they have no atoms of chlorine, phosphorus, sulfur, or nitrogen (such as are commonly found in synthetic pesticides). Their mode of action is thus also quite different.
Neem products are unique in that (at least for most insects) they are not outright killers. Instead, they alter an insect's behavior or life processes in ways that can be extremely subtle. Eventually, however, the insect can no longer feed or breed or metamorphose, and can cause no further damage.
For example, one outstanding neem component, azadirachtin, disrupts the metamorphosis of insect larvae. By inhibiting molting, it keeps the larvae from developing into pupae, and they die without producing a new generation. In addition, azadirachtin is frequently so repugnant to insects that scores of different leaf-chewing species - even ones that normally strip everything living from plants - will starve to death rather than touch plants that carry traces of it.
Another neem substance, salannin, is a similarly powerful repellent. It also stops many insects from touching even the plants they normally find most delectable. Indeed, it deters certain biting insects more effectively than the synthetic chemical called "DEET" (N,N-diethyl-m-toluamide), which is now found in hundreds of consumer insect repellents.
To obtain the insecticides from this tree is simple (at least in principle). The leaves or seeds are merely crushed and steeped in water, alcohol, or other solvents. For some purposes, the resulting extracts can be used without further refinement.
These pesticidal "cocktails," containing 4 major and perhaps 20 minor active compounds, can be astonishingly effective. In concentrations of less than one-tenth of a part per million, they affect certain insects dramatically. In trials in The Gambia, for example, these crude neem extracts compared favorably with the synthetic insecticide malathion in their effects on some of the pests of vegetable crops. In Nigeria, they equaled the effectiveness of DDT, dieldrin, and other insecticides. And elsewhere in the world these plant products have often shown results as good as those of standard pesticides.
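To put "one-tenth of a part per million" in perspective, a dilute aqueous concentration can be estimated with the rule of thumb that 1 mg of solute per litre of water is about 1 ppm by mass. The short Python sketch below is illustrative only; the 1 mg / 10 L figures are hypothetical, not from the report.

```python
def ppm(mass_solute_mg, volume_solution_l):
    """Approximate parts per million by mass for a dilute aqueous
    solution, using the rule of thumb 1 mg per litre of water ~= 1 ppm."""
    return mass_solute_mg / volume_solution_l

# Roughly 1 mg of active compound dissolved in 10 litres of spray water
# already reaches the "one-tenth of a part per million" level.
print(ppm(1, 10))  # 0.1
```

In other words, vanishingly small amounts of the active compounds are enough to affect certain insects, which is part of what makes farm- or village-level extraction practical.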
The extracts from neem seeds can also be purified and the most effective ingredients isolated from the rest of the mix. This process allows standardization and uniform formulations that can be produced for commercial use in even the world's most sophisticated pesticide markets.
Whatever the mixture or formulation, neem-based products display several remarkable qualities. For example, although pests can become tolerant to a single toxic chemical such as malathion, it seems unlikely that they can develop genetic resistance to neem's complex blend of compounds - many functioning quite differently and on different parts of an insect's life cycle and physiology.
Certainly, they won't do so quickly. Several experiments have failed to detect any signs of incipient resistance to the mixture. For example, even after being exposed to neem for 35 successive generations, diamondback moths remained as susceptible as they had been at the beginning.
Another valuable quality is that some neem compounds act as systemic agents in certain plant species. That is, they are absorbed by, and transported throughout, the plants. In such cases, aqueous neem extracts can merely be sprinkled on the soil. The ingredients are then absorbed by the roots, pass up through the stems, and perfuse the upper parts of the plant. In this way, crops become protected from within. In trials, the leaves and stems of wheat, barley, rice, sugarcane, tomatoes, cotton, and chrysanthemums have been protected from certain types of damaging insects for 10 weeks in this way.
Because systemic materials are inside the plant, they cannot be washed off by rain. Nor can they harm bees, predacious insects, and other organisms that do not chew plant tissue. Even new growth that occurs following the treatment may be protected. (In the case of conventional sprayed-on chemicals, on the other hand, new growth is usually vulnerable to insects.)
Perhaps the most important quality is that neem products appear to have little or no toxicity to warm-blooded animals. Birds and bats eat the sweet pulp of the fruits of neem trees without apparent ill effects. In fact, neem fruits are a main part of their diets in some locations, such as on Ghana's Accra Plains. When neem-seed extracts were brushed on the skins of rats, the animals' blood showed no abnormalities; indeed, the treated rats ate more food and gained more weight than the untreated ones.
This safety to mammals apparently extends to people. The deaths of a few young children in Malaysia in the 1980s have been linked to the doses of neem-seed oil forced on them by their parents. (Like the previous use of castor oil in the Western world, neem oil in Asia is considered a cure-all for some childhood illnesses.) However, other than this, no hazard has been documented under conditions of normal usage. For one thing, neem extracts show no mutagenicity in the Ames test, which detects potential carcinogens. For another, people in India have been adding neem leaves to their grain stores for centuries to keep weevils away. Thus, for many generations millions have been eating traces of neem on a daily basis.
Certain neem products may even benefit human health. The seeds and leaves contain compounds with demonstrated antiseptic, antiviral, and antifungal activity. There are also hints that neem has anti-inflammatory, hypotensive, and anti-ulcer effects. There is a potential indirect benefit to health as well. Neem leaves contain an ingredient that disrupts the fungi that produce aflatoxin on moldy peanuts, corn, and other foods - it leaves the fungi alive, but switches off their ability to produce aflatoxin, the most powerful carcinogen known.
For dental hygiene, especially, neem could prove valuable. Despite a general lack of toothpaste and toothbrushes, most people in India have bright, healthy teeth, and dental researchers usually attribute this to "chewsticks." Every morning, millions of Indians break off a twig, chew the end into a brushlike form, and scrub their teeth and gums. The most popular are the twigs from neem, and the selection seems to have a scientific basis. Research has shown, for example, that compounds in neem bark are strongly antiseptic. Also, tests in Germany have proved that neem extracts prevent tooth decay, as well as both preventing and healing inflammations of the gums. Neem is now used as the active ingredient in certain popular toothpastes in Germany and India.
Moreover, researchers have recently found that neem might be able to play a part in controlling population growth. Materials from the seeds have been shown to have contraceptive properties. The oil is a strong spermicide and, when used intravaginally, has proven effective in reducing the birth rate in laboratory animals. A recent test involving the wives of more than 20 Indian Army personnel has further demonstrated its effectiveness. Other neem compounds show early promise as the long-sought oral birth-control pill for men. This is just an intriguing hint at present; however, in exploratory trials they reduced fertility in male monkeys and a variety of other male mammals without inhibiting sperm production. In addition, the effects seemed to be temporary, which would be a big selling point that could help its rapid and widespread adoption.
All of this is potentially of vital importance for the world's poorest countries, many of which have high rates of population growth, severe problems with various agricultural pests, and a widespread lack of even basic medicine. The neem tree will grow in many Third World regions, and it can grow on certain marginal lands where it will not compete with food crops. Thus, it could bring good health and better crop yields within the reach of farmers too poor to buy pharmaceuticals or farm chemicals. It makes feasible the concept of producing one's own pesticide because the active materials can be extracted from the seeds, even at the farm or village level. Extracting the seeds requires no special skills or sophisticated machinery, and the resulting products can be applied using low-technology methods.
This possibility is significant because most developing countries are in the tropics, where year-round warmth often allows pest populations to build to unacceptable levels. The problems attendant on using synthetic pesticides, therefore, are particularly severe in the Third World. For instance, the World Health Organization attributes 20,000 deaths and more than a million illnesses each year to pesticides mishandled or used to excess. In addition, because the pests breed year-round, mutational resistance builds up much more quickly in the tropics than in lands having winter seasons.
Neem also seems particularly appropriate for developing country use because it is a perennial and requires little maintenance. It appeals to people in both rural and urban areas because (unlike most trees) its leaves, fruits, seeds, and various other parts can be used in a multitude of ways. Moreover, it can grow quickly and easily and does not necessarily displace other crops.
Neem and the United States
In 1975 the U.S. Department of Agriculture research facility at Beltsville, Maryland, and 19 of its stations across the country embarked on a comprehensive program to study the pest- control properties of various plants. Several universities collaborated on this program; others worked independently. The research, which still continues at several locations, has demonstrated or verified the outstanding effects of neem extracts against numerous species of destructive insects and fungi. Of thousands of plant extracts tested, neem was by far the best.
In trials, crude alcohol extracts of neem seeds proved effective at very low concentrations against 60 species of insects, 45 of which are extremely damaging to American crops and stored products - causing billions of dollars of losses to the nation each year. They included sweet-potato whitefly (see page 94), serpentine leafminers (which attack vegetable and flower crops), gypsy moth (which causes millions of dollars of losses to homeowners and the forest industry), and several species of cockroach.
In 1985, the Environmental Protection Agency approved a commercial neem-based insecticide for certain nonfood uses. Called Margosan-O®, the product is available at present in limited quantities in 21 states, and the amount is growing quickly. It is registered for use against such pests as whiteflies on chrysanthemums, leafminers on birch trees, aphids on roses, chinch bugs on lawns, gypsy moths on shade trees, and thrips on gladiolus. So far, it is being used primarily in professional greenhouses.
As a result of all this work, neem is seen as the nation's leading candidate for providing a new generation of broadspectrum pesticides. However, neem cannot yet be legally used against pests that occur on food crops. Despite neem's apparent lack of toxicity or environmental danger, getting the authorization to use it on food plants will take time and great expense, because federal agencies require exhaustive testing before approving any pesticide for this purpose.
Although neem is essentially unknown to the American public, some neem-based consumer products are appearing in shops across the nation. Imported neem soap and toothpaste, for example, are sold fairly widely in specialty stores.
Neem production and processing also provides employment and generates income in rural communities - perhaps a small, but nonetheless valuable, benefit in these days of mass flight to the cities in a desperate search for jobs. It could be a useful export as well; a ton of neem seed already sells at African ports (Dakar, for example) at more than twice the price of peanuts. On top of all that, neem by-products (the seedcake and leaves, in particular) actually may improve the local soils and help foster sustainable crop production.
Although neem's ability to promote health and its value as a safe pest control is still only in the realm of possibility, there is no doubt that neem trees can provide the poor and the landless with oil, feed, fertilizer, wood, and other essential resources. In its crude state the oil from the seeds can have a strong garlic odor, but even in that form it can be used for heating, lighting, or crude lubricating jobs. Refined, it loses its unpleasant smell and is used in soaps, cosmetics, disinfectants, and other industrial products.
Neem cake, a solid material left after the oil is pressed from the seeds, is also useful. Broadcast over farm fields, it provides organic matter as well as some fertility to the soil. More important, it controls several types of soil pests. It is, for example, notably effective against nematodes, those virulent microscopic worms that suck the life out of many crops. Cardamom farmers in southern India claim that neem cake is as effective as the best nematode-suppressing commercial products.
Because neem is a tree, its large-scale production promises to help alleviate several global environmental problems: deforestation, desertification, soil erosion, and perhaps even (if planted on a truly vast scale) global warming. Its extensive, deep roots seem to be remarkably effective at extracting nutrients from poor soils. These nutrients enter the topsoil as the leaves and twigs fall and decay. Thus, neem can help return to productive use some worn-out lands that are currently unsuited to crops. It is so good for this purpose that a 1968 United Nations report called a neem plantation in northern Nigeria "the greatest boon of the century" to the local inhabitants.
At a more basic level, the increasing scientific scrutiny of neem is providing biological insight into the way plants protect themselves against the multitude of plant eaters. It is a window on a battle in the continuing chemical warfare between plants and predators. And because it is part of this natural antagonism, neem is a promising candidate for use in the increasingly popular concept of integrated pest management. To employ neem in pest control is to take advantage of the plant kingdom's 400 million years of experience at trying to frustrate the animal kingdom.
For all its apparent promise, however, the research on neem and the development of its products are not receiving the massive support that might seem justified. Indeed, all the promise mentioned above is currently known to only a handful of entomologists, foresters, and pharmacologists - and, of course, to the traditional farmers of South Asia. Much of the enthusiasm and many of the claims are sure to be tempered as more insights are gained and more field operations are conducted.
Nonetheless, improving pest control, bettering health, assisting reforestation, and perhaps checking overpopulation appear to be just some of the benefits if the world will now pay more attention to this benevolent tree.
Among many new developments in the 20 months since the first printing of this book, our attention has been caught by the following.
· Three new neem-based products - Azatin®, Turplex®, and Align® - have entered the U.S. insecticide market.* The U.S. Environmental Protection Agency (EPA) has approved Align® for use on food and feed crops.
· Margosan-O® is now registered in all 50 states, and the EPA has approved it for use on food crops. Two related neem formulations, BioNeem® for the consumer market and Benefits® for lawn and turf care, are also available.+
· A neem newsletter has begun publication in the United States.++
· More than 70,000 neem trees have been planted in Florida, Puerto Rico, and Mexico (Yucatan and Baja California).
· Ground-up neem leaves have been reported successful at treating scabies, a serious skin disease. Of 824 cases, 98 percent showed complete cures within 3-15 days.$
· Medical researchers in India have developed a topical neem-based product that appears to boost the body's defense against infection at the location where it is applied. It is being tested notably for protecting women from vaginal infections (viruses, bacteria, fungi, yeast) and pregnancy.//
*The manufacturer is AgriDyne Technologies, Inc. (see Research Contacts, page 121).
+The manufacturer is W.R. Grace (see Research Contacts, page 121).
++Published by The Neem Association (see Research Contacts, page 121).
$ Information via Martin Price (see Research Contacts, page 121). The mite that causes scabies also causes mange in livestock (donkeys, camels, llamas, for instance).
//This development is led by Shakti N. Upadhyay of the National Institute of Immunology, Indian Council of Medical Research, P.O. Box 4508, New Delhi 110 029, India.
The First World War (WWI) was fought from 1914 to 1918 and the Second World War (or WWII) was fought from 1939 to 1945. They were the largest military conflicts in human history. Both wars involved military alliances between different groups of countries.
World War I (a.k.a. the First World War, the Great War, the War to End All Wars) was centered on Europe. The warring nations were divided into two groups, ‘The Central Powers’ and ‘The Allied Powers’. The Central Powers consisted of Germany, Austria-Hungary, Turkey, and Bulgaria; the Allied Powers consisted of France, Britain, Russia, Italy, Japan, and (from 1917) the U.S.
In World War II (a.k.a. the Second World War), the opposing alliances are referred to as ‘The Axis’ and ‘The Allies’. The Axis consisted of Germany, Italy, and Japan; the Allies were France, Britain, the U.S., the Soviet Union, and China. World War II was especially heinous because of the genocide of Jewish people perpetrated by the Nazis.
| | World War I | World War II |
| --- | --- | --- |
| Period and duration | 1914 to 1918; 4 years | 1939 to 1945; 6 years |
| Triggers and causes | Assassination of Archduke Franz Ferdinand of Austria in June 1914; militarism, imperialism, nationalism, and the alliance system. | Political and economic instability in Germany; the harsh conditions of the Treaty of Versailles; the rise to power of Adolf Hitler and his alliance with Italy and Japan to oppose the Soviet Union. |
| Conflict between | The Central Powers (Germany, Austria-Hungary, and Turkey) and the Allied Powers (France, Britain, Russia, Italy, Japan, and (from 1917) the U.S.) | The Axis Powers (Germany, Italy, and Japan) and the Allied Powers (France, Britain, the U.S., the Soviet Union, and China) |
| Casualties | Estimated 10 million military dead, 7 million civilian deaths, 21 million wounded, and 7.7 million missing or imprisoned. | Over 60 million people died; estimated deaths range from 50 to 80 million, including 38 to 55 million civilians, of whom 13 to 20 million died from war-related disease and famine. |
| Genocide | The Ottoman Empire (Turkey) carried out genocide of Armenians. | German Nazis committed genocide against Jews and Romanis. |
| Methods of warfare | Fought from lines of trenches supported by artillery and machine guns, infantry assault, tanks, early airplanes, and poisonous gas; mostly static in nature, with minimal mobility. | Nuclear weapons and missiles were used, along with modern concepts of covert and special operations; submarines and tanks were more heavily used; encryption codes for secret communication became more complex; Germany used the Blitzkrieg fighting method. |
| Outcomes | The German, Russian, Austro-Hungarian, and Ottoman empires were defeated; the Austro-Hungarian and Ottoman empires ceased to exist. The League of Nations was formed in the hope of preventing another such conflict. | The war ended with the total victory of the Allies over Germany and Japan in 1945. The Soviet Union and the United States emerged as rival superpowers. The United Nations was established to foster international cooperation and prevent conflicts. |
| Post-war politics | Resentment of the onerous terms of the Treaty of Versailles fueled the rise of Adolf Hitler's party in Germany, so some historians believe that, in a way, World War I led to World War II. | A Cold War between the United States and the Soviet Union lasted from the end of the war until the collapse of the USSR (1947-1991); the wars in Afghanistan, Vietnam, and Korea were, in a sense, proxy wars between the two nations. |
| Nature of war | War between countries for acquiring colonies, territory, or resources. | War of ideologies, such as Fascism and Communism. |
| Abbreviation | WWI or WW1 | WWII or WW2 |
| Also known as | The Great War, The World War, The Kaiser's War, The War of the Nations, The War in Europe, The European War, World War One, First World War, The War to End All Wars | Second World War, World War Two, The Great Patriotic War |
| American president during the war | Woodrow Wilson | Franklin D. Roosevelt, Harry Truman |
| British Prime Minister during the war | H. H. Asquith (1908-1916); David Lloyd George (1916-1922) | Winston Churchill |
| Predecessor | Napoleonic Wars | World War I |
| Successor | World War II | Cold War |
Causes of the War
World War I Trigger
- The assassination of Archduke Franz Ferdinand of Austria, heir to the throne of Austria-Hungary, on 28 June 1914 was the trigger for the war. He was killed by Serbian nationalists.
- Austria-Hungary invaded Serbia.
- At the same time, Germany invaded Belgium, Luxembourg, and France.
- Russia attacked Germany.
- Several alliances formed over the past decades were invoked, so within weeks the major powers were at war; as all had colonies, the conflict soon spread around the world.
Causes of World War II
The Versailles Treaty signed at the end of World War I not only laid the moral blame for the conflict on Germany but also forced the Germans to make huge payments to the victors of the war. France and Britain needed these reparations payments in order to pay down their own debts, but the payments were highly onerous, arguably unjustifiably so, and were deeply unpopular in Germany. Hitler seized on this growing resentment and promised to "undo this injustice and tear up this treaty and restore Germany to its old greatness". In fact, the payments demanded were so large that Germany was able to repay the final installment of interest on this debt only on October 3, 2010. The following causes of World War II are generally acknowledged:
- Treaty violations and acts of aggression on various fronts.
- Political and economic instability in Germany, combined with bitterness over its defeat in World War I and the harsh conditions of the Treaty of Versailles.
- Rise of power of Adolf Hitler and the Nazi Party. In the mid-1930s Hitler began secretly to rearm Germany, in violation of the treaty.
- Adolf Hitler's alliances with Italy and Japan to oppose the Soviet Union.
- The German invasion of Poland on September 1, 1939.
Sequence of events
World War I
The sequence of events for World War I began with Austria-Hungary declaring war on Serbia on 28 July 1914 in a bid to reassert its authority as a Balkan power. With war breaking out between Austria-Hungary and Serbia, Europe quickly fell back on the alliances its nations had formed. Austria-Hungary and Germany were allies; Serbia was allied with Russia, as was France. Russia aided Serbia and attacked Austria, so Austria-Hungary was fighting on two fronts, against both Serbia and Russia, and consequently lost on both. In a bid to aid Austria-Hungary against Russia, and fearing an attack from France, Germany mobilized its army and attacked France.
- The French, redeploying round Paris, together with the British, checked the now extended German armies on the Marne. In March and April 1915 British sea and land forces attacked the Dardanelles. The Turks countered both threats, causing the British to evacuate the Gallipoli peninsula at the end of 1915.
- A joint Austro-German offensive at Gorlice-Tarnow (2 May 1915) unlocked Russian Poland, and the tsar's shattered armies fell back.
- In 1915 the Allies agreed that simultaneous attacks on all fronts were the way to drain the reserves of the Central Powers.
- On 21 February 1916 the Germans attacked the Verdun salient; however, the attack stalled in June. The Austrians' independent offensive against the Italians in the Trentino also stalled.
- Germany finally adopted unrestricted submarine warfare in February 1917, and in doing so drove America into the war.
- The Germans extended their front while reducing their strength by almost a million men. Simultaneously they continued to advance in the east, competing with their Austrian allies in the Ukraine and the Turks in the Caucasus.
- The French counter-attacked in July and the British in August. Together with the Americans, they drove the Germans back in a series of individually limited but collectively interlocking offensives.
- On 15 September the Anglo-French forces at Salonika attacked in Macedonia, forcing the Bulgarians to seek an armistice by the end of the month.
- The whole of the Central Powers' Italian front crumbled after the Austrian defeat on the Piave in June.
- The German high command initiated the request for an Armistice on 4 October. After the war Germany claimed that the army was ‘stabbed in the back’ by revolution at home. The people of Germany and Austria-Hungary were battered by food shortages and inflation.
- On 11 November an armistice with Germany was signed in a railroad carriage at Compiègne. At 11 a.m. on 11 November 1918 a ceasefire came into effect.
A formal state of war between the two sides persisted for another seven months, until the signing of the Treaty of Versailles with Germany on 28 June 1919.
World War II
The war that broke out in 1939 was a war for the European balance of power. The immediate cause of the conflict was the German demand for the return of Danzig and part of the Polish ‘corridor’ granted to Poland from German territory in the Versailles Treaty of 1919. Poland refused to agree to German demands, and on 1 September 1939 overwhelming German forces launched the Polish campaign and defeated the country in three weeks, while Russia invaded eastern Poland; Poland was thus divided in two. In March 1939 Britain and France had guaranteed Polish sovereignty, and in honor of that pledge they first demanded that German forces withdraw and then, on 3 September, declared war on Germany. America was committed by the Neutrality Acts of 1935 and 1937 to non-intervention in overseas conflicts.
- German armies invaded Belgium, Luxembourg, and northern France and within six weeks defeated western forces.
- Britain was able to resist German air attacks in the battle of Britain in August and September 1940, and survived a German bombing offensive (the ‘Blitz’) in the winter of 1940-1, but it was not possible for Britain to defeat Germany unaided.
- On 10 June 1940 Mussolini's Italy declared war on Britain and France.
- In December 1940 Hitler turned attention away from Britain and approved BARBAROSSA, the large-scale invasion of the USSR.
- America started giving increasing economic assistance to Britain and China following President Roosevelt's pledge to act as the ‘arsenal of democracy’.
- BARBAROSSA was launched on 22 June 1941, when three million German, Finnish, Romanian, and Hungarian soldiers attacked along the whole length of the Soviet western frontier. The Soviet Union was shattered.
- In North Africa, Commonwealth forces stationed in Egypt drove Italian armies back across Libya by February 1941.
- In Abyssinia and Somaliland, Italian forces were forced to surrender by May 1941.
- Italy's complete defeat in Africa was avoided only by Hitler's decision to send German reinforcements under Rommel, and the weak logistical position of Commonwealth forces.
- The US navy became closely involved in the battle of the Atlantic in efforts to break the German submarine blockade of shipping destined for Britain. In March 1941 Congress approved the Lend-Lease Bill which allowed almost unlimited material aid, including weapons, for any state fighting aggression. In the autumn of 1941 this came to include the USSR, despite strong American anti-communism. Throughout 1940 and 1941 the USA tightened an economic blockade of Japan which threatened to cut off most Japanese oil supplies.
- American actions provoked both Japanese and German retaliation. On 7 December 1941 Japanese naval aircraft attacked the American naval base at Pearl Harbor, followed by the rapid conquest of western colonies in south-east Asia and the southern Pacific.
- On 11 December Germany declared war on the USA.
- Russia made a remarkable recovery, and in November the German and allied forces attacking Stalingrad (now Volgograd) were cut off by a massive Soviet encirclement, URANUS.
- In November 1942 at Alamein a predominantly Italian force was defeated by Montgomery.
- The USA fought a largely naval and air war between 1942 and 1945, using its very great naval power to deploy troops in major amphibious operations, first in the Solomon Islands to halt the Japanese Pacific advance, then in TORCH, a combined American-British landing in Morocco and Algeria in November 1942.
The entry of the USA signaled a change of great significance in the political balance of the war. German forces in Stalingrad surrendered in January 1943, and by May 1943 Italian and German forces had finally surrendered in Tunisia, enabling the Allies to mount the invasion of Sicily and then Italy. Italy sued for an armistice in September 1943.
American economic might and political interests helped to bind together the different fronts of conflict, while America's worldwide system of supply and logistics provided the sinews of war necessary to complete the defeat of the aggressor states. A major intelligence deception operation and declining air power weakened the German response and by September 1944 German forces had been driven from France.
- Germany surrendered on 7 May 1945, following Hitler's suicide on 30 April.
- A long-range bombing campaign destroyed Japanese cities and most of the Japanese navy and merchant marine. America's newest weapon, the atomic bomb, was dropped on Hiroshima and Nagasaki in August 1945.
- Soviet forces destroyed the Japanese army in Manchuria; Japan finally capitulated on 2 September.
Many of the weapons that dominate military operations today were developed during World War I, including the machine gun, the tank, and specialized combat aircraft.
World War I
- After the war, the Paris Peace Conference imposed a series of peace treaties on the Central Powers. The 1919 Treaty of Versailles officially ended the war. Building on Wilson's 14th point, the Treaty of Versailles also brought into being the League of Nations on 28 June 1919. In signing the treaty, Germany acknowledged responsibility for the war, agreeing to pay enormous war reparations and award territory to the victors. It caused a lot of bitterness.
- Austria–Hungary was partitioned into several successor states.
- The Russian Empire lost much of its western frontier as the newly independent nations of Estonia, Finland, Latvia, Lithuania, and Poland were carved from it.
World War II
- The war ended with the total victory of the Allies over Germany and Japan in 1945. The United Nations was established to foster international cooperation and prevent future conflicts.
- The Soviet Union and the United States emerged as rival superpowers.
- Although the totalitarian regimes in Germany, Italy, and Japan were defeated, the war left many unresolved political, social, and economic problems in its wake and brought the Western democracies into direct confrontation with their erstwhile ally, the Soviet Union under Josef Stalin, thereby initiating a period of nearly half a century of skirmishing and nervous watchfulness as two blocs, each armed with nuclear weapons, faced each other probing for any sign of weakness.
- The European economy had collapsed, with 70% of its industrial infrastructure destroyed.
- A rapid period of decolonization also took place within the holdings of the various European colonial powers. This primarily occurred due to shifts in ideology, the economic exhaustion from the war, and increased demand by indigenous peoples for self-determination.
We know that teaching early reading skills using methods that utilize the senses—not just by looking at printed words on a page—is very effective. The good news is that you can do this easily at home. Added bonus: it’s fun, too!
This active approach helps engage your child both physically and mentally. Young children learn quickly through play and movement; take advantage of this preference with activities like these.
- Cut letters out of interesting textures like sandpaper, brown paper bags, or foam board.
- If your child is old enough, you can trace the letters and have him cut them out.
- When you play letter games together (like talking about letter names and sounds), have your child pick up the letter and trace it with her finger.
Letter (or Sight Word) Twister
- Using a Twister mat (or a homemade version), tape index cards with letters or short words on each dot. Then play Twister using the letters or words instead of the usual version.
Sidewalk Chalk Letter Practice and Word Building
- Call out letter sounds (“NNNNN!”) and ask your child to run and stand on them.
- Call out a letter name (“P!”). When your child runs and stands on it, ask him to say the letter’s sound.
- If your child is able, call out short words (“cat!”). Have your child run to each letter, pause, and then go to the next. Then have her write the word herself with the chalk.
Alphabet (or Sight Word) Scramble
- Scatter foam or magnetic letters on the ground. Ask your child to pick them up as you call them out, and then tell you the sound of the letter.
- Older kids can see how many words they can build out of the scattered letters in 30 seconds for a fun challenge.
Taste and Smell
You can really get creative in this category as your kids eat (and smell) their letters and words!
- Put some pudding or yogurt on a cookie sheet. Ask your child to trace letters or short words on the tray. Licking fingers is encouraged!
- Letter foods like cookies, noodles or crackers are great for practicing letter sounds and building short words. Yum.
- While you’re preparing a meal, ask your child to identify the ingredients or menu items–and then ask them to name the first letter sound and name. “Macaroni starts with M and sounds like MMMM!”
- Visit new places together that relate to stories you’ve read. For example, visiting a farm exposes your child to lots of smells and sights (and maybe even tastes) that he may have only read about before your trip.
- Songs are one of the most fun ways to integrate sound into early literacy. Sing together, make up riffs on your favorite songs or rhymes, and let your child hear and say as many sounds—especially rhyming words—as you can.
- Listen to audiobooks on your favorite device (or CD).
- Read aloud to your child!
- If your child is beginning to read, record his voice as he reads aloud; then let him follow along in the book as you play it back.
- Label everything! Write words like “door,” “wall,” and “window” on index cards. Walk around your house with your child and allow her to tape them where they belong. Leave them there until she can confidently read those words on her own.
- Watching good educational television helps build your child’s literacy skills too. Almost anything on PBS Kids is a great bet.
- Take a picture walk before you start a new book. Show your child the pictures in the book before you read, and share your ideas about what might happen in the story.
- Magnetic letters on the fridge–a classic for a reason!
Let these suggestions get you started. Bring in your own creativity or things you have around your house to keep it going—and have fun as you help support your child’s growth as a reader using all her senses.
Although the exact sampling distribution for the proportion defective is a binomial distribution, in which p is the probability of an individual item being defective, this Demonstration uses the normal approximation to the binomial distribution, which is valid for large n. A rule of thumb is that n and p should satisfy np ≥ 5 and n(1 − p) ≥ 5.
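This rule of thumb (in its common form, both np and n(1 − p) at least 5) can be written as a quick validity check. The sketch below is illustrative; the function name is mine, not part of the Demonstration:

```python
def normal_approx_ok(n, p):
    """Rule of thumb: the normal approximation to the binomial is
    reasonable when both n*p and n*(1-p) are at least 5."""
    return n * p >= 5 and n * (1 - p) >= 5
```

For example, a sample of 100 with p = 0.1 passes the check (np = 10), while a sample of 20 does not (np = 2).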
Suppose an experiment is to be conducted wherein a treatment of some sort is to be applied to a population. The investigator is interested in knowing the minimum sample size that should be randomly selected from the population to detect a change in the proportion having a certain characteristic as a result of the treatment. The statistical factors that influence the sample size are:
- the value selected for α, the type I risk (the risk of rejecting a true null hypothesis)
- the value selected for β, the type II risk (the risk of accepting the null hypothesis as true when, in reality, it is false)
- the value of the population proportion defective, p (this is often an estimate, based on judgement and experience)
- the minimum size of the change to be detected by the sample, δ
The type I risk is that of getting a false positive from the sample. In other words, there is a probability α that the sample will indicate that there is a difference of at least δ when, in fact, there is not.
The type II risk is that of getting a false negative from the sample. In other words, there is a probability β that the sample will indicate there is not a difference of at least δ when, in fact, there is such a difference. The ability to detect a difference when there actually is one is called the power of the test and is equal to 1 − β.
As mentioned, whether the test of the hypothesis is a one-sided or two-sided test has an effect on the sample size. A one-sided test would, for example, look either at whether the proportion defective is more than the hypothesized value OR at whether the proportion defective is less than the hypothesized value. A two-sided test would consider whether the proportion defective is significantly different from the hypothesized value, considering both tails of the distribution of the test.
The user of this Demonstration should mentally formulate a null hypothesis and an alternative hypothesis (either one- or two-sided). Then the appropriate control selections can be made and the resulting sample size can be examined. Typical values for α and β are 5% and 10%, respectively, resulting in a confidence of 95% and a test power of 90%.
Note that making the detected difference, δ, small drives a large sample size, as would be expected.
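Under the normal approximation, the four inputs listed above determine the minimum sample size. The sketch below uses the standard one-sample formula for detecting a shift in a proportion; it is an illustration of the calculation, not the Demonstration's own code, and the function and parameter names are mine:

```python
from math import ceil, sqrt
from statistics import NormalDist

def min_sample_size(p0, delta, alpha=0.05, beta=0.10, two_sided=True):
    """Minimum n to detect a shift of size delta in a proportion p0,
    with type I risk alpha and type II risk beta, using the normal
    approximation to the binomial."""
    p1 = p0 + delta
    # Critical z-values; a two-sided test splits alpha between the tails.
    z_alpha = NormalDist().inv_cdf(1 - (alpha / 2 if two_sided else alpha))
    z_beta = NormalDist().inv_cdf(1 - beta)
    n = ((z_alpha * sqrt(p0 * (1 - p0))
          + z_beta * sqrt(p1 * (1 - p1))) / delta) ** 2
    return ceil(n)
```

As the text notes, shrinking the detectable difference drives the sample size up: halving delta roughly quadruples the required n, since delta enters the formula squared in the denominator.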
Confinement — Gluing the Blocks Together
Why are quarks never alone? Counter to intuition, the force between them does not diminish
when there is distance between the quarks. The force is so strong that we must expend an
enormous amount of energy to try and pull the quarks apart. When the expended energy grows
large enough, a quark/anti-quark pair is created rather than letting the original quarks
separate (see illustration).
This property is called confinement. We observe this crucial phenomenon yet remain baffled by
its underlying cause. Understanding confinement is one of the most fundamental questions in
physics today. If quarks were not confined, the world would be a very different place.
Dr. Richard Terry, Department of Plant & Wildlife Sciences
Many of the activities of the ancient Maya did not leave artifactual or architectural remains for us to study, since many activities involved organic materials that biodegraded over time. Furthermore, the warm and humid climate accelerated the decomposition of most organic materials (Dahlin et al., 2007). However, minerals like phosphate contained in food and other organic materials are fixed on the soil surface, imprinting a chemical trace that can be analyzed (Barba and Ortiz 1992; Terry et al. 2000; Parnell et al. 2001). Geochemical techniques can thus be used to detect ancient human activity through its correlation with elements such as P, Fe, Cu, Zn, and Mn. Moreover, the results of these analyses can be mapped spatially to define the places where the activities took place.
Since all plant and animal foodstuffs contain P, its presence in soils and floors at elevated concentrations has been adopted as a proxy for ancient food preparation, consumption, and disposal areas. Other minerals, like Zn, are also correlated with P concentrations at these activity areas (Dahlin et al. 2007).
The objective of the mentored research was to use P prospection as a tool to locate ancient midden features at household groups at Tikal, Guatemala and at Wolf Village in Utah County. High P concentrations might indicate the places where past inhabitants discarded quantities of foodstuffs and broken ceramics, along with wood ashes and charcoal.
Methods – Chemical Analysis
We used the following procedure to extract and analyze soil phosphorus: 2 g of soil is mixed with 20 ml of Olsen bicarbonate extraction solution (0.5 M NaHCO3 at pH 8.5), using 50 ml jars attached to a board so that six samples can be analyzed at a time. The samples are shaken for 30 minutes and then filtered through 15 cm filter paper, with the filtrate collected in a similar gang of six 50 ml jars. Next, 1 ml of the solution is aliquoted into a colorimeter vial and diluted to 10 ml with deionized water. A packet of PhosVer 3 Reagent is added to each sample, which is shaken for 60 seconds and allowed to rest for four minutes for color development. Finally, the samples are measured with a Hach DR/850 Colorimeter, using the % Transmittance function. More information about the modifications to the method can be found in Terry et al. (2000).
Grids of 10 m intervals were established by means of flags and measuring tapes. Soil surface samples of 0-10 cm were collected by trowel following the removal of the leaf litter layer.
Results and Discussion
Phosphate prospection studies were conducted at the ruins of Tikal, Guatemala in February and March of 2010. Additional prospection studies were conducted at Wolf Village near Goshen, Utah in July. The results of those studies are presented below.
The 10×10 m grid included a major patio group of four structures in the northeast area of Operation 9. A patio group with smaller structures was located in the southeast corner of the Operation. The P concentrations in that patio were very low, in the range of 5-10 mg/kg. The highest P concentration (ca. 25-36 mg/kg) was found in the east side of the Operation between the two patio groups, suggesting a shared midden deposit. A number of “hot spots” or high P areas are visible in Figure 2, and their significance might be confirmed by archaeologists working on the area. The background P concentration of 5 mg/kg was determined by averaging 10 percent of the samples lowest in concentration.
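The background estimate described above (the mean of the 10 percent of samples lowest in concentration) is simple to compute. A minimal sketch, with a function name of my own choosing:

```python
def background_level(concentrations, fraction=0.10):
    """Estimate the background concentration (e.g. P in mg/kg) as the
    mean of the lowest `fraction` of the measured values."""
    ordered = sorted(concentrations)
    k = max(1, int(len(ordered) * fraction))  # use at least one sample
    return sum(ordered[:k]) / k
```

With very small sample sets, the estimate falls back to the single lowest measurement; with larger grids it averages the low tail, which damps the influence of any one anomalously low reading.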
Four minor structures and a small patio were found at the south end of Operation 13 (figure 3). On the southeast side of the patio group there is an area of high P concentration of ca. 40-58 mg/kg, while the background P concentration was about 9 mg/kg.
There is a pattern of elevated P concentration starting at the area of higher levels (58 mg/kg), and moving to the center, then to the northwest corner of the grid. The P levels represented in this pattern are ca. 20-30 mg/kg.
The patio group in Operation 14 (Figure 4) consisted of several elite structures surrounding a large patio. Smaller structures were found outside the patio group. A remarkably large patio surrounded by four structures is visible in the center of Figure 4. One area of high P concentration is found at the north end of the patio at the corner of the staircase of the north structure. Other areas of high P are to the west of the large patio near smaller structures. The higher P values are in the range of 40-60 mg/kg. The background P level in Operation 14 was 7 mg/kg.
The Terminos reservoir group is located in a heavily vegetated area on Puleston’s east transect (Puleston, 1983). A group of 10+ workers were necessary to cut down the vegetation in order to take soil surface samples. In Operation 15, a number of structures constituted a large patio group on the west side and a smaller patio group on the east side (Figure 5). P concentrations of ca. 45 mg/kg were found in only one area, on the southwest corner of the larger patio group. Background P levels were about 10 mg/kg.
In the course of two weeks we were able to analyze 383 surface soil samples in the field laboratory. The field laboratory allowed for timely analysis of soil samples to provide data to archeologists while the excavations were still open.
This study demonstrates the utility of geochemical analysis for the archaeological study of activity areas at the Wolf Village site. Geochemical studies help to evaluate the correlations between archaeological data and soil chemical data, which can then be applied to archaeological situations. The high concentrations of phosphorus are likely due to food processing and waste disposal.
The high concentrations of Cu and Fe are indicative of the ancient use of paints and pigments. There is evidence of the use of iron oxide (ochre) and copper pigments on pottery found at Wolf Village. The phosphorus and trace metal concentrations identified potential activity areas as well as reinforced those already known. For instance, the feature labeled F5 is a structure currently being excavated where there are high levels of P, Cu, and Fe. An example of an area of potential excavation for archaeologists is located at approximately 410E, 370N, because at this location the levels of P, Cu, and Fe were much higher than background levels. Geochemical and geostatistical analyses are useful tools for determining activity areas at ancient sites for further archaeological investigation.
- Barba, L. and Ortiz, A. 1992 Análisis químico de pisos de ocupación: un caso etnográfico en Tlaxcala, Mexico. Latin American Antiquity 3:63-82.
- Dahlin, B. H., Jensen, C. T., Terry, R. E. , Wright, D. R., and Beach, T. 2007 In Search of an Ancient Maya Market. Latin American Antiquity 18(4):363-385.
- Parnell, J. J., Terry, R. E., and Golden, C. 2001 The use of in-field phosphate testing for the rapid identification of middens at Piedras Negras, Guatemala. Geoarchaeology: An International Journal 16:855-873.
- Parnell, J. J., Terry, R. E., and Nelson, Z. 2002a Soil chemical analysis applied as an interpretive tool for ancient human activities at Piedras Negras, Guatemala. Journal of Archaeological Science 29:379-404.
- Puleston, D. E. 1983 Tikal Report No. 13: The Settlement Survey of Tikal. Philadelphia: The University Museum, University of Pennsylvania.
- Terry, R. E., Hardin, P. J., Houston, S. D., Jackson, M. W., Nelson, S. D., Carr, J., and Parnell, J. 2000 Quantitative phosphorus measurement: A field test procedure for archaeological site analysis at Piedras Negras, Guatemala. Geoarchaeology: An International Journal 15:151-166. |
Britannica Web Sites
Articles from Britannica encyclopedias for elementary and high school students.
- antibiotic - Children's Encyclopedia (Ages 8-11)
Doctors sometimes treat patients with a type of medicine called an antibiotic. Antibiotics treat illnesses and infections caused by bacteria, or tiny organisms. The first widely used antibiotic was penicillin. It was discovered in 1928.
- antibiotic - Student Encyclopedia (Ages 11 and up)
Certain medicinal substances have the power to destroy or check the growth of infectious organisms in the body. The organisms can be bacteria, viruses, fungi, or the minuscule animals called protozoa. A particular group of these agents is made up of drugs called antibiotics, from the Greek anti ("against") and bios ("life"). Some antibiotics are produced from living organisms such as bacteria, fungi, and molds. Others are wholly or in part synthetic, that is, produced artificially. Penicillin is perhaps the best known antibiotic. Its discovery and later development have enabled the medical profession to treat effectively many infectious diseases, including some that were once life-threatening.
As the pictures at the top of the facing page demonstrate, the overall patterns produced in all cases tend to look complex, and in many respects random. But the crucial point is that because of the way the system was constructed there is nevertheless a simple formula for the color of each cell: it is given just by a particular digit in the number obtained by raising the multiplier to a power equal to the number of steps. So despite their apparent complexity, all the patterns on the facing page can in effect be described by simple traditional mathematical formulas.
But if one thinks about actually using such formulas one might at first wonder what good they really are. For if one was to work out the value of a power m^t by explicitly performing t multiplications, this would be very similar to explicitly following t steps of cellular automaton evolution. But the point is that because of certain mathematical features of powers it turns out to be possible, as indicated in the table below, to find m^t with many fewer than t operations; indeed, one or two operations for every base-2 digit in t, for example, is always sufficient.
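The mathematical feature of powers at work here is repeated squaring. A generic sketch of the idea (an illustration, not Wolfram's own code):

```python
def fast_power(m, t):
    """Compute m**t by repeated squaring: at most two multiplications
    per binary digit of t, rather than t - 1 successive multiplications."""
    result = 1
    square = m
    while t > 0:
        if t & 1:            # current binary digit of t is 1
            result *= square
        square *= square     # square once per binary digit
        t >>= 1
    return result

# The color of a cell is then just a particular digit of this number.
print(fast_power(3, 13))  # -> 1594323, the same as 3**13
```

For t around a million this needs roughly 20-40 multiplications instead of a million, which is why the formula is a genuine shortcut over step-by-step evolution.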
So what about other patterns produced by cellular automata and similar systems? Is it possible that in the end all such patterns could just be described by simple mathematical formulas? I do not think so. In fact, as I will argue in Chapter 12, my strong belief is that in the vast majority of cases it will be impossible for quite fundamental reasons to find any |
By FLIPPO GRAVATT
Dead chestnuts on the Blue Ridge Plateau, Virginia.
AMONG the trees of the world, chestnuts, Castanea spp., are outstanding because they produce large crops of tasty nuts and numerous valuable products such as durable posts, lumber, and tannin-extract wood. The forest stands of the American chestnut, however, have been destroyed by two serious fungus diseases, blight and phytophthora root rot (ink disease). Both these diseases are serious on European chestnut also. In part of Europe the ink disease has seriously damaged that species and now the blight, already found in Italy and Spain, is an immediate menace to chestnut throughout that continent. In contrast, Asiatic species of chestnut are resistant to both these diseases.
Chestnut blight is caused by a fungus named Endothia parasitica. This fungus grows in the bark, cambium, and outer wood of affected trees. Within the bark and cambium region the fungus growth appears as characteristic buff-colored fans (page 5). These fans indicate the presence of the blight. However, if the chestnut tissue has been injured or partially killed by freezing or some other factor, the blight fungus will grow through it without forming fans. On the outer bark surface the fungus produces reddish-brown fruiting bodies about the size of pinheads and of two types. Under moist conditions sticky tendrils of spores ooze from one kind of fruiting body. These spores are spread by birds, animals, and insects; the other type of fruiting body produces wind-borne spores.
Usually the first symptom of the disease to be noted outwardly is the dying of a limb to which are attached yellow-brown leaves that contrast with the green of the rest of the tree. On examination of such a limb, a girdling blight canker will be found. Sometimes the cankered area is swollen, and sometimes it is sunken (page 6); but slicing away the bark will always disclose the fungus fans if the blight is the primary cause of the canker. On some trees, even of susceptible species, the blight fungus may grow extensively in the outer bark for four or five years before it extends into the cambium region and then kills the tree.
There are four species of the genus Castanea in Asia. The Chinese chestnut, Castanea mollissima, is a medium-size tree much used for orchard and home planting. This species is reported by Chinese authorities to reach a height of 15 to 18 m. (50 to 60 ft.) in forests. It is not so upright growing as the American chestnut and shows quite a tendency to spread out when not crowded. In two plantings in the eastern United States, 22-year-old Chinese chestnuts growing on good sites are 15 m. (50 ft.) tall and are still rapidly increasing in height. Perhaps this species may do better as a forest tree in the United States than in China. The nuts range in size from 55 to 275 to the kilogram (25 to 125 or more to the pound) and are quite sweet, decidedly sweeter than Japanese chestnuts. Orchard production is mostly on a seedling rather than a grafted variety basis. The range of the tree in China extends from north of the Great Wall in the vicinity of Peiping south to Yunnan, and to the Kwangsi and Kwangtung provinces. The Chinese chestnut is also planted in northwestern Korea.
The Chinese chestnut shows general resistance to the blight in both China and the United States of America. Under good growing conditions, where the trees are not weakened by such factors as freezing, defoliation, or crowding, most of the trees are highly blight-resistant. The disease may be present in the outer bark of the trunk, especially in limb crotches and near the ground, without extending into the cambium region. However, on limbs that are weakened by shading, the fungus may grow into the cambium region and hasten death of the part. Abundant reddish-brown fungus fruiting bodies are then produced on the dead limb. Small trees are sometimes girdled and seemingly killed back to the ground by the blight, especially when they are weakened by late spring freezes, but most of them send up vigorous sprouts. Some such sprouts that have been under observation in the United States for 15 or 20 years have shown no further damage from the blight.
In China there are at least two other species of the genus Castanea, namely, the Henry chinkapin, Castanea henryi, and the Seguin chestnut, Castanea seguinii. The Henry chinkapin is a tall forest tree, up to 30 m. (100 ft.) in height. Because it produces a small single nut in the bur, it is classified as a chinkapin. This tree grows in central and eastern China but is not frequent. The blight has not been reported on it in China, but in the United States this species is more susceptible than the Chinese chestnut. The Seguin chestnut is a small, bushy plant. It is very prolific, produces three nuts to the bur, and under some conditions continues blooming even into the fall. The blight has not been reported on this species in China, and in the United States the species is very resistant to the disease.
The Japanese chestnut, Castanea crenata, is widely distributed in Japan and southern Korea as a forest tree and is extensively planted in Japan as an orchard tree. Many horticultural varieties are known and their nuts vary greatly in size. Some large ones run 11 to the kilogram (5 to the pound), while some of the wild forest Japanese chestnuts are about the size of the American chestnut and run 275 to the kilogram (125 or more to the pound). The blight is widely scattered in Japan but is not abundant. It sometimes injures orchard trees. In the United States the Japanese chestnut has been more susceptible to the blight than the Chinese chestnut, and in general it is less hardy.
American chestnut attacked by the blight.
Chestnut blight was first reported at New York City in 1904. It spread rapidly and eventually killed all the American chestnuts in its natural range in the eastern United States (page 6). Sprouts continue to come up from the base of the killed trees, and sometimes these sprouts bear crops of nuts. However, they in turn are usually killed before they reach a height of 6 m. (20 ft.).
In the southeastern United States there are in the genus Castanea several species of chinkapin, ranging in size from small, low, flat-growing shrubs to trees sometimes 30 cm. (1 ft.) or more in trunk diameter. All of these species are susceptible to blight, and large numbers have been killed by it. Blight has not yet been reported from the Ozark Mountains, west of the Mississippi River, where the largest of the species, Castanea ozarkensis, is native, but in time it undoubtedly will be found there.
From 1912 to 1914 an effort was made by the State of Pennsylvania, with the co-operation of the United States Department of Agriculture, to stop the spread of blight. The program was based on insufficient knowledge, because later studies showed that the disease was more widely distributed in different States than was suspected at that time and was exceedingly hard to control. The State control work undoubtedly delayed the spread of the blight for some years and gave owners of millions of acres of chestnut more time to market their timber stands. It was found that advance small spot infections could be kept from spreading, even though located in the midst of continuous growth of highly susceptible trees. However, very careful work was necessary, including burning of all parts and treatment of the peeled stump with creosote. Frequent reinspection for several years was necessary, as nearly always some infected trees were missed. The fungus fruits on untreated stumps, on chips or pieces of bark that are missed, on small twigs, and on exposed roots. Anyone attempting eradication, even on a less susceptible species like the European chestnut, must take extreme precautions.
The difficulties, but also the success, in holding the blight in check are shown by the work on the Pacific Coast of North America. The first infection was found on European and Japanese chestnuts at Agassiz in British Columbia. The infected trees were cut out, but the disease reappeared on several other European chestnuts, which in turn were destroyed. A second infection was found in the State of Oregon, originating from a shipment of chestnut nursery stock from the infected eastern States. This was cut out in 1929 and again in 1934, and no further infection has been found there. A third infection, at Seattle, in the State of Washington, presumably originated from a planting of infected nuts of the American chestnut that came from the eastern States. The infected trees were destroyed and no further disease has been noted at Seattle.
In 1934 the blight was found in several irrigated orchards of European chestnuts in California. All infected trees were immediately burned by State authorities and all known chestnut trees in the State were inspected by State and Federal pathologists. Each year since then a few infected trees have been found in the previously infected orchards, and these have been burned. Inspection and control are supplemented by a rigidly enforced embargo that prevents shipments of chestnut nursery stock into California from the infected eastern States. Another important factor which should contribute to the successful stopping of the blight is the isolated character of the plantings, with no known susceptible host plants in the vicinity. In contrast there frequently were 4,000-8,000 highly susceptible trees per km² (10,000-20,000 to the square mile) in the eastern chestnut forests. Such a concentration of susceptible trees provides ideal conditions for spread of the disease and makes it difficult for inspection to locate all infections. The inability to eradicate the disease in California has therefore been puzzling.
A point of interest about the blight in California is its apparent virulence on the seedling European chestnuts. Although the trees were growing under the best of conditions, the cankers continued to enlarge with only a little evidence of any resistance to the enlargement as indicated by callus formation. European chestnuts in the eastern United States have in general shown more resistance than the American chestnut to the blight before being killed, but none under observation has shown as much resistance as the ones reported by Professor A. Pavari in Italy.
The chestnut blight fungus grows and fruits also on a number of species of oak. Occasionally typical cankers form on twigs or branches of the chestnut oak, Quercus montana. The fungus gains entrance through wounds into the wood of a number of native oaks, grows inwards for several inches, but seems to cause little damage. The only native American oak that so far has been damaged is the post oak, Quercus stellata. In many areas this widely distributed tree, with a standing volume of over 23 million m³ (5,000 million bd. ft.), is seriously damaged by the blight. Some trees have their tops killed, some have open cankers along their trunks; on many others the fungus grows in the outer bark without reaching the cambium and doing any damage. However, a large proportion of the trees exposed to the blight for 25 years are not damaged. The blight has not been found on post oak outside the area where chestnut has been killed.
In addition to the oaks, related species of Castanopsis from Asia and one from the Pacific Coast have been killed by the blight fungus in greenhouse inoculation tests. Sometimes the fungus grows as a saprophyte on red maple, Acer rubrum, shagbark hickory, Carya ovata, and staghorn sumac, Rhus typhina.
The search for American chestnut resistant to the blight is still unsuccessful. Thousands of trees have been reported as resistant; hundreds of the best of these have been propagated for further testing, which is still under way, but to date there is no selection considered sufficiently resistant to warrant propagation. Considering that the host chestnut covered millions of acres over a wide climatic range, it is quite remarkable that all trees seem to be so susceptible to the blight. In the areas where the blight has been present 25 years or more, a very few large trees are still alive and struggling against the disease, but each year a few more of them die. Sprouts are prevalent everywhere.
Mycelial fans of the chestnut blight fungus. On most specimens only scattered small fans will show, not an extensive fan growth as shown here.
The European chestnuts and their hybrids growing in orchards or as ornamentals in the eastern States have nearly all been killed by the blight. The most extensively planted variety, the Paragon, had been grafted on American sprouts and these were quickly killed.
Forest plantings of Chinese chestnuts made by Dr. J. D. Diller of the Division of Forest Pathology, U. S. Plant Industry Station, in the eastern United States have grown well on deep, fertile soils with good air drainage. On old, abandoned fields, however, the plantings nearly always failed. Even on good sites, the best results were obtained by girdling the existing forest growth and allowing the chestnuts to grow up under the protection of these girdled trees, with less deterioration of the site than is usual with clear-cutting. Trees grown from seed of certain selected Chinese chestnut trees give better growth and are hardier than those from other trees. The tannin content of the Chinese chestnuts grown in the United States equals or is slightly higher than that of the American chestnut, which still supplies most of the vegetable tannin produced in this country (Fig. 4).
A sunken and swollen canker on American chestnut.
Some State conservation, forest, and wildlife departments are now growing Chinese chestnuts and distributing them to farmers for woodland plantings. The Chinese chestnut seedlings are being widely planted as orchard and ornamental trees in the eastern United States. Two private nurseries, for instance, are each advertising 75,000 seedlings for sale this year. A limited number of trees of grafted varieties are planted now, but as the supply of grafted trees increases they will be more extensively used.
Map showing the rate at which chestnut blight spread over the Eastern United States. The dated lines show the extent of the heavy infection at the time indicated.
Breeding to combine some of the desirable characteristics of the different chestnuts is under way by Mr. R. B. Clapper of the Division of Forest Pathology, U. S. Plant Industry Station, and Dr. A. H. Graves of Hamden, Connecticut. Most of the F1 hybrids between the Asiatic chestnuts and the American chestnut have been susceptible to the blight, but the progeny of one cross made by Mr. Clapper is very promising. The Chinese chestnut used in this cross came originally from Tientsin in China. A few of the sixteen surviving plants of this cross are shown on page 7; though each of them has the blight fungus growing in the outer bark, so far these have not been damaged. It will be many years before the final value of this hybrid as a forest tree is determined, but it is by far the best one secured among many Asiatic and American hybrids. Unfortunately, crossing F1 Chinese-American or Japanese-American hybrids back onto the Asiatics to secure more resistance usually results in a loss of the vigorous upright growth habit of the American chestnut. Hybrids between the other Asiatic chestnuts and American chestnuts and chinkapins are under test, but so far nothing of outstanding value has been secured.
The dead chestnut trees are chipped up at this plant, the tannin extracted, and the chips then used for pulp products.
Pollen from some of the Chinese hybrids and selected pure Chinese trees has been sent by air mail to Professor A. Pavari at Florence, Italy, for the past two seasons. Use of this pollen on bagged flowers of European chestnuts has given hybrid nuts both years. Nuts and scions from the United States have also been sent, but the viable pollen gives quicker results in combining the desirable qualities of the European chestnuts with the selected trees from this country. World-wide airmail service will especially facilitate tree breeding since most trees must grow many years before they produce pollen.
The tragedy of the destruction or threatened destruction of the chestnut in many lands should serve as a warning against the dangers of tree diseases spreading from one country to another, especially since the phenomenal increase of airplane transportation enhances this danger.
Hybrids between the American and Chinese chestnuts. Note the vigorous upright growth, characteristic of the American parent. Flowers are being bagged to secure the F2 generation.
Photos by courtesy U. S. Forest Service |
Arabic Alphabet Letters
Check out this introduction to Arabic worksheet that explains the fundamentals of writing the Arabic script to kids.
Kids get to learn Arabic calligraphy by writing the letter Alif with this cute worksheet that also helps develop Arabic vocabulary.
Kids learning how to write in Arabic can practice writing the letter "ba" with this cool worksheet that also helps them build their Arabic vocabulary.
Kids can practice their Arabic writing skills with this fun worksheet that has them drill the letter "Tā'" to learn new vocabulary and practice handwriting.
Learn the Arabic Alphabet! Kids practice their vocabulary, pronunciation, and writing skills with this attractively illustrated worksheet on the letter "Thā'."
It's the Arabic Alphabet for kids! Practice writing Jīm in its initial, medial, final, and isolated forms with this cool worksheet.
Check out this cool Arabic alphabet worksheet from our Arabic alphabet series. Kids practice writing and pronouncing the letter Ḥā'.
Practice the iconic Arabic consonant "Khā'." These worksheets make learning the Arabic alphabet easy for children getting acquainted with the language.
Kids have fun learning the Arabic language with this cute and cool worksheet that has them practice writing and pronouncing the letter Dāl.
Kids get some valuable Arabic alphabet practice with this cool learning Arabic worksheet by practicing how to read and write the letter "Dhāl." |
If your child’s asthma is triggered when playing sports, it’s important to be able to work with the coach to help manage the symptoms and ensure your child has a positive experience. Your child should be able to participate fully in sports and exercise even though he or she has asthma. In fact, staying physically active is important to a healthy lifestyle and helps keep your child’s lungs strong and healthy.
Unfortunately, many coaches aren’t prepared to handle children with asthma. In 2011, only half of children’s athletic coaches knew how to recognize asthma symptoms, and a third of them hadn’t received sufficient training to deal with the needs of asthmatic team members, according to a study by Cooper University Hospital.
Consequently, you may need to take matters into your own hands and provide this information to your child’s coach.
Exercise-induced asthma generally occurs when the air is cool and dry and the child is participating in an endurance sport. Poor ventilation and air filled with pollutants can also trigger symptoms during exercise.
However, scientists are still baffled as to why exercise causes attacks in some asthma sufferers. During an asthma attack, the small bronchial tubes of the lungs tighten from swelling and inflammation, and muscle spasms in the bronchial walls can occur. This results in symptoms such as wheezing, coughing, tightness in the chest and difficulty breathing.
If your child experiences exercise-induced asthma symptoms, being proactive about managing them can go a long way toward helping him or her succeed at sports.
Talking to a Health Care Provider
If your child has exercise-induced asthma, control may come in the form of medication and following a physician’s recommendations before participating in physical activity. Your physician may need to make some adjustments to your child’s asthma control medication, and he or she may also prescribe use of quick-relief medication before exercise.
Other ways a child can practice both asthma control and sports include:
- Cross training or trying different sports to see if one is easier on the lungs.
- Using a scarf or face mask when the air is cold.
- Avoiding exercise in the early morning.
- Increasing the child’s fitness level. (If a child is out of shape, his or her asthma symptoms may improve as fitness improves.)
Creating an Asthma Action Plan
A great tool to keep in your child’s asthma control arsenal is an Asthma Action Plan. You can get a blank Asthma Action Plan form from the Lungtropolis website or ask your child’s doctor to fill one out. The Asthma Action Plan is a simple form to help you manage your child’s asthma. It contains the following information:
- Medications, along with dosage and timing, including medication to take before exercising
- Known asthma triggers
- Emergency contact information
- Steps to take when asthma symptoms appear
- What to do if your child has a breathing emergency
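For families who like to keep a digital copy alongside the paper form, the plan's fields map naturally onto a simple structured record. This sketch is purely illustrative; the field names and example entries are invented, and any real medication, dosage, or emergency step should come from your child's physician:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AsthmaActionPlan:
    """Hypothetical record mirroring the fields listed above."""
    medications: List[str]        # name, dosage, timing (incl. pre-exercise)
    triggers: List[str]           # known asthma triggers
    emergency_contacts: List[str]
    symptom_steps: List[str]      # steps when symptoms appear
    emergency_steps: List[str]    # steps in a breathing emergency

# Example entries only; consult a physician for a real plan.
plan = AsthmaActionPlan(
    medications=["quick-relief inhaler, 2 puffs, 15 min before exercise"],
    triggers=["cold dry air", "endurance running"],
    emergency_contacts=["Parent: 555-0100"],
    symptom_steps=["stop activity", "use quick-relief inhaler"],
    emergency_steps=["call emergency services"],
)
print(len(plan.triggers))  # -> 2
```

A copy printed from such a record can be handed to the coach, assistant coach, and school nurse, matching the advice in the next section.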
Communication with Coaches
The best way to make sure your child’s coach knows what to do if your child has symptoms during exercise or sports is to tell him or her yourself. Give the coach a copy of your child’s completed Asthma Action Plan and explain that your child needs to pre-medicate before exercise. Also describe the signs that indicate your child’s asthma symptoms are getting worse and the medication your child needs to use when that happens. Give a copy of the action plan to the assistant coach and the school nurse, as well, so they have a better understanding of your child’s condition.
The interactive web-based learning game Lungtropolis® was created to help children ages 5-10 control their asthma. The site also incorporates resources for parents featuring comprehensive tips on caring for a child with asthma, like how to start an asthma action plan.
Susan Schroeder, MPH, MCHES, PMP, is a Research Scientist at ORCAS. She has over 12 years’ experience as an intervention designer and content developer of Web-based health programs. Ms. Schroeder is Principal Investigator on the Multimedia Asthma Self-Management Program and working on four other NIH-funded projects to develop innovative mHealth self-management solutions for physical and emotional well-being. |
A Level 1 Amicus Reader that describes tiny animals around the world. Examples include the pygmy marmoset, bee hummingbird, and ladybug. Includes comprehension activity.
Highlights animals of all types known for their large size, including the blue whale, the Goliath beetle, and more. Includes comprehension activity.
Introduces the opposites big and small by comparing such animals as big blue whales and small hermit crabs.
Introduces the opposites near and far by comparing the behavior of such animals as fox pups that stay near dens and butterflies that migrate far south.
Introduces the opposites up and down by comparing the behavior of such animals as eagles up in the air and fish down in the sea.
Introduces synonyms for big by comparing large, huge, massive, and enormous sea animals.
Kids in a classroom practice measuring favorite objects they brought from home using different units of measurement and comparing the objects.
Introduces differences in size by comparing groups of big animals, such as big turtles, wildcats, and ocean creatures.
Introduces differences in length and height by comparing dog breeds and their features, such as legs, ears, and noses.
Introduces differences in weight by comparing heavy, heavier, and heaviest everyday machines, such as motorcycles, cars, and trucks.
Introduces differences in lengths by comparing lengths of baseball diamonds, tennis courts, Olympic swimming pools and other distances in sports.
Introduces differences in size by comparing groups of small animals, such as small birds, turtles, and fish.
Introduces differences in height by comparing groups of tall landmarks and structures throughout the world, such as skyscrapers, bridges, and mountains. |
Nomenclature of alkenes and alkynes
Ethylene and acetylene are synonyms in the IUPAC nomenclature system for ethene and ethyne, respectively. Higher alkenes and alkynes are named by counting the number of carbons in the longest continuous chain that includes the double or triple bond and appending an -ene (alkene) or -yne (alkyne) suffix to the stem name of the unbranched alkane having that number of carbons. The chain is numbered in the direction that gives the lowest number to the first multiply bonded carbon, and that number is added as a locant prefix to the name. Once the chain is numbered with respect to the multiple bond, substituents attached to the parent chain are listed in alphabetical order and their positions identified by number.
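The counting-and-suffix rule for unbranched chains can be sketched mechanically. The stem table and helper below are illustrative only; they ignore branching, substituents, and the many finer points of the IUPAC rules:

```python
# Alkane stems for chain lengths 2-8 (simplified illustration).
STEMS = {2: "eth", 3: "prop", 4: "but", 5: "pent", 6: "hex", 7: "hept", 8: "oct"}

def name_unbranched(n_carbons, bond_position, suffix="ene"):
    """Name an unbranched alkene ('ene') or alkyne ('yne'):
    stem for the chain length plus the locant of the multiple bond."""
    stem = STEMS[n_carbons]
    if n_carbons <= 3:               # only one position is possible
        return stem + suffix
    return f"{bond_position}-{stem}{suffix}"

print(name_unbranched(2, 1))         # -> ethene
print(name_unbranched(5, 2))         # -> 2-pentene
print(name_unbranched(4, 1, "yne"))  # -> 1-butyne
```

Even this toy version captures the key idea: the name is determined by chain length, bond type, and the lowest-numbered multiply bonded carbon.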
Compounds that contain two double bonds are classified as dienes, those with three as trienes, and so forth. Dienes are named by replacing the -ane suffix of the corresponding alkane by -adiene and identifying the positions of the double bonds by numerical locants. Dienes are classified as cumulated, conjugated, or isolated according to whether the double bonds constitute a C=C=C unit, a C=C−C=C unit, or a C=C−(CXY)n−C=C unit, respectively.
Double bonds can be incorporated into rings of all sizes, resulting in cycloalkenes. In naming substituted derivatives of cycloalkenes, numbering begins at and continues through the double bond.
Unlike rotation about carbon-carbon single bonds, which is exceedingly rapid, rotation about carbon-carbon double bonds does not occur under normal circumstances. Stereoisomerism is therefore possible in those alkenes in which neither carbon atom bears two identical substituents. In most cases, the names of stereoisomeric alkenes are distinguished by cis-trans notation. (An alternative method, based on the Cahn-Ingold-Prelog system and using E and Z prefixes, is also used.) Cycloalkenes in which the ring has eight or more carbons are capable of existing as cis or trans stereoisomers. trans-Cycloalkenes are too unstable to isolate when the ring has seven or fewer carbons.
Because the C−C≡C−C unit of an alkyne is linear, cycloalkynes are possible only when the number of carbon atoms in the ring is large enough to confer the flexibility necessary to accommodate this geometry. Cyclooctyne (C8H12) is the smallest cycloalkyne capable of being isolated and stored as a stable compound.
Ethylene is formed in small amounts as a plant hormone. The biosynthesis of ethylene involves an enzyme-catalyzed decomposition of a novel amino acid, and, once formed, ethylene stimulates the ripening of fruits.
Alkenes are abundant in the essential oils of trees and other plants. (Essential oils are responsible for the characteristic odour, or “essence,” of the plant from which they are obtained.) Myrcene and limonene, for example, are alkenes found in bayberry and lime oil, respectively. Oil of turpentine, obtained by distilling the exudate from pine trees, is a mixture of hydrocarbons rich in α-pinene. α-Pinene is used as a paint thinner as well as a starting material for the preparation of synthetic camphor, drugs, and other chemicals.
Other naturally occurring hydrocarbons with double bonds include plant pigments such as lycopene, which is responsible for the red colour of ripe tomatoes and watermelon. Lycopene is a polyene (meaning many double bonds) that belongs to a family of 40-carbon hydrocarbons known as carotenes.
The sequence of alternating single and double bonds in lycopene is an example of a conjugated system. The degree of conjugation affects the light-absorption properties of unsaturated compounds. Simple alkenes absorb ultraviolet light and appear colourless. The wavelength of the light absorbed by unsaturated compounds becomes longer as the number of double bonds in conjugation with one another increases, with the result that polyenes containing regions of extended conjugation absorb visible light and appear yellow to red.
The hydrocarbon fraction of natural rubber (roughly 98 percent) is made up of a collection of polymer molecules, each of which contains approximately 20,000 C5H8 structural units joined together in a regular repeating pattern.
The lower alkenes (through four-carbon alkenes) are produced commercially by cracking and dehydrogenation of the hydrocarbons present in natural gas and petroleum (see above Alkanes: Chemical reactions). The annual global production of ethylene averages around 75 million metric tons. Analogous processes yield approximately 2 million metric tons per year of 1,3-butadiene (CH2=CHCH=CH2). Approximately one-half of the ethylene is used to prepare polyethylene. Most of the remainder is utilized to make ethylene oxide (for the manufacture of ethylene glycol antifreeze and other products), vinyl chloride (for polymerization to polyvinyl chloride), and styrene (for polymerization to polystyrene). The principal application of propylene is in the preparation of polypropylene. 1,3-Butadiene is a starting material in the manufacture of synthetic rubber (see below Polymerization).
Higher alkenes and cycloalkenes are normally prepared by reactions in which a double bond is introduced into a saturated precursor by elimination (i.e., a reaction in which atoms or ions are lost from a molecule).
These usually are laboratory rather than commercial methods. Alkenes also can be prepared by partial hydrogenation of alkynes (see below Chemical properties).
Acetylene is prepared industrially by cracking and dehydrogenation of hydrocarbons as described for ethylene (see above Alkanes: Chemical reactions). Temperatures of about 800 °C (1,500 °F) produce ethylene; temperatures of roughly 1,150 °C (2,100 °F) yield acetylene. Acetylene, relative to ethylene, is an unimportant industrial chemical. Most of the compounds capable of being derived from acetylene are prepared more economically from ethylene, which is a less expensive starting material. Higher alkynes can be made from acetylene (see below Chemical properties) or by double elimination of a dihaloalkane (i.e., removal of both halogen atoms from a disubstituted alkane). |
RXTE Discovers High Frequency QPOs - August 1996
The Rossi X-ray Timing Explorer (RXTE) has discovered neutron stars that emit streams of X-rays that pulse over 1,000 times a second. The pulses are not strictly periodic, but vary slightly from cycle to cycle. Astronomers call them "quasi-periodic oscillations" or QPOs. This just means that the pulses are almost, but not quite, periodic.
A neutron star is the superdense remains of an exploded star that gravitationally collapsed back in on itself to form a small, compressed core of neutrons. It is not unusual for a neutron star to emit X-rays. When a neutron star is in a binary system with a sun-like star, matter is gravitationally pulled off this stellar companion. As the matter falls toward the neutron star, it emits X-rays. As UC Berkeley astrophysicist Jonathan Arons said, "It's the sound of matter going splat."
Sometimes the emitted X-rays are pulsed, or modulated by the spinning of the neutron star. The pulse period seen in the X-ray emission is exactly the same as the spin period of the star. However, another type of pulsation, the QPO, was found in the mid-1980s by the European X-ray Observatory Satellite (EXOSAT). The cycle times of these QPOs were between 6 and 20 times a second for most of the sources in which this behavior was observed. It was also noticed that the average period of the oscillations varied as the overall X-ray brightness of the source varied: the brighter the source was in X-rays, the shorter the QPO period. In other words, the central frequency varies with source intensity.
It was theorized that this modulation of X-rays was due to the difference in frequency between the matter's orbital period around the neutron star and the spin period of the neutron star. This difference is called the beat frequency.
The beat frequency is important. The neutron star and the matter orbiting it move at different rates. For the matter to be able to penetrate the neutron star's magnetosphere and cause the QPO, the matter has to be at the right place at the right time. This only occurs at the beat frequency. (See diagram below. Diagram is based on a drawing originally done by Michiel van der Klis and published in Exploring the X-ray Universe by Philip Charles and Frederick Seward (1995). Used by permission.)
The beat frequency model would also explain the 6-20 Hz QPOs that EXOSAT observed. Until RXTE, it was hard to confirm this theory, because most of the systems in which QPOs occur did not allow for direct measurement of the neutron star spin period.
Because of its much higher sensitivity and time resolution, RXTE has not only discovered QPOs at much higher frequencies than 6-20 Hz, it may also have verified the "beat frequency" model. RXTE has directly observed the neutron star spin frequency, the frequency of the orbiting material, and the beat frequency for a particular X-ray source. This source, named 4U 1728-34, has a neutron star rotation frequency of 363 Hz. The frequency of the orbiting material is around 1100 Hz, and the beat frequency, or difference between the two, is around 700 Hz; this is what theory predicts.
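The arithmetic behind the beat frequency is simple subtraction. The sketch below uses the 363 Hz and roughly 1100 Hz figures quoted above for 4U 1728-34; the code is only illustrative:

```python
# Beat frequency for 4U 1728-34, using the figures quoted in the text.
spin_freq_hz = 363.0     # directly observed neutron star rotation frequency
orbit_freq_hz = 1100.0   # approximate frequency of the orbiting material

# The beat frequency is the difference between the two rates.
beat_freq_hz = orbit_freq_hz - spin_freq_hz
print(beat_freq_hz)  # 737.0 Hz, consistent with the "around 700 Hz" QPO observed
```

The small gap between 737 Hz and the observed ~700 Hz reflects the approximate nature of the orbital frequency measurement.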
As an alternative explanation to the beat frequency theory, some scientists now speculate that these rapid QPOs are caused by hot bubbles of radiation bursting on the neutron star's surface and colliding with the infalling matter.
As RXTE continues to observe the high-energy universe over the years to come, astronomers hope to get the data which will not only allow them to understand what is happening, but why, and how. |
Before the invention of CNC machining and mills, metalworking and fabrication was being done by numerical control (or NC) machines. The NC machines were invented in the late 1940s by John T. Parsons and the Massachusetts Institute of Technology (MIT). They had been commissioned by the United States Air Force, and the goal of their work was to find a more cost-effective way to manufacture aircraft parts with intricate geometries. NC became the industry standard.
The CNC mill was not a possibility until the late 1960s, when the concept of computer-controlled machining started to circulate. The early 1970s saw significant developments in CNC machining and the CNC mill. 1976 marked the first year 3D Computer-Aided Design/Computer-Aided Machining systems were made available. By 1989, CNC machines had become the industry standard.
The old NC machines had been controlled by punch cards containing a set of codes called G-codes, which gave the machine its positioning instructions. A large sticking point with these machines was that they were hardwired, which made it impossible to change any pre-set parameters. As CNC machines and CNC mills became more prevalent and took over, G-codes continued to be used as a means of control, but now they were designed, controlled, and managed through computer systems. Today, the G-codes in CNC machines have been combined with logical commands to form a new programming language used to write parametric programs, and machines that support it allow the worker to make real-time adjustments.
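To illustrate the parametric idea (a hypothetical sketch, not any particular controller's dialect), the Python function below generates standard G-code moves — G0 for a rapid move, G1 for a cutting move — tracing a square whose side length is a single parameter. Changing that one parameter regenerates every positioning instruction, which a hardwired NC machine could never do:

```python
def square_gcode(side_mm, feed_rate=200):
    """Emit G-code moves tracing a square of the given side length.

    A change to the single side_mm parameter regenerates every
    positioning instruction -- the core idea behind parametric programming.
    """
    corners = [(side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    lines = ["G0 X0 Y0"]  # rapid move to the starting corner
    for x, y in corners:
        # G1 = linear (cutting) move at the given feed rate
        lines.append(f"G1 X{x} Y{y} F{feed_rate}")
    return lines

for line in square_gcode(25):
    print(line)
```

A worker adjusting the part size would change only the argument to `square_gcode`, rather than re-punching an entire deck of cards.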
When many people think of slavery, they think of the transatlantic trade that took place between Africa, the Americas and the Caribbean. The legacy of enslavement in the Americas (particularly in the United States) is known globally through the cultural and political impact of African-American iconography, films, history and references in popular culture. For many people of African descent across the world, it is one of the clearest historical links that binds us together, even if we do not have west African or American ancestry.
But the slave trade across the Atlantic Ocean is not the only history of longstanding mass global enslavement. Less well-known is a system that went on for centuries longer, but which took place across the opposite ocean: the Indian Ocean. The Indian Ocean slave trade encompassed Africa, Asia and the Middle East, with people from these areas involved as both captors and captives.
The numbers of people enslaved and the exact length of the trans-Indian slave trade have not been definitively established, but historians believe that it preceded the transatlantic enslavement by centuries. Even though it is largely ignored as an international slave trade, examples of its impact abound. Writing on Indian Ocean slavery frequently mentions African people in China and Persia as well as in the Muslim holy cities of Mecca and Medina, which also served as central slave markets.
The longevity of the Indian Ocean slave trade is also evident in key historical moments. Long before the slave revolt of Haiti under Toussaint L’Overture, which is touted as the most successful slave revolt in modern history and established the first black republic in the western hemisphere, African slaves in the southern Iraqi city of Basra established political power centres in Iraq and parts of present-day Iran for a period of fourteen years. The Zanj rebellion, and subsequent rule of East African slaves in parts of Iraq, took place between 869 and 883 AD.[1] Centuries later, when Barack Obama was elected as the first African-American president of the United States, his election proved inspirational to the black descendants of those slaves who continue to live in Basra.
But focussing solely on African people enslaved across Asia would hide the extent of Indian Ocean slavery: Asian people were enslaved for centuries as well, with Asian slaves who survived shipwrecks of European ships found living with the indigenous population on South Africa’s coast long before colonisation. There are also reports of Indian people enslaved and living in Kenya and Tanzania, and later there was the large-scale movement of enslaved Asian people sent to work in colonial South Africa, starting from Dutch colonisation in 1652. Enslaved Asian people in South Africa came from as far afield as Japan and Timor, but the majority were from India, Sri Lanka, Indonesia and China.
In addition, men from Baluchistan in present-day Pakistan are regularly mentioned working as guards in relation to the slaving community based in Tanzania in the 1800s, overseen by the Omani sultanate who ruled Zanzibar, and Indian and Chinese slaves were to be found in South Africa, as well as in parts of the African eastern coast.
The Ottoman Empire enslaved non-Muslim populations in the Balkans, and women were often the target for sexual slavery, hence the Orientalist “allure” of the harem, and likely the source of the term “white slavery”. Afro-Turks also continue to live in Turkey. At its most pernicious, the effect of Asian enslavement is seen in contemporary racist European depictions of Asian women – which often have roots and metaphors in the sexual abuse inherent in the enslavement of Asian women and their status in the early days of colonialism.
There are other contemporary reverberations of the Indian Ocean slave trade – and continuing practices of enslavement in parts of north Africa, including in Mauritania. Enslavement of “African” populations by the “Arab” Sudanese ruling class in Sudan was one of the key reasons for the breakup of the Republic of Sudan and the secession of South Sudan. Even today, being a darker-skinned African is synonymous with being called abd/abeed (slave) by Arabs. This includes Arab people who have been born and have lived all of their lives in western Europe and north America. (The Twitter hashtag #abeed will show you how prevalent and contemporary the epithet is.)
Words like “coolie” and “kaffir”, often associated with the Asian indentured labour system prevalent under later European colonialism, had roots and common usage in the periods of Indian Ocean slavery from the 1600s onwards.
Starting today, Media Diversified will be publishing an ongoing series on slavery across the Indian Ocean (#IndianOceanSlavery). The articles will have most of their starting points in South Africa, which was one of the epicentres of the Indian Ocean slave trade, with the country importing slaves as part of its colonisation process. This series will include articles looking at the history of Asian political prisoners in the country, the history of Chinese people in Africa which goes back for at least a millennium, and the wider resonances of both slavery and very specific under-reported histories in Australia, Ireland and India. Although the descendants of enslaved Africans and Asians continue to live in South Africa, outside of academic publications the country has very little knowledge about its own history of slavery.
What will become apparent is that slavery in Africa stretched much further than the west African coast where most of the transatlantic slave trade took place from. It also decimated the African interior for centuries longer than the period in which the transatlantic slave trade took place. Southern, central and east Africa were similarly affected, including by the large-scale movement of enslaved people within Africa, most notably in places like Mozambique and Madagascar. At the same time, there was extensive enslavement in Asia, in India as well as in Indonesia and other parts of south-east Asia, including Japan.
Publishing this series on Indian Ocean slavery is significant because it brings together key aspects of a largely underplayed history for general readers. When I started reading up on the topic, I was surprised at how many academic tracts had been published on the issue, and yet that knowledge had not in any significant way filtered through to the populations from whom the history was drawn. If anything, despite the extensive body of research on Indian Ocean slavery, the information remains “hidden within books”.
It challenges the history we tell ourselves in Africa, Asia and the Middle East about how we came to be, and it also challenges the history that we tell ourselves about other continents. It brings to light that what was perceived as anti-colonial solidarity in the 1950s and 1960s (often with India as its centre) was a continuation of a centuries-long historical twinning between what is collectively called the “Third World” or developing world.
Very often finding the information involved following whispers of conversation or remembering a fact that I had heard long ago and could not make historical sense of at the time. The internet made researching information easier at times, but I would not have been able to do concerted research without the extensive archives in Cape Town and the dedicated staff who manage them. I also would not have been able to find the background material without the well-stocked libraries in South Africa. In fact, if I attempted this project outside of South Africa, there would likely have been very little in terms of records and libraries to bolster my knowledge.
In a wider context, I also drew strength from the burgeoning interest in the history of slavery from the descendants of enslaved people in South Africa. At the moment it reaches a small group of people, but it is the start of reversing the trend of historians writing about history as if there are no contemporary resonances and impact, and as if there are no contemporary living descendants of slaves in South Africa, the wider African continent and Asia.
Outside of the formal research, finding the information has been an astonishing experience, which led me to retrace all of my life’s journey, especially the often disparate lives that I have led across Africa and Asia during the past two decades — stretching from Senegal to east Africa, across Turkey and Afghanistan to south-east Asia. What was previously incongruous to me made sense when I walked into the Slave Lodge in Cape Town and saw a map detailing the places where enslaved people in South Africa came from. The map of slaves’ origins was in fact a map of all of the places that I had lived in or had very significant contact with. And so, many of the gaps were things that only I could have known, having lived a very particular life: why in Turkey I encountered the exact same fig jam recipe as my grandmother’s in Cape Town, which is a traditional Cape Malay dish; the common words close to isiZulu that I would hear when I lived in northern Uganda; why – besides the common vocabularies of Persian, Kiswahili and isiZulu that I’d draw on in Kabul and Nairobi – Persians in Iran and Afghanistan as well as Zulus in South Africa both ate maas/maast/amasi (plain yoghurt/fermented milk) with their meals. These were small questions that I could not answer up to now, but the thread of what I have discovered is much bigger than I had anticipated.
The first article in this series will look at the history of the Dutch Christmas icon, Zwarte Piet (Black Peter). The iconography around Zwarte Piet brings together my own questions about slavery in South Africa, and about why the soot-smeared, golden-earringed icon who arrived in a wooden boat continues to be such a key cultural figure in Holland.
But researching the history of Zwarte Piet took me far away from what had been the familiar framing of enslavement to me for most of my life, namely the trade between mainly west Africa and the Americas. I hope that for you, the reader, it will as fruitful to read as it was for me to spend the past year in musty archives, running after snippets of information, being surprised again and again, and ultimately giving voice not to an academic pursuit, but to real people who lived and breathed, who were part of my history, and might be part of how you came to be as well.
For more information on the African-Iraqi community in Basra, see:
[1] Encyclopaedia Britannica, Zanj Rebellion, http://www.britannica.com/event/Zanj-rebellion (first accessed 08/04/2016)
Karen Williams works in media and human rights across Africa and Asia. She was part of the democratic gay rights movement that fought against apartheid in South Africa. She has worked in conflict areas and civil wars across the world and has written extensively on the position of women as victims and perpetrators in the west African and northern Ugandan civil wars.
Indian Ocean Slavery is a series of articles by Karen Williams on the slave trade across the Indian Ocean and its historical and current effects on global populations. Commissioned for our Academic Space, this series sheds light on a little-known but extremely significant period of international history.
This article was commissioned for our academic experimental space for long form writing edited and curated by Yasmin Gunaratnam. A space for provocative and engaging writing from any academic discipline.
Have you ever wondered how an earthquake happens? Almost everything in this universe can be explained, including earthquakes. As one of the most frequently occurring natural disasters, an earthquake can cause great damage. To learn more about this natural phenomenon, it is worth taking a look at earthquake diagrams, like the one in the following image.
What is an earthquake? According to Encyclopedia, an earthquake is a geological event inside the earth that generates strong vibrations. When the vibrations reach the surface, the earth shakes, often causing damage to natural and manmade objects, and sometimes killing and injuring people and destroying their property. Earthquakes can occur for a variety of reasons; however, the most common source of earthquakes is movement along a fault. Each year, more than a million earthquakes occur worldwide. Most of these are so small that people do not feel the shaking. But some are large enough that people feel them, and a few of those are so large that they cause significant damage.
These earthquake diagrams picture the process, letting us imagine what goes on in the outer crust. The most effective way to minimize the hazards of earthquakes is to build new buildings or retrofit old ones to withstand the short, high-speed acceleration of earthquake shocks.
Learning about earthquakes will never be complete unless you use sufficient materials to show your students. All of these diagrams are printable in any size.
Dot Monsters are a perfect activity to teach to the whole class or use with those “early finishers” who end up with free time after completing a project. Kids from 1st grade through 6th grade love making these! Dot Monsters nurture imagination and encourage creative thinking and problem solving…. and they’re FUN!
Here’s how you make them:
1. Any size paper will work for this activity, but I usually use 9×12 or 12×18. With your eyes closed, use a Sharpie (“fine” point) to place 30 dots randomly all over your paper. (Older students may use up to 50 dots, as in the example above.) Remind students that they only need to press lightly to make their dots! (A “Magic Rub” eraser will quickly remove any stray Sharpie marks from desks!)
2. Next, connect your dots with straight lines to make one complete shape. When students try this for the first time, you may want to have them use a pencil to connect their dots and then trace their lines with a Sharpie, until they understand the concept of making one complete shape.
3. Then, turn your paper different directions and use your imagination until you “see” a monster in one of the shapes!
4. Finally, add details to your drawing to help others “see” your monster, too! When you first teach this lesson, it might help to brainstorm characteristics of a monster and write ideas on the board. (horns, sharp teeth, scales, wings, fur, claws, spikes, etc.) Encourage your students to add a background and lots of color, too.
5. You can also include a fun writing activity if you want. It could be a simple “fill in the blanks” form about your monster, or a more elaborate story. Here’s an example to go with the monster pictured here: “My monster’s name is the Two-Headed Party Animal. It lives in Paris. It eats balloons and cake. It likes to do the Hokey-Pokey.” |
What is Herpes?
Cold sores / herpes can be spread in various ways. Oral herpes outbreaks, cold sores, or fever blisters are caused by the herpes simplex type 1 virus (HSV-1), which is contracted by 90% of adults by age 50. Many people who carry HSV-1 do not have any symptoms, and can still spread the virus. However, for some, symptoms can be severe and may require treatment.
What Causes Herpes?
Direct contact with a mouth carrying the virus is needed to catch it. While herpes is known for being a sexually transmitted infection, kissing someone with the virus can also spread it. The herpes virus cannot survive for long on objects, such as shared cups, straws, or lipstick.
After the herpes virus is contracted, many people never show any symptoms, while others have mild or vague symptoms. Cold sore outbreaks initially start as pain or tingling on the lip, then form unsightly, painful, red blisters. Eventually, the blisters pop, ooze, crust over, and flake off before they resolve within one to two weeks.
Cold sore outbreaks are most commonly triggered by a weakened immune system, brought on by fatigue, stress, excessive sun exposure, or illness. Lasers, chemical peels, or cosmetic treatments near the mouth may also trigger herpes outbreaks, so it is important to inform your provider before such procedures.
- Oral anti-viral pills (acyclovir, valacyclovir): treat and prevent cold sore outbreaks
- Topical anti-viral creams: over-the-counter (Abreva) and prescriptions can shorten the duration of the cold sores.
For more information about Cold Sores / Herpes, view some of our articles. |
Prepositions of place are an important part of the English language and will enable students to create more complex sentences. The meanings of basic prepositions and prepositional phrases are incredibly easy to demonstrate in a classroom and students can often guess their meanings.
How To Proceed
Warm up – Prepositions
Use this opportunity to review vocabulary you plan on using in this lesson. In this example, words including book, desk, chair, clock, pencil, and teacher would be good to review. Crisscross is an excellent game to start the class with. Have all the students stand. Ask questions like “What is this?” while holding up a pen or pointing to an object. Have students volunteer to answer by raising their hands. Choose a student; if they answer correctly, they may sit down. Repeat the exercise until all students are seated. In large classes, the volunteer can choose to have either their row or their column of students sit down. Usually no more than about ten questions are asked. The exercise should take approximately five minutes.
Introduce – Prepositions Pronunciation
Write the target vocabulary on the board. The words below are a good set to begin with:
– in front of
– next to
The vocabulary you introduce may depend on the textbook being used. Demonstrate the pronunciation of each word one at a time having students repeat it after you. If certain students appear not to be participating, call on them individually to pronounce the word for the class. You may want to start a chain where the first student says the first vocabulary word, the next student says the second, and the third student says the third, etc until all students have had the opportunity to say at least one word aloud. In a small class feel free to repeat this exercise several times and encourage them to speed up with each cycle while still maintaining proper pronunciation. Drilling is important however it is often boring for students so adding in some fun elements can encourage them to participate.
Introduce – Prepositions Meaning
Try to have the students come up with the meaning or translation of each word. Use example sentences such as “I am in front of the board. Now I am in front of the desk. Now I am in front of Jane.” and change your position in the classroom accordingly. Use as many example sentences as you can think of for each preposition trying to get the students to guess its meaning before writing it on the board and moving onto the next one. Drill pronunciation and translation before continuing.
To test comprehension, do a short exercise. Tell students to put their hands on their desks, above their desk, behind their backs or to put their books in their desks, under their desks, etc. Perhaps a few students would like to give it a try so why not have them give a few instructions as well. A simple worksheet where students match prepositions with pictures would be good practice as well.
Introduce – Prepositions Q & A
Ask students questions such as “Where is my/your/the book/pen/desk/clock?” Demonstrate the pronunciation of the question and answer. The model dialogue for this lesson should resemble the structure below:
– A: Where is (my/your/Sam’s/the) (noun)?
– B: It’s (preposition) the (noun).
Ask your students to practice the model dialogue in pairs for about five minutes taking turns being A and B. Next ask for volunteers to demonstrate their conversations and encourage them to be creative instead of being limited to the vocabulary you’ve already used in the lesson. Correct any errors with clear explanations and demonstrations before moving on.
Ask students to write five sentences using prepositions or use a game for further practice of prepositional phrases and sentence construction. An exercise like Jumbled (where students work in groups to arrange a set of words into five to ten sentences in a race against other groups) or Scrambled (where students have a worksheet with sentences written out of order that they must rearrange) is great practice.
Reading a story
Find a topic that will interest your students and choose a short story about it; even a paragraph or two will do. As you read through it, let the class spot the prepositions. This will help them remember the correct way to use them.
As a class review the exercise from the previous step. Students can volunteer to read one of their written sentences aloud, groups can take turns reading one of their sentences from Jumbled, or students can read their un-Scrambled sentences aloud. Whatever exercise you’ve done, this is a key stage in catching mistakes. Often other students can assist their peers in making corrections but if not you may need to review certain problem areas.
Common Examples for Prepositions of Place
– In – the point itself “I will meet you in the library,” “We live in an apartment.”
– On – the surface “Put your books on the table.” “Please sit on the chair.”
– At – the general vicinity “Can we meet at the gate before class?” “I found these shoes at the store on the corner.”
– Inside – something contained “Hang your coat inside the cupboard.” “Pack the books inside the container.”
Prepositions are easily reviewed throughout the school year by being added to random exercises. For instance, typically prepositions would be covered before moving onto the past or future tenses. Adding prepositions to sentences used in practicing those new tenses should be an easy review for your students and keep them aware of the use of prepositions throughout their studies.
P.S. If you enjoyed this article, please help spread it by clicking one of those sharing buttons below. And if you are interested in more, you should follow our Facebook page where we share more about creative, non-boring ways to teach English. |
Influenza, commonly known as “the flu”, is an infectious disease caused by the influenza virus. Symptoms can be mild to severe. The most common symptoms include: a high fever, runny nose, sore throat, muscle pains, headache, coughing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children there may be nausea and vomiting but these are not common in adults. Nausea and vomiting occur more commonly in the unrelated infection gastroenteritis, which is sometimes inaccurately referred to as “stomach flu” or “24-hour flu”. Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.
Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are sick. The infection may be confirmed by testing the throat, sputum, or nose for the virus.
Influenza spreads around the world in a yearly outbreak, resulting in about three to five million cases of severe illness and about 250,000 to 500,000 deaths. In the Northern and Southern parts of the world outbreaks occur mainly in winter, while in areas around the equator outbreaks may occur at any time of the year. Death occurs mostly in the young, the old and those with other health problems. Larger outbreaks known as pandemics are less frequent. In the 20th century three influenza pandemics occurred: Spanish influenza in 1918, Asian influenza in 1957, and Hong Kong influenza in 1968, each resulting in more than a million deaths. The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June of 2009. Influenza may also affect other animals, including pigs, horses and birds.
Frequent hand washing reduces the risk of infection because the virus is inactivated by soap. Wearing a surgical mask is also useful. Yearly vaccination against influenza is recommended by the World Health Organization for those at high risk. The vaccine is usually effective against three or four types of influenza, and it is usually well tolerated. A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly. Antiviral drugs such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza. Their benefits in those who are otherwise healthy do not appear to be greater than their risks. No benefit has been found in those with other health problems.
What Does Average Mean?
Average can be used as a noun, adjective, or verb.
An average is a single number that represents the middle or typical value in a group of items. For instance, you could reference the modal average, mean average, or median average. Typically, "the average" refers to the arithmetic mean.
The GDP of California has surpassed the national average.
It can also be used to describe a norm.
Her height was the median average for the group.
Again, the word pertains to either the arithmetic mean or something typical and ordinary.
The average temperature in Qatar is above 30 degrees Celsius.
The avg. mortgage rate is expected to increase due to a rise in demand for houses.
In both of the examples above, the word average has been used to describe an arithmetic mean. Sometimes, the word can be used to describe something typical.
The average person does not know the worldwide cost of medical care.
In this context, the word refers to the common man or woman.
The football was of average size.
This means that the football had customary proportions.
As a verb, the word means “to take the average” or “to have as an arithmetic mean.”
The annual growth rate averaged 7%.
Average the three numbers.
Where You’ll Find Averages
Wikipedia defines average as a single number that represents other numbers. It’s used in mathematics to refer to the mode, mean, or the median. It’s primarily found in discussions of statistics—although you’ll also see the term used in government reports, scientific studies, and news articles.
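The three senses of average distinguished above—mean, median, and mode—can be computed directly with Python's standard library. A minimal sketch (the temperature values are made up for illustration):

```python
from statistics import mean, median, mode

# Seven made-up daily high temperatures, in degrees Celsius
temps = [31, 30, 33, 30, 32, 30, 34]

print(mean(temps))    # arithmetic mean: the sum divided by the count
print(median(temps))  # median: the middle value once the list is sorted
print(mode(temps))    # mode: the value that occurs most often
```

For the list above, the mean is roughly 31.43, the median is 31, and the mode is 30—three different, equally legitimate "averages" for the same data.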
The History of the Word
According to The Online Etymology Dictionary, the word average first appeared in the late 15th century. It originated within the maritime industry and reflected shared losses from damaged merchandise. Due to its cosmopolitan origins, the English word was influenced by words in many different languages, including French, Italian, Spanish, Arabic, German, and Dutch.
The meaning referring to the calculation of arithmetic mean first appeared in 1755.
Synonyms for Average
- a dime a dozen
- fair to middling
- middle of the road
- run of the mill
Examples of the Word in Context
“Over time, however, costs have soared while response rates have declined. The average cost, in 2020 dollars, to count one housing unit increased from about $16 in 1970 to about $92 in 2010, a Government Accountability Office analysis found.”
—The New York Times
“According to the U.S. Census Bureau, the average household income was $73,298 in 2014, the latest year for which complete data is available. However, this doesn’t tell the whole story. Depending on your family situation and where you live, average household income can vary dramatically.”
“Consumer Price Index CPI in the United States averaged 113.65 Index Points from 1950 until 2019, reaching an all time high of 257.94 Index Points in November of 2019 and a record low of 23.51 Index Points in January of 1950.”
Examples of the Abbreviation in Context
- The avg. price of a baseball is $72, meaning baseballs cost the MLB an avg. of $6 million per season.
- The weatherman reports that the av. temperatures are expected to rise this summer.
- The avg. life expectancy is expected to increase due to improvements in health care.
- The av. citizen would like limited government oversight.
- An avg. of 3,000 people attend the yearly concert.
- The avg. man is more concerned with security than previous studies had indicated.
Although its name may sound harmless, bloat is a life-threatening emergency for dogs. The condition, formally called gastric dilation-volvulus (GDV), can quickly kill dogs if they don't receive prompt treatment.
What Is Bloat?
Bloat occurs when your pet's stomach fills with air. In many cases, the stomach then twists, cutting off its blood supply. The condition prevents blood from flowing back to the dog's heart and can cause irreversible damage to the spleen, stomach, pancreas, liver, and other organs. Shock can develop soon after the first signs of bloat appear. Breathing problems also occur as the air-filled stomach presses against the diaphragm. Unfortunately, a dog can die of bloat just a few hours after experiencing the first symptoms.
Which Dogs Get Bloat?
Any dog can develop bloat, although it may be more likely to occur in older dogs and males. Great Danes, Saint Bernards, German shepherds, poodles, retrievers and other large breeds with deep, narrow chests are at increased risk of developing bloat. Swallowing air while eating, a problem that can occur in anxious dogs, may also increase the likelihood of bloat, as can eating a large amount during a meal.
A genetic link may be responsible for some cases of bloat. Veterinarians at Cummings School of Veterinary Medicine at Tufts University are currently conducting a research study to find the gene responsible for the condition. Although bloat may have a genetic component, environment and diet might increase the likelihood that your dog will actually develop the condition. If a gene is identified, a genetic test could be developed to identify dogs at high risk.
What Are the Symptoms of Bloat?
Symptoms of bloat start suddenly and may include:
- An enlarged stomach
- Pain in the abdomen
- Excessive drooling
- Dry heaving
- Shallow breathing
Symptoms of shock include:
- Weak pulse
- Rapid heart rate
- Pale gums and lips
- Low body temperature
- Glazed eyes
- Dilated pupils
How Is Bloat Treated?
Surgery is used to treat bloat, but it can't be performed until your pet is in stable condition. Before surgery can begin, your pet may receive pain medications, antibiotics and intravenous fluids to treat shock. A tube inserted into the esophagus or a large needle placed in the stomach may be used to deflate the stomach and release the trapped air. Bloodwork and other tests may also be performed before surgery.
During surgery, your dog's stomach will be repositioned and sutured to the abdominal wall to prevent it from twisting in the future. Surgery also involves thoroughly examining your pet's stomach and organs for signs of damage due to the blood flow blockage.
Your pet will stay at the animal hospital for several days following surgery. During that time, the veterinary staff will closely monitor him or her for heart problems, infections, pancreas or liver damage, or other conditions associated with bloat.
How Can I Reduce My Dog's Risk of Bloat?
Although it's not possible to prevent bloat in every case, there are a few things you can do to reduce your dog's risk, such as:
- Change Mealtime. Two to three small meals spaced throughout the day are better than one large meal.
- Limit Water. Wait until an hour after mealtime to offer water.
- Lower Food and Water Dishes. Swallowing air is less likely to occur when you place food and water dishes on the floor instead of in elevated feeders.
- Wait to Play Fetch. Don't start a game of fetch, take your dog for a run or allow him or her to participate in any type of exercise for at least an hour after eating.
- Don't Give in to Begging. Giving your pet samples of the foods you eat can cause gas to build up in the stomach.
- Discourage Competition. Do your pets wolf down their food in an effort to finish first? The faster they eat, the more likely they are to swallow air. Confining your dogs to different rooms or areas while they eat can help them slow down.
Recognizing the symptoms of bloat and taking steps to reduce your dog's risk can help your pet avoid this devastating condition. Call us today if you're worried that your dog may have bloat or if it's time to schedule your furry friend's next veterinary visit.
American Kennel Club: Bloat (or GDV) in Dogs — What It Is and How it’s Treated, 11/3/16
Tufts University: The Genetics of Bloat, Summer 2014
Peteducation.com: Bloat (Gastric Dilation and Volvulus in Dogs)
Here are four more fabulous Greek myths for your students to read, written in kid-friendly language! These myths are written in vivid, descriptive language so that your students will ALWAYS remember who these famous characters are, their stories, and the lesson or moral of each! Use them for guided reading groups or for seat work!
Included are the following:
-Four More Great Myths: Pandora's Box, The Trojan Horse, Medusa and Perseus,
and Persephone, Queen of the Underworld
-A myth anchor chart to post in your room
-Two pages of higher level thinking reading comprehension questions for each myth,
which are tied to Common Core Standards
Arthritis is a condition in which one or more of your joints are inflamed. This can result in stiffness, soreness, and in many cases, swelling.
Inflammatory and noninflammatory arthritis are the two most common forms of the condition.
There are dozens of different arthritis types. One of the most common types of inflammatory arthritis is rheumatoid arthritis (RA), and the most common type of noninflammatory arthritis is known as osteoarthritis (OA).
OA and RA both have very different causes.
Causes of osteoarthritis
Even though it’s called noninflammatory arthritis, OA can still result in some inflammation of the joints. The difference is that this inflammation probably results from wear and tear.
OA happens when a joint’s cartilage breaks down. Cartilage is the slick tissue that covers and cushions the ends of the bones in a joint.
Injuring a joint can accelerate the progression of OA, but even everyday activities can contribute to OA later in life. Being overweight and putting extra strain on the joints can also cause OA.
Causes of rheumatoid arthritis
RA is a much more complicated disease, and it usually affects the same joints on both sides of the body.
Like psoriasis or lupus, RA is an autoimmune disease. This means the body’s immune system attacks healthy tissue.
The cause of RA still remains a mystery. Because women are more likely to develop RA than men are, researchers believe that it may involve genetic or hormonal factors.
RA can also appear in children, and it can affect other body parts, like the eyes and lungs.
The symptoms of RA and OA are similar, in that they both involve stiffness, pain, and swelling in the joints.
But the stiffness associated with RA tends to last longer than it does during flare-ups of OA, and is generally worse first thing in the morning.
The discomfort associated with OA is usually concentrated in the affected joints. RA is a systemic disease, so its symptoms can also include weakness and fatigue.
After your doctor performs a physical examination of the joints, they may order screening tests.
Your doctor may order a blood test to determine if the joint problem is due to RA. The test looks for the presence of rheumatoid factor or anti-cyclic citrullinated peptide (anti-CCP) antibodies, which are usually found in people with RA.
Arthritis is treated differently depending on the type:
Your doctor may recommend nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen for minor flare-ups or mild cases of arthritis.
Corticosteroids, which can be taken orally or by injection, can reduce inflammation in the joints.
Physical therapy can help improve muscle strength and your range of motion. Stronger muscles can better support a joint, possibly easing pain during movement.
When damage to the joint is severe, your doctor might recommend surgery to repair or replace the joint. This is typically done only after other treatments fail to give you enough pain relief and mobility.
NSAIDs and corticosteroids might be used to help reduce pain and swelling for people with RA, but there are also specific drugs designed to treat this type of arthritis.
Some of these include:
- Disease-modifying antirheumatic drugs (DMARDs): DMARDs block your body’s immune system response, which helps slow down the progression of RA.
- Biologics: These drugs respond to the immune system’s response that causes inflammation instead of blocking the whole immune system.
- Janus kinase (JAK) inhibitors: This is a new type of DMARD that blocks certain immune system responses to prevent inflammation and joint damage.
New drugs continue to be tested to help treat RA and reduce symptom intensity. And like OA, RA symptoms can sometimes be relieved through physical therapy.
Living with OA or RA can be a challenge. Regular exercise and weight loss can help reduce the burden on your joints. Exercise not only contributes to weight loss, but it also can help support the joints by strengthening the muscles around them.
Assistive devices, like canes, raised toilet seats, or equipment to help you drive a car and open jar lids, are available to help you maintain independence and daily function.
Eating a healthy diet that includes lots of fruits, vegetables, low-fat proteins, and whole grains can also help ease inflammation and prevent weight gain.
Even though there’s no cure for OA or RA, both conditions are treatable. As with most health challenges, getting an early diagnosis and a head start on treatment often results in the best outcomes.
Don’t just chalk joint stiffness up to another unavoidable sign of aging. If there’s swelling, pain, or stiffness, it’s a good idea to make an appointment with your doctor, especially if these symptoms interfere with your daily activities.
Aggressive treatment and a better understanding of your specific condition may help keep you more active and more comfortable in the years ahead.
Tangerine Times, Pt II: Producing the Paper
Lesson 6 of 6
Objective: SWBAT use technology and creativity to produce a newspaper based on events in the novel, Tangerine.
Latin Roots: Warm Up
This is our daily warm up, wherein students work with two or three Latin roots per day. The resource that I use to get my roots is Perfection Learning's Everyday Words from Classic Origins.
Every day, when the students arrive, I have two Latin roots on the SmartBoard. Their job is to generate as many words as they can that contain the roots, and they try to guess what the root means. After I give them about five minutes, we share words and I tell them what the root means.
The students compile these daily activities in their class journals. After every twelve roots, they take a test on the roots themselves and a set of words that contains them.
Yesterday, the students got started on developing newspaper articles for a Tangerine newspaper.
Today, the kids marched into class, armed with their drafts of their Tangerine articles. After our daily warm up, I explained the activity, using a yardstick and a piece of newspaper.
The students were directed to have a brief meeting with their editorial team. During that meeting, they were to decide how they were going to use the space on the newspaper sheet to "lay out" their newspaper. In other words, they had to fill the space without going over. Since each team had five or six members and a sheet of newspaper is roughly the same size as six pieces of 8.5 by 11 paper, they could decide how they wanted to use the space.
Students were required to have a masthead, complete with date. And every article had to have a headline and a byline. We talked about how the sample paper used the space, and the kids pointed out how many graphics were on the page AND that the article fonts were different sizes.
Tomorrow, we will have ten minutes allotted for "pasting down" and then we will put the papers up to display them.
Around the globe, growing evidence suggests that a warming climate is affecting marine ecosystems and species. This raises the question of what can be done to help seabirds and other species adapt to a changed environment.
A recent review of observed and predicted effects of climate on Australian seabirds suggests responses to climate-related changes will be species and region-specific. This reduces our capacity to respond through implementing broad national conservation strategies, and increases the need for locally targeted management responses.1
Despite gaps in our knowledge of seabird species’ biology, we can make efforts to improve the resilience of these animals to climate change through appropriate management of the marine and terrestrial environments. This includes mitigation of threats not directly related to climate change.
Case study: Little penguins on Phillip Island
The colony of little penguins (Eudyptula minor) at Phillip Island, Victoria, is one of the biggest in the world, attracting almost 500 000 visitors a year. Its success offers a good example of how effort put into reducing both climate and non-climate-related environmental threats can increase the species’ resilience to climate warming.
Scientists have intensively monitored the little penguin population since the late 1960s. By investigating the penguins’ responses to environmental change, researchers have identified some clear patterns. For example, in years when sea-surface temperatures are high in the months before the breeding season, the birds tend to start breeding earlier, which leads to more and heavier chicks.1,2 Warmer ocean temperatures have also been associated with increased survival of first-year penguins. Combined, these factors have the potential to increase penguin numbers in the colony.
However, warmer conditions are not all good news for this species. While nesting, and in the subsequent moult period, little penguins spend considerable amounts of time on land in burrows and can be exposed to high land temperatures.
Although their burrows provide some insulation, little penguins, as with many seabirds, suffer from heat stress and even mortality with prolonged exposure to air temperatures above 35ºC. The Phillip Island Nature Parks management has been actively addressing this issue by providing shady indigenous vegetation and appropriately designed and insulated artificial nesting burrows.
Warmer and drier conditions can also increase the risk of fire. The little penguin appears maladapted to fire; instead of avoiding it, the birds will remain in their burrows or near vegetation until severely burnt or killed.3 Projected increases in the incidence and intensity of fire for south-eastern Australia could therefore increase the risk of injury and death for this species.
To reduce the risk of fire in the penguin colony, reserve staff implemented three precautions. First, because overhead powerlines had caused a number of previous fires, all powerlines have now been run underground, effectively eliminating that ignition source. Second, some of the indigenous plants being reinstated in cleared and eroding areas provide shade during summer and also act as fire retardants. Finally, the reserve has implemented a fast-response fire action plan to reduce the risk of fire spreading through the colony.
For the little penguins of Phillip Island, the manageable risks leading to adult and chick mortality on land have been eliminated – at least for the moment.
Lynda Chambers is a senior researcher at the Bureau of Meteorology, specialising in the responses of Australia’s biodiversity to climate variability and change. Her work involves building networks of researchers and conservation managers to improve our understanding of environmental impacts and to develop adaptation options. Peter Dann is research manager at the Phillip Island Nature Parks, which manages the famous ‘Penguin Parade’. His work focuses on marine birds and mammals in Bass Strait and the processes – particularly those identified as threats – that drive their population sizes.
1 Chambers LE, Devne C, Congdon BC, Dunlop N, Woehler EJ and Dann P (2011) Observed and predicted impacts of climate on Australian seabirds. Emu 111: 235–251.
2 Cullen JM, Chambers LE, Coutin PC and Dann P (2009) Predicting the onset and success of breeding of Little Penguins, Eudyptula minor, on Phillip Island from ocean temperatures off south east Australia. Marine Ecology Progress Series 378: 269–278.
3 Chambers LE, Renwick L and Dann P (2010) Victoria’s little penguins show limits to adaptation. ECOS 153:19
A biography is simply the story of a life. Biographies can be just a few sentences long, or they can fill an entire book—or two.
- Very short biographies tell the basic facts of someone's life and importance.
- Longer biographies include that basic information of course, with a lot more detail, but they also tell a good story.
Biographies analyze and interpret the events in a person's life. They try to find connections, explain the meaning of unexpected actions or mysteries, and make arguments about the significance of the person's accomplishments or life activities. Biographies are usually about famous or infamous people, but a biography of an ordinary person can tell us a lot about a particular time and place. They are often about historical figures, but they can also be about people still living.
Many biographies are written in chronological order. Some group time periods around a major theme (such as "early adversity" or "ambition and achievement"). Still others focus on specific topics or accomplishments.
Biographers use primary and secondary sources:
- Primary sources are things like letters, diaries, or newspaper accounts.
- Secondary sources include other biographies, reference books, or histories that provide information about the subject of the biography.
To write a biography you should:
- Select a person you are interested in
- Find out the basic facts of the person's life. Start with the encyclopedia and almanac.
- Think about what else you would like to know about the person, and what parts of the life you want to write most about. Some questions you might want to think about include:
- What makes this person special or interesting?
- What kind of effect did he or she have on the world? On other people?
- What are the adjectives you would most use to describe the person?
- What examples from their life illustrate those qualities?
- What events shaped or changed this person's life?
- Did he or she overcome obstacles? Take risks? Get lucky?
- Would the world be better or worse if this person hadn't lived? How and why?
- Do additional research at your library or on the Internet to find information that helps you answer these questions and tell an interesting story.
- Write your biography. See the Tips on Writing Essays and How to Write a Five Paragraph Essay for suggestions.
You'll find biographies of lots of famous people in the encyclopedia. (Look them up in the Infoplease search box to find them.) You can also browse these short biographies of Selected Figures from Recent History.
For more on what makes a good biography, see the encyclopedia entry on biography and the Biography Maker site from the Bellingham Public Schools.
Presentation on theme: "Hot Spots The active volcanoes form directly over a hot spot, where magma rises from the mantle. The hot spot is stationary. The plate moves - slowly -"— Presentation transcript:
1. Hot Spots: The active volcanoes form directly over a hot spot, where magma rises from the mantle. The hot spot is stationary; the plate moves slowly over the hot spot. Volcanic islands form over the hot spot. As they move away from the hot spot, the volcanoes go extinct and the islands erode. There are several hot spots in the Pacific, and several chains of islands.
2. The Formation of Magma: Oceanic lithosphere is made up of sediments and volcanic rock that contain water and other fluids. When oceanic lithosphere moves downward into the mantle at a convergent boundary, the fluids contact the surrounding rock. When the fluids enter the already hot mantle rock, the melting temperature of the rock decreases. As a result, the rock begins to melt.
3. Types of Volcanoes: Lava that flows at divergent boundaries forms from melted mantle rock and is rich in iron and magnesium and poor in silica. Silica-poor magma is called mafic magma, and the lava it produces is thin and runny. This type of lava generally produces nonexplosive eruptions.
7. Mid-Ocean Ridges: Mid-ocean ridges are underwater volcanic mountain chains that form where two tectonic plates are moving apart. As the plates move apart, magma from the mantle rises to fill cracks that form in the crust. Some of the magma and lava cools and becomes part of the oceanic lithosphere. This process is known as sea-floor spreading.
9. The Mid-Atlantic Ridge is very active. Long linear cracks called fissures have formed where the North American and Eurasian plates are moving apart. Basaltic magma rises to Earth’s surface through these fissures and erupts nonexplosively.
11. As a general rule, basaltic lava erupts from fissure eruptions. A fissure is a seam in the Earth's crust, from several meters to kilometers long. If you have seen video of erupting lava that is bright red in color, you are probably looking at a basaltic eruption.
12. Hot Spot Volcanoes: Hot spot volcanoes form over mantle plumes. These are columns of hot solid rock that rise through the mantle by convection. Continuous eruptions will eventually reach sea level and become islands.
13. Types of Lava: There are four basic types of lava: 1. Aa lava (a Hawaiian term, pronounced 'ah-ah', for lava flows that have a rough, rubbly surface composed of broken lava); 2. Pahoehoe; 3. Pillow lava; 4. Blocky lava.
14. Aa is lava that forms a thick, brittle crust. The crust is torn into jagged pieces as molten lava continues to flow underneath.
28. When basaltic lava flows cool and solidify, they contract, often developing fractures perpendicular to their surface. Individual columns tend to have a hexagonal (six-sided) cross section. Since columns are always at right angles to the cooling surface, vertical columns are seen in lava flows.
29. Shield Volcanoes: Shield volcanoes usually form at hot spots, from layers of lava left by many eruptions. Shield volcano lava is very runny, so it spreads out over a wide area. The sides of this volcano are not very steep, yet this type of volcano can be very large. The base of this type of volcano can be more than 100 km in diameter.
30. Mauna Kea in Hawaii is the tallest mountain on Earth as measured from its base on the sea floor.
34. Magma Chamber: the magma that feeds the eruption pools from deep underground and rises through cracks in the crust. This movement of magma can cause small earthquakes that are used to predict eruptions. Vents: lava is released through these openings. Lava may also erupt from fissures along the sides of a shield volcano. After an eruption, the lava moves downslope in long rivers of molten rock. Often this lava will cool and solidify on top while the interior continues to travel through long, pipelike structures called lava tubes.
36. A lava tube on the island of Hawaii, photographed just above a lava falls. The floor is cauliflower pahoehoe, a rougher form of pahoehoe. Note the tree roots coming in from the ceiling. Lava tubes tend to be fairly close to the surface.
39. Volcanoes at Convergent Boundaries: Magma at convergent boundaries is a mixture of melted mantle rock and melted crustal rock, so both fluid mafic lava and silica-rich lava form at these boundaries. Silica-rich lava cools to form light-colored rocks. Silica-rich magma tends to trap water and gas bubbles, which causes enormous gas pressure to develop within the magma. As the gas-filled magma rises to Earth’s surface, the pressure is rapidly released. This change results in a powerful explosive eruption.
40. Pyroclastic Materials: Volcanic bombs are large blobs of magma that harden in the air.
48. Mount Pinatubo in the Philippines. The ash in this cloud had temperatures that reached 750°C.
49. Pyroclastic flows are produced when a volcano ejects enormous amounts of hot ash, dust, and toxic gases. This cloud of material can reach speeds of 200 km/h, faster than the wind speeds of most hurricanes. The temperature within the flow can reach 700°C. At this temperature the flow burns everything in its path; thus, pyroclastic flows are the most dangerous of all volcanic phenomena.
51. Cinder cone volcanoes are the smallest type, generally reaching heights of no more than 300 meters. They are also the most common type of volcano. These volcanoes are made from pyroclastic material and most often form from moderately explosive eruptions. They have steep sides and a wide summit crater. Unlike other volcanoes, cinder cone volcanoes usually erupt only once in their lifetime.
53. Composite Volcanoes: These volcanoes are also called stratovolcanoes and are the most recognizable of all volcanoes. They form from both explosive eruptions of pyroclastic material and quieter flows of lava, producing alternating layers of pyroclastic material and lava. They have a broad base and sides that get steeper toward the summit.
56. Negative Effects of Volcanic Eruptions — Tambora, Indonesia: Most of the negative effects come from explosive eruptions, which result in loss of life and property damage. This volcano erupted in 1815 and killed 92,000 people. Most were killed by falling debris (volcanic bombs).
57. Volcanic Ash in the Atmosphere: The 1815 Tambora eruption put enough ash and gases into the upper atmosphere that the average global temperature decreased by 3°C for one to two years. The lower temperatures are blamed for crop failures and starvation, particularly in New England and Europe. These effects led to the deaths of 82,000 people.
58. Lahars: Lahars are fast-moving mudflows that bury everything in their path.
This article may include advertisements, paid product features, affiliate links and other forms of sponsorship.
One of the scariest things a person can experience is choking on something. The feeling of not being able to breathe no matter how hard you try is terrifying and creates uncontrollable panic. Although many associate choking with eating, choking can be caused by any foreign object lodged in the throat that prevents air from reaching the lungs. In addition to mechanical obstruction, a person can also choke due to a sleep disorder, an allergic reaction, tissue swelling, or the crushing of the trachea, resulting in a blocked airway. That's why it is important to know the Heimlich maneuver.
According to the National Safety Council, choking is the fourth leading cause of unintentional injury death. One of the ways to assist yourself or a person who is choking is to perform what is called the Heimlich maneuver. The Heimlich maneuver is a technique used to help remove a trapped object from a person’s airway. This is a simple, life-saving technique that we encourage everyone to learn.
How to Tell if Someone is Choking
Before performing the Heimlich maneuver, you must first determine if a person is choking. Choking is when the respiratory passage is blocked by constriction of the neck, an obstruction in the trachea, or the swelling of the larynx. The most common sign that someone is choking is the person wrapping their hands around their throat. Oftentimes, the person choking will stand up and draw attention to themselves, signaling for help. If they are not coughing, not breathing, and unable to speak, this may mean that they are choking and need assistance. Additionally, if the choking individual’s face begins to turn blue, this is due to a lack of oxygen as a result of the obstructed airway. This may then lead to them becoming unconscious from not being able to breathe. Without lifesaving efforts such as the Heimlich maneuver being used immediately, choking can lead to suffocation and eventually death.
Steps of the Heimlich Maneuver
Before performing the Heimlich maneuver on a choking individual, first determine if they are choking by referencing the information in the above section. If they have something stuck in their throat but they are able to still breathe, encourage them to cough forcefully to attempt to clear the object. If the breathing ceases and choking becomes more extreme, ask them if they are choking and then advise them that you are going to start the Heimlich maneuver. Instruct a bystander to call 911 while you begin life saving efforts. If you are alone, call 911 yourself to request emergency medical response. If available, place the call on speaker phone so you can dedicate your hands to the Heimlich maneuver while talking to the 911 operator instead of holding a phone. Once you are ready to follow through with the Heimlich maneuver, follow these steps:
For an Adult
- Stand behind the person choking and place one leg forward in between their legs.
- Reach around the abdomen and find the navel area.
- Place the thumb side of your fist against the abdomen just above the navel.
- Grasp your fist with the other hand and thrust in and up into the abdomen with quick jerks.
- Continue thrusts until the choking individual expels the object.
Even if you successfully dislodge the object using the Heimlich maneuver, still seek medical attention afterward for any injuries or damage that may have occurred during the process.
For an Infant
Children can often choke on small objects that they find or food that is too big for them. To prevent choking in infants, keep small objects out of reach, make sure to always cut their food into small bites and avoid hard candies. The Heimlich maneuver should not be performed on a child under one year old.
911 should always be called immediately for an emergency medical response. If you determine that an infant is choking, follow these steps:
- Sit down with the baby face down on your forearm, resting your forearm on your thigh.
- With the heel of your hand, gently administer five back blows in between the shoulder blades.
- If the back blows do not work, turn the baby face up while still resting on your forearm and thigh with their head lower than the rest of their body.
- Place your index and middle finger at the center of the infant’s breastbone and perform five quick chest compressions that are about 1.5 inches deep for one second per compression.
- Continue to alternate between back blows and chest compressions until the object is dislodged and the baby can breathe again.
For a Pregnant Woman
Due to the pregnant belly, you will not be able to achieve the proper hand placement around the abdomen. Instead, position your hands higher on the torso, around the base of the breastbone, and perform the thrusts. Follow this same procedure when performing the Heimlich maneuver on an overweight individual whom you cannot properly fit your arms around.
For a Pet
Even your dogs, cats, and other pets are at risk of choking. If you are a pet owner, learning how to perform the Heimlich maneuver on an animal will be useful in the event they are choking. Use caution when trying to assist a choking animal, because even docile ones can panic and become aggressive when choking.
- Using both hands, open the animal’s mouth.
- If possible, press the animal’s lips over their teeth so their lips are between their teeth and your fingers.
- Look inside the animal’s mouth and remove any visible obstruction that you can grab with your fingers.
- If you are unable to remove the object or food, use a spoon or something flat to pry it away from their teeth or roof of their mouth.
- If these steps do not work, apply pressure to the animal’s abdomen just below the rib cage.
- Make a fist and push up and forward firmly.
- If you are unable to pick the animal up, lay them on their side and place one hand on their back while squeezing the abdomen upwards and forward with the other hand.
- Check the animal’s mouth after Heimlich maneuver to remove the dislodged object.
For Yourself
Choking is scary enough as it is. Being alone and feeling helpless makes the situation even worse. Believe it or not, you can actually give yourself the Heimlich maneuver if you are choking and there is no one around to help you. Follow the steps of the Heimlich maneuver, placing your fist on your own abdomen and giving yourself five thrusts. If you are not successful at expelling the object this way, thrust your upper abdomen against a hard edge such as the corner of a table or chair.
WANT TO READ MORE?
For more safety tips check out How to do Infant CPR here.
Information contained in this post does not constitute legal advice and should not be substituted for professional legal counsel. Daily Mom is not liable for the consequences of any actions taken on the basis of the information provided.
Photo Credit: Pixabay
Treaty of Moultrie Creek
The Treaty of Moultrie Creek was an agreement signed in 1823 between the government of the United States and the chiefs of several groups and bands of Indians living in the present-day state of Florida. The treaty established a reservation in the center of the Florida peninsula.
The indigenous peoples of Florida had largely died out by early in the 18th century, and various groups and bands of Muskogean-speakers (commonly called Creek Indians) and other groups such as Yamasee and Yuchi moved into the area, often with the encouragement of the Spanish colonial government. These groups, which often lived on both sides of the border between Florida and Georgia, came into increasing conflict with white settlers after the United States became independent. When the United States acquired Florida from Spain in 1821 (by means of the Adams-Onís Treaty), the conflict increased. In 1823, the United States government decided to settle the Seminoles on a reservation in the central part of the territory.
A meeting to negotiate a treaty was scheduled for early September 1823 at Moultrie Creek, south of St. Augustine. About 425 Seminoles attended the meeting, choosing Neamathla, a prominent Mikasuki chief, to be their chief representative. Under the terms of the treaty negotiated there, the Seminoles were forced to place themselves under the protection of the United States and to give up all claim to lands in Florida, in exchange for a reservation of about four million acres (16,000 km²).
The reservation ran down the middle of the Florida peninsula from just north of present-day Ocala to a line even with the southern end of Tampa Bay. The boundaries were well inland from both coasts, to prevent contact with traders from Cuba and the Bahamas. Neamathla and five other chiefs, however, were allowed to keep their villages along the Apalachicola River.
Under the Treaty of Moultrie Creek, the United States government was obligated to protect the Seminoles as long as they remained peaceful and law-abiding. The government was supposed to distribute farm implements, cattle and hogs to the Seminoles, compensate them for travel and losses involved in relocating to the reservation, and provide rations for a year, until the Seminoles could plant and harvest new crops. The government was also supposed to pay the tribe US$5,000 a year for twenty years, and provide an interpreter, a school and a blacksmith for the same twenty years. In turn, the Seminoles had to allow roads to be built across the reservation and had to apprehend any runaway slaves or other fugitives and return them to United States jurisdiction.
- Mahon: 2-8, 18-37
- Mahon: 40-50
- Missall: 63-64.
- Missall: 64-65.
- Mahon, John K. (1985). History of the Second Seminole War 1835–1842 (Revised ed.). Gainesville, Florida: University of Florida Press. ISBN 0-8130-1097-7.
- Missall, John and Mary Lou Missall (2004). The Seminole Wars: America's Longest Indian Conflict. Gainesville, Florida: University Press of Florida. ISBN 0-8130-2715-2.
In the nucleus, DNA is associated with a class of structural and regulatory proteins called histones. The nucleosome refers to a unit of DNA spooled around a histone complex, typically structured as an octamer (two copies each of four different histone proteins). Stretches of DNA between individual nucleosomes are referred to as linker DNA. The highly ordered packaging of DNA and histones together is called chromatin.
Storage of DNA. The DNA in the nucleus of the cell is complexed in a very orderly fashion with a class of structural and regulatory proteins called histones. The combination of histones and DNA together is called chromatin.
Structure of Chromatin. Chromatin consists of DNA spooled around complexes of histone protein molecules called nucleosomes.
Chromatin structure influences the accessibility of promoter regions to transcription factors. The accessibility, or lack of accessibility, of promoter DNA represents a major means of transcriptional regulation and is at the center of epigenetic gene regulation. Relaxed chromatin, also known as euchromatin, allows transcription factors more ready access to promoter sites, thereby facilitating activation of transcription. In contrast, heterochromatin is densely compacted to varying degrees, often concealing DNA promoter sequences from potential transcription factors. Epigenetic markers (or marks) are the modifications to histone proteins and DNA that modulate the affinity of chromatin-binding proteins, in turn altering chromatin structure.
Researchers are interested in understanding the role of epigenetics in disease progression. The activity of enzymes that modify histones can be affected by environmental factors, such as toxins or stress, leading to alterations in gene transcription. Aberrant methylation has been well correlated with gene silencing and the development of several cancers.
Biomedical research is exploring whether epigenetic regulation can be exploited for the development of novel drug therapies. For example, abnormal changes in chromatin conformation associated with the development of cancer might be reversible with the proper therapeutic. Epigenetic therapies, such as manipulating the chromatin conformation of specific genes to activate or repress transcription, would appear to be far more achievable than genetic therapies requiring precise DNA sequence modifications.
Stem cell researchers are actively and aggressively exploring epigenetics. Stem cells are unique in that they have not assumed a particular cell fate and retain the potential to become a variety of tissue types — a cellular state referred to as pluripotency. Studies of the epigenetic changes that occur during stem cell differentiation have provided clues to how the epigenetic status of cells may impact pluripotency. Epigenetic manipulations may enable researchers to more accurately direct differentiation of cells into desired cell types and may aid in the generation of induced pluripotent stem cells, which are produced by reprogramming fully differentiated cells back toward a state of pluripotency.
While most autosomal genes are simultaneously expressed from both alleles, a small proportion are expressed in a monoallelic fashion. Imprinting is the process through which one of two alleles for a given gene is silenced in a parent-of-origin-specific pattern. Imprinting is considered a form of epigenetics because it leads to heritable changes in gene expression despite a lack of changes in the genomic code.
The silencing of one allele in imprinting is achieved through DNA methylation and histone modifications. Non-protein-coding RNAs greater than 200 nucleotides in length (long ncRNAs) play a role in genomic imprinting by recruiting chromatin remodeling complexes to regions of the genome that are silenced.
Genetic imprinting has been linked to the development of several diseases, including various forms of cancer. Various oncogenes and tumor suppressor genes appear to demonstrate patterns of imprinting. Given the potential role for gene imprinting in disease progression, scientists are actively investigating how epigenetic modifications impact monoallelic gene expression.
Researchers are also examining the potential for epigenetic markers to serve as biomarkers for disease. Biomarkers are diagnostic correlates that aid in the prognosis and diagnosis of diseases. Indeed, differences in epigenetic patterns between diseased and healthy tissues are apparent. Current efforts in biomarker research predominantly examine DNA methylation patterns in tissue biopsies (Laird 2010). It is possible that epigenetic changes may aid in disease diagnosis and help dictate the most appropriate therapeutic approaches for treating patients.
Ben-David U and Benvenisty N (2011). The tumorigenicity of human embryonic and induced pluripotent stem cells. Nat Rev Cancer 11, 268–277. PMID: 21390058
Carey N et al. (2011). DNA demethylases: A new epigenetic frontier in drug discovery. Drug Discov Today 16, 683–690. PMID: 21601651
Chi P et al. (2010). Covalent histone modifications — miswritten, misinterpreted, and miserased in human cancers. Nat Rev Cancer 10, 457–469. PMID: 20574448
Laird PW (2010). Principles and challenges of genome-wide DNA methylation analysis. Nat Rev Genet 11, 191–203. PMID: 20125086
Malecová B and Morris KV (2010). Transcriptional gene silencing through epigenetic changes mediated by noncoding RNAs. Curr Opin Mol Ther 12, 214–222. PMID: 20373265
Pembrey M (1996). Imprinting and transgenerational modulation of gene expression: Human growth as a model. Acta Genet Med Gemellol (Roma) 45, 111–125. PMID: 8872020
Perry AS et al. (2010). The epigenome as a therapeutic target in prostate cancer. Nat Rev Urol 7, 668–680. PMID: 21060342
Every time we eat food, our body has to digest and absorb the nutrients the food contains. Some of those nutrients include proteins, carbohydrates and fats. Different enzymes and digestive juices allow different nutrients to be digested and absorbed in the body. For example, fat is a nutrient that must be mixed with an emulsifier in order for it to be effectively digested and absorbed from the intestinal tract.
Fats are hydrophobic, which means they do not dissolve in water. This is evident when you pour a tablespoon of vegetable oil in a cup of water and watch it rise to the top. This property of fat makes it necessary for the body to create an environment in which fat can be digested or broken down.
Bile is what allows fat to be digested in the water environment of the intestines. When fat is present in the intestinal tract, bile is secreted from the gallbladder and is released into the intestines via the common bile duct. Bile has an attraction for both fat and water. Therefore, bile is able to penetrate large fat globules floating around in the intestines and can break them into smaller globules that are now also water soluble with a bile coating. This is called emulsification.
The smaller fat globules or droplets that are combined with bile are technically called micelles. Micelles are important for a couple of reasons. First, the micelles allow enzymes that break down fats to access the fat more easily. Before emulsification, fat digestion is really ineffectual. Second, micelles also allow for transport of the fat molecules from the intestinal tract into the intestinal cells. This allows for the absorption of the fat into the rest of the body.
What Exactly is Bile?
Due to bile, fats are able to form smaller droplets in the small intestines. The bile secretion is actually made of two major components that contribute to its ability to be both hydrophobic and hydrophilic, or water-soluble. Cholesterol is actually used to make bile acid in the liver and is the component that gives bile its non-water-soluble qualities. The bile acid is then combined with an amino acid from protein, which leads to water-soluble qualities.
- Understanding Nutrition Now, 12th edition; Ellie Whitney, Ph.D. and Sharon Rolfes, M.S.
- Ryan McVay/Lifesize/Getty Images
Over 40 million Americans suffer from anxiety disorders. Learn more about the many treatments available for anxiety.
What is Anxiety Disorder?
Anxiety disorders affect about 40 million American adults age 18 years and older in a given year, filling them with fearfulness and uncertainty. Anxiety disorders last at least 6 months and get worse if they are not treated. They commonly occur along with other mental or physical illnesses, including alcohol and substance abuse, which may mask anxiety symptoms or make them worse. In some cases, these other illnesses need to be treated before a person will respond to treatment for the anxiety disorder.
- Panic Disorder
Panic disorder is a real illness that can be successfully treated. It is characterized by sudden attacks of terror, usually accompanied by a pounding heart, sweating, weakness, faintness, or dizziness. During an attack, people with panic disorder may flush or feel chilled; their hands may tingle or feel numb; and they may experience nausea, chest pain, or smothering sensations. Panic attacks usually produce a sense of unreality, a fear of impending doom, or a fear of losing control. Attacks can occur at any time, even during sleep.
- Generalized Anxiety Disorder (GAD)
People with GAD go through the day filled with exaggerated worry and tension, even though there is little or nothing to provoke it. They anticipate disaster and are overly concerned about health issues, family problems, or difficulties at work. Sometimes just the thought of getting through the day produces anxiety. They can’t relax, startle easily, and have difficulty concentrating. Often they have trouble falling asleep or staying asleep. Physical symptoms that often accompany the anxiety include fatigue, headaches, muscle tension, muscle aches, difficulty swallowing, trembling, twitching, irritability, sweating, nausea, lightheadedness, having to go to the bathroom frequently, feeling out of breath and hot flashes.
- Social Phobia or Social anxiety disorder
Social phobia is diagnosed when people become overwhelmingly anxious and excessively self-conscious in everyday social situations. They have an intense, persistent, and chronic fear of being watched and judged by others and of doing things that will embarrass them. They can worry for days or weeks before a dreaded situation. The condition may interfere with work, school, and other activities and may make it hard to make and keep friends.
- Obsessive-Compulsive Disorder (OCD)
People with OCD have persistent, upsetting thoughts and use rituals (compulsions) to control the anxiety these thoughts produce. Most of the time, however, the rituals end up controlling them.
- Post-Traumatic Stress Disorder (PTSD)
PTSD develops after a terrifying ordeal that involved physical harm or the threat of physical harm. People with PTSD may startle easily, become emotionally numb, lose interest in things they used to enjoy, feel emotionally distant, and become irritable, aggressive, or violent. They repeatedly relive the trauma in their thoughts during the day and in nightmares when they sleep; these episodes are called flashbacks. They may lose touch with reality.
- Specific Phobia
Specific Phobia is an intense, irrational fear of something that actually poses little or no threat. Some of the common phobias are heights, escalators, tunnels, highway driving, closed-in places, water, flying, dogs, spiders, and injuries involving blood.
What treatments are available?
Cognitive Behavioral Therapy (CBT) is very useful in treating anxiety disorders. The cognitive part helps people change the thinking patterns that support their fears, and the behavioral part helps people change the way they react to anxiety-provoking situations, often alongside psychoeducation.
Exposure-based therapy has been used for many years to treat specific phobias. CBT is undertaken when people decide they are ready for it and with their permission and cooperation. There are no side effects other than the discomfort of temporarily increased anxiety.
Stress management techniques and meditation can help people with anxiety disorders calm themselves and may enhance the effects of therapy. Aerobic exercise may have a calming effect also.
Medications will not cure anxiety disorders, but will keep them in control while the person receives psychotherapy.
Please CONTACT ME for a consultation and take charge of your Health and Wellness in Mind, Body and Spirit. |
Perhaps you've fallen into the trap of thinking that a motherboard is just a slab of fibreglass for the all-important processor to slot into. Well, it's time to rethink things: the motherboard is the nervous system of your PC.
It provides the essential communication pathways that enable the rest of your machine to do its job, handles the video circuitry and connections to external devices, and even stands up to scrabbling hands ripping out graphics cards and grabbing at all those essential components. Like all true workhorses, when it does its job, you barely notice it.
Manufacturing them remains a challenge. True, processors have features that are so small that they can't be seen with the naked eye, but the amount of technology at work when building a motherboard is no less impressive.
It's an intensive process – and one that you're about to learn in detail.
1. Raw materials
Like any other electronic item, tracing the motherboard back to its roots leaves us staring at a hole in the ground – or, to be more accurate, a couple of them.
The two dominant constituents of a printed circuit board are fibreglass – which provides insulation – and copper, which forms the conductive pathways, taking us back to their birthplaces in a sand quarry and open-cast copper mine respectively.
Turning sand into glass and copper ore into metal are processes that are hundreds of years old, but what we do with the materials next is anything but ancient.
2. Fabricating copper-clad laminate
Molten glass is extruded to produce glass fibres, which are woven to create a sheet of fibreglass fabric. Next, the sheet is impregnated with epoxy resin and heated to partially cure the resin; the resulting sheet is called 'prepreg'. Multiple sheets of prepreg are stacked to produce a laminated sheet of the required thickness.
Sheets of copper foil are applied to both sides of the laminate and the sandwich is placed in a heated press. This completes the curing of the resin, making the laminate rigid and causing the layers to bond together.
The result is an insulating sheet of fibreglass with copper foil on both sides: copper-clad laminate. The overall thickness of the printed circuit board (PCB) is typically 1.6mm. This means that, for a six-layer board, the fibreglass laminates will be about 0.35mm thick and the copper foil will be about 0.035mm thick.
The fibreglass is thick enough to provide adequate mechanical strength and rigidity, and the copper is sufficient for good electrical and thermal conductivity.
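As a rough sanity check on the figures above, here is a minimal Python sketch of the stack-up arithmetic. The 1.6mm total thickness and 0.035mm copper foil come from the text; the number of fibreglass laminates the dielectric budget is split across is an assumption for illustration.

```python
# Stack-up arithmetic for a six-layer board (all figures in mm).
# Total thickness and copper foil thickness are taken from the text;
# the laminate count below is an assumption for illustration.
TOTAL_THICKNESS = 1.6
COPPER_FOIL = 0.035
COPPER_LAYERS = 6
LAMINATE_COUNT = 4  # assumed number of fibreglass laminates

copper_total = COPPER_LAYERS * COPPER_FOIL          # total copper: 0.21 mm
dielectric_budget = TOTAL_THICKNESS - copper_total  # fibreglass budget: 1.39 mm
per_laminate = dielectric_budget / LAMINATE_COUNT   # thickness per laminate

print(f"copper total:      {copper_total:.3f} mm")
print(f"dielectric budget: {dielectric_budget:.3f} mm")
print(f"per laminate:      {per_laminate:.4f} mm")
```

Splitting the 1.39mm of fibreglass across four laminates gives roughly 0.35mm each, matching the article's figure.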
3. Etching away unwanted copper
A photosensitive material called photo-resist is applied to both sides of the copper-clad laminate, totally covering the copper layers. This is usually a dry film process, in which thin films of solid photo-resist are laminated onto both sides of the board using equipment that's fairly similar to an office laminator.
Now a transparent artwork showing the pattern of the PCB's pads and tracks is placed over the photosensitive copper-clad laminate, and is then exposed to ultraviolet light. Ultraviolet is used rather than visible light so the board can be handled safely in daylight.
Where the photo-resist is exposed to ultraviolet, the chemicals polymerise, forming a plastic. Since the board has two copper layers, each of which has a photo-sensitive coating, this process is carried out twice using different artworks for each side.
Next, the board is immersed in a chemical solution to develop the latent image. The developer washes away the unexposed photo-resist, leaving only material that was polymerised and which corresponds to the pads and tracks. The areas of the copper film that aren't protected by the remaining polymerised portions of the photo-resist are etched away.
In an oxidation reaction, metallic copper is transformed into a copper salt, which is water-soluble and therefore washes away during etching. For quick etching, the board passes through a chamber in which the etchant is sprayed at high pressure and at a temperature of about 50°C.
After etching, the board is washed to remove surplus etchant and the remaining photo-resist is removed using an organic solvent. The insulating fibreglass board now has a pattern of copper tracks on each side that will form the circuit's interconnections. This assembly is called a core.
However, motherboards have a multilayer construction, which means they have more than two copper layers. This means that the above process has to be carried out several times. In the case of a six-layer motherboard, two of these cores will be needed to provide four of those layers. We'll see later how the other two layers are made.
4. Building up a stack
Double-sided cores are now sandwiched together to start the creation of a multilayer PCB. Two cores are used for a six-layer board (a common figure for motherboards), but they can't be stacked directly on top of each other because this would cause the copper tracks on the top of the bottom core to short with the tracks on the bottom of the top core.
To stop this from happening, a sheet of prepreg is placed between them. Sheets of prepreg are also applied to the top and bottom of the stack before it's subjected to pressure and a high temperature to complete the curing of the prepreg and bond everything together.
For a six-layer board, the stack would comprise: prepreg / core / prepreg / core / prepreg. This means that the final result will be: fibreglass / copper / fibreglass / copper / fibreglass / copper / fibreglass / copper / fibreglass.
5. Drilling the holes
Holes are now drilled through the board. First come the mounting holes, which will be used for mechanical fixing (bolting the motherboard into the PC's case).
Second are the holes that are used to accommodate the leads of through-hole components when they're soldered to the board in a couple of steps' time.
Finally, there are the tiny holes that form vias (vertical interconnect access), which make electrical connections between the various copper layers – or will, when we get to routing, testing and QA.
Despite the use of a high-speed, numerically controlled drilling machine, drilling can be a very time-consuming process, especially if lots of different hole sizes are required. For this reason, it's common to stack boards together so that several are drilled at once, saving time and money. |
HARVARD UNIVERSITY—A new study in the journal Nature Sustainability overturns long-held interpretations of the role humans played in shaping the American landscape before European colonization. The findings give new insight into the rationale and approaches for managing some of the most biodiverse landscapes in the eastern U.S.
The study, led by archaeologists, ecologists, and paleoclimatologists at Harvard, Emerson College and elsewhere, focuses on the coast from Long Island to Cape Cod and the nearby islands of Nantucket, Martha’s Vineyard, Block Island, and Naushon – areas that historically supported the greatest densities of Native people in New England and today are home to the highest concentrations of rare habitats in the region, including sandplain grasslands, heathlands, and pitch pine and scrub oak forests.
“For decades, there’s been a growing popularization of the interpretation that, for millennia, Native people actively managed landscapes – clearing and burning forests, for example – to support horticulture, improve habitat for important plant and animal resources, and procure wood resources,” says study co-author David Foster, Director of the Harvard Forest at Harvard University. This active management is said to have created an array of open-land habitats and enhanced regional biodiversity.
But, Foster says, the data reveal a new story. “Our data show a landscape that was dominated by intact, old-growth forests that were shaped largely by regional climate for thousands of years before European arrival.”
Fires were uncommon, the study shows, and Native people foraged, hunted, and fished natural resources without actively clearing much land.
“Forest clearance and open grasslands and shrublands only appeared with widespread agriculture during the European colonial period, within the last few hundred years,” says Wyatt Oswald, a professor at Emerson College and lead author of the study.
The authors say the findings transform thinking about how landscapes have been shaped in the past – and therefore how they should be managed in the future.
“Ancient Native people thrived under changing forest conditions not by intensively managing them but by adapting to them and the changing environment,” notes Elizabeth Chilton, archaeologist, co-author of the study, and Dean of the Harpur College of Arts and Sciences at Binghamton University.
To reconstruct historical changes to the land, the research team combined archaeological records with more than two dozen intensive studies of vegetation, climate, and fire history spanning ten thousand years. They found that old-growth forests were predominant for millennia but are extremely uncommon today.
“Today, New England’s species and habitat biodiversity are globally unique, and this research transforms our thinking and rationale for the best ways to maintain it,” says Oswald. “It also points to the importance of historical research to help us interpret modern landscapes and conserve them effectively into the future.”
The authors also note the unique role that colonial agriculture played in shaping landscapes and habitat. “European agriculture, especially the highly varied activity of sheep and cattle grazing, hay production, and orchard and vegetable cultivation in the 18th and 19th centuries, made it possible for open-land wildlife species and habitats that are now rare or endangered – such as the New England cottontail – to thrive,” says Foster. Open-land species have declined dramatically as forests regrow on abandoned farmland, and housing and commercial development of both forests and farms have reduced their habitat.
Foster notes that the unique elements of biodiversity initiated through historical activities can be encouraged through analogous management practices today.
“Protected wildland reserves would preserve interior forest species that were abundant before European settlement,” he says. “Lands managed through the diversified farming and forestry practices that created openlands and young forests during the colonial period would support another important suite of rare plants and animals.”
For successful conservation models that leverage this historical perspective, the authors point to efforts by The Trustees of Reservations, the oldest land trust in the world, which manages more than 25,000 acres in Massachusetts embracing old and young forests, farms, and many cultural resources. The organization uses livestock grazing to keep lands open for birds like bobolinks and meadowlarks, which in turn supports local farmers and produces food for local communities.
Jocelyn Forbush, Executive Vice President for the Trustees, says, “Maintaining the legacy of our conserved openlands in Massachusetts is an important goal for The Trustees and we are increasingly looking to agricultural practices to yield a range of outcomes. In particular, we are employing grazing practices to support the habitats of our open and early successional lands in addition to the scenic and cultural landscapes that shape the character of our communities.”
Article Source: HARVARD UNIVERSITY news release |
Cranial nerve 8 (CN 8) contains two components: auditory (cochlear) and vestibular. Both begin in the inner ear and travel to the brainstem: the auditory component projects to the cochlear nuclei (at the pontomedullary junction) and the vestibular component projects to the vestibular nuclei (in the medulla).
Auditory information travels from the inner ear through the auditory (cochlear) portion of CN 8 to arrive at the cochlear nuclei at the pontomedullary junction (Fig. 12–1). The cochlear nuclei project to the inferior colliculi of the lower midbrain via the lateral lemniscus, and also project to the superior olives. Each inferior colliculus projects to the ipsilateral medial geniculate nucleus (MGN) of the thalamus, and each MGN projects to the ipsilateral auditory cortex in the superior temporal gyrus (Heschl’s gyrus).
The auditory pathway. See text for explanation. Reproduced with permission from Martin J: Neuroanatomy Text and Atlas, 4th ed. New York: McGraw-Hill Education; 2012.
Auditory information crosses to become bilateral early in its connections within the brainstem, so unilateral hearing loss can only occur due to pathology of the inner ear or CN 8 (or rarely the entry zone of CN 8 or cochlear nuclei at the pontomedullary junction). Central lesions (in the brainstem or temporal lobe) only rarely cause deafness, and must be extensive and bilateral to do so. Therefore, central etiologies of deafness are usually associated with other signs due to involvement of neighboring structures. Left temporal lobe lesions can lead to deficits in word processing (pure word deafness) and right temporal lobe lesions can cause deficits in music processing (amusia).
Hearing loss due to a peripheral lesion is called conductive hearing loss if it is caused by problems in the outer or middle ear, and sensorineural hearing loss if it is due to problems in the cochlea or auditory component of CN 8. Both conductive and sensorineural etiologies of hearing loss may be acquired or may have a congenital/genetic basis. Acquired causes of hearing loss are listed below.
Acquired causes of conductive hearing loss include:
- Cerumen impaction (earwax obstructing the ear canal)
- Otitis media/middle ear effusion
- Tympanic membrane perforation
- Otosclerosis
- Cholesteatoma
Acquired causes of sensorineural hearing loss include:
- Internal auditory artery infarct (the internal auditory artery [also called the labyrinthine artery] is usually a branch of the anterior inferior cerebellar artery [AICA])
- Sudden sensorineural hearing loss (often idiopathic; may respond to steroids)
- Ménière’s disease (see “Ménière’s Disease” below)
- Vestibular schwannoma (also called acoustic neuroma; see “Vestibular Schwannoma” in Ch. 24)
- Ototoxic medications (e.g., aminoglycosides)
- Sequela of meningitis (especially in children)
- Neurofibromatosis type II with bilateral vestibular schwannomas (see “Neurocutaneous Syndromes” ...)
Asthma rates continue to grow around the world. In the US, about 1 out of every 10 children now has the potentially life threatening condition. Although there are still many mysteries surrounding the disease, it’s long been known that physical factors such as pollen, dust, pet dander, pollution, smoke, and mold can both trigger the condition and contribute to its development. But new research suggests that psychological factors such as stress, neighborhood violence, and abuse can be just as influential in asthma development.
How? Asthma occurs when the body’s immune system overreacts to irritants. When children are exposed to too much stress for too long, their adrenal glands overproduce the chemicals cortisol and adrenaline. These chemicals shift the body’s immune system into overdrive, fueling numerous health issues including asthma.
A recent study showed that kids who face one traumatic event at home (divorce, death of a parent, abuse, etc.) are 28% more likely to develop asthma. Kids who face four traumatic events are 73% more likely to develop asthma. This has hit children especially hard in cities like Detroit, where violence is high and 25% of children under 6 live in homes with no working adult.
“You can’t ignore it anymore,” said Dr. Rosalind Wright of Kravis Children’s Hospital. “The data is there that says psychological stress is a factor, just like these other factors.”
Researchers are now looking into counseling as a way to help children better cope with their stress and thereby better manage their asthma.
Hairs on bat wings aid flight
Hairy tale Treating bats with a depilatory cream has led to the discovery that the microscopic hairs on their wings are crucial for flight control.
The bat is the only mammal truly capable of flight. Its wings are actually flexible membranes spread between its arms and hands.
The authors of the research article, published this week in Proceedings of the National Academy of Sciences, report that the tiny hairs spread across a bat wing's dorsal and ventral surfaces act like the Pitot tubes on aircraft wings, helping to gauge speed and control flight.
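The Pitot-tube analogy can be made concrete with a little arithmetic (this sketch is illustrative, not from the paper): a Pitot tube infers airspeed from the dynamic pressure of oncoming air via Bernoulli's relation, Δp = ½ρv². A minimal version in Python, assuming standard sea-level air density:

```python
import math

def pitot_airspeed(delta_p_pa: float, air_density: float = 1.225) -> float:
    """Airspeed (m/s) from the dynamic pressure measured by a Pitot tube.

    Bernoulli's relation gives delta_p = 0.5 * rho * v**2,
    so v = sqrt(2 * delta_p / rho).
    """
    return math.sqrt(2.0 * delta_p_pa / air_density)

# A dynamic pressure of about 15.3 Pa in sea-level air corresponds
# to an airspeed of 5 m/s (0.5 * 1.225 * 5**2 = 15.3125 Pa).
print(round(pitot_airspeed(15.3125), 2))  # 5.0
```

The bat's wing hairs presumably report something analogous: nerve signals that vary with local airflow, rather than a pressure gauge reading.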
Dr Susanne Sterbing-D'Angelo, of the Institute for Systems Research at the University of Maryland in the USA, used a scanning electron microscope to map the distribution of the hairs in two species of bat: the big brown bat (an insect eater) and the short-tailed fruit bat.
They showed that the wing hairs were typically arranged in rows, with some minor differences between the two species of bats.
The researchers then demonstrated that stimulation of the wing hairs, with brief puffs of air from different directions, led to stimulation of the sensory nerve cells at the base of the hairs. This was distinct from tactile responses due to physical indentation of the skin in conditions of high level airflow.
They then used a common depilatory cream to show that the loss of hair removed the nerve cell response.
Obstacle training course
To evaluate the effect of hair loss on flight, the bats were trained to fly through an obstacle course and performance was measured before and after removing different sections of wing hair.
The authors found bats whose wing hairs had been removed flew faster and made wider turns. They believe nerve cell receptors at the base of these hairs detect turbulent air flow and likely help to stabilise flight.
This finding is in keeping with advice given to airline pilots to increase speed when recovering from a stall.
Hair patterns to suit lifestyle
Bob Bullen, who runs an ecological consultancy in Western Australia known as Bat Call, says there are two aspects to aerodynamics: performance, which is energy management, and control, which is the ability to manoeuvre and be agile.
"What these guys have done, which is absolutely fabulous, is to prove in an experimental situation that the air patterns on the wings work in the way that we have been proposing," says Bullen. "They give sensory feedback to the bat that enables them to fly right to the limits of performance and control."
Bullen has been involved in research that has described the patterns of hair formation in the 37 species of bat found in Western Australia. He says the patterns are consistent with the bats' different foraging strategies.
From bats that fly like helicopters to those that fly like butterflies, Bullen explains that each has evolved to optimise success in its habitat.
"We now have evidence that there's a correlation between observed hair patterns and aerodynamic ability, which is reflected by the different foraging strategies," he says.
Working out all the features and factors that affect flight patterns in different species of bat (of which there are approximately 1000 worldwide) will have wider implications than first envisaged.
"Militaries around the world are heavily into what they call micro-air vehicles or MAVs," says Bullen.
He says understanding the biological basis for the breadth of diversity in bats' wing hair patterns will add power to this type of technological research.
Jose made this sketch of a battery and light bulb for science class. If this were a real setup, the light bulb wouldn’t work. The problem is the loose wire on the left. It must be connected to the positive terminal of the battery in order for the bulb to light up.
Q: Why does the light bulb need to be connected to both battery terminals?
Electric Circuit Basics
A closed loop through which current can flow is called an electric circuit. In homes in the U.S., most electric circuits have a voltage of 120 volts. The amount of current (amps) a circuit carries depends on the number and power of electrical devices connected to the circuit. Home circuits generally have a safe upper limit of about 20 or 30 amps.
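To see why the number and power of devices determines the current, note that each device draws a current equal to its power divided by the circuit voltage (I = P / V). A quick illustrative calculation (the device wattages below are hypothetical, not from the text):

```python
def total_current_amps(device_watts, voltage=120.0):
    """Total current drawn by devices on one circuit, using I = P / V."""
    return sum(device_watts) / voltage

# Hypothetical devices sharing one 120 V household circuit (watts):
devices = [1500, 600, 100]  # e.g., space heater, microwave on low, lamp
current = total_current_amps(devices)
print(round(current, 2))   # 18.33
print(current <= 20)       # True: still under a 20 A breaker limit
```

Adding one more high-power device to this circuit would push the total past 20 A, which is why breakers or fuses open the circuit at that point.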
Parts of an Electric Circuit
All electric circuits have at least two parts: a voltage source and a conductor. They may have other parts as well, such as light bulbs and switches, as in the simple circuit seen in the Figure below.
- The voltage source of this simple circuit is a battery. In a home circuit, the source of voltage is an electric power plant, which may supply electric current to many homes and businesses in a community or even to many communities.
- The conductor in most circuits consists of one or more wires. The conductor must form a closed loop from the source of voltage and back again. In the circuit above, the wires are connected to both terminals of the battery, so they form a closed loop.
- Most circuits have devices such as light bulbs that convert electrical energy to other forms of energy. In the case of a light bulb, electrical energy is converted to light and thermal energy.
- Many circuits have switches to control the flow of current. When the switch is turned on, the circuit is closed and current can flow through it. When the switch is turned off, the circuit is open and current cannot flow through it.
When a contractor builds a new home, she uses a set of plans called blueprints that show her how to build the house. The blueprints include circuit diagrams. The diagrams show how the wiring and other electrical components are to be installed in order to supply current to appliances, lights, and other electric devices. You can see an example of a very simple circuit in the Figure below. Different parts of the circuit are represented by standard circuit symbols. An ammeter measures the flow of current through the circuit, and a voltmeter measures the voltage. A resistor is any device that converts some of the electricity to other forms of energy. For example, a resistor might be a light bulb or doorbell.
The circuit diagram on the right represents the circuit drawing on the left. Below are some of the standard symbols used in circuit diagrams.
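Although this text doesn't introduce it, Ohm's law (V = IR) is the relation that ties the voltmeter and ammeter readings in such a diagram to the resistance of the resistor. A small sketch with made-up readings:

```python
def resistance_ohms(voltage_v, current_a):
    """Ohm's law rearranged: R = V / I.

    Given the voltmeter reading (volts) across a resistor and the
    ammeter reading (amps) through it, return the resistance in ohms.
    """
    return voltage_v / current_a

# Hypothetical readings: voltmeter shows 6 V, ammeter shows 0.5 A.
print(resistance_ohms(6.0, 0.5))  # 12.0
```

The same relation works in any direction: knowing any two of voltage, current, and resistance fixes the third.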
Q: Only one of the circuit symbols above must be included in every circuit. Which symbol is it?
- An electric circuit is a closed loop through which current can flow.
- All electric circuits must have a voltage source, such as a battery, and a conductor, which is usually wire. They may have one or more electric devices as well.
- An electric circuit can be represented by a circuit diagram, which uses standard symbols to represent the parts of the circuit.
- electric circuit: Closed loop through which current can flow.
- What is an electric circuit?
- Which two parts must all electric circuits contain?
- Sketch a simple circuit that includes a battery, switch, and light bulb. Then make a circuit diagram to represent your circuit, using standard circuit symbols.