Slide 2: Objectives

- Explain the role of indicators in monitoring and evaluation for ACSM.
- Describe the characteristics of well-defined indicators.
- Demonstrate how to develop indicator descriptions.

Facilitator notes: In this session, we will explain what indicators are, what makes a good indicator, and how to develop effective indicator descriptions.
Slide 3: Creating a Framework

Facilitator notes: Let’s take another look at our framework for our case example.

Ask: How will we know we have these outputs? Solicit ideas and discuss. The easiest way is to count them. But how meaningful will that simple number be? How can we add some context to that? We could compare it to a target or baseline.

Ask: How will we measure these outcomes? What will indicate that we have achieved these? What will our specific evidence be? Solicit ideas and discuss.

Now let’s look at impact. What evidence could we examine to prove this impact? Solicit ideas and discuss.

We just created what are called “indicators” for this framework.
Slide 4: What Is an Indicator?

- Clues, signs, or markers.
- Used to track inputs, activities, outputs, and outcomes.
- Used to measure progress toward the goal and objectives.
- The Crow and the Pitcher!

Facilitator notes: Indicators are the signs or markers that we watch for to “indicate” what is happening. We use indicators to keep track of our planned work and the results of that work. When we collect and analyze the correct indicators over time, we know (1) that we carried out the work we said we would do, and (2) whether or not the work achieved the objectives it was intended to support.

Think back to our story about the Crow and the Pitcher. Can you remember some of the indicators from that story? How could the crow measure what he did and the effect afterward? Encourage responses. (Counting the number of pebbles, the level of the water, the amount of thirst.)
Slide 5: Indicators Are Part of the M&E Plan

- Framework
- Indicators
- Data Collection
- Data Quality
- Data Use and Reporting
- Evaluation
- Strategy
- Budget

Facilitator notes: Indicator descriptions are another part of your M&E plan that we are building this week. Your monitoring and evaluation efforts will be designed to help you collect these indicators. Without indicators, you will not know what data you will need as evidence of your ACSM success.
Slide 6: Examples

| Framework Component | Indicator |
| --- | --- |
| INPUT: IPCC training (curriculum) | 100 copies of training manuals, slide sets, and handouts on IPCC |
| ACTIVITY: Lobbying | Meeting with the finance minister and NTP director |
| OUTPUT: Patient coalition meeting | Number of patients attending the meeting |
| OUTCOME: Increased funding | Percentage of NTP budget covered by the Ministry of Health |
| OUTCOME: Increased support for community-based DOTS | Policy change to allow volunteer health workers to serve as treatment supporters* |
| OUTCOME: Increased political support for TB control | Parliament declares TB a national emergency* |

Facilitator notes: Here are some examples of indicators for different components of a framework. You can see that they offer more detail to define each component, which helps everyone understand it in the same way. In the first example, we use an indicator to define the INPUT of curriculum. Read INPUT example. In example #2, we have an indicator to define our ACTIVITY of “lobbying.” Read ACTIVITY example. Indicators can also define OUTPUTS. Read OUTPUT example. Finally, we have some indicators that explain some OUTCOMES. Read OUTCOME examples.

Notice that indicators can be quantitative, expressed numerically as counts, percentages, rates, etc. They can also be qualitative (marked with an asterisk), expressed as a description: something you witness or observe but cannot count or calculate with numbers.
Slide 7: Indicators: Important M&E Evidence

- Goal: What am I trying to achieve?
- Objective: What are the major steps I need to take to reach my goal?
- Activities: What am I going to do to reach my objective?
- Inputs: What resources do I need to complete each activity?
- Outputs: What will the immediate product of my activities be?
- Indicators MONITOR progress: Have I done what I said I would do?
- Outcomes: What do I hope will happen as a result of my activities?
- Impact: What do I think these activities will contribute to my goal?
- Indicators EVALUATE results: Did my work have the desired effect and contribute to my goal?

Facilitator notes: As we said before, indicators help you answer important monitoring and evaluation questions. Activities, inputs, and outputs relate to monitoring, so indicators for these components will help you answer the question, “Have I done what I said I would do?” Outcomes and impact require evaluation, so your indicators will answer the question, “Did my work have the desired effect and contribute to my goal?” Review slide in more detail.

Refer to Handout 3.1, Common ACSM Indicators. This handout offers a list of some useful output and outcome indicators for common advocacy, communication, and social mobilization activities. This is not a complete list, but it may be a helpful place to get some ideas. Briefly review Handout 3.1.
Slide 8: Outputs versus Output Indicators

An output is the immediate result of an activity:
- # of people trained
- # of people with TB symptoms going to a health facility for evaluation
REPORT WHAT YOU ACTUALLY ACCOMPLISHED.

An indicator for an output measures your result against a target value:
- # of people actually trained versus how many you wanted to train
- # of people with TB symptoms going for evaluation versus how many you thought would go
ASSESS WHETHER YOU REACHED YOUR TARGET VALUE.

Facilitator notes: Many people get confused when it comes to indicators for outputs. At first it would seem that an output indicator might be the same as the output itself: you will know how many people you trained by simply counting them. But there is a subtle difference between an output and its indicator. An output is simply what you can count right after the activity is completed. That number alone does not tell you very much; there is no indication of whether it is a good number or not unless we compare it to our target. When we compare the output with its target, we have a better idea of whether we were successful in producing that output. That comparison becomes our output indicator. Review slide in more detail. A worked example follows.
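To make the distinction concrete, here is a worked example with hypothetical numbers (they are not taken from the case example):

$$\text{Output: } 42 \text{ people trained.} \qquad \text{Output indicator: } \frac{42 \text{ actually trained}}{50 \text{ targeted}} = 84\% \text{ of target reached.}$$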
Slide 9: Inputs → Activity → Outputs → Outcome → Impact

NTP Goal: Reduce morbidity and mortality due to TB in Country X.
- Indicator: Number of TB deaths per 100,000 per year

NTP Objective: Increase case detection rate from 42% to 60% by 2015.
- Indicator: Case detection rate

ACSM Objective: Raise knowledge of TB symptoms and TB services to increase the number of people in City X seeking care for TB symptoms at DOTS centers by 30% by December.
- Indicator: # of people requesting screening at the City X DOTS center compared to baseline

Inputs: Funding; list of subway routes.
- Indicator: Lists developed (Yes/No)

Activity: Develop and produce subway ads.
- Indicator: Ads produced according to schedule (Yes/No)

Outputs: # of ads produced; # of subway trains with ads.
- Indicator: Number of ads and subway trains versus target

Outcome: Increased knowledge of TB and DOTS centers.
- Indicator: % of people who know two TB symptoms and have heard of the DOTS center

Impact: Increased # of TB cases detected.
- Indicators: # of people requesting screening at the City X DOTS center; difference in case detection rate between intervention and control cities

Facilitator notes: THIS SLIDE IS ANIMATED. Let’s look at another framework, this one for a communication objective. Starting with the NTP goal, review each component of the framework in order, then CLICK to display the indicator that goes with it.
Slide 10: Main Steps of Creating Indicators

1. Select an indicator.
2. Test it against criteria.
3. Write a description.

Facilitator notes: Now let’s talk about the process of creating effective indicators. There are three main steps. First, we select an indicator we think might work. Then we test that indicator against some basic criteria that we will talk about next. Then we revise that indicator with a more complete description.
Slide 11: Characteristics of Good Indicators

Facilitator notes: These are the characteristics of good indicators. Briefly review each characteristic. Once you have selected an indicator, ask yourself these questions to see if your indicator “passes the test.” If you answer no to any of these questions, you need to revise your indicator.
Slide 12: Is This Indicator Valid?

- Does it tell us what we really want to know?
- Could it actually measure something else?
- Example: % of Ministry of Health budget dedicated to TB, as an indicator of government commitment to TB control.
- Improved indicator: % of NTP budget covered by government (compared to % covered by donors).

Facilitator notes: THIS SLIDE IS ANIMATED. In this case, valid means correct or true. Does the indicator tell us what we really need to know? Could the indicator measure something else? In this example, we want to monitor the budget dedicated to TB over time as an indicator of the government’s commitment to TB control. What are some potential problems with this indicator? (It depends on the denominator. What if the Ministry of Health budget changes a lot over time? What if the NTP has the money it needs, regardless of its percentage of the Ministry of Health budget? What if all of that money comes from external donors? How could you improve this indicator?) Discuss possible problems and revisions. CLICK to display the new indicator. This new indicator will tell us over time whether the government steps up and provides more of the budget for TB. It is more valid because it will not measure something else, like changes in the overall Ministry of Health budget or in donor contributions.
Slide 13: Is This Indicator Reliable?

- Will everyone interpret or calculate it the same way?
- Example: Number of partners actively participating in the advocacy coalition.
- Improved indicator: Number of partners who attend at least 75% of coalition meetings.
- Example: Smear conversion rate.

Facilitator notes: THIS SLIDE IS ANIMATED. A reliable indicator is one that everyone knows how to measure or calculate in exactly the same way. Is this indicator clear to everyone? Discuss elements that are not clear. (What does it mean to “actively” participate? Just come to some meetings? How many? Lead or join a committee?) How could we make this indicator clearer? CLICK to display the new indicator. CLICK to display the second example. Our indicator in this example is “smear conversion rate.” How reliable is this indicator? Discuss possible problems and revisions. (No clear definition. What is the denominator?) To make sure we have reliable indicators, we first need very clear definitions. Then we need to train our staff on those definitions.
Slide 14: Is This Indicator Activity Specific?

- Does it tell us about our activity only? Could any other factor (ACSM or not) influence this indicator?
- Example: % of TB screening clients receiving a smear test, as an indicator that providers from our training are referring TB suspects properly.
- Improved indicator: % of TB screening clients with documented referral for smear microscopy.

Facilitator notes: THIS SLIDE IS ANIMATED. How specific is this indicator to our activity? Could anything else increase or decrease the number of smear tests (e.g., another program is incentivizing testing; our providers are referring, but the laboratories have chronic stockouts of supplies)? In one country, the NTP invested a lot in IPCC training for DOTS nurses, including a review of referral practices and an emphasis on providing patients with the information they need in order to obtain a smear test and a referral from the clinic to the laboratory. The first indicator they thought of using was the percentage of TB suspects receiving a smear test. But in this context, the laboratories were in terrible shape. In reality, the DOTS nurses were doing exactly what the NTP wanted, but TB suspects did not get the smear test because the laboratories did not have the supplies they needed. So we had to think of a better indicator. CLICK to display the new indicator. We must always consider what other factors besides ACSM could affect our indicators. This is where our gap analysis of all of the individual-, group-, and system-level barriers is really helpful.
Slide 15: Is This Indicator Feasible?

- Do we have a realistic data source? Do we have enough money and staff?
- Example: % of the population with correct knowledge about TB symptoms, annually (an indicator of a successful communication activity).
- Alternative indicator: Number of people with possible TB symptoms presenting for diagnosis at specified DOTS clinics.

Facilitator notes: THIS SLIDE IS ANIMATED. Feasibility may be the most important factor in our choice of indicators. Here is an example. If we are investing significantly in communication efforts to improve knowledge about TB in communities (with the ultimate goal of improving case detection), it would be nice to know the percentage of the population with correct knowledge about TB on an annual basis. But can we afford to collect it this often? What do you think? Discuss elements that are not feasible. (Can we really afford a KAP survey every year? Do we have expertise for population-based surveys?) Knowledge, attitudes, and practices (KAP) surveys are very expensive, take a long time, and require specific expertise. Perhaps there is another result of our communication efforts that may be easier to collect and report. CLICK to display the new indicator.
Slide 16: Is This Indicator Comparable?

- Do the results mean the same thing in different geographic areas at different times?
- Example: Number of nongovernmental organizations in each region mobilized to participate in World TB Day.
- Alternative indicator: % of nongovernmental organizations in each region mobilized to participate in World TB Day.

Facilitator notes: THIS SLIDE IS ANIMATED. When indicators are comparable, they represent the same result or outcome in different places over time. It is important that we compare apples to apples and not apples to oranges. Do you think this indicator is comparable across regions? What could be different across regions for this indicator? (The total number of NGOs in the region [which influences the denominator], the weather of the region in March, the physical size of the region.) CLICK to display the new indicator.
Slide 17: Indicator Descriptions

- What is the complete definition?
- What is the data source for this indicator?
- How do we calculate the value of the indicator? (Numerator, denominator; qualitative criteria.)

Facilitator notes: THIS SLIDE IS ANIMATED. After we select the right indicators, we have to determine how we will define, collect, analyze, and report them. These are called indicator descriptions. With indicator descriptions, everyone understands our activities and results to mean the same thing. These are the basic elements of an indicator description. CLICK to display each bullet and briefly explain. DEFINITION: Every part is clearly and fully explained. SOURCE: All indicators need a data source. Specify the form, database, reports, lists, etc., where the data should be found. CALCULATION: For numerical indicators, such as percentages, specify the numerator, denominator, and calculation. If you have simple counts, make sure it is clear who is included in your count and who is excluded. For a qualitative indicator (e.g., policy change or political commitment), it is necessary to describe criteria for reporting the values. Keep in mind, we may not be able to select the perfect indicator. We have to make choices, and sometimes our choice may be influenced more by what is practical and easiest than by what is the most accurate or reliable indicator.
Slide 18: Indicator Description

Indicator: Percentage of district DOTS nurses receiving IPCC training

| Element | Description |
| --- | --- |
| Definition | Percentage of DOTS nurses who attend the full training and receive a completion certificate. |
| Data Source | IPCC training attendance sheet and certificate list submitted to NTP. |
| Calculation | Numerator: number of nurses completing the training. Denominator: total number of DOTS nurses in the district. |

Facilitator notes: This is a full description for this indicator. Review slide in detail. A worked calculation follows.
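For clarity, here is how the calculation in this description would be applied, using hypothetical counts (they are not from the case example):

$$\text{Indicator value} = \frac{\text{nurses completing the training}}{\text{total DOTS nurses in the district}} \times 100 = \frac{30}{40} \times 100 = 75\%$$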
Slide 19: Questions?

Facilitator notes: Next, we are going to practice writing an indicator description for our case example. But before that, are there any questions? Where do you feel confused?
The oldest pyramids were built by the people of Mesopotamia, who called these structures ‘ziggurats’ and coated them in a golden color. The Pyramids of Giza are the biggest pyramids in Egypt, but the greatest number of pyramids in the world is in Sudan. Every pyramid has at least three outside surfaces that are triangular in shape. The most common form of pyramid, however, is the square type: it has four triangle-shaped surfaces and a square base. So now let’s discover more in these pyramid facts for kids.
A Quick Guide To Pyramid Facts For Kids
Largest Pyramid in the World (by size): Great Pyramid of Cholula (Mexico)
Tallest Pyramid in the World: Great Pyramid of Giza (Egypt)
Number of Pyramids in Sudan: 220
Number of Pyramids in Greece: 2
Number of Pyramids in Spain: 6
Largest Native American Pyramid: Monks Mound
Basic Pyramid Facts For Kids
- The pyramids of Egypt are the most famous pyramids, and a few of them are among the biggest structures in the world.
- Egyptians started making these pyramids in about 2700 BC and continued up till 1700 BC.
- The first Egyptian pyramid was constructed by the king named Djoser. He was the king of the Third Dynasty of Egypt.
- The pyramids of Egypt were coated with white-colored limestone.
- The name of the architect who built the first pyramid was Imhotep.
- The biggest pyramids of Egypt are the Pyramids of Giza.
- As of 2008, archaeologists had discovered 135 Egyptian pyramids.
- The Great Pyramid of Egypt is one of the biggest pyramids in the world.
- Until 1311 AD, the Great Pyramid of Egypt was the tallest construction in the world. In 1311, it was surpassed by ‘The Cathedral Church of the Blessed Virgin Mary of Lincoln’ in England.
- The majority of Egyptian pyramids are situated near Cairo, the capital of Egypt.
- The Egyptian royal pyramids were the last pyramids to be built; the last king to build one was Ahmose.
- Sudan has more pyramids than any other country in the world. Today, it is home to 220 pyramids.
- Nubia is a region located along the River Nile in Sudan. The Nubian Pyramids were built in this area during the Kingdom of Kush, an African kingdom.
- It is believed that there were almost 240 Nubian Pyramids in Sudan.
- Nubian Pyramids are also known as Pyramids of Kush.
- The Igbo people are native to Nigeria and are one of Africa’s biggest ethnic groups. They built pyramids known as the Nsude Pyramids.
- The material used in these ten pyramids was largely mud and dirt.
- These pyramids were built as holy places for their gods, who were believed to sit at the top of the structures.
- According to Pausanias, a Greek geographer of the second century AD, there were two pyramids in Greece. However, we do not know whether these structures were similar to the pyramids of Egypt, because they have since disappeared.
- The remains of one of the Greek pyramids are at Hellenikon; a second pyramid exists at Ligourio, a village.
- These two pyramids do not resemble Egyptian pyramids. They are rectangular in shape rather than square and had big spaces inside them.
- These structures were probably constructed during the 4th and 5th centuries.
- Located in the Chacona district of Tenerife, in the Canary Islands, there are 6 pyramids that were constructed of lava stone. These 6 pyramidal structures are now called the Pyramids of Güímar.
- The architects did not use mortar in these structures.
- They are rectangular in shape.
- They date from the 19th century AD.
- In the town of Güímar, only 6 pyramids still exist today; previously, there had been 9 of them.
- One of the distinguishing features of Chinese pyramids is that all of them have flat tops.
- They are similar in shape to the Mexican pyramids called Teotihuacan pyramids.
- There are 38 pyramids in the Chinese city Xi’an in the province of Shaanxi.
- The most famous of all these tombs is that of the First Qin Emperor, Qin Shi Huang. It took 38 years to build this tomb (246 BC – 208 BC).
- Mesoamerican pyramids are stepped pyramids. A main purpose of these structures in Mexico was human sacrifice, a spiritual tradition in which people were offered to a god.
- They are similar in shape to Mesopotamia’s pyramids called Ziggurats.
- By size, the Great Pyramid of Cholula is by far the biggest pyramid in the world. It is located in Puebla, a state in Mexico.
- The Great Pyramid of Cholula was begun in the third century BC and completed in the ninth century AD.
- The Pyramid of the Sun is also found in Mexico. It is the third biggest pyramid in the world.
- The Native Americans constructed big pyramids called platform mounds.
- They were made up of soil.
- The Monks Mound in Illinois is the largest structure built by the Native Americans. It was finished in about 1100 AD.
- Located in the Italian city of Rome is the Pyramid of Cestius, built by the Romans during the first century BC.
- It is about 27 meters tall.
- There was another pyramid, called ‘Meta Romuli’, located in the Borgo. However, it no longer exists because it was demolished in the 15th century.
Did you find these pyramid facts for kids helpful? Are they what you were looking for? Please comment and help us improve this article. Thanks for reading! |
Due to the energy transferred during an impact event, the Earth’s surface is extensively modified at the location of the collision. As a consequence, impact structures have the potential to become economically significant sites for quarries, mines, hydroelectric reservoirs, and hydrocarbon reservoirs. While there are several examples of impact structures that have been exploited for building material and water reservoirs, most of the economic interest concerns the formation of ore bodies (concentrations of metal and metal-bearing minerals) and hydrocarbons. These deposits vary from small, local operations to some of the world’s largest deposits, making impact craters economically significant. Most of the resource deposits are classified based on their timing relative to the impact and are described as progenetic, syngenetic, or epigenetic.
Progenetic deposits are economic deposits that originated prior to the impact event; the impact event moved the deposits, in some cases bringing them to the surface or near-surface and making it possible to access them. Processes such as structural displacement (movement of large amounts of mass within the subsurface) and brecciation lead to the concentration of deposits such as iron, uranium, gold, and hydrocarbons. An example of this kind of deposit is the uranium mining associated with the Carswell impact structure in Canada. As the world’s second-largest uranium-producing region, this structure has yielded a cumulative production of approximately 1.5 billion pounds of uranium oxide. The progenetic deposit at Carswell was brought to the surface by the central uplift of the impact structure and subsequent erosion.
Syngenetic deposits are economic deposits that originate during the impact event, or immediately afterward, as a direct result of impact processes. These deposits occur as products of shock metamorphism, melting, and post-impact hydrothermal activity, resulting in economic deposits of copper, nickel, diamonds, zinc, lead, uranium, platinum, and gold. Impact diamonds have been observed at the Popigai crater, the world’s largest known diamond deposit, where shock metamorphism caused graphite and coal in the target rocks to transform into diamond. The ore deposits of the Sudbury structure in Canada, one of the world’s largest suppliers of nickel and copper ores, are closely associated with the impact melt sheet, dykes, and hydrothermal system caused by the impact event.
Epigenetic deposits are economic deposits that originate after the impact, as a result of topographic change and structural features that allow for the entrapment of hydrocarbons. Accumulation occurs when oil and natural gas from source shales migrate and are trapped within the impact structure. In North America, approximately 50% of the known impact structures in hydrocarbon-bearing sedimentary basins have commercial oil and/or gas fields. Examples include the Red Wing Creek structure, USA, and the Steen River structure, Canada. Although the Vredefort and Sudbury structures are world-class mining regions, hydrocarbon production, especially from epigenetic deposits, is the most valuable resource found at impact structures. |
A new study, Protecting the global ocean for biodiversity, food and climate, provides a roadmap for ocean conservation actions that should be taken to protect nature and people. Marine Conservation Institute scientists contributed information and were among the twenty-six scientists and economists, led by Dr. Enric Sala of the National Geographic Society, who produced this groundbreaking analysis. For over two decades our institute has worked at the interface of marine science and policy to advocate for protecting special places in the ocean, and our MPAtlas.org tracks the amount of ocean in marine protected areas. While this number has been growing steadily, it is far from sufficient to address the urgency of the nature crisis.
The study’s novel approach to conservation is to analyze the places that, if protected from fishing and harmful activities, would produce multiple benefits to humanity: sustaining fisheries, safeguarding biodiversity, and reducing carbon emissions. This is the most comprehensive study to date and serves to identify priorities for the coming decade as nations increasingly recognize the need to dramatically accelerate the quantity and quality of marine protected areas – at least 30% by 2030.
By mapping these locations globally and aligning protections with strong conservation standards such as those established by the Blue Parks initiative, we can ensure that the right places and the right management are the centerpieces of conservation in the next decade.
Read Sala, E. et al. 2021.
Read the editorial by Nature.
Some of the most compelling results of the report:
Study’s Topline Facts
- Ocean life has been declining worldwide because of overfishing, habitat destruction and climate change. Yet only 7% of the ocean is currently under any kind of protection with only 2.7% in fully or highly protected implemented areas.
- A smart plan of ocean protection will contribute to more abundant seafood and provide a cheap, natural solution to help solve climate change, alongside economic benefits.
- Humanity and the economy would benefit from a healthier ocean. Quicker benefits occur when countries work together to protect at least 30% of the ocean.
- Substantial increases in ocean protection could achieve triple benefits, not only protecting biodiversity, but also boosting fisheries’ productivity and keeping marine carbon stocks locked up in bottom sediments, not stirred up by bottom trawling.
Study’s Topline Findings
- The study is the first to calculate that the practice of bottom trawling the ocean floor is responsible for one gigaton of carbon emissions on average annually. This is equivalent to all emissions from aviation worldwide. It is, furthermore, greater than the annual emissions of all countries except China, the U.S., India, Russia and Japan.
- The study reveals that protecting strategic ocean areas could produce an additional 8 million tons of seafood per year (about 10% more than current global catch).
- The study reveals that protecting more of the ocean–as long as the protected areas are strategically located–would reap significant benefits for climate, food and biodiversity.
Priority Areas for Triple Wins
- Priority conservation areas change depending on the priority that is valued most–biodiversity, climate change or food provision.
- If society were to value marine biodiversity and food provisioning equally, and established marine protected areas based on these two priorities, the best conservation strategy would protect 45% of the ocean, delivering 71% of the possible biodiversity benefits, 92% of the food provisioning benefits and 29% of the carbon benefits.
- If no value were assigned to biodiversity, protecting 29% of the ocean would secure 8.3 million tons of extra seafood and 27% of carbon benefits. It would also still secure 35% of biodiversity benefits.
- Global–and not national–priorities should be the focus.
- Global-scale prioritization helps focus attention and resources on places that yield the largest possible benefits.
- A globally coordinated expansion of marine protected areas (MPAs) could achieve 90% of the maximum possible biodiversity benefit with less than half as much area as a protection strategy based solely on national priorities.
- Exclusive Economic Zones (EEZs) are key.
- Among those unprotected marine areas with the highest potential for a triple win–biodiversity conservation, carbon storage and food provision–most are found in EEZs.
- EEZs are areas of the global ocean within 200 nautical miles off the coast of maritime countries that claim sole rights to the resources found within them.
Priority Areas for Climate
- Eliminating 90% of the present risk of carbon disturbance due to bottom trawling would require protecting 3.6% of the ocean, mostly within EEZs.
- Priority areas for carbon are where important carbon stocks coincide with high anthropogenic threats, including Europe’s Atlantic coastal areas and productive upwelling areas.
- Countries with the highest potential to contribute to climate change mitigation via protection of carbon stocks are those with large EEZs and large industrial bottom trawl fisheries.
Priority Areas for Biodiversity
- Through protection of specific areas, the average protection of endangered species could be increased from 1.5% to 82%, and that of critically endangered species from 1.1% to 87%.
- Other priority areas are around seamount clusters, offshore plateaus and biogeographically unique areas including:
- the Antarctic Peninsula
- the Mid-Atlantic Ridge
- the Mascarene Plateau
- the Nazca Ridge
- the Southwest Indian Ridge
- Despite climate change, about 80% of today’s priority areas for biodiversity will still be essential in 2050. In the future, however, some cooler waters will be more important protection priorities, whereas warmer waters will likely be too stressed by climate change to shelter as much biodiversity as they currently do. Specifically, some temperate regions and parts of the Arctic would rank as higher priorities for biodiversity conservation by 2050, whereas large areas in the high seas between the tropics and areas in the Southern Hemisphere would decrease in priority.
Priority Areas for Food Provision
- If we only cared about increasing the supply of seafood, strategically placed MPAs covering 28% of the ocean could increase food provisioning by 8.3 million metric tons. |
Did you know that the first Earth Day took place in 1970? With the help of California congressman Pete McCloskey and a young activist named Denis Hayes, Wisconsin junior senator Gaylord Nelson organized what was supposed to be a teach-in about environmental protections on several college campuses. Hayes recognized the potential for a nationwide movement, so he and his staff began promoting Earth Day events across the country, which led to what was at the time the largest single-day protest in US history. On April 22, 1970, 20 million Americans took to the streets to peacefully protest the adverse health and environmental impacts caused by 150 years of industrial development.
As a result of the first Earth Day, several critical environmental laws were passed (The National Environmental Education Act, the Occupational Safety and Health Act, and the Clean Air Act), and the United States Environmental Protection Agency was formed. The Earth Day demonstrations also led to Congress passing the Clean Water Act two years later, and eventually, the Endangered Species Act, followed by the Federal Insecticide, Fungicide, and Rodenticide Act.
In 1990, Hayes took Earth Day global, organizing events in 141 countries, which helped lead to the first-ever United Nations Earth Summit, held in Rio de Janeiro in 1992. If recent history teaches us anything, it is that Earth Day continues to have a positive impact internationally. On Earth Day 2016, the historic Paris Agreement, the most significant climate accord in the history of the climate movement, was signed and put into effect.
In honor of Earth Day, we wanted to highlight how Black Star Farms cares for the land. We recognize the importance of conservation and make every attempt possible to ensure all of our operational decisions are environmentally responsible. Without healthy soil and proper viticulture practices, Black Star Farms would not exist as we know it. Our products, services, and experiences are provided by the beautiful land we call home. That said, being stewards of the land to ensure that it remains healthy for future generations is a responsibility we do not take lightly. Below are just a few of the measures we take in our commitment to sustainable practices.
- Ensuring our Farmstead, Cropping, Livestock, and Forestry standards meet or exceed the Michigan Agriculture Environmental Assurance Program (MAEAP) as set forth by our agricultural governing bodies. We were proudly one of the first farms in Michigan to receive all four certifications.
- We are a proud member of the Great Lakes Sustainable Wine Alliance, a unifying group of Michigan wine industry members who have demonstrated their commitment to sustainability practices.
- We make every attempt to reduce our energy footprint by using LED lighting, high-efficiency heating systems, low energy pumps and motors, and photovoltaic electricity production.
- We partner with organizations, such as Tesla, to provide energy-efficient charging systems for low or zero-emission vehicles.
- We support educational institutions, such as Michigan State University, in providing real-world examples and data regarding the implementation of sustainable, environmentally friendly operations.
- We are continuing our search for sustainable and environmentally friendly methods to operate our business.
- We encourage the conversation regarding our environmental impact and the responsibility we have for protecting our current resources, in addition to continuously seeking out further opportunities to invest in sustainable practices in all of our agricultural operations.
If you are looking for ways to contribute to environmental protection efforts, a great place to start would be in your local area. Research your city’s or state’s non-profit conservancy organizations to see how you can give back. |
Desert Problem For Young Earth Creation Science
Young-earth creationists have a problem. According to their creation model, all the fossil-bearing rock layers in the world need to be created during the Flood of Noah. Fossils, in ancient rock layers, imply that death occurred before the Fall of man, which is contrary to their interpretation of Scripture.
The most visible rock layers in the world are those in the Grand Canyon. For many years young-earth creation scientists have invested a lot of time and research into the Grand Canyon. They believe that if they can find a model to explain the canyon rocks, then their followers will probably accept the rest of the earth’s rocks as young.
One of the problems that the young earth model encounters in the Grand Canyon is the Coconino Sandstone. I’ve already discussed this in another article, so let me only summarise here. Geologists have concluded that this 315-foot-thick sandstone formation was created in a desert environment and consists of wind-deposited sand dunes.
The problem for the young earth creationist is that this rock layer is topped by two other fossil-bearing marine rock layers, the Toroweap Limestone and the Kaibab Limestone.
This presents a problem for the young-earth model: if the sandstone originated by wind, then obviously it could not have been produced by Noah’s Flood. The young-earth scientist would have to explain how the water receded, then the sandstone formed, then the water came back and deposited the other layers. However, in the Biblical Flood account, the waters rose, then fell.
There were no cyclic water levels, nor was there a massive amount of time during the flood for a desert environment to create a 315-foot thick rock layer. The desert formation of this sandstone would disprove its formation during the Flood, and would disprove the young age of the earth.
Several young-earth scientists have attempted to explain this away, claiming that this sandstone was created underwater, and thus is not a desert sandstone. I dispute this theory because their model does not have the necessary forces to create the Coconino Sandstone (for more on this, see Coconino Sandstone). However, that is not the purpose of this article.
Other sandstones which are desert in origin will also disprove the young age of the earth. Therefore, the young-earth scientist must discredit every desert sandstone in the world. If one desert sandstone exists with a fossil-bearing ocean-deposited layer on top, it discredits the entire young earth flood model, and proves the old age of the earth.
Let’s look at other desert-origin sandstones. I will continually add to this article as I read through the research and discover other sandstones.
I’ll start with the Navajo Sandstone. This sandstone is most evident in the tall cliffs of Zion Canyon National Park in Utah. The thickness of this formation varies from 1,600 to 2,200 feet. It is evident from the excellent cross-bedding and other features that this formation was created in a desert environment. Below the Navajo there are thousands of feet of rock layers, including the layers of the Grand Canyon. Again, please note that all the layers of the Grand Canyon are below the Navajo.
Looking at the rocks above the Navajo, the problem for the young-earth scientist gets even more complicated. At Arches National Park, there are at least 1,500 feet of rock layers above the Navajo at this location alone. The first is the Entrada Sandstone, which consists of three units: the Moab and Slick Rock members (which are themselves desert dune sandstones), and the Dewey Bridge Member, which is about 200 feet of marine deposits. Above this is the thin Summerville Formation, a siltstone from a lake/lagoon environment. Then comes the most serious problem for the young earth model: the Morrison Formation.
This formation has yielded thousands of dinosaur fossils, supposedly killed during Noah’s Flood. Above the Morrison are the Dakota Sandstone (beach environment) and the Mancos Shale (shallow marine).
In fact, all the dinosaur fossils are far above the Grand Canyon sediments. The young earth model says the Flood killed most of the dinosaurs [1], and according to their model, all the layers of the Grand Canyon were deposited during the Flood [2]. That is over 1 mile of sediment. The first dinosaur fossils appear in the Chinle Formation, which is two formations above the Grand Canyon layers.
How did these dinosaurs survive the deposition phase of the flood, which laid down over 8,000 feet of sediment before we see the first dinosaur fossil? Young earth explanations (see sources below) fail to offer a valid answer to this problem; they make no sense of the solid facts of the rock layers.
Given the young earth model, the flood waters must have created all these layers. However, you can’t have Flood-deposited rocks of the Grand Canyon, topped by a desert sandstone (the Navajo) to the north of the Canyon, and then covered by more sea-deposited layers. None of the layers above the Grand Canyon, including the layers above the Navajo, can be accounted for by the young-earth model.
Evidence From Creation Scientists!
Here is the most amazing evidence for the desert, wind-formed Navajo Sandstone: creation scientists themselves admit it! I don’t know if they are aware of this or not. I’ve reviewed the cornerstone book of young-earth arguments about Noah’s Flood and the Grand Canyon (the review is located at the Answers In Creation website). The book is called Grand Canyon: Monument to Catastrophe, published by the Institute for Creation Research. It was put together by 14 of the pre-eminent young-earth creation scientists in the world.
On page 32 of this book, they are making a case for the Coconino Sandstone of the Grand Canyon. They claim it was deposited not in a dry, desert environment, but in a water environment. Figure 3.10 shows a plot of grain sizes for the Coconino, two modern water environments, and a “Desert Sand Dune.” Through this plot, it is shown that the desert dune plots out to a straight line, whereas the Coconino, and the water environment sands, plot out as jagged, irregular lines. This is used as proof that the Coconino is not a desert sandstone.
The amazing thing is the source of the “Desert Sand Dune” grain size plots. The first paragraph in the right column, first sentence, gives the source as footnote number 44.
If you turn to this footnote, the source of the desert sand grain size plot is “Stratigraphic Analysis of the Navajo Sandstone,” published in the Journal of Sedimentary Petrology! That’s right! These creation scientists are using the desert-created Navajo Sandstone to argue against the Coconino as being desert in origin.
However, the Navajo is overlaid with many fossil-bearing rock layers, including the Morrison Formation, with its thousands of dinosaurs killed during the Flood of Noah. This can’t be! We now have proof, from young-earth creation scientists themselves, that the Navajo Sandstone formed as a dry, desert sandstone, right in the middle of Noah’s Flood! Without meaning to, they have proved the old age of the earth!
The Coconino and Navajo are only two desert-created sandstones. No doubt the desert formations in China and Mongolia would also disprove the young age of the earth. I will post others here as I have time to research them. Unfortunately for the young-earth creationist, it only takes one example of desert sandstone to disprove the young age of the earth. As you can see, the earth is old, just as the geologists have told us, and just as God’s creation testifies.
[1] Oard, Michael, The Extinction of the Dinosaurs. (http://www.answersingenesis.org/home/area/magazines/tj/tj_v11n2.asp)
[2] Austin, Steven (ed.), Grand Canyon: Monument to Catastrophe, Institute for Creation Research, 1995 |
Atomic absorption spectrometry is a sensitive method for elemental analysis; it generally allows the determination of metals at the picogram level. The method has been applied to a wide diversity of samples. It works by measuring the reduction (absorption) of optical radiation as it passes through the atomized sample. A typical modern instrument, such as an Agilent atomic absorption spectrometer, consists of a hollow cathode lamp that emits specific wavelengths of light; these wavelengths are absorbed in the atomic cell, which converts the sample into gaseous atoms. A detection system isolates and quantifies the wavelengths of interest, and a computer system controls the instrument and collects the data during operation.
Absorption spectroscopy is widely used in chemical analysis because of its quantitative nature. Since different compounds absorb at different wavelengths, they can be distinguished from one another, which makes absorption spectroscopy useful in a variety of applications. Infrared gas analyzers, for example, are used to identify the presence of pollutants and to distinguish them from oxygen, nitrogen, and water. Determining the absolute concentration of a compound requires knowledge of the compound’s absorption coefficient, which is generally available from reference sources or can be determined by measuring a sample of known concentration. Common applications of absorption spectroscopy include remote sensing, astronomy, and atomic and molecular physics. Absorption spectroscopy can also be used to test the accuracy of theoretical predictions; for example, the Lamb shift has been measured in the atomic absorption spectrum of hydrogen.
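The quantitative claims above rest on the standard Beer-Lambert law, shown here for reference (general background, not taken from this article):

$$A = \log_{10}\!\left(\frac{I_0}{I}\right) = \varepsilon \, c \, \ell \quad\Longrightarrow\quad c = \frac{A}{\varepsilon \, \ell}$$

where $A$ is the measured absorbance, $I_0$ and $I$ are the incident and transmitted intensities, $\varepsilon$ is the compound's absorption coefficient, $\ell$ is the optical path length, and $c$ is the concentration.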
The two most common atomization methods used in atomic absorption spectrometry are flames and electrothermal atomizers. Flames are controlled through the combustion environment and carry the advantages of ease of use, speed, and various other factors; they also allow a simple interface for speciation work in combination with a chromatographic system. However, flame atomization efficiency is generally only about 5%, and the atoms are dispersed across a huge volume, which makes the sensitivity of flame atomic absorption spectrometry comparatively poor. Electrothermal atomizers, also referred to as graphite furnaces, employ a small graphite tube whose temperature is controlled by a programmable power supply.
Atomic Absorption Spectroscopy
Atomic absorption spectroscopy determines chemical elements through the absorption of optical radiation by free atoms in the gaseous state. It relies on the Beer-Lambert law, which establishes the relationship between known analytical standards and the concentration in the sample. It is a sensitive technique that can measure concentrations at the parts-per-billion level. The technique has certain disadvantages: only solutions can be analyzed, a relatively large amount of sample is required, and refractory elements can cause problems.
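As a minimal sketch of how the Beer-Lambert relationship is applied in practice, the following Python example fits a calibration line through hypothetical standards and inverts it to estimate an unknown concentration. The numbers are invented for illustration, and the code does not represent any particular instrument's software.

```python
import numpy as np

# Hypothetical calibration standards: known analyte concentrations (mg/L)
# and the absorbances measured for them. These numbers are invented.
std_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])           # mg/L
std_abs = np.array([0.002, 0.051, 0.103, 0.198, 0.402])  # absorbance units

# Beer-Lambert predicts absorbance proportional to concentration, so fit
# a straight line A = m*c + b through the standards (least squares).
m, b = np.polyfit(std_conc, std_abs, 1)

def concentration(absorbance: float) -> float:
    """Invert the calibration line to estimate concentration from absorbance."""
    return (absorbance - b) / m

# Estimate the concentration of an unknown sample from its measured absorbance.
print(f"Estimated concentration: {concentration(0.150):.2f} mg/L")
```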
UV-visible absorption spectroscopy uses electromagnetic radiation between 190 nm and 800 nm, a range that is divided into the ultraviolet and the visible regions. The absorption of ultraviolet and visible radiation leads to transitions between the electronic energy levels of the molecule, which is why the technique is also referred to as electronic spectroscopy. Ultraviolet or visible spectra can provide clues to valuable structural information when combined with other sources of spectral data. Here the electric component of the electromagnetic wave is the important one; light traveling through space can be represented by a sinusoidal trace. The absorption originates with the valence electrons, which can be found in three types of orbitals: sigma (single-bond), pi (double-bond), and nonbonding orbitals. The sigma orbitals occupy lower energies than the nonbonding orbitals.
Optical Absorption Spectroscopy
Optical absorption spectroscopy can also be used to determine the mass concentration of an element in a solid or a liquid. The method is based on the atomic absorption principle, in which ground-state electrons are elevated to an excited state by absorbing energy from light of a specific wavelength. The mass concentration can then be quantified from the energy absorbed by the atoms in the light path. |
Twelve's Company, Thirteen's a Crowd
In 1694, a famous discussion between two of the leading scientists of the day - Isaac Newton and David Gregory - took place on the campus of Cambridge University. Their dispute concerned the "kissing problem." But don't get your hopes up. The term kissing in this context has nothing to do with the gesture of affection: here the verb kiss refers to the game of billiards, where it signifies two balls that just touch each other.
Heptagon: No touching and no kissing
Newton and Gregory argued about the number of spheres of the same radius that could be brought into contact with a central ball. On a straight line two balls can kiss a ball in the centre, one on the left and one on the right. On a billiard table, at most six balls can touch a central ball. There's no room for a seventh, as anyone can verify by rolling the balls around a bit. The reason for this is that a heptagon (seven-cornered polygon) whose sides have length 2 (i.e., two balls) is too large to fit tightly around a circle with radius 1. So far everything is clear. But let's move off the green felt and up into the realms of space.
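Before moving up a dimension, a one-line calculation (added here for clarity; it is not in the original text) makes the heptagon claim precise. A regular heptagon with side length $s$ has circumradius $R = s / (2\sin(\pi/7))$, so seven mutually non-overlapping kissing balls would require

$$R = \frac{2}{2\sin(\pi/7)} = \frac{1}{\sin(\pi/7)} \approx 2.305 > 2,$$

yet the center of every kissing ball must lie at distance exactly 2 from the central ball's center. Hence at most six balls can kiss in the plane.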
In the 1950s, H. W. Turnbull, an English school inspector, was doing research on the life of Isaac Newton. Working his way through the numerous papers, letters, and notes, he came across two documents that would provide the basis for the kissing problem: a memorandum of a discussion that the two scientists had at Cambridge, and an unpublished notebook at Christ Church at Oxford in which Gregory had jotted down some notes. The two men had been discussing the distribution of stars of various magnitudes that revolve around a central sun. In the course of their deliberations the question arose whether a sphere can be brought into contact with 13 others of the same size. And that's where opinions diverged.
How many white billiard balls can kiss a black billiard ball in three-dimensional space? Kepler stated in his Six-cornered Snow that twelve spheres could touch a central sphere, and then went on to describe two possible ways to arrange the balls. But maybe 13 spheres can be brought into contact with a central sphere? Initially we may think this clearly impossible, since Kepler's arrangement is completely rigid. All the balls touch the central ball and they also touch each other. So how could a further ball be squeezed in between? The question is not quite trivial because the hexagonal packing - three balls below the central sphere, six around it, and three above - is not the only arrangement of "12 around 1". We already saw that the cubic packing - four balls below the central sphere, four on each side, and four above - is another display of "12 around 1". As Kepler pointed out, however, these two arrangements are identical, and any apparent differences are merely the result of looking at the balls from different perspectives. But maybe there is a truly different arrangement that brings a dozen white balls into contact with the black ball?
The icosahedral arrangement
There is - and not just one. Put one sphere on the bottom, then arrange five balls in a pentagon around the central ball, just below its equator, place another five balls more or less in the interstices of the lower five balls (this puts them slightly above the equator of the central ball), and finally top it off with the twelfth sphere on the pinnacle. So there you have it: another arrangement of a dozen spheres around the central ball. You may note that the spheres sit more or less on the vertices of an icosahedron, which is why this configuration is called the icosahedral arrangement.
If we now take a close look, a very surprising fact emerges: this arrangement is not rigid. Sufficient space is left over in the interstices between the twelve balls, that they can roll around a bit on the surface of the central ball. You may ask yourself, as did Gregory, can the balls be moved in such a way that sufficient space opens up for an additional sphere? Maybe the free spaces can be combined so that a thirteenth ball can be squeezed in? This may seem absurd, but Gregory did have a point and we will give a mathematical demonstration that "13 around 1" is at least conceivable.
Consider a number - as yet unknown - of spheres that kiss a central sphere, all of radius 1. Deposit the whole arrangement into a super-ball with a radius of 3. Imagine a lamp at the centre of the central ball that casts shadows of the surrounding balls onto the inside surface of the super-ball. These circular shadows cannot overlap. It is shown in the Appendix that each shadow has a surface area of 7.6; the total surface of the super-ball is 113.1. So how many shadows can fit onto the super-ball's surface? Divide 113.1 by 7.6 and you get 14.9. The inescapable conclusion is that there is room for nearly 15 balls! Definitely there is sufficient surface, at least theoretically, for 14 balls, and so 13 balls certainly should be considered a possibility.
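The arithmetic of this shadow argument is easy to verify. Here is a minimal Python sketch (my own illustration, assuming unit balls and the 30-degree half-angle whose geometric derivation the text defers to the Appendix):

```python
import math

R_SUPER = 3.0  # radius of the enclosing super-ball

# A unit ball kissing the unit central ball has its centre at distance 2 from
# the lamp, so its shadow subtends a half-angle t with sin(t) = 1/2 (30 deg).
half_angle = math.asin(1.0 / 2.0)

# Area of the spherical cap ("shadow") cast on the super-ball's inner surface.
shadow_area = 2.0 * math.pi * R_SUPER**2 * (1.0 - math.cos(half_angle))
total_area = 4.0 * math.pi * R_SUPER**2  # full surface area of the super-ball

print(f"shadow area ~ {shadow_area:.1f}")               # ~ 7.6
print(f"total area  ~ {total_area:.1f}")                # ~ 113.1
print(f"upper bound ~ {total_area / shadow_area:.1f}")  # ~ 14.9
```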
Let us get back to the dramatis personae. Isaac Newton was one of the foremost scientists of all time. He was born on January 4, 1643, in Woolsthorpe, in Lincolnshire, a tiny, weak baby that was not expected to survive for even a week. However, Isaac proved doctors and midwives wrong and lived to the age of 84. He never knew his father, an uneducated, illiterate man who died three months before his birth. After the death of her husband, Isaac's distraught mother Hannah, née Ayscough, was badly in need of the church's compassion, and the minister of the nearby village was only too happy to oblige. The reverend took his comforting duties very seriously, and after a proper period of mourning the two got married. With this, little Isaac became superfluous and was shipped off to his grandparents, who weren't too pleased with the sudden appearance of a two-year-old boy. Isaac did not have a happy childhood with them. There was no love lost between him and his grandfather, James Ayscough. The old man even excluded him from his will. Isaac was furious. About his mother and stepfather he fantasized that he would "burn [them] and the house over them."
Newton's aim at college was to get - what else - a law degree. At Cambridge that included studying the antiquated texts of Aristotle, which still put the Earth at the centre of the universe and described nature in qualitative instead of quantitative terms. But revolutionary ideas can't be repressed, they just float around at centres of learning, and sometime during his undergraduate days Newton discovered René Descartes' natural philosophy. Descartes viewed the world around him as particles of matter and explained natural phenomena through their motion and mechanical interactions. Through Descartes' writings Newton was led to study - without anyone's knowledge - the important mathematical texts of the time. Then his genius began to emerge. All by himself he created revolutionary advances in mathematics, optics, and astronomy. Single-handedly he invented a new mathematical method to describe motion and forces, which he called the "method of fluxions," eventually known as calculus. To his greatest regret later in life, he never published an account of the method that allowed the computation of areas, lengths of curves, tangents, and maxima and minima of functions. He probably thought that the new technique was too radical a departure from traditional mathematics, and that discussions about it would detract the readers of his astronomy texts from the main results. Therefore he may have deliberately covered his tracks by keeping the invention of differential calculus secret. The treatise in which he eventually described the new method, De methodis Serierum et Fluxionum (On the methods of series and fluxions), written in 1671, would only be published 65 years later, ten years after his death.
At age 53, Newton decided on a career change: he became Warden of the Royal Mint, and three years later, until his death, he was its Master. Since he received a commission on all the coins that were struck, he managed to do quite well for himself. He also pursued counterfeiters relentlessly and with ferocity.
In 1703 Newton was elected president of the Royal Society of London. He was re-elected every year until his death 24 years later. Apart from dozing off during the meetings towards the end of his life, he used his presidency quite to his advantage, as we shall see. Sir Isaac devoted the last 25 years of his life to a priority dispute with Gottfried Wilhelm von Leibniz over the discovery of calculus - Newton's method of fluxions. The stakes were high: calculus changed mathematics in a fundamental way, and its inventor would forever be remembered for this feat. As President of the Royal Society he appointed an "unprejudiced committee" to decide who had been the first to invent calculus. Of course the unprejudiced committee members knew exactly what was expected of them and didn't even dream of letting Leibniz off the hook. The gentleman from Germany was never even asked for his version of the events. Just to be on the safe side, Newton secretly wrote the committee's final report himself. And to top it off, he also wrote a very favourable review of the report - again anonymously - for the Transactions of the Royal Society.
Nowadays it is generally accepted that Newton deserves priority, having invented calculus in 1665 and 1666. Leibniz apparently re-invented the method, which he called the "method of differences," ten years later. Isaac Newton died on March 31, 1727, in London.
The other participant in the debate on the kissing number was David Gregory, the nephew of the even more famous scientist James Gregory. Newton's junior by 16 years, Gregory was born in 1659 in Aberdeen, Scotland, to a very fertile family: he was one of his father's twenty-nine children (by two wives). A precocious child, he started his university education at the tender age of twelve. But the fast start did not last. It ended as quickly as it had begun, and Gregory concluded his days as a student without a degree. This did not stop the University of Edinburgh from appointing him, at age 24, professor of mathematics. Gregory was an early supporter of Newton's. In fact, he was the first university lecturer to teach Newton's cutting-edge theories, which no other university had yet adopted. In 1690, when unrest set in in Scotland, he left for Oxford. Thankful for Gregory's endorsement of his theory, Newton arranged for his appointment as Savilian Professor of Astronomy.
The discussion on kissing numbers began when the 51-year-old Newton was between jobs: he already had one foot in the Mint in London, but had not yet removed the other foot from the ivory tower of Trinity College. On May 4, 1694, Gregory paid him a visit. His stay at Cambridge lasted several days, during which the two men talked nonstop about scientific matters. It was a rather one-sided conversation, with Gregory, the dutiful disciple, making notes of everything the great master uttered. He had to hurry because Newton freely related his thoughts, jumping from one subject to another. From editorial corrections to the Principia, he went to the curvature of geometric objects, the "smoke" issuing from a comet, speeds of different colors of light, conic sections, the interaction between Saturn and Jupiter, etc. All this happened at lightning speed, with Gregory attempting to take it all down. When he was not quite able to follow, Newton just took the pad from his friend's hands and scribbled his own remarks into the notebook.
One of the points discussed, number 13 in Gregory's memorandum of that day, was how many planets revolve around the sun. The discussion then went off on a tangent, to the question of how many spheres of equal size could rotate around a central ball of the same size. It deviated again to the distribution of stars of various magnitudes around a central sun. Finally Gregory asked the question: can a rigid material sphere be brought into contact with 13 other spheres of equal size?
In one of the notebooks found by H. W. Turnbull there is a passage in which Gregory discusses the packing of circles that are placed in concentric rings around a central circle, i.e., the two-dimensional problem. He correctly pointed out that six circles can surround a central circle in the innermost ring. This is the problem that was proved by Fejes Tóth in 1940. He also remarked that the next rings contain 12 and 18 circles. Gregory then goes on to discuss the question for three dimensions. How many spheres can be placed in concentric layers so that they all touch the central ball? It is here that he made a claim that sparked the debate and the 250-year search for a final answer. Gregory stated - without further ado - that in three-dimensional space the first layer surrounding a central ball contains thirteen spheres.
Newton, on the other hand, had written in A table of ye fixed Starrs for ye yeare 1671 that there exist thirteen stars of the first magnitude, and in Gregory's report on the discussion of May 4, 1694, we read that in order "to discover how many stars there are of first, second, third etc. magnitude, [Newton] considers how many spheres, nearest, second from them, third etc. surround a sphere in space of three dimensions: there will be 13 of the first magnitude." Of course, Newton meant a total of 13 spheres, including the central one.
So Gregory and Newton did not agree on the number of spheres that can kiss a central ball. If you were to place a bet, whose side would you take? Do you believe in "12 around 1" or in "13 around 1"? Newton would be a safe wager. While Gregory is considered a fair but by no means outstanding mathematician, Newton was correct in most things he ever said or did. But he was not infallible. And - in Gregory's favor - there is enough space for nearly 15 balls, as we saw above.
It turns out that Newton was right: "13 around 1" is impossible. That's why the highest number of balls that can touch a central ball is nowadays often called the Newton number. But during their lifetimes the two men were never to know the correct answer, nor after whom kissing numbers would be named. And anyway, being correct is only half the fun in mathematics. The other half is finding the proof, and headway on that track - though no definite progress - was only made 180 years after Newton and Gregory formulated their controversy. The final proof had to wait an additional 80 years and was only formulated in 1953.
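A quick numerical check makes "12 around 1" concrete. The sketch below (a minimal illustration in Python, not part of the historical record) places twelve unit spheres at the points of the face-centred-cubic kissing configuration and verifies that each touches the central ball while no two surrounding spheres overlap:

```python
import itertools, math

# Centres of 12 unit spheres around a unit sphere at the origin:
# (+/-sqrt(2), +/-sqrt(2), 0) and its coordinate permutations, the
# vertices of a cuboctahedron at distance 2 from the origin.
s = math.sqrt(2)
centres = set()
for x, y in itertools.product((s, -s), repeat=2):
    centres.update({(x, y, 0), (x, 0, y), (0, x, y)})
centres = sorted(centres)
assert len(centres) == 12

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Each surrounding sphere kisses the central one: centre distance = 2.
assert all(abs(dist(c, (0, 0, 0)) - 2) < 1e-12 for c in centres)

# No two surrounding spheres overlap: pairwise centre distance >= 2.
min_gap = min(dist(p, q) for p, q in itertools.combinations(centres, 2))
print(f"minimum pairwise distance: {min_gap:.6f}")  # exactly 2 here
assert min_gap >= 2 - 1e-12
```

Verifying one good arrangement is easy; the hard part, and the substance of the 1953 proof, is showing that no arrangement at all can accommodate a thirteenth sphere.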
About this article
This article is an abridged chapter from George Szpiro's forthcoming book "Kepler's conjecture: How Some of the Greatest Minds in History Helped Solve One of the Oldest Math Problems in the World" (to be reviewed in Issue 25 of Plus). The book will be published by John Wiley & Sons in February 2003.
About 100 people have a rare mutation in a gene called SNCA that puts them at almost certain risk of getting Parkinson’s disease. This makes them ideal subjects for studying the root causes of this debilitating condition. Most of these people live in the northern Peloponnese in Greece, and a handful live in Campania, Italy. We were lucky enough to have 14 of these people agree to travel to London so we could study their brains.
More than 6 million people globally have Parkinson's disease; it is the second most common neurodegenerative disorder after Alzheimer's disease. The symptoms, which worsen over time, include motor symptoms such as stiffness, slowness and shaking, as well as non-motor symptoms, such as memory problems. Researchers have been trying to find a reliable marker for the disease so that people at risk can be identified before the motor symptoms start.
There are no cures for Parkinson's disease, but symptoms are treated with drugs that restore a brain chemical called dopamine to normal levels. Dopamine has long been considered a prime culprit in Parkinson's disease, as low levels cause problems with movement. But another brain chemical called serotonin has also been implicated in the disease, and we didn't know how early and to what extent changes in serotonin occur, or whether these changes are related to disease onset. To help answer this, we needed to study those Greek and Italian subjects with the SNCA gene mutation.
Studying these gene carriers before they develop Parkinson’s disease is a unique opportunity to understand what comes first in the cascade of events that eventually leads to a diagnosis of Parkinson’s disease. This knowledge is critical so that we can develop sensitive markers to track the progression of the disease.
People with the mutation tend to display symptoms of Parkinson’s disease in their 40s or 50s, so we wanted to study subjects in their 20s and 30s to see if there were any brain changes a decade or more before symptoms started.
Seven of our volunteers, who kindly visited our lab for ten days of brain imaging and neurological tests, had no motor symptoms and seven had been diagnosed with Parkinson’s disease. We also examined 25 patients with sporadic Parkinson’s disease (Parkinson’s disease without a genetic cause) and 25 healthy volunteers.
All participants had three brain scans: one to measure dopamine, one to measure serotonin, and another to study anatomical regions of the brain.
We also carried out a series of clinical tests to investigate motor and non-motor symptoms. The volunteers wore an electronic device on their wrist for seven days to pick up any movements associated with Parkinson’s disease – movement that might be too subtle to be detected by a neurologist with the naked eye. These tests confirmed that the seven subjects with the gene mutation who had no motor symptoms were, indeed, Parkinson’s free.
Early serotonin loss
Comparing data from the different groups allowed us to measure the severity of dopamine and serotonin loss at different stages of the disease, from people without symptoms to people with a diagnosis. It also allowed us to compare changes seen in the gene carriers with changes seen in those with sporadic Parkinson’s disease. This helped us translate our findings in the gene carriers into the more common sporadic form of Parkinson’s disease.
We discovered that gene carriers without symptoms had depleted serotonin, while their dopamine neurons appeared to remain intact. So the changes in the serotonin system that we identified are likely to start very early and precede the onset of motor symptoms by some years.
Our study, published in Lancet Neurology, suggests that changes to the serotonin system come first, occurring many years before patients show symptoms. This important finding could lead to the development of new drugs to slow or even stop disease progression.
Our findings also suggest that brain scans of the serotonin system could be used as a tool for screening and monitoring disease progression. But these scans are expensive, so we need more work to develop affordable technology. We also need more research into genetic forms of Parkinson's which could further unlock the earliest changes underlying this awful disease.
SCHIZOPHRENIA AND EXERCISE
Schizophrenia is a complex brain disorder estimated to affect approximately 1% of the population. Onset usually happens in adolescence or young adulthood. While the cause isn’t fully understood, it seems to involve an interplay between genetic and environmental factors.
People with schizophrenia experience a range of symptoms that impact their physical, mental and social function. These can be divided into:
- positive symptoms – that is, symptoms that are added to usual everyday experience, such as hallucinations and delusions.
- negative symptoms – those that take away from usual experience, such as social withdrawal and reduced motivation.
- cognitive symptoms – those that affect cognitive functions, such as issues with attention and working memory.
There is currently no cure for schizophrenia, but positive symptoms are generally managed with medication. The negative and cognitive symptoms of schizophrenia are much harder to treat. Mounting evidence also shows that physical activity plays a key role in managing all three types of symptoms and helps to reduce the gap in health and life expectancy experienced by people with schizophrenia.
WHY IS EXERCISE IMPORTANT FOR PEOPLE WITH SCHIZOPHRENIA?
People with schizophrenia are at significantly higher risk for developing other health conditions. For example, a study including over 1,800 Australians living with psychosis found that three quarters of participants were overweight or obese, and more than half had metabolic syndrome (a cluster of risk factors that increase your likelihood of developing chronic disease such as diabetes). Some of the key reasons for this elevated risk include:
- the side effect profile of antipsychotic medications, which tend to cause significant weight gain (referred to as antipsychotic-induced weight gain)
- the negative symptoms of schizophrenia, which make it more difficult for those living with the disorder to stay motivated to engage in activities that promote good wellbeing, such as exercising and cooking healthy meals
- the cognitive symptoms of schizophrenia, which make it more difficult for those living with the disorder to sustain the organisation and planning required to engage in activities that promote good health and wellbeing. For example, cooking a healthy meal requires you to have budgeted the time and money for grocery shopping beforehand.
As a result, people with schizophrenia have a life expectancy 10-20 years less than people in the general population, much of which is related to preventable conditions such as cardiovascular disease and diabetes.
Physical activity plays a vital part in improving or maintaining cardiometabolic health and mental health, making it especially important for people with a condition such as schizophrenia.
HOW EXERCISE CAN HELP
An increasing number of studies are showing that regular exercise can enhance physical and psychological wellbeing in people with schizophrenia. For example, a review of 20 studies published in 2015 found that physical activity interventions led to improved physical fitness and a reduction in positive and negative schizophrenia symptoms. The greatest benefit was seen in people who engaged in 90 minutes or more of moderate to vigorous physical activity per week. This is important, because improved physical fitness can reduce your risk of developing cardiometabolic disease, and people living with schizophrenia tend to have a significantly lower baseline level of fitness.
A 2017 study showed exercise could improve cognitive function in people with schizophrenia, with greater amounts of exercise linked to larger improvements.
WHAT TYPE OF EXERCISE IS BEST?
It is important to remember that any physical activity is better than none, and the type of activity you choose is less important than doing something. It’s best to find a type of exercise you’ll enjoy, so that you can keep exercising for the long term.
People living with schizophrenia should aim to meet the minimum recommendations for physical activity, which advise that on a weekly basis, Australians aged 18-64 years should accumulate:
- 150 to 300 minutes (2 ½ to 5 hours) of moderate intensity physical activity, or
- 75 to 150 minutes (1 ¼ to 2 ½ hours) of vigorous intensity physical activity, or
- an equivalent combination of both types of activity.
You should also aim to do activities that strengthen muscles (such as resistance training with bands, dumbbells, or body weight) at least two days per week.
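To see how the "equivalent combination" in the guidelines above works in practice, here is a minimal sketch that applies the common convention of counting one vigorous minute as two moderate minutes (that conversion factor is an assumption of this illustration, not a quotation from the guidelines) and checks a weekly log against the 150-minute floor:

```python
def meets_aerobic_guideline(moderate_min: float, vigorous_min: float) -> bool:
    """Check the weekly aerobic floor for adults aged 18-64.

    150-300 moderate minutes, 75-150 vigorous minutes, or an equivalent
    mix, treating 1 vigorous minute as 2 moderate minutes (assumed
    conversion). Only the lower bound is checked here.
    """
    moderate_equivalent = moderate_min + 2 * vigorous_min
    return moderate_equivalent >= 150

# Example week: three 20-minute brisk walks plus two 25-minute jogs.
print(meets_aerobic_guideline(60, 50))  # 60 + 2*50 = 160 -> True
print(meets_aerobic_guideline(90, 20))  # 90 + 2*20 = 130 -> False
```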
Also, you don’t need to become a fitness junkie to reap the benefits of physical activity. Look for small opportunities to be more active, such as taking the stairs instead of the lift and walking rather than driving for short distances.
Aim to be active on most days and reduce the amount of time you spend sitting down.
GUIDANCE FROM AN ACCREDITED EXERCISE PHYSIOLOGIST
People living with schizophrenia often have complex health needs. It’s wise to get some guidance from an Accredited Exercise Physiologist (AEP) before starting an exercise program.
As there is currently no cure for schizophrenia, and exercise is good for you anyway, it’s important to find an exercise routine you’re likely to stick with. Exercise needs to become part of a long-term lifestyle that helps you maintain optimal physical and mental wellbeing and quality of life.
Importantly, research has shown that people with schizophrenia achieved greater fitness improvements when their exercise program was supervised by a health professional such as an AEP. Another study showed dropout rates were lower when physical activity programs were designed and supervised by qualified professionals.
An Accredited Exercise Physiologist can tailor a program suited to your needs and goals, considering factors such as your health status, living and working situation, medications, and exercise preferences. As your fitness improves, they can update your program accordingly. If needed, they can also train your support people to ensure exercise becomes a regular part of a healthy lifestyle that helps you enjoy better physical and mental wellbeing.
Written by Amanda Semaan and Kara Foscholo. Amanda and Kara are Accredited Exercise Physiologists and Co-Directors of Active Ability.
What is hypertension?
Hypertension refers to high blood pressure. According to the Heart Foundation, approximately 4 in 10 adults have hypertension, which is a cause for concern.
What is blood pressure?
We often hear this term but may not fully comprehend what it means.
The heart pumps blood throughout the body through blood vessels known as arteries. The force with which the blood is pumped is known as blood pressure. When the force is higher than normal, it is termed high blood pressure. The higher the blood pressure, the greater the risk of heart disease. The risk is equally high for men and women, young or old.
A blood pressure of 120/80 mm Hg or lower is normal.
How is blood pressure measured?
Blood pressure is measured in two ways. The first measurement is known as systolic blood pressure, which measures the pressure in your blood vessels when your heart beats. The second is known as diastolic blood pressure, which measures the pressure in your blood vessels when your heart rests between beats.
What are the risk factors?
There are several risk factors for hypertension, such as age and genetics, but these are factors that we unfortunately do not have control over. However, there are factors that we do have control over, such as weight, salt and alcohol intake, and abdominal obesity, which is characterised by weight concentrated around the abdomen. Abdominal obesity is of major concern because the fat tissue around the abdomen is metabolically active and can release free fatty acids into the circulation that accumulate around the heart, affecting its ability to function effectively.
Excess salt consumption also causes an increase in blood pressure. This is because when you consume a lot of salt the body starts to retain water. This puts pressure on the heart to pump blood around the body and causes high blood pressure.
Lifestyle management tips to help with hypertension:
1. The DASH (Dietary Approaches to Stop Hypertension) diet is the recommended diet for reducing blood pressure and weight. The diet consists of fruits, vegetables, wholegrains, nuts and low-fat dairy products.
2. Reduce dietary sodium to 2300mg or less a day (which equates to less than 6g of salt per day). This can reduce systolic blood pressure by 2-7mm Hg.
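The sodium-to-salt arithmetic in point 2 is easy to verify: salt (sodium chloride) is roughly 40% sodium by weight, so the salt equivalent in grams is about the sodium in milligrams times 2.5, divided by 1000. A small sketch:

```python
SALT_PER_SODIUM = 2.5  # salt (NaCl) is roughly 40% sodium by weight

def sodium_mg_to_salt_g(sodium_mg: float) -> float:
    """Convert a sodium amount in mg to its approximate salt equivalent in g."""
    return sodium_mg * SALT_PER_SODIUM / 1000

print(sodium_mg_to_salt_g(2300))  # 5.75 g -- just under the 6 g/day target
print(sodium_mg_to_salt_g(120))   # 0.3 g  -- the low-salt label threshold
```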
Listed below are some ways you can reduce salt from your diet:
- Avoid adding salt to meals and swap sauces that have a high salt content (i.e. soy, oyster and fish sauce) with herbs and spices, lemon or lime juice, black pepper, fresh ginger, garlic and chilli to add flavour to your food.
- Swap processed meats like salami, ham, bacon, and sausages with fresh meat, fish or eggs.
- Choose low salt cheeses such as ricotta or quark cheese.
- Choose snacks with less salt such as plain rice wafers, corn thins and unsalted popcorn.
- Read labels on packaged foods to check the salt content. Packaged foods with less than 120mg of sodium per 100g are ideal.
Contact us for results-focused nutritional advice
This article was written by our dietitian and nutritionist Juhi Bhambhaney. If you have any questions regarding health and nutrition, make an appointment with one of our dietitians. We'll provide you with a simple and effective routine targeted to your concerns. Contact us today.
Flash flooding is the most hazardous weather disaster in the United States. Floods cause power outages, damage infrastructure, trigger landslides, and can be deadly.
Heavy rainfall in a short period of time causes water to rise rapidly, elevating the risk of flooding. Flash floods occur with little warning, but flooding can also develop slowly after rain ceases.
Though most people associate hurricanes with wind damage, flooding poses one of the biggest threats from the storms. Hurricane Harvey in 2017 dropped 60 inches of rain in some parts of Texas, creating massive flooding hazards. In 2005, flooding from Hurricane Katrina caused a majority of the damage when old levees failed during the storm.
Before a flood
One of the best ways to protect yourself and your property is to prepare ahead of time. This includes:
- Avoid building in a floodplain—an area especially prone to flooding during heavy rains.
- If you do live in a floodplain, consider buying flood insurance to help with losses if, and when, a flood occurs.
- Construct barriers (levees, beams, floodwalls) to stop floodwater from entering your home. Sandbags can provide a temporary levee in an emergency.
- Seal walls in basements with waterproofing compounds to avoid seepage.
- Pay attention to weather forecasts. When heavy rain or storms are forecasted, listen to the radio or television for information on flooding risk.
- What’s worse—a flood watch or warning? A watch means flooding is possible. A warning means flooding is occurring or will occur soon.
When a flood is imminent
- Have an emergency plan and practice survival skills, like first aid and how to disinfect water.
- Be prepared! Assemble an emergency kit in case you need to evacuate. Don't forget to include necessary prescription medications and a small first aid kit.
- Charge cell phones and any rechargeable batteries for flashlights. Buy extra batteries in case power isn't restored immediately.
- Heed evacuation warnings. If there is any possibility of a flash flood, move immediately to higher ground. Follow appropriate evacuation signs.
- If possible, bring in outdoor furniture and move important items to an upper floor, above possible flood levels.
- Turn off utilities at the main switches or valves if instructed. Disconnect electrical appliances.
During a flood
- Avoid low spots, like ditches, basements, or underpasses. These become extremely dangerous during a flash flood.
- Do not walk through flooded areas. It can be difficult to tell how deep the water is and what lies underneath the water that could hurt you. Even shallow, moving water can make you fall.
- If you have to walk in water, wherever possible, walk where the water is not moving. Use a stick to check the firmness of the ground in front of you.
- Do not drive into flooded areas. Remember: “Turn around, don’t drown.” If floodwaters rise around your car, abandon the car and move to higher ground—only if you can do so safely.
- Do not touch electrical equipment if you are wet or standing in water.
After a flood
- Return home only when authorities say it is safe.
- Listen for news reports to learn whether the water supply is safe to drink and where emergency shelters are located.
- Avoid floodwaters; water may be contaminated by oil, gas, or raw sewage. Water may also be electrically charged from underground or downed power lines.
- Still avoid moving water—the danger decreases only when water levels drop.
- Be aware of areas where floodwaters have receded. Roads may have weakened and could collapse under the weight of a car.
- Stay away from downed power lines and report them to the power company.
- Stay out of any building if it is surrounded by floodwaters.
- Service damaged septic tanks, cesspools, pits, and leaching systems as soon as possible. Damaged sewage systems are serious health hazards.
- Only pump water out of a flooded building when water has receded outside.
- Clean and disinfect everything that was stuck in flooded waters. Mud left from floodwater can contain sewage and chemicals.
- Be wary of lingering water inside buildings after a flood. A dehumidifier will help remove excess water and minimize mold damage.
Be prepared, and stay safe.
Hinduism is the world’s third largest religion, after Christianity and Islam. About 80 percent of India’s population regard themselves as Hindus and 30 million more Hindus live outside of India. Scholars describe modern Hinduism as the product of religious development that spans nearly five thousand years, making it the oldest surviving world religion. Buddhism and Jainism emerged from Hinduism. Outside India, South Africa has the largest group of Indians, most of whom practise Hinduism.
Hindus refer to their religion as the 'eternal religion'. It is a complex set of beliefs, values and customs – a way of life and the fulfilment of duties (dharma).
Although Hindus believe in one God, it differs from Christianity and other monotheistic religions in that it does not have:
- A single founder
- A single theological system
- A single concept of deity
- A single holy text, although the Vedas are considered to be the authoritative texts
- A single system of morality
- A central religious authority
Hindu philosophical systems are based on the sages’ and saints’ direct experiences of God. The Vedas, which are the primary written texts, are believed to have been received by the ancient sages in direct communion with the Divine. These saints are called Gurus. The most famous Guru in South Africa was Mahatma Gandhi. He was assassinated in 1948.
Hinduism is a diverse system of thought with beliefs spanning monism, monotheism, polytheism, and atheism, among others. Its concept of God is complex and allows freedom to each individual. Hindus use many forms, faces and symbols to explore the depth of God’s oneness. The most comprehensive name of God is AUM or OM.
Hindus believe that no religion teaches the only way to salvation, but that all genuine paths are facets of God's light, deserving respect and understanding. Hinduism conceives of the whole world as a single family, accepts all forms of belief, and dismisses labels of distinct religions which would imply a division of identity. Hindus believe that all life is sacred, to be loved and revered, and therefore practise non-injury in thought, word and deed.
Practices, philosophies and rituals:
Hindu practices generally involve seeking awareness of God; therefore, Hinduism has developed numerous practices meant to help one think of divinity in the midst of everyday life. The vast majority of Hindus engage in religious rituals on a daily basis. Hindus can engage in worship either at home or at a temple. Temples are usually dedicated to a primary deity along with associated subordinate deities. Hindus perform their worship through icons. The icon serves as a tangible link between the worshiper and God. The image is often considered a manifestation of God, since God is immanent.
Among other practices and philosophies, Hinduism includes a wide spectrum of laws and prescriptions of daily morality. These include the following:
• Righteousness, ethics and truth – the universal principle of law, order and harmony. Truth is a major tenet of Hinduism.
• Virtuous pursuit of wealth for livelihood, obligations and economic prosperity.
• Sensual pleasure which refers to desire, wish, passion, longing, pleasure of the senses, enjoyment of life, affection, or love.
• The realization of one’s eternal relationship with God
• Perfect unselfishness and knowledge of the self.
• The attainment of perfect mental peace and wisdom
• The detachment from worldly desires
Mantras are invocations, praise and prayers that through their meaning, sound, and chanting style help a devotee focus the mind on holy thoughts or express devotion to God/the deities.
Yoga is a Hindu discipline which trains the consciousness for tranquility, health and spiritual insight through practices such as love and devotion, right action, and meditation.
Life-cycle rituals such as birth, marriage, and death involve elaborate religious customs.
The festival of lights – Diwali – is the most significant of the Hindu holy days. It is a time of reflection – to think about oneself and others, and to forgive the wrongdoings of other people. It is a time to 'light up oneself'. The festival commemorates the triumph of good over evil.
Siva Aalayam Temple – Address: 41 Ruth Road, Rylands
ISKCON (International Society for Krishna Consciousness) Hare Krishna Temple – Address: Cnr St Andrews & Teddington Roads, Rondebosch
Are you looking for strategies to improve students' reading comprehension skills? If so, keep reading.
1. Get the student to look for the keywords and main ideas when reading.
2. After reading a selection, have the student verbally summarize what they have read.
3. Get the student to read high interest signs, advertisements, notices, etc., from newspapers, magazines, movie promotions, etc., placing emphasis on comprehension skills.
4. Teach the student to find the main points in content to aid their comprehension.
5. Refrain from placing the student in awkward reading situations (e.g., reading aloud in a group, reading with time limits, etc.).
6. Get the student to read independently each day to practice reading skills.
7. Spotlight essential information the student should pay close attention to when reading.
8. Minimize the amount of information on a page if it is visually distracting for the student.
9. Make sure the student learns the meanings of all commonly used prefixes and suffixes.
10. Make sure the student learns dictionary skills to find the meanings of words.
11. Cut out images from magazines and newspapers and have the student match captions to them. This learning experience could be varied by having one student write the caption while another student determines if it is appropriate.
12. Minimize the amount of content the student reads at one time (e.g., reduce the reading content to individual sentences or a single paragraph). As the student shows success, slowly increase the amount of content to be read at one time.
13. Get the student to supply missing words in sentences given by classmates and/or the teacher to build comprehension skills.
14. Get the student to list new or complicated words in categories such as people, food, animals, things that are hot, etc.
15. Teach the student the meanings of abbreviations to assist in comprehending content read.
16. Include frequent written tasks on topics that are of interest to the student to reinforce the connection between writing and reading ability.
17. Consider using AI to teach reading comprehension.
18. Consider using Alexa to teach reading skills.
19. Try using one of our many apps designed to teach literacy skills and help students with reading issues.
The environmental REsistome: confluence of Human and Animal Biota in antibiotic resistance spread (REHAB)
OVERALL STUDY AIM
We do not fully understand how important types (species) of bacteria and packages of genetic material (genes) coding for antibiotic resistance move between humans, animals and the environment, or where, how and why antibiotic resistance emerges. This study aims to look in detail, at a genetic level, at bacteria in farm animals, human/animal sewage, sewage treatment works and rivers, to work out the complex network of transmission of important antibiotic-resistant bacteria and antibiotic resistance genes. We will use this information to work out how best to slow down the spread of antibiotic resistance between humans, livestock and the environment.
STUDY BACKGROUND AND AIMS IN MORE DETAIL
Infections are one of the most common challenges in human and animal medicine, and are caused by a range of different micro-organisms, including viruses and bacteria. Amongst bacteria, there are some species, or types, of bacteria, which can live harmlessly in human and animal intestines, sewage, and rivers, but can also cause disease in humans and animals if they get into the wrong body space, such as the bloodstream or urine. Examples of these bacteria include E. coli, and other similar organisms, which belong to a family of bacteria called “Enterobacteriaceae”.
It has generally been possible to treat infections caused by bacteria using several classes of medicines, known as antibiotics. Different antibiotics kill bacteria in different ways: for example, they can switch off critical chemical processes that the bacteria need to survive, or they can break down the outer shell of the bacteria. In response to the widespread use of antibiotics, bacteria have changed over time, finding ways to alter their structure so that antibiotics no longer have a target to act on, or by producing substances that break down the antibiotic before it has a chance to act i.e. they develop antibiotic resistance. This adaptation is caused by changes in the bacterial genetic code. Bacteria can also rapidly acquire “packages” of antibiotic resistance genes from other surrounding bacteria. This is known as horizontal gene transfer. Through these mechanisms, members of the Enterobacteriaceae family of bacteria have developed antibiotic resistance to a number of different antibiotics over a short period of time. In some cases we are no longer able to treat these infections with the antibiotics we have available.
Studying antibiotic resistance and horizontal gene transfer in bacteria found in humans, animals and the environment is difficult because we cannot directly see how bacteria change their genetic code and acquire parcels of resistance genes through horizontal gene transfer in the environment. However, new “Next Generation Sequencing” (NGS) technologies allow scientists to look in great detail at the genetic code of large numbers of bacteria. Comparing this information for bacterial species which have been living in different parts of the environment (e.g. human/animal sewage, sewage treatment works, rivers) allows us to see how bacteria have evolved to become resistant to antibiotics, and how resistance genes have been shared between them.
This study will use NGS technologies to look at the genetic code of large numbers of Enterobacteriaceae bacteria found in humans, animals (pigs, sheep and poultry), sewage (pre-, during and post-treatment), and rivers. These different groups/areas will be sampled in different seasons over a year to determine how antibiotic resistance genes move around between these locations and over time, and what factors might influence this movement. We will also be investigating whether various chemicals and nutrients in the water may be affecting how quickly horizontal gene transfer occurs. Understanding this is essential to work out how we might intervene more effectively to slow the spread of antibiotic resistance genes and bacteria, and keep our antibiotic medicines useful.
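As a purely hypothetical illustration of how gene sharing between niches might be summarized once such sequencing data are in hand (this is not a method stated by the REHAB study), one could score the overlap between the sets of resistance genes detected in each location with a Jaccard index:

```python
# Hypothetical resistance-gene detections per niche (illustration only;
# the gene names are real resistance genes, but the assignments are invented).
detections = {
    "human_sewage": {"blaCTX-M-15", "qnrS1", "tetA", "sul1"},
    "pig_farm":     {"blaCTX-M-15", "tetA", "ermB"},
    "river":        {"qnrS1", "sul1", "tetA"},
}

def jaccard(a: set, b: set) -> float:
    """Fraction of genes shared between two niches (0 = none, 1 = all)."""
    return len(a & b) / len(a | b)

niches = sorted(detections)
for i, a in enumerate(niches):
    for b in niches[i + 1:]:
        print(f"{a} vs {b}: {jaccard(detections[a], detections[b]):.2f}")
```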
The electromagnet kit helps you to see the link between electricity and magnetism, and what happens when you combine the two. The kit comes with a small copper wire coil, an iron core, AA battery holder (battery not included), two compasses and a pair of paperclips. It also includes a detailed set of instructions to help guide you through experiments about electromagnetic fields and polarity.
An electromagnet is a type of magnet in which the magnetic field is generated by an electric current. The field disappears when the current stops flowing through the wire. Electromagnets typically consist of a closely spaced coil of wire wound around a core, usually iron. A magnetic core concentrates the magnetic flux, making for a more powerful magnet.
The biggest advantage an electromagnet has over a permanent magnet is the ability to change the magnetic field by controlling the amount of electric current passing through the winding. The downside, however, is that an electromagnet requires a continuous supply of current to maintain its magnetic field.
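That controllability can be made quantitative with the textbook approximation for a long solenoid, B = μ0·μr·n·I, where n is turns per metre, I is the current, and μr is the relative permeability of the core (1 for air; values for iron vary widely, so the figure below is an assumption chosen purely for illustration):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_field(turns: int, length_m: float, current_a: float,
                   mu_r: float = 1.0) -> float:
    """Approximate field (tesla) inside a long solenoid: B = mu0*mu_r*n*I."""
    n = turns / length_m  # turns per metre
    return MU_0 * mu_r * n * current_a

# Illustrative kit-sized coil: 200 turns over 5 cm carrying 0.5 A.
air_core = solenoid_field(200, 0.05, 0.5)             # ~2.5 mT
iron_core = solenoid_field(200, 0.05, 0.5, mu_r=200)  # ~0.5 T (assumed mu_r)
print(f"air core: {air_core*1e3:.2f} mT, iron core: {iron_core:.2f} T")
```

Doubling the current doubles the field, which is exactly the control a permanent magnet cannot offer, while cutting the current to zero removes the field entirely.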
They find use in a number of different areas: from loudspeakers and earphones, to MRI machines, magnetic locks and for magnetic recording and data storage on devices such as tape recorders and computer hard disks.
School-based occupational therapy practitioners are occupational therapists (OTs) and occupational therapy assistants (OTAs) who use meaningful activities (occupations) to help children and youth participate in what they need and/or want to do in order to promote physical and mental health and well-being. Occupational therapy addresses the physical, cognitive, psychosocial and sensory components of performance. In schools, occupational therapy practitioners focus on academics, play and leisure, social participation, self-care skills (ADLs or Activities of Daily Living), and transition/ work skills. Occupational therapy’s expertise includes activity and environmental analysis and modification with a goal of reducing the barriers to participation.
Occupational therapy services for students are determined through the IEP process. School-based occupational therapy is available for students who are eligible for special education. Occupational therapists complete evaluations and assessments, and work with other members of the school-based team to help determine what is needed for a student to receive a free, appropriate public education in the least restrictive environment. They collaborate with the team to identify a student’s annual goals and determine the services, supports, modifications, and accommodations that are required for the student to achieve their goals.
By analyzing the relationship between the geographic location of current human populations in relation to East Africa and the genetic variability within these populations, researchers have found new evidence for an African origin of modern humans.
The origin of modern humans is a topic that is hotly debated. A leading theory, known as "Recent African Origin" (RAO), postulates that the ancestors of all modern humans originated in East Africa, and that around 100,000 years ago some modern humans left the African continent and subsequently colonized the entire world, supplanting previously established hominids such as Neanderthals in Europe and Homo erectus in Asia.
In the new work reported this week, researchers Franck Prugnolle, Andrea Manica, and François Balloux of the University of Cambridge show that geographic distance from East Africa along ancient colonization routes is an excellent predictor for the genetic diversity of present human populations, with those farther from Ethiopia being characterized by lower genetic variability. This result implies that information regarding the geographic coordinates of present populations alone is sufficient for predicting their genetic diversity. This finding adds compelling evidence for the RAO model. Such a relationship between location and genetic diversity is indeed only compatible with an African origin of modern humans and subsequent spread throughout the world, accompanied by a progressive loss of neutral genetic diversity as new areas were colonized. The loss of genetic diversity along colonization routes is smooth, with no obvious genetic discontinuity, thus suggesting that humans cannot be accurately classified in discrete ethnic groups or races on a genetic basis.
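The serial-founder logic behind this result can be illustrated with a toy simulation: each new colony is founded by a small sample of the previous one, and expected heterozygosity falls by a factor of (1 - 1/(2N)) per generation spent at the bottleneck size N. All numbers below are arbitrary illustrations, not estimates from the study:

```python
def heterozygosity_along_route(h0: float, n_colonies: int,
                               founders: int, bottleneck_gens: int) -> list:
    """Expected heterozygosity after successive founder events.

    Each colony is started by `founders` diploid individuals and stays
    small for `bottleneck_gens` generations; genetic drift then reduces
    H by (1 - 1/(2N)) per generation. Toy model, arbitrary parameters.
    """
    h = h0
    series = [h]
    loss_per_event = (1 - 1 / (2 * founders)) ** bottleneck_gens
    for _ in range(n_colonies):
        h *= loss_per_event
        series.append(h)
    return series

# 20 colonization steps away from the origin, 50 founders per step,
# 10 generations of small size per step (all numbers illustrative).
for step, h in enumerate(heterozygosity_along_route(0.8, 20, 50, 10)):
    if step % 5 == 0:
        print(f"colony {step:2d}: expected heterozygosity {h:.3f}")
```

The decline is smooth and monotonic with the number of founder events, mirroring the gradual, discontinuity-free loss of diversity with distance described above.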
On Earth, a total solar eclipse means that for just a few minutes, the sky goes dark. But what does a total solar eclipse look like from space?
What Exactly is a Solar Eclipse?
A solar eclipse happens when, at just the right moment, the moon comes between the sun and Earth. When the moon only blocks out part of the sun’s light, it’s called a partial solar eclipse. Sometimes, the moon blocks all of the sun’s light. This is called a total solar eclipse.
Seeing the Moon’s Shadow
As the moon passes in front of the sun's light, it casts a shadow on part of the Earth—and Earth-observing satellites can see this shadow.
The moon's shadow moves as the Earth rotates, tracing a path across the Earth. This path is called the path of totality. If you want to experience total darkness during an eclipse, you have to be right in this path of the moon's shadow.
Never look directly at the sun! It can damage your eyesight!
To view a solar eclipse safely, you must use special solar viewing glasses. For safety tips, such as what kind of glasses to buy, visit the NASA Eclipse 2017 Safety Page: https://eclipse2017.nasa.gov/safety
Watching the Moon Pass in Front of the Sun
Total solar eclipses also provide a rare chance to see the sun’s atmosphere, called the corona. The corona is very dim, and it’s usually hard to see because the sun is so much brighter. However, during those few minutes of a total solar eclipse, all you can see is the light from the corona!
Some satellites that observe the sun can also get a special glimpse during a total solar eclipse: they can actually watch the sun as the moon passes over it.
GOES-16 Watches Earth and the Sun
On August 21, 2017, all of North America was at least partially in the path of a solar eclipse. And anyone lucky enough to be in the path of totality—which stretched from Salem, Oregon to Charleston, South Carolina—was able to see a total solar eclipse.
The GOES-16 weather satellite was watching that day, too! GOES-16 has an instrument that allowed it to capture views of Earth during this solar eclipse.
The Advanced Baseline Imager, or ABI, keeps an eye on Earth. On an ordinary day, it helps scientists spot severe weather on Earth and other hazards like forest fires. During a solar eclipse, it can watch the moon's shadow pass over the Earth!
Paleozoic tabulate corals are generally thought to have been free standing, a flattened disc-shaped to dome-shaped morphology providing a degree of stability in shallow-water, high-energy environments. The ability to encrust has previously been suggested by patterns of competitive overgrowth in certain species. Definite proof of encrustation by favositid corals is exhibited in an extraordinary example of an ancient rocky shore exposed for 350 m on Hudson Bay near Churchill, Manitoba. Carbonate strata attributed to the Upper Ordovician Port Nelson or Lower Silurian Severn River Formations locally transgress a massive Precambrian quartzite. An ancient shoreface is clearly marked by large, smoothly eroded boulders of the dark quartzite, commonly 2–10 m in diameter. The boulders are buried in coarse carbonate debris, but corals up to 20 cm in diameter are found cemented directly onto the surface of some boulders. Deep pitting of many boulders to a depth of 2–3 cm was contemporaneous and may have been promoted by unpreserved encrusters such as sponges or anemones.
Towards the end of Lord of the Rings: The Return of the King, Frodo and Sam hike through Mordor, heading in the direction of Mount Doom to destroy the Ring. Despite their best efforts, however, they find themselves continually returning to the same hill. Perhaps that movie reference was a bit obscure, but it’s a familiar situation in movies, novels and even personal experience: in an attempt to follow a straight course, we somehow end up right back where we started. How is it that we always manage to end up unconsciously walking in circles?
In a study by Jan Souman of the Max Planck Institute for Biological Cybernetics in Germany, published last year in Current Biology, our documented tendency toward aimless wandering has been scientifically confirmed. Without a point of reference such as the sun, a compass, or the tip of a mountain to follow, people trying to follow a straight course will indeed end up in the same place they started from. Something as simple as walking in a straight line actually involves the work of multiple senses, along with our motor actions and cognition.
In a series of experiments, the researchers instructed participants to walk as straight as they could through a forest (where one tree can begin to look like another) on both a cloudy and a sunny day. They noted that when the sun (an easily distinguishable reference point) was out, participants were able to follow an almost perfectly straight course. However, when the weather was cloudy, all of them went in circles! In the next part of the experiment, the researchers instructed people to walk in a straight line in the Sahara desert; though some participants strayed from a completely straight line, they did not walk in a circle (they had the sun as a reference). However, when the same experiment was conducted at night, when the moon disappeared behind the clouds, the participants unknowingly veered into a circle and headed back towards the starting point. These experiments showed that, lacking an absolute reference (a building, mountain, sun, or moon), people tend to walk in circles.
But even if this is true, we’re still left with the question of why. Souman tested the possibility of left or right-handedness, different levels of dopamine on the different sides of our brain, and even bigger muscles or stronger appendages on one side. However, no single explanation was satisfactory in describing this phenomenon. Instead, Souman suggests that walking in a straight line is an extremely complex task involving the brain, sense of sight and proprioception (our sense of where parts of the body are relative to each other in space), spatial awareness and sense of balance. When any of those elements are in any way disrupted, people tend to drift randomly.
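Souman's interpretation, that small uncorrected directional errors accumulate over time, is easy to mimic with a toy random-walk model. In the sketch below, a walker's heading picks up zero-mean noise plus a small constant bias (a stand-in for any sensory asymmetry; all parameters are arbitrary); without an external reference to cancel the bias, the path curls back on itself:

```python
import math, random

def walk(steps: int, heading_noise: float, heading_bias: float):
    """Simulate a walker whose heading (radians) drifts each unit step."""
    x = y = heading = 0.0
    path = [(x, y)]
    for _ in range(steps):
        heading += random.gauss(0.0, heading_noise) + heading_bias
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path

random.seed(1)
straight = walk(1000, heading_noise=0.0, heading_bias=0.0)
circling = walk(1000, heading_noise=0.05, heading_bias=0.01)

# Distance from the start after 1000 unit steps:
for name, path in (("no drift", straight), ("drift", circling)):
    x, y = path[-1]
    print(f"{name:8s}: ended {math.hypot(x, y):7.1f} units from start")
```

With a bias of 0.01 radians per step the heading completes a full turn roughly every 628 steps, so the walker loops with a radius of about 100 step lengths instead of covering the full 1000 units, much like the cloudy-day participants above.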
This study shows us how crucial our different senses are for navigation. Collectively, they are able to provide us with the amazing ability to find our way through even the most complicated routes. However, the handicap of even one of these senses may just leave us walking in circles.
But at the cellular level, plants and animals are quite similar (mainly because they share a common eukaryotic cell ancestor).
Embryogenesis is the first developmental step in growing a multicellular life form, like a flowering plant or a cat.
These vertebrate embryos appeared different in the earliest stages of development. But as embryogenesis progressed, the forms of the different embryos began to converge so that, at mid-stage embryogenesis, they all appeared similar. Finally, at the latter stages of embryogenesis, the forms of the embryos from different vertebrate animals again diverged.
Plotting these results along a vertical axis yielded a shape somewhat reminiscent of an hourglass. Hence, this early sequence of events in the formation of vertebrate animals was named the embryonic “hourglass model”.
Fast-forward to the 21st century — scientists doing genetic studies have recently shown that during the mid-stage of animal embryogenesis (the narrowing in the middle of the "hourglass", when the embryos appear the same), the embryos all express similar "ancient genes". These so-called "ancient genes", dating back to the origin of eukaryotic cells, are shared by all animals tested so far.
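Studies of this kind typically condense each developmental stage into a single number, often called a transcriptome age index: the mean evolutionary age of the active genes, weighted by expression. A minimal sketch with invented numbers (not data from the studies described here):

```python
def transcriptome_age_index(expression: list, age_rank: list) -> float:
    """Expression-weighted mean gene age for one developmental stage.

    age_rank: evolutionary age class per gene (1 = oldest/'ancient',
    higher = evolutionarily younger), as in phylostratigraphy.
    """
    total = sum(expression)
    return sum(e * a for e, a in zip(expression, age_rank)) / total

# Three toy genes: two ancient (rank 1) and one young (rank 5).
ages = [1, 1, 5]

early = transcriptome_age_index([1.0, 1.0, 4.0], ages)  # young gene dominant
mid   = transcriptome_age_index([4.0, 4.0, 0.5], ages)  # ancient genes dominant
late  = transcriptome_age_index([1.0, 0.5, 3.0], ages)

print(f"early {early:.2f}  mid {mid:.2f}  late {late:.2f}")
# The dip in the middle is the hourglass 'waist': mid-stage embryos
# lean most heavily on ancient genes.
```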
However, there was no similar evidence for this “hourglass model” of embryogenesis in plants — until now.
The Green Hourglass
In a paper published in the 4 October 2012 issue of Nature (see Ref. 1 below), Dr. Marcel Quint and colleagues provided two lines of genetic evidence supporting the idea that the hourglass model is also present in plant development.
Although embryogenesis evolved independently in both animals and plants, “… these findings indicate convergent evolution of the molecular hourglass and a conserved logic of embryogenesis across kingdoms.” (from Ref 1 below)
Perhaps the best summary of these interesting results is provided by the editor of Nature:
“It seems that both animals and plants have independently converged on a similar way of managing gene expression as they transform from a single celled zygote to multicellular organism, even though their morphological development is very different.”
1. Quint, M., et al. (2012) "A transcriptomic hourglass in plant embryogenesis." Nature, Vol. 490, pp. 98-101. (Abstract)
What are peptides
Peptides are naturally occurring biological molecules. Peptides are found in all living organisms and play a key role in all manner of biological activity. Like proteins, peptides are formed (synthesized) naturally from transcription of a sequence of the genetic code, DNA. Transcription is the biological process of copying a specific DNA gene sequence into a messenger molecule, mRNA, which then carries the code for a given peptide or protein. Reading from the mRNA, a chain of amino acids is joined together by peptide bonds to form a single molecule.
There are 20 naturally occurring amino acids and, like letters into words, they can be combined into an immense variety of different molecules. When a molecule consists of 2-50 amino acids it is called a peptide, whereas a longer chain of more than 50 amino acids is generally referred to as a protein.
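The "letters into words" analogy can be put in numbers: with 20 amino acids to choose from at each position, a chain of length n admits 20^n distinct sequences, so even short peptides come in astronomical variety:

```python
# Number of distinct sequences for a chain of n amino acids: 20**n.
for n in (2, 10, 50):
    print(f"length {n:2d}: 20**{n} = {20**n:.3e} possible sequences")
# length  2: 4.000e+02
# length 10: 1.024e+13
# length 50: 1.126e+65  (the upper end of the 'peptide' size range)
```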
Peptides are in every cell and tissue in the body
In the human body, peptides are found in every cell and tissue and perform a wide range of essential functions. Maintenance of appropriate concentration and activity levels of peptides is necessary to achieve homeostasis and maintain health.
The function that a peptide carries out is dependent on the types of amino acids involved in the chain and their sequence, as well as the specific shape of the peptide. Peptides often act as hormones and thus constitute biologic messengers carrying information from one tissue through the blood to another. Two common classes of hormones are peptide and steroid hormones. Peptide hormones are produced in glands, and a number of other tissues including the stomach, the intestine and the brain. Examples of peptide hormones are those involved in blood glucose regulation, including insulin, glucagon-like-peptide 1 (GLP-1) and glucagon, and those regulating appetite, including ghrelin.
Peptides primarily create a biological effect by binding to cell surface receptors
For a peptide to exert its effect, it needs to bind to a receptor specific for that peptide and which is located in the membrane of relevant cells. A receptor penetrates the cell membrane and consists of an extracellular domain where the peptide binds, and an intracellular domain through which the peptide exerts its function upon binding and activation of the receptor. An example is the GLP-1 receptor, which is located on beta cells in the pancreas. Upon activation of the receptor by natural GLP-1 or a peptide analog (a synthesized molecule mimicking the effect of natural GLP-1, such as our lixisenatide), the cell is stimulated through a series of biological events to release insulin.
When plants of the same kind are grown and cultivated on a large scale in one place, they are called a crop. Crops are classified on the basis of the seasons in which they grow: Kharif and Rabi crops.
Kharif crops (monsoon crops): Crops which are grown during the monsoon (rainy season) are called Kharif crops. Seeds of these crops are sown at the beginning of the monsoon season. After maturation, these crops are harvested at the end of the monsoon season (October-November).
Examples: Paddy, maize, millet and cotton.
Rabi crops (winter crops): Crops which are grown during the winter season (October-March) are called Rabi crops. Seeds of these crops are sown at the beginning of the winter season. After maturation, the crops are harvested at the end of the winter season (April-May).
Examples: Wheat, gram and mustard.
No one knows what dark matter is, but it constitutes 80 percent of the matter in our universe. By studying numerous dwarf galaxies — satellite systems that orbit our own Milky Way galaxy — NASA's Fermi Gamma-ray Space Telescope has produced some of the strongest limits yet on the nature of the hypothetical particles suspected of making up dark matter.
There's more to the cosmos than meets the eye. About 80 percent of the matter in the universe is invisible to telescopes, yet its gravitational influence is manifest in the orbital speeds of stars around galaxies and in the motions of clusters of galaxies. Yet, despite decades of effort, no one knows what this "dark matter" really is. Many scientists think it's likely that the mystery will be solved with the discovery of new kinds of subatomic particles, types necessarily different from those composing atoms of the ordinary matter all around us. The search to detect and identify these particles is underway in experiments both around the globe and above it.
Scientists working with data from NASA's Fermi Gamma-ray Space Telescope have looked for signals from some of these hypothetical particles by zeroing in on 10 small, faint galaxies that orbit our own. Although no signals have been detected, a novel analysis technique applied to two years of data from the observatory's Large Area Telescope (LAT) has essentially eliminated these particle candidates for the first time.
WIMPs, or Weakly Interacting Massive Particles, represent a favored class of dark matter candidates. Some WIMPs may mutually annihilate when pairs of them interact, a process expected to produce gamma rays — the most energetic form of light — that the LAT is designed to detect.
The team examined two years of LAT-detected gamma rays with energies between 200 million and 100 billion electron volts from 10 of the roughly two dozen dwarf galaxies known to orbit the Milky Way. Instead of analyzing the results for each galaxy separately, the scientists developed a statistical technique — they call it a "joint likelihood analysis" — that evaluates all of the galaxies at once without merging the data together. No gamma-ray signal consistent with the annihilations expected from four different types of commonly considered WIMP particles was found.
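In spirit, a joint likelihood analysis is straightforward: each galaxy keeps its own data and expected backgrounds, and only the log-likelihoods are summed, so all galaxies jointly constrain the shared dark-matter parameters. The sketch below is a heavily simplified caricature (toy Poisson counts and invented numbers, not the Fermi-LAT implementation):

```python
import math

# Toy per-galaxy inputs: observed counts, expected background, and a
# factor converting a shared signal-strength parameter into expected
# signal counts (all values invented for illustration).
galaxies = [
    {"observed": 12, "background": 11.0, "signal_per_sigma": 3.0},
    {"observed":  8, "background":  8.5, "signal_per_sigma": 5.0},
    {"observed": 15, "background": 14.0, "signal_per_sigma": 2.0},
]

def joint_log_likelihood(sigma: float) -> float:
    """Sum per-galaxy Poisson log-likelihoods for one shared parameter."""
    total = 0.0
    for g in galaxies:
        mu = g["background"] + sigma * g["signal_per_sigma"]
        k = g["observed"]
        total += k * math.log(mu) - mu - math.lgamma(k + 1)
    return total

# Scan the shared parameter; the galaxies' data are never merged,
# they are combined only through the summed likelihood.
for sigma in (0.0, 0.2, 0.5, 1.0):
    print(f"sigma={sigma:.1f}: joint logL = {joint_log_likelihood(sigma):.2f}")
```

Because the toy counts sit near the expected backgrounds, the joint likelihood falls as the signal strength grows, which is how an analysis like this turns a non-detection into an upper limit.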
For the first time, the results show that WIMP candidates within a specific range of masses and interaction rates cannot be dark matter. A paper detailing these results appeared in the Dec. 9, 2011, issue of Physical Review Letters.
This dwarf spheroidal galaxy in the constellation Fornax is a satellite of our Milky Way and is one of 10 used in Fermi's dark matter search. The motions of the galaxy's stars indicate that it is embedded in a massive halo of matter that cannot be seen.
Ocean waves of the type monitored by the Disasters Charter are most typically tsunamis, but storms with strong winds can also cause hazardous coastal waves.
Tsunamis are seismic sea waves and typically occur as a result of underwater earthquakes or volcanic eruptions, but they can also be caused by other external factors, such as ice calving from glaciers or meteorite impacts. Tsunamis are tall waves that can reach heights of tens of metres and are generated by activity at sea. These waves extend over land when they reach coastal areas and can strike with deadly force and leave flooding in their wake.
The majority of tsunamis occur in the Pacific Ocean's "Ring of Fire", a wide area stretching roughly from the western coast of South, Central and North America across to the eastern coast of Russia and China, and taking in the entirety of Japan, the Philippines and many islands in the south Pacific Ocean. Tectonic plates meet in this ring, and as such it is a geologically active region.
Satellites are useful in imaging the aftermath of a tsunami, using optical and radar data to identify damage and flooded areas.
Japan - Minamisoma - Observed inundation extent as of March 12, 2011
RapidEye, DLR - Map produced by ZKI |
Scab refers to any of several bacterial or fungal diseases of plants characterized by crustaceous lesions on fruit, tuber, leaf, or stem. The term is also used for the symptom of the disease. Scab often affects the trees or plants of apples, crab apples, cereals, cucumbers, peaches, pecans, Photinia, potatoes, and pyracantha. Leaves of affected plants may wither and drop early. Potatoes are especially susceptible to common scab, caused by a bacterium that spreads rapidly in dry alkaline soils. It can be prevented by avoiding the use of materials such as wood ash, fresh manure, and lime that will add alkalinity to the soil. Other disease-prevention methods include planting resistant varieties or disease-free seeds, tubers, and corms; destroying diseased parts; removing weeds; rotating vegetables and flowers; and regularly spraying plants with fungicides. |
Edward the Confessor, St. (The Britannica Concise):
King of England (1042-66). The son of Ethelred II, he was exiled to Normandy for 25 years (1016-41) while the Danes held England (see Canute the Great). For the first 11 years of his reign the real master of England was Godwine, earl of Wessex. Edward outlawed Godwine in 1051 and appointed Normans to high positions in government, thus preparing the way for the Norman Conquest. Godwine continued his opposition, and his son Harold (see Harold II) dominated England after 1053, subjugating Wales in 1063. Edward named Harold as his successor on his deathbed, but the duke of Normandy (the future William I) invaded England to claim the crown earlier promised him. Though an ineffectual monarch, Edward was famous for his piety, which earned him the epithet "the Confessor." |
Steam turbines, used to drive generators or mechanical machinery, are among the oldest and most versatile prime mover technologies still in general production. Power generation using steam turbines has been in use for over 100 years, ever since they replaced reciprocating steam engines because of their higher efficiencies and lower costs. Most of the electricity produced in the United States today is generated by conventional steam turbine power plants. The capacity of steam turbines can range from 50 kW to several hundred MW for large utility power plants. Steam turbines are widely used for CHP (combined heat and power) applications in the U.S. and Europe.
Unlike gas turbines and reciprocating engine CHP systems, steam turbines normally generate electricity as a byproduct of heat (steam) generation. A steam turbine uses a separate heat source and does not directly convert fuel to electric energy. The energy is transferred from the boiler to the turbine through high pressure steam that in turn powers the turbine and generator. This separation of functions enables steam turbines to operate with an enormous variety of fuels, varying from clean natural gas to solid waste, including all types of coal, wood, wood waste, and agricultural byproducts.
Steam turbines offer a wide array of designs and complexity to match the desired application and/or performance specifications. Steam turbines for utility service may have several pressure casings and elaborate design features, all designed to maximize the efficiency of the power plant. For more information, see our Applications Guide. |
Geologists say that California could face a devastating earthquake and tsunami in the foreseeable future. Researchers have found that a well-known geological fault in southern California could play a "cruel joke" on the people of the United States. The research results were published in Geophysical Research Letters.
The Ventura Fault stretches for about a hundred kilometers west of the city of Ventura, California, running beneath the settlements of Santa Barbara and Goleta, and part of it lies under the Pacific Ocean. The Ventura Fault has attracted scientists' attention for decades, but it was most often considered completely "harmless" from the point of view of seismic activity. Researchers have now concluded that this is not so, and that the fault may pose a serious danger to Americans.
The main factor that could cause a devastating earthquake is the stepped structure of the Ventura Fault (it had previously been assumed to be smooth). To determine the fault's structure conclusively, the experts ran a computer simulation that took into account factors such as plate tectonics and the seismic activity of the region. The scientists found that the Ventura Fault really does have a stepped structure and that, in addition, it lies closer to the surface than experts had thought.
This means that California residents could face an earthquake of magnitude 8.0. Note that earthquakes of this size occur on Earth about once a year and can have disastrous consequences. Of serious concern to experts is that part of the fault lies under the ocean floor, which increases the risk of a tsunami. |
(This is the third in a series about eating biodiverse foods; having looked at the early history of food, and the mechanics and markets of Industrial Revolution agriculture, I focus here on Green Revolution and its unintended side effects.)
The biggest shave of agricultural genetics came during the mid-1900s, during a period known as the Green Revolution. While it overlaps with the Industrial Revolution, I distinguish the two because one is based on mechanical issues and the other on chemicals. Yes, the Green Revolution caused a surge in agricultural productivity, lowered the price of food, and probably did save a lot of people from famine (not a couple hundred: from hundreds of thousands to millions). BUT it did this in ways that are not proving to be sustainable; the Green Revolution has produced a lot of negative consequences that must be dealt with before agriculture can really move forward again.
First, a little background: the advent of chemical agriculture
The atmosphere we breathe is filled with nitrogen, but it is in a form that plants cannot use. Instead, tiny little organisms in the soil convert atmospheric nitrogen to plant soluble nitrogen. There are plants known as nitrogen fixers, and they do a wonderful job of making those organisms very happy and prolific. There are a host of other nutrients and minerals required by plants for health, but it is most often a lack of available nitrogen that thwarts plant growth.
In the early 1900s, two scientists (working argumentatively; this was not a collaboration) developed the Haber-Bosch process, which creates plant-soluble nitrogen, in the form of ammonia, from atmospheric nitrogen, water, and natural gas. The process is fairly involved: it requires changing both the temperature and the pressure inside the reaction chambers, and that takes energy beyond the initial natural gas requirement.
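For the chemically curious, the overall reactions look roughly like this (a simplified sketch; the industrial process involves several intermediate steps that I'm leaving out):
CH4 + H2O -> CO + 3 H2 (hydrogen obtained from natural gas by steam reforming)
N2 + 3 H2 <-> 2 NH3 (the Haber-Bosch synthesis itself, run over an iron catalyst at high temperature and pressure)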
With the Haber-Bosch process, fertilizers containing higher than natural levels of nitrogen became commonplace. Plants do indeed respond positively (except for my eggplants, which when too happy seem to produce more flea beetles than fruits) to the additional nitrogen. But two problems crop up fairly quickly:
- the nitrogen fixing organisms disappear from the soil, and
- the crops become more prone to lodging.
Lodging happens when a high wind pushes the plants over and they cannot, for some reason, stand themselves back up. One cause of lodging involves top heavy plants with big fat seed heads, bigger than the stalk can handle.
Enter Norman Borlaug.
What was the Green Revolution?
Known as the father of the Green Revolution, Mr. Borlaug was an agronomist who spent a great deal of time in the 1940s and 50s working with wheat plants. In order to save time, Mr. Borlaug went to Mexico where he could plant a crop in the highlands during one season and in the lowlands during the next, effectively doubling the speed at which he could work.
By crossing and re-crossing, he eventually arrived at a variety of wheat with shorter stalks that could handle the increased size of the seeds when the field was treated with the artificial nitrogen fertilizer. Because of his two-locations technique, there was the side effect of that particular wheat variety being what I call ‘site agnostic’, meaning where it was being grown had little impact on how it performed. The wheat also had an exceedingly narrow genome.
Typically the genetic variety within a single crop is fairly high. This provides a broad enough genetic base to protect against total species devastation in the event of some new plant pathogen appearing. For this particular wheat, the plants are less like cousins and more like identical twins.
When, in the early 1960s, Mr. Borlaug was invited by the Indian government to repeat this process with rice, the result was similar: a site agnostic, short-stalked rice with a very narrow genome that grew very well in the presence of synthetic nitrogen fertilizers. The jump in rice productivity did indeed help more people eat more rice more cheaply. Unfortunately, the side effects, though accidental, are by no means incidental.
Problems with the Green Revolution
Seeds used to be free: farmers simply saved seed from the previous year's crop. The hybrid seeds must be bought. Fertilizer used to be free: composted kitchen scraps, cover crop systems, and composted animal manure were all put to use. The synthetic fertilizers must be bought.
After a few years of applying synthetic nitrogen fertilizers to the soil, the soil food web begins to break down as the nitrogen fixing organisms disappear. The fields become, in effect, addicted to chemical fertilizer. Reverting to prior methodologies becomes very difficult, more so the longer the field is engaged in this form of agriculture.
Because plants can only uptake food at a certain rate, fertilizer applied above that rate washes off in the next rain. The increased nitrogen in the streams causes algal blooms, which deprives the water (and fish) of oxygen and blocks sunlight from reaching any aquatic plants. Eventually the nitrogen breaks down into a variety of forms, a process known as the nitrogen cascade. After wreaking havoc en route, some of the excess fertilizer ends up in the atmosphere as laughing gas, a potent greenhouse gas.
The crops are of course not the only plants intrigued by the abundant nitrogen. Weeds grow with equal vigor. This results in the application of herbicides. The mass gathering of a specific crop in one place is seen by any insects or other creatures who find that crop tasty as a very convenient occurrence indeed. This results in a need for pesticides. Now there is a very large field growing a single crop with a tiny degree of genetic variability and no neighbors- the biodiversity is nil. The fancy term for that is monoculture. A monoculture is an economic gamble: should that crop fail, the farmer has lost a significant portion of that year’s revenue, perhaps all of it.
Mr. Borlaug’s crops were so successful in meeting their singular goal that a great deal of well-intended charitable money was put toward introducing them in other places. The fact that wheat is a bit of a water hog did not seem to factor in when deciding where to grow it. The fact that other cultures have other crops (improvable, no doubt) that are both culturally and ecologically more suited to their landscape did not factor in. (This is part of why the Green Revolution never really reached Africa.)
The fact that these high yielding seeds require the synthetic fertilizers (and thus require the farmer to buy said fertilizers) did not factor in. Neither did the fact that those longer stalks were indeed very useful, as roof thatching and animal fodder, amongst other things. Those needs now must be grown elsewhere, a phenomenon Vandana Shiva has dubbed "shadow acres". ("Shadow" because they are not included in efficiency calculations.)
The fact that a monoculture of rice is not a balanced diet did not factor in. Nor did the reality that the ratio of the energy put into growing a crop to the calories (energy) obtained from the crop had completely flip-flopped: we're spending more energy to grow food than we are getting from the food! Too bad we can't eat oil or gas.
All of which is perhaps more than you wanted to know, but it gets me to what I wanted to show you: our contemporary diet is artificially narrow. Part of the reason for this is the fundamental fallacy of the single-problem-to-solve mindset, and part of this is because farmers are people too. The vast majority of farmers are not trying to make it big, they are just trying to make a living, so they keep taking what seems like the next right step, but the steps are piling up in the wrong direction. Farmers who are seeing what is happening and trying to change the direction of agriculture need help, some of it from us, the eaters.
Next up: How biodiversity in the kitchen helps save the world |
Using Strings and Regular Expressions
WHAT’S IN THIS CHAPTER?
- The differences between C-style strings and C++ strings
- How you can localize your applications to reach a worldwide audience
- How to use regular expressions to do powerful pattern matching
Every program that you write will use strings of some kind. With the old C language there is not much choice but to use a dumb null-terminated character array to represent an ASCII string. Unfortunately, doing so can cause a lot of problems, such as buffer overflows, which can result in security vulnerabilities. The C++ STL includes a safe and easy-to-use string class that does not have these disadvantages.
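As a minimal sketch of the difference (the buffer size and variable names here are purely illustrative):

```cpp
#include <cstring>
#include <iostream>
#include <string>

int main() {
    // C-style: the programmer must size the buffer correctly up front.
    char cStyle[8];
    std::strcpy(cStyle, "hello");   // writing more than 7 characters here would overflow

    // C++ style: std::string grows as needed, with no manual bookkeeping.
    std::string cppStyle = "hello";
    cppStyle += ", world";          // safe; capacity is managed for us

    std::cout << cStyle << '\n' << cppStyle << '\n';
}
```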
The first section of this chapter discusses strings in more detail. It starts with a discussion of the old C-style strings, explains their disadvantages, and ends with the C++ string class. It also mentions raw string literals, which are new in C++11.
The second section discusses localization, which is becoming more and more important these days to allow you to write software that can be localized to different regions around the world.
The last section introduces the new C++11 regular expressions library, which makes it easy to perform pattern matching on strings. Regular expressions allow you not only to search for substrings matching a given pattern, but also to validate, parse, and transform strings. They are really powerful, and it's recommended that you start using them instead of manually writing your own string-processing code.
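To give a small taste before we get there, here is a minimal sketch of C++11 regex matching (the date format and names are purely illustrative):

```cpp
#include <iostream>
#include <regex>
#include <string>

int main() {
    // Validate a simple date-like string and pull out its parts.
    const std::string input = "2011-12-09";
    const std::regex datePattern(R"((\d{4})-(\d{2})-(\d{2}))");  // raw string literal, new in C++11

    std::smatch match;
    if (std::regex_match(input, match, datePattern)) {
        std::cout << "year=" << match[1] << " month=" << match[2]
                  << " day=" << match[3] << '\n';
    }
}
```
|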
A less-expensive way to duplicate the complicated steps of photosynthesis in making fuel
January 23, 2014
Argonne National Laboratory researchers have found a new, more efficient, less-expensive way to make fuel — principally, hydrogen — from sunlight and water: linking a synthetic cobalt-containing catalyst to an organic light-sensitive molecule called a chromophore.
Chromophore molecules, such as chlorophyll, are involved in capturing light for photosynthesis.
Currently, the most efficient methods we have for making fuel involve rare and expensive metal catalysts, such as platinum. Although cobalt is significantly less efficient than platinum when it comes to light-induced hydrogen generation, the drastic price difference between the two metals makes cobalt the obvious choice as the foundation for a synthetic catalyst, said Argonne chemist Karen Mulfort.
The Argonne study wasn’t the first to look at cobalt as a potential catalytic material; however, a paper by the researchers in Physical Chemistry Chemical Physic identified a new mechanism to link the chromophore with the catalyst.
Previous experiments with cobalt attempted to connect the chromophore directly with the cobalt atom within the larger compound, but this eventually caused the hydrogen generation process to break down. Instead, the Argonne researchers connected the chromophore to part of a larger organic ring that surrounded the cobalt atom, which allowed the reaction to continue significantly longer.
Future studies in this arena could involve nickel- and iron-based catalysts — metals that are even more naturally abundant than cobalt, although they are not quite as effective natural catalysts.
The research was supported by DOE’s Office of Science.
Abstract of Physical Chemistry Chemical Physics paper
We have designed two new supramolecular assemblies based on Co(II)-templated coordination of Ru(bpy)3^2+ (bpy = 2,2′-bipyridyl) analogues as photosensitizers and electron donors to a cobaloxime macrocycle, which are of interest as proton reduction catalysts. The self-assembled photocatalyst precursors were structurally characterized by Co K-edge X-ray absorption spectroscopy and solution-phase X-ray scattering. Visible light excitation of one of the assemblies has yielded instantaneous electron transfer and charge separation to form a transient Co(I) state which persists for 26 ps. The development of a linked photosensitizer–cobaloxime architecture supporting efficient Co(I) charge transfer is significant since it is mechanistically critical as the first photo-induced electron transfer step for hydrogen production, and has not been detected in previous photosensitizer–cobaloxime linked dyad assemblies. X-band EPR spectroscopy has revealed that the Co(II) centres of both assemblies are high spin, in contrast to most previously described cobaloximes, and likely plays an important role in facilitating photoinduced charge separation. Based on the results obtained from ultrafast and nanosecond transient absorption optical spectroscopies, we propose that charge recombination occurs through multiple ligand states present within the photosensitizer modules. The studies presented here will enhance our understanding of supramolecular photocatalyst assembly and direct new designs for artificial photosynthesis. |
Nine Elements of Digital Citizenship
At that same conference, ISTE released the book Digital Citizenship in Schools, in which Gerald Bailey and I cover nine themes that we discovered are key to the concept of digital citizenship. (See Nine Elements of Digital Citizenship.) The book includes updated coverage of the themes as well as activities for classroom and district use that can help get students started on their journey toward becoming full citizens of the emerging global digital frontier.
Digital citizenship describes the norms of appropriate, responsible behavior with regard to technology use. Our nine elements help users focus on these issues, but they expand beyond the boundary of just working with technology appropriately. They also begin to set the stage for how we work with each other in a global, digital society. These nine elements provide a foundation for helping to educate children on the issues that face them in an increasingly technological world. It is also our hope that digital citizenship will create a base for all technology users to begin to discuss the appropriate use of technology.
Digital Access: Full electronic participation in society. Can all users participate in a digital society at acceptable levels if they choose?
Digital Commerce: Electronic buying and selling of goods. Do users have the knowledge and protection to buy and sell in a digital world?
Digital Communication: Electronic exchange of information. Is there an understanding of the digital communication methods and when they are appropriate?
Digital Literacy: The capability to use digital technology and to know when and how to use it. Have users taken the time to learn about digital technologies? Do they share that knowledge with others?
Digital Etiquette: The standards of conduct expected by other digital technology users. Do users consider others when using digital technologies?
Digital Law: The legal rights and restrictions governing technology use. Are users aware of laws (rules, policies) that govern the use of digital technologies?
Digital Rights and Responsibilities: The privileges and freedoms extended to all digital technology users and the behavioral expectations that come with them. Are users ready to protect the rights of others and to defend their own digital rights?
Digital Health and Wellness: The elements of physical and psychological well-being related to digital technology use. Do users consider the risks (both physical and psychological) when using digital technologies?
Digital Security: The precautions that all technology users must take to guarantee their personal safety and the security of their networks. Do users take the time to protect their information while creating precautions to protect others' data as well? |
Clear organization is extremely important in a well-written paper. Generally speaking, history papers should include the following three elements:
- an introduction with a clear thesis statement;
- body paragraphs that contain evidence supporting your point of view; and
- a conclusion which restates your thesis and suggests the implications of your thesis.
What is an Introduction?
The introduction should lay out your central argument in a clear thesis statement. For a more detailed explanation of historical arguments and thesis statements, visit our web page, "Writing a Thesis and Making an Argument."
The introduction also serves as a "road map" for the reader. It should offer the reader the direction and general ideas contained in your paper, and set up the necessary background for the paper.
What are Body Paragraphs?
A paragraph is a conceptual unit that teams several sentences to convey a larger thought or point.
Topic sentence and thesis statement: Each paragraph should begin with a general topic sentence that indicates what subject the rest of the paragraph will discuss, what issue it will explore, or what point it will make. Also, every body paragraph should be reflective of your thesis statement. The topic sentence should show the reader how the topic of a paragraph relates to your argument.
One topic for each paragraph: If your paragraph talks about several different subjects, it must either be divided up, so you can develop each point separately and effectively in its own paragraph, or be opened by a topic sentence that makes it clear that you want to mention briefly a variety of lesser points.
Length of a paragraph: Following the topic sentence, remaining sentences in each paragraph provide more detail or evidence about the main topic. For an explanation of effective usage of quotes, see our web page, "Paraphrases and Quotes." A paragraph should develop the subject or point it is making; hence, it normally contains at least three sentences in addition to the topic sentence and may have a concluding sentence as well. (Here history writing differs from journalistic style, which often uses shorter paragraphs.)
Transitions: You should pay attention to making clear, smooth transitions between paragraphs. Between sections you will need a transition or linking statement, indicating that you are moving on to a new topic. Each paragraph within a section should also be clearly related to the one before and the one after, creating an even, logical flow. If the link is not readily apparent, you should include a sentence which describes the transition.
What is a Conclusion?
Your conclusion should restate/recap your thesis and major points, showing how you have proven your position. You may also want to draw the reader’s attention to possible implications of what you have discussed and your conclusions. Think of this as an answer to the question "So what?" In doing so, however, be careful to stay within the field of history covered by the course. Do not make vague statements about learning from our mistakes or the fundamental good or evil of humanity, as such reflections are best left to the reader.
How to Check the Organization of your Paper
If you are still not sure about your organization once you have written a draft, we suggest the following two ways to check your organization.
The 'can I make an outline?' method
If you have a good paper with tight organization, someone should be able to take the first sentence from every paragraph, list them, and create an outline of your argument as a result. Try it! If the list jumps around in subject matter or chronology, you know that you need to rethink the way you have ordered your paper. If you end up with an outline that makes no sense, you know that your organization needs serious attention.
The 'cut up your paper' method
One way to check your organization is to cut up a draft of your paper. Trim away the margin space, any extra paper at the top or bottom of your pages, and separate all your paragraphs. Hand the pieces to a friend and ask him or her to reassemble your paper in the order that makes the best sense to him or her.
When he or she is done, look over the order your friend created for your paper. If it matches your original draft, you know that your organization makes sense and that the transitions between paragraphs were smooth. If he or she reorganized your paper, talk to your friend about why the new order made sense to him or her, and work out ways to make each paragraph connect gracefully to the next. If he or she could not put your paper in any order whatsoever, you know you really need to work on the first and last sentences of each paragraph, as well as the strength of your argument. |
First, let's talk about the definition of a pedal triangle. A pedal triangle is created by the intersections of perpendicular lines from a point in the plane to the sides of a given triangle. Here is an example of one...
The pedal triangle in the above picture is orange. The given triangle is blue and the given point is P.
I began exploring pedal triangles by making a script in Geometer's Sketchpad that created pedal triangles. I used this script to see what happens when we place P in different places. (To try the script for yourself in a GSP file, click here.)
First, I looked to see what happens when P is on one of the sides of triangle ABC. Using GSP, I made the following conjecture: the angle in the pedal triangle at P is equal to the sum of the two angles in the given triangle that are on the same side as P. Look at the picture below for clarification.
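Before the proof, here is a quick numerical check of the conjecture. This is a minimal sketch in C++ rather than GSP; the coordinates of A, B, C and the position of P on side BC are arbitrary choices, and the angle at P in the pedal triangle should come out equal to angle B plus angle C:

```cpp
#include <cmath>
#include <iostream>

struct Pt { double x, y; };

// Foot of the perpendicular dropped from p onto the line through a and b.
Pt foot(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / (dx * dx + dy * dy);
    return {a.x + t * dx, a.y + t * dy};
}

// Angle at vertex v formed by rays toward p and q, in degrees.
double angleAt(Pt v, Pt p, Pt q) {
    const double PI = std::acos(-1.0);
    double a1 = std::atan2(p.y - v.y, p.x - v.x);
    double a2 = std::atan2(q.y - v.y, q.x - v.x);
    double d = std::fabs(a1 - a2);
    if (d > PI) d = 2 * PI - d;
    return d * 180.0 / PI;
}

int main() {
    Pt A{1, 3}, B{0, 0}, C{5, 0};   // an arbitrary triangle
    Pt P{2, 0};                     // a point chosen on side BC

    // Feet of the perpendiculars from P to the other two sides.
    Pt E = foot(P, A, C);
    Pt F = foot(P, A, B);

    // The conjecture says these two numbers should be equal.
    std::cout << "angle at P in pedal triangle: " << angleAt(P, E, F) << '\n';
    std::cout << "angle B + angle C:            "
              << angleAt(B, A, C) + angleAt(C, A, B) << '\n';
}
```

Running this with the sample coordinates prints about 108.43 degrees for both quantities, as the conjecture predicts.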
Here is a proof of my conjecture:
When P is at one of the vertices of the original triangle, it already lies on the two sides that meet there (say BC and AB), so the feet of the perpendiculars to those sides are P itself. The only side it makes sense to drop a perpendicular to is the opposite side, AC. Thus, when P is at a vertex, the pedal triangle degenerates into a straight segment from P to E, the foot of the perpendicular on AC.
When P is the orthocenter, the pedal triangle is the same as the orthic triangle. This is because the orthic triangle is formed by the feet of the altitudes of the original triangle, and these are exactly the same points that make the pedal triangle when P is the orthocenter. Here is a picture of it.
Next I looked at what happens when P is the circumcenter. After doing some experimenting, I discovered that the pedal triangle in this case is the same as the medial triangle. This is because the circumcenter is created by the perpendicular bisectors of each side, which means the foot of the perpendicular from P to each side is that side's midpoint. The medial triangle is, by definition, the triangle created by the midpoints of the sides of the original triangle.
When P is the incenter: the incenter is created by the intersection of the three angle bisectors, which means it is equidistant from the sides of the original triangle. Because of this special characteristic, we can create an incircle that is tangent to the triangle's sides. The pedal triangle, in this case, could also be constructed from the points where the incircle touches the sides of the triangle: the distance from the incenter to each side is the same, and the way we measure distance is along a perpendicular! This means that P is the circumcenter of our pedal triangle. Pretty cool, huh? Here is a picture of the pedal triangle and the incircle.
The centroid is a tough one. It is constructed by the intersection of the medians of the triangle. The centroid is not equidistant from the vertices or sides of the original triangle. It also is not constructed using perpendiculars, like the orthocenter. The centroid's claim to fame is that it splits each median into two parts, one 1/3 and the other 2/3 of the median's length. Could this be influencing the pedal triangle? I couldn't seem to find a special property when P was the centroid. Maybe you'll be able to figure it out. Click here to open a GSP file with this problem already constructed in it.
I found these explorations very intriguing. My knowledge of the triangle centers concepts and my understanding of them were tested and strengthened through this process. Not only do I understand what a pedal triangle is, I also feel much more confident in my understanding of the triangle centers. I recommend this activity to geometry teachers. |
1. The Dead Sea Scrolls were discovered in eleven caves along the northwest shore of the Dead Sea between the years 1947 and 1956. The area is 13 miles east of Jerusalem and is 1300 feet below sea level. The mostly fragmented texts are numbered according to the cave that they came out of. They have been called the greatest manuscript discovery of modern times. See a Dead Sea Scroll Jar.
2. Only Caves 1 and 11 have produced relatively intact manuscripts. Discovered in 1952, Cave 4 produced the largest find. About 15,000 fragments from more than 500 manuscripts were found.
3. In all, scholars have identified the remains of about 825 to 870 separate scrolls.
4. The Scrolls can be divided into two categories—biblical and non-biblical. Fragments of every book of the Hebrew canon (Old Testament) have been discovered except for the book of Esther.
5. There are now identified among the scrolls 19 copies of the Book of Isaiah, 25 copies of Deuteronomy and 30 copies of the Psalms.
6. Prophecies by Ezekiel, Jeremiah and Daniel not found in the Bible are written in the Scrolls.
7. The Isaiah Scroll, found relatively intact, is 1000 years older than any previously known copy of Isaiah. In fact, the scrolls are the oldest group of Old Testament manuscripts ever found.
8. In the Scrolls are found never before seen psalms attributed to King David and Joshua.
10. The Scrolls are, for the most part, written in Hebrew, but there are many written in Aramaic. Aramaic was the common language of the Jews of Palestine for the last two centuries B.C. and the first two centuries A.D. The discovery of the Scrolls has greatly enhanced our knowledge of these two languages. In addition, there are a few texts written in Greek.
11. The Scrolls appear to be the library of a Jewish sect. The library was hidden away in caves around the outbreak of the First Jewish Revolt (A.D. 66-70) as the Roman army advanced against the rebel Jews.
12. Near the caves are the ancient ruins of Qumran. They were excavated in the early 1950s and appear to be connected with the scrolls.
13. The Dead Sea Scrolls were most likely written by the Essenes during the period from about 200 B.C. to 68 C.E./A.D. The Essenes are mentioned by Josephus and in a few other sources, but not in the New Testament. The Essenes were a strict Torah observant, Messianic, apocalyptic, baptist, wilderness, new covenant Jewish sect. They were led by a priest they called the "Teacher of Righteousness," who was opposed and possibly killed by the establishment priesthood in Jerusalem.
14. The enemies of the Qumran community were called the "Sons of Darkness"; they called themselves the "Sons of Light," "the poor," and members of "the Way." They thought of themselves as "the holy ones," who lived in "the house of holiness," because "the Holy Spirit" dwelt with them.
15. The last words of Joseph, Judah, Levi, Naphtali, and Amram (the father of Moses) are written down in the Scrolls.
17. The Temple Scroll, found in Cave 11, is the longest scroll. Its present total length is 26.7 feet (8.148 meters). The overall length of the scroll must have been over 28 feet (8.75m).
18. The scrolls contain previously unknown stories about biblical figures such as Enoch, Abraham, and Noah. The story of Abraham includes an explanation why God asked Abraham to sacrifice his only son Isaac.
19. The scrolls are most commonly made of animal skins, but also papyrus and one of copper. They are written with a carbon-based ink, from right to left, using no punctuation except for an occasional paragraph indentation. In fact, in some cases, there are not even spaces between the words.
20. The Scrolls have revolutionized textual criticism of the Old Testament. Interestingly, now with manuscripts predating the medieval period, we find these texts in substantial agreement with the Masoretic text as well as widely variant forms.
22. Although the Qumran community existed during the time of the ministry of Jesus, none of the Scrolls refer to Him, nor do they mention any of His follower's described in the New Testament.
23. The major intact texts, from Caves 1 & 11, were published by the late fifties and are now housed in the Shrine of the Book museum in Jerusalem.
24. Since the late fifties, about 40% of the Scrolls, mostly fragments from Cave 4, remained unpublished and inaccessible. It wasn't until 1991, 44 years after the discovery of the first Scroll and after mounting pressure for publication, that general access to photographs of the Scrolls was granted. In November of 1991 the photos were published by the Biblical Archaeological Society in a nonofficial edition; a computer reconstruction, based on a concordance, was announced; and the Huntington Library pledged to open its microfilm files of all the scroll photographs.
25. The Dead Sea Scrolls enhance our knowledge of both Judaism and Christianity. They represent a non-rabbinic form of Judaism and provide a wealth of comparative material for New Testament scholars, including many important parallels to the Jesus movement. They show Christianity to be rooted in Judaism and have been called the evolutionary link between the two.
The rugged terrain of the Qumran area.
Recommended For Further Study:
The Dead Sea Scrolls: A New Translation
The Dead Sea Scrolls Bible
Understanding the Dead Sea Scrolls
|
The E Nebula, or Barnard 142, an area of cosmic dust that blocks the light from the stars behind it, is located in the constellation of the Eagle (Aquila).
About 2,000 light years from here, this nebula can be made out with binoculars on clear nights.
Unfortunately, the best time to see it is in summer; now in winter, the constellation sits practically on the horizon.
As I said, this is what is called a dark nebula, where the dust clouds are so dense that they absorb all light. Nebulae of this type are typically composed of what is called molecular gas at low temperatures.
What is molecular gas? Simply molecules that absorb very long wavelengths (light, that is), such as formaldehyde, cyclopropylenes, heavy compounds of carbon and oxygen, and ammonia. Heavy molecules, in short.
Obviously, at other wavelengths, such as the infrared, you can see "through" them.
The important thing is that, thanks to their density and the materials that compose them, they are fundamental to the formation of stars and planets. That is, they are nurseries of new stars.
The nature of the universe (which is nature all the same) is to recycle. |
Armenian numerals form a historic numeral system that uses the majuscule (uppercase) letters of the Armenian alphabet. There was no notation for zero in the old system, and the numeric values for individual letters were added together. The principles behind this system are the same as for the Ancient Greek numerals and Hebrew numerals. In modern Armenia, the familiar Arabic numerals are used. Armenian numerals are used more or less like the Roman numerals in modern English, e.g. Գարեգին Բ. means Garegin II and Գ. գլուխ means Chapter III (as a headline).
The final two letters of the Armenian alphabet, "o" (Օ) and "fe" (Ֆ) were added to the Armenian alphabet only after Arabic numerals were already in use, to facilitate transliteration of other languages. Thus, they do not have a numerical value assigned to them.
Numbers in the Armenian numeral system are obtained by simple addition. Armenian numerals are written left-to-right (as in the Armenian language). Although the order of the numerals is irrelevant since only addition is performed, the convention is to write them in decreasing order of value.
- ՌՋՀԵ = 1975 = 1000 + 900 + 70 + 5
- ՍՄԻԲ = 2222 = 2000 + 200 + 20 + 2
- ՍԴ = 2004 = 2000 + 4
- ՃԻ = 120 = 100 + 20
- Ծ = 50
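Because the system is purely additive, converting a numeral to its value is just a sum over letters. Below is a minimal, hypothetical C++ sketch of the idea (not from the original article; it includes only the letter values that appear in the examples on this page, and it omits the overline rule described next):

```cpp
#include <iostream>
#include <map>
#include <string>

// Letter values taken from the examples above; the full table is omitted.
int armenianValue(const std::u32string& numeral) {
    static const std::map<char32_t, int> values = {
        {U'Ա', 1},  {U'Բ', 2},  {U'Գ', 3},   {U'Դ', 4},   {U'Ե', 5},
        {U'Ի', 20}, {U'Խ', 40}, {U'Ծ', 50},  {U'Հ', 70},  {U'Ճ', 100},
        {U'Մ', 200}, {U'Ջ', 900}, {U'Ռ', 1000}, {U'Ս', 2000},
    };
    int total = 0;
    for (char32_t c : numeral)
        total += values.at(c);   // pure addition; order does not matter
    return total;
}

int main() {
    std::cout << armenianValue(U"ՌՋՀԵ") << '\n';   // prints 1975
    std::cout << armenianValue(U"ՍՄԻԲ") << '\n';   // prints 2222
}
```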
To write numbers greater than 9999, it is necessary to have numerals with values greater than 9000. This is done by drawing a line over them, indicating their value is to be multiplied by 10000:
- Ա̅ = 10000
- Ջ̅ = 9000000
- Ռ̅Ճ̅Խ̅Գ̅ՌՄԾԵ = 11431255 |
Google is a great educational tool that can be used in so many different ways to promote collaboration among students and teachers. My district has a Google Apps for Education account, so every teacher and student has their own Google account through the school district. The ideas listed below are ways that Google can be used in a math classroom.
1) Google Presentation – Students can use this tool to create their own presentations or collaborate with classmates. Whether it is used for displaying data and/or findings from an activity or as a way to create and present word problems on any topic, Google Presentations is a great way to get your students collaborating in and out of the classroom. It can be used to deepen understanding of vocabulary by creating online flash cards, and teachers can use Google Presentations to display their lessons and make them available to students outside of class or to absent students. You can even have students create their own lessons to share with the class.
2) Google Docs – Students can use this online tool for note-taking. They can collaborate and add information to classmates' notes. It also gives the teacher the opportunity to check their notes and clear up any misconceptions outside of the class period. You can also use it for math journals. Teachers can give students a problem to solve, and students can turn it in to the teacher electronically, which keeps students from losing papers and ensures they have earlier work to look back at. Students can also create quizzes for each other.
3) Google Forms – Teachers can use Google forms in many ways. I have used it to collect answers to a test or quiz. It can also be used for surveys and data collection. You can have students log in their online practice time and scores from websites like Khan Academy or MangaHigh. I like that I can embed a form directly onto my school website for easy access for my students.
4) Google Spreadsheets – Students can collect and analyze data collaboratively with Google Spreadsheets. One example is a Coin Toss activity where students collect data, find probability and create a graph. You can also use Google Spreadsheets to create an assignment tracker, project planner or a math quiz.
5) Google Earth – Check out www.realworldmath.org. This site has great resources for using Google Earth in a math classroom. If you’ve never used Google Earth, Real World has a large collection of tutorial videos that you can view on the website or inside of Google Earth.
Please comment and add new ways you or someone you know uses Google in their math classroom. |
Questions and Answers on Potentially Large Methane Releases From Arctic, and Climate Change
Sub-sea permafrost is losing its ability to be an impermeable cap
March 4, 2010
For more information, see the press release: Methane Releases From Arctic Shelf May Be Much Larger and Faster Than Anticipated.
What is methane?
Methane is a naturally-occurring compound that is created when organic material, such as the remains of plants and animals, rot or otherwise break down. Bacteria and other microbes play a large role in processes that produce methane. These methane-producing processes may, for example, occur in landfills as their contents age. And some animals release methane as their bodies digest their food.
Vast stores of methane are trapped in the permafrost of the Arctic--large swaths of land where the ground stays frozen. Because of climate change, some Arctic permafrost is showing signs of thawing. This thawed Arctic permafrost may release methane into the atmosphere.
Why does methane cause so much concern?
Like carbon dioxide, methane is a greenhouse gas. The presence of greenhouse gases in the atmosphere inhibits the Earth's heat from being released into space. Therefore, increased levels of greenhouse gases in the atmosphere may cause the Earth's temperature to increase over time.
Methane may be "stored" underground or under the seafloor as methane gas or methane hydrate; methane hydrate is a crystalline solid combining methane and water, which is stable at low temperatures and high pressure--conditions commonly found in marine sediments. When methane stores are released relatively quickly into the atmosphere, levels of atmospheric methane may rapidly spike.
As a greenhouse gas, methane is 30 times more potent (gram for gram) than carbon dioxide. This means that adding relatively modest amounts of methane to the atmosphere may yield relatively large impacts on climate.
The New Science Study
Who conducted the study?
The study was conducted by an international team of researchers led by Natalia Shakhova and Igor Semiletov--both from the University of Alaska Fairbanks. The study was partially funded by the National Science Foundation.
Where is the Shakhova and Semiletov study published?
The study appears in the March 5, 2010 issue of Science.
How much methane does it take to increase warming?
There's no clear answer to that question. However, the Earth's geologic record indicates that atmospheric concentrations of methane have varied from about 0.3 to 0.4 parts per million during cold periods to about 0.6 to 0.7 parts per million during warm periods.
The Shakhova and Semiletov study indicates that methane levels in the Arctic now average about 1.85 parts per million, the highest level in 400,000 years.
How much methane is currently being released from the East Siberian Arctic Shelf?
The Shakhova and Semiletov study suggests that 7 teragrams of methane are currently being released annually from the East Siberian Arctic Shelf. That's about equal to the amount of methane that is annually released from the rest of the world's oceans combined, and much more than was previously believed to be released from that part of the Arctic. What's more, the study raises the possibility that methane releases from the East Siberian Arctic Shelf could rise dramatically as its permafrost cover is thawed by warming temperatures.
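As a rough sense of scale, using the gram-for-gram potency figure quoted earlier (a simplification that ignores the two gases' different atmospheric lifetimes): 7 Tg of methane per year × 30 ≈ 210 Tg of CO2-equivalent per year.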
What are the mechanisms that release methane in the East Siberian Arctic?
Methane may be released in two ways:
- Organic material is contained in soil that is frozen into permafrost. This permafrost thaws as the Earth warms. When the organic matter in this thawing permafrost begins to decompose under anaerobic conditions, it gradually releases methane.
- A subsea layer of permafrost covers a layer of seabed methane--stored as methane gas or methane hydrates. The subsea permafrost layer has long served as a barrier to the methane, sealing it in the seabed. But warming waters have begun to melt this subsea permafrost. The result: destabilization and perforations in the permafrost that create pathways for releases of underlying methane. Such releases may be larger and more abrupt than those that result from decomposition.
Why wasn't this phenomenon predicted before? Why is it a surprise?
The East Siberian Arctic shelf is a relatively new frontier in methane studies. Earlier studies in Siberia focused on methane seeping from thawing terrestrial permafrost.
Nevertheless, the existence of methane releases from the East Siberian Arctic Shelf itself isn't a surprise; in fact, the Shakhova and Semiletov study was conducted precisely because this phenomenon was, in some ways, predicted. What is a surprise about the study results is the magnitude of the methane releases and the fact that they are already happening on such a scale.
Does methane released in the Arctic only warm the Arctic?
No. Once subsurface methane is released and enters the atmosphere, it may circulate all over the Earth. Also, because the Arctic has a special influence on global climate, increasing Arctic temperatures contribute to global climate change and global rises in sea level.
Dana Cruikshank, National Science Foundation, (703) 292-7738, [email protected]
Marmian Grimes, University of Alaska, (907) 474-7902, [email protected]
Henrietta Edmonds, National Science Foundation, (703) 292-8029, [email protected]
|
The piñons died during what Breshears dubbed a "global-change-type-drought." It's impossible to blame any particular weather event on climate change. Still, the drought was a glimpse of the future, when droughts are predicted to be hotter and drier. Breshears and his colleagues found that it took 15 months in extremely dry soils to kill the piñons around McDowell's office. The heat, they believed, had increased the overall death toll by siphoning more water from soil and plants, though they couldn't yet prove it.
Dramatic changes in Southwestern forests had been expected – eventually. Desert edges are already marginal tree habitat, and were predicted to become especially vulnerable to the future's hotter, more intense droughts. Still, the amount of dead wood around Los Alamos was startling. Piñons didn't die only at the ecological boundary between woodland and grassland, the dry end of their range where Breshears and others believed climate change impacts would first become visible. Instead, piñons died almost everywhere they grew.
No community can comfortably afford to lose its forests. Besides being nice places to hike and ski, forests provide food and shelter for birds and wildlife. Leaves scrub the air of pollutants humans saturate it with. And forests shelter winter snow, the source of most Westerners' water supply, filtering it to rivers and streams in spring.
More important from a global perspective is the fact that forests ingest an estimated quarter to a third of the carbon dioxide released by fossil fuels, effectively keeping the earth's burner turned down. When trees die, they not only stop absorbing CO2, but they also decompose, gradually releasing the carbon stockpiled in their wood. If enough forests collapse, the flame on the planetary heating element could turn from "low" to "high." Instead of slowing global warming, forests could start to make it worse.
Computer models either don't account for future tree death caused by climate change, or they do so simplistically. These shortcomings worry scientists, and with good reason: The most troubling thing it could mean is that the dramatic forecasts the models currently produce – the ones predicting not only a warmer climate, but also the fundamental transformation of life on earth – are understated.
Before scientists can more accurately predict our future climate, they have to complete a simpler task – at least, one that sounds simpler. They need to understand, in mechanistic detail, how trees meet their end.
After Nate McDowell spent a few years studying the inner lives of junipers, his attitude toward the trees softened. What junipers lack in majestic height and open, shady understories, they make up for in pluck and perseverance. McDowell, a spry, 41-year-old former endurance runner, began to appreciate these qualities. "They're just so tough," he says. "You have to respect someone who's tough."
Juniper doesn't cower in the face of drought. Even when extremely short on water, it doesn't close its stomata – the tiny pores on its needles that regulate the tree's basic bodily functions. Stomata allow trees to consume carbon dioxide and photosynthesize. They also let water escape, creating the tension that pulls water upward through the tree's circulatory system. If there's too little water in the soil, a tree's pipes can fill with air and break.
To prevent this, many trees close their stomata during droughts. Juniper, with its deep roots and sturdy build, doesn't. When extremely stressed, it begins severing the water supply to entire limbs – reducing the amount of water the whole tree needs to survive. This is why smooth, naked branches – the desert's version of driftwood – often protrude from living junipers otherwise covered in stringy bark and sharp needles.
Piñon is more cautious, slamming its stomata shut during drought. Perusing data Breshears and another colleague collected during the drought, McDowell had an epiphany: For a year, the piñons that died endured a level of water stress that should have kept their stomata shut. Photosynthesis is to trees what cooking is to people. It's how they eat. In trying to protect themselves from dying of thirst, he thought, maybe piñons had starved to death instead. |
Town/Region: Reserve Ciénaga del Fuerte
Continent: Central and Latin America
Date: 9 Feb 2018
Wetlands are very important ecosystems that host a large number of species and offer many ecosystem services. Tropical flooded forests in particular collect and retain water, helping to mitigate floods, recharging the water table and sustaining stream flow in times of drought; they provide a good number of timber and non-timber resources; they receive, transform, and store large amounts of carbon in the soil, which helps reduce CO2 concentrations in the atmosphere; and they offer recreational opportunities for human populations.
The loss of tropical flooded forests to changes in land use also alters their hydrology, making them more prone to degradation from extreme events such as storms and hurricanes and costing them the environmental services they provide. Conservation and restoration actions to recover those services are therefore of great importance.
In a tropical flooded forest on the Gulf of Mexico called "Ciénega del Fuerte", restoration work has been carried out on degraded areas since 2010, promoted by various governmental and academic institutions, but the actions applied have never been evaluated to determine whether the health of the ecosystem is recovering.
This project focuses on assessing the state of the restored sites, evaluating various techniques for overcoming the conditions that limit restoration, and carrying out outreach activities on the environmental benefits of floodplains for human communities near the site as well as for occasional visitors.
Evaluating the different restoration processes of a flooded forest will provide the information needed to determine the state of the restoration in its different phases, whether the objectives of recovering its structure and function can be achieved with the current actions, and whether new strategies and activities should be proposed. The project will also include the participation of local groups in ecotourism activities, giving them an economic income to meet their basic needs and feed their families while using the ecosystem sustainably. This may encourage protection and conservation of the ecosystem, since people will come to value an environment that benefits them and want to keep it for their children.
For further information contact: |
Marie Curie Unit Study
Hi everyone, I’ve been working hard behind the scenes over here to create a new set of unit studies! This series is all about famous Scientists & Inventors! As usual, each study includes fun hands-on activities to help students remember what they’ve learned, and also provides them with a fun reference tool to review and recall each person they’ve learned about.
Each of my scientist & inventor unit studies contains educational lessons, activities and a fun lap book that your students will work on as they progress through the study. The lessons also include book reports, vocabulary, character traits of these important figures, and critical thinking skills. I currently have 7 studies in this series. These studies are geared towards grades Kindergarten-4th, but can probably be adapted for older students as well.
Today I’m pleased to present the Marie Curie Unit Study.
Marie Sklodowska-Curie was born November 7, 1867 in Warsaw, Kingdom of Poland, then part of the Russian Empire. She was a well-known Polish physicist and chemist who dedicated her life to research on radioactivity. She was the first woman to win a Nobel Prize, the first person to win Nobel Prizes in two fields, and the only person to win in multiple sciences. Marie discovered radium and polonium, the latter named after her native country of Poland. Marie devoted her life to studying and researching uses for radium. Sadly, her close work with radium led to her early death in 1934 from radiation poisoning.
This study uses the Marie Curie book from Mike Venezia's scientists and inventors series, but students are welcome to use any other researched information they can find as well. I would highly encourage older students to do some independent research on their scientist or inventor prior to completing their final report.
Click here to see a video of the Marie Curie Lapbook:
In this unit students will learn all about Marie Curie, her childhood history, lifetime achievements, characteristics, as well as some of her greatest discoveries and contributions to science. Below is a sample of the Lapbook that students will create as they learn about Marie Curie. Activities for this unit include:
- All About Marie Curie
- The Atom
- Marie’s Education
- Marie’s Diploma
- Discovered Elements
- Marie’s Characteristics
- Radium Uses
- Marie’s Unusual Death
- A Final Report
The study also includes a final report on Marie Curie for students to complete. There are several different formats of the report to accommodate varying student grade levels that might be completing the unit.
Can’t wait for the others to release?
You can get the whole set of Scientist & Inventor Units at a discount! Click the image below:
Ready to Win?
I’m offering a FREE copy of the Marie Curie Unit Study to one of my readers!
Simply enter using the rafflecopter below! |
We all know that antibiotic resistance increases through improper use of antibiotics. With recent discoveries of bacteria containing antibiotic resistance genes in isolated tribes, it would seem that bacteria have always been resistant — so why is it such a problem now? Bacteria, like all organisms, evolve. Through improper use of antibiotics we have changed the course of this evolution away from natural selection towards selecting bacteria which contain the genes for resistance. Humanity has selected out the strongest, most resistant bacteria, and that is what we are trying to fight off today.
Over the last 60 or so years, antibiotic production has become faster, easier and cheaper, resulting in millions of tons of antibiotics in the environment. This has provided a strong evolutionary push towards resistant strains. Additionally, bacteria are able to share genes for antibiotic resistance between themselves using ‘plasmid exchange’. Gene sharing appears to be particularly common in S. aureus species.
There are many other ways bacteria become resistant to antibiotics, which makes this problem more complicated. The only certain way to reduce the evolutionary pressure for bacteria to carry these genes is to reduce the amount of antibiotics bacteria are exposed to.
For more information see this informative review in Microbiology and Molecular Biology Reviews.
For a detailed explanation for plasmid exchange click here
Picture shows plasmid exchange under an electron microscope. |
Today's scientists are scrambling to develop technology to cope with climate change; carbon capture technology, renewable energy and drought-resilient crops are just a few examples.
But researchers recently learned that ours isn't the first civilization to innovate as the Earth's climate shifts. A new study suggests "pulses" in technological innovation that took place between 280,000 and 30,000 years ago in present-day South Africa could have been driven by dramatic shifts in the region's weather conditions.
During this period, known as the Middle Stone Age, humans developed the first symbolic art, like engraved pieces of red ochre and ostrich eggshell containers. Artifacts such as pierced shells, likely used for necklaces, and relatively complex stone and bone tools have also been dated within this time frame.
"Looking at sources of stone used to make tools, it is clear that some have been transported for hundreds of kilometers, suggesting the existence of long-distance trading networks," Chris Stringer, merit researcher at the London Natural History Museum's Department of Earth Sciences and one of the paper's authors, said in an email.
However, these major technological and behavioral developments happened in irregular bursts, the reason for which has been a scientific enigma until now.
"Some of these cultural groupings developed very rapidly, but they also declined very rapidly," said Ian Hall of Cardiff University's School of Earth and Ocean Sciences in the United Kingdom, who also contributed to the report, published this month in the scientific journal Nature Communications.
"There had been a lot of debate about exactly what the mechanism may be for that, and climate was certainly discussed. But there was never any good evidence to suggest that there was a climate signature that matched the development," Hall said.
The researchers analyzed a marine sediment core collected off the coast of the Eastern Cape of South Africa, close to where the Great Kei River meets the ocean. About 100,000 years of river discharge is reflected in the sediments of this core, providing a detailed history of the region's hydrological conditions for the first time.
Tying warm, wet periods to richer archaeological finds
The Great Kei and other nearby rivers discharged sediment rich in iron oxides. By examining the ratio of iron to potassium in the core, the researchers learned that relatively abrupt changes in the precipitation levels took place during the Middle Stone Age.
These swift increases in precipitation and humidity, as reflected by higher levels of iron in the sediment core, were likely caused by global climate fluctuations.
During what are called Heinrich events -- natural but still largely unexplained fluctuations in the global climate -- the Atlantic Ocean's circulation slows substantially. This "bipolar seesaw behavior of the Atlantic Ocean," as the paper calls it, led to a southward shift in the Intertropical Convergence Zone, which carried the monsoon rains away from sub-Saharan Africa and into the South African region.
As a result, large swaths of sub-Saharan Africa would fall into drought, the study says, while southeast Africa became warmer, wetter and more humid, transforming into prime real estate for early human groups. These events took place within millennia -- fairly quickly, on a climatic time scale -- and resulted in changes of up to 10 degrees Celsius in mean annual temperatures.
When comparing the history of hydrological changes in the region with artifacts from the Middle Stone Age, the researchers discovered a "striking correspondence between the archaeological record of South Africa and the timing of the abrupt climate change" as seen in the marine core, the study states.
"What we found is the warmer, wet conditions in southeast Africa matched almost precisely the timing of these cooling events in the north," Hall said. "All of the archaeological evidence fell within these wet and warm periods."
Although the reason why these conditions would spur early technological developments will require further study, the researchers theorize that higher, more concentrated human populations are more likely to develop new ideas. Recent findings suggest that when human populations fall under a certain population density level, cultural knowledge disappears over time, Stringer said.
"The opposite will occur if populations are relatively dense and interacting, as ideas can be built on, with more chance of being conserved," he said.
"The link between climate, population growth and innovation is also important for us today as we have been fortunate to have had a recent and long (about 10,000 years) period of relative warmth and climatic stability," Stringer added. "As a species we have thrived in terms of numbers and in terms of innovations, but rapid and adverse climate change could certainly threaten our success."
Reprinted from Climatewire with permission from Environment & Energy Publishing, LLC. www.eenews.net, 202-628-6500 |
Let’s remix a famous Christmas poem, give it a Thanksgiving theme, and teach our students advanced poetry concepts at the same time.
Students learn about rhyme scheme, but meter is a great way to extend their understanding of poetry. Meter defines a poem’s rhythm through stressed and unstressed syllables.
Introduce stressed/unstressed syllables using students’ names:
- JEN-ni-fer, not jen-NI-fer
- me-LIS-sa, not MEL-is-sa
- DEN-nis, not den-NIS
They need to be comfortable identifying stresses before moving on.
Shakespeare and Seuss
Shakespeare is famous for alternating stressed and unstressed syllables. If exaggerated, it sounds like “da dum” repeated over and over.
A pair of syllables with the “da dum” pattern is called an “iamb.”
Here’s Sonnet 18:
Dr. Seuss also dabbled in “iambs”:
Dr. Seuss is more famous for three syllables in a “da da dum” pattern.
The “da da dum” pattern is called an “anapest.” Since Seuss uses four anapests per line, and tetra means four, this meter is called “anapest tetrameter.” It gives poetry a fun, galloping feel.
A Visit From St. Nicholas
A Visit From St. Nicholas is the most famous poem built on anapest tetrameter.
Note: Detail-oriented students will catch that not all lines perfectly obey this meter.
Building The Remix
Here’s where rookie Ian would have gone wrong. Rather than scaffolding this task, I would have tossed the kids in. Some would have succeeded, but many would have struggled.
Instead, let’s build up to the large task by scaffolding smaller pieces and modeling the process.
1. Brainstorm Vocab
Let’s begin by brainstorming Thanksgiving-related words. Multisyllabic words are best since they have both stressed and unstressed syllables.
autumn | corn on the cob | horn of plenty
2. Break Down Stresses
Then find the stresses:
AU-tumn | CORN on the COB | HORN of PLEN-ty
We also need some single-stressed-syllable words like: pie, fork, and plate.
You could even group these into categories based on stresses so kids have ammunition when they attack their own poem: first syllable stressed, second syllable stressed, etc.
3. Build An Anapest
Now model building an anapest for your students. We’ll start with “turkey.” It might help kids to have a table to fill out with 12 boxes to help structure their lines:
- The stress is on the first syllable: “TUR-key.”
- We need two unstressed syllables to go first: “We’ll have TUR-key.”
- Finish off that second anapest: “We’ll have TUR-key and PIE.”
Now we have two anapests for our first line. Let’s fill in our table:
4. Finish Line
The original poem uses tetrameter, so we need a total of four anapests per line. I love that “cranberry sauce” already has an anapest built in, so let’s use that:
We’ve demonstrated how to build one line in anapest tetrameter that mirrors the famous poem A Visit From St. Nicholas. Now let’s rhyme it!
5. Rhyme The Line
Since the original poem’s rhyme scheme is an AABB pattern, we just need to rhyme “sauce.”
Show students that before writing this second line, it’s a good idea to plan the rhyme first. I’m going to end with “floss” and build from there.
We’ll have turkey and pie and some cranberry sauce.
We will eat so much food that I’d better bring floss!
Building The Final Product
Novices think creation happens linearly, but anyone who’s filmed a video, written a song, or penned an essay knows that creation is organic and often chaotic. My first lines might end up in the middle or the very end of the poem.
To emphasize this, have students write each line on a notecard, then move the cards around, finding the best order as they write.
How Long? The original poem is dozens of lines long, which will burn your kids out. Try remixing just the first six lines.
Groups? I’d limit this activity to friendly pairs or singles. Highly creative tasks don’t work well when we force kids to collaborate.
- Of course some type of classroom reading is a must.
- Consider traveling the school and reading to younger students.
- Add illustrations and build small books.
- Use Garageband to record readings, adding music and sound effects as well.
- For kids intrigued by “iambs” and “anapests,” let them browse this page for other syllable patterns.
Creating Dialogue Boxes
In this lesson you are going to learn about how to use Lisp to create Macintosh or Windows interface features like windows and dialogue boxes.
The procedures to do this are all provided in a LispWorks library called CAPI, and their names start with capi:.
Displaying a message: capi:display-message
To display a message use the capi:display-message procedure. This is followed by a format string and a list of arguments, just like format. For example:
(capi:display-message "The sum of ~a and ~a is ~a." 3 4 (+ 3 4))
Prompting for a string: capi:prompt-for-string
This asks the user to enter a string. For example:
(capi:prompt-for-string "Think of an animal:")
and will return the string you type in.
Prompting for a number: capi:prompt-for-number
In a similar way:
(capi:prompt-for-number "How many legs does it have:")
and return the number you type in.
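As a quick sketch using only the procedures introduced so far (the variable name legs is just for illustration), the returned number can be used directly in arithmetic:
(let ((legs (capi:prompt-for-number "How many legs does it have:")))
  ;; The prompt returns a number, so we can compute with it directly.
  (capi:display-message "With ~a legs, that's ~a pairs." legs (/ legs 2)))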
Asking yes or no: capi:prompt-for-confirmation
The procedure capi:prompt-for-confirmation asks a question, and lets the user answer yes or no:
(capi:prompt-for-confirmation "Are you hungry?")
If the user clicks Yes the procedure returns T (true). If the user clicks No it returns Nil (false).
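Because the result is plain T or nil, it can drive an if directly. For example, a small sketch using only the procedures shown here:
(if (capi:prompt-for-confirmation "Are you hungry?")
    (capi:display-message "Then let's find some food!")   ; user clicked Yes
    (capi:display-message "Fine, maybe later."))          ; user clicked No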
Giving the user a choice: capi:prompt-with-list and capi:prompt-for-items-from-list
Finally, the function capi:prompt-with-list takes a list and a message, and lets the user select one of the items. For example:
(capi:prompt-with-list '("red" "blue" "green" "pink") "What's your favourite colour?")
and return the value of the item you've selected.
The procedure capi:prompt-for-items-from-list is identical, except that it allows you to select any number of items, and it returns a list of the items you've selected.
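For example, this hypothetical call lets the user tick any number of colours; choosing red and green would return the list ("red" "green"):
(capi:prompt-for-items-from-list '("red" "blue" "green" "pink") "Which colours do you like?")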
A story-writing program
Finally, here's a story-writing program that puts all these procedures together:
(defun story ()
  (let ((name (capi:prompt-for-string "What is your name:"))
        (food (capi:prompt-for-string "What is your favourite food:"))
        (colour (capi:prompt-with-list '("red" "blue" "green" "pink")
                                       "What's your favourite colour?")))
    (capi:display-message
     "There once was a witch called ~a who liked ~a. One day ~a found some ~a ~a and ate so much that she died. The end."
     name food name colour food)))
To run the story program, evaluate:
(story)
The parentheses are empty because story takes no parameters.
1. Try improving the program to write a longer story, and use capi:prompt-for-confirmation and if statements to add branches in the story; for example:
(capi:prompt-for-confirmation "Should the witch die at the end?")
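One possible shape for this exercise (a sketch; the function name branching-story and the endings are invented for illustration):
(defun branching-story ()
  (let ((name (capi:prompt-for-string "What is your name:")))
    ;; Ask the reader which ending they want, then branch with if.
    (if (capi:prompt-for-confirmation "Should the witch die at the end?")
        (capi:display-message "The witch met ~a, ate too much, and died. The end." name)
        (capi:display-message "The witch met ~a and they lived happily ever after. The end." name))))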
N.1. Number and Operations (NCTM)
1.1. Understand numbers, ways of representing numbers, relationships among numbers, and number systems.
1.1.1. Work flexibly with fractions, decimals, and percents to solve problems. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Percentage)
1.1.2. Compare and order fractions, decimals, and percents efficiently and find their approximate locations on a number line.
1.1.3. Develop meaning for percents greater than 100 and less than 1.
1.1.4. Understand and use ratios and proportions to represent quantitative relationships. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Ratio)
1.1.5. Develop an understanding of large numbers and recognize and appropriately use exponential, scientific, and calculator notation. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Exponents)
1.1.6. Use factors, multiples, prime factorization, and relatively prime numbers to solve problems.
1.1.7. Develop meaning for integers and represent and compare quantities with them.
1.2. Understand meanings of operations and how they relate to one another.
1.2.1. Understand the meaning and effects of arithmetic operations with fractions, decimals, and integers.
1.2.2. Use the associative and commutative properties of addition and multiplication and the distributive property of multiplication over addition to simplify computations with integers, fractions, and decimals.
1.2.3. Understand and use the inverse relationships of addition and subtraction, multiplication and division, and squaring and finding square roots to simplify computations and solve problems.
1.3. Compute fluently and make reasonable estimates.
1.3.1. Select appropriate methods and tools for computing with fractions and decimals from among mental computation, estimation, calculators or computers, and paper and pencil, depending on the situation, and apply the selected methods. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Estimation)
1.3.2. Develop and analyze algorithms for computing with fractions, decimals, and integers and develop fluency in their use.
1.3.3. Develop and use strategies to estimate the results of rational-number computations and judge the reasonableness of the results. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Estimation)
1.3.4. Develop, analyze, and explain methods for solving problems involving proportions, such as scaling and finding equivalent ratios. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Ratio)
N.11. Grade 7 Curriculum Focal Points (NCTM)
11.1. Number and Operations and Algebra and Geometry: Developing an understanding of and applying proportionality, including similarity
11.1.1. Students extend their work with ratios to develop an understanding of proportionality that they apply to solve single and multi-step problems in numerous contexts. They use ratio and proportionality to solve a wide variety of percent problems, including problems involving discounts, interest, taxes, tips, and percent increase or decrease. They also solve problems about similar objects (including figures) by using scale factors that relate corresponding lengths of the objects or by using the fact that relationships of lengths within an object are preserved in similar objects. Students graph proportional relationships and identify the unit rate as the slope of the related line. They distinguish proportional relationships (y/x = k, or y = kx) from other relationships, including inverse proportionality (xy = k, or y = k/x). (Quiz, Flash Cards, Worksheet, Game & Study Guide: Functions; Ratio)
11.2. Measurement and Geometry and Algebra: Developing an understanding of and using formulas to determine surface areas and volumes of three-dimensional shapes
11.2.1. By decomposing two- and three-dimensional shapes into smaller, component shapes, students find surface areas and develop and justify formulas for the surface areas and volumes of prisms and cylinders. As students decompose prisms and cylinders by slicing them, they develop and understand formulas for their volumes (Volume = Area of base x Height). They apply these formulas in problem solving to determine volumes of prisms and cylinders. Students see that the formula for the area of a circle is plausible by decomposing a circle into a number of wedges and rearranging them into a shape that approximates a parallelogram. They select appropriate two- and three-dimensional shapes to model real-world situations and solve a variety of problems (including multi-step problems) involving surface areas, areas and circumferences of circles, and volumes of prisms and cylinders. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Area; Formulas; Volume)
11.3. Number and Operations and Algebra: Developing an understanding of operations on all rational numbers and solving linear equations
11.3.1. Students extend understandings of addition, subtraction, multiplication, and division, together with their properties, to all rational numbers, including negative integers. By applying properties of arithmetic and considering negative numbers in everyday contexts (e.g., situations of owing money or measuring elevations above and below sea level), students explain why the rules for adding, subtracting, multiplying, and dividing with negative numbers make sense. They use the arithmetic of rational numbers as they formulate and solve linear equations in one variable and use these equations to solve problems. Students make strategic choices of procedures to solve linear equations in one variable and implement them efficiently, understanding that when they use the properties of equality to express an equation in a new way, solutions that they obtain for the new equation also solve the original equation.
N.12. Connections to the Grade 7 Focal Points (NCTM)
12.1. Measurement and Geometry: Students connect their work on proportionality with their work on area and volume by investigating similar objects. They understand that if a scale factor describes how corresponding lengths in two similar objects are related, then the square of the scale factor describes how corresponding areas are related, and the cube of the scale factor describes how corresponding volumes are related. Students apply their work on proportionality to measurement in different contexts, including converting among different units of measurement to solve problems involving rates such as motion at a constant speed. They also apply proportionality when they work with the circumference, radius, and diameter of a circle; when they find the area of a sector of a circle; and when they make scale drawings. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Formulas)
12.2. Number and Operations: In grade 4, students used equivalent fractions to determine the decimal representations of fractions that they could represent with terminating decimals. Students now use division to express any fraction as a decimal, including fractions that they must represent with infinite decimals. They find this method useful when working with proportions, especially those involving percents. Students connect their work with dividing fractions to solving equations of the form ax = b, where a and b are fractions. Students continue to develop their understanding of multiplication and division and the structure of numbers by determining if a counting number greater than 1 is a prime, and if it is not, by factoring it into a product of primes. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Percentage; Ratio)
12.3. Data Analysis: Students use proportions to make estimates relating to a population on the basis of a sample. They apply percentages to make and interpret histograms and circle graphs. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Graphs)
12.4. Probability: Students understand that when all outcomes of an experiment are equally likely, the theoretical probability of an event is the fraction of outcomes in which the event occurs. Students use theoretical probability and proportions to make approximate predictions.
N.2. Algebra (NCTM)
2.1. Understand patterns, relations, and functions.
2.1.1. Represent, analyze, and generalize a variety of patterns with tables, graphs, words, and, when possible, symbolic rules. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Sequences)
2.2. Represent and analyze mathematical situations and structures using algebraic symbols.
2.2.1. Develop an initial conceptual understanding of different uses of variables.
2.2.2. Explore relationships between symbolic expressions and graphs of lines, paying particular attention to the meaning of intercept and slope.
2.2.3. Use symbolic algebra to represent situations and to solve problems, especially those that involve linear relationships.
2.2.4. Recognize and generate equivalent forms for simple algebraic expressions and solve linear equations
2.3. Use mathematical models to represent and understand quantitative relationships.
2.3.1. Model and solve contextualized problems using various representations, such as graphs, tables, and equations. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Formulas)
2.4. Analyze change in various contexts.
2.4.1. Use graphs to analyze the nature of changes in quantities in linear relationships.
N.3. Geometry (NCTM)
3.1. Analyze characteristics and properties of two- and three-dimensional geometric shapes and develop mathematical arguments about geometric relationships.
3.1.1. Precisely describe, classify, and understand relationships among types of two- and three-dimensional objects using their defining properties. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Perimeter)
3.1.2. Understand relationships among the angles, side lengths, perimeters, areas, and volumes of similar objects.
3.1.3. Create and critique inductive and deductive arguments concerning geometric ideas and relationships, such as congruence, similarity, and the Pythagorean relationship.
3.2. Specify locations and describe spatial relationships using coordinate geometry and other representational systems.
3.2.1. Use coordinate geometry to represent and examine the properties of geometric shapes.
3.2.2. Use coordinate geometry to examine special geometric shapes, such as regular polygons or those with pairs of parallel or perpendicular sides.
3.3. Apply transformations and use symmetry to analyze mathematical situations.
3.3.1. Describe sizes, positions, and orientations of shapes under informal transformations such as flips, turns, slides, and scaling.
3.3.2. Examine the congruence, similarity, and line or rotational symmetry of objects using transformations.
3.4. Use visualization, spatial reasoning, and geometric modeling to solve problems.
3.4.4. Use geometric models to represent and explain numerical and algebraic relationships. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Area; Perimeter; Volume)
N.4. Measurement (NCTM)
4.1. Understand measurable attributes of objects and the units, systems, and processes of measurement.
4.1.1. Understand both metric and customary systems of measurement.
4.1.2. Understand relationships among units and convert from one unit to another within the same system. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Measurement)
4.1.3. Understand, select, and use units of appropriate size and type to measure angles, perimeter, area, surface area, and volume. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Measurement)
4.2. Apply appropriate techniques, tools, and formulas to determine measurements.
4.2.2. Select and apply techniques and tools to accurately find length, area, volume, and angle measures to appropriate levels of precision. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Area; Formulas; Volume)
4.2.3. Develop and use formulas to determine the circumference of circles and the area of triangles, parallelograms, trapezoids, and circles and develop strategies to find the area of more-complex shapes. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Area; Formulas)
4.2.4. Develop strategies to determine the surface area and volume of selected prisms, pyramids, and cylinders. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Volume)
4.2.5. Solve problems involving scale factors, using ratio and proportion.
4.2.6. Solve simple problems involving rates and derived measurements for such attributes as velocity and density.
N.5. Data Analysis and Probability (NCTM)
5.1. Formulate questions that can be addressed with data and collect, organize, and display relevant data to answer them.
5.1.1. Formulate questions, design studies, and collect data about a characteristic shared by two populations or different characteristics within one population.
5.1.2. Select, create, and use appropriate graphical representations of data, including histograms, box plots, and scatterplots. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Graphs)
5.2. Select and use appropriate statistical methods to analyze data.
5.2.1. Find, use, and interpret measures of center and spread, including mean and interquartile range. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Statistics)
5.2.2. Discuss and understand the correspondence between data sets and their graphical representations, especially histograms, stem-and-leaf plots, box plots, and scatterplots. (Quiz, Flash Cards, Worksheet, Game & Study Guide: Graphs; Tables)
5.3. Develop and evaluate inferences and predictions that are based on data.
5.3.2. Make conjectures about possible relationships between two characteristics of a sample on the basis of scatterplots of the data and approximate lines of fit.
5.3.3. Use conjectures to formulate new questions and plan new studies to answer them.
5.4. Understand and apply basic concepts of probability.
5.4.2. Use proportionality and a basic understanding of probability to make and test conjectures about the results of experiments and simulations.
5.4.3. Compute probabilities for simple compound events, using such methods as organized lists, tree diagrams, and area models.
N.6. Problem Solving (NCTM)
6.1. Build new mathematical knowledge through problem solving.
6.2. Solve problems that arise in mathematics and in other contexts.
6.3. Apply and adapt a variety of appropriate strategies to solve problems.
N.7. Reasoning and Proof (NCTM)
7.1. Recognize reasoning and proof as fundamental aspects of mathematics.
7.2. Make and investigate mathematical conjectures.
7.3. Develop and evaluate mathematical arguments and proofs.
7.4. Select and use various types of reasoning and methods of proof.
N.9. Connections (NCTM)
9.2. Understand how mathematical ideas interconnect and build on one another to produce a coherent whole. |
The major role of deoxyribonucleic acid is to provide the information for the production of proteins that are responsible for our structure, carry out life sustaining processes and provide the necessary compounds for cellular reproduction. Just like an instructional or "how-to" book found at your local library, the information held within a DNA molecule is organized into sections and can be broken down to letters that code for different commands depending upon their sequence. Keeping with the library book metaphor, DNA is also stored neatly into chromosomes with molecules similar to a book’s bindings.
Letters and Words
DNA consists of the nitrogen bases adenine, guanine, cytosine and thymine. These bases are usually abbreviated as A, G, C and T, respectively. Just as in a book, these letters are grouped in a specific order to communicate a particular idea or task. These orders are written in the language that messenger ribonucleic acid (mRNA) can understand, which is the molecule responsible for making a ribonucleic acid (RNA) template of a specific gene in the DNA strand. The mRNA knows where to bind to DNA to make the gene’s RNA copy by "reading" the DNA for the start point sequence, or "word," that is coded by the nitrogen bases.
The instructions for synthesizing different proteins are organized in the DNA strand into "chapters" called genes. Start sequences within the nitrogen bases serve as chapter pages, informing the mRNA "readers" of where the section begins.
Reading the Book
The mRNA "reads" the DNA in order to make an RNA copy of a gene. To make an RNA copy, a complementary strand of bases is formed off the DNA template. In DNA, adenine is complimentary to thymine and cytosine is to guanine. The RNA language differs slightly from the DNA language, however, as it uses a different base to compliment adenine, called uracil (U), which is used instead of thymine. This RNA also contains words, called codons, which comprise three nucleotide bases that will code for amino acids.
The mRNA strand now exits the nucleus and travels to the cytoplasm for the commands contained within the chapter to be carried out. A transfer RNA (tRNA) with a methionine amino acid group will bind to the complementary mRNA copy of the gene at the site that holds a specific sequence of three bases, called the start codon. Once the start codon is read, tRNA molecules holding the anti-codon, which complement the next open codon, will bind to the mRNA strand briefly while carrying the attached amino acid group. This amino acid group then forms a peptide bond with the previous amino acid group and joins the growing peptide chain. In this way, tRNA translates the mRNA information into the language of proteins, forming the intended molecule. |
At 14,000 miles per hour, NASA's Cassini spacecraft completed its 13th flyby of Saturn's moon Enceladus at an altitude of only 29 miles above its surface. Since 2005, the small moon has intrigued planetary scientists with the water ice plumes jetting from geyser rifts in the south polar region of the frozen mini-world.
The fly-by aimed to "sniff" the plume ejecta with chemistry sensors in space above the north part of the moon, says a NASA statement, looking for a thin atmosphere left by those eruptions. At its closest, the spacecraft flew over the 61-degree north latitude line on the moon.
"Cassini has performed several Enceladus passes near the south pole, where the active plume spews vapor and water ice particles high above the surface. The south polar fly-bys have focused on studying plume composition and density, along with surface temperatures and geology. This northern hemisphere pass (E-13) will provide important comparative measurements to the south polar data. In particular, the fields and- particles instruments will sample the environment at the northern hemisphere, searching for neutral and ionized gases, " says a NASA fact sheet.
By Dan Vergano
The destruction of the Jerusalem Temple in 70 C.E. eliminated Judaism's national religious center. Long sections of Exodus, Leviticus, and Numbers, describing the priestly system and its requirements were now inoperative, and in danger of being irrelevant. Midrash halakhah enabled the rabbis to fashion new practices to replace sacrificial worship and to connect those practices to the words of the Torah.
Jews came to encounter many of the Torah’s passages regarding law and practice in light of halakhic midrashim. It is almost impossible to read Deuteronomy today and not think of the Shema and the mezuzah.
Collections of Midrash Halakhah
The two centuries after the destruction of the Temple were the heyday of midrash halakhah. As noted above, midrash halakhah from this period was collected in three books--the Mekhilta on Exodus, the Sifra on Leviticus, the Sifrei on Numbers and Deuteronomy--known as the tannaitic midrashim. (The tannaim were the rabbis from the time of the Mishnah, the legal code edited around the year 200.)
While the Sifra and the Sifrei are almost entirely works of halakhah, all three tannaitic midrashim contain aggadic midrash as well. They present many of the same laws found in the Mishnah, in order of their connection to the Torah text (in contrast with the Mishnah, which is organized by topic and contains relatively few quotations from the Torah).
The Schools of Rabbi Yishmael and Rabbi Akiba
Two styles or schools of midrash halakhah were associated with particular sages. The dictum "The Torah speaks in the language of human beings" is attributed to Rabbi Yishmael; that is, that Torah's use of language reflects the way that people customarily converse. Consequently, when a sage is interpreting a verse or deriving a practice, he should assume that the text is making use of human conventions (such as digressions, repetitions, etc.) rather than assigning significance to each textual quirk. Thirteen interpretive principles were attributed to Rabbi Yishmael, such as noting when a general law and a particular specific application are both supplied in the Torah.
Rabbi Akiba, by contrast, held that the Torah's language was divine in character, and thus no letter or word in the Torah could be dismissed as a mere redundancy or convention. Even the smallest element of the text, such as the particle et (which in ordinary language is merely a frequent grammatical indicator), had its own unique, even mystical, significance. What Rabbi Yishmael might interpret as simply a feature of human language, Akiba would view as having deliberate significance and meaning. In one talmudic story, Moses himself is shown in Akiba’s classroom, where Akiba links particular laws to calligraphic flourishes on the letters in the Torah scroll, and attributes the laws to traditions received from Moses at Sinai.
Eight Steps Toward Healing a Hurting World
In 2000, the Millennium Development Goals (MDGs) were established to reduce, by the year 2015, the number of people who live in extreme poverty. Developed by the international community including leaders from 191 countries, the eight goals were endorsed by development institutions and religious bodies, and have galvanized unprecedented energy and effort.
Each goal works toward alleviating poverty and disease by establishing targets that will directly improve people’s lives.
1. Eradicate extreme poverty and hunger
- Challenge: One billion people live on less than US$1 per day. 854 million people are chronically hungry, and one child dies from hunger every 5 seconds.
- Target: Cut in half the number of people who live on less than $1 per day. Cut in half the number of hungry people.
2. Achieve universal primary education
- Challenge: Approximately 77 million children do not attend primary school.
- Target: Ensure that girls and boys everywhere are able to complete primary school.
3. Promote gender equality and empower women
- Challenge: 96 million young women aged 15-24 in developing countries cannot read or write.
- Target: Eliminate discrimination against women in education.
4. Reduce child mortality
- Challenge: 26,000 children under 5 die every day, many from preventable illnesses.
- Target: Reduce by two-thirds the number of children who die before age 5.
5. Improve maternal health
- Challenge: Approximately 500,000 women die every year from complications due to pregnancy and childbirth.
- Target: Reduce by 75% the number of women who die as a result of pregnancy and childbirth.
6. Combat HIV/AIDS, malaria, and other diseases
- Challenge: One million people die each year from malaria — an easily preventable disease. 14,000 new HIV/AIDS infections are diagnosed every day.
- Target: Stop the spread of these diseases and see a decline in death rates.
7. Ensure environmental sustainability
- Challenge: 1 billion people — one-fifth of the world’s population — do not have access to clean water within a 15-minute walk from their home. Forests worldwide are shrinking at an unprecedented rate.
- Target: Cut in half the number of people without access to safe drinking water. Reverse the loss of natural resources by practicing sustainable development.
8. Develop a global partnership for development
- Challenge: Unfair trade systems, crippling debt, and limited access to markets prevent growth and opportunity for all people.
- Target: Improve levels of development assistance, promote good governance, provide access to markets, offer solutions for indebted countries. |
Published by K12 Handhelds, Inc.
Copyright © 2005 by K12 Handhelds, Inc. All rights reserved. This book is intended for licensed users only. Please do not distribute.
Developed in conjunction with Wicomico County Schools.
Table of Contents
Poetry is a special kind of writing that uses the sound and rhythm of words to tell a story and to make the reader feel a certain way. These feelings are created through setting, mood, and tone. Setting is the time and place a story or poem takes place in. Mood and tone have to do with how the poem makes you feel. It could be a funny, silly poem or a dark, sad one.
Sometimes, poems rhyme, but sometimes they don’t. Sometimes, poems follow specific rules, like how many words are in each line, but not always. Some poems have several stanzas or sections, and others don’t. Some poems have a specific number of syllables in each line, but some poems have no rules at all.
One of the best things about poetry is that there are so many kinds of poems. You can choose the type that works best for what you are writing about and how you feel. You can even create your own kind of poem.
Here are some different types of poems with examples of each one.
Alphabet poems have 26 lines, each beginning with a different letter of the alphabet. They are written about one theme. Sometimes they rhyme, but they don’t have to.
Here is an example of an alphabet poem:
Airplanes, airplanes go
Back and forth
Covering the landscape of our
Flying high above the
Ground to soaring new
In all types of weather
Just perfect or a
Kind of cloudy night
Lifting its wings
Moving through the air
Nice and peaceful
Over our heads
Pilots bring us
Quickly to our destinations
Rome, Japan, and other
Traveling is my
Ultimate favorite activity
Vanishing off to a new place
Wishing I could fly every day
X marks the spot I will go
Yelling with excitement
Zipping across the country
A cinquain is a five-line poem (from “cinq,” the French word for five). The first line is one word, usually a noun, which is the main subject of the poem. The second line contains two adjectives that describe the topic. The third line has three verbs that relate to the topic. The fourth line has four words that can be a phrase or sentence telling something about the topic. The fifth line is a single word that is another word for the topic.
Here is an example of a cinquain:
A diamante is similar to a cinquain, but it has seven lines. Diamante poems have the shape of a diamond. The first line is one noun. The second line is two adjectives. The third line has three participles (-ing verbs). The fourth line has four nouns. Then the pattern repeats in the opposite direction. The fifth line has three participles (-ing verbs). The sixth line has two adjectives. The seventh line has one noun.
Here is a diamante poem:
A definition poem defines something using metaphors or imagery. This is special language that paints a picture for the reader. It is much more interesting than a regular dictionary definition. Definition poems generally use free verse (which means that they have no regular rules for rhythm or meter).
Here is an example of a definition poem:
Dancing is beautiful movement
Dance brings out our emotions
Dancing is poetry in motion
A catalog poem is a list of things. It can be any length and may rhyme or not.
Here is an example of a catalog poem about spring:
An acrostic poem is a poem that is written around a word. The first letter of each line spells out that word.
Here is an acrostic poem:
Buddies for life
A limerick is a silly or humorous poem that follows a specific pattern. Lines 1, 2, and 5 are longer and rhyme with each other. Lines 3 and 4 are shorter and rhyme with each other.
Here is an example of a limerick:
There once was a clown named Bo
A quatrain is a four-line poem that rhymes. (“Quatre” means four in French.) Each line is about the same length. The rhyming pattern may be that lines 1 and 2 rhyme and lines 3 and 4 rhyme. Or lines 1 and 3 and lines 2 and 4 may rhyme.
Here is a sample quatrain:
Swimming is a lot of fun
Haiku poetry comes from Japan. Haiku poems have three lines. They follow very specific rules. The first line has five syllables, the second line has seven syllables, and the third line has five syllables. Haiku poems do not rhyme. Often, the topic is related to nature or the seasons.
Autumn leaves falling
A concrete poem is written in the actual shape of the subject of the poem.
Here is a concrete poem. You can tell what it is about without even reading it!
A poem for two voices is written for two people to perform. Often, it is written in two columns with each person’s part in a column.
Here is a poem for two voices. Try reading it aloud with someone.
I love it when it rains
I can’t say the same
The raindrops and puddles,
What’s so great about
Watching the lightning,
I love to run around
I stay indoors and read a book
You should go really
I’m content in my room;
Here are some activities to try on your own.
1. Choose two types of poems and compare and contrast them. How are they similar? How are they different? Which do you like better? Why?
2. For each of the following topics, which type of poem might be most appropriate?
3. Choose a type of poem and write your own poem in that format.
accent pattern – the way in which certain words or syllables are stressed or said more loudly or emphatically
acrostic poem – a poem that is written around a word, usually the topic of the poem, such that the first letter of each line spells out that word
alliteration – the repetition of the first letter in several words used to give writing a poetic sound; example: The cat was slinking along in its slim, sleek manner
alphabet poem – has 26 lines, each beginning with a different letter of the alphabet
catalog poem – a poem that consists of a list or itemization of things or events
cinquain – a poem that has five lines and follows specific rules including that the first line be one word that is the topic of the poem, the second lines has two adjectives, the third line has three verbs, the fourth line has four words that are a sentence or phrase, and the fifth line has a single word that sums up the poem
concrete poem – a poem that is written in the physical shape of the subject.
definition poem – a free verse poem that uses imagery to define something
diamante – a poem that has seven lines as follows: line 1 has one noun, line 2 has two adjectives, line 3 has three participles, line 4 has four nouns, line 5 has three participles, line 6 has two adjectives, and line 7 has one noun
figurative language – using metaphors and other words to mean more than their literal meaning
free verse – poetry that has no regular rhythm or meter
haiku – an unrhymed poem that has three lines with 5, 7, and 5 syllables each; this type of poetry comes from Japan and the topic often relates to nature
imagery – the use of figurative language to paint a vivid picture
limerick – a humorous poem that has five lines, with lines 1, 2, and 5 having three feet (units of verse) and rhyming with each other, and lines 3 and 4 having two feet and rhyming with each other
metaphor – a figure of speech that states two unlike things are the same in a figurative way; example: She was the wind.
mood – emotions; feelings
onomatopoeia – the use of words that imitate or suggest a sound; example: hiss, buzz
personification – a description of something that is not a person as though it were a person; example: The stream made a happy, singing sound through the forest
quatrain – a four-line verse or poem that rhymes
rhyme – words that end in the same sound but have a different beginning sound; examples: cat/hat, toy/joy
rhythm – tempo or beat
setting – the time and place in which a story takes place
simile – a figure of speech comparing two unlike things; example: She was as fast as the wind
stanza – a section of a poem with lines grouped together
tone – mood; quality or manner of expression |
This book is a collection of articles by international specialists in the history of mathematics and its use in teaching, based on presentations given at an international conference in 1996. Although the articles vary in technical or educational level and in the level of generality, they show how and why an understanding of the history of mathematics is necessary for informed teaching of various subjects in the mathematics curriculum, both at secondary and at university levels. Many of the articles can serve teachers directly as the basis of classroom lessons, while others will give teachers plenty to think about in designing courses or entire curricula. For example, there are articles dealing with the teaching of geometry and quadratic equations to high school students, of linear algebra, combinatorics, and geometry to university students, and of the notion of pi at various levels. But there is also an article showing how to use historical problems in various courses and one dealing with mathematical anomalies and their classroom use.
Although the primary focus or subject of the book is the teaching of mathematics through its history, some of the articles deal more directly with topics in the history of mathematics not usually found in textbooks. These articles will give teachers valuable background. They include one on the background of Mesopotamian mathematics by one of the world’s experts in this field, one on the development of mathematics in Latin America by a mathematician who has done much primary research in this little-known field, and another on the development of mathematics in Portugal, a country whose mathematical accomplishments are little known. Finally, an article on the reasons for studying mathematics in Medieval Islam will give all teachers food for thought when they discuss similar questions, while a short play which covers the work of Augustus De Morgan will help teachers provide an outlet for their class thespians.
Table of Contents
Part I: General Ideas on the Use of History in Teaching
Part II: Historical Ideas and their Relationship to Pedagogy
Part III: Teaching a Particular Subject Using History
Part IV: The Use of History in Teacher Training
Part V: The History of Mathematics
Notes on Contributors
About the Editor
Victor J. Katz was born in Philadelphia. He received his Ph.D. in mathematics from Brandeis University in 1968. He has long been interested in the history of mathematics and, in particular, its use in teaching. His well-regarded textbook, A History of Mathematics: An Introduction, is now in its second edition. Its first edition won the Watson Davis prize of the History of Science Society in 1995, a prize awarded annually to the best book on the history of science aimed at undergraduates. He has published numerous articles on the history of mathematics and its use in teaching. He has also directed two NSF-sponsored projects which helped college teachers learn the history of mathematics and how to use it in teaching, and involved high school teachers in writing materials using history in the teaching of various topics in the high school curriculum.
Don't be turned off by the title. Victor Katz has gathered a diverse and fascinating selection of 26 essays on the history of mathematics and on ways to use it to teach mathematics, just like it says in the title. The title, though, does not capture the enthusiasm of the various authors, the depth and breadth of their topics, or their conviction that understanding and using history can enrich and improve the ways we teach mathematics.
Katz has divided the essays into five groups, proceeding from the more pedagogical in Part I to the more historical in Part 5. The first four parts consist of three to five essays each, and the fifth part consists of eleven.
The three essays in "Part I: General Ideas on the Use of History in Teaching" lay a foundation and motivation for the incorporation of history. Siu Man-Keung opens the work with some ways to include history without sacrificing mathematical content. Frank Swetz follows with an account of mathematical education from Mesopotamia through China to the Italian Renaissance.
Wann-Sheng Horng contributes "Euclid versus Liu Hui: A Pedagogical Reflection" to "Part II: Historical Ideas and their Relationship to Pedagogy." He gives a provocative comparison between the structural approach to mathematics used by the Greeks and the more operational approach of the Chinese, with special emphasis on Euclid's and Liu Hui's descriptions of the so-called Euclidean algorithm.
The third part of the book turns to "Teaching a Particular Subject Using History." Janet Heine Barnett shows how mathematical anomalies such as incommensurables, infinity and non-Euclidean geometries open mathematical minds and "prepare new intuitions." Evelyne Barbin gives a delightful account of how the meaning of "obvious" has evolved. For example, geometric proofs of proportionality may be beautiful or tedious, depending on your aesthetic, but those same theorems proved symbolically become obvious "in the sort of 'blind' way that algebraic calculations allow.” Continued... |
There are a few concepts that need to be understood relating to the movement of substances.
- The movement of substances may occur across a semi‐permeable membrane (such as the plasma membrane). A semi‐permeable membrane allows some substances to pass through, but not others.
- The substances, whose movements are being described, may be water (the solvent) or the substance dissolved in the water (the solute).
- Movement of substances may occur from higher to lower concentrations (down the concentration gradient) or from the opposite direction (up or against the gradient).
- Solute concentrations vary. A solution may be hypertonic (a higher concentration of solutes), hypotonic (a lower concentration of solutes), or isotonic (an equal concentration of solutes) compared to another region.
- The movement of substances may be passive or active. If movement is with the concentration or gradient, it is passive. If movement is against the gradient, it is active and requires energy.
Passive transport process
Passive transport describes the movement of substances down a concentration gradient and does not require energy consumption.
- Diffusion is the net movement of substances from an area of higher concentration to an area of lower concentration. This movement occurs as a result of the random and constant motion characteristic of all molecules, atoms, or ions (due to kinetic energy) and is independent of the motion of other molecules. Since at any one time some molecules may be moving against the concentration gradient and some molecules may be moving down the concentration gradient (remember, the motion is random), the word “net” is used to indicate the overall, eventual end result of the movement. If a concentration gradient exists, the molecules (which are constantly moving) will eventually become evenly distributed (a state of equilibrium); a toy numerical sketch of this net movement appears after this list.
- Osmosis is the diffusion of water molecules across a semi‐permeable membrane. When water moves into a cell by osmosis, hydrostatic pressure (osmotic pressure) may build up inside the cell.
- Dialysis is the diffusion of solutes across a semi‐permeable membrane.
- Facilitated diffusion is the diffusion of solutes through channel proteins in the plasma membrane. Note that water can pass freely through the plasma membrane without the aid of specialized proteins, although special proteins called aquaporins can aid or speed up water transport.
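Here is the toy numerical sketch of net movement toward equilibrium mentioned above, written in Common Lisp (the compartment names, rate parameter, and function name diffuse are illustrative assumptions, not part of the original text):
(defun diffuse (a b rate steps)
  ;; Net flux between two compartments is proportional to the
  ;; concentration difference; it shrinks to zero at equilibrium.
  (dotimes (i steps (list a b))
    (let ((flux (* rate (- a b))))
      (decf a flux)
      (incf b flux))))
;; (diffuse 10.0 0.0 0.1 50) returns concentrations near (5.0 5.0).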
Active transport processes
Active transport is the movement of solutes against a gradient and requires the expenditure of energy (usually ATP). Active transport is achieved through one of the following two mechanisms:
- Transport proteins in the plasma membrane transfer solutes such as small ions (Na+, K+, Cl−, H+), amino acids, and monosaccharides.
- Vesicles or other bodies in the cytoplasm move macromolecules or large particles across the plasma membrane. Types of vesicular transport include the following:
- Exocytosis, which describes the process of vesicles fusing with the plasma membrane and releasing their contents to the outside of the cell. This process is common when a cell produces substances for export.
- Endocytosis, which describes the capture of a substance outside the cell when the plasma membrane merges to engulf it. The substance subsequently enters the cytoplasm enclosed in a vesicle. There are three kinds of endocytosis:
- Phagocytosis (“cellular eating”) occurs when undissolved material enters the cell. The plasma membrane engulfs the solid material, forming a phagocytic vesicle.
- Pinocytosis (“cellular drinking”) occurs when the plasma membrane folds inward to form a channel allowing dissolved substances to enter the cell. When the channel is closed, the liquid is enclosed within a pinocytic vesicle.
- Receptor‐mediated endocytosis occurs when specific molecules in the fluid surrounding the cell bind to specialized receptors in the plasma membrane. As in pinocytosis, the plasma membrane folds inward and the formation of a vesicle follows. Certain hormones are able to target specific cells by receptor‐mediated endocytosis. |
GCSE PE Year 10 The Participant as an Individual: Gender
AIMS: Consider the differences that exist between males and females. Consider the physical, metabolic and hormonal differences that exist Consider the allowances that are made in view of these differences and because of the effects they can have.
KEY TERMS!! PHYSIQUE: The form, size and development of a person's body. METABOLIC: The whole range of biochemical processes that occur within us. POWER: The combination of speed and strength. MAXIMAL STRENGTH: The greatest amount of weight that can be lifted in one go.
GENDER The particular sex of a participant is not something that is within their control, but it is a factor that has to be considered. It is not a sexist view to state that there are differences between males and females because this is a scientifically proven fact!!
PHYSICAL DIFFERENCES Body shape, size and physique are generally different in men and women. Women are smaller overall, with a flatter, broader pelvis, smaller lungs and heart and a higher % of fat. This is also affected by diet, which has an effect on metabolic rate. Because women have smaller hearts and lungs, they also have a lower O2-carrying capacity. Muscle strength and power can vary: in tests of maximal strength there was a difference of up to 40-50%, as women have less total muscle mass. Women are more flexible than males.
Rates of maturity differ. Girls mature faster than boys, so some mixed competition can be fair at younger ages. After the age of 11, boys overtake girls! Menstruation and hormone imbalance can disadvantage females if they are participating. Men tend to be less affected by chemical change.
The differences mentioned do not always mean that women are disadvantaged compared to men, as they are often able to compete on equal terms with men in many situations. Can you think of any of those situations?
EQUESTRIAN Equestrian events are one of the few events where women compete against men head to head and not in separate competitions.
There can be advantages, with less weight and greater flexibility in sports such as gymnastics. However, women may be seriously disadvantaged when it comes to competing in events dependent on strength and power. These differences are recognised and it is for this reason that competition between males and females is organised in single sexes at top level.
PERCEIVED DIFFERENCES DISCRIMINATION!! – Women were seen as the weaker sex and not allowed the same opportunities as men. They weren't allowed to compete in distance races until the Olympics in 1960 – the 1500m was added in 1972. Traditional male sports – football!!!! Religion. Historically there have been fewer opportunities for women!!
TASKS In groups of 3: 1. Write a list of reasons why you think males and females should compete equally head to head in all sports. 2. Write a list of reasons why you think males and females should not compete equally head to head. 3. Class discussion for and against equal opportunity in sport.
HOMEWORK GET YOUR PLANNERS OUT!!! Complete the sheet. Due next WEEK!!!!!
FINALLY!!! From this lesson you should be able to identify some of the physiological differences between men and women and back this up with knowledge about why competition is usually single-sexed.
Impacts of livestock on soil fall into two broad categories: firstly the physical impact of the animal on soil as it moves around and secondly the chemical and biological impact of the faeces and urine that the animal deposits to soil. Physically damaged soil can be even more susceptible to the chemical and biological impact of faeces and urine.
Heavy livestock such as cattle compact soil structure and destroy vegetation on the parts of a field that they tread most often. This is visually apparent around drinking water troughs, entrances to fields and other parts of the land where the animals congregate. Destruction of soil structure in this way is known as 'poaching' and is harmful because restoration of vegetation does not always occur spontaneously once the grazing animal is withdrawn. Sheath et al. (1998) found losses of 5-10 kg dry matter ha⁻¹ d⁻¹ where up to 50% of an area was affected by cattle treading, but recovery occurred within a few months. Compacted soil becomes strong, making it difficult for new shoots to penetrate the soil and emerge; structureless soil is unlikely to drain well and will pond after moderate rainfall. Soil particles from these zones will be susceptible to erosion, carrying particles, organic matter and phosphorus to surface waters (Warren et al., 1986). Anaerobic zones in waterlogged soils will encourage denitrification, which implies a loss of nitrogen and pollution of the atmosphere with N₂O if conditions for denitrification are sub-optimal in the compacted zone (see below).
Problems with soil structure are not limited to cattle farming. Pig production is notorious for its destructive effects on vegetation. Part of pig behaviour is to dig into soil with the snout, and the effect on soil and vegetation is obvious; without the protective effect of plant roots that confer strength to the rooting zone, and without a plant withdrawing water from the field, the soil becomes weak and the structure collapses under the regular passage of the animal. Soil becomes compacted and the same problems listed above ensue. High stocking rates on pig farms exacerbate the problem. Sheep grazing, particularly in the UK, is not normally thought of in these terms because production is largely extensive on upland rough grazing. On some farms, however, sheep are used to graze root cover crops (such as turnips) in the late winter, and all but sandy soils are likely to be susceptible to damage. At equivalent (i.e. metabolic weight) stocking densities on wet soils, however, short-term treading by sheep was found to be less damaging than treading by cattle (Betteridge et al., 1999).
Chemical and biological impacts of manure and urine
Although many of the impacts of animal wastes on the environment concern losses to water or the atmosphere, soil is an intermediary and as such these impacts deserve space here. The amount of urine delivered to soil by a grazing cow is of the order of 2 litres applied to an area of about 0.4 m² (e.g. Addiscott et al., 1991). This represents an instantaneous application of 400-1200 kg N ha⁻¹. Such an amount burns vegetation and is often toxic to plant roots, which cannot immediately recover to take up the N (full recovery can take up to 12 months, and the problem is obviously worst in areas where animals congregate). Urea in soil is quickly hydrolysed, and given that grass can take up perhaps 400 kg N ha⁻¹ annually without loss, pollution of groundwater or the atmosphere is almost inevitable whenever urine is applied to soil. Both calcium and magnesium are also lost in substantial amounts from urine patches on pasture soils (Early et al., 1998).
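To see how a single urine patch translates into such a high application rate, the sketch below converts the per-patch dose into kg N ha⁻¹. The urine N concentration is an assumed typical value, not a figure from the text.

```python
# Convert the N in one urine patch into an equivalent application rate.
# Assumption: cattle urine carries roughly 8-20 g N per litre (a typical
# literature range; the text above does not give a concentration).

urine_volume_l = 2.0   # litres per urination event (from the text)
patch_area_m2 = 0.4    # area wetted by one event (from the text)

for n_conc_g_per_l in (8.0, 20.0):
    n_applied_g = urine_volume_l * n_conc_g_per_l   # g N on the patch
    rate_g_per_m2 = n_applied_g / patch_area_m2     # g N per square metre
    rate_kg_per_ha = rate_g_per_m2 * 10.0           # 1 g/m2 = 10 kg/ha
    print(f"{n_conc_g_per_l:4.0f} g N/l -> {rate_kg_per_ha:5.0f} kg N/ha")
```

With these assumed concentrations the patch receives the equivalent of 400-1000 kg N ha⁻¹, consistent with the 400-1200 kg N ha⁻¹ range quoted above.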
Losses of N from urine and manure will normally be as ammonia, dinitrogen and nitrous oxide (during denitrification) or as nitrate leaching. Two key processes deserve mention. The first is that during denitrification (of nitrate to N₂ or N₂O) the major product is almost always N₂. If conditions for this process are in any way sub-optimal, especially if there is a deficiency of organic carbon relative to nitrate such as might occur under a urine patch, N₂O production increases (e.g. Swerts et al., 1996). Since N₂O is a potent greenhouse gas, its emission from soil is clearly undesirable. Secondly, nitrate is produced from urine and manure during nitrification, which is itself a multi-stage process. Where organic matter levels are high, such as in or around manure, not all the N is converted to the end product, nitrate (NO₃⁻), and some remains as nitrite (NO₂⁻). Nitrite is as susceptible to leaching as nitrate but is far more toxic. Debate in recent years has focussed wrongly on nitrate, which is in fact a precursor of NO (nitric oxide), one of the first lines of the body's defence against pathogenic organisms. Most instances of damage to health that have been attributed to nitrate are in fact the result of nitrite, such as methaemoglobinaemia from well water contaminated not only with nitrate but also nitrite. Incidence of stomach cancers has been found to be negatively correlated with nitrate in the diet, yet a theoretical link assumed that nitrate could be reduced in situ to nitrite in the stomach. Fortunately nitrite in the wider environment is generally short-lived, but it arises during sub-optimal nitrification of ammonia to nitrate, for example where ammonium is washed directly into surface waters either from the soil or because the animal urinates close by. Nitrite is nonetheless occasionally found in natural waters at levels that exceed EU limits.
Compaction of and damage to soil also limit pasture growth and the use pasture can make of available nutrients. Douglas and Crawford (1998) found a reduction of between 1.7 and 2.1 t ha⁻¹ in dry matter production in a compacted sward, and a fall in the recovery of applied N from 71% in the uncompacted sward to 55% in the compacted one.
Cattle sometimes spread pathogenic organisms by picking them up at a point source but urinating or defecating elsewhere. Weeds, plant diseases and E. coli O157 are all thought to be spread in this way.
The nutrients in manure are at once a source of waste, a missed opportunity and, potentially, pollution. Manure is partly microbial in composition, derived from fermentation during digestion, and partly composed of recalcitrant components of the feed. As such it is rather less decomposable than fresh plant material and does not supply N to soil as rapidly or as damagingly as urine. It does, however, block light, and grass growth underneath manure will be temporarily retarded. Some regrowth occurs by penetration where the pasture is well enough established, and some by reseeding directly into the manure.
Application of manures is not necessarily harmful. As implied in much of what has been said above, manure and urine contain nutrients that grass or crops can use. Because manure is relatively long-lived in soil it releases its nutrients slowly and can continue to benefit crop production for many years. Whitmore and Schröder (1996) estimate that applications of slurry to maize during the 1970s and 80s increased the N-supplying power of Dutch soils by about 70 kg N ha⁻¹. Because the extra fertility is long-lived, this extra N-supply is expected to take 10 years to decline to half its current level. This is beneficial, however, only so long as a pasture or crop recovers the N. The N can also mineralise during winter or at some other time when the crop is not growing at its full potential. Under these circumstances losses to the environment are inevitable. The fertility is only maintained as long as the pasture remains in place. Ploughing a grassland soil results in a burst of nutrient availability that slowly declines. Whitmore et al. (1992) showed that the intensive ploughing of grassland during the 1940s and 50s in the UK is a probable cause of the increases in nitrate found in aquifers from the 1970s onwards. Watts et al. (1996) have shown that increased levels of organic C in soil confer desirable resilience to soils in relation to tillage. Mineral pasture soils almost certainly resist hoof damage in proportion to their organic matter content.
The impact of manure and urine on soil from livestock is not simply one of perturbing nutrient cycles. Additives such as copper, zinc, anthelmintics and antibiotics or other veterinary treatments are given to animals. The presence of Cu and Zn can make manure unsuitable for use as a fertilizer on other farms, and metals such as these pose a long-term risk in pasture soils because they can accumulate and are only slowly removed by leaching or offtake in vegetation. Heavy metals have been shown to reduce the microbial life and diversity in soil (Griffiths et al., 2000) and the activity of N-fixers in particular (Giller et al., 1999).
One rough-and-ready way of assessing the impact of livestock farming has been to consider the balance between inputs and measured outputs of the nutrients used in livestock farming. The difference is usually large and positive, implying an enormous loss of nutrients to the wider environment or retention in soil. Given that in the majority of the loss pathways nutrients pass through the soil, a discussion of impacts on soil is an appropriate place to consider this imbalance. As a very rough rule of thumb, worries about surpluses of N are immediate, in that more N is lost than retained by soil; worries about P concern the gradual build-up over many years that leads to subsequent but sustained losses. On one Dutch dairy farm in the 1980s, about 400 kg N, 23 kg P and 56 kg K ha⁻¹ annually of the 467, 35 and 73 kg ha⁻¹ applied respectively were unaccounted for. More generally, 75% of the 1.1 × 10⁹ kg N applied annually throughout the whole of the Netherlands is thought to be wasted (Whitmore and Van Noordwijk, 1995). Surpluses of N on UK dairy farms were recently reported to range from 63-667 kg N ha⁻¹ with a mean of 257 kg N ha⁻¹ (Jarvis, 2000), and exports in produce were estimated to be only 20% of the N applied (Jarvis, 1993). Haygarth et al. (1998) estimated gains of P by soil in a typical UK dairy farm to be 26 kg P ha⁻¹ annually at a stocking density of 2.26 animals ha⁻¹ on average. On an upland sheep farm the gain was only 0.24 kg P ha⁻¹. Strategies to reduce the impact of animal manure and slurry on the environment usually focus on limiting spreading according to the amount of P (e.g. Van der Molen et al., 1998). This is because the relative amounts of N, P and K required by pasture and arable crops differ from the ratio in which these elements are found in manure; manure is too enriched in P relative to N.
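The farm-gate balance described above is simple bookkeeping: surplus = inputs − accounted outputs. A minimal sketch using the Dutch dairy-farm figures quoted in this paragraph:

```python
# Farm-gate nutrient balance for the Dutch dairy farm quoted above.
# All figures in kg per hectare per year.

applied     = {"N": 467.0, "P": 35.0, "K": 73.0}
unaccounted = {"N": 400.0, "P": 23.0, "K": 56.0}   # the surplus

for nutrient, inp in applied.items():
    surplus = unaccounted[nutrient]
    recovered = inp - surplus                      # accounted for in outputs
    print(f"{nutrient}: applied {inp:5.0f}, accounted {recovered:4.0f} "
          f"({recovered / inp:4.0%}), surplus {surplus:4.0f} kg/ha/yr")
```

For N, only about 14% of what was applied could be accounted for in outputs, which is why the text treats N surpluses as an immediate concern.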
Grazing systems can also affect soil, and more particularly water courses, if manure or silage effluent is not stored properly and leaks out. The resultant point-source contamination can affect soil for many years, destroy aquatic life and make water unfit for consumption.
Addiscott, T.M., Whitmore, A.P. and Powlson, D.S. (1991) Farming, Fertilizers and the Nitrate Problem. pp 170. C.A.B. International, Wallingford, UK.
Betteridge, K., Mackay, A.D., Shepherd, T.G., Barker, D.J., Budding, P.J., Devantier, B.P. and Costall, D.A. (1999) Effect of cattle and sheep treading on surface configuration of a sedimentary hill soil. Australian Journal of Soil Research 37, 743-760.
Douglas, J.T. and Crawford, C.E. (1998) Soil compaction effects on utilization of nitrogen from livestock slurry applied to grassland. Grass & Forage Science 53, 31-40.
Early, M.S.B., Cameron, K.C. and Fraser, P.M. (1998) The fate of potassium, calcium and magnesium in simulated urine patches on irrigated dairy pasture soil. New Zealand Journal of Agricultural Research 41, 117-124.
Giller, K.E., Witter, E. and McGrath, S.P. (1999) Assessing risks of heavy metal toxicity in agricultural soils: Do microbes matter? Human and Ecological Risk Assessment 5, 683-689.
Griffiths, B.S., Ritz, K., Bardgett, R.D., Cook, R., Christensen, S., Ekelund, F., Sørensen, S., Bååth, E., Bloem, J., de Ruiter, P.C., Dolfing, J. and Nicolardot, B. (2000) Ecosystem response of pasture soil communities to fumigation-induced microbial diversity reductions: an examination of the biodiversity-ecosystem function relationship. In press.
Haygarth, P.M., Chapman, P.J., Jarvis, S.C. and Smith, R.V. (1998) Phosphorus budgets for two contrasting grassland farming systems in the UK. Soil Use and Management 14, 160-167.
Jarvis, S.C. (2000) Progress in studies of nitrate leaching from grassland soils. Soil Use and Management 16, 152-156.
Jarvis, S.C. (1993) Nitrogen cycling and losses from dairy farms. Soil Use and Management 9, 99-105.
Sheath, G.W. and Carlson, W.T. (1998) Impact of cattle treading on hill land. 1. Soil damage patterns and pasture status. New Zealand Journal of Agricultural Research 41, 271-278.
Swerts, M., Merckx, R. and Vlassak, K. (1996) Influence of carbon availability on the production of NO, N2O, N2 and CO2 by soil cores during anaerobic incubation. Plant and Soil 181, 145-151.
Van der Molen, D.T., Breeuwsma, A. and Boers P.C.M. (1998) Agricultural nutrient losses to surface water in the Netherlands: Impact, strategies and perspectives. Journal of Environmental Quality 27, 4-11.
Warren, S.D., Thurow, T.L., Blackburn, W.H. and Garza, N.E. (1986) Journal of Range Management 39, 491-495.
Watts, C.W., Dexter, A.R., Dumitru, E. and Arvidsson, J. (1996) An assessment of the vulnerability of soil structure to destabilisation by tillage. Part I. A laboratory test. Soil and Tillage Research 37, 161-174.
Whitmore, A.P. and Schröder, J.J. (1996) Modelling the change in soil organic C and N in response to applications of slurry manure. Plant and Soil 184, 185-194.
Whitmore, A.P. and Van Noordwijk, M. (1995) Bridging the gap between environmentally acceptable and agronomically desirable nutrient supply. In: Ecology and Integrated Farming systems, D.M. Glen, M.P. Greaves and H.M. Anderson (eds.), pp 271-288, John Wiley and Sons, Chichester.
Whitmore, A.P., Bradbury, N.J. and Johnson, P.A. (1992) The potential contribution of ploughed grassland to nitrate leaching. Agriculture, Ecosystems and Environment 39, 221-233.
In keeping with the scope and sequence of skills, I have designed file folder activities for the numbers 1, 2, 3, 4, 5, and 6 so far. Each target number activity contains the following:
- Number Mats - These can be used as playdoh mats; children can drive matchbox cars along the path of the numbers or use dry erase markers to trace the number.
- Number Pattern Blocks - There are number pattern block mats and pattern blocks to print.
- Sort the Number by Color.
- Sort the Number by Size.
- Number Puzzles
- Number Work Mats- Each number work mat contains cards with the numeral, its name, ten frames, and corresponding number of objects representing the target number.
- Can You Find the Number…? Children can find and circle the target number out of a grid of numbers.
- The number line enables children to see the number in numerical order. The activity encourages the children to write the missing number using the number line as a visual guide.
- NEW in Six is Half a Dozen: Can You Find the Pattern?
All About the Lonely Number 1 Activity Pack Sample
Download >> All About the Lonely Number 1 Activity Pack
Two is a Pair Activity Pack Sample
Download >> Two is a Pair Activity Pack
Download >> Three is a Trio Activity Pack
Four is a Quad Activity Pack Sample
Download >> Four is a Quad Activity Pack
Five is a Pentagon Activity Pack Sample
Six is Half a Dozen Activity Pack Sample
Download >> Six is Half a Dozen Activity Pack
Stay tuned for more updates to include All About the Numbers 7, 8, 9, and 10.
~ Catherine : )
Everyone experiences stress, but how a person responds varies. For some, stressors are viewed as challenges to overcome, whereas others may see them as threats and give up or shut down when faced with a stressful situation. How people react–both psychologically and physically–can have implications for a person's health and well-being, including how well they age.
Cindy Bergeman, a Notre Dame professor of psychology and associate vice president for research, is currently conducting a 10-year study based on how different people respond to stress, why they react the way they do, and the different ways people cope. From interviews to medical exams, the research team is looking to better understand how stress can affect someone over both a short and a long period of time and what the best coping strategies are in order to remain resilient against stress.
“When a zebra on the savannah is being hunted by a lion, the zebra’s blood pressure rises, its body begins moving glucose to the muscles, and breathing increases, all in order to achieve peak performance. This primal survival response is caused by stress,” Bergeman said. “People have the same physical reaction to stress, but in today’s world stressors don’t require a fight or flee response. The body’s reaction, however, is the same, even if stressors come from work pressures, complicated relationships, or financial problems.” When people are chronically stressed, these physical responses become detrimental to the human body. For example, continuous high blood pressure can lead to hypertension and a constant increase of glucose levels can cause diabetes.
Bergeman’s study is taking a comprehensive approach that uses both daily and yearly assessments as well as quantitative and qualitative data. By collecting information over a longer period of time, her lab is working to understand the varying effects of daily stressors as well as one-time, stressful events.
“What goes on in our day-to-day life is really important, but it may not affect a person’s health for 10 or 15 years,” Bergeman said. “My lab is looking broadly at the lifespan, because it may not be the major life events–like the loss of a loved one–that really get to you. Instead, it may be the daily hassles, time pressures, and bad relationships that in the end have the most detrimental impact on health. Currently, we are in the final year of the study and we are hoping to extend it for another five years to get a broader picture of the impact stress has during a lifetime.”
Bergeman will be discussing her research on the relationship between stress, resiliency, and the impact on health and well-being as a part of the University of Notre Dame’s Saturday Scholar Series. The series, hosted by the College of Arts and Letters, features intimate discussions with Notre Dame faculty. Members of the public are welcome to attend Bergeman’s presentation at 4 p.m. Sept. 17 in the Snite Museum’s Annenberg Auditorium.
Originally published by Brandi Klingerman at research.nd.edu on September 16, 2016.
Waveguide Impedance and Impedance Matching
- details of waveguide impedance: how waveguide impedance is defined, and waveguide impedance matching, including the use of a waveguide iris or a waveguide post.
Waveguide impedance can be important in a number of applications. In the same way that the characteristic impedance is important for other forms of feeder, the same can be true in a number of instances with waveguides. Techniques including the use of a waveguide iris or a waveguide post can be used to provide the required level of waveguide impedance matching.
The waveguide impedance needs to be known in a number of instances to ensure optimum power transfer and the minimum level of reflected power.
Waveguide impedance definition
There are several ways to define the waveguide impedance - it is not as straightforward as that of a more traditional coaxial feeder.
- Take the voltage to be the potential difference between the top and bottom walls in the middle of the waveguide, and the current to be the value integrated across the top wall; as expected, the ratio gives the impedance.
- Measure the waveguide impedance by utilising the voltage and the power flow within the waveguide.
- Take the ratio of the electric field to the magnetic field at the centre of the waveguide.
All the methods tend to give results that are within a factor of two of the free space impedance of 377 ohms.
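For a rectangular guide carrying the dominant TE10 mode, the standard wave-impedance formula shows why the result sits near, but somewhat above, 377 ohms. A minimal sketch; the WR-90 dimensions and 10 GHz operating frequency are illustrative assumptions, not values from the text:

```python
import math

ETA0 = 376.73        # impedance of free space, ohms
C = 299_792_458.0    # speed of light, m/s

def te10_impedance(freq_hz: float, a_m: float) -> float:
    """Wave impedance of the dominant TE10 mode in a rectangular guide:
    Z = eta0 / sqrt(1 - (fc/f)^2), with cutoff fc = c / (2a), where a is
    the broad-wall dimension."""
    fc = C / (2.0 * a_m)
    if freq_hz <= fc:
        raise ValueError("below cutoff: the TE10 mode does not propagate")
    return ETA0 / math.sqrt(1.0 - (fc / freq_hz) ** 2)

# Example: WR-90 guide (a = 22.86 mm, cutoff about 6.56 GHz) at 10 GHz.
print(f"{te10_impedance(10e9, 22.86e-3):.0f} ohms")   # roughly 500 ohms
```

At 10 GHz this gives roughly 500 ohms, comfortably within the factor-of-two spread around 377 ohms mentioned above.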
Waveguide impedance and reflection coefficient
In just the same way that more common coaxial and other feeder systems need to have loads closely matched to the source impedance to obtain the maximum power transfer, the same is true with waveguides. If the waveguide impedance is matched to the source or load, then a greater level of power transfer will occur.
When waveguides are not accurately matched to their loads, standing waves result, and not all the power is transferred.
To overcome the mismatch it is necessary to use impedance matching techniques.
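The degree of mismatch is quantified exactly as for coaxial lines, with a reflection coefficient and the resulting VSWR. A short sketch; the impedance values below are illustrative assumptions, not values from the text:

```python
def mismatch(z_load: complex, z_guide: complex) -> None:
    """Print the reflection coefficient, VSWR and reflected-power fraction."""
    gamma = (z_load - z_guide) / (z_load + z_guide)   # reflection coefficient
    mag = abs(gamma)
    vswr = (1 + mag) / (1 - mag) if mag < 1 else float("inf")
    print(f"|Gamma| = {mag:.3f}, VSWR = {vswr:.2f}, "
          f"reflected power = {mag ** 2:.1%}")

# Illustrative example: a 300-ohm load on a guide whose wave impedance
# is 500 ohms (assumed values).
mismatch(300, 500)   # |Gamma| = 0.250, VSWR = 1.67, 6.2% reflected
```

A perfectly matched load (z_load equal to z_guide) gives |Gamma| = 0 and VSWR = 1, which is the condition the matching techniques below aim for.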
Waveguide impedance matching
In order to ensure the optimum waveguide impedance matching is obtained, small devices are placed into the waveguide close to the point where the matching is needed to change its characteristics.
There are a number of ways in which waveguide impedance matching can be achieved:
- Use of gradual changes in dimensions of waveguide.
- Use of a waveguide iris
- Use of a waveguide post or screw
Each method has its own advantages and disadvantages and can be used in different circumstances.
The use of elements including a waveguide iris or a waveguide post or screw has an effect which is manifest at some distance from the obstacle in the guide since the fields in the vicinity of the waveguide iris or screw are disturbed.
Waveguide impedance matching using gradual changes
It is found that abrupt changes in a waveguide will give rise to a discontinuity that creates standing waves. However, gradual changes in impedance do not cause this.
This approach is used with horn antennas - these are funnel shaped antennas that provide the waveguide impedance match between the waveguide itself and free space by gradually expanding the waveguide dimensions.
There are basically three types of waveguide horn that may be used:
- E plane
- H plane
- Pyramid
The different types of gradual matching using a waveguide horn can be seen in the diagram below:
E, H plane and pyramid Horn antennas used for waveguide matching
Impedance matching using a waveguide iris
Irises are effectively obstructions within the waveguide that provide a capacitive or inductive element to achieve the impedance matching.
The obstruction or waveguide iris is located in either the transverse plane of the magnetic or electric field. A waveguide iris places a shunt capacitance or inductance across the waveguide, the magnitude of which is directly proportional to the size of the waveguide iris.
An inductive waveguide iris is placed within the magnetic field, and a capacitive waveguide iris is placed within the electric field. These can be susceptible to breakdown under high power conditions - particularly the electric plane irises as they concentrate the electric field. Accordingly the use of a waveguide iris or screw / post can limit the power handling capacity.
Impedance matching using a waveguide iris
The waveguide iris may either be on only one side of the waveguide, or there may be a waveguide iris on both sides to balance the system. A single waveguide iris is often referred to as an asymmetric waveguide iris or diaphragm and one where there are two, one either side is known as a symmetrical waveguide iris.
Symmetrical and asymmetrical waveguide iris implementations
A combination of both E and H plane waveguide irises can be used to provide both inductive and capacitive reactance. This forms a tuned circuit. At resonance, the iris acts as a high impedance shunt. Above or below resonance, the iris acts as a capacitive or inductive reactance.
Impedance matching using a waveguide post or screw
In addition to using a waveguide iris, a post or screw can also be used to give a similar effect and thereby provide waveguide impedance matching.
The waveguide post or screw is made from a conductive material. To make the post or screw inductive, it should extend through the waveguide completely making contact with both top and bottom walls. For a capacitive reactance the post or screw should only extend part of the way through.
When a screw is used, its depth of penetration can be varied to adjust the waveguide to the right conditions.
By Ian Poole
Active & Future Projects
Due to rapid advances in infrared detector technology, the development of adaptive optics for ground based work and the commitment to infrared missions from space organizations such as NASA, ESA and ISAS, the future of infrared astronomy is extremely bright. Within the next decade, infrared astronomy will bring us exciting discoveries about new planets orbiting nearby stars, how planets, stars and galaxies are formed, the early universe, starburst galaxies, brown dwarfs, quasars and interstellar matter. Below is a summary of currently active and future infrared projects. Click on the links to learn more.
For links to submillimeter missions such as SWAS and microwave missions such as MAP, see NSSDC's Astrophysics Missions. For information on past missions see the section on the background of infrared astronomy.
Description: An infrared array consisting of 3 cameras and 3 spectrometers.
Goals: Provide spectra and high resolution images in the near infrared of regions in space.
Wavelengths: 0.8 - 2.5 microns
Description: The Keck Interferometer Project will combine the twin Keck Telescopes to form an interferometer. The Keck Interferometer is part of NASA's Origins program and will use adaptive optics to remove the effects of atmospheric turbulence.
Goals: To detect planets around nearby stars in the infrared. In visible light, the light from a star is millions of times brighter than the light from a planet. The visible light from a planet is hidden by the brightness of the star that it orbits. In the infrared, where planets have their peak brightness, the brightness of the star is reduced. This makes it possible to detect planets in the infrared.
Wavelengths: 1.6 - 10 microns
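The reasoning in the Keck goals follows directly from Wien's displacement law: the cooler the body, the longer the wavelength at which its emission peaks. A quick sketch with assumed temperatures (a Sun-like star at about 5800 K and a giant planet at about 300 K; neither figure is from the text):

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2898 micron*kelvin.
WIEN_B_UM_K = 2898.0

for body, temp_k in (("Sun-like star", 5800.0), ("giant planet", 300.0)):
    peak_um = WIEN_B_UM_K / temp_k
    print(f"{body}: emission peaks near {peak_um:.1f} microns")
```

The star peaks near 0.5 microns (visible light) while the planet peaks near 10 microns, squarely inside the 1.6-10 micron range quoted above, which is why the star-planet contrast is far more favourable in the infrared.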
Start-Duration: Launched in August 2003
Description: The Spitzer Space Telescope consists of a 0.85 meter telescope, a camera, spectrograph and photometer. Spitzer is much more sensitive than prior infrared missions and will study the universe at a wide range of infrared wavelengths. Like ISO, Spitzer is operated as an observatory.
Goals: The Spitzer Space Telescope mission will concentrate on gathering data on protoplanetary and planetary debris disks, brown dwarfs and super planets, ultraluminous galaxies and active galactic nuclei, and the early universe. Spitzer can also be used to study the outer solar system, early stages of star formation and the origin of chemical elements.
Wavelengths: 3.5-180 microns
Start-Duration: Launch in 2004 - 1.5 years
Description: IRIS is an infrared space mission planned by the Japanese space agency ISAS. It will have a near and mid infrared camera and a far infrared scanner.
Goals: IRIS will be used to study the formation and evolution of galaxies, star formation, interstellar matter and extra-solar systems.
Wavelengths: 2-25 microns and 50-200 microns
Start-Duration: Launch planned in 2008 - > 3 years
Goals: The Herschel Space Observatory will perform spectroscopy and photometry over a wide range of infrared wavelengths. It will be used to study galaxy formation, interstellar matter, star formation and the atmospheres of comets and planets. The current plan is to merge Herschel with ESA's PLANCK mission.
Wavelengths: 80 - 670 microns
Start-Duration: Launch planned in 2008
Goals: PLANCK will image the anisotropies of the Cosmic Background Radiation over the entire sky with exceptional resolution and sensitivity.
Wavelengths: 350-10,000 microns
Start-Duration: Scheduled to begin operations in 2009
Description: SOFIA, a joint project between NASA and the German Space Agency, will be an optical/infrared/sub-millimeter telescope mounted in a Boeing 747. Designed as a replacement for the very successful Kuiper Airborne Observatory, SOFIA will be the largest airborne telescope in the world.
Goals: Flying at altitudes between 41,000 and 45,000 feet, SOFIA will take infrared observations high above most of the infrared absorbing atmosphere and will be able to observe at all infrared wavelengths. SOFIA will be used to study interstellar clouds, star and planet formation, activity in the center of the Milky Way and the composition of planets and comets in our solar system. As with the Kuiper Airborne Observatory, teachers and students will be allowed to fly on SOFIA to learn about infrared astronomy.
Wavelengths: The entire IR range
Description: The James Webb Space Telescope is an infrared space mission which is part of NASA's Origins program.
Goals: The James Webb Space Telescope will have extremely good sensitivity and resolution, giving us the best views yet of the sky in the near-mid infrared. It will be used to study the early universe and the formation of galaxies, stars and planets.
Wavelengths: 0.5 to 20 microns
Start-Duration: Launch date - ?
Description: TPF is envisioned as a long-baseline interferometer space mission and is part of NASA's Origins program. An interferometer is a group of telescopes linked together across a "baseline". By gathering data with several telescopes linked in this way, very precise position measurements can be made.
Goals: TPF will concentrate on detecting terrestrial planets (small and rocky planets - like Mercury, Venus, Earth and Mars) outside our solar system and orbiting other stars. By studying near infrared spectral lines, astronomers can also detect several molecules which can indicate how earth-like these planets are.
Another long-term space mission which has been identified by NASA is a far-infrared interferometer, covering infrared wavelengths not included in the TPF mission. This mission, which has not yet been given a name, would study the earliest and coolest phases of star and planetary disk formation.
Wavelengths: 7-20 microns (the best range for searching for Earth-like planets)
Description: Darwin is a candidate for the European Space Agency's infrared interferometer space mission.
Goals: The primary goal of Darwin is to search for Earth-like planets around nearby stars, and to search for signs of life on these planets by studying infrared spectral lines in their atmospheres. Darwin would also be used as a general infrared astronomy observatory. The Darwin project would consist of about 6 individual telescopes combined as an interferometer about 100 yards across and would orbit between Mars and Jupiter, beyond the zodiacal dust which radiates infrared light at the wavelengths which will be used to search for planets.
Wavelengths: Not yet defined - near infrared
Researchers have unexpectedly found radio emission from a supernova that went unnoticed when it exploded more than a year ago in the galaxy M82 in Ursa Major. Its visible light was apparently blocked by dense dust in the galaxy's central region. By following the radio emission created as the shock wave continues to expand, researchers hope to learn about the star’s last few thousand years before it blew up.
The discovery was serendipitous: the team was studying proper motions in M82 and M81 using the Very Large Array of radio telescopes in New Mexico. On reducing their data, the team (led by Andreas Brunthaler of the Max Planck Institute for Radioastronomy) noticed a strong new point source of radio emission near the center of M82, a smallish, irregular galaxy 12 million light-years away with vigorous star formation happening in its middle.
The team says the supernova must have happened between October 2007, when they didn't see any radio emission, and March 2008, when they did. “Most supernovae you only see in the optical,” says co-author Geoffrey Bower (University of California, Berkeley). He says this one went unseen because the star was hidden by particularly dense gas and dust. But Stefan Immler (NASA/Goddard) stresses that it would take an enormous amount of material to hide the accompanying ultraviolet and X-ray emissions as well: “There are many other X-ray sources in the bulge of M82, and these are not absorbed.” Christopher Stockdale (Marquette University) says: “If nothing else, it clearly shows it’s an unusual object.”
You can’t blame other researchers for being slightly skeptical: they haven’t had a chance to check all the UV, X-ray and visible observations that failed to see anything throughout 2008. Even Bower admits, “It’s quite striking that a supernova so nearby is completely obscured in the visible.” But additional evidence that Brunthaler and collaborators haven’t yet published seems to convince even the skeptics that there was indeed a supernova here. It again has to do with the density of material around the star: not the dust and gas giving birth to stars this time, but closer-in material expelled by this particular aging star before it exploded.
Brunthaler and collaborators see the radio source expanding, balloon-like, at the speed expected for the shock wave of a supernova: thousands of kilometers per second, a few percent of the speed of light. The radio waves are synchrotron radiation from electrons in the former star's stellar wind as they get swept up by the expanding blast wave. For this to happen, the star must have blown off a massive wind in the millennia before exploding. The ultraviolet flash of the explosion ionized this material, breaking apart its hydrogen atoms into a plasma of protons and electrons; these emit the radio waves as the shock wave hits them and plows them forward. Such strong stellar winds aren't seen in the famous Type Ia supernovae used as cosmic standard candles. These blow up when a white dwarf star accretes enough mass to ignite a nuclear reaction that suddenly consumes it, so the aftermath of that kind of explosion should be radio-weak. Strong radio emission is a sign that the blast was caused by the other mechanism for supernovae: the collapse of a large, massive star's core when it has exhausted all its nuclear fuel. Such stars sometimes emit very thick winds.
The shock wave will thus play back for us a record of the last epoch of the star’s history in reverse order, and at a thousand times the original speed. As the thousand-times-faster shock wave overtakes the wind plasma, it will emit more or less strongly depending on the plasma's local density. By watching for such variations in the coming years, Bower hopes to see signs of what the star went through during its final millennia.
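The "thousand times" playback factor is simple ratio arithmetic. A sketch with assumed speeds; the text quotes only "thousands of kilometers per second" for the shock and describes the pre-explosion wind as slow:

```python
# How quickly the shock "replays" the star's mass-loss history.
# Both speeds are assumed illustrative values.
shock_speed_km_s = 10_000.0   # "thousands of km/s", a few percent of c
wind_speed_km_s = 10.0        # typical slow wind from an evolved star

playback_factor = shock_speed_km_s / wind_speed_km_s
print(f"playback factor: about {playback_factor:.0f}x")

# Wind material expelled 1000 years before the explosion sits roughly
# where the shock arrives one year after it, so each year of radio
# monitoring probes on the order of a millennium of the final wind.
years_observed = 5
print(f"{years_observed} yr of monitoring ~ "
      f"{years_observed * playback_factor:.0f} yr of wind history")
```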
Here is the team's discovery announcement.
English Tap Pronunciation Course.
Pronunciation is not just the way we make the sounds of words but how the tone and stress change and make patterns throughout words and sentences.
Take the sentences:
- "The high court has ruled that social workers weren't to blame for a child's death"
- "The high court has ruled that social workers were to blame for a child's death"
as an example. Superficially there might seem to be only a very slight difference between these sentences, and an inexperienced listener might be mistaken. Intonation is used to communicate meaning too, and these two sentences should be read and intonated in quite different ways.
This course takes you through the basic sounds of English and finishes on more advanced aspects of pronunciation, such as the stress patterns of long words and the intonation and phrasing of sentences.
We should not only be able to clearly pronounce words using the correct basic sounds but also know how to express ourselves using stress and intonation.
Stress is the way of making certain words or parts of words sound louder or more powerful. Intonation is the way our voice tones rise and fall during words and sentences.
In many languages intonation is used to show the difference between a question and a statement.
This is not usually the case in English; however, intonation is still very important in communicating feeling and meaning.
In spoken English we can change the meaning of a sentence simply by changing the intonation and stress.
What is incomplete pollination?
Pregnancy is an all-or-nothing situation; a woman either is or she isn’t. But in plants, the situation is very different. Partial or incomplete fertilization occurs when some of the ovules are pollinated and some are not. Insufficient pollination limits the number of seeds that can be formed, but it can also have a large impact on the amount of fruit produced per plant, as well as the quality of fruit.
Pollination is performed by wind, gravity, rain, or animals. Some flowers can self-pollinate, but many others must be pollinated with pollen from different flowers on the same plant, or by flowers from a totally different plant.
One of the easiest places to see incomplete pollination is in corn, a wind-pollinated plant. To be completely fertilized, every single silk (flower) must be pollinated separately. Once pollination is complete, a pollen tube grows down through the silk, the ovule is fertilized, and the kernel can grow. Any silks that were not pollinated produce nothing. In the photo you can see the difference between a fully pollinated ear, and one that was only partially pollinated.
In brambles such as blackberries, raspberries, and salmonberries each of the little nubbins (technically called drupelets) needs to be pollinated separately as well. If not, the berries are irregular and small.
In trees such as apple and pear, partial pollination results in lop-sided fruit. The ovaries of these fruits are divided into compartments called carpels. When the ovules in the carpel are properly fertilized, the carpel expands and provides protection for the developing seeds. If no fertilization occurs within a carpel, the carpel stays small. Poor pollination of the trees will greatly reduce both the yield and the quality of fruit.
In many other plants, insufficient pollination results in small or misshapen fruits as you can see in the photo of cucumbers, strawberries, and blueberries. Simply stated, if there are few viable seeds inside, the plant doesn’t need to grow a big fruit to protect them while they develop—a scrawny little fruit will do.
The flesh of tomatoes expands and ripens when hormones are released by the developing seeds. If a tomato is only partially fertilized, the fruit may be irregular or part of it may never ripen.
To further complicate the process, all this pollination needs to take place during a short flowering period with proper weather. So when you think of pollination, discard the idea that it is yes or no, on or off, black or white. Pollination is a “process” with many discrete steps that demand a lot of helpers and a whole bunch of luck.
Protesting in the 1960s and 1970s
Michelle L. Janowiecki
Digital Exhibits Intern, American Archive of Public Broadcasting
When discussing the role of protests in America, it seems fitting to begin in the 1960s—one of the most contentious decades in living memory. The decade that began with the protests of the civil rights movement would end in a wave of activism by students, marginalized communities, and women that continued into the mid-1970s. As one historian put it,
"In the 1960s, dissidents shook the very foundation of U.S. civil society." 4
The AAPB holds notable audio and video from the civil rights movement, which reached the peak of its activism in the mid-1960s. Commentary from Rosa Parks, audio from members of the Congress of Racial Equality (CORE) and Student Nonviolent Coordinating Committee (SNCC), interviews with members of Southern Christian Leadership Conference, and radio programs describing stay-outs vividly show the organized resistance to discrimination and the use of protest during the civil rights movement. Another AAPB exhibit, Voices from the Southern Civil Rights Movement, provides an in-depth examination of the movement in the American south.
During the mid-1960s, a new strategy and philosophy to improve black lives grew out of the civil rights movement. The Black Power Movement, espoused by organizations like SNCC and the Black Panther Party, began to advocate and rally in favor of black pride, black liberation, and revolutionary determinism. Pieces like Sproul Hall sit-in documentation (1968), Rally for the Oakland Seven (1968), and Cecil Williams and Angela Davis Speaks (1972) document these protests as well as show their intersection with the rise of the “New Left” and student radicalism.
The Red Power Movement and the Chicano Movement also fought against racism and sought to renew ethnic pride during the turbulent decades of the 1960s and 1970s. The Red Power movement was an inter-tribal movement by American Indians that fought for self-determination, sovereignty, and better reservation conditions during the late 1960s and the 1970s. The AAPB contains three pivotal moments of the Red Power movement in its archives: programs covering the occupation of Alcatraz Island in 1969, the Siege of Wounded Knee in 1973, and the Longest Walk in 1978. These three protests highlighted the concerns of American Indians to the public through acts of civil disobedience and mass protest. The Chicano Movement also began in this period, fighting for better labor conditions, against racism, and seeking to celebrate Mexican-American heritage.
Tour Our Resources:
- 1956—Commentary of a Black Southern Bus Rider / Rosa Parks
- 1960—Sit-ins and the New South, Florida
- 1961—Children of McComb
- 1963—Stay Out For Freedom; Boycott Report
- 1963—March on Washington; George Geesey Introduction; Part 1 of 17
- 1964—New England Scene; St. Augustine
- 1964—Mississippi Project; Andrew Young Interview / Ted Mascott
- 1966—Cesar Chavez speaks on the Delano Grape Strike
- 1968—Rally for the Oakland Seven
- 1969—Radio Free Alcatraz
- 1970—The Chicano moratorium
- 1972—Cecil Williams and Angela Davis Speak
- 1973—The road to Wounded Knee I: conditions at Pine Ridge (Part 1 of 5)
- 1978—The Longest Walk
Keep Exploring— More Online Exhibits:
The Civil Rights Act of 1964: A Long Struggle for Freedom: This exhibit by the Library of Congress “explores the events that shaped the civil rights movement, as well as the far-reaching impact the act had on a changing society.” The exhibition includes “archival footage of the era, as well as contemporary interviews with civil rights leaders and activists reflecting on the civil rights era.”
The Civil Rights Movement in The Bay Area: This exhibit from the Bancroft Library at the University of California, Berkeley documents the civil rights movement within California “through photographs and news stories from the News-Call Bulletin Newspaper during the period 1960-1965."
Voices of Civil Rights: Through images and quotes, this exhibit by the Library of Congress provides insight into the individual voices of the civil rights movement.
Hispanic Americans: Politics and Community, 1970s-Present: This 2005 exhibit from the University of California features the artwork, images, and moments from the Chicano and the La Raza movements.
Black Power! The Movement, The Legacy: This exhibit from the Schomburg Center for Research in Black Culture explores the Black Power movement's impact on American life through photography.
New Mexico Navajo Protest, 1974: This collection from Stanford University provides images from 1974 Navajo protests. These protests were sparked after white teenagers, who attacked and killed three Navajo individuals, only had to attend reform school for their crimes.
The growth of the New Left and student radicalism began in the early 1960s and reached its height during 1968. This new political movement sprouted protests on college campuses from the East Coast to the West Coast on issues including the Vietnam War, free speech, the environment, and racism. Including student groups like Students for a Democratic Society and the Free Speech Movement in Berkeley, the New Left rallied for the “common struggle with the liberation movements of the world.”5
The women’s liberation movement also gained renewed energy and force in the late 1960s and early 1970s as women fought for equal pay, equal treatment, and new opportunities. A vivid piece of this history is found in this recording of a 1973 celebration of International Women’s Day, in speeches by Equal Rights Amendment supporters and in this recording of 1970s rally by women fighting to gain child care support.
During the late 1960s and the 1970s, gay and lesbian activism also flourished in the form of parades and demonstrations as activists and supporters protested the stigmatization of the gay community, demanded equal rights, and celebrated their identities. 1969 would prove especially momentous, as the Stonewall uprising in New York City propelled activists around the United States into action and prompted annual pride parades. Be sure to check out this resource, from the New York Public Library, to learn more: 1969: The Year of Gay Liberation.
Protests against the war in Vietnam loomed large in New Left activities, drawing crowds of students, non-students, radicals, and moderates who agreed that the conflict should end. Protests against the Vietnam War began to gain prominence in 1965 on college campuses and around the United States, eventually garnering national attention in the following two years. Some civil rights leaders, such as Martin Luther King Jr. and James Bevel, also joined the antiwar movement. In an anti-war protest in April of 1967, King and other members of a broad coalition called the Spring Mobilization Committee to End the War in Vietnam helped to lead a march of 300,000 anti-war protesters in New York City. Hispanic community leaders, in events like the Chicano Moratorium, and black community leaders also protested that the war had a greater impact in terms of deaths and suffering on their communities.
In the fall of 1967, over 1,000 student protesters returned their draft cards at the steps of the Justice Department. It was the beginning of over 25,000 draft cards burnt, returned, or destroyed in protest during the course of the war. Recounting in 1982 his role in this act of civil disobedience, Rev. William Sloane Coffin of Yale remarked, “My own feeling was that this war was so wrong that having done all the other things I just felt I would have to commit civil disobedience. Now, it’s not an easy thing to do if you’re married and if you have small children…[But] I felt sort of a wider parish of students were turning in their draft cards. And what was their chaplain going to do? And the obvious thing was that the pastor should stand by his parishioners.”
“I just felt I would have to commit civil disobedience.” 6
By the fall of 1967, only 35 percent of Americans supported U.S. policies in Vietnam.7 And during 1968, the anti-war movement only gained in momentum and fervor. In particular, the Democratic Party felt the effects of anti-war sentiment as the party became increasingly divided over the war. During the 1968 Democratic Convention in Chicago, a stand-off between anti-war protesters and police erupted in violence as police brutally and indiscriminately used force against the crowds gathered to protest.
From 1968 to 1970, protests continued in force as events like the Tet Offensive, My Lai massacre, and the Kent State massacre led individuals to further protest the role of the United States in Vietnam. The last event—in which National Guardsmen shot and killed four Kent State students at an anti-war protest—led to a nationwide student strike that shut down 500 colleges. As one historian put it, “By January 1973, when Nixon announced the effective end of U.S. involvement [in Vietnam], he did so in response to a mandate unequaled in modern times." 8
Tour Our Resources:
- 1968—Sproul Hall sit-in documentation (Part 1)
- 1968—Sproul Hall sit-in documentation (Part 2)
- 1969—Harvard: Where do we go from here?
- 1969—What Does the Campus Upheaval Mean?
- 1970—Raw footage of Amherst College Takeover
- 1967—Vietnam: A Television History; Peace March Scenes, United Nations
- 1967—Public Affairs; Wakefield Rally
- 1968—Commentary by Sidney Roger; Nikki Bridges interviewed by Sidney Roger
- 1968—Vietnam: A Television History; 111; CBS News Special: 1968
- 1970—The Chicano moratorium
- 1970—Program about the Jackson State Killings, Jackson, Mississippi
- 1971—Program to Commemorate the Kent State Shootings
- 1972—People's Blockade protesters arrested at Twin Cities Arsenal ammunition plant
- 1973—Anti-War Demonstration on President Richard Nixon 2nd Inauguration
- 1977—Kent State University Rally in Kent, Ohio (During Tent City, 1977)
- 1982—Interview with William Sloane Coffin
- 1970s—University of Minnesota Child Care Rally
- 1973—International Woman’s Day
- 1975—Supporters of ERA Outline the Benefits of the Amendment
Keep Exploring— More Online Exhibits:
1969: The Year of Gay Liberation: During the 1960s and 1970s, the fight for LBGTQ rights accelerated. This resource from the New York Public Library covers the movement during the pivotal year of 1969.
Raised My Hand To Volunteer: Want to learn more about student protests? Check out this exhibit by the University of North Carolina at Chapel Hill. It provides an overview of 1960s student protests at UNC through “digitized documents, images, and other archival materials.”
Catching the Wave: Photographs of the Women’s Movement: This resource from Harvard’s Schlesinger Library provides a rich sample of photographs from the Women’s Movement from feminist photographers Bettye Lane and Freda Leinwand.
Chicago: Law and Disorder: This exhibit from the Chicago History Museum covers the 1968 Democratic Convention through photographs, documents, and ephemera.
Researchers at the Korean Advanced Institute of Science and Technology (KAIST) have created composite materials using graphene that are up to 500 times stronger than the raw, non-composite material. This is the first time that graphene has been successfully used to create strong composite materials — and due to the tiny amounts of graphene used (just 0.00004% by weight) this breakthrough could lead to much faster commercial adoption than pure graphene, which is still incredibly hard to produce in large quantities.
At this point, we shouldn’t be wholly surprised that graphene — which holds a huge number of superlative titles, including the strongest material known to man — can also be used to create strong composite materials. In this case, the KAIST researchers created a copper-graphene composite that has 500 times the tensile strength of copper (1.5 gigapascals), and a nickel-graphene composite that has 180 times the tensile strength of nickel (4 gigapascals). This is still some way off graphene’s tensile strength of 130 GPa — which is about 200 times stronger than steel (600 MPa) — but it’s still very, very strong. At 1.5 GPa, copper-graphene is about 50% stronger than titanium, or about three times as strong as structural aluminium alloys.
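To put the quoted 1.5 GPa in context, the comparisons in this paragraph are simple strength ratios. A sketch; the titanium and aluminium-alloy figures below are assumed typical handbook values, not numbers from the paper:

```python
# Compare the copper-graphene composite against common structural metals.
composite_mpa = 1500.0   # copper-graphene tensile strength, from the article

references_mpa = {
    "structural steel": 600.0,             # figure quoted in the article
    "titanium (typical, assumed)": 1000.0,
    "aluminium alloy (typical, assumed)": 500.0,
}

for name, strength in references_mpa.items():
    print(f"vs {name}: {composite_mpa / strength:.1f}x")
```

With these reference values the composite comes out about 1.5x titanium and 3x structural aluminium alloys, matching the claims above.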
To create these composites, the KAIST researchers use a process called CVD (chemical vapor deposition) to grow monolayers (one-atom-thick layers) of graphene. These monolayers are then deposited onto a thin film of metal (copper or nickel). Another layer of metal is then evaporated (a method of deposition) on top of the graphene. This process is repeated, until you have a sandwich consisting of a few layers of metal and graphene. Different metal thicknesses were tested (between 70 nm and 300 nm), and it was found that thinner layers result in much stronger composites. Because graphene is so thin, the amount used is absolutely tiny: Just 0.00004% of the metals by weight.
The reason these composites are so strong is that the graphene stops the metal atoms from slipping and dislocating under stress. In a solid metal, if a slip plane forms (due to stress), the atoms will readily slip apart, causing a fracture. The layers of graphene stop the metal atoms from sliding — the metal atoms cannot physically pass through the super-strong graphene — so no fractures can form (pictured above). It’s essentially the metallic equivalent of steel-reinforced concrete. In case you were wondering, this is also one of the primary reasons why metals are nearly always used in alloy form — because there’s a mix of different metal atoms with different atom sizes, it’s much harder for slip planes to form.
Moving forward, the researchers will now need to find a way of mass-producing these graphene-based composites, preferably with a roll-to-roll or metal sintering process. These composites, due to their massive strength, could find myriad uses in the automotive and aerospace industries, or simply as a new tool for structural engineers and industrial design. Just the sheer fact that we now know that graphene can be used as a composite is massive news, too: If graphene can suddenly turn soft copper into a structural material, imagine what it might do for something like titanium or steel, or even commercial polymers like Kevlar.
Research paper: doi:10.1038/ncomms3114 – “Strengthening effect of single-atomic-layer graphene in metal–graphene nanolayered composites”
Cave and karst systems are important to our nation for numerous reasons. Groundwater comprises the largest single freshwater resource, and about 25% of this groundwater is located in cave and karst regions. The protection and management of these vital water resources are critical to both public health and sustainable economic development. Water resources and supplies are a critical concern as society enters the twenty-first century.
Caves are also storehouses of information on natural resources, human history, and evolution. Therefore, many avenues of research can be pursued in caves. Recent studies indicate that caves contain valuable data that are relevant to global climate change, waste disposal, groundwater supply and contamination, petroleum recovery, and biomedical investigations. Caves also contain data that are pertinent to anthropologic, archaeological, geologic, paleontological, and mineralogic discoveries and resources.
Many researchers have turned to caves as natural laboratories where over eons paleoclimatic evidence has been naturally deposited and is awaiting discovery. For example, recent discoveries in Manhole Cave in New Mexico raised scientific interest in the possibilities of gaining further insight into global climate change following analyses of materials found in this cave.
Many caves act as natural traps for flora and fauna. Paleontological excavations in caves have yielded the discovery of new species of extinct animals, such as a mountain goat and a bush ox related to the present-day musk ox (Ovibos moschatus). These finds add to the knowledge of paleo-fauna and aid in the understanding of global climate change.
Cave and karst lands provide specialized habitats and environments. Animal species living in caves have special adaptations that help them survive in total darkness, such as extreme longevity and enhanced sensory perceptions. The adaptations reveal much about the evolutionary responses to past environmental changes and may provide valuable clues to current climate change. |
Good readers try to determine importance in what they are reading. Usually it is automatic. If it is not, meaning will break down. When meaning breaks down, the point of reading is lost. There are three levels to consider when determining importance: word level, sentence level, and text level.
Word level refers to the ability to understand which words convey the most meaning.
“Her attempts at reconciliation were fleeting.”
Ask: What words are most important and why? What impact will they have on the storyline? What will this do to the plot? To the characters’ lives?
Sentence level means determining the most important sentences, the ones that carry the weight of the passage.
“She was determined to block entry at all costs, pushing furniture in front of the passage and flinging anything she could lay her hands on into the path. But beyond all reason and effort, they made their way nearer to her hiding place. At last, they reached the threshold. Try as she might, the weight of the door was too much.”
Ask: Where is the pivotal moment in this paragraph? What will change the course of events for the character?
Text level generally refers to key ideas, concepts, or themes in the entire text.
Think in terms of Aesop’s fables…
The Kid and the Wolf
A kid, returning without protection from the pasture, was pursued by a wolf. Seeing he could not escape, he turned round and said: "I know, friend Wolf, that I must be your prey, but before I die I would ask of you one favor: that you play me a tune to which I may dance." The wolf complied, and while he was piping and the kid was dancing, some hounds hearing the sound ran up and began chasing the wolf. Turning to the kid, the wolf said, "It is just what I deserve; for I, who am only a butcher, should not have turned piper to please you."
~ In time of dire need, clever thinking is key; or, outwit your enemy to save your skin.
Ask: What is the moral? What is the point? How does this apply to my life? How can I connect this to other things? What point is the author trying to make? What should I walk away remembering?
Thinking in this way gives readers an opportunity to interact with text, to enjoy it, and really derive meaning from the text.
Wishing you homeschool blessings, |
Many children find it difficult to learn to tell the time. Mastering the basic concepts is important in helping them understand ‘time’, particularly when they move on to digital timekeeping.
The front side of the ‘Telling the Time’ study mat presents colourful clock faces with example times of o’clock, half past, quarter past and quarter to. In the examples shown, clock times are given both in numerals and in words. A list of ‘Time Words’ is also featured for reference.
There are many activities on the reverse of the mat, giving the children practice in writing the time on clock faces from given instructions, writing the time in numerals, filling in missing times on grids and writing the time in both numerical and word description. |
What word comes to mind when you think of your child at play? Did you say ‘toys’? If you did, your answer is very common. Perhaps a better way, however, to think of play is through the word ‘activity’. What will your child do as he plays? What will your child say as she pretends?
We know today that child’s play is not merely play - it’s the child’s work. It’s the way children interact with the world around them and the way they grow in any number of social, emotional and educational skills. And while some children will choose imaginative play over television or computer games, most will need a nudge in that direction. It’s not enough to select mostly educational toys because, as great as they are, the real need children have to create, explore, pretend and design is met in play without lots of man-made materials. They need interaction with the simplest objects, such as water and cups, play dough, or rice and containers. They need to play with found objects such as rocks and sticks. They need practice with imaginative play using their stuffed animals or dolls.
Here are some very simple tips for creating the kind of creative environment that a young child needs and will learn to love if it’s available:
Create an arts and crafts centre - Store a variety of simple household goods and supplies to encourage creative play. These supplies will include cardboard boxes, paper of all kinds, art materials, play dough (or a homemade version of it), fabric and old clothing, etc.
Ask leading questions - Suggest play scenarios to get the creative juices flowing. You might say to your child, “Why don’t you build a racetrack for your cars?” Or, “What’s happening at your farm today? What are the animals doing?” Or, “I wonder if you can make a fort with all those blankets over there.”
Set the stage - Get involved with your child’s play by creating play money, gathering kitchen containers, cartons and boxes. Before you know it, all the pieces are in place to play ‘store.’ Or gather a supply of paper, pencils, markers and envelopes and you’re ready to play ‘post office.’ Gather all the stuffed animals and play ‘going to the zoo.’
Clear creative playtime - Limit screen time in the home. There are wonderful television shows to enjoy and there are great educational opportunities on the computer too, but be sure to monitor the time spent on those activities to protect a quiet home environment encouraging the creative play so necessary to healthy child development.
It’s actually delightful to set the stage for creative play and then sit back to observe what children do. They’ll create characters, conflict and dialogue. They’ll work out problems and design new worlds. Your walls and shelves will be filled with original pieces of art and your children will be accomplishing the tasks they were designed to do - their work - creative play.
Jan Pierce, M.Ed., is a retired teacher and freelance writer who specializes in parenting and family life articles. Find her at janpierce.net.
Calgary’s Child Magazine © 2017 Calgary’s Child |
Sunspots are much cooler than the surrounding solar surface because the magnetic fields that create them reduce convective heating. But you knew that. So why are the regions above them hundreds of times hotter? To wit:
To help find the cause, NASA directed the Earth-orbiting Nuclear Spectroscopic Telescope Array (NuSTAR) satellite to point its very sensitive X-ray telescope at the Sun. Featured here is the Sun in ultraviolet light, shown in a red hue as taken by the orbiting Solar Dynamics Observatory (SDO). Superimposed in false-coloured green and blue is emission above sunspots detected by NuSTAR in different bands of high-energy X-rays, highlighting regions of extremely high temperature. Images like this give clues about the Sun’s atmospheric heating mechanisms, shedding light on solar nanoflares and microflares, brief bursts of energy that may drive the unusual heating.
Find these astronomy posts fascinating even if it’s only for the ‘Wow’ factor, so keep ’em coming Chompsky please. Wow ..
+1 but how the fupp do we still not know how the Sun works?!
Keep ‘em coming Chompsky! |
Dyslexia, music and exams
Learning to play an instrument or to sing presents particular challenges for people with dyslexia. Sally Daunt, from the British Dyslexia Association, summarises the issues involved and suggests strategies for supporting students.
Why should we, as music teachers, parents, carers, candidates or examiners be bothered about dyslexia? Well, it is generally accepted that dyslexia affects 10% of the population and it can affect musical activity.
What is dyslexia?
Dyslexia is one of a number of Specific Learning Difficulties (SpLDs) and may overlap with others: dyspraxia, dyscalculia, attention deficit (and hyperactivity) disorder and autistic spectrum disorders. The British Dyslexia Association describes it as ‘a combination of abilities and difficulties that affect the learning process’. It can affect reading, spelling, writing and music – both theory and practical. It’s lifelong, can vary in severity, is independent of intelligence and is hereditary. Helpful strategies can certainly be developed and, importantly, dyslexic individuals may have particular strengths in areas such as design, problem solving (think of Albert Einstein) and creative skills (think of Nigel Kennedy or Cher).
How do I recognise dyslexia?
One of the key indicators of dyslexia is a mismatch between someone’s perceived intellectual ability and the way that person works day to day. Tasks may take a surprisingly long time and there may be problems with the speed of processing information, short-term memory, organisation, spoken language and motor skills. There can also be problems with auditory and/or visual perception, including ‘visual stress’– a distortion of text or musical notation. Many activities require much extra effort for dyslexic individuals, leading to exhaustion and stress: look out for this. Dyslexic people may also have low self-esteem, so encouragement and patience are key.
How is dyslexia identified?
There are short tests designed to flag up the probability of dyslexic difficulties (not a diagnosis) as well as full diagnostic assessments (see the BDA website for information). For school-age pupils, speak to the school’s Special Educational Needs Co-ordinator (SENCo).
How does dyslexia affect music learning?
Commonly reported difficulties with music include reading notation, especially at sight, and learning new music quickly. Remembering interval names and the number of sharps or flats in a key signature, and recognising cadences can all cause problems. Taking information from written music, especially fingerings, and applying them to the instrument can be difficult. Aural work is often challenging.
"All strategies need to be individualised. Pupils, however young, know best what helps them, so ask ... good strategies for dyslexic pupils are usually really good general teaching strategies too!"
How can you support a pupil with dyslexia?
All strategies need to be individualised. Pupils, however young, know best what helps them, so ask!
Help with visual stress
Visual stress can be helped with individually chosen tinted paper, coloured overlays and/or enlargement, including Modified Stave Notation – Google that! Specialist tinted glasses and/or use of technology that modifies the format of music can be useful (find out more from the BDA). Remember, it is legal to photocopy music to make it easier for someone who ‘has a cognitive impairment such as dyslexia’ to read, as long as the original is taken into the exam or performance.
It may be that written music isn’t always necessary and, as an alternative, improvisation and memorisation can both be fulfilling. Multi-sensory approaches are also helpful. For example, work on intervals by making shapes in the air or steps on the ground, reinforce metre through movement, and physically demonstrate terms such as ‘high/low’ and ‘right/left’. Using colour can be useful, with pupils choosing preferences and annotating music themselves. For short-term memory problems, try chunking or breaking down. Aural can be treated in this way, gradually building up to longer phrases. Generally, be sure of one point or skill before moving on. Use over-learning or revision with plenty of time to firm up skills. Both Dalcroze and Kodály are worth exploring for their dyslexia-friendly approaches. Indeed, good strategies for dyslexic pupils are usually really good general teaching strategies too!
Support with organisation
Organisation can be difficult for some dyslexic people. Do you know a student who constantly turns up for lessons without the right music or at the wrong time or place? That person may be dyslexic. ‘To do’ lists can be attached to music cases – less likely to get lost! Send texts/emails and encourage students to put reminders on their phones. Have a website with useful information and perhaps videos with ‘how to practise’ demonstrations. Be imaginative!
Taking an exam
Many aspects of dyslexia can affect exams. Accessing sight-reading and written material can be difficult for SpLD candidates. Short-term memory problems can affect aural tests. Verbal instructions can also be difficult. Think about the following: ‘Please play B ... harmonic ... minor ... a third apart ... staccato’. A dyslexic candidate may well have forgotten the key by the end of such a sentence.
ABRSM and other exam boards offer ‘reasonable adjustments’ for candidates with a large range of disabilities. They don’t make the exam easier, but do create a level playing field. Remember, dyslexia is a ‘disability’ and it is illegal to discriminate against disabled people. To benefit from these adjustments, candidates must have written proof of their dyslexia – you can contact the BDA or ABRSM if you need help here. You also need to include the correct information when making an exam entry. ABRSM’s Access Co-ordinator can explain the range of possible adjustments, which include extra time and modification of written papers and sight-reading. Depending on the adjustment, you may need to send examples of the type of paper/print needed to ABRSM.
Elements of the exam
In the aural tests that include listening to a musical example or phrase, candidates may be able to ask for the question to be repeated. Also, if a candidate answers and it seems that they have misunderstood the question, the examiner may restate the question and ask them to answer again. Additional attempts at the scales may be possible and candidates may be able to use a scale book for reference – or the unaccompanied traditional song words, for singers. There may also be options to annotate sight-reading tests during preparation, using colour if that helps, and to make notes of verbal instructions during the exam. Do remember, you will need prior approval from ABRSM for these options, so they can provide special copies and meet any other requests. Adjustments cannot be approved by the examiner on the day of the exam.
Finally, don’t forget ABRSM’s Performance Assessment, which provides another option and may be more appropriate for some musicians with dyslexia.
This article was originally featured in the April 2017 edition of Libretto, ABRSM's magazine.
Sally Daunt is Chair of the British Dyslexia Association music committee and a support tutor at the Liverpool Institute for Performing Arts.
For full details of our provision for candidates with dyslexia, dyspraxia and other learning difficulties: www.abrsm.org/specificneeds
British Dyslexia Association
Search for ‘music’ at www.bdadyslexia.org.uk or email the BDA Music team for more information: [email protected]
Incorporated Society of Musicians
The ISM has a free webinar on music and dyslexia: www.ism.org/professional-development/webinars
Music Publishers Association
For information about photocopying, see page 11 of the MPA’s Code of Fair Practice: www.mpaonline.org.uk/content/code-fair-practice |
We are now in the process of collecting agricultural waste from farmers and using it as feedstock to make biogas and bio-manure. After these products have been processed, farmers will be able to use them in their fields as fertilizers and plant food.
Most farmers are not aware of the toxic build-up of animal manure in their fields. Although organic materials like animal manure help plants grow, too much is not good for the soil and may pollute groundwater. The natural solution? This waste byproduct can be processed into biogas or bio-manure, which provides an effective, safe way to manage these wastes while also helping protect our environment.
Burning is the primary method that farmers use to get rid of their agricultural waste, and the burning of farm wastes, such as corn stalks, is also bad for the soil. Crop residues hold organic matter and plant nutrients, including the magnesium at the heart of chlorophyll, the compound that gives plants their green color, which bonds with other minerals in the soil to form organic mineral salts. When farm waste is burned, that organic matter is destroyed and much of its nitrogen escapes as gas, losses that will not be replenished by rain or irrigation. Without these nutrients, plants are deprived of what they need and wither. This leaves farmers with a tough decision: either find a way to stop burning their crop waste, or risk lower yields unless they fertilize more often.
By collecting the waste that would otherwise be burned, we help keep harmful agricultural waste from contributing to environmental pollution while creating valuable resources. The resulting biogas can be used as an alternative energy source, and the bio-manure can be used as a natural fertilizer. |
Important constituents of lipids, fatty acid molecules characteristically consist of an even number of carbon atoms linked in a chain, bonded with hydrogen atoms, and bearing a carboxyl group at one end. When all of the carbon-to-carbon bonds that hold the chain together are single, the fatty acid is said to be saturated. However, if one or more of the bonds is double, the molecule is unsaturated. As implied by their name, monounsaturated fatty acids exhibit only one double bond, while polyunsaturated acids possess two or more. A few fatty acids that contain triple bonds are also known to exist, although they are far less common than other fatty acids.
Although lipids and fatty acids are relatively large molecules, they are not biopolymers made up of consecutive units like their cousins DNA, RNA, proteins and polysaccharides. Fatty acids have two chemically different structures combined to allow the unusual properties necessary for these biochemicals to perform their biochemical functions. The generalized structure for fatty acids contains a hydrophilic (water-loving) acidic "head" and a hydrophobic (water-fearing) hydrocarbon "tail". This structural combination is similar to that found in common soaps and fatty acids display a number of soap-like properties.
Due to concerns about cholesterol and heart disease, scientists have been very interested in determining which types of fats and oils are best for one's health. All such lipids contain mixtures of fatty acids, but are designated saturated, monounsaturated, or polyunsaturated based upon their predominant component. Fats chiefly consisting of saturated fatty acids are quite stable and are typically found in solid form at room temperature. Studies have shown that these fats may raise blood cholesterol and can contribute to an increased risk of heart disease. Monounsaturated fats, which are usually liquid oils at room temperature but may begin to solidify if exposed to colder environments, have often been found to be less unhealthy than saturated varieties. Polyunsaturated fats, however, are an even greater improvement over saturated fats, and substituting these oils, which remain in liquid form even when placed in a refrigerator, for solid saturated varieties may help lower total blood cholesterol.
Making the question of "good" versus "bad" fats even more complex, however, is the common process of hydrogenating vegetable and fish oils. In nature, unsaturated fatty acids are typically found in the cis form, in which their hydrogen atoms are located on the same side of their double carbon bonds. Yet, when lipids are hydrogenated, hydrogen atoms can end up on opposite sides of the double bonds, forming what are known as trans fatty acids. In order to extend the shelf life of various food items or to create a solid product, such as margarine, from polyunsaturated oils, which may become rancid rather quickly in their unaltered form, companies often hydrogenate the fatty acids they contain. Though these products were originally believed to be better for the health of consumers than if they contained saturated fats, little evidence remains to support this notion. Most recent studies have suggested that trans fatty acids raise low-density lipoprotein (LDL) and total blood cholesterol levels, while reducing beneficial high-density lipoprotein (HDL) levels.
Fatty acids perform a variety of biochemical functions in the human body, ranging from aiding in the maintenance of the immune system to facilitating the development of healthy cell membranes and enabling the production of prostaglandins, thromboxanes, and other eicosanoids, which are involved in the regulation of vasoconstriction, blood viscosity, blood pressure, and similar activities. However, humans, as well as many other animals, are unable to synthesize all of the fatty acids they need, and some must be obtained from the diet. These essential polyunsaturated fatty acids generally consist of linoleic and alpha-linolenic acids, but arachidonic acid is sometimes also included in the group, although it may be synthesized from linoleic acid. Some good dietary sources of linoleic acid, which is part of the Omega-6 family, are green leafy vegetables, nuts, grains, seeds, and the oils made from them. A constituent of the Omega-3 group, alpha-linolenic acid is found in considerable quantities in similar items and is especially prevalent in flaxseed and fish oils.
The human body is able to metabolize fatty acids through the progressive division of pairs of carbon atoms, which are then converted into acetyl coenzyme A. This coenzyme is oxidized as part of the citric acid cycle that is also involved in the breakdown of the sugar glucose. Through this process, fatty acids yield relatively large amounts of adenosine triphosphate (ATP), making the molecules an excellent source of energy, a fact that is supported by the tendency of the human body to store excess fuel as fat. Nevertheless, some parts of the body, such as the brain, are unable to utilize fatty acids as an energy source and must instead depend upon glucose metabolism to support their activity.
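As a rough worked example (using common textbook values of about 2.5 ATP per NADH, 1.5 per FADH2, and 10 per acetyl coenzyme A oxidized in the citric acid cycle, which are not figures from this article), the complete oxidation of one molecule of palmitic acid, a 16-carbon saturated fatty acid, proceeds through seven rounds of carbon-pair cleavage and yields on the order of 106 ATP:

```latex
% Palmitate (C16): 7 cleavage cycles give 8 acetyl-CoA, 7 NADH, 7 FADH2.
\begin{align*}
8 \times \text{acetyl-CoA} &\;\rightarrow\; 8 \times 10 = 80 \text{ ATP}\\
7 \times \text{NADH} &\;\rightarrow\; 7 \times 2.5 = 17.5 \text{ ATP}\\
7 \times \text{FADH}_2 &\;\rightarrow\; 7 \times 1.5 = 10.5 \text{ ATP}\\
\text{activation of the fatty acid} &\;\rightarrow\; -2 \text{ ATP}\\
\text{net yield} &\;\approx\; 106 \text{ ATP}
\end{align*}
```

Compare this with the roughly 30 ATP obtained from one glucose molecule, and the appeal of fat as an energy store becomes clear.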
The Molecular Expressions collection of fatty acids contains only a few members of this important class of biochemicals. Fatty acids are very difficult to crystallize, which makes it hard to obtain high-quality photomicrographs. They have also been hard for us to obtain, and we are looking for new fatty acid and lipid samples. Should you be able to provide us with any samples in this arena, please contact us by phone or e-mail.
© 1995-2022 by Michael W. Davidson and The Florida State University. All Rights Reserved. |
A primary DNS server, also known as a master DNS server, plays a crucial role in the functioning of the Domain Name System (DNS). In this article, we will explore what a primary DNS server is and why it is essential for the smooth operation of the internet.
What is a Primary DNS Server?
A primary DNS server is a central authority that holds the original and authoritative copies of DNS records for a specific domain. It acts as the primary source of information for resolving domain names into IP addresses and vice versa.
How Does It Work?
When you type a website address in your browser’s address bar, your computer needs to know the corresponding IP address to establish a connection. This is where the primary DNS server comes into play.
The primary DNS server contains a zone file that consists of various resource records (RRs) associated with a domain. These RRs include information like A records (IPv4 addresses), AAAA records (IPv6 addresses), MX records (mail exchange servers), CNAME records (canonical names), and more.
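For illustration, a minimal zone file for a hypothetical domain might look like the following (all names and addresses are invented, using the reserved documentation ranges 192.0.2.0/24 and 2001:db8::/32):

```
$TTL 3600
example.com.   IN  SOA  ns1.example.com. admin.example.com. (
                    2024010101 ; serial, bumped on every edit
                    7200       ; refresh interval for secondaries
                    900        ; retry interval
                    1209600    ; expire
                    3600 )     ; negative-caching TTL
               IN  NS    ns1.example.com.
               IN  MX    10 mail.example.com.
www            IN  A     192.0.2.10
www            IN  AAAA  2001:db8::10
ftp            IN  CNAME www.example.com.
```

The SOA (Start of Authority) record marks this copy of the zone as the authoritative one; its serial number lets other servers tell whether their copy is stale.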
One crucial feature of a primary DNS server is its ability to perform zone transfers. Zone transfers allow secondary DNS servers to obtain copies of the zone file from the primary server. This ensures redundancy and improves fault tolerance in case the primary server becomes unavailable.
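As a sketch of how a secondary might request that copy, here is a minimal example using the third-party dnspython library (the server address, the domain, and the primary's willingness to allow transfers from this client are all assumptions made for the example):

```python
# Minimal AXFR (full zone transfer) sketch using dnspython.
# Assumes: `pip install dnspython`, a primary at 192.0.2.1 that
# permits zone transfers from this host, and an example.com zone.
import dns.query
import dns.zone

# Request the full zone from the primary and parse it into a Zone object.
zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.1", "example.com"))

# Print every record the transfer returned, much as a secondary
# would store them in its own copy of the zone file.
for name, node in zone.nodes.items():
    for rdataset in node.rdatasets:
        print(name, rdataset)
```

In practice, deployments usually restrict AXFR to known secondary addresses or protect it with TSIG keys, since a full zone listing is useful reconnaissance for attackers.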
Why Is It Important?
The primary DNS server acts as an authoritative source for domain name resolution. Its importance lies in its ability to provide accurate and up-to-date information about domain names and their corresponding IP addresses. Without an authoritative source like the primary DNS server, resolving domain names would be challenging and unreliable.
- Efficient Name Resolution: By having an authoritative copy of zone files, the primary DNS server can quickly respond to queries, reducing latency in name resolution.
- Redundancy and Fault Tolerance: Through zone transfers, secondary DNS servers can obtain copies of the zone file, ensuring that domain name resolution remains functional even if the primary server goes offline.
- Easy Management: As the central authority for a domain, the primary DNS server allows administrators to easily manage and update DNS records, ensuring that changes propagate correctly across the network.
In summary, a primary DNS server plays a vital role in the functioning of the Domain Name System. It serves as an authoritative source for domain name resolution, holding the original and authoritative copies of DNS records. Its efficiency in responding to queries, redundancy through zone transfers, and ease of management make it an essential component of a reliable DNS infrastructure.
By understanding the importance of a primary DNS server, you can appreciate its role in enabling seamless communication on the internet. |
As with many things in nature, it helps to understand the past when trying to predict the future.
Ilya Bindeman, an associate professor of geological sciences at the University of Oregon, believes this is true of the Yellowstone supervolcano and the likelihood that it will produce an apocalyptic eruption, as it has three times over the last 2 million years.
“Yellowstone is one of the biggest supervolcanoes in the world,” he says. “Sometimes it erupts quietly with lava flow, but once or twice every million years, it erupts very violently, forming large calderas,” which are very large craters measuring tens of kilometers in diameter.
If it happens again, and he says most scientists think that it will, he predicts such an eruption will obliterate the surroundings within a radius of hundreds of kilometers, and cover the rest of the United States and Canada with multiple inches of ash. This, effectively, would shut down agriculture and cause global climate cooling for as long as a decade, or more, he says. A volcanic event of such magnitude “hasn’t happened in modern civilization,” he says.
However, the National Science Foundation (NSF)-funded scientist doesn’t think it’s going to happen anytime soon, at least not for another 1 million to 2 million years.
“Our research of the pattern of such volcanism in two older, ‘complete’ caldera clusters in the wake of Yellowstone allows a prognosis that Yellowstone is on a dying cycle, rather than on a ramping up cycle,” he says.
By this, he is referring to an ongoing cycle that occurs within the so-called Yellowstone “hot spot,” an upwelling plume of hot mantle beneath the Earth’s surface, when magma chambers, which are large underground pools of liquid rock, reuse rocks, eject lava, melt again and prompt large eruptions many thousands of years later.
It is a complicated process that also involves the position of the North American plate, which is moving at the rate of two to four centimeters a year, and its relationship to the hot spot, as well as the continuing interaction of the Earth’s crust with basalt, a common volcanic rock derived from the mantle.
“Yellowstone is like a conveyer belt of caldera clusters,” he says. “By investigating the patterns of behavior in two previously completed caldera cycles, we can suggest that the current activity of Yellowstone is on the dying cycle.”
Calderas first form due to the hot spot’s interaction with the North American plate, forming new magma after about a two-million-year delay.
“It takes a long time to build magma bodies in the crust,” he says. “We discovered a consistent pattern: subsequent volcanism is a combination of new magma production and the recycling of already erupted material, which includes lava and tuff,” a rock composed of consolidated volcanic ash.
By comparing Yellowstone to previous completed caldera cycles, “we can detect that the Yellowstone hot spot is re-using the already erupted and buried material, rather than producing just new magma,” he says. “Either the crust under Yellowstone is turning into hard-to-melt basalt, or the movement of the North American plate has changed the magma plumbing system away from Yellowstone, or both.”
The Yellowstone hot spot has produced multiple clusters of nested volcanic craters, known as calderas, during the last 16 million years. “Caldera cycles go on for maybe several million years, and then it is done,” he adds. “The current magmatic activity in Yellowstone is in the middle of the cycle, or at the end, as three caldera forming eruptions have already happened.”
The three most recent eruptions, which occurred 2 million, 1.3 million, and 640,000 years ago, resulted in a series of nested calderas forming what we know as Yellowstone National Park and its immediate vicinity.
Eventually, the cycle comes to an end for unknown reasons.
“By performing micro-analytical isotopic investigation of tiny minerals in rocks, we are trying to understand when it’s done,” he says. “We know the behavior of the past and we know at what comparative stage Yellowstone is right now. We think Yellowstone is currently on a third cycle, and it’s a dying cycle. We’ve observed a lot of material that represent recycled volcanic rocks, which were once buried inside of calderas and are now getting recycled. Yellowstone has erupted enough of this material already to suggest that the future melting potential of the crust is getting exhausted.”
To be sure, however, he also points out that “everything is possible in geology, and not very precise.”
Bindeman is conducting his research under an NSF Faculty Early Career Development (CAREER) award, which he received in 2009. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education, and the integration of education and research within the context of the mission of their organization. NSF is funding his work with $533,606 over five years.
As part of the grant’s education component, Bindeman is training graduate and undergraduate students using lab-based learning, summer research programs for undergraduates and community college students, and through new courses.
He also has developed exchanges and collaboration among graduate and undergraduate students and scientists in the United States, Switzerland, Russia, France and Iceland.
“International exchange will involve collaborative lab visits, joint fieldwork, excursions for foreign students, and international student and postdoc hiring,” he says. He recently led a two-week Yellowstone field school for graduate students and professors visiting from Switzerland.
Bindeman’s research involves using radioactive dating to determine the age of volcanic materials, such as tuff and lava, “with the goal of understanding its history,” he says. “Knowing the age is important as a context for understanding everything else.”
They analyze oxygen isotope ratios in quartz and zircon, and water- and heat-resistant minerals, from volcanic rocks. Despite re-melting, the zircon crystals have retained their isotope signatures, enabling the scientists to date their cores and rims, and look into the history of the magma assembly.
“We found patterns indicating that material was recycled as older volcanic rocks forming the roofs of magma chambers collapsed and re-melted during eruptions, only to be re-ejected in the next volcanic outburst,” he says.
Specifically he and his team studied the two most recently completed cycles, that is, the cycle that produced the eruption of 2 million years ago, known as Heise, and the one that followed, producing the eruption of 1.3 million years ago, known as Picabo.
The results of those studies enabled them to determine the current state of the supervolcano, and predict that a new catastrophic caldera-forming eruption likely will happen only in 1 million to 2 million years, probably in Montana.
An eruption of such power has not occurred anywhere in the world for at least 74,000 years. “The last one was in Toba, Indonesia,” he says.
Bindeman also is investigating the potential effects of the next massive eruption on the atmosphere. “Sulfur dioxide gas will be released in large quantities, resulting in global cooling and ozone destruction, but nobody knows yet how cold it’s going to get and what will be the effects of temporary ozone layer destruction,” he says.
To convey the power of the last Yellowstone eruption, and quite possibly the next one, Bindeman cites two recent examples for comparison purposes: The 1980 eruption of Mt. St. Helens in Washington State, which killed 57 people and caused widespread destruction, spewed one cubic kilometer of material into the air, he says. The 1991 eruption of Mt. Pinatubo in the Philippines, which killed hundreds of people and for several years decreased global temperatures, released ten cubic kilometers, he says.
“The last Yellowstone eruption 640,000 years ago released 1,000 cubic kilometers of material,” he says. |
Piston and cylinder - What is the role of the piston and cylinder in the engine?
The cylinder is probably the basis of everything. It determines the unit volume of the engine (total engine volume = unit volume x number of cylinders), but also the noise you will produce by squealing your tires at traffic lights. As a rule, those with more cylinders under the hood squeal louder… But, joking aside!
Here we go with the math again…
The cylinder is, as its name suggests, a part of the engine with a round (pipe-section) cross-section, defined by the following dimensions: diameter and length. Basically, the cylinder is a "round hole" in the engine block, inside which the piston moves. The diameter of the cylinder, which we call the piston diameter or bore, is self-explanatory, while the length we are interested in is the distance between TDC (top dead center, the highest position of the piston crown) and BDC (bottom dead center). That length is called the piston stroke.
A little math that we may remember from elementary school: our cylinder can be represented as the geometric body of the same name (to be very precise, a "right circular cylinder"). The volume (V) of such a body is equal to the product of the area of its base and its height. The area of the base is the square of the radius (r) multiplied by Pi, so V = π · r² · h, where the height (h) here is the piston stroke.
It would be a good idea not to mix units. So, when calculating the volume of a cylinder, make sure all dimensions are expressed in the same unit, e.g. mm or cm. And, please, dear students, do not ask what Pi is, because you know full well that it is Archimedes' constant, or Ludolph's number, whose numerical value is 3.14159 26535 89793 and so on. In practice, a value of 3.14 is used.
Fortunately, here's a practical application of what we've just learned
If you read the technical data of a car, you will often come across information about the bore and stroke of the pistons expressed as diameter x stroke. As an example, take the Range Rover Evoque 2.2 SD4 engine, whose bore and stroke are 85.0 x 96.0 mm (nice round numbers that will make our calculation easier!). Knowing the bore and stroke of the Evoque engine, we can calculate the volume of an individual cylinder. In this particular case, the unit volume is 544.75 cm³ (cubic centimeters), i.e. 0.54475 l (liters), with all values rounded to two decimal places for convenience. And, as it is a 4-cylinder engine, we get the total volume of this engine by multiplying the (unit) volume of each cylinder by 4. Such a calculation gives a value of 2179.00 cm³, or 2.179 liters, which is, of course, the correct figure.
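As a quick sanity check of those figures, here is a short Python sketch that reproduces the calculation (the bore and stroke values are the ones quoted above; nothing else is assumed):

```python
# Recompute the Evoque 2.2 SD4 displacement from bore and stroke.
import math

bore_mm = 85.0      # cylinder (piston) diameter
stroke_mm = 96.0    # distance between TDC and BDC
cylinders = 4

radius_cm = bore_mm / 10 / 2   # 4.25 cm
stroke_cm = stroke_mm / 10     # 9.60 cm

unit_cc = math.pi * radius_cm ** 2 * stroke_cm
total_cc = unit_cc * cylinders

print(f"unit volume:  {unit_cc:8.2f} cm^3")   # ~544.75 cm^3
print(f"total volume: {total_cc:8.2f} cm^3")  # ~2179.00 cm^3 (~2.2 l)
```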
Let's also note that in practice, for commercial/marketing/simplification reasons, engine volumes are rounded so that they fit neatly into the designation of a car model. Our Evoque thus bears the 2.2 SD4 engine designation because, when we round the volume we just calculated, we can say that it is approximately 2.2 liters. Most often, the volume is denoted to one decimal place in liters, but recently we have also encountered engine designations such as 1.25 which, in one particular case, is a 1242 cm³ engine (strictly mathematically speaking, Ford took a little too much freedom in rounding there…).
The cylinders determine the engine
Of course, there are engines with more, but also fewer, cylinders. The most common cylinder layouts are as follows: in-line engines have their cylinders in a row (usually reserved for smaller engines), V-shaped engines save on total engine length, and boxer engines have cylinders facing each other (even force distribution and a low engine silhouette).
Finally, cylinder construction should be mentioned. The classic solution is the sleeved (lined) cylinder: such cylinders carry another "tube" of wear-resistant material within their bore. Engines made of quality alloys have no sleeves, and the piston is in direct contact with the cylinder wall, which is itself part of the engine block. A special solution is the so-called "wet" liner, common in high-performance engines. Such an engine has liners that are not embedded in the cylindrical part of the block but stand (almost) independently, so that the coolant can reach them more easily.
Finally, the piston!
The piston is what "runs" in the cylinder from one end position (dead center) to the other. It is a metal part shaped approximately like a cup with a round cross-section, turned upside down. Its task is to take on the pressure of the force that is created in the cylinder by the burning of the fuel-air mixture. The pistons are connected to the crankshaft by connecting rods, through which their rectilinear movement is translated into a circular one.
However, in order to allow the piston to move, its diameter is slightly smaller than the inner diameter of the cylinder (i.e. the cylinder liner). To enable sealing, that is, to prevent gases from passing the piston into the lower part of the engine during combustion, the pistons are equipped with several rings. Most often there are three rings, two of which are compression rings and one an oil ring. The compression rings are responsible for sealing between the piston and the cylinder wall, while the oil ring has the "task" of leaving a thin layer of oil (a few micrometers) on the cylinder wall as the piston moves toward BDC. That is why there is an oil supply in the middle of this piston ring. Also, with its outer edge, the oil ring should "wipe" excess oil from the cylinder wall.
Piston rings are exposed to high wear and are specially machined to increase their resistance. As a rule, they are made of steel and, for greater durability, can be coated with chrome, special ceramic materials, etc.
These rings are the main reason why old and worn-out engines smoke. You have probably heard someone say that a particular car's engine is worn out because bluish smoke comes out of the exhaust. The cause is worn piston rings (primarily the oil ring) that let small amounts of oil into the combustion chamber. Such a phenomenon is certainly harmful: it reduces engine power and destroys the catalytic converter, and long-term neglect can lead to complete destruction of the piston rings, damage to the cylinder liner and the like.
The difference between the diameter of the piston and that of the cylinder (i.e. its liner, depending on the construction) is important because all materials, including the one from which the piston is made, expand when heated. That is why our piston needs to be provided with enough clearance, and the difference in diameter is compensated for by the piston rings.
Here we can mention that, when constructing an engine, it is important to take its operating temperature into account. Only when it reaches the (intended) operating temperature does the piston expand to the intended dimension, at which point all sealing and friction are optimal. This is another reason why we should not load the engine hard before it reaches operating temperature. We can therefore conclude that only after all the moving parts of the engine have warmed up, and the parts have reached their intended dimensions, can the engine work optimally.
The pistons of today's internal combustion engines are usually made of aluminum alloys (by casting). They provide mass savings (lower inertia of moving parts) and at the same time a high level of thermal conductivity (which is 13 times higher in aluminum and its alloys than in stainless steel and 4 times higher than in ordinary steel). We should also mention that very high-performance engines (e.g. above 500 hp) use forged pistons, which are stronger and are made of alloys that guarantee minimal dimensional change due to heating. In diesel engines, where higher loads occur during the expansion stroke, steel pistons are also used; steel provides minor dimensional changes due to heating and some other advantages. But the choice of material is a matter of compromise here as well.
Retrieved from: www.autonet.hr
Hi there, I am Mladen and I am an auto enthusiast. I started this blog years ago to help like minded people share information about latest cars, car servicing ideas, used car info, exotic cars, and auto technology. You will find helpful articles and videos on a wide variety of cars - Audi, Mercedes, Toyota, Porsche, Volvo, BMW and much more. Ping us if you have anything cool to share on latest cars or on how to make older cars more efficient, or just want to say hi!
The question is:
On a Massey Ferguson 3065E, one cylinder was worn out; it was replaced with a new one, and the block and head were machine-aligned as normal.
Now the base of the 018 block has been machined down, so the pistons no longer sit quite right for me, but I can't find data anywhere on the required height, i.e. how much the pistons now have to be machined, and I need those measurements.
Or should the whole engine be machined, leveled and measured? |
Have you ever had difficulty hearing in a crowded room or restaurant but can hear without any problem at home? Do you have particular trouble hearing higher-pitched voices or TV dialogue?
If so, you might have hearing loss, and hearing aids may be able to help you.
But how exactly do hearing aids work? Are they simple amplifiers, or something more complex?
This week we’ll be looking into how hearing aids work and how they are a bit more advanced than many people realize. But first, let’s start with how normal hearing works.
How Normal Hearing Works
The hearing process commences with sound. Sound is essentially a type of energy that travels in waves, like ripples in a pond. Things produce sound in the environment when they trigger vibrations in the air, and those vibrations are ultimately caught and transferred to the ear canal by the outer ear.
Immediately after moving through the ear canal, the sound vibrations hit the eardrum. The eardrum then vibrates, reproducing and amplifying the original signal, which is then transmitted by the middle ear bones to the snail-shaped organ of the inner ear known as the cochlea.
The cochlea is filled with fluid and lined with tiny sensory cells bearing hair-like projections called cilia. The vibrations transferred from the middle ear bones stir the fluid and stimulate the cilia. The cilia then conduct electrical signals to the brain, and the brain interprets the signals as sound.
With the majority of instances of noise-induced hearing loss, there is damage to the cilia. So, the incoming signal to the brain is compromised and sounds appear softer or muffled. But not all sound frequencies are evenly weakened. Generally, the higher-pitched sounds, such as speech, are impacted to a greater degree.
In a noisy setting, like a restaurant, your capacity to hear speech is diminished because your brain is obtaining a diminished signal for high-frequency sounds. Simultaneously, background noise, which is low-frequency, is getting through normally, drowning out the speech.
How Hearing Aids Can Help
You can understand that the solution is not simply amplifying all sound. If you were to do that, the background noise would be boosted right along with the speech, and it would continue to drown out the speech sounds.
The solution is selective amplification of only the frequencies you have a difficult time hearing. And that is only feasible by having your hearing professionally evaluated and your hearing aids professionally programmed to boost these particular frequencies.
How Hearing Aids Precisely Amplify Sound
Contemporary hearing aids consist of five internal parts: the microphone, amplifier, speaker, battery, and computer chip. But hearing aids are not just simple amplifiers—they’re sophisticated electronic devices that change the attributes of sound.
This happens via the computer chip. Everyone’s hearing is one-of-a-kind, like a fingerprint, and so the frequencies you need amplified will differ. The amazing part is, those frequencies can be determined precisely with a professional hearing test, technically known as an audiogram.
Once your hearing professional has these numbers, your hearing aid can be programmed to enhance the frequencies you have the most difficulty with, enhancing speech recognition in the process.
Here’s how it works: the hearing aid receives sound in the environment with the microphone and transmits the sound to the computer chip. The computer chip then translates the sound into digital information so that it can distinguish between various frequencies.
Then, based on the programmed settings, the high-frequency sounds are enhanced, the low-frequency background sounds are subdued, and the enhanced sound is delivered to your ear via the speaker.
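As a toy illustration of that pipeline, the Python sketch below applies frequency-dependent gain with a plain FFT. The band boundaries and gains are invented for the example, not a real prescription; actual hearing aids use far more sophisticated multi-band compression tuned to the wearer's audiogram:

```python
# Toy frequency-selective amplifier: boost bands per a made-up "prescription".
import numpy as np

def apply_prescription(samples, rate, band_gains_db):
    """Scale each frequency band of `samples` by its prescribed gain in dB."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    gain = np.ones_like(freqs)
    for (lo_hz, hi_hz), g_db in band_gains_db.items():
        band = (freqs >= lo_hz) & (freqs < hi_hz)
        gain[band] = 10 ** (g_db / 20)        # dB -> linear amplitude factor
    return np.fft.irfft(spectrum * gain, n=len(samples))

# Hypothetical prescription: leave low-frequency background sound alone,
# boost the speech band, boost high frequencies the most.
prescription = {(0, 500): 0.0, (500, 2000): 10.0, (2000, 8000): 20.0}

rate = 16_000                    # 16 kHz sample rate
t = np.arange(rate) / rate       # one second of audio
# Test signal: loud 300 Hz "background hum" plus a quiet 3 kHz "speech" tone.
signal = np.sin(2 * np.pi * 300 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)

boosted = apply_prescription(signal, rate, prescription)
```

After processing, the 3 kHz component is ten times stronger relative to the hum than it was in the input, which is the essence of selective amplification.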
So will your hearing revert completely to normal?
While your hearing will not totally revert to normal, that shouldn’t prevent you from acquiring significant gains in your hearing. For most people, the amplification provided is all they require to comprehend speech and indulge in effective and effortless communication.
Think of it this way. If your eye doctor told you they could enhance your vision from 20/80 to 20/25, would you forfeit prescription glasses because you couldn’t get to 20/20? Of course not; you’d be able to function perfectly with 20/25 vision and the gain from 20/80 would be substantive.
Are you set to find out the improvements you can attain with modern hearing aids? Give us a call today! |
Until the early- to mid-twentieth century, scientists believed that stars generate energy by shrinking. As stars contracted, it was thought, they would get hotter and hotter, giving off light in the process. This could not be the primary way that stars shine, however. If it were, they would scarcely last a million years, far short of the billions of years old that we know they are. We now know that stars are fueled by nuclear fusion. Each time fusion takes place, energy is released as a by-product. This energy, expelled into space, is what we see as starlight. The fusion process begins when two hydrogen nuclei smash together to form a particle called the deuteron (a combination of a positive proton and a neutral neutron). Deuterons readily combine with additional protons to form helium. Helium, in turn, can fuse together to form heavier elements, such as carbon. In a typical star, merger after merger takes place until significant quantities of heavy elements are built up.
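The sequence described here corresponds to the standard proton-proton chain; written out in textbook notation (not part of the original passage), the first steps are:

```latex
% First steps of the proton-proton chain: two protons fuse into a
% deuteron; the deuteron captures a proton to make helium-3; two
% helium-3 nuclei merge into helium-4, releasing two protons.
\begin{align*}
p + p &\;\rightarrow\; {}^{2}\mathrm{H} + e^{+} + \nu_e\\
{}^{2}\mathrm{H} + p &\;\rightarrow\; {}^{3}\mathrm{He} + \gamma\\
{}^{3}\mathrm{He} + {}^{3}\mathrm{He} &\;\rightarrow\; {}^{4}\mathrm{He} + 2p
\end{align*}
```

Each step releases energy because the products weigh slightly less than the reactants; the missing mass leaves the star as the light we see.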
We must distinguish, at this point, between two different stellar types: Population I and Population II, the latter being much older than the former. These groups can also be distinguished by their locations. Our galaxy, the Milky Way, is shaped like a flat disk surrounding a central bulge. Whereas Population I stars are found mainly in the galactic disk, Population II stars mostly reside in the central bulge of the galaxy and in the halo surrounding this bulge.
Population II stars date to the early stages of the universe. Formed when the cosmos was filled with hydrogen and helium gases, they initially contained virtually no heavy elements. They shine until their fusible material is exhausted. When Population II stars die, their material is spread out into space. Some of this dust is eventually incorporated into newly formed Population I stars. Though Population I stars consist mostly of hydrogen and helium gas, they also contain heavy elements (heavier than helium), which comprise about 1 or 2 percent of their mass. These heavier materials are fused from the lighter elements that the stars have collected. Thus, Population I stars contain material that once belonged to stars from previous generations. The Sun is a good example of a Population I star.
What will happen when the Sun dies? In several billion years, our mother star will burn much brighter. It will expend more and more of its nuclear fuel, until little is left of its original hydrogen. Then, at some point in the far future, all nuclear reactions in the Sun`s center will cease.
Once the Sun passes into its "postnuclear" phase, it will separate effectively into two different regions: an inner zone and an outer zone. While no more hydrogen fuel will remain in the inner zone, there will be a small amount left in the outer zone. Rapidly, changes will begin to take place that will serve to tear the Sun apart. The inner zone, its nuclear fires no longer burning, will begin to collapse under the influence of its own weight and will contract into a tiny hot core, dense and dim. An opposite fate will await the outer region, a loosely held-together ball of gas. A shock wave caused by the inner zone`s contraction will send ripples through the dying star, pushing the stellar exterior`s material farther and farther outward. The outer envelope will then grow rapidly, increasing, in a short interval, hundreds of times in size. As it expands, it will cool down by thousands of degrees. Eventually, the Sun will become a red giant star, cool and bright. It will be so large that it will occupy the whole space that used to be the Earth`s orbit and so brilliant that it would be able to be seen with the naked eye thousands of light-years away. It will exist that way for millions of years, gradually releasing the material of its outer envelope into space. Finally, nothing will be left of the gaseous exterior of the Sun; all that will remain will be the hot, white core. The Sun will have become a white dwarf star. The core will shrink, giving off the last of its energy, and the Sun will finally die. |
10 Fun and Easy Ways to Help Your Child Learn the Alphabet!
July 2, 2018
You can easily help your child recognise, read and write the alphabet. You will not be stuck for ideas once you go through this list!
1) Letter Spotting
While out and about, point at real-life letters you can spot everywhere, from car plates and shop signs to street names and restaurant menus! This activity also boosts children’s awareness of the world around them.
2) Musical Letters
This activity combines exercise with alphabet learning! Get around a dozen sheets of paper and write one unique letter on each sheet in a large size. You can mix uppercase and lowercase letters. Place the sheets in a circle on the floor. Start some music! Prompt your child to run around the circle of letters while the music is turned on and stop when the music is turned off. When they stop, ask your child to sound out the letter nearest to them. Carry on turning the music on and off and have lots of fun!
3) Write and Wipe Flashcards
Flashcards are one of the top resources in language learning. They are an excellent way to practise the alphabet while improving vocabulary.
WordUnited’s Write & Wipe bright and vivid pictorial flashcards are one of the largest, clearest and most durable you can find. They are great for visual learning and help children to focus on reading and writing the alphabet (both uppercase and lowercase). Children love the unlimited practice on these whiteboard-like cards, writing and wiping time after time. The kits come complete with pens, detachable erasers, instructions, stickers and heavy-duty box with a magnetic lid. Don’t forget to give your child one of the free reward stickers for their excellent writing! You can find more about these cards here.
4) Bake the ABCs!
Use this recipe to make a basic biscuit dough:
Ingredients: 1 cup sugar, 1 cup butter, 1 egg, 2 tsp baking powder, 1 tsp vanilla and 2.75 cups flour.
- Preheat the oven to 200°C/400°F (180°C fan-assisted).
- Cream butter + sugar with an electric mixer.
- Add and beat the egg + vanilla
- Separately, mix flour + baking powder
- Add flour mix to the rest of the beaten ingredients gradually, mixing after each addition.
- Start the alphabet fun! Make letters from the dough (they should be around 0.3 cm / 1/8 inch thick).
- Bake for around 8 minutes or until light brown.
- Enjoy eating the ABC biscuits with your child!
5) Free Printable Worksheets
These free printable worksheets from WordUnited are perfect to help children recognise and write the alphabet: https://support.wordunited.com/resourcecategory/alphabet-english/
The images are fun and clear and there is plenty of space to practise writing the letters!
6) Letter Lacing
Punch holes forming the shape of a letter in thick paper.
Get some colourful yarn and a child-safe plastic needle and help your child to lace or sew the letter. They can even sew and frame their name on punched cards.
7) Type and Print
Open a word document, choose a large font size and a bright font colour.
Child-friendly fonts such as “Comic Sans” are ideal to use, especially ones with the handwritten, single-storey shape of the letter “a” (rather than the two-storey “a” of most print fonts).
Encourage your child to press on keyboard letters while saying them and watching the letter appear on the screen. Take it a step further and help your child to type his name and other words. Play with a variety of fonts, sizes and colours!
Print out your child’s document and encourage them to read it!
8) Playdough ABC
Use playdough to make shapes of the letters. You can draw a letter on a piece of paper and encourage your child to make it with playdough. They can place the playdough letter over the written letter. Be creative, and help your child to make shapes in different sizes and a mix of colours… Why not stick eyes and a mouth on some of the letters? Playing with playdough helps improve the fine motor skills your child needs to start writing!
9) Make a Letter Collage
Save some old magazines and newspapers and help your child cut letters from titles and headings using child-safe scissors. Your child can paste the letters in their preferred arrangement to create an alphabet collage! Prompt them to identify the letters they are cutting.
10) Read, Read and Read!
Reading from a very young age is strongly linked to high performance in literacy. To focus on learning the alphabet, take a moment on each page to point at a letter at the beginning of a word and say its sound. Encourage your child to spot letters they know on the page or ask them to page-hunt a letter you say! Discover a fantastic range of books for preschoolers here.
Explore a wide selection of toys and books to help children learn the alphabet here.
We hope these ideas inspired you! If you have any more ideas, we would love to hear them! Please share them with us on: |
Hypatia was a mathematician, scientist, and philosopher from Alexandria. She was the daughter of Theon, a noted scientist and mathematician, who also taught her science and philosophy.
Her influence in the city during the 4th century of the common era was profound. Her students addressed her as ‘the blessed lady’ or ‘the holiest and revered philosopher’.
Anybody who briefly encountered her claimed that she was just, compassionate and wise.
Hypatia was born in Alexandria, Egypt, circa 355 C.E. and died a horrific death circa 415. Although her birth year is disputed, most scholars agree on this date. She was, if not the first, the earliest known female mathematician and philosopher. Her father, Theon, was her teacher. He is known for preserving Euclid’s Elements and for his commentaries on Ptolemy’s Almagest and Handy Tables. However, no information is available on her mother.
His school’s name was the Mouseion, which was strictly conservative and open only to the elites. He taught only that which fitted his inclination: Plotinian Neoplatonism.
Extending her father’s work, she broadened his template of teachings on Greek mathematics and astronomy. She further commented on the Conics of Apollonius of Perga and the Arithmetica of Diophantus of Alexandria.
She was well-trained in horse-riding, swimming and rowing as part of her physical education, and in the art of speech to become a good orator. Her tutor was her father, who attempted to perfect his daughter in every field. Hence, she was also taught arts and literature alongside mathematics and astronomy.
It was when she travelled to Athens that she attended a school and proved herself a mathematician. She returned and took a job as a lecturer in her father’s school. She taught the Arithmetica of Diophantus and also lectured on eminent personalities such as Plato and Aristotle. Thanks to her oratory skills, her lectures drew men from all over the world.
Inheriting her father’s profession, she dismissed the philosophy of Iamblichus and accepted Plotinian Neoplatonism. At that time, philosophical schools flourished in Athens, and Alexandria came second to Athens in philosophical teaching. Hypatia continued to lecture publicly on Plato and Aristotle while donning a tribon, a kind of cloak.
She expounded on her father’s template and taught the philosophy of Plotinus instead of that of Iamblichus. Her teachings were tolerant of other religions such as Christianity and Judaism (and, much later, Islam). Thus, she had cosmopolitan students from various backgrounds. The most prominent was Synesius, who later wrote her letters. This bundle of notes exchanged between Hypatia and Synesius is the chief evidence of her career.
She was excellent in science and maths, even better than her father. Historians such as Socrates of Constantinople, Philostorgius, Damascius, and Hesychius elaborated extensively on her life and career in their books. She remained unmarried her whole life and devoted herself to expanding paganism.
Hypatia’s Works and Contribution
She was the first known woman mathematician to write extensively on maths, philosophy, and astronomy, and she is sometimes called the second most celebrated woman scientist after Marie Curie. She is credited with devising scientific instruments, including the astrolabe and the hydroscope, or hydrometer.
The astrolabe was a hand-held device used to tell the time from the positions of the stars. It was also a navigational instrument used by astronomers and astrologers. It survived the test of time, remaining in use through the Islamic Golden Age, the European Middle Ages, and the Renaissance.
The second most notable invention is the hydrometer, an instrument for determining the specific gravity, or relative density, of liquids. Devised at the request of Synesius, the instrument has various uses depending on the liquid being measured: an alcoholometer measures the percentage of alcohol in spirits, while a saccharometer measures the sugar content of a solution.
She independently commented on Diophantus’ Arithmetica, which was a three-volume book. These writings are preserved in Arabic translations, while one surviving volume is in the original Greek. The extant Arabic versions contain more verifications and more comprehensive solutions than the original Greek text.
Similarly, she wrote commentaries on the conic sections of Apollonius of Perga. She divided cones into different parts using a plane and developed the concepts of the parabola, hyperbola, and ellipse. She produced a comprehensive edition of Apollonius’s works, which was widely accepted and taught for many generations.
Unfortunately, much of her work has since been lost.
She also wrote many historical treatises while also studying and teaching about astrolabe and hydrometer.
Hypatia also created the Astronomical Canon, believed to be a commentary on Ptolemy’s Almagest. She edited not only Book III but all the volumes of the work. Furthermore, Ptolemy’s tables and calculations of the sun’s degrees while orbiting the Earth were rife with miscalculations; Hypatia’s new edition corrected these and contained what came to be called the astronomical table. In addition to these works, she also edited Archimedes’ Measurement of a Circle.
Later philosophers and scientists such as Descartes, Newton, and Leibniz were inspired by her and expounded on her work. She was more an innovator than a teacher, and more philosophical than religious.
Hypatia was a pagan at a time when the dominant Christian rulers persecuted pagans. Although inevitably drawn into these conflicts, she was revered by the vanguard of intellectual circles. She also hosted many Christian and Jewish students who later assumed high official posts.
Hypatia’s philosophy was Neoplatonism, which still resonates today, 1,600 years later. Her era, sometimes called the Age of Anxiety, saw people seeking new spiritual paths that would make their lives less hectic and more rewarding. Dissatisfied with the prevailing dogmas, they sought more meaningful experiences, beyond organized religion, to connect with the divine.
Hypatia’s Neoplatonism overlaps both philosophy and spirituality, somewhat like Buddhism in Eastern philosophy. Just as engagement with Eastern wisdom and philosophy is popular among people of all ages now, Hypatia’s Neoplatonism offered a respite in people’s day-to-day lives back then. She did not focus on religious teachings alone; her philosophy thrived in practice. The core of her philosophical pursuit was to live well. Neoplatonism was therefore more than a philosophy; it was a way of living.
That is what she did: she taught techniques for living a fuller, more aware, and more enriching life, attractive to everyone, whether Christian, Jew, pagan, or non-believer. Who doesn’t want to live joyously? She taught her disciples to live intuitively by tapping into the divine powers within us for guidance and insight to transform our lives. Her philosophy was meant not only to be studied academically but to be practised universally.
Hypatia’s Killing and Aftermath
During the 4th century, the Roman Empire was absorbing Christianity into a formerly pagan culture. Instead of merging harmoniously, some places began to favour Christianity over paganism. One such place was Alexandria in Egypt. It was a tumultuous time for the Jews, Christians, and pagans living there. Enter Hypatia, practitioner and teacher of a pagan philosophy, with her distinctive methods of mystical unification with the divine.
She fell prey to the struggle between Christians and pagans and met an untimely, horrible death. Orestes, the prefect of Alexandria, was a friend of Hypatia, while Cyril was the city’s bishop. As her friend, Orestes supported Hypatia, but her growing popularity and the widespread acknowledgement of paganism over Christianity threatened Cyril.
Cyril accused Hypatia of corrupting youths and practising cult rituals in the name of mathematics and religion. Eventually, a zealous mob attacked her and gruesomely killed her: they stripped her, lynched her publicly, and dragged her through the streets until she succumbed to an undeserved death.
Whether it was the result of the power struggle between Cyril and Orestes or the act of a provoked Christian bishop is unknown. What the written sources do suggest is that her killing had a political overtone, not merely a religious one. Regardless, her death stirred many controversies and shifted the religious stances of Alexandria.
The bigoted few mercilessly killed Hypatia. Yet those who buried her never imagined that she was the seedling of an enormous tree, branching out in all directions for centuries to come. Her legacy has survived antiquity, the medieval age, and the early modern period into the 21st century.
When she died, people assumed she would be the last Alexandrian Neoplatonist. However, soon after, various lecture halls were established in Alexandria to teach paganism and Neoplatonism. The tradition extended across the eastern Mediterranean and produced philosophers such as John Philoponus and Olympiodorus the Elder. Other female Neoplatonist philosophers included Aedesia and Theodora.
During the Middle Ages, she became synonymous with Christian martyrs who died similar deaths. Her lifelong commitment to her beliefs and her refusal to marry earned her the title of ‘the great virgin’, and she became associated with Saint Catherine of Alexandria. She is even mentioned in the Byzantine encyclopedia, the Suda. Her name came to be attached to any woman who is wise, considerate, and exceedingly talented.
Besides that, she has been the protagonist of numerous books and novels, including children’s books. Voltaire wrote about Hypatia, calling her a free-thinking genius with the capacity to think outside of dogmas. Similarly, the historian Socrates of Constantinople mentioned Hypatia and openly praised this uniquely gifted philosopher.
During the 19th century, she became the basis of seminal literary works by European artists. She epitomized justice, beauty, and truth, and became an idol of vulnerability. Soon there were many plays, poems, novels, and biographies romanticizing Hypatia. The French poet Charles Leconte de Lisle penned her life in a Deistic narrative across two poems. Charles Kingsley authored a bulky novel under Hypatia’s name, portraying her as a woman with a genius mind and a petite body.
Bertrand Russell’s wife, Dora Russell, also wrote about Hypatia and highlighted the importance of women’s education. Some major writings of the 20th century revolved around Hypatia’s life, including works by Marcel Proust and Iain Pears.
Her life continued to be fictionalized in the 21st century. From Charlotte Kramer to Youssef Zeidan to Ki Longfellow, authors have made her the central character who spins the novel’s plot. The movie Agora is something of an ode to her, depicting her rise to fame and her downfall. She has also been portrayed in TV series such as Carl Sagan’s Cosmos and The Good Place.
Although Hypatia has acquired a mythic aura capable of overshadowing her contributions altogether, she is mostly known for her lynching and subsequent death. Since then, she has remained a symbol of the tussle between science and religion. Moreover, she is an enduring figure of feminism and pagan martyrdom. Because she hailed from Egypt, many people also admire her as a Black woman martyr.
We can translate her life into a book, a movie, or even a series. However, it would not be wrong to call it a tragedy, because what brought her fame also became the reason for her sad demise. Today, she is the emblem of a powerful, learned, and fearless woman.
|
The first known interstellar visitor to the solar system is keeping astronomers guessing.
Ever since it was spotted in October 2017, major mysteries have dogged the object, known as ‘Oumuamua (SN Online: 10/27/17). Astronomers don’t know where it came from in the galaxy. And they’ve disagreed over whether ‘Oumuamua is an asteroid, a comet or something else entirely.
One of the strangest mysteries is how ‘Oumuamua sped up after it slung around the sun and fled the solar system, a motion that can’t be explained by the gravitational forces of the sun or other celestial bodies alone. The most natural explanation is that ‘Oumuamua spouts gas like a comet, which would have given the object an extra push away from the sun — except astronomers saw no signs of such outgassing.
In November, Harvard University astronomers Shmuel Bialy and Avi Loeb sparked a firestorm of media coverage when they suggested that the acceleration could be explained if ‘Oumuamua is an alien spaceship, in a paper published in Astrophysical Journal Letters. In particular, the duo suggested, the object could be a solar sail: a large flat sheet less than 1 millimeter thick that uses pushes from starlight to navigate the galaxy (SN: 9/10/11, p. 18). Loeb is part of an organization called the Breakthrough Initiative that has suggested sending solar sails to visit a nearby planet orbiting the star Proxima Centauri (SN Online: 8/25/16). Maybe some other spacefaring civilization sent a similar sail to visit us, Loeb argues.
Since then, astronomers have been kicking around other origin stories to explain ‘Oumuamua and its bizarre behavior. “Jumping to the conclusion that it has to be produced by extraterrestrial intelligence, I think we don’t have evidence for it yet,” says astronomer Amaya Moro-Martín of the Space Telescope Science Institute in Baltimore. “There are other natural explanations that can be explored.”
Here are three such possibilities.
1. Fluffy ice fractal
To get a push from starlight, an object needs to have a large surface area — to provide more surfaces for particles of light called photons to nudge — and a small mass, so that even tiny amounts of photon pressure can make a difference.
A flat sheet, such as a solar sail, isn’t the only way to harness this radiation pressure, Moro-Martín says. A fluffy, porous structure that resembles a fractal, a geometric pattern that repeats itself on smaller and larger scales, could also be propelled by light, she argues. “Physically it would be the same idea, just the geometry would be different.”
Dust particles collected in Earth’s stratosphere can have this sort of fluffy fractal form, Moro-Martín says. She also sees similar structures in computer simulations of the way planets grow up in the dusty planet-forming disks astronomers see around other stars. As ice grains in the distant, frigid regions of those disks stick together, the particles develop into fractals.
‘Oumuamua could be one of those still-forming planets that got booted out of its star system before it finished forming, Moro-Martín proposes in a study published February 22 in the Astrophysical Journal Letters.
“If ‘Oumuamua were to have such an origin, it will be very interesting because it will be the first time that we have evidence for what this intermediate stage is,” Moro-Martín says. “We don’t know how the planet formation process proceeds. All we can see are the smallest particles, the dust particles, or the very largest, planets.”
But could a fluffy fractal survive the journey from another star’s planet-forming disk, all the way into the solar system and out again?
To accelerate as much as ‘Oumuamua did, the object must have a density of just 0.00001 grams per cubic centimeter, Moro-Martín estimates. In comparison, graphene aerogel — the lowest-density artificially produced material — is at least 10 times as dense. “It tells you [the object] must be very fragile,” Moro-Martín says.
“The idea that ‘Oumuamua is a fluffy fractal of ice, pushed by radiation pressure from sunlight, is an interesting scenario,” Loeb says. “But there are major challenges that it faces,” including how such a fragile object would survive, he says.
2. Comet skeleton
Planetary scientist Zdenek Sekanina of NASA’s Jet Propulsion Laboratory in Pasadena, Calif., agrees that a fluffy structure could account for ‘Oumuamua’s strange speedup. But he doesn’t think ‘Oumuamua was born with it. Instead, the object is a desiccated comet that lost most of its water and gases when it swooped close to the sun, he proposes in a paper posted January 30 at arXiv.org.
“It’s like a skeleton of the original body, with all the ice out,” Sekanina says.
Comets that fly close to the sun often do not survive. But some of these doomed objects have left observable fragments behind, like comet LINEAR. That comet came within 0.7 times the Earth’s distance to the sun in 2000 and left a cloud of mini comets behind, which were observed with the Hubble Space Telescope. ‘Oumuamua faced a harsher situation: It swooped closer to the sun, about 0.25 times Earth’s distance.
Like Loeb and Moro-Martín, Sekanina thinks solar radiation pressure is the best explanation for how ‘Oumuamua sped up. And a fluffy structure is the best way to accelerate with radiation pressure without invoking “little green men sending a sail,” he says. Although ‘Oumuamua is denser in Sekanina’s estimates than Moro-Martín’s, that’s still “just unbelievable,” he says. “It’s like a fairy castle type structure, or gossamer.”
If ‘Oumuamua were a fully solid, icy comet when it approached the solar system, and developed that gossamer structure only after flying close to the sun, that could explain how the object survived a trip through interstellar space.
3. Weird comet or ice shard?
When the Spitzer Space Telescope checked ‘Oumuamua for signs of a cometlike tail, the instrument saw none, meaning only minuscule amounts of carbon monoxide and carbon dioxide gas would have been expelled, if any. And if you assume ‘Oumuamua’s composition is similar to comets in the solar system, Spitzer’s data suggest that the object must not have been spewing out much water, either.
But if ‘Oumuamua is a strange sort of comet, it could spew water vapor or other noncarbonated gases that Spitzer didn’t detect, which could explain how the object sped up. “‘Oumuamua is made of still water, not Perrier,” quips astronomer Gregory Laughlin of Yale University.
Laughlin and colleagues are working on a study that suggests that ‘Oumuamua releases a nozzlelike jet of gas whose source migrates across the object’s surface, following the warmth of the sun. That migration would let ‘Oumuamua tumble through space without spinning so fast that it breaks apart. Other comets, including one visited by the Rosetta spacecraft (SN: 11/11/17, p. 32), exhibit this sort of sun-tracking jet.
“The weirdness is that [‘Oumuamua] would have to be made of pretty pure ice” to explain such outgassing, Laughlin says. It’s not clear if a comet, even a weird one, could be made of pure ice. So it’s possible that ‘Oumuamua is an ice shard of a larger body, created if an icy planet came too close to a larger neighbor and was ripped apart, he says.
Unfortunately, there’s no way to check how ‘Oumuamua is structured now — it’s too far away to make any more observations. The ultimate test will come when — and astronomers think it’s a matter of when, not if — another interstellar visitor comes calling.
“If [‘Oumuamua] was representative of a population, there will be opportunities to get an up-close look at them,” Laughlin says. |
So You Can Hear! But Can You Listen?
We hear with our ears, but we listen with our brains. Find out what the difference is, and what you can do to listen better.
Hearing and listening are often talked about as if they’re the same thing, but they’re not. We all know people who can hear just fine, but they’re terrible listeners, while many people with hearing impairments are great listeners.
Hearing Vs. Listening
Hearing is what happens when your inner ear sends sound signals to your brain, making you aware of sounds around you. On the other hand, listening means understanding the sounds around you – it involves comprehension and memory. Listening requires effort, and becomes even more difficult when you have hearing loss.
Hearing Aids: Good For Hearing, Not For Listening
Hearing aids these days are small, light and technologically advanced. They help tremendously in getting sound waves to reach your brain if you have hearing loss. But they don’t help with listening. Listening involves being present in the conversation, understanding what people are saying, and remembering it.
The Importance Of Listening With Hearing Loss
Being a good listener is important for people with hearing loss, particularly in loud places. Hearing aids will amplify sounds around you, but they don't eliminate background noise. Practicing listening skills will help you understand what’s being said in a noisy environment, and the more you practice, the more refined this skill will become.
Exercises To Listen Better
1. Watch and listen to a TV show, and then re-watch it with closed caption or in slow motion
2. Read a book while listening to the audio book at the same time
3. Listen to someone else read a newspaper or magazine article while you follow along reading your own copy of the article
These exercises all encourage you to be present in the moment, as you focus directly on what’s being said. Try them all first in quiet settings, then add a little background noise. Turn on a radio, or do the last two exercises in a coffee shop, and help train your brain to listen in different environments.
Visit An Audiologist |
Did The Dinosaurs Die A Cold, Dark Death?
Published: 3rd Feb 2017
There have been many theories about the extinction of the dinosaurs. Scientists have hypothesized that a dust cloud and volcanic eruptions resulting from the asteroid impact contributed to their death. However, a new study suggests that freezing temperatures and a blanket of darkness contributed to the mass extinction.
The Potsdam Institute for Climate Impact Research (PIK) in Germany ran a computer simulation called the Coupled Climate Model. Using the asteroid impact as a starting point, scientists looked at the sulphuric-acid-bearing gases created by the collision. These gases would have been a main factor in blocking sunlight from reaching the Earth, causing it to cool down.
It is believed that, due to these sun-blocking gases, air temperatures could have dropped by at least 26 degrees, with sub-freezing temperatures lasting from 3 to 16 years.
The research shows how climate is so important to all life on earth.
You can see the animation of the Coupled Climate Model here: https://www.pik-potsdam.de/research/earth-system-analysis/projects/flagships/ace/extinctions
Full article about the discovery here: http://www.sciencealert.com/here-s-how-the-darkness-and-cold-killed-off-the-dinosaurs |
Less than 100 years after Americans won independence from the British, way up in the Pacific Northwest, a little-known squabble took place between the two. In the mid-1800s, American and British soldiers narrowly averted firing on each other.
A Bucolic Setting
San Juan Island, sections of which today are part of a National Historical Park, had a pleasant temperate climate and farming, fishing and timber opportunities that appealed to several nations. In the 1800s, it had been visited but not yet claimed. Eventually, ships from England and the U.S. mainland brought military contingents to occupy the territory. Both nations staked claims to the island, and in 1859 they agreed to occupy it jointly until the water boundary, left ambiguous when the mainland border was drawn at the 49th parallel, could be settled.
English Camp occupied the northwest end, while American Camp occupied the southern tip. Soon, British-owned Hudson’s Bay Company located a large sheep farming operation there. In time, other farm animals and agricultural operations were added. The large Belle Vue Sheep Farm was a strategic move on the part of the British to fully establish their claim to the land.
Underlying tensions persisted between the two. The Americans tried to tax Hudson’s Bay but no taxes were paid. Though both countries had military camps at opposite ends of the island, things remained relatively calm between the two communities. Officers and their families even visited with each other.
Changes in the Wind
In the summer of 1859, everything changed. An American settler shot and killed a pig belonging to the Hudson’s Bay Company. He claimed that the pig had wandered onto his property and, therefore, he shot the trespasser. Though the pig’s owner, who ran the HBC operation, made little fuss about the incident, things escalated rapidly. The episode became known as the Pig War crisis. Tensions continued to simmer as more and more American settlers came to the island, many squatting on HBC land.
The British wanted the American settlers removed from the island, but American officials said no way. British warships sailed to the harbor, while troops at both camps multiplied. Both sides stood their ground but no war ensued.
Finally, the disputed water boundary went to arbitration by a third party: Germany. In 1871, the United States and Great Britain signed the Treaty of Washington, which referred the question to arbitration; the arbitration settled the boundary, and the San Juan Islands became American possessions. A year later, the British left the island.
The Pig War had ended diplomatically and peacefully.
Today, little remains of the two camps but visitors can wander their spectacular landscapes. |
Given a binary tree, is it a search tree?
In part A students are asked to write the function ValsLess:
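The body of ValsLess is not preserved in this copy. A minimal sketch of what it might look like, assuming an illustrative linked TreeNode type (the struct name and fields are assumptions, not necessarily the original's):

    struct TreeNode {
        int info;
        TreeNode* left;
        TreeNode* right;
    };

    // True if and only if every value in the tree rooted at t is less than val.
    // Each node is visited exactly once, so this runs in O(n) for an n-node tree.
    bool ValsLess(TreeNode* t, int val) {
        if (t == nullptr) return true;   // an empty tree satisfies this vacuously
        return t->info < val
            && ValsLess(t->left, val)
            && ValsLess(t->right, val);
    }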
In Part B, students are asked to write IsBST using ValsLess and assuming that a similar function ValsGreater exists. The solution is shown below:
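The Part B solution listing is likewise missing here. A sketch consistent with the description, reusing TreeNode and ValsLess from the sketch above and writing the assumed ValsGreater as its mirror image:

    // Mirror of ValsLess: true if every value in t's tree is greater than val.
    bool ValsGreater(TreeNode* t, int val) {
        if (t == nullptr) return true;
        return t->info > val
            && ValsGreater(t->left, val)
            && ValsGreater(t->right, val);
    }

    // A tree is a binary search tree when the root's value separates the two
    // subtrees and each subtree is itself a binary search tree.
    bool IsBST(TreeNode* t) {
        if (t == nullptr) return true;
        return ValsLess(t->left, t->info)       // O(n) scan of left subtree
            && ValsGreater(t->right, t->info)   // O(n) scan of right subtree
            && IsBST(t->left)                   // T(n/2) on a balanced tree
            && IsBST(t->right);                 // T(n/2)
    }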
Before continuing you should try to determine/guess/reason about what the complexity of IsBST is for an n-node tree. Assume that ValsLess and ValsGreater both run in O(n) time for an n-node tree.
What is the asymptotic complexity of the function DoStuff shown below, and why? Assume that the function Combine runs in O(n) time when |left - right| = n, i.e., when Combine is used to combine n elements of the vector.
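The DoStuff listing is not preserved in this copy. A sketch under the stated assumption about Combine (this Combine implementation is illustrative; all that matters for the analysis is its O(n) cost and the divide-and-conquer shape):

    #include <vector>

    // One possible Combine: merge the sorted halves a[left..mid] and
    // a[mid+1..right] in O(n) time for the n elements involved.
    void Combine(std::vector<int>& a, int left, int mid, int right) {
        std::vector<int> merged;
        merged.reserve(right - left + 1);
        int i = left, j = mid + 1;
        while (i <= mid && j <= right)
            merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
        while (i <= mid)   merged.push_back(a[i++]);
        while (j <= right) merged.push_back(a[j++]);
        for (int k = 0; k < (int)merged.size(); k++)
            a[left + k] = merged[k];
    }

    // DoStuff: the classic divide-and-conquer shape.
    void DoStuff(std::vector<int>& a, int left, int right) {
        if (left >= right) return;      // base case: at most one element, O(1)
        int mid = (left + right) / 2;
        DoStuff(a, left, mid);          // contributes T(n/2)
        DoStuff(a, mid + 1, right);     // contributes T(n/2)
        Combine(a, left, mid, right);   // contributes O(n)
    }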
You may recognize this function as an implementation of Mergesort. You may also remember that the complexity of Mergesort is O(n log n) for an n-element array/vector. How does this relate to the function IsBST?
We'll make a mathematical definition: let T(n) denote the time for DoStuff to execute on an n-element vector (or vector segment).
Then we have the following relationship:
T(n) = 2 T(n/2) + O(n)    [the O(n) is for Combine]
T(1) = O(1)
This relationship is called a recurrence relation because the function T(..) occurs on both sides of the = sign. This recurrence relation completely describes the function DoStuff, so if we could solve the recurrence relation we would know the complexity of DoStuff since T(n) is the time for DoStuff to execute.
How does this relate to the time for IsBST to execute? If you look carefully at the code for IsBST you'll see that it has the same form as the function DoStuff, so that IsBST will have the same recurrence relation as DoStuff. This means that if you accept that DoStuff is an O(n log n) function, then IsBST is also an O(n log n) function.
We'll write n instead of O(n) in the first line below because it makes the algebra much simpler.
T(n) = 2 T(n/2) + n
     = 2 [2 T(n/4) + n/2] + n
     = 4 T(n/4) + 2n
     = 4 [2 T(n/8) + n/4] + 2n
     = 8 T(n/8) + 3n
     = (ask your class to fill in this line, or you fill it in)
       you should have written: 16 T(n/16) + 4n
     ...
     = 2^k T(n/2^k) + k n    [this is the Eureka! line]
You could ask students to fill in parts of the last line. Note that the last line is derived by seeing a pattern --- this is the Eureka/leap of faith/practice with generalizing mathematical patterns part of the problem.
We know that T(1) = 1 and this is a way to end the derivation above. In particular we want T(1) to appear on the right hand side of the = sign. This means we want:
n/2^k = 1   OR   n = 2^k   OR   log2 n = k
Continuing with the previous derivation we get the following since k = log2 n:
= 2^k T(n/2^k) + k n
= 2^(log2 n) T(1) + (log2 n) n
= n + n log2 n    [remember that T(1) = 1]
= O(n log n)
So we've solved the recurrence relation and its solution is what we "knew" it would be. To make this a formal proof you would need to use induction to show that O(n log n) is the solution to the given recurrence relation, but the "plug and chug" method shown above shows how to derive the solution --- the subsequent verification that this is the solution is something that can be left to a more advanced algorithms class.
Before continuing, or with your class, try to fit each of the above recurrence relations to an algorithm and thus to its big-Oh solution. We'll show what these are below. Of course for practice you can ask your students to derive the solutions to the recurrence relations using the plug-and-chug method.
|T(n) = T(n/2) + O(1)    | Binary Search                      | O(log n)   |
|T(n) = T(n-1) + O(1)    | Sequential Search                  | O(n)       |
|T(n) = 2 T(n/2) + O(1)  | Tree traversal                     | O(n)       |
|T(n) = T(n-1) + O(n)    | Selection Sort (other n^2 sorts)   | O(n^2)     |
|T(n) = 2 T(n/2) + O(n)  | Mergesort (average case Quicksort) | O(n log n) |
The solution below correctly solves the problem. It makes a call to the partition function from Quicksort. Assume that the partition function runs in O(n) time for an n-element vector/vector-segment. For completeness we'll include a partition function at the end of this document.
For an n-element vector a the call FindKth(a,0,n-1,k) returns the kth element in a:
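The FindKth listing is not preserved in this copy either. A sketch using a Lomuto-style partition (the document's own partition function, promised for the end of the document, may differ in detail; k is taken as 0-based here):

    #include <utility>
    #include <vector>

    // Partition from Quicksort: place one pivot element into its final sorted
    // position within a[first..last] and return that index. O(n) per call.
    int Partition(std::vector<int>& a, int first, int last) {
        int pivot = a[last];            // illustrative choice: last element
        int p = first;
        for (int i = first; i < last; i++) {
            if (a[i] <= pivot) std::swap(a[i], a[p++]);
        }
        std::swap(a[p], a[last]);
        return p;
    }

    // Quickselect: note that only ONE recursive call is made, on the side
    // of the partition that contains index k.
    int FindKth(std::vector<int>& a, int first, int last, int k) {
        int p = Partition(a, first, last);
        if (k == p) return a[p];
        if (k < p)  return FindKth(a, first, p - 1, k);
        return FindKth(a, p + 1, last, k);
    }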
What is the big-Oh complexity of FindKth in the worst case and in the average case? Since it's difficult to reason precisely about average-case behavior without more mathematical sophistication than we want to use, assume that things behave nicely in the average case. As it turns out, this gives the right answer for most definitions of average case. In later courses we can define more precisely what average case means.
If T(n) is the time for FindKth to execute for an n-element vector, the recurrence relation in the worst case (when the pivot always lands at one end, so the recursive call gets n-1 elements) is:

T(n) = T(n-1) + O(n)
Where the O(n) term comes from Partition. Note that there is only one recursive call made in FindKth.
This is one of the big-five recurrences; its solution is O(n^2), so FindKth in the worst case is an n^2 function.
The recurrence relation for the average case (assuming each partition splits the segment roughly in half, so the single recursive call gets about n/2 elements) is:

T(n) = T(n/2) + O(n)
This isn't one of the "big five", so you'll have to solve it yourself to determine the average-case complexity of FindKth. Hint: it's pretty good.
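For the teacher's reference, one plug-and-chug sketch of the average-case solution (assuming the recurrence above):

    T(n) = T(n/2) + n
         = [T(n/4) + n/2] + n
         = [T(n/8) + n/4] + n/2 + n
         = ...
         = T(1) + (2 + 4 + ... + n/4 + n/2 + n)
         <= 1 + 2n, which is O(n)

The geometric series n + n/2 + n/4 + ... + 2 sums to less than 2n, so FindKth in the average case is linear: "pretty good" indeed. |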
The first humans to pluck a Caribbean fighting conch from the shallow lagoons of Panama's Bocas del Toro were in for a good meal. Smithsonian scientists found that 7,000 years ago, this common marine shellfish contained 66 percent more meat than its descendants do today. Because of persistent harvesting of the largest conchs, it became advantageous for the animal to mature at a smaller size, resulting in evolutionary change.
Human-driven evolution of wild animals, sometimes referred to as "unnatural selection," has previously been documented only under scenarios of high-intensity harvesting, like industrialized fishing. "This is the first evidence that low-intensity harvesting has been sufficient to drive evolution," said lead author Aaron O'Dea of the Smithsonian Tropical Research Institute. "The reason may be because the conch has been subjected to harvesting for a long period of time." Published March 19 in Proceedings of the Royal Society B, the findings are based on a comparison of mature shell sizes from before human settlement, from shells excavated from human trash heaps representing various points in the last few thousand years, and from modern sites.
As a juvenile, the fighting conch Strombus pugilis lives hidden in the muddy sediments of lagoons. It emerges to compete for mates when it reaches sexual maturity, but only after it has thickened up its outer lip as a protection from predators. By observing the size of shells and the thickness of lips in fossil, archeological and modern conchs the researchers found that size at sexual maturity declined during the past 1,500 years in concert with human harvesting.
The study brought together ecologists, paleontologists and archeologists to expose the effects of long-term subsistence harvesting on an important marine resource. Co-authors include Marian Lynne Shaffer, at the time an undergraduate student of the University of Wisconsin-Green Bay, and archeologist Thomas Wake of UCLA's Cotsen Institute of Archeology.
The team suggests that declining yields may not be the only detrimental effects of an evolutionary change to mature at smaller size. The ability to reproduce, the quality of offspring and other vital traits can be damaged by size-selective evolution. Further study is required to learn the extent to which the fitness of S. pugilis has decreased because of long-term size-selective evolution.
"There is a glimmer of hope that the evolutionary trend toward smaller size can be halted or reversed," said O'Dea, drawing attention to the fact that modern sites that are protected from harvesting have the largest conchs. "Marine protected areas not only serve to protect biodiversity, they can also help maintain genetic diversity. This study shows that such genetic diversity is critical to sustain value of marine resources for the millions of humans that rely upon subsistence harvesting around the world." |
The Revolution changed the social ranks of race and gender in America. African Americans, Native Americans, and women experienced a small measure of equality during the American Revolution. There was an erosion of class differentiation because of the new ideals that the Revolution supported. Egalitarianism in America during the Revolution gradually allowed for the abolition of slavery and greater equality for women in society.
The Revolutionary War provided black people in America with opportunities. In 1776, the United States was home to 500,000 African Americans, 25,000 of whom were slaves. Approximately 5,000 African Americans served in the Continental Army due to manpower demands. Although some slaves were freed, white Americans were reluctant to hire them. The Revolution did not end slavery or inequality, but it allowed for some civil rights in America.
White women also felt the effect of the new ideals that the American Revolution offered. Women expanded their involvement in the war and gained new responsibilities, which permitted them to gain some equality. In the 1780s, white American women gained access to more advanced education; Massachusetts in 1780 forbade the exclusion of girls from elementary schools. The Revolution gave white women a voice, but complete equality was not present.
The Revolution held no promise for Native Americans. Territorial expansion was one problem Native Americans faced during this time. Indians began to incorporate European traditions into their lives and relied somewhat on the American economy. Native Americans attempted to retain their identity in spite of the Revolution's promise of a changed nation.
The social effects of the American Revolution proved somewhat advantageous to African Americans and women. Native Americans did not experience the benefits that blacks and women did during this time. Issues of equality throughout American society were assessed during the Revolution. |
6 impressive ways coding helps the environment
Clean water, clean air and healthy ecosystems are the foundation of human society. Issues like air pollution, water contamination and endangered wildlife damage this foundation and jeopardize our comfortable lifestyle. Ever since 1974, the world has celebrated Environment Day every year on 5 June. The day reminds us of these environmental issues and prompts us to improve our habits and behaviour and to focus our efforts on making a positive change.
There are, however, lots of people who are striving to reverse these processes, and computer science is one of the solutions. In this list, you’ll see how coding can help preserve nature and biodiversity – this year’s theme for World Environment Day.
#1 The Internet
Let’s start with a very basic one. Technology allows us to work remotely, which can reduce air pollution in cities. It also saves office space. Video-conference programs like Zoom, Teams and Hangouts reduce the need to commute for meetings, resulting in reduced fuel consumption. Using email and messenger services means printing less and reducing paper production. Naturally, there are some drawbacks, like technological waste. That’s why the EU is creating laws for improving recycling processes, energy consumption and the service life of electrical devices.
#2 Ocean pollution
Plastic pollution in the oceans has been a critical topic for years. The problem has become so severe that companies like LADBible have launched campaigns to have ‘Trash Isles’ recognised by the UN as a separate country.
Initiatives like Plastic Adrift have been able to create statistical models to track the paths of these isles of plastic trash and eventually identify their source.
“Since the late 1970s, ocean scientists have tracked drifting buoys, but it wasn’t until 1982 that the World Climate Research Programme put forward the idea of a standardised global array of drifting buoys. These buoys float with the currents just like plastics except – like Twitter from the sea – they send a short message to scientists every few hours about where they are and the conditions in that location.”
The data from this research is available to everyone on the website of Plastic Adrift.
Parents and educators can teach children about the impact of plastic pollution and instil a responsible lifestyle in them through coding. Vidcode offers a project, called ‘End Plastic Pollution’ which teaches kids about coding and raises awareness of the environmental issue. The project can be found here.
#3 Freshwater supply
Freshwater is an invaluable but scarce resource, especially in some parts of the world. UN Environment, Google and the European Commission have launched a data platform to track the world’s water bodies. The app enables all countries to monitor their freshwater supply. The Data and interactive map are available here. Explore!
#4 Forest health
There’s nothing quite like walking on a forest path and taking in the fresh air. However, forests are threatened by many factors like climate change, drought or changes in temperature. That’s why scientists use geographic information systems to collect and analyse relevant data to help preserve forests.
“Some people fear working with cyberinfrastructure because of the presumed complexities of learning to code,” says Tyson Swetnam, a science informatician who recently led a research project on forest biomass data analysis. As such, learning code from a tender age is a necessary skill to make a positive change in the environment. Read more about the project on sciencenod.org.
#5 Wildlife corridors
The human population grows every day, and so do the areas we occupy, leaving less space for wild animals. One way to reduce the impact of urbanisation is to create wildlife corridors – areas of protected land where animals are safe. Scientists compile massive amounts of data to build models of the areas inhabited by wildlife. Applying this computational approach, they can predict where those areas are and determine how to design the corridors. Platforms like Scratch offer great resources for children to make their first steps into coding while learning about wildlife.
#6 Protecting habitats
The Jane Goodall Institute combats the same issue using coding and computer science to protect primates and their habitats. When large forest areas are cleared to develop human infrastructure, the forest patches which are left are often not enough to support larger populations. That’s where remote sensing technologies come into play. They enable the use of information, collected by satellites, to monitor chimpanzee habitats. This way, the institute can use the information to protect great ape habitats in numerous countries.
Computer science enables researchers and scientists to use large-scale data and investigate and analyse issues like climate change and water contamination. And through this research, people can positively impact the environment. |
The two regions are geographically completely different but even so it has been a puzzle.
A new report led by Richard Bintanja of the Royal Netherlands Meteorological Institute shows that the region’s permanent floating ice is melting from the bottom. Warmer sea currents melt the ice from below, and the fresh water released floats on top of the colder, denser sea water, where it is more vulnerable to freezing in the cold winter conditions. This has caused more sea ice to form.
Scientist Paul Holland of the British Antarctic Survey has shown that increased wind speeds around Antarctica are blowing the fresh surface ice away from the land, causing the ice to spread over a larger area.
Whichever is the dominant factor, the sea ice melts in the summer; ultimately a warming world will overcome these effects and the ice-covered area will start to shrink. |
As humans, we sure do admire bee efficiency, but generally we assume bees are just tiny, well-programmed robots. Researchers are now uncovering a range of cognitive skills that were previously believed to belong only to larger animals. At the Queen Mary University of London, in the Bee Sensory and Behavioural Ecology Lab, researchers found that bees can count in simple ways and even recognize faces. More recently, the lab has found that bees can be trained to differentiate between colors more accurately than was previously thought. The lab is also studying how bees learn from each other, testing this social learning by observing how inexperienced bees learn the quickest routes to flower patches by mimicking more seasoned foragers.
Ravens, it seems, never forget a face. In the wild, ravens co-exist in groups until they select mates; then each pair veers off into a solitary, conjugal life. In this research study, researchers simulated these social arrangements and kept the pairs in separate aviaries. The ravens still remembered their peers from group life and recognized their recorded calls, reacting differently to birds they had known than to ones they had not.
Since the turn of the 20th century, scientists have known that chimps are capable planners; for instance, they’ll stack boxes to reach a dangling bunch of bananas. In 2015, researchers at Harvard decided to see whether the primates could handle something considered to be exclusively human: cooking. Cooking requires several cognitive abilities, including self-discipline, reasoning and preparation. At the Tchimpounga Chimpanzee Rehabilitation Center in the Republic of the Congo, researchers offered chimpanzees a choice: they could place raw slices of food in a device that would return the food uncooked, or in another that would give them cooked pieces. The team didn’t give the chimps the option to do real cooking out of concern that they might burn themselves. The chimps preferred the cooked food and even moved raw slices from the other device over to the “oven,” showing that they had some of the cognitive skills necessary for cooking. |
A public building used for the confinement of people convicted of serious crimes.
Prison is a place used for confinement of convicted criminals. Aside from the death penalty, a sentence to prison is the harshest punishment imposed on criminals in the United States. On the federal level, imprisonment or incarceration is managed by the Federal Bureau of Prisons, a federal agency within the DEPARTMENT OF JUSTICE. State prisons are supervised by a state agency such as a department of corrections.
Confinement in prison, also known as a penitentiary or correctional facility, is the punishment that courts most commonly impose for serious crimes, such as felonies. For lesser crimes, courts usually impose short-term incarceration in a jail, detention center, or similar facility.
Confining criminals for long periods of time as the primary form of punishment is a relatively new concept. Throughout history, various countries have imprisoned criminal offenders, but imprisonment was usually reserved for pre-trial detention or punishment of petty criminals with a short term of confinement.
Using long-term imprisonment as the primary punishment for convicted criminals began in the United States. In the late eighteenth century, the nonviolent Quakers in Pennsylvania proposed long-term confinement as an alternative to CAPITAL PUNISHMENT. The Quakers stressed solitude, silence, rehabilitation, hard work, and religious faith. Confinement was originally intended not only as a punishment but as an opportunity for renewal through religion.
In 1790, the WALNUT STREET JAIL in Philadelphia constructed a separate cell house for the sole purpose of holding convicts. This was the first prison in the United States. The concept of long-term imprisonment became popular as the U.S. public embraced the concept of removing offenders from society and punishing them with confinement and hard labor. Before the existence of prisons, most offenders were subjected to CORPORAL PUNISHMENT or public humiliation and then released back into the community. In the nineteenth century, as the United States became more urban and industrial, poverty became widespread, and crime increased. As crime increased, the public became intolerant of even the most petty crimes and viewed imprisonment as the best method for stopping repeated criminal activity.
The early nineteenth century was filled with fierce debates about how a prison should be run. There emerged two competing ideas: the Auburn System and the Eastern Penitentiary System. The Auburn System took its name from the Auburn, New York, prison, which opened in 1819. At first, the prison placed all its worst offenders in solitary confinement, but this arrangement led to nervous breakdowns and suicides. The system was modified so that inmates slept in separate cells but worked and ate together. However, the inmates were forced to remain silent. Administrators believed this code of silence would prevent prisoners from picking up bad attitudes and would promote their rehabilitation.
The Eastern Penitentiary System at Cherry Hill, Pennsylvania, opened its gates in 1829. The prison building was designed in the form of a central hub with spokes radiating from this administrative center. Small cells lined each spoke and prisoners had their own exercise space. Unlike the Auburn System, this system promoted extreme isolation. Not surprisingly, many inmates committed suicide. In time, the Auburn System prevailed, as state legislatures saw advantages in congregate living. The Auburn System encouraged prison industries to help make prisons self-supporting.
By the mid-nineteenth century, prisons existed throughout the United States. Prisoners were kept in unsanitary environments, forced to work at hard labor, and brutalized by guards. These conditions continued until the 1950s and 1960s, when heightened social and political discourse led to a renewed emphasis on rehabilitation. The closing of one particular prison symbolized the change in correctional philosophy. Alcatraz Prison, located on an island off San Francisco, was used exclusively to place in solitary confinement convicts classified as either violent or disruptive. Rehabilitation was non-existent in Alcatraz. The prison was filthy and rat-infested, and prisoners were held in dungeon-like cells, often chained to stone walls. Established in 1934, Alcatraz was closed in 1963, in part because its brutal treatment of prisoners symbolized an outdated penal philosophy.
By the mid-1960s, the stated purpose of many prisons was to educate prisoners and prepare them for life after prison. Many federal and state courts ordered administrators to improve the conditions inside their prisons, and the quality of life for inmates greatly improved.
By the 1980s, most prison administrators abandoned rehabilitation as a goal. Forced by an increasing problem with overcrowding and the resulting increase in violence, administrators returned to punishment and security as the primary purposes of prison. Though most prisons continue to operate educational and other rehabilitative programs, the rights of prison inmates have been frozen at the minimal number recognized by courts in the 1960s and 1970s. The U.S. Supreme Court has ruled against prison guard violence, but courts have generally refused to expand the rights of prison inmates. In most cases, courts have approved increased infringement of inmates' rights if prison officials declare that the restrictions are for security purposes.
|
10 Dangerous Objects Orbiting The Earth
There are at least 500,000 objects orbiting the Earth today. Some estimates put the figure closer to 700,000. More than 21,000 are larger than 10 centimeters (4 in), and these objects pose a threat to future space travel and life on Earth. Many are fragments of artificial satellites that were destroyed when they collided with other satellites.
Today, there are over 1,700 artificial satellites in operation and an additional 2,600 that are no longer working. Most of these satellites have either completed their missions or have succumbed to malfunction. At least 30 of these inoperable objects were nuclear powered at some point. They still contain—and in some cases, leak—nuclear waste to this day.
The following list discusses 10 objects in orbit around Earth that are worrisome for different reasons.
10 Tiangong-1
Tiangong-1 is a prototype space station launched by the Chinese government in 2011. It originally had a two-year mission to test the effects of space travel on astronauts and the docking capabilities of other spacecraft. The mission was extended beyond its original plan before finally being abandoned, once its operators in China reported that they no longer had control of it.
Tiangong-1 was large, weighing about 8,500 kilograms (19,000 lbs), and was capable of housing two astronauts at a time.
Although most of the station incinerated in the atmosphere upon reentry over the Pacific Ocean in early April 2018, the expectation was that the rocket engines were made of materials that would not burn up. Although it was once feared that these intact pieces might cause enormous damage to structures, animals, and human beings, no catastrophic events were reported.
9 SNAP 10-A
In 1965, the United States launched SNAP 10-A into space from Vandenberg Air Force Base. SNAP 10-A is the only nuclear fission satellite launched into space by the United States. It was designed as an experimental nuclear spacecraft capable of producing 500 watts of electrical power. Its primary purpose was to monitor how nuclear fission reactors behave in space.
Unfortunately, the nuclear reactor worked for only 43 days, and then the power supply’s voltage regulator failed. The satellite started to fall apart in the late 1970s, and approximately 50 pieces of debris have been created as a result.
During this shedding process, it was very likely that some radioactive material was released into space. The nuclear reactor currently orbits the Earth at 700 nautical miles above the surface. It will remain in orbit for the next 4,000 years unless additional shedding or a collision with another object shortens its orbital life.
8 Kosmos 1818
In 1987, the Soviet Union launched Kosmos 1818, which was powered by a TOPAZ 1 (or thermionic) nuclear reactor. The purpose of Kosmos 1818 was as a naval surveillance satellite, or RORSAT (Radar Ocean Reconnaissance Satellite). Unfortunately, the nuclear reactor on Kosmos 1818 operated for only five months before shutting down.
In 1978, a similar satellite reentered the atmosphere and crashed into Earth, spreading radioactive material over Canada. Kosmos 1818 was placed into high orbit to avoid a similar catastrophe. However, its high orbit also means that it has a high collision probability.
Any collision might accelerate the descent of possibly contaminated materials to Earth. Some of the objects and liquid released from the spacecraft are thought to be radioactive and are still in orbit.
7 Kosmos 1867
Kosmos 1867 was launched by the Soviet Union in 1987, the same year as its twin, Kosmos 1818. It had a similar purpose to Kosmos 1818, but Kosmos 1867 operated for 11 months before shutting down.
Since it is in a high orbit like its twin, Kosmos 1867 has succumbed to the pressures of repeated solar heating. As a result, the coolant tubes aboard the satellite’s nuclear reactor have cracked and allowed the release of liquid metal into space.
6 Kosmos 1900
Kosmos 1900 is a US-A or Controlled Active Satellite used for RORSAT missions. Launched in 1987 by the Soviet Union, the satellite was plagued from the beginning and never quite reached the cruising orbit for which it was designed.
After several rocket boosts to try to correct its orbit, the satellite continued to lose altitude. Moreover, the nuclear reactor did not make it into its storage orbit. At some point prior to 1995, NASA determined that a cloud of liquid radioactive material had originated from the Kosmos 1900 satellite. NASA claimed that the leak was likely due to a collision with another satellite.
5 Satellite Debris
With all the satellite collisions, there is now a large debris field orbiting Earth. This debris field is perhaps more dangerous than any single intact object because of the increased chance of potential collisions from multiple debris objects. Several large satellite collisions have already been recorded, and these events have exacerbated the space junk problem.
In 2009, the satellites Iridium 33 and Kosmos 2251 collided at a speed of 42,000 kilometers per hour (26,000 mph) while in low Earth orbit (approximately 800 kilometers (500 mi) above the planet’s surface). Both satellites were destroyed by the collision.
So, instead of having two large objects orbiting Earth, we now have approximately 1,000 objects larger than 10 centimeters (4 in) that threaten many other satellites. (There are also many smaller pieces.)
Although about half the debris from the 2009 accident has now burned up in the atmosphere, several other collisions have occurred. Scientists estimate that the Iridium-Kosmos accident, along with China’s intentional destruction of a satellite by long-range missile in 2007, has doubled the number of dangerous and potential collision objects in orbit.
4 Black Knight
Whether Black Knight is dangerous will depend on whom you ask. Conspiracy theorists argue that the object is a 13,000-year-old extraterrestrial satellite from the star system Epsilon Bootis that Nikola Tesla discovered in 1899. NASA claims that the object in question is nothing more than a thermal blanket that got loose during a space walk.
This object is dangerous mostly for the time that is wasted on it. Unfortunately, more time has been wasted by conspiracy theorists speculating about this object than was ever lost by people dying prematurely as a result of falling space debris.
3 International Space Station
The International Space Station (ISS) does not present a nuclear or likely collision threat that we know of, but it remains one of the most dangerous objects in orbit because of its size. Collisions are possible with any space object, but any such accident with the space station could create the doomsday scenario of cascading space debris proposed by the Kessler syndrome.
In simple terms, this means that an object striking the ISS might cause a cascading effect of other such accidents from all the resulting debris. At some point, there would be too much debris for us to continue with certain space activities, possibly for generations. As recently as 2017, objects have detached from the station and now have the potential to crash into the ISS.
The station is also a danger to the astronauts who work aboard it. There have been several problems with the oxygen generators, carbon dioxide removal systems, environmental controls, the central computer, electrical and power systems, torn solar panels, and ammonia leaks. If one of these problems turned into a catastrophe, the ISS could quickly become a serious danger as it fell to Earth and collided with other satellites and debris along the way.
2 Hubble Space Telescope
The Hubble Space Telescope is not as big as the ISS. But Hubble is still one of the largest objects in orbit and a danger mostly for its collision potential. If Hubble were to strike another satellite or piece of debris, the amount of additional wreckage would significantly add to the space debris problem.
Hubble was launched aboard the Space Shuttle Discovery in 1990, after a multiyear delay following the destruction of Challenger. It is currently not in a controlled orbit and is descending toward Earth.
Because Hubble's materials are so strong and dense, the telescope is not likely to burn up completely in Earth's atmosphere during descent. After reentry, it would fall uncontrolled to the Earth's surface, likely sometime between now and 2040.
1 Envisat
Envisat is a large satellite launched in 2002 to monitor Earth's environment and geography. Although it operated five years beyond its original plan, the European Space Agency (ESA) lost contact with it in 2012. Envisat now poses the greatest Kessler syndrome threat in Earth orbit.
Two objects pass close to Envisat and could cause a collision. Given Envisat's mass of approximately 8,200 kilograms (18,000 lb), any crash between it and other satellites or pieces of space junk would be catastrophic, creating a large debris field that would be nearly impossible to clean up.
The wreckage of Envisat would be so immense that the chain reaction of collisions proposed by the Kessler syndrome is the real danger, and Envisat represents its greatest single risk.
The satellite is currently expected to remain in orbit for approximately 150 years before falling to Earth, which greatly increases the probability of an accident. For this reason, serious consideration has been given to building a spacecraft capable of removing Envisat from orbit.
Envisat is perhaps one of the greatest ironies of our space program: a satellite celebrated for helping us understand the health of Earth's environment is now one of the greatest risks to Earth's orbital environment.
A sequencing task using a simple persuasive text.
Use this sequencing activity when teaching students the structure of a persuasive text. This task includes a simple text about why homework is unnecessary and a sequencing activity.
Students cut out sentences and glue the text in the scaffold table provided.
Download this resource as part of a larger resource pack or Unit Plan.
NSW Curriculum alignment
Uses effective and accurate sentence structure, grammatical features, punctuation conventions and vocabulary relevant to the type of text when responding to and composing texts
Victorian Curriculum alignment
Understand that paragraphs are a key organisational feature of written texts
Australian Curriculum alignment
Understand that paragraphs are a key organisational feature of written texts. Elaborations: noticing how longer texts are organised into paragraphs, each beginning with a topic sentence/paragraph opener which predicts how the paragraph will develop and i...
District improvement plans are tools districts use to monitor student learning, analyze what is working and what is not, and then make needed changes. The goal is to decrease achievement gaps among groups such as disadvantaged, minority and special education students. Each year since Turner started using this tool, its schools have shown improvement.
Some key findings from Turner’s improvement plan are:
- Special education students need to participate more in classes with other students in order to benefit fully from the curriculum.
- Training teachers to ask better questions can strengthen student performance in reading and math.
- Learning problem-solving skills can help students apply knowledge to their lives outside school.
- There are increasing numbers of disadvantaged students in the district.
- There are increasing numbers of Hispanic students in the district.
- Students are using technology about a third of their time in the classroom.
- Community involvement with the schools — including parent-teacher conferences — is low.
- Both parents and staff think communication needs improvement, especially in the secondary schools.
- Parents and staff also think the curriculum is not rigorous enough.
Some strengths of the district according to the improvement plan:
- English Language Learners have done better in reading over the past three years.
- Student improvement in reading and math has been better than expected.
- Participation in the BOOST program after school has helped improve student performance on standardized testing.
- Participation in the Ninth Grade Academy has helped improve student achievement.
- More than half of district staff members have advanced academic degrees.
- More than 10 percent of district staff members are certified to teach English as a second language.
- The graduation rate usually stays above 75 percent.
- Disruptive behavior and discipline referrals have been decreasing.
- Students feel safer at school than they do in the community.
- Tobacco use has decreased.
- Families receive regular newsletters.
- Parent Teacher Associations exist in every school.
- More teachers are differentiating instruction based on varying student needs.
Some challenges for the district:
- There is an achievement gap between special education students and those in the general population.
- The improvement rate for special education students has been inconsistent.
- There is an achievement gap between African-American students and those in other racial and ethnic groups.
- Over summer break, students may forget what they learned during the previous school year.
- Students have had trouble applying their learning to real-world situations.
- Behavior incidents in the secondary schools are increasing.
- Behavior incidents on the buses are increasing.
- An increasing number of young people qualify for free lunches, which means there are more disadvantaged students.
- About a quarter of students report they are exposed to crime and drug use near their homes.
- Teachers perceive a need for more time to collaborate with each other.
- Almost half of parents think school staff members do not listen to them.
- Almost 40 percent of certified teachers think administrators do not listen to their concerns.
- Almost 20 percent of certified teachers think the teaching staff does a fair or poor job modifying instruction based on student test scores.
- Students move around. Almost a quarter of them change homes during the academic year.
What are some ways the improvement plan authors say the district can increase student academic success?
- Close the achievement gap between special education students and the general population.
- Meet the needs of the growing number of English language learners.
- Teach teachers how to involve students in higher-level learning activities.
- Help students learn better critical thinking skills.
The board meeting, which is open to the public, will take place in the Board Room of the Administrative Service Center; 800 S. 55th St.; Kansas City, KS 66106.
Hand Anatomy
The human hand, the most distal part of the upper limb, is a remarkable feat of engineering and evolution. It is strong enough to allow climbers to tackle any mountain, yet precise enough to manipulate some of the world's smallest objects and perform complex actions.
The hand itself consists of specific bones onto which various muscles are attached, and a collection of neurovascular structures responsible for drainage and innervation. However, the intrinsic muscles of the hand are only partially responsible for all its range of motion. The other major contributors are actually the forearm muscles, which project tendons towards the hand via an equally complex and flexible anatomical structure, called the wrist.
A solid understanding of the hand requires a good grasp (pun intended) of its entire anatomy, so on this page we will look at all of the above structures.
The bones of the hand can be divided into three distinct groups: the carpals, the metacarpals, and the phalanges.
Watch the following videos to find out everything about all the bones of the hand.
Each group of hand bones is important in its own right, but the eight carpals are especially interesting because they are arranged in two distinct rows and are direct contributors to the formation of the wrist. We’ll come back to the wrist later on.
In the meantime, it would be beneficial for you to complete the following carpal bones quiz because they are the most difficult hand bones to get your head around.
The muscles of the hand consist of five groups:
- Thenar muscles
- Hypothenar muscles
- Lumbricals
- Palmar interossei
- Dorsal interossei
The thenar muscles are four in total; they are evident and easy to palpate on the radial side of the palmar surface of the hand, at the base of the thumb. They form the ‘ball’ or ‘fleshy’ part of the thumb known as the thenar eminence, and are named as follows: abductor pollicis brevis, adductor pollicis, flexor pollicis brevis, and opponens pollicis.
| Muscle | Origin | Insertion | Innervation | Function |
|---|---|---|---|---|
| Abductor pollicis brevis | Tubercles of the scaphoid and trapezium; flexor retinaculum | Base of the proximal phalanx, radial sesamoid bone | Median nerve | Thumb abduction (moving away from the hand) |
| Adductor pollicis | Palmar base of third metacarpal (transverse head); capitate bone and palmar bases of second and third metacarpals (oblique head) | Base of proximal phalanx, ulnar sesamoid bone | Deep branch of ulnar nerve | Thumb adduction (moving towards the hand) |
| Flexor pollicis brevis | Flexor retinaculum and tubercle of trapezium (superficial head); trapezoid and capitate bones (deep head) | Radial sesamoid bone and base of the proximal phalanx (superficial head); base of first phalanx and radial sesamoid bone (deep head) | Median and ulnar nerves | Thumb flexion (bending) |
| Opponens pollicis | Flexor retinaculum, tubercle of trapezium bone | First metacarpal bone | Median nerve | Thumb flexion, abduction and medial rotation, a combined movement called opposition |
The thenar muscles are capable of various thumb movements; abduction, adduction, flexion, and opposition. Watch the following video to learn more about the thenar muscles.
Also on the palmar surface of the hand, the thenar eminence has a corresponding 'fleshy' region on the ulnar side, easily palpated and visible at the base of the little finger. This region is called the hypothenar eminence and consists of the four hypothenar muscles: abductor digiti minimi, flexor digiti minimi, opponens digiti minimi, and palmaris brevis. These muscles are expert movers of the little finger (fifth digit): they abduct it, flex it, and bring it towards the thumb to facilitate opposition.
The last three groups of hand muscles, that is the lumbricals, dorsal interossei, and palmar interossei, are situated in the deepest layer of the hand and are commonly taken together as one big group called the metacarpal muscles of the hand. They work in unison to help with the extension, flexion, abduction, and adduction of the phalanges. Watch the next video to learn more about those muscles:
Hand nerves, arteries & veins
The nerves innervating the muscles of the hand originate higher up, from a structure called the brachial plexus. This plexus is formed from the anterior branches of the fifth to eighth cervical spinal nerves (C5-C8) and the first thoracic nerve (T1).
The important nerves travelling towards the hand from the brachial plexus are the median, ulnar, and radial nerves. Within the hand, the radial nerve provides only cutaneous innervation along the outside of the thumb. In contrast, the other two nerves supply the hand muscles: the median nerve predominantly supplies the thenar muscles, while the ulnar nerve mainly innervates the hypothenar and other intrinsic muscles of the hand. The main branches projecting onto the hand muscles arise from the median and ulnar nerves:
- Common palmar digital nerves
- Proper palmar digital nerves
The following video and articles will explain everything you need to know about the innervation of the hand, as well as its origins.
Arteries & veins
Since the hand is the terminal region of the upper extremity, numerous anastomoses take place here, resulting in quite a complex vascular network. All the hand arteries originate from two main, larger vessels: the radial and ulnar arteries. These two blood vessels travel down the radial and ulnar sides of the forearm, respectively.
The radial and ulnar arteries give off the following specific branches in the hand:
- Superficial palmar arch
- Deep palmar arch
- Common palmar digital arteries
- Proper palmar digital arteries
- Dorsal carpal arch
- Dorsal metacarpal arteries
- Dorsal digital arteries
- Principal artery of the thumb
Understanding all the above arterial arches and anastomoses is easiest through a visual approach, so the video given below will clarify the entire neurovasculature of the hand.
The veins are very similar to the arteries, so if you understand the latter then the drainage pattern of the hand will be a piece of cake. Watch the video mentioned above to learn all about the hand veins.
Essentially, the veins of the hand drain into either the radial or ulnar veins, and consist of the following:
- Superficial palmar venous arch
- Deep palmar venous arch
- Dorsal venous network of the hand
- Palmar metacarpal digital veins
The anatomy of the hand is incomplete without understanding the wrist. This complex structure connects the entire hand to the radius and ulna, facilitates the passage of tendons and the above-mentioned neurovascular structures from the forearm to the hand, and permits the hand's full range of movement: flexion, extension, abduction, and adduction.
Find out more about the anatomy of the wrist and its movements in the following article:
Video tutorials & articles for Hand Anatomy
Hand Anatomy Quizzes
Finally, take the following quiz, created to test your knowledge of hand anatomy. It focuses on bones, muscles (including attachments, innervation, and functions), arteries, veins, and nerves. Tackle it to cement and master the anatomy of the hand and wrist!
A library is fundamentally an organized set of resources, which include human services as well as the entire spectrum of media (e.g., text, video, hypermedia). Libraries have physical components such as space, equipment, and storage media; intellectual components such as collection policies that determine what materials will be included and organizational schemes that determine how the collection is accessed; and people who manage the physical and intellectual components and interact with users to solve information problems.
Libraries serve at least three roles in learning. First, they serve a practical role in sharing expensive resources. Physical resources such as books and periodicals, films and videos, software and electronic databases, and specialized tools such as projectors, graphics equipment, and cameras are shared by a community of users. Human resources--librarians (also called media specialists or information specialists)--support instructional programs by responding to the requests of teachers and students (responsive services) and by initiating activities for teachers and students (proactive services). Responsive services include maintaining reserve materials, answering reference questions, providing bibliographic instruction, developing media packages, recommending books or films, and teaching users how to use materials. Proactive services include selective dissemination of information to faculty and students, initiating thematic events, collaborating with instructors to plan instruction, and introducing new instructional methods and tools. In these ways, libraries allow instructors and students to share expensive materials and expertise.
Second, libraries serve a cultural role in preserving and organizing artifacts and ideas. Great works of literature, art, and science must be preserved and made accessible to future learners. Although libraries have traditionally been viewed as facilities for printed artifacts, primary and secondary school libraries often also serve as museums and laboratories. Libraries preserve objects through careful storage procedures, policies of borrowing and use, and repair and maintenance as needed. In addition to preservation, libraries ensure access to materials through indexes, catalogs, and other finding aids that allow learners to locate items appropriate to their needs.
Third, libraries serve social and intellectual roles in bringing together people and ideas. This is distinct from the practical role of sharing resources in that libraries provide a physical place for teachers and learners to meet outside the structure of the classroom, thus allowing people with different perspectives to interact in a knowledge space that is both larger and more general than that shared by any single discipline or affinity group. Browsing a catalog in a library provides a global view for people engaged in specialized study and offers opportunities for serendipitous insights or alternative views. In many respects, libraries serve as centers of interdisciplinarity--places shared by learners from all disciplines. Digital libraries extend such interdisciplinarity by making diverse information resources available beyond the physical space shared by groups of learners. One of the greatest benefits of digital libraries is bringing together people with formal, informal, and professional learning missions.
Formal learning is systematic and guided by instruction. Formal learning takes place in courses offered at schools of various kinds and in training courses or programs on the job. The important roles that libraries serve in formal learning are illustrated by their physical prominence on university campuses and the number of courses that make direct use of library services and materials. Most of the information resources in schools are tied directly to the instructional mission. Students or teachers who wish to find information outside this mission have in the past had to travel to other libraries. By making the broad range of information resources discussed below available to students and teachers in schools, digital libraries open new learning opportunities for global rather than strictly local communities.
Much learning in life is informal--opportunistic and strictly under the control of the learner. Learners take advantage of other people, mass media, and the immediate environment during informal learning. The public library system that developed in the U.S. in the late nineteenth century has been called the "free university", since public libraries were created to provide free access to the world's knowledge. Public libraries provide classic nonfiction books, a wide range of periodicals, reference sources, and audio and video tapes so that patrons can learn about topics of their own choosing at their own pace and style. Just as computing technology and world-wide telecommunications networks are beginning to change what is possible in formal classrooms, they are changing how individuals pursue personal learning missions.
Professional learning refers to the on going learning adults engage in to do their work and to improve their work-related knowledge and skills. In fact, for many professionals, learning is the central aspect of their work. Like informal learning, it is mainly self-directed, but unlike formal or informal learning, it is focused on a specific field closely linked to job performance, aims to be comprehensive, and is acquired and applied longitudinally. Since professional learning affects job performance, corporations and government agencies support libraries (often called information centers) with information resources specific to the goals of the organization. The main information resources for professional learning, however, are personal collections of books, reports, and files; subscriptions to journals; and the human networks of colleagues nurtured through professional meetings and various communications. Many of the data sets and computational tools of digital libraries were originally developed to enhance professional learning.
The information resources--both physical and human--that support these types of learning are customized for specific missions and have traditionally been physically separated, although common technologies such as printing, photography, and computing are found across all settings. This situation is depicted in Figure 1.
Digital libraries combine technology and information resources to allow remote access, breaking down the physical barriers between resources. Although these resources will remain specialized to meet the needs of specific communities of learners, digital libraries will allow teachers and students to take advantage of wider ranges of materials and communicate with people outside the formal learning environment. This will allow more integration of the different types of learning, as depicted in Figure 2.
Although not all students or teachers in formal learning settings will use information resources beyond their circumscribed curriculum, and not all professionals will want to interact even occasionally with novices, digital libraries will allow learners of all types to share resources, time and energy, and expertise to their mutual benefit. The following sections illustrate some of the types of information resources that are defining digital libraries.
- For a list of resources on betavoltaics that harness electrons given off in radioactive decay, see Directory:BetaVoltaics
Betavoltaics is an alternative energy technology that promises vastly extended battery life and power density over current technologies. Betavoltaic devices are generators of electrical current, in effect a form of battery, which use energy from a radioactive source emitting beta particles (electrons). The functioning of a betavoltaic device is somewhat similar to that of a solar panel, which converts photons (light) into electric current. This type of radioactive battery (or nuclear battery) operates on the continuous radioactive decay of certain elements, and in theory such batteries can last a very long time.
Betavoltaics were invented over 50 years ago. Betavoltaic power cells are sometimes referred to as betavoltaic batteries, atomic batteries, nuclear batteries, nuclear micro-power sources / devices, or stimulated / accelerated isotope decay power cells. They are sometimes described with the prefix "long-lived" since theoretically they can last as much as 20 years or more.
They have been developed since the 1950s and were initially designed to meet the high-voltage, high-current draw requirements of electrically powered space probes and satellites. (For example, the Army Research Lab tested betavoltaics in 1954 using dissimilar metals; one device not much larger than a car battery achieved a 70-watt output for a short time.) As early as 1973, betavoltaics were suggested for use in long-term medical devices such as pacemakers.
The modern betavoltaic power cell's standard operating voltage is between 100 kV and 1.5 million kV in potential, although cells can be adapted for lower-voltage power requirements. In 2005 a new betavoltaic device using porous silicon diodes was proposed to increase efficiency. The gain comes largely from the greater surface area of the capture material: the porous silicon allows tritium gas to penetrate into many pits and pores, greatly increasing the effective surface area of the device.
A betavoltaic power cell is composed of semiconductors and at least mildly radioactive material. As the radioactive isotope decays, it emits beta particles (electrons). Betavoltaic devices are not "free energy" or over-unity devices. Definitions to remember in discussing their operation include:
- Beta: meaning beta-electron, highly energetic electrons / positrons ejected during the decay of a neutron into a proton.
- Voltaic: pertaining to or producing electric current.
- Betavoltaic: producing / extracting electricity from radioactive decay.
- Betavoltaic power cell / battery: a device that captures beta-electrons emitted by a decaying radio-isotope for the purpose of producing useable electric power.
In a betavoltaic, when an electron strikes a particular interface between two layers of material (a p-n junction), a current is generated. Whether betavoltaics will replace current battery technologies altogether remains to be seen, but recent developments are promising. The following is meant to provide a basic introduction to betavoltaics in general and to the current state of the art in betavoltaic technology. A common source used in betavoltaics is the hydrogen isotope tritium. Unlike most nuclear power sources, which use nuclear radiation to generate heat that in turn generates electricity (thermoelectric and thermionic sources), betavoltaics use a non-thermal conversion process.
Betavoltaic devices use radioactive isotopes as their source of fuel, which is why they are sometimes called radioactive batteries. They are not nuclear reactors in the traditional sense. Unlike typical nuclear power generating devices, betavoltaic power cells do not rely on a nuclear reaction (fission or fusion) or on chemical processes (as in most batteries), and they do not produce radioactive waste products. The atomic nucleus (protons and neutrons) is not split apart or fused with other nuclei. Rather, this process takes advantage of the beta (electron) emissions that occur when a neutron decays into a proton. Internally, the impact of the beta electron on the p-n junction material causes a forward bias in the semiconductor. This makes the betavoltaic cell a forward-biased diode of sorts, similar in some respects to a photovoltaic (solar) cell. Electrons scatter out of their normal orbits in the semiconductor and into the circuit, creating a usable electric current.
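To make the scale of this conversion concrete, here is a rough back-of-the-envelope sketch of the electrical power available from a tritium source. The source activity is an illustrative assumption; the mean beta energy and curie conversion are standard values, and the efficiency is the 25% figure quoted later in this article.

```python
# Back-of-the-envelope output estimate for a tritium betavoltaic cell.
# The activity is an illustrative assumption, not a device spec.

CURIE = 3.7e10          # decays per second per curie
EV_TO_J = 1.602e-19     # joules per electronvolt

activity_ci = 1.0       # assumed source activity: 1 curie of tritium
mean_beta_ev = 5.7e3    # mean tritium beta energy, ~5.7 keV
efficiency = 0.25       # overall conversion efficiency quoted below

decays_per_s = activity_ci * CURIE
beta_power_w = decays_per_s * mean_beta_ev * EV_TO_J   # raw beta power
electrical_w = beta_power_w * efficiency

print(f"Raw beta power:    {beta_power_w * 1e6:.1f} microwatts")
print(f"Electrical output: {electrical_w * 1e6:.1f} microwatts")
```

Even a full curie of tritium yields only microwatts of electrical output, which is consistent with betavoltaics being long-lived, low-power sources rather than high-drain batteries.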
Although betavoltaics use a radioactive material as a power source, it is important to note that beta particles are low-energy and easily stopped by shielding, compared with the gamma rays generated by more dangerous radioactive materials. With proper device construction (i.e., shielding), a betavoltaic device would not emit any dangerous radiation. Leakage of the enclosed material would of course pose health risks, just as leakage of the materials in other types of batteries leads to significant health and environmental concerns.
The theory behind betavoltaic devices is relativistic in nature: protons and neutrons are essentially highly compressed and more or less stable forms of energy (E = mc^2). The decay of a neutron into a proton releases a comparatively large amount of energy, and neutron beta-decay into protons has been called the world's most concentrated source of electricity.
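For reference, the energy released per free-neutron decay follows directly from standard rest-mass energies (values not quoted in the original text; note that for a bound beta emitter such as tritium, nuclear binding reduces the available energy to far less, about 18.6 keV at most):

```latex
% Q-value of free-neutron beta decay: n -> p + e- + antineutrino
\begin{align*}
Q &= m_n c^2 - m_p c^2 - m_e c^2 \\
  &= 939.565~\text{MeV} - 938.272~\text{MeV} - 0.511~\text{MeV} \\
  &\approx 0.782~\text{MeV}.
\end{align*}
```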
Several limitations inhibit the efficiency of betavoltaic power cells. One is the re-absorption of electrons within the radioactive source itself. To reduce this self-absorption of beta energy, the radioactive isotope must be incorporated into the lattice of a semiconductor.
Another limitation is that the highly energetic electrons tend to wear down or break apart the internal components (semiconductors) of the power cell, so betavoltaic devices suffer internal damage as a result of the energetic electrons. Additionally, as the radioactive material decays, its activity slowly decreases (refer to half-life). Thus, over time a betavoltaic device will output less and less power, a decline that plays out over many years.
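Because activity falls off exponentially, the output at any point in a device's life can be sketched from the source's half-life. A minimal sketch follows, assuming a tritium source (half-life about 12.32 years, a standard value); the beginning-of-life output figure is a hypothetical assumption.

```python
# Power decline of a betavoltaic source due to radioactive decay:
# P(t) = P0 * (1/2) ** (t / t_half)

T_HALF_YEARS = 12.32    # tritium half-life, ~12.32 years
p0_microwatts = 100.0   # assumed beginning-of-life output

for years in (0, 5, 10, 20):
    p = p0_microwatts * 0.5 ** (years / T_HALF_YEARS)
    print(f"Year {years:2d}: {p:6.1f} microwatts")
```

This curve is exactly what drives the end-of-life sizing problem described next: the beginning-of-life output must be large enough that the required power is still available years later.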
In device design, one must account for the battery characteristics required at end of life and ensure that the beginning-of-life properties take the desired usable lifetime into account. Much of the betavoltaic research conducted over the years has gone into identifying more durable semiconductors for power cell applications. Materials research into long-lasting semiconductors that can take the punishment of long-term beta-electron impact has been conducted by Sandia National Labs in New Mexico in partnership with the University of New Mexico. One promising material is icosahedral boride, a very hard semiconductor that may lead to a technology for direct conversion of beta-electron energy into electric current. The unique structure of these semiconductors allows for the design of safe, high-output devices.
The isotopes used in stimulated-decay betavoltaic devices are electronically pumped and become inert when the cell runs out of power, eliminating the possibility of toxic or radioactive waste. A thick epoxy shield prevents the chemical components of these cells from leaking out and entering the environment. Also potentially useful in shielding these devices is a new class of metals called "liquid metals": metals with amorphous (as opposed to crystalline) structures. The long lifespan of these devices (as much as 10 to 30 years of continuous use) means that far fewer of them need to be sold than regular batteries, helping to minimize environmental impact.
Thin-film tritiated amorphous silicon cells have been built; these are often called tritium batteries. Tritium batteries are cheap, long-life, high-energy-density, low-power batteries, with a specific power of 24 watts per kilogram, a full-load operating life of 10 years, and an overall efficiency on the order of 25%. Tritium readily substitutes for the hydrogen present in hydrogenated amorphous semiconductors. Tritiated amorphous films are mechanically stable, free from flaking or blistering, adhere well to the substrate, and may be deposited simultaneously onto both conducting and insulating substrates. The deposition technique is a discharge in tritium plasma: a silicon layer sputtered in a tritium/argon ambient at temperatures below 300 °C yields a tritiated amorphous silicon film with a tritium concentration varying from 5 to 30%, depending on deposition conditions. Tritium, however, is typically produced only inside traditional nuclear (fission) reactors. Radioisotopes other than tritium may also be used as a source of energetic electrons, krypton-85 for example.
Stimulated Decay Batteries
One new development in betavoltaic technology is the attempt to control output by artificially stimulating or accelerating the natural beta-decay rates of various materials. This stimulation is usually electromagnetic and/or acoustic in nature. A common misconception is that electromagnetic stimulation of atomic nuclei is impossible because the nucleus is highly resistant to electron bombardment. Electrons are typically repelled by the nucleus; rather than enter it, they tend to orbit in cloud-like formations. When electrons are expelled from the nucleus during beta decay, they are expelled violently, at extremely high velocities (hence the terms "high-energy electron" and "beta particle"). The methods typically used to stimulate or accelerate beta decay, however, involve so-called "standing wave" technology (also called "longitudinal" or "scalar" waves; see also "scalar interferometry").
Some promising fuel sources for stimulated beta-decay include:
These isotopes have decay rates of many thousands of years. For this reason they are not regulated by the United States government, as are many more energetically radioactive materials (those with short half-lives, that is, rapid decay rates). These isotopes have been found to have significant beta-decay energy, and many of them are light metals that can be inexpensively plated.
The primary use for betavoltaics is remote and long-term deployment, such as applications requiring electrical power for a decade or two. Recent technological progress has prompted some to suggest using betavoltaics to trickle-charge conventional batteries in consumer devices, such as cell phones and laptop computers. Betavoltaic applications include:
- Aerospace: satellite and other unmanned vehicle power supplies.
- Power industry: power sources, backup power sources, and remediation of radioactive waste through artificial acceleration of natural isotopic decay rates.
- Bio-technology: long lasting electrically powered implants.
- Counter-terrorism: radioisotope detection sensors for nuclear and / or radiological (so-called "dirty bomb") devices.
Although consumer applications are being developed, it is uncertain whether consumers will be willing to adopt "personal nuclear technology" given the pervasive negative sentiment toward nuclear power in general as inherently unsafe.
Betavoltaic technology is the science of deriving useful electrical power from the beta decay of certain radioactive isotopes. There are inherent theoretical limits to the efficiency and output of betavoltaic devices, but even at low efficiencies their output can be quite significant. Betavoltaic technology has a fairly long history (50 years or more) but has benefited significantly from recent breakthroughs in materials science, nanotechnology, and quantum electrodynamics. It holds particular promise for the aerospace, security, and power industries. The environmental impact of this technology appears minimal, especially when compared with current battery technology.
External articles and references
- Beta Voltaic -- Energy from Radioactive Decay
- Wikipedia contributors. Wikipedia, The Free Encyclopedia. July 19, 2006.
Researchers from the University of California have programmed synthetic cells to mobilize nearby natural cells into complex structures. At first, individual cells self-organized into multi-layered structures resembling simple organisms or the tissues from the first stages of embryonic development. The technology could have a bright future in repairing damaged tissue or re-growing injured organs.
The fundamental question of developmental biology is how complex biological structures emerge from a single fertilized egg. A deep understanding of these processes, and technology built on it, could help thousands of people waiting for organ transplants. Today, scientists mostly place their hopes in growing and 3D-printing organs as a means to combat the organ shortage. However, this technique is currently capable of producing only limited tissues and relies on the use of stem cells to generate mostly 2D structures such as skin. In the study recently published in Science, researchers believe they might have found a way to make more organs viable for transplant: they used a new compound that mimics the DNA's instructions for cells to turn into different tissues.
“People talk about 3D-printing organs, but that is really quite different from how biology builds tissues,” said study senior author Wendell Lim, chair of the department of cellular and molecular pharmacology at UCSF, for UCSF News. “Imagine if you had to build a human by meticulously placing every cell just where it needs to be and gluing it in place. It’s equally hard to imagine how you would print a complete organ, then make sure it was hooked up properly to the bloodstream and the rest of the body.”
During organ development, cells communicate one with another and make coordinated, collective decisions about how to structurally organize themselves. The UCSF research group used genetically modified cells to get other groups of individual cells to self-organize into multi-layered structures. Obtained structures were similar to simple organisms or early-stage tissues in human embryonic development.
“What is amazing about biology is that DNA allows all the instructions required to build an elephant to be packed within a tiny embryo,” said Lim. “It’s easy to get overwhelmed by the complexity of natural systems, so here we set out to understand the minimal set of rules for programming cells to self-assemble into multicellular structures.”
The new technique is made possible by a customizable synthetic signaling molecule called synNotch, short for synthetic Notch receptor. The Delta-Notch signaling pathway is a highly conserved element of cell-cell communication across species. SynNotch cells therefore allow researchers to program other nearby niche cells with specific sets of instructions. For example, the team engineered several groups of neighboring cells to produce "Velcro-like" adhesion molecules called cadherins along with fluorescent marker proteins. Specific directions sent via the synNotch cells induced neighboring cells to change color and self-organize into the desired structures.
According to Lim, the synNotch technique is fundamentally different from other current techniques for tissue regeneration and growth. If researchers can find a way to program increasingly complex structures, then natural cellular development could take care of the rest. All that is left is to supply them with the blueprints.
“The beauty of self-organizing systems is that they are autonomous and compactly encoded,” Lim said. “You put in one or a few cells, and they grow and organize, taking care of the microscopic details themselves.”
Although the synNotch technique is in its early stages and tissue repair is still far away, the research group has laid a good foundation, and surprisingly complex and important structures have already been engineered. Some generated cells formed structures with a basic polarity, which is the gateway to the development of functional complexity: such axes define the front-back, left-right, and head-to-toe plans of individual organisms. By adding different types of cadherin adhesion molecules, the researchers directed cellular assemblages to divide into "head" and "tail" sections, or to produce four distinct radial "arms."
Watch an animation depicting the Notch intracellular signaling pathway:
Learn more about how signaling proteins are built from simple modules arranged in different ways, in Dr. Lim’s video below:
By Andreja Gregoric, MSc