What is Run-of-River?
Run-of-River hydroelectric (ROR hydro) projects generate electricity by using part of natural stream flows and natural elevation differences found in mountainous regions like British Columbia. With ROR hydro, a portion of the mountain stream is diverted by an intake structure into a buried pipe (called “penstock”) where it is channeled downstream into one or more turbines. The flowing water causes the turbine(s) to spin. A generator is directly attached to the turbine and creates electricity. The water from the turbine is released unaffected back into the stream.
ROR hydro differs from conventional storage hydro (the majority of BC Hydro facilities are the conventional storage type) in several ways:
- In conventional storage hydro, a dam is placed across a river to create a reservoir. All (or almost all) of the water is impounded behind the dam and the flow downstream is regulated, which changes the natural variation of flow significantly for the entire length of the downstream river.
- With ROR hydro, only a portion of the stream flow is affected, and even then, only a short length of the river experiences reduced flows (the so-called “diversion reach” between the intake and the powerhouse). The volume of water a ROR project may divert through the penstock to run the turbine depends significantly on a stream’s morphology and environmental characteristics, but a typical power plant would utilize less than two-thirds of a river’s total annual flow.
- Immediately below the powerhouse, all flows diverted to produce power are returned to the stream and the natural downstream flow patterns are preserved.
- ROR hydro has a much smaller environmental footprint compared to traditional reservoir storage hydro projects. ROR projects typically have very little water storage capacity (the so-called “pondage”), compared to the many weeks and months of storage found at conventional large hydro dams. The advantage of not having a large amount of water storage is that less land has to be flooded and therefore the potential footprint impacts are reduced, but without storage, ROR hydro plants can supply electricity only as the flow allows (and flow conditions conducive to ROR power generation do not always correspond to times when electricity demand is high). Accordingly, both technologies have advantages and disadvantages and should be viewed as complementary resources.
Run-of-River Hydro in BC
- Small hydro projects have long been used throughout British Columbia to power mines, mills and towns.
- As of late 2014 there are 56 independent run-of-river projects supplying electricity to BC Hydro and another 25 that are anticipated to reach operation by 2018 in British Columbia.
- Nearly two-thirds of these projects have an installed capacity of less than 10 megawatts (MW) and around 15% are projects with 50 MW or more.
- Although there are countless rivers and streams in the province, not all are suitable for ROR hydro projects. Potential sites must have:
- the right balance between water flow and steepness of the terrain;
- cost effective transmission access;
- the ability to be constructed in a cost effective manner; and, most importantly,
- the ability to operate with minimal or no negative impacts on aquatic and terrestrial life.
- While developers have sought water licenses on hundreds of streams in BC and the theoretical potential for ROR hydro in BC is very high, in reality only a small percentage of this potential will be developed because not all sites fulfill the above-mentioned requirements. Only projects that are both ecologically and economically viable will deliver clean energy in the future.
- ROR hydro is a readily available source of renewable electricity in carefully selected watersheds. It plays a prominent role in helping BC to meet its growing energy needs in a sustainable manner and helps to meet greenhouse gas emissions reduction targets as a part of worldwide efforts to reduce the impacts of climate change.
- ROR hydro projects directly distribute economic benefits to a larger number of communities and municipalities compared to large hydroelectric projects.
- Being situated closer to points of electricity demand reduces transmission losses.
- Distributing generation across many locations also lowers overall power-system risk compared with relying on large generation sources concentrated in a single location.
Environmental & Regulatory Considerations
- All ROR hydro projects undergo a comprehensive environmental assessment process. This process typically requires three or more years of field study followed by an extensive review process by provincial and federal government agencies. It takes 5-6 years to bring a typical ROR hydro project to construction and it requires around two years to build. Many projects take more than 10 years from idea to operation.
- Each ROR hydro project requires over 50 permits, licenses, approvals and reviews from over a dozen government agencies, involving extensive public and First Nations consultation, before it can be built and operated. A recently constructed project had over 1,500 permit conditions to comply with.
- Projects that are successful in achieving environmental approval must adhere to strict operational parameters. As a result of the environmental assessment and permitting process, every project must comply with dozens of operational commitments and/or conditions, which are monitored by independent, third-party engineers and compliance officers to ensure a high standard of environmental protection and mitigation. Among many others, these commitments include the amount of water that must be left in the stream (the in-stream flow requirement) and consequently, how much water can be diverted, and the rate at which the diversion amounts may be changed to prevent “ramping conditions” that may harm fish in the stream.
- Water licenses for power generation purposes issued by the provincial government typically run for a 40-year term. Over this period, the operator pays an annual water rental levy as well as land lease payments to the provincial government.
- When a water license expires, the developer has the opportunity to apply for a renewal of the license. If it is not granted, the rights to use the land and water revert to the provincial government.
- A typical BC 10 MW run-of-river power plant producing 40,000 megawatt hours (MWh) of green energy annually would displace approximately 13,700 tons of carbon dioxide, the equivalent of taking about 3,000 cars off the road (a rough back-of-the-envelope check of these figures is sketched after this list).
- ROR hydro projects ensure environmentally sustainable development of local resources.
- They diversify economic activity in remote areas.
- They provide training and employment opportunities for First Nations and communities.
- They offer a continuous source of clean and green renewable energy with minimal environmental impact.
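The carbon figures quoted above can be sanity-checked with a rough calculation. The emission factor and per-car emissions used below are illustrative assumptions, not numbers taken from this fact sheet:

```python
# Back-of-the-envelope check of the figures quoted above.
annual_generation_mwh = 40_000      # typical 10 MW ROR plant (from the text)
installed_capacity_mw = 10

# Capacity factor: fraction of the year the plant effectively runs at full output.
capacity_factor = annual_generation_mwh / (installed_capacity_mw * 8760)

# Assumed emission factor of the displaced generation mix (tonnes CO2 per MWh).
assumed_emission_factor = 0.34                       # assumption
displaced_co2 = annual_generation_mwh * assumed_emission_factor

# Assumed average passenger-car emissions (tonnes CO2 per year).
assumed_car_emissions = 4.6                          # assumption
car_equivalents = displaced_co2 / assumed_car_emissions

print(f"Capacity factor: {capacity_factor:.0%}")     # ~46%
print(f"CO2 displaced:   {displaced_co2:,.0f} t")    # ~13,600 t
print(f"Car equivalents: {car_equivalents:,.0f}")    # ~3,000
```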
Innovation in BC
- In striving to meet the strict environmental assessment and approval standards in the Province of British Columbia, the developers, consultants and equipment suppliers in the BC ROR hydro industry have become some of the most knowledgeable in the world on minimizing the operational impacts of hydroelectric power on fish and fish habitat.
- This expertise has resulted in innovations such as turbine designs and operating measures that ramp flows up and down safely to minimize impacts on fish and fish habitat, as well as energy dissipation chambers that meet strict environmental operating requirements and support the environmentally sustainable development of local resources.
Just how wet is the Moon's interior?
Scientists had thought our Moon to be mostly free of water up until around a decade ago. However, studies are starting to trickle in that suggest our satellite may be much wetter than we realized. The latest example has drawn on satellite data to uncover volcanic deposits spread out across its surface that contain surprising amounts of trapped water.
Things changed for our presumably parched Moon back in 2008, when trace amounts of water were detected in samples brought back by the Apollo astronauts. There were concerns that the water had entered those samples on their return to our planet, but further inspection has since revealed that they were just as rich in water as some basalts here on Earth.
"The key question is whether those Apollo samples represent the bulk conditions of the lunar interior or instead represent unusual or perhaps anomalous water-rich regions within an otherwise 'dry' mantle," said Ralph Milliken, lead author of the new research and an associate professor in Brown University's Department of Earth, Environmental and Planetary Sciences.
In 2013, NASA's Moon Mineralogy Mapper, an imaging spectrometer that flew aboard India's Chandrayaan-1 lunar orbiter, returned data revealing water locked in mineral grains on the surface of the Moon. And now relying on data from that same instrument and a new thermal correction technique, Milliken and his team have unearthed yet more evidence that the Moon is water-rich.
The new thermal correction technique overcomes one of the previous limitations of using orbital instruments to measure water in volcanic deposits on the Moon, which present as glassy beads formed by magma eruptions coming from its interior. Normally, taking these measurements involves bouncing light off the surface and seeing which wavelengths are reflected back and which are not, which can give an indication of the minerals and compounds within.
But because the surface of the Moon becomes warmer over the course of the day, the readings gathered by the spectrometer can be confused by the emitted thermal radiation, which happens to occur at the same wavelengths as those that represent water. The researchers were able to correct for this, however, by building a detailed temperature profile of the areas of interest and combining it with data from the Apollo samples.
"That thermally emitted radiation happens at the same wavelengths that we need to use to look for water," Milliken said. "So in order to say with any confidence that water is present, we first need to account for and remove the thermally emitted component."
Using this method, the scientists found evidence of water in nearly all of the large volcanic deposits that have been mapped on the Moon's surface, including those near the Apollo landing sites. The amount of water inside these volcanic beads is relatively small, only around 0.05 percent of their weight, but the deposits are large and the water could potentially be extracted by future lunar explorers.
"The distribution of these water-rich deposits is the key thing," Milliken says. "They're spread across the surface, which tells us that the water found in the Apollo samples isn't a one-off. Lunar pyroclastics (volcanic deposits) seem to be universally water-rich, which suggests the same may be true of the mantle."
If indeed the Moon's interior is wet, it makes for some interesting discussion around its formation. It is thought that the Moon was formed when a planet-sized object smashed into the Earth during the early days of our solar system. Any of the hydrogen needed to form water would be unlikely to survive such an impact, which is a key reason that the Moon was assumed to be so dry for so long.
One possible explanation for the Moon's newly discovered wetness is that water was transported there by ancient asteroids soon after its formation. Last year, scientists working with data from unmanned lunar missions published a paper arguing that a class of water-rich asteroids delivered as much as 80 percent of the Moon's water, with comets also making a contribution. The topic is still very much subject to debate.
"The growing evidence for water inside the Moon suggest that water did somehow survive, or that it was brought in shortly after the impact by asteroids or comets before the Moon had completely solidified," Li said. "The exact origin of water in the lunar interior is still a big question."
The research was published in the journal Nature Geoscience.
Source: Brown University
Ethernet is an important topic in the Cisco CCNA because network administrators typically oversee LANs (local area networks), and pretty much all LANs today use some form of Ethernet, whether it be copper Fast Ethernet, fiber optic Gigabit Ethernet, or wireless Ethernet. Ethernet became what it is today because it was cheap and easy to install. Its standards and hardware have continued to improve (e.g. hubs to switches), and it has remained backwards compatible, with the ability to change physical implementations from wireless, to fiber, to copper, as well as change speeds and standards, all within the same functional network.
Ethernet and Collision Domains
Early versions of Ethernet used coaxial cable (10Base5 Thicknet and 10Base2 Thinnet). The physical topology could be described as a single cable that all users connected to or tapped into; this was known as a physical bus or multi-access network. Logically, Ethernet was also a bus, or multi-access network: all hosts on the network could see each other, and all packets as well. All users were essentially on the same cable, and therefore in the same collision domain. What characterizes a collision domain is that when two users send packets at the same time, the result is a collision, a spike of voltage on the wire, and all sending of packets must cease for a short period of time.
If you have ten hosts connected to a hub using regular Ethernet cables (10BaseT, twisted pair), then all hosts comprise a single collision domain. If you connect too many hosts to a hub, or extend the network by connecting hubs to more hubs and more hosts, then network performance will decrease and collisions will increase. In this way, if you have ten hosts connected to a hub and that hub is connected to another hub with another ten hosts, then that network also comprises just a single collision domain.
Collisions were exacerbated by the fact that Ethernet was designed as a multi-access network, where all hosts see all other hosts and all packets as well. The number of hosts in the network, and the presence of broadcast packets coming from multiple hosts, would increase the chances for collisions to occur.
The advent of switches was a significant improvement for Ethernet and local area networks. Switches provide many important improvements to a network, including collision-free networking and better bandwidth utilization. Whereas a hub receives a frame on one port and automatically forwards it out of all other ports, a switch maintains a table or map of MAC addresses to switchports, and is able to switch a frame to the destination port where the destination MAC address resides. Only when a switch does not have the MAC address in its table, or when the frame is a layer 2 broadcast, will it forward the frame out of all ports except the one it came in on. Thus fewer frames travel on the network unnecessarily. Since traffic is sent to only one port, each port or link on a switch is considered its own collision domain. Thus, switches break apart or create collision domains, as opposed to hubs, which extend or grow collision domains. With the advent of full duplex communications, hosts connected to switches could both send and receive frames at the same time without collisions.
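The learn-then-forward behavior described above can be illustrated with a small sketch. This is a simplified teaching model, not an implementation of any real switch operating system:

```python
# Minimal model of a learning switch: learn the source MAC per port,
# forward known unicasts out a single port, and flood unknown unicasts
# and broadcasts out all other ports.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}                      # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn/refresh the source MAC

        # Known unicast: send out exactly one port (its own collision domain).
        if dst_mac != BROADCAST and dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]

        # Unknown unicast or broadcast: flood out every port except the ingress.
        return [p for p in self.ports if p != in_port]

switch = LearningSwitch(num_ports=4)
print(switch.handle_frame(0, "aa:aa:aa:aa:aa:aa", BROADCAST))            # flood: [1, 2, 3]
print(switch.handle_frame(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned: [0]
```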
Ethernet and ARP
ARP stands for Address Resolution Protocol, and its function is to resolve IP addresses to Layer 2 MAC addresses. When a frame or “packet” needs to be delivered to a host on a local area network, it needs to be delivered to the host’s MAC address. If the sending host does not have the destination host’s MAC address in its ARP cache, it will send an ARP broadcast packet requesting the MAC address for the destination host’s IP address. So a MAC address needs to be resolved from an IP address before a packet can be delivered on a local network. In this way, ARP plays an important role in the functioning of local area networks. In the video below I demonstrate the ARP process using a command prompt and Wireshark.
For more information on ARP: http://en.wikipedia.org/wiki/Address_Resolution_Protocol
For more information on Multicast addresses: http://en.wikipedia.org/wiki/Multicast_address
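The cache-then-broadcast behavior described above can also be modeled in a few lines. This is a toy illustration with made-up addresses, not a real protocol implementation:

```python
# Toy model of ARP resolution: check the local cache first, and "broadcast"
# a who-has request only on a miss. All addresses below are hypothetical.

arp_cache = {"192.168.1.1": "00:11:22:33:44:55"}        # IP -> MAC
lan_hosts = {"192.168.1.20": "aa:bb:cc:dd:ee:ff"}       # hosts that would answer

def resolve_mac(ip):
    if ip in arp_cache:                  # cache hit: no broadcast needed
        return arp_cache[ip]
    print(f"ARP request (broadcast): who has {ip}?")
    mac = lan_hosts.get(ip)              # the owning host would reply unicast
    if mac is not None:
        arp_cache[ip] = mac              # cache the reply for next time
    return mac

print(resolve_mac("192.168.1.20"))       # triggers a broadcast, then caches
print(resolve_mac("192.168.1.20"))       # answered straight from the cache
```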
Hexadecimal Notation, Counting and Conversion
The ability to convert binary to decimal and vice versa is important for the Cisco CCNA, but you must also know how to convert hexadecimal. Hexadecimal is a shorthand notation that is used in computers all the time. MAC addresses are written in hexadecimal notation like this: B3:A2:77:00:F1:C9. There are hexadecimal color charts for HTML and the web, like 0xFF0000 which equals the color red, and hexadecimal is used in programming as well.
In the Cisco CCNA, hexadecimal notation is introduced when learning about layer 2 physical addressing, or MAC addresses. MAC addresses are 48 bits long and are typically written as six two-character pairs separated by colons or dashes (e.g. B3:A2:77:00:F1:C9 or B3-A2-77-00-F1-C9), but they can also be written as two groups of six or three groups of four (e.g. B3A277:00F1C9 or B3A2:7700:F1C9). You will also find hexadecimal numbers with a “0x” prefix or an “h” suffix to indicate that the number is in hexadecimal notation.
Hexadecimal is a Base16 counting system because there are 16 characters or numbers (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f), with “a” through “f” equaling the numbers 10 through 15. Since a single hexadecimal digit or character has 16 possible values, we can equate one hexadecimal character with 4 bits (2^4 equals 16). This creates an easy conversion between an 8-bit binary number and a 2-digit hexadecimal number:
10111000 in binary = 184 in decimal
1011 – 1000 (splits the 8 bits into two 4 bit nibbles)
1011 = 11 in decimal and B in hex
1000 = 8 in decimal and 8 in hex
0xB8 = 184 in decimal
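The same conversion can be checked with Python's built-in number formatting; this is just a quick illustration of the arithmetic above:

```python
# Binary, decimal, and hexadecimal views of the worked example above.
value = 0b10111000            # binary literal
print(value)                  # 184 (decimal)
print(hex(value))             # 0xb8
print(int("B8", 16))          # 184 -- parse hex back to decimal

# Split the byte into two 4-bit nibbles, one hex digit each.
high, low = value >> 4, value & 0x0F
print(high, low)              # 11 8  -> hex digits B and 8

# Format a 48-bit MAC address as six colon-separated hex pairs.
mac_int = 0xB3A27700F1C9
mac_bytes = mac_int.to_bytes(6, "big")
print(":".join(f"{b:02X}" for b in mac_bytes))   # B3:A2:77:00:F1:C9
```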
Ethernet, Data Link, and Local Area Network Tips
- Ethernet functions on both layer 2, the Data Link layer, and layer 1, the Physical layer. In the TCP/IP model, layers one and two from the OSI model are combined into the Network Access layer.
- The Data Link layer has an upper and a lower sublayer: the LLC and MAC sublayers.
- 802.2 is the LLC (logical link control) sublayer. Its role is to function in software and identify the network layer protocol above it.
- Ethernet at its core is CSMA/CD: carrier sense multiple access with collision detection.
- Hubs cause collisions. Switches cause no collisions because each port is its own collision domain.
- Source and destination MAC addresses change as a frame travels across networks. Source and destination IP addresses do not change.
- You only need to send packets/frames to the gateway/router when you are trying to contact a different network.
- Packets/frames destined for a host on the same network do not have to go through the router but are delivered directly to the destination host’s MAC address.
Other Ethernet Topics
Ethernet as a WAN
MAC Address Structure
Ethernet Unicast, Multicast, and Broadcast
10Mbps, 100Mbps, 1000Mbps Ethernet
Shodō is the Japanese word for calligraphy. It means not just penmanship, but the Way, or Path, of writing. In China and Japan, Shodō has long been regarded as one of the most important forms of art. Most of the world’s languages use phonetic symbols; apart from the earliest languages, there are hardly any that use hieroglyphs, pictographs or ideographs as the basis for their written language.
Languages using phonetic symbols need only a relatively few symbols to fulfill their communicative purposes: it is this need for only a few symbols that represents the strength of such systems. However, in China as well as in Japan the written language is based on direct visual representation. At the simplest level of vocabulary, each written word is actually a picture of the object it describes.
There are about two hundred of these elemental ideographs for common objects. They are the foundation of the forty thousand and more Chinese and Japanese characters. Until the 20th century, writing in the east was done with a brush dipped in ground ink. With the supple brush, the elegantly complex characters, and the proper frame of mind, every piece of writing becomes an act of profound expression.
Oriental calligraphy is the dynamic execution of lines in a vast range of rhythmic combinations. The infinitely variable balance of the successive strokes, the flow and the tension, the strength and proportion of each character are the unique product of the form, the writer’s personality, and the moment. Through Shodō, the language, the eye, and the hand are linked to the deeper sources of consciousness.
There are five traditional styles of Shodō, ranging from rigid precision to flowing smoothness. It is essential to have an appreciation of how characters have changed over the centuries. Before characters were introduced into Japan, they had undergone many stylistic changes, producing a magnificent variety of possibilities. By the time these characters arrived in Japan around the fifth and sixth centuries, all the basic forms and styles of Chinese characters as they now exist had been “completed.”
The changes could be divided into five historical stages as follows:
1. Tensho – Seal script of the Shang, Chou, and Ch’in dynasties (1500 – 200 B.C.) is the oldest and most formal style. This script consisted of well-balanced horizontal and vertical, left and right symmetrical elements. Over the years, however, these characters tended to become somewhat vertically elongated.
2. Reisho – Clerical script or modified seal script of the Han dynasty and the Three Kingdoms (200 B.C. – 250 A.D.). Basically this style is similar to the Seal script except that symmetrical balance began to break down and the elements tended to become somewhat horizontal and flatter.
3. Sōsho – Cursive script of the Han and the Six dynasties (200 B.C. – 590 A.D.). Developed as an abbreviated form of the Clerical script. In Sōsho the natural flow of the hand reaches its maximum.
4. Kaisho – Standard script of the Six dynasties. This script is structurally balanced somewhat differently from Tensho and Reisho. Most characters have a tendency to form a square.
5. Gyōsho – Semicursive script of the Six dynasties. Developed together with Kaisho as an informal, faster way of writing.
As the calligraphic styles evolved over the centuries, calligraphers struggled, of course, to refine their techniques, and through trial and error they have brought Shodō to its present high level of achievement. The traditions of Wang Hsi-chih, Ou-yang Hsun, the Han Clerical styles, and the Six Dynasty styles were all created from these crucibles of experience. It has become standard practice to study these techniques to become a full-fledged calligrapher.
There are almost no examples of even geniuses creating outstanding art without reference to past traditions. In order to transcend the rules, one must study and master the techniques of the past and follow the moral values of the teachers of the past.
Respect the tradition, master it, and find your self-expression within it: such has long been the teaching of the Kampo Ryu in Shodō.
Genetic engineering is the deliberate manipulation of DNA, using techniques in the laboratory to alter genes in organisms. Even if the organisms being altered are not microbes, the substances and techniques used are often taken from microbes and adapted for use in more complex organisms.
Steps in Cloning a Gene
Let us walk through the basic steps for cloning a gene, a process by which a gene of interest can be replicated many times over. Let us pretend that we are going to genetically engineer E. coli cells to glow in the dark, a characteristic that they do not naturally possess.
- Isolate DNA of interest – first we need to identify the gene or genes that we are interested in, the target DNA. If we want our E. coli cells to glow in the dark, we need to find an organism that possesses this trait and identify the gene or genes responsible for the trait. The green fluorescent protein (GFP) commonly used as an expression marker in molecular techniques was originally isolated from jellyfish. In cloning a gene it is helpful to use a cloning vector, typically a plasmid or virus, capable of independent replication that will stably carry the target DNA from one location to another. Plasmid vectors are available from both bacteria and yeast.
- Cut DNA with restriction endonucleases – once the target and vector DNA have been identified, both types of DNA are cut using restriction endonucleases. These enzymes recognize short sequences of DNA that are 4-8 bp long. The enzymes are widespread in both bacteria and archaea, with each enzyme recognizing a specific inverted repeat sequence that is palindromic (reads the same on each DNA strand, in the 5’ to 3’ direction).
While some restriction endonucleases cut straight across the DNA (i.e. a blunt cut), many make staggered cuts, producing a very short region of single-stranded DNA on each strand. These single-stranded regions are referred to as “sticky ends,” and are invaluable in molecular cloning since the unpaired bases will recombine with any DNA having the complementary base sequence. A short code sketch after these cloning steps illustrates how a palindromic recognition site can be located and cut in a DNA sequence.
- Combine target and vector DNA – after both types of DNA have been cleaved by the same restriction endonuclease, the two types of DNA are combined together with the addition of DNA ligase, an enzyme that repairs the covalent bonds on the sugar-phosphate backbone of the DNA. This results in the creation of recombinant DNA, DNA molecules that contain the DNA from two or more sources, also known as chimeras.
- Introduce recombined molecule into host cell – once the target DNA has been stably combined with vector DNA, the recombinant DNA must be introduced into a host cell, in order for the genes to be replicated or expressed. There are different methods for introducing the recombinant DNA, largely depending upon the complexity of the host organism. In the case of bacteria, transformation is often the easiest method, using competent cells to pick up the recombinant DNA molecules. Alternatively, electroporation can be used, where the cells are exposed to a brief pulse of high-voltage electricity causing the plasma membrane to become temporarily permeable to DNA passage. While some cells will acquire recombinant DNA with the appropriate configuration (i.e. target DNA combined with vector DNA), the method also will yield cells carrying recombinant DNA with alternate DNA combinations (i.e. plasmid DNA combining with another plasmid DNA molecule or target DNA attached to more target DNA). The mixture is referred to as a genomic library and must be screened to select the appropriate clone. If random fragments of DNA were originally used (instead of isolation of the appropriate target DNA genes), the process is referred to as shotgun cloning and can yield thousands or tens of thousands of clones to be screened.
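As referenced in the restriction-enzyme step above, here is a small illustrative sketch of locating a palindromic recognition site and the staggered cut that produces sticky ends. EcoRI's GAATTC site is used as the example; the plasmid sequence is made up:

```python
# Locate a palindromic restriction site in a DNA string and show the
# staggered ("sticky-end") cut. EcoRI recognizes GAATTC and cuts G^AATTC.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

SITE = "GAATTC"     # EcoRI recognition sequence
CUT_OFFSET = 1      # cut between G and A on the top strand

# A palindromic site reads the same 5'->3' on both strands.
assert reverse_complement(SITE) == SITE

def find_cut_positions(dna, site=SITE, offset=CUT_OFFSET):
    """Return top-strand cut positions for every occurrence of the site."""
    positions, start = [], dna.find(site)
    while start != -1:
        positions.append(start + offset)
        start = dna.find(site, start + 1)
    return positions

plasmid = "ATGCCGAATTCGGTTAGCCGAATTCAT"   # hypothetical vector fragment
for cut in find_cut_positions(plasmid):
    # The AATT overhangs left on each fragment are the "sticky ends".
    print(plasmid[:cut], "/", plasmid[cut:])
```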
Introducing recombinant DNA into cells other than bacteria
Agrobacterium tumefaciens and the Ti plasmid
Agrobacterium tumefaciens is a plant pathogen that causes tumor formation called crown gall disease. The bacterium contains a plasmid known as the Ti (tumor inducing) plasmid, which inserts bacterial DNA into the host plant genome. Scientists utilize this natural process to do genetic engineering of plants by inserting foreign DNA into the Ti plasmid and removing the genes necessary for disease, allowing for the production of transgenic plants.
A gene gun uses very small metal particles (microprojectiles) coated with the recombinant DNA, which are blasted at plant or animal tissue at a high velocity. If the DNA is taken up by the cell and incorporated into its DNA, the genes can be expressed.
For a viral vector, virulence genes from a virus can be removed and foreign DNA inserted, allowing the virus capsid to be used as a mechanism for shuttling genetic material into a plant or animal cell. Marker genes are typically added that allow for identification of the cells that took up the genes.
Gel electrophoresis is a technique commonly used to separate nucleic acid fragments based on size. It can be used to identify particular fragments or to verify that a technique was successful.
A porous gel made of agarose is prepared, with the concentration adjusted based on the expected fragment sizes. Nucleic acid samples are deposited into wells in the gel and an electrical current is applied. Nucleic acid, with its negative charge, will move towards the positive electrode, which should be placed at the bottom of the gel. The nucleic acid will move through the gel, with the smallest pieces encountering the least resistance and thus moving through the fastest. The distance travelled by each nucleic acid fragment can be compared to a DNA ladder, a set of fragments of known size.
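The ladder comparison can be made quantitative: over a limited range, migration distance is roughly linear in the logarithm of fragment size, so a straight-line fit to the ladder bands lets us interpolate an unknown band's size. The ladder sizes and distances below are made-up example values, not data from any particular gel:

```python
# Estimate an unknown fragment's size from a DNA ladder on the same gel.
import numpy as np

ladder_sizes_bp = np.array([1000, 750, 500, 250, 100])          # known ladder bands
ladder_distance_mm = np.array([18.0, 22.0, 27.5, 37.0, 50.0])   # measured migration

# Fit: distance = a * log10(size) + b
a, b = np.polyfit(np.log10(ladder_sizes_bp), ladder_distance_mm, 1)

def estimate_size(distance_mm):
    """Invert the fit to estimate fragment size (bp) from migration distance."""
    return 10 ** ((distance_mm - b) / a)

unknown_band_mm = 30.0
print(f"Estimated fragment size: {estimate_size(unknown_band_mm):.0f} bp")
```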
Polymerase Chain Reaction (PCR)
The polymerase chain reaction, or PCR, is a method used to copy or amplify DNA in vitro. The process can yield a billionfold amplification of a single gene within a short period of time. The template DNA is mixed with all the ingredients necessary to make DNA copies: primers (small oligonucleotides that flank the gene or genes of interest by recognizing sequences on either side of it), nucleotides (the building blocks of DNA), and DNA polymerase. The steps involve heating the template DNA in order to denature or separate the strands, dropping the temperature to allow the primers to anneal, and then heating the mixture up to allow the DNA polymerase to extend the primers, using the original DNA as an initial template. The cycle is repeated 20-30 times, exponentially increasing the amount of target DNA in a few hours.
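The "billionfold" figure follows directly from the doubling arithmetic: an ideal reaction doubles the target each cycle, so n cycles yield 2^n copies per starting template. A quick sketch (real reactions fall short of perfect doubling):

```python
# Copies of target DNA after a given number of PCR cycles.
def pcr_copies(starting_templates, cycles, efficiency=1.0):
    """Ideal doubling when efficiency=1.0; lower values model imperfect cycles."""
    return starting_templates * (1 + efficiency) ** cycles

for n in (20, 30):
    print(f"{n} cycles, perfect doubling: {pcr_copies(1, n):,.0f} copies")
# 20 cycles -> ~1 million copies; 30 cycles -> ~1.07 billion copies

print(f"30 cycles at 90% efficiency: {pcr_copies(1, 30, 0.9):,.0f} copies")
```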
Uses of Genetically Engineered Organisms
There can be numerous reasons to create a genetically modified organism (GMO) or transgenic organism, defined as a genetically modified organism that contains a gene from a different organism. Typically the hope is that the GMO will provide needed information or a product of value to society.
Source of DNA
Genetically engineered organisms can be made so that a piece of DNA can be easily replicated, providing a large source of that DNA. For example, a gene associated with breast cancer can be spliced into the genome of E. coli, allowing for the rapid production of the gene so that it may be sequenced, studied, and manipulated, without requiring repeated tissue donations from human volunteers.
Source of RNA
Antisense RNA is ssRNA that is complementary to the mRNA that will code for a protein. In cells it is made as a way to control target genes. There has been increasing interest in the use of antisense RNA as a way to prevent diseases that are caused by the production of a particular protein.
Antisense RNA. By Robinson R [CC BY 2.5], via Wikimedia Commons
Source of Protein
Since microbes replicate so rapidly, it can be extremely advantageous to use them to manufacture proteins of interest or value. Given the right promoters, bacteria will express genes for proteins that are not naturally found in bacteria, such as cytokines. Genetically engineered cells have been used to make a wide variety of proteins of use to humans, such as insulin or human growth hormone.
genetic engineering, cloning, target DNA, green fluorescent protein (GFP), cloning vector, restriction endonuclease, sticky ends, DNA ligase, recombinant DNA, chimera, transformation, electroporation, genomic library, shotgun cloning, Agrobacterium tumefaciens, Ti plasmid, gene gun, microprojectiles, viral vector, gel electrophoresis, DNA ladder, polymerase chain reaction (PCR), template DNA, primer, nucleotide, DNA polymerase, denaturing, annealing, extending, genetically modified organism (GMO), transgenic organisms, antisense RNA.
- Define genetic engineering.
- Identify and describe the basic steps used in the genetic engineering of a bacterial cell. What components are needed and why?
- Summarize the different ways that recombinant DNA can be inserted into a cell or organism. Be able to provide specific examples.
- Describe techniques used in the manipulation of DNA. What are the essential components of each process?
- Explain the different applications of genetic engineering.
Although several types of beetles are pests on trees, only twig girdlers chew a circular pattern around the limbs. The female beetles girdle small twigs and limbs to provide a nursery for their eggs in the fall. The damaged limbs drop into the lower branches of infested trees or fall onto the ground. Small infestations of twig girdlers are usually not a problem, but large infestations can result in the stunted growth, or even death, of a tree.
Twig girdlers chew the bark and inner wood in a complete circle around limbs and twigs, leaving a small, central core of wood. The girdled limbs may hang on by the central core for some time before falling off. Although the damage is usually minor, dried brown leaves on the severed limbs can be unsightly. The symmetry of a tree can be affected if many offshoots grow from the damaged end of a girdled limb. Repeated damage can cause deformities in tree limbs, such as crooks or forks. A heavy infestation of twig girdlers can reduce the production of nut and fruit trees by severing the limbs that bear produce.
Adult twig girdlers are grayish brown beetles with broad ash-colored stripes on their wing covers. They are difficult to see because their coloration is similar to the bark of common host trees, such as oaks (Quercus spp.) and hickories (Carya spp.). (Oaks grow in U.S. Department of Agriculture plant hardiness zones 3 through 9, and hickories grow in zones 6 through 8.) Twig girdlers are about 1/2 to 5/8 inches long with antennae that are slightly longer than their bodies. After mating, the females begin girdling limbs and twigs, and then lay eggs just under the bark or in small pits that they secrete a sealant over.
Each female twig girdler lays three to eight eggs in a girdled stem, and lays a total of 50 to 200 eggs. The tiny, oblong, white eggs hatch in about three weeks. The larvae are whitish, legless grubs, which overwinter in the girdled twigs. They rapidly grow up to 1 inch long in the spring and feed on the inner portion of the twigs, creating tunnels toward the end of the twigs. When it is time to pupate, the mature larvae wall off chambers in the tunnels with shredded fibers.
The pupae of twig girdlers are almost as long as the larvae and darken in color as they mature. Since they do not form cocoons, their antennae, legs and wing covers are visible while they pupate. In 12 to 14 days, adult twig girdlers chew holes in the bark of their chambers and emerge. They feed on the tender bark at the end of the girdled stems, then move to live host trees.
Chemical controls are usually not needed for twig girdlers. The best method of control is removing and destroying the fallen limbs in the fall, winter and spring. The population of twig girdlers can be reduced in one or two seasons by eradicating the eggs, larvae and pupae in the fallen limbs.
Adult Congenital Heart Disease and Dental Issues: What’s Up with Preventive Antibiotics?
What is endocarditis?
Endocarditis (also called subacute bacterial endocarditis, or SBE) is an infection of the inner lining of the heart. It can destroy heart tissue and spread infection throughout your body. SBE can cause severe illness or even death.
Bacteria can get into your bloodstream in many different ways, especially through your mouth. Bacteria live in your mouth, often around your gums and teeth. These bacteria can get into your bloodstream especially if you have bleeding gums. As the blood travels through your heart, the bacteria can settle in the heart tissue or valves. This is more likely to happen if there are areas in your heart that are abnormal, if you have abnormal valves or in areas that have had surgery.
The best way to protect yourself from SBE is by keeping your mouth as healthy and clean as possible! Good oral health is really important. Chewing your food well, brushing and flossing will keep the bacteria levels in your mouth low.
Do people with CHD need antibiotics before dental visits?
Sometimes. Heart doctors used to recommend that all patients with CHD take antibiotics before certain kinds of medical procedures that might involve bleeding. This included visits to the dentist. However, the American Dental Association found that keeping your mouth as clean and healthy as possible can lower the amount of bacteria you are exposed to in daily life without antibiotic use.
This is more important to lowering your risk of SBE than taking antibiotics before a dental procedure. In other words, you are more likely to get SBE from bacteria living in a dirty mouth than from a dental procedure.
In March 2007 the American Heart Association (AHA) released recommendations for preventing SBE.
Most CHD patients do not need to take antibiotics before dental visits or other medical procedures.
The AHA knows that certain patients have a higher risk of getting very ill or dying if they develop SBE. These people should continue to take antibiotics before dental visits or other procedures. This includes:
- People who have unrepaired cyanotic (“blue”) CHD. This includes people with palliative shunts, conduits and single ventricles.
- People who have repaired CHD but who have a prosthetic device or have defects near a prosthetic device.
- All patients with a mechanical or tissue artificial valve.
- All patients with other prosthetic materials, such as GORE-TEX patches, for six months after placement.
- Patients who have ever had SBE before.
- Patients who received a heart transplant and who later developed abnormal heart valves.
You should take antibiotics before all dental procedures when:
- Your gums or top region of your teeth might be moved around. This includes dental cleanings and the filling of cavities.
- The inside soft tissues of your mouth (oral mucosa) might be punctured.
You do not need antibiotics:
- Before dental X-rays.
- If you have bleeding of the mouth or lips from an accident or trauma.
This is a question your heart care team will need to answer.
No. The AHA and the American College of Cardiology (ACC) only give advice to help you be healthy. These are not rules. Some people may find it difficult to change the way they have cared for their heart. Discuss any questions or concerns you have with your ACHD doctor.
No. In fact, a research study in 2012 found no increase in SBE cases after the new AHA recommendations were made.
In a healthy mouth there is a thin surface of tissue that prevents bacteria from getting into your bloodstream and lymphatic system, a group of vessels that drains fluid from your body into your blood stream. Your mouth is also full of saliva and saliva is your friend! It is antimicrobial, balances the pH of your mouth and helps control plaque. Saliva also contains tiny amounts of minerals that help protect your tooth enamel.
There are many kinds of health problems you may experience in your mouth. Some of the most common include:
- Cavities. A cavity is a hole or structural damage in a tooth.
- Gum disease. Gum disease can include gingivitis (red swollen gums that bleed easily) and periodontal disease (gums that have been destroyed).
- Dry mouth. This is when there is not enough saliva in your mouth. Dry mouth can be caused by some kinds of medications, especially diuretics (water pills) and blood pressure medicines. It may be caused by hormonal or nutrient deficiencies. You can also have dry mouth from anxiety or depression, diabetes, or a blocked saliva gland.
- Oral cancer. Cancers of your mouth often include your lips or tongue.
- Canker sores and cold sores. Canker sores are painful, open sores in your mouth. They can happen after an injury to your mouth or be caused by many other things like a virus, stress or a lack of vitamins. Cold sores (also called fever blisters) are small, painful blisters caused by the herpes simplex virus.
Dedicate five minutes every day to your heart health! Taking care of your mouth helps protect your heart. Here are easy steps for keeping your mouth – and your heart – healthy:
- Brush your teeth at least twice a day for 2 minutes each time. It’s best to brush your teeth after meals. This cuts down on the amount of time that your teeth are exposed to sweet or acidic things that can break down tooth enamel. You should gently brush all sides of your teeth with a soft brush using round and short back-and-forth strokes. Gently brush along the gum line and lightly brush your tongue.
- Floss your teeth at least once a day. Have your dentist or your dental hygienist show you the best way to floss.
- Change your toothbrush every three months.
- Get regular dental check-ups twice a year, or more if your dentist thinks you need to pay extra attention to your teeth.
Your dentist is also your ally in helping protect your heart. See your dentist immediately if:
- Your gums bleed often.
- You have red or white patches on your gums, tongue, or the floor of your mouth (under your tongue).
- You have mouth/jaw pain.
- You have sores that do not heal.
- You have problems swallowing or chewing.
Yes – living a heart-healthy lifestyle affects your oral health, and vice-versa. Both your ACHD doctor and your dentist would advise you to:
- Eat a healthy diet with fiber-rich fruits and vegetables. These foods stimulate your mouth to produce saliva, which protects and strengthens your tooth enamel.
- Limit soda – even diet soda – because it can erode the enamel.
- Avoid snacking on sugary or starchy snacks between meals.
- Do not use tobacco products. This includes chewing tobacco. Tobacco products can cause gum disease, oral and throat cancers, oral fungal infections, stained teeth, and bad breath.
- Limit the amount of alcohol you drink. Alcohol use has been linked to oral and throat cancers.
The bottom line
Poor oral health is associated with (though not necessarily the cause of) the development of general heart disease.
Good oral health is essential to helping protect your heart. According to Healthy People 2020, a 10-year program to improve the health of all Americans, good oral health makes it easier for you to speak, taste, smile, smell, touch, chew, swallow and make facial expressions to show your feelings and emotion, all important to living a full and complete life.
ACHA partnered with Disty Pearson, PA-C, senior physician assistant with the Boston Adult Congenital Heart and Pulmonary Hypertension service at Boston Children’s Hospital and Brigham & Women’s Hospital. She has been a member of the ACHA Medical Advisory Board since its beginning, and has worked with adults with CHD for more than 30 years.
Obesity and overweight
08 June 2006
One of the most common problems related to lifestyle today is being overweight. Severe overweight or obesity is a key risk factor in the development of many chronic diseases such as heart and respiratory diseases, non-insulin-dependent diabetes mellitus or Type 2 diabetes, hypertension and some cancers, as well as early death. New scientific studies and data from life insurance companies have shown that the health risks of excessive body fat are associated with relatively small increases in body weight, not just with marked obesity.
Obesity and overweight are serious problems that pose a huge and growing financial burden on national resources. However, the conditions are largely preventable through sensible lifestyle changes.
2. What is obesity and overweight?
Obesity is often defined simply as a condition of abnormal or excessive fat accumulation in the fat tissues (adipose tissue) of the body leading to health hazards. The underlying cause is a positive energy balance leading to weight gain i.e. when the calories consumed exceed the calories expended.
In order to help people determine what their healthy weight is, a simple measure of the relationship between weight and height called the Body Mass Index (BMI) is used. BMI is a useful tool that is commonly used by doctors and other health professionals to determine the prevalence of underweight, overweight and obesity in adults. It is defined as the weight in kilograms divided by the square of the height in metres (kg/m2). For example, an adult who weighs 70 kg and whose height is 1.75 m will have a BMI of 22.9 kg/m2.
Overweight and obesity are defined as BMI values equal to or exceeding 25 and 30, respectively. Typically, a BMI of 18.5 to 24.9 is considered ‘healthy’, but an individual with a BMI of 25–29.9 is considered “at increased risk” of developing associated diseases and one with a BMI of 30 or more is considered at “moderate to high risk”.
BODY MASS INDEX
- <18.5 Underweight
- 18.5 - 24.9 Healthy weight
- 25 - 29.9 Overweight
- ≥30 Obese
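A minimal sketch of the calculation and the categories in the table above, using the 70 kg, 1.75 m adult from the example:

```python
# BMI = weight (kg) divided by the square of height (m).
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "Underweight"
    if value < 25:
        return "Healthy weight"
    if value < 30:
        return "Overweight"
    return "Obese"

example = bmi(70, 1.75)
print(f"BMI = {example:.1f} kg/m2 -> {bmi_category(example)}")
# BMI = 22.9 kg/m2 -> Healthy weight
```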
Fat distribution: apples and pears
BMI does not, however, give us information about total body fat or how the fat is distributed in the body, which is important because an excess of abdominal fat in particular can have health consequences.
A way to measure fat distribution is the circumference of the waist. Waist circumference is unrelated to height and provides a simple and practical method of identifying overweight people who are at increased risk of obesity-related conditions. If waist circumference is greater than 94-102 cm for men and 80-88 cm for women, it means they have excess abdominal fat, which puts them at greater risk of health problems, even if their BMI is about right [3, 4].
The waist circumference measurement divides people into two categories. Individuals with an android fat distribution (often called “apple” shape) carry most of their body fat intra-abdominally, distributed around their stomach and chest, which puts them at greater risk of developing obesity-related diseases. Individuals with a gynoid fat distribution (often called “pear” shape) carry most of their body fat around their hips, thighs and bottom, and are at greater risk of mechanical problems. Obese men are more likely to be “apples”, while women are more likely to be “pears”.
3. The dynamics of energy balance: the bottom line?
The fundamental principle of energy balance is:
Changes in energy (fat) stores = energy (calorie) intake – energy expenditure
Overweight and obesity are influenced by many factors including hereditary tendencies, environmental and behavioural factors, ageing and pregnancies. What is clear is that obesity is not always simply a result of overindulgence in highly palatable foods or of a lack of physical activity. Biological factors (hormones, genetics), stress, drugs and ageing also play a role. However, dietary factors and physical activity patterns strongly influence the energy balance equation and they are also the major modifiable factors. Indeed, high-fat, energy-dense diets [8, 9] and sedentary lifestyles [10, 11] are the two characteristics most strongly associated with the increased prevalence of obesity world-wide. Conversely, weight loss occurs when energy intake is less than energy expenditure over an extended period of time. A restricted calorie diet combined with increased physical activity is generally the advice proffered by dieticians for sustained weight loss.
Miracle or wonder diets that severely limit calories or restrict food groups should be avoided, as they are often lacking in important nutrients and/or cannot be sustained for prolonged periods. Besides, they do not teach correct eating habits and can result in yo-yo dieting (the gain and loss of weight in cycles resulting from dieting followed by over-eating). This so-called yo-yo dieting may be dangerous to long-term physical and mental health. Individuals should not be overambitious with their goal setting, as a loss of just 10% of initial weight will bring measurable health benefits.
4. What are the trends in obesity and overweight?
Evidence suggests that the prevalence of overweight and obesity is rising dramatically worldwide and that the problem appears to be increasing rapidly in children as well as in adults.
The most comprehensive data on the prevalence of obesity worldwide are those of the World Health Organisation MONICA project (MONItoring of trends and determinants in CArdiovascular diseases study). Together with information from national surveys, the data show that the prevalence of obesity in most European countries has increased by about 10-40% in the past 10 years, ranging from 10-20% in men and 10-25% in women. The most alarming increase has been observed in Great Britain, where nearly two thirds of adult men and over half of adult women are overweight or obese. Between 1995 and 2002, obesity doubled among boys in England from 2.9% of the population to 5.7%, and among girls increased from 4.9% to 7.8%. One in 5 boys and one in 4 girls is overweight or obese. Among young men, aged 16 to 24 years, obesity increased from 5.7% to 9.3% and among young women from 7.7% to 11.6%. The International Obesity Task Force monitors prevalence data (http://www.iotf.org).
5. What are the health consequences of obesity and overweight?
The health consequences of obesity and overweight are many and varied, ranging from an increased risk of premature death to several non-fatal but debilitating physical and psychological complaints that can have an adverse effect on quality of life.
The major health problems associated with obesity and overweight are:
- Type 2 diabetes
- Cardiovascular diseases and hypertension
- Respiratory diseases (sleep apnea syndrome)
- Some cancers
- Psychological problems
- Alteration of the quality of life
The degree of risk is influenced for example, by the relative amount of excess body weight, the location of the body fat, the extent of weight gain during adulthood and amount of physical activity. Most of these problems can be improved with relatively modest weight loss (10 to 15%), especially if physical activity is increased too.
5.1. Type 2 diabetes
Of all serious diseases, it is Type 2 diabetes (the type of diabetes which normally develops in adulthood and is associated with overweight) or non-insulin-dependent diabetes mellitus (NIDDM), which has the strongest association with obesity and overweight. Indeed, the risk of developing Type 2 diabetes rises with a BMI that is well below the cut-off point for obesity (BMI of 30). Women who are obese are more than 12 times more likely to develop Type 2 diabetes than women of healthy weight. The risk of Type 2 diabetes increases with BMI, especially in those with a family history of diabetes, and decreases with weight loss .
5.2. Cardiovascular disease and hypertension
Cardiovascular disease (CVD) includes coronary heart disease (CHD), stroke and peripheral vascular disease. These diseases account for a large proportion (up to one third) of deaths in men and women in most industrialised countries and their incidence is increasing in developing countries.
Obesity predisposes an individual to a number of cardiovascular risk factors, including hypertension and elevated blood cholesterol. In women, obesity is the third most powerful predictor of CVD after age and blood pressure . The risk of heart attack for an obese woman is about three times that of a lean woman of the same age.
Obese individuals are more likely to have elevated blood triglycerides (blood fats), low density lipoprotein (LDL) cholesterol ("bad cholesterol") and decreased high density lipoprotein (HDL) cholesterol (“good cholesterol”). This metabolic profile is most often seen in obese people with a high accumulation of intra-abdominal fat ("apples") and has consistently been related to an increased risk of CHD. With weight loss, the levels of triglycerides can be expected to improve. A 10 kg weight loss can produce a 15% decrease in LDL cholesterol levels and an 8% increase in HDL cholesterol .
The association between hypertension (high blood pressure) and obesity is well documented and the proportion of hypertension attributable to obesity has been estimated to be 30-65% in Western populations. In fact, blood pressure increases with BMI; for every 10 kg increase in weight, blood pressure rises by 2-3mm Hg. Conversely, weight loss induces a fall in blood pressure and typically, for each 1% reduction in body weight, blood pressure falls by 1-2mm Hg.
The prevalence of hypertension in overweight individuals is nearly three times higher than in non-overweight adults, and the risk of hypertension in overweight individuals aged 20-44 years is nearly six times greater than in non-overweight adults.
5.3. Cancer
Although the link between obesity and cancer is less well defined, several studies have found an association between overweight and the incidence of certain cancers, particularly of hormone-dependent and gastrointestinal cancers. Greater risks of breast, endometrial, ovarian and cervical cancers have been documented for obese women, and there is some evidence of increased risk of prostate and rectal cancer in men. The clearest association is with cancer of the colon, for which obesity increases the risk by nearly three times in both men and women.
5.4. Osteoarthritis
Degenerative diseases of the weight-bearing joints, such as the knee, are very common complications of obesity and overweight. Mechanical damage to joints resulting from excess weight is generally thought to be the cause. Pain in the lower back is also more common in obese people and may be one of the major contributors to obesity-related absenteeism from work.
5.5. Psychological aspects
Obesity is highly stigmatised in many European countries in terms of both perceived undesirable bodily appearance and of the character defects that it is supposed to indicate. Even children as young as six perceive obese children as “lazy, dirty, stupid, ugly, liars and cheats” .
Obese people have to contend with discrimination. A study of overweight young women in the USA showed that they earn significantly less than healthy women who are not overweight or than women with other chronic health problems .
Compulsive overeating also occurs with increased frequency among obese people and many people with this eating disorder have a long history of bingeing and weight fluctuations .
6. What is the economic cost of obesity and overweight?
International studies on the economic costs of obesity have shown that they account for between 2% and 7% of total health care costs, the level depending on the way the analysis is undertaken. In France, for example, the direct cost of obesity-related diseases (including the costs of personal health care, hospital care, physician services and drugs for diseases with a well established relationship with obesity) amounted to about 2% of total health care expenditure. In The Netherlands, the proportion of the country’s total general practitioner expenditure attributable to obesity and overweight is around 3–4%.
In England, the estimated annual financial cost of obesity is £0.5 billion in treatment costs to the National Health Service and the impact on the economy is estimated to be around £2 billion. The estimated human cost of obesity is 18 million sick days a year; 30 000 deaths a year, resulting in 40 000 lost years of working life and a shortened lifespan of nine years on average .
7. What groups are responsible for promoting healthy lifestyles?
Promoting healthy diets and increased levels of physical activity to control overweight and obesity must involve the active participation of many groups including governments, health professionals, the food industry, the media and consumers. Their shared responsibility is to help promote healthy diets that are low in fat, high in complex carbohydrates and which contain large amounts of fresh fruits and vegetables.
Greater emphasis on improved opportunities for physical activity is clearly needed, especially with increased urbanisation, the ageing of the population and the parallel increase in time devoted to sedentary pursuits.
- World Health Organisation, Physical status: the use and interpretation of anthropometry. Report of a WHO Expert Committee. WHO Technical Report Series, No 854, 1995.
- Han, T.S., et al., The influences of height and age on waist circumference as an index of adiposity in adults. International Journal of Obesity, 1997. 21: p. 83-89.
- Lean, M.E.J., T.S. Han, and C.E. Morrison, Waist circumference as a measure for indicating the need for weight management. British Medical Journal, 1995. 311: p. 158-161.
- Lean, M.E.J., T.S. Han, and J.C. Seidell, Impairment of health and quality of life in people with large waist circumference. Lancet, 1998. 351: p. 853-856.
- Lemieux, S., et al., Sex differences in the relation of visceral adipose tissue accumulation to total body fatness. American Journal of Clinical Nutrition, 1993. 58: p. 463-467.
- Martinez, J.A., Body-weight regulation: causes of obesity. Proceedings of the Nutrition Society, 2000. 59(3): p. 337-345.
- Astrup, A., et al., Low fat diets and energy balance: how does the evidence stand in 2002? Proceedings of the Nutrition Society, 2002. 61(2): p. 299-309.
- Stubbs, R.J., et al., Covert manipulation of dietary fat and energy density: effect on substrate flux and food intake in men eating ad libitum. American Journal of Clinical Nutrition, 1995. 62: p. 316-329.
- Bell, E.A., et al., Energy density of foods affects energy intake in normal weight women. American Journal of Clinical Nutrition, 1998. 67: p. 412-420.
- DiPietro, L., Physical activity in the prevention of obesity: current evidence and research issues. Medicine and Science in Sports and Exercise, 1999. 31: p. S542-546.
- Fogelholm, M., N. Kukkonen, and K. Harjula, Does physical activity prevent weight gain: a systematic review. Obesity Reviews, 2000. 1: p. 95-111.
- American College of Sports Medicine, Appropriate intervention strategies for weight loss and prevention of weight regain for adults. Medicine and Science in Sports and Exercise, 2001. 33: p. 2145-2156.
- Glenny, A., et al., A systematic review of the interventions for the treatment of obesity, and the maintenance of weight loss. International Journal of Obesity and Related Disorders, 1997. 21: p. 715-737.
- WHO MONICA Project, Risk factors. International Journal of Epidemiology, 1989. 18 (Suppl 1): p. S46-S55.
- World Health Organisation, Obesity: preventing and managing the global epidemic. WHO Technical Report Series 894. 2000: Geneva.
- Ruston, D., et al., National Diet and Nutrition Survey: adults aged 19 to 64 years. Volume 4, Nutritional status (anthropometry and blood analytes), blood pressure and physical activity. 2004, TSO: London.
- Sproston, K. and P. Primatesta, Health Survey for England 2002. Volume 1, The health of children and young people. 2003, The Stationery Office: London.
- Lean, M.E.J., Pathophysiology of obesity. Proceedings of the Nutrition Society, 2000. 59(3): p. 331-336.
- Parillo, M. and G. Riccardi, Diet composition and the risk of Type 2 diabetes: epidemiological and clinical evidence. British Journal of Nutrition, 2004. In press.
- Hubert, H.B., et al., Obesity as an independent risk factor for cardiovascular disease: a 26-year follow-up of participants in the Framingham Heart Study. Circulation, 1983. 67: p. 968-977.
- Dattilo, A.M. and P.M. Kris-Etherton, Effects of weight reduction on blood lipids and lipoproteins: a meta analysis. American Journal of Clinical Nutrition, 1992. 56: p. 320-328.
- Seidell, J.C., et al., Overweight and chronic illness - a retrospective cohort study, with follow-up of 6-17 years, in men and women initially 20-50 years of age. Journal of Chronic Diseases, 1986. 39: p. 585-593.
- Wadden, T.A. and A.J. Stunkard, Social and psychological consequences of obesity. Annals of Internal Medicine, 1985. 103: p. 1062-1067.
- Gortmaker, S.L., et al., Social and economic consequences of overweight in adolescence and young adulthood. New England Journal of Medicine, 1993. 329: p. 1008-1012.
- Spitzer, R.L., et al., Binge eating disorder: a multisite field trial of the diagnostic criteria. International Journal of Eating Disorders, 1992. 11: p. 191-203.
- Levy, E., et al., The economic costs of obesity: the French situation. International Journal of Obesity, 1995. 19: p. 788-792.
- Seidell, J.C. and I. Deerenberg, Obesity in Europe - prevalence and consequences for the use of medical care. PharmacoEconomics, 1994. 5: p. 38-44.
- National Audit Office, Tackling Obesity in England. 2001, The Stationery Office: London.
All substances are made from tiny particles called atoms. Atoms are the smallest particles that can exist. When atoms join together, they form molecules.
Elements are made from only one type of atom. There are just over 100 known elements, some of which have names ending in the letters "-ium", e.g. potassium, magnesium and aluminium. All the elements are listed in the "Periodic Table".
Metals and non-metals: The elements in the Periodic Table are classified as either metals or non-metals. Both of these groups have certain properties that distinguish them. The table below illustrates the differences between them.
| 88 metals | 21 non-metals |
| --- | --- |
| Conduct electricity | Poor conductors of electricity |
| Conduct heat | Do not conduct heat well |
| High melting points | Low melting points |
| High densities | Low densities |
| Metals make alloys | |
| Sometimes magnetic | Never magnetic |
| Are basic | React with oxygen |
| Make oxides when reacting with oxygen | Non-metal oxides are acidic |
Mixtures are composed of more than one type of atom that have just been physically mixed together but not chemically combined. Mixtures can be separated back into their elements by fairly simple methods.
Air is a mixture of many different gases, in roughly the following proportions (a quick check that the figures add up to 100% follows the list):
- Nitrogen - 78%
- Oxygen - 21%
- Water Vapour - 0.45%
- Carbon Dioxide - 0.05%
- Noble or Inert gases (mainly Argon, but also Neon, Krypton, Xenon and Helium) - 0.5%
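As a quick arithmetic check, the figures quoted above should account for essentially all of the air. The short Python sketch below is purely illustrative (the dictionary and variable names are ours, not part of the original notes); it simply adds up the listed percentages.

```python
# Quick check: do the percentages listed above add up to 100%?
air_composition = {
    "Nitrogen": 78.0,
    "Oxygen": 21.0,
    "Water vapour": 0.45,
    "Carbon dioxide": 0.05,
    "Noble gases (mainly argon)": 0.5,
}

total = sum(air_composition.values())
print(f"Total: {round(total, 2)}%")  # prints "Total: 100.0%"
```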
When one substance dissolves in another, the part of the mixture that has been dissolved is called the solute and the part that does the dissolving is the solvent. If a substance is able to dissolve in another, it is described as being soluble, while if it cannot dissolve it is insoluble.
Suspensions are mixtures where the solid does not dissolve in the liquid but instead floats around, in "suspension", e.g. muddy water.
Compounds are made from more than one type of atom that have chemically reacted together in a fixed ratio. This means that one molecule has a set number of each atom joined together, e.g. water (formula H2O) has a ratio of 2 hydrogen atoms to every oxygen atom.
The endings of some names help us to know what elements compounds are made from.
...ate means that the compound contains oxygen, so copper carbonate contains copper, carbon and oxygen.
...ide means that the compound contains only the elements mentioned in the chemical's name. Copper oxide therefore contains just copper and oxygen.
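Because these naming rules can be applied almost mechanically, they are easy to illustrate. The following Python sketch is purely illustrative (the helper function and the list of example compounds are ours, not part of the original notes); it applies the "...ate" / "...ide" endings described above and shows the fixed 2 : 1 hydrogen-to-oxygen ratio in water.

```python
# Illustrative only: reading a compound's name ending, as described above.
def what_the_ending_tells_us(compound_name: str) -> str:
    if compound_name.endswith("ate"):
        return "contains the named elements plus oxygen"
    if compound_name.endswith("ide"):
        return "contains only the elements mentioned in the name"
    return "the ending gives no clue"

for name in ["copper carbonate", "copper oxide", "sodium chloride"]:
    print(f"{name}: {what_the_ending_tells_us(name)}")

# Fixed ratio: water (H2O) always has 2 hydrogen atoms for every 1 oxygen atom.
water = {"H": 2, "O": 1}
print(f"Hydrogen : oxygen ratio in water = {water['H']} : {water['O']}")
```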
It is usually difficult to separate the elements contained in a compound. The properties of a compound are different from the properties of the elements it is made from.
Second-graders at University School have started the year off by observing ant communities as part of a year-long exploration of living and non-living organisms. Second-grade teachers set up the ant habitats in their classrooms, and students have observed them and discussed how they relate to our community.
One ant habitat is made of a clear gel, which contains the necessary nutrients and water in the medium and provides for transparent viewing of the ants’ activities. The other contains sand, which requires students to add water and nutrients.
When the habitats are ready, the students chill their ants in the refrigerator for 10 minutes to slow down the ants’ activity, and then pour them into the prepared habitat. The ants are then free to design their habitat. The second-grade scientists study the ants’ behavior, ask questions, design experiments, make observations, and draw conclusions.
“The students find it interesting to observe the ants’ tunnel designs, the creation of waste piles of sediments, underground chambers (i.e., graveyard room as they often call it), and eating habits,” said Andrew Stone, Lower School science integration teacher.
The students took a time-lapse video of the ants in order to discover what they did in their habitat when school was closed. "We are fortunate to have the instructional technology resources available to make these types of observations," said Stone.
What is a ‘curriculum’?
A curriculum is a set of guidelines that has been established to help educators decide on the content of a course of study – what children will learn. It is the curriculum that sets out the lesson objectives, the content – skills and knowledge – and the methods that will be used to teach. It therefore prescribes not only what should be taught and how it should be taught, but also why it should be taught.
Our school curriculum includes the ‘national curriculum’, as well as religious education. The national curriculum is a set of subjects and standards used by primary and secondary schools so children learn the same things. It covers what subjects are taught and the standards children should reach in each subject.
The ‘Brackenbury Curriculum’ details how we as a school implement and teach the subjects within the National Curriculum and the methods we use to meet the needs of the children at our school.
The intention of the curriculum at Brackenbury is to provide a broad, rich and varied education that develops children’s life experiences and interests and prepares them for the next stage of their education. Brackenbury pupils experience a unique curriculum that is underpinned by key knowledge and skills that impact positively on their performance in all subject areas.
Using a range of teaching styles and experiences we aim to foster in the children a love of learning and a passion for enquiry. Our aim is to provide children with a safe, yet challenging, environment in which they can take risks and make mistakes as well as developing their resilience and ability to be self-reflective as they do so.
We put emphasis on the experiential nature of learning, the ‘hands-on’ approach where children can learn by doing. Our active and engaging lessons build on and strengthen children’s knowledge and understanding. We make use of the local area, both within the school grounds and beyond, to support the acquisition of skills and knowledge in a way that is meaningful for the children.
We celebrate achievement in all areas and aim to have happy children who enjoy coming to school. We encourage self-expression and creativity while promoting respect for each other within the diverse community we live in. This broad and balanced curriculum is customised to meet the local needs of our learners.
Teachers are provided with PPA (Planning, Preparation and Assessment time) to plan their curriculum for the whole term and on a weekly basis with their parallel teacher. As part of this planning process, teachers need to plan the following:
A cycle of lessons for each subject, which carefully plans for progression and depth;
Challenge questions for pupils to apply their learning to a range of contexts;
Ensure that lessons and learning are targeted at the correct level for the ability of all pupils, including those with SEND and the most able;
Adapt future learning based on outcomes and assessments of completed work to ensure that learning is reinforced and extended over the sequence of lessons;
Trips and visiting experts that will enhance the learning experience;
A curriculum information leaflet for parents and carers so learners can be supported at home.
The core subjects of Maths and English are taught daily and all other subjects are taught throughout the week. Our curriculum is planned in a cross-curricular way and individual subjects are where possible linked to a topic or theme over a half term. We plan the curriculum to ensure that all children receive a broad range of lessons and subjects each week in every year group.
All subjects have a subject leader(s). These teachers champion their subject across the school and ensure that standards are met. They monitor the outcomes for children and work alongside their colleagues to further enhance and develop the provision offered in their subject. Time and training for subject leadership is allocated to all teachers each term.
We measure the impact of our curriculum through the following methods:
A reflection on standards achieved against the planned outcomes;
Review pupil response to the questions and learning provided;
Pupil discussions and conferences about their learning;
Book scrutinies of pupils’ learning across the school demonstrating the depth of understanding, progression and challenge;
The tracking of standards across the curriculum.
For further information on our curriculum design and the adaptions we have made to suit our pupils please click here.
At Brackenbury, we celebrate reading across all years and see it as one of the main foundations in a child’s learning. We believe that it helps children to understand the ever-changing world around them, as well as develop their social and emotional skills both at home and at school. In addition to this, we see that both fluency and enjoyment in reading are an integral part of a child’s academic progress, and therefore we teach daily guided reading sessions through quality texts to support this.
We put a high level of thought into the range of texts our children read, both within guided reading sessions and as independent readers, as our school encourages the use of a wide range of exciting and interesting vocabulary to develop our children’s understanding and communication skills. Our well-stocked library is a place that all classes visit once a week to share the books they have read with the rest of the class, hear stories read aloud to them and choose new books to take home and enjoy.
Children who are not yet ‘free readers’ will work through our school reading scheme – these are levelled books which match the children’s current attainment. For children in Key Stage 1, we expect families at home to read these books with their child each evening and make comments in their child’s reading record. In Key Stage 2, we celebrate our reading in a different way, by recommending the books we read on a class reading tree. This is a daily routine that encourages children to evaluate the books they read and to read a wider range of texts recommended by their peers.
Children are read to each day by their class teacher, so that every child can feel part of a community of reading. By the time children leave Brackenbury, they are competent readers who can recommend books to their peers, are motivated to read a range of genres including poetry and participate in discussions about books including evaluating an author’s use of language and the impact this can have on the reader.
Phonics / Reading schemes
Phonics is taught daily throughout Reception and Year 1 to develop phonological awareness, early reading and speaking and listening skills. We have incorporated materials from the ‘Read, Write, Inc’ scheme in line with the government’s ‘Letters and Sounds’ guidance to provide high quality teaching of these skills. A wide range of materials are used throughout the school to develop a love of reading and children are encouraged to read a range of genres.
Reading at home
Please find some useful documents for you to use in order to deepen your child’s understanding of the texts they read, and develop their reading further.
At Brackenbury we strive to create a love for reading and writing. Each English unit of work that we do is anchored by a text chosen to inspire the children and provide an excellent stimulus for their own writing, be it Julia Donaldson stories in Key Stage 1 or Shakespeare in Upper Key Stage 2, as well as utilising a range of non-fiction texts. This ensures that children develop their skills through a whole range of different text types and are exposed to many different examples of writing. New and ambitious vocabulary is introduced regularly, both to children new to English and to those who already have an excellent vocabulary, to challenge and inspire.
We want every child to leave Brackenbury with the skills of an excellent writer who:
- Has the ability to write with fluency, thinking about the impact their writing will have on the reader
- Has an extensive, sophisticated vocabulary which they use to add and extend details and description.
- Is able to use organisational devices to structure their work within a range of text types.
- Can use a variety of sentence structures, varying the ways that they start and extend their sentences.
- Has an understanding of grammar terminology and is able to apply this to their own writing.
- Ensures that their writing is well presented, punctuated, spelled correctly and neat.
- Re-reads, edits and improves their writing so every piece of work they produce is to the best of their ability.
- Has a passion for writing.
At Brackenbury, children apply the skills they have acquired in English to writing in all subjects across the curriculum. We aim to foster a love for writing and celebrate examples of this through special displays and naming a ‘Writer of the Week’.
If you would like to know more, please visit National Curriculum for English
At Brackenbury Primary School, our aim is to deepen and strengthen children’s understanding of maths and not just accelerate learning. As a result of this, the children are able to develop fluency before moving onto reasoning and problem solving. The teaching is richly supported by the use of pictorial and concrete resources, before moving to the abstract. All progress that our pupils make in maths is valuable, and therefore offering opportunities for all pupils to deliberately practise their fluency is vitally important and this is a key feature of our lessons.
During their time at Brackenbury, our pupils will learn the following essential skills of mathematics:
An understanding of the important concepts and an ability to make connections within mathematics;
A broad range of skills in using and applying mathematics;
Fluent knowledge and recall of number facts and the number system;
The ability to show initiative in solving problems in a wide range of contexts, including the new or unusual;
The ability to think independently and to persevere when faced with challenges;
To embrace the value of learning from mistakes and false starts;
A wide range of mathematical vocabulary;
A commitment to, and enthusiasm for, the subject.
Each year group will study a variety of mathematical topics throughout the year such as:
- Addition and subtraction
- Multiplication and division
- Properties of shape
- Position, direction and movement
Once children have a grasp of the concepts within a topic, they will have opportunities to apply their learning to reasoning and problem-solving questions and activities. By celebrating learning, and through engaging challenges and weekly awards, we inspire our pupils to increase their fluency in maths and to become increasingly sophisticated problem solvers, both in maths and across the curriculum.
Art gives children at Brackenbury the opportunity to be creative and develop their skills using a range of media and materials. Children learn the skills of drawing, painting, printing, collage, textiles, 3D work and digital art. They are given the freedom to explore and evaluate different creative ideas.
The skills taught in art lessons are also applied to other subjects, allowing children to reflect on and explore topics in greater depth. Examples of this can be seen across the school, from using pastels to draw a setting to help create a bank of descriptive language of a magical scene in ‘A Midsummer Night’s Dream’ in English to painting propaganda posters to gain deeper understanding of World War Two in history. Additionally, many areas of art link with mathematical concepts in shape and space, for example: repeating patterns and using 3D shapes to support structures.
At Brackenbury, we take inspiration from a range of classical and modern artists as well as from each other in class. Time is given to discuss techniques used by others so that we can apply them for our own artwork.
In art, children are expected to be reflective and evaluate their learning, thinking about how they can make changes constructively and always building upon the marks they are making to create greater depth. Children are encouraged to take risks and experiment and then reflect on why some ideas and techniques might be more successful than others.
At Brackenbury, we believe that History is a journey that inspires children not only to know the facts and dates of the past but to become detectives through practical activities which allow them to explore the past in an exciting way. Children are encouraged to investigate and interpret the past, understand chronology, and build an overview of the history of Britain as well as that of the wider world. We help children to understand how people lived in the past and compare this to modern life and their own experiences. The handling of real artefacts, workshops and visits to historical places provide children with the opportunity to explore history in a unique way.
We enable children to communicate historically whilst building a love and curiosity through thinking and acting as historians. Children benefit from learning through a wide range of high-quality activities where they engage in debates, discussions and research. This enquiry-based approach allows children to formulate historical questions about the past by examining, organising and explaining events that have happened in a creative way. This particularly allows them to think critically about how and why history is viewed.
At Brackenbury, our Geography curriculum aims to ensure that all pupils develop contextual knowledge of the location of significant places, including their physical and human characteristics, and understand the human and physical features of the world and how they change over time.
Geography enables children to develop an understanding of the world around them. As a subject, it lends itself to other curriculum areas. We ensure that our geography lessons are linked to the overall topics. For example, in Year 6 volcanoes and earthquakes are taught in line with the topic ‘Dangerous Disasters’.
In EYFS, geography is covered through Knowledge and Understanding of the World. Finding out about the world around them is something children do most effectively through hands-on experiences. This is delivered daily through continuous provision.
In Key Stage 1 and 2 units are planned and delivered every term and are based on the National Curriculum Programme of Study.
We also aim to make sure that all children are competent in the following geographical skills:
- Collecting, analysing and communicating data
- Interpreting and using a range of geographical resources
- Communicating information in a variety of ways
Alongside our rich curriculum, children are given the chance to develop their geographical skills and take part in experiential learning through assemblies, trips (e.g. Natural History Museum) and workshops throughout the year.
At Brackenbury we believe that developing familiarity and confidence with technology is essential to enable our learners to succeed in a world where the pace of technological advancement continues to increase rapidly. In order to do this, the concept of ‘Computational thinking’ is at the core of our computing curriculum. Children are taught the key skills of:
decomposition: breaking down a problem into manageable parts;
pattern recognition: looking for similarities;
abstraction: focusing on the important information only;
algorithms: developing a step-by-step solution to the problem.
These skills are taught using a wide variety of methods, including robots, iPads and different coding languages.
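As a purely illustrative example (the problem, data and function names below are invented for this sketch and are not part of the school's scheme of work), here is how those four skills might look when applied to a small problem in a language such as Python.

```python
# Purely illustrative: the four computational-thinking skills applied to a toy problem.
# Problem (invented for this sketch): report the average score and the top scorer in a class.

# Abstraction: keep only the information that matters (names and scores), ignore the rest.
scores = {"Amina": 7, "Ben": 9, "Chloe": 8}

# Decomposition: break the problem into two smaller sub-problems.
def average(values):
    return sum(values) / len(values)

def top_scorer(score_table):
    # Pattern recognition: like average(), this scans the collection once looking for a result.
    return max(score_table, key=score_table.get)

# Algorithm: a step-by-step solution built from the pieces above.
print("Average score:", average(scores.values()))  # 8.0
print("Top scorer:", top_scorer(scores))            # Ben
```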
We teach a curriculum that enables children to become effective users of technology who can:
- Understand and apply the essential principles and concepts of Computer Science, Digital Literacy and Information Technology.
- Have repeated practical experience of writing computer programs in order to solve problems in a wide variety of contexts.
- Gather and critically evaluate information using technology to further their understanding.
- Communicate and synthesise ideas effectively through a cross-curricular approach to information technology.
At Brackenbury, keeping children safe online is part of our commitment to their health and well-being. We have an Internet Policy that provides guidance for teachers and children about how to use the internet safely. Online safety forms a key component of our computing curriculum in every year group and children are made aware of the responsibility they have to be good online citizens.
At Brackenbury we believe that Science generates curiosity and excitement about the world and above all respect for the environment and all living things.
Science teaching is at its best when children are engaged in scientific reasoning, have their curiosity and enjoyment sparked through exploration, and are beginning to use scientific vocabulary in new situations. It is important to us that lessons are not confined to the classroom but are also taken outside (the playground) and offsite (the park or museums).
Children at Brackenbury apply scientific research skills to discover and identify patterns so that they can use evidence to answer questions. As children progress, they begin to make links to the broader curriculum and their own life experiences, as well as developing the ability to lead their own learning. They do this by planning and carrying out investigations to help answer their own questions.
We have been working with the Ogden Trust partnership which is a collaboration between the Trust and a number of different schools within the borough. The trust has provided us with resources linked to different physics topics and professional development for staff members to enhance their teaching of Physics within the Science Primary curriculum.
By linking DT to our half-termly topics, each class takes part in a creative and imaginative design and technology curriculum. In three projects across the course of the school year, each child has the chance to discover and develop their technical, practical and creative skills, whilst solving problems and creating models linked to their cross-curricular topic. Year-on-year, the children follow the same structure of investigating, designing, making and evaluating. During this process, they are able to build on their skills from the previous year through a range of projects, learning to build and develop functional structures using a variety of materials and tools.
Throughout their time at Brackenbury, children explore many techniques such as constructing, stitching and cooking, all the while linking into other subjects such as Mathematics, Science, Computing and Art. This cross-curricular approach not only allows children to apply their knowledge and understanding of DT to other school subjects, but also provides an excellent base for later life as they learn key skills such as the importance of healthy eating, the basics of engineering and problem solving.
We believe that it is vital for all our pupils to learn from and about religion, so that they can understand the world around them. Through religious education, pupils develop their knowledge of the world faiths, and their understanding and awareness of the beliefs, values and traditions of other individuals, societies, communities and cultures as well as tolerance and respect for each other. We encourage our pupils to ask questions about the world and to reflect on their own beliefs, values and experiences.
We use the London Borough of Hammersmith and Fulham’s Religious Education Agreed Syllabus (AS) which has been modified and adapted for use in H&F schools.
Hammersmith and Fulham is an increasingly diverse Borough. The Agreed Syllabus was written by experts who brought together representatives of the major world faiths and humanism to adopt a syllabus to be used as a basis for teaching children and young people of all ages and abilities.
For further information please visit the RE Syllabus
PSHCE underpins areas across the curriculum and many extra-curricular activities and opportunities here at Brackenbury Primary School.
Our PSHCE curriculum has been shaped by the PSHE Association’s programme of study and focuses on three central themes:
- Living in the wider world
- Health and wellbeing
- Relationships
In Key Stages 1 and 2, PSHCE lessons and Circle Time sessions are delivered on a biweekly basis and in our Early Years Foundation Stage, the personal, social and emotional development of pupils is a daily focus. All children also have the opportunity to engage with a number of extra-curricular opportunities that focus on PSHCE including special assemblies, workshops, School Council, and more.
Additionally, our school engages with a number of awards including the Healthy Schools Award, TfL’s STARS accreditation travel scheme and the Rights Respecting Schools Award. These help to develop our pupils’ understanding of themselves and their community, strengthen their relationships with peers and increase their participation and voice.
Physical Education (PE) at Brackenbury Primary School is a valued and integral part of the school curriculum. Vital skills such as communication and team work are significantly enhanced through positive and successful participation in PE. Every child should feel that PE is a chance for them to shine in a non-academic subject, but with no lesser value placed upon their success. PE also serves as a vital tool in developing children’s language skills, as children are able to link vocabulary to movements and actions.
All children have the opportunity to experience a wide range of sporting experiences at Brackenbury, in both a competitive and fun way, either in school or at after school clubs. Provision is made so that all children are able to participate actively in PE and at a level that is appropriate to their ability. PE is taught in a progressive and developmental way, building on prior knowledge and skills in order to provide a positive learning experience.
The school makes use of the Sports Premium funding to train staff and develop the provision for PE. Further details of what we use this funding for can be found on the Sport Premium page of our website.
The aims of our PE curriculum are to develop pupils who:
- Are willing to practise skills in a range of different activities and situations, alone, in small groups and in teams, and to apply these skills in chosen activities to achieve exceptionally high levels of performance;
- Have and maintain high levels of physical fitness;
- Lead a healthy lifestyle which is achieved by eating sensibly and exercising regularly;
- Are able to remain physically active for sustained periods of time and have an understanding of the importance of this in promoting long-term health and well-being;
- Take the initiative and become excellent young leaders, organising and officiating, and evaluating what needs to be done to improve, and motivating and instilling excellent sporting attitudes in others;
- Employ imagination and creativity in their techniques, tactics and choreography;
- Are able to improve their own and others’ performance;
- Can work independently for extended periods of time without the need for guidance or support;
- Have a keen interest in PE – a willingness to participate eagerly in every lesson, highly positive attitudes and the ability to make informed choices about engaging fully in extracurricular sport;
- Can swim at least 25 metres before the end of Year 6 and know how to remain safe in and around water.
“Music has a power of forming the character and should therefore be introduced into the education of the young.” Aristotle
Music is an essential part of life at Brackenbury. Singing is the core of our musical learning and provides the means to express, create, convey and inspire our children. We develop the value of teamwork as well as pupils’ increasing responsibility for their own musical outcomes. Our pupils are confident and enthusiastic performers who seize every opportunity they are given and thrive on it.
Our vision is to enable children from all backgrounds to have the opportunity to access music education, to make music with others, to learn to sing and to have the opportunity to progress within music and develop their creative talents. We believe the value of music as a subject resides in its contribution to enjoyment, self-esteem, self-confidence and enrichment for those who engage in music seriously as well as for fun. High quality music education enables lifelong participation in music, as well as underpinning excellence and professionalism for those who choose to pursue a career in music.
Music teaching at Brackenbury starts in the Early Years (Reception) and extends across all 6 primary years. It is led by a specialist teacher and musician (specialised in Singing and Choral Singing). Instrumental tuition is available and is provided by teachers from Pelican Music.
At Brackenbury, we celebrate our musical talents in many ways and occasions. Examples of these are our annual Christmas concert, our Arts Week performance – ‘Proms in the Playground’ (Summer term), our class assemblies, Year 6 leavers’ show (always a fun musical!), and several occasions throughout the year for instrumental players to showcase their progress in front of an audience. We also have a choir who rehearse weekly and are given chances to perform each term. We sing in class as part of our lessons as well as in our weekly singing assemblies.
We offer a curriculum which focuses on the following:
- To develop singing, improvisation and composing skills by learning and writing songs and using tuned and un-tuned percussion instruments as accompaniment as well as our Charanga tool.
- To listen to and appraise a wide range of high quality recordings from different backgrounds, traditions, genres and periods. To understand how this contributes to the diversity of musical styles.
- The ability to express opinions and give verbal and written explanations using musical terminology accurately and appropriately.
- The opportunity to put ideas into action, take risks, “have a go” and to perform in front of others, individually, in groups or as a whole class.
- An understanding that music learning takes time and practice. To be patient with one’s own learning and take small steps at a time in order to master our instrumental and vocal skills.
For a greater understanding of our vision and how Music education can have a significant impact on children, we recommend this video
Learning a foreign language is a liberation from insularity and provides an opening to other cultures. A high-quality languages education should foster pupils’ curiosity and deepen their understanding of the world. At Brackenbury we aim to help pupils to express their ideas and thoughts in French and to understand and respond to its speakers, both in speech and in writing. We provide opportunities for the children to communicate for practical purposes, to learn new ways of thinking and read literature in the original language. We hope to foster an interest in learning languages which will continue into Secondary School and beyond.
Children in Key Stage 1 (KS1) and Key Stage 2 (KS2) have a dedicated weekly French session, where lessons are delivered by a specialist teacher.
KS1: Lessons consist of 20-30 minutes of oral French once a week. Lessons are practical and include songs, games and the use of puppets. The topics covered are: greetings, numbers, colours, classroom objects, animals and food. The scheme of work has been written by the Modern Foreign Language (MFL) teacher.
KS2: Lessons consist of 45-60 minutes once a week. A mixture of oral, written, listening and reading skills are used. Lessons are a combination of practical and written activities. They consist of the MFL teacher’s planning, supported by the Rigolo KS2 schemes of work and resources. The children have a French exercise book and vocabulary book which they will carry with them up through the Key Stage.
Here at Brackenbury, we acknowledge the statement that “Every child deserves the best possible start in life and the support that enables them to fulfil their potential. Children develop quickly in the early years and a child’s experiences between birth and age five have a major impact on their future life chances. A secure, safe and happy childhood is important in its own right. Good parenting and high quality early learning together provide the foundation children need to make the most of their abilities and talents as they grow up.” (DfE 2012)
We believe it is essential to create an environment of emotional warmth, with consistent praise and encouragement, so that each child feels individually valued, motivated and confident to meet new challenges and reach our high expectations with a sense of achievement.
We use Development Matters guidance in the EYFS as we want all of our children to be successful learners, to be confident individuals and to become responsible citizens.
To promote the social, emotional, physical, spiritual and intellectual development of each child.
To provide a stimulating and safe environment for learning where children can engage in first hand experiences.
To support and extend children’s learning through purposeful observation, evaluation and interaction.
We believe these overarching principles shape practice and aim at improving outcomes. They reflect that it is every child’s right to grow up safe, healthy, enjoying and achieving, making a positive contribution.
We greatly value the important role that the Early Years Foundation Stage plays in laying secure foundations for future learning and development. Play underpins the delivery of the EYFS Curriculum. We use the document “Development Matters in the Early Years Foundation Stage” to inform planning in the Nursery and Reception classes. Our curriculum for the EYFS reflects the areas of learning identified in the Early Learning Goals from the Early Years Foundation Stage Profile Handbook (Standards and Testing Agency, 2014). Our pupils’ learning experiences enable them to develop competency and skill across all the learning areas. As well as the Early Learning Goals, we support the Characteristics of Effective Learning – Playing and Exploring, Active Learning, Creating and Thinking Critically – which enable the child to be an effective and motivated learner. We aim to create an attractive, welcoming and stimulating learning environment that encourages children to explore, investigate and learn through first-hand experiences. Activities are planned for both inside and outside learning and continuous provision in the EYFS includes water, tactile, sand and creative areas, ICT, math area, drawing, mark making, writing areas, reading, and role-play areas.
For more information please visit EYFS curriculum
A region of turbulent plasma between a star's core and its visible photosphere at the surface, through which energy is transferred by convection. In the convection zone, hot plasma rises, cools as it nears the surface, and falls to be heated and rise again.
What Is The Difference Between Snow Flurries And Snow Showers?
Snow refers to the partially frozen water vapor which falls in flakes. The expression snow flurries refers to light, intermittent snowfall without significant accumulation. Snow flurries tend to come from stratiform clouds. Snow showers is the label used to refer to a short period of light-to-moderate snowfall, also characterized by a sudden beginning and ending. There is some accumulation with snow showers, and they fall from convective or cumuliform clouds. A snow …
Language may be our most powerful tool. We use it to understand our world through listening and reading, and to communicate our own feelings, needs and desires through speaking and writing. With strong language skills, we have a much better chance of understanding and being understood, and of getting what we want and need from those around us.
There are many ways to label or classify language as we learn to better control it: by levels, such as formal, informal, colloquial or slang; by tones, such as stiff, pompous, conversational, friendly, direct, impersonal; even by functions, such as noun, verb, adjective. I want to introduce you to a powerful way of classifying language: by levels of abstraction or concreteness or generality or specificity (any one of those four terms really implies the others).
Approaching language in these terms is valuable because it helps us recognize what kinds of language are more likely to be understood and what kinds are more likely to be misunderstood. The more abstract or general your language is, the more unclear and boring it will be. The more concrete and specific your language is, the more clear and vivid it will be.
Let's look at these different types of language.
Abstract and Concrete Terms
Abstract terms refer to ideas or concepts; they have no physical referents.
[Stop right here and reread that definition. Many readers will find it both vague and boring. Even if you find it interesting, it may be hard to pin down the meaning. To make the meaning of this abstract language clearer, we need some examples.]
Examples of abstract terms include love, success, freedom, good, moral, democracy, and any -ism (chauvinism, Communism, feminism, racism, sexism). These terms are fairly common and familiar, and because we recognize them we may imagine that we understand them, but we really can't, because the meanings won't stay still.
Take love as an example. You've heard and used that word since you were three or four years old. Does it mean to you now what it meant to you when you were five? when you were ten? when you were fourteen (!)? I'm sure you'll share my certainty that the word changes meaning when we marry, when we divorce, when we have children, when we look back at lost parents or spouses or children. The word stays the same, but the meaning keeps changing.
If I say, "love is good," you'll probably assume that you understand, and be inclined to agree with me. You may change your mind, though, if you realize I mean that "prostitution should be legalized" [heck, love is good!].
How about freedom? The word is familiar enough, but when I say, "I want freedom," what am I talking about? divorce? self-employment? summer vacation? paid-off debts? my own car? looser pants? The meaning of freedom won't stay still. Look back at the other examples I gave you, and you'll see the same sorts of problems.
Does this mean we shouldn't use abstract terms? No, we need abstract terms. We need to talk about ideas and concepts, and we need terms that represent them. But we must understand how imprecise their meanings are, how easily they can be differently understood, and how tiring and boring long chains of abstract terms can be. Abstract terms are useful and necessary when we want to name ideas (as we do in thesis statements and some paragraph topic sentences), but they're not likely to make points clear or interesting by themselves.
Concrete terms refer to objects or events that are available to the senses. [This is directly opposite to abstract terms, which name things that are not available to the senses.] Examples of concrete terms include spoon, table, velvet eye patch, nose ring, sinus mask, green, hot, walking. Because these terms refer to objects or events we can see or hear or feel or taste or smell, their meanings are pretty stable. If you ask me what I mean by the word spoon, I can pick up a spoon and show it to you. [I can't pick up a freedom and show it to you, or point to a small democracy crawling along a window sill. I can measure sand and oxygen by weight and volume, but I can't collect a pound of responsibility or a liter of moral outrage.]
While abstract terms like love change meaning with time and circumstances, concrete terms like spoon stay pretty much the same. Spoon and hot and puppy mean pretty much the same to you now as they did when you were four.
You may think you understand and agree with me when I say, "We all want success." But surely we don't all want the same things. Success means different things to each of us, and you can't be sure of what I mean by that abstract term. On the other hand, if I say "I want a gold Rolex on my wrist and a Mercedes in my driveway," you know exactly what I mean (and you know whether you want the same things or different things). Can you see that concrete terms are clearer and more interesting than abstract terms?
If you were a politician, you might prefer abstract terms to concrete terms. "We'll direct all our considerable resources to satisfying the needs of our constituents" sounds much better than "I'll spend $10 million of your taxes on a new highway that will help my biggest campaign contributor." But your goal as a writer is not to hide your real meanings, but to make them clear, so you'll work to use fewer abstract terms and more concrete terms.
General and Specific Terms
General terms and specific terms are not opposites, as abstract and concrete terms are; instead, they are the different ends of a range of terms. General terms refer to groups; specific terms refer to individuals, but there's room in between. Let's look at an example.
Furniture is a general term; it includes within it many different items. If I ask you to form an image of furniture, it won't be easy to do. Do you see a department store display room? a dining room? an office? Even if you can produce a distinct image in your mind, how likely is it that another reader will form a very similar image? Furniture is a concrete term (it refers to something we can see and feel), but its meaning is still hard to pin down, because the group is so large. Do you have positive or negative feelings toward furniture? Again, it's hard to develop much of a response, because the group represented by this general term is just too large.
We can make the group smaller with the less general term, chair. This is still pretty general (that is, it still refers to a group rather than an individual), but it's easier to picture a chair than it is to picture furniture.
Shift next to rocking chair. Now the image is getting clearer, and it's easier to form an attitude toward the thing. The images we form are likely to be fairly similar, and we're all likely to have some similar associations (comfort, relaxation, calm), so this less general or more specific term communicates more clearly than the more general or less specific terms before it.
We can become more and more specific. It can be a La-Z-Boy rocker-recliner. It can be a green velvet La-Z-Boy rocker recliner. It can be a lime green velvet La-Z-Boy rocker recliner with a cigarette burn on the left arm and a crushed jelly doughnut pressed into the back edge of the seat cushion. By the time we get to the last description, we have surely reached the individual, a single chair. Note how easy it is to visualize this chair, and how much attitude we can form about it.
The more you rely on general terms, the more your writing is likely to be vague and dull. As your language becomes more specific, though, your meanings become clearer and your writing becomes more interesting.
Does this mean you have to cram your writing with loads of detailed description? No. First, you don't always need modifiers to identify an individual: Bill Clinton and Mother Teresa are specifics; so are Bob's Camaro and the wart on Zelda's chin. Second, not everything needs to be individual: sometimes we need to know that Fred sat in a chair, but we don't care what the chair looked like.
If you think back to what you've just read, chances are you'll most easily remember and most certainly understand the gold Rolex, the Mercedes, and the lime green La-Z-Boy rocker-recliner. Their meanings are clear and they bring images with them (we more easily recall things that are linked with a sense impression, which is why it's easier to remember learning how to ride a bike or swim than it is to remember learning about the causes of the Civil War).
We experience the world first and most vividly through our senses. From the beginning, we sense hot, cold, soft, rough, loud. Our early words are all concrete: nose, hand, ear, cup, Mommy. We teach concrete terms: "Where's baby's mouth?" "Where's baby's foot?", not "Where's baby's democracy?" Why is it that we turn to abstractions and generalizations when we write?
I think part of it is that we're trying to offer ideas or conclusions. We've worked hard for them, we're proud of them, they're what we want to share. After Mary tells you that you're her best friend, you hear her tell Margaret that she really hates you. Mrs. Warner promises to pay you extra for raking her lawn after cutting it, but when you're finished she says it should be part of the original price, and she won't give you the promised money. Your dad promises to pick you up at four o'clock, but leaves you standing like a fool on the corner until after six. Your boss promises you a promotion, then gives it instead to his boss's nephew. From these and more specific experiences, you learn that you can't always trust everybody. Do you tell your child those stories? More probably you just tell your child, "You can't always trust everybody."
It took a lot of concrete, specific experiences to teach you that lesson, but you try to pass it on with a few general words. You may think you're doing it right, giving your child the lesson without the hurt you went through. But the hurts teach the lesson, not the general terms. "You can't always trust everybody" may be a fine main idea for an essay or paragraph, and it may be all that you want your child or your reader to grasp, but if you want to make that lesson clear, you'll have to give your child or your reader the concrete, specific experiences.
What principles discussed on this page are at work in the following excerpt from Jeff Biggers' essay, Searching for El Chapareke?
THIS WAS THE DAY the canyon walls of Cusarare, a Tarahumara Indian village tucked into the Sierra Madres of Chihuahua in northern Mexico, bloomed with women in colorful skirts, legions of children trailed by dogs, men in their white shirts and sombreros, all cascading down the pencil-thin trails toward the plaza. The women shifting babies saddled on their backs in rebozos sat in groups by the mission walls, wordless for hours, drinking the weekly Coke, watching as the faithful went to attend mass, young men shot hoops, and the older men hovered around benches at the back of the plaza, waiting for the weekly outdoor meeting of the community cooperative. Pigs wandered down the road in idle joy, and the dogs fought on cue outside the small shop.
You can check out this principle in the textbooks you read and the lectures you listen to. If you find yourself bored or confused, chances are you're getting generalizations and abstractions. [This is almost inevitable: the purpose of the texts and the teachers is to give you general principles!] You'll find your interest and your understanding increase when the author or teacher starts offering specifics. One of the most useful questions you can ask of an unclear presentation (including your own) is, "Can you give me an example?"
Your writing (whether it's in an essay, a letter, a memorandum, a report, an advertisement, or a resume) will be clearer, more interesting, and better remembered if it is dominated by concrete and specific terms, and if it keeps abstract and general terms to a minimum. Go ahead and use abstract and general terms in your thesis statement and your topic sentences. But make the development concrete and specific.
A Final Note Pointing Elsewhere
Sometimes students think that this discussion of types of language is about vocabulary, but it's not. You don't need a fancy vocabulary to come up with bent spoon or limping dog or Mary told Margaret she hates me. It's not about imagination, either. If you have reached any kind of a reasoned conclusion, you must have had or read about or heard about relevant experiences. Finding concrete specifics doesn't require a big vocabulary or a vivid imagination, just the willingness to recall what you already know. If you really can't find any examples or specifics to support your general conclusion, chances are you don't really know what you're talking about (and we are all guilty of that more than we care to admit).
Where do these concrete specifics emerge in the writing process? You should gather many concrete specifics in the prewriting steps of invention and discovery. If you have many concrete specifics at hand before you organize or draft, you're likely to think and write more easily and accurately. It's easier to write well when you're closer to knowing what you're talking about.
You will certainly come up with more concrete specifics as you draft, and more as you revise, and maybe still more as you edit. But you'll be a better writer if you can gather some concrete specifics at the very start.
After you have read and thought about this material, you should have a fairly clear idea of what concrete specifics are and why you want them. Your next step will be to practice.
Paragraphs & Topic Sentences
A paragraph is a series of sentences that are organized and coherent, and are all related to a single topic. Almost every piece of writing you do that is longer than a few sentences should be organized into paragraphs. This is because paragraphs show a reader where the subdivisions of an essay begin and end, and thus help the reader see the organization of the essay and grasp its main points.
Paragraphs can contain many different kinds of information. A paragraph could contain a series of brief examples or a single long illustration of a general point. It might describe a place, character, or process; narrate a series of events; compare or contrast two or more things; classify items into categories; or describe causes and effects. Regardless of the kind of information they contain, all paragraphs share certain characteristics. One of the most important of these is a topic sentence.
A well-organized paragraph supports or develops a single controlling idea, which is expressed in a sentence called the topic sentence. A topic sentence has several important functions: it substantiates or supports an essay’s thesis statement; it unifies the content of a paragraph and directs the order of the sentences; and it advises the reader of the subject to be discussed and how the paragraph will discuss it. Readers generally look to the first few sentences in a paragraph to determine the subject and perspective of the paragraph. That’s why it’s often best to put the topic sentence at the very beginning of the paragraph. In some cases, however, it’s more effective to place another sentence before the topic sentence—for example, a sentence linking the current paragraph to the previous one, or one providing background information.
Although most paragraphs should have a topic sentence, there are a few situations when a paragraph might not need a topic sentence. For example, you might be able to omit a topic sentence in a paragraph that narrates a series of events, if a paragraph continues developing an idea that you introduced (with a topic sentence) in the previous paragraph, or if all the sentences and details in a paragraph clearly refer—perhaps indirectly—to a main point. The vast majority of your paragraphs, however, should have a topic sentence.
Most paragraphs in an essay have a three-part structure—introduction, body, and conclusion. You can see this structure in paragraphs whether they are narrating, describing, comparing, contrasting, or analyzing information. Each part of the paragraph plays an important role in communicating your meaning to your reader.
Introduction: the first section of a paragraph; should include the topic sentence and any other sentences at the beginning of the paragraph that give background information or provide a transition.
Body: follows the introduction; discusses the controlling idea, using facts, arguments, analysis, examples, and other information.
Conclusion: the final section; summarizes the connections between the information discussed in the body of the paragraph and the paragraph’s controlling idea.
The following paragraph illustrates this pattern of organization. In this paragraph the topic sentence and concluding sentence (CAPITALIZED) both help the reader keep the paragraph’s main point in mind.
SCIENTISTS HAVE LEARNED TO SUPPLEMENT THE SENSE OF SIGHT IN NUMEROUS WAYS. In front of the tiny pupil of the eye they put, on Mount Palomar, a great monocle 200 inches in diameter, and with it see 2000 times farther into the depths of space. Or they look through a small pair of lenses arranged as a microscope into a drop of water or blood, and magnify by as much as 2000 diameters the living creatures there, many of which are among man’s most dangerous enemies. Or, if we want to see distant happenings on earth, they use some of the previously wasted electromagnetic waves to carry television images which they re-create as light by whipping tiny crystals on a screen with electrons in a vacuum. Or they can bring happenings of long ago and far away as colored motion pictures, by arranging silver atoms and color-absorbing molecules to force light waves into the patterns of original reality. Or if we want to see into the center of a steel casting or the chest of an injured child, they send the information on a beam of penetrating short-wave X rays, and then convert it back into images we can see on a screen or photograph. THUS ALMOST EVERY TYPE OF ELECTROMAGNETIC RADIATION YET DISCOVERED HAS BEEN USED TO EXTEND OUR SENSE OF SIGHT IN SOME WAY.
George Harrison, “Faith and the Scientist”
In a coherent paragraph, each sentence relates clearly to the topic sentence or controlling idea, but there is more to coherence than this. If a paragraph is coherent, each sentence flows smoothly into the next without obvious shifts or jumps. A coherent paragraph also highlights the ties between old information and new information to make the structure of ideas or arguments clear to the reader.
Along with the smooth flow of sentences, a paragraph’s coherence may also be related to its length. If you have written a very long paragraph, one that fills a double-spaced typed page, for example, you should check it carefully to see if it should start a new paragraph where the original paragraph wanders from its controlling idea. On the other hand, if a paragraph is very short (only one or two sentences, perhaps), you may need to develop its controlling idea more thoroughly, or combine it with another paragraph.
A number of other techniques that you can use to establish coherence in paragraphs are described below.
Repeat key words or phrases. Particularly in paragraphs in which you define or identify an important idea or theory, be consistent in how you refer to it. This consistency and repetition will bind the paragraph together and help your reader understand your definition or description.
Create parallel structures. Parallel structures are created by constructing two or more phrases or sentences that have the same grammatical structure and use the same parts of speech. By creating parallel structures you make your sentences clearer and easier to read. In addition, repeating a pattern in a series of consecutive sentences helps your reader see the connections between ideas. In the paragraph above about scientists and the sense of sight, several sentences in the body of the paragraph have been constructed in a parallel way. The parallel structures (the repeated "Or they . . ." sentences) help the reader see that the paragraph is organized as a set of examples of a general statement.
Be consistent in point of view, verb tense, and number. Consistency in point of view, verb tense, and number is a subtle but important aspect of coherence. If you shift from the more personal "you" to the impersonal “one,” from past to present tense, or from “a man” to “they,” for example, you make your paragraph less coherent. Such inconsistencies can also confuse your reader and make your argument more difficult to follow.
Use transition words or phrases between sentences and between paragraphs. Transitional expressions emphasize the relationships between ideas, so they help readers follow your train of thought or see connections that they might otherwise miss or misunderstand. The following paragraph shows how carefully chosen transitions (CAPITALIZED) lead the reader smoothly from the introduction to the conclusion of the paragraph.
I don’t wish to deny that the flattened, minuscule head of the large-bodied "stegosaurus" houses little brain from our subjective, top-heavy perspective, BUT I do wish to assert that we should not expect more of the beast. FIRST OF ALL, large animals have relatively smaller brains than related, small animals. The correlation of brain size with body size among kindred animals (all reptiles, all mammals, FOR EXAMPLE) is remarkably regular. AS we move from small to large animals, from mice to elephants or small lizards to Komodo dragons, brain size increases, BUT not so fast as body size. IN OTHER WORDS, bodies grow faster than brains, AND large animals have low ratios of brain weight to body weight. IN FACT, brains grow only about two-thirds as fast as bodies. SINCE we have no reason to believe that large animals are consistently stupider than their smaller relatives, we must conclude that large animals require relatively less brain to do as well as smaller animals. IF we do not recognize this relationship, we are likely to underestimate the mental power of very large animals, dinosaurs in particular.
Stephen Jay Gould, “Were Dinosaurs Dumb?”
SOME USEFUL TRANSITIONS
(modified from Diana Hacker, A Writer’s Reference)
- To show addition:
- again, and, also, besides, equally important, first (second, etc.), further, furthermore, in addition, in the first place, moreover, next, too
- To give examples:
- for example, for instance, in fact, specifically, that is, to illustrate
- To compare:
- also, in the same manner, likewise, similarly
- To contrast:
- although, and yet, at the same time, but, despite, even though, however, in contrast, in spite of, nevertheless, on the contrary, on the other hand, still, though, yet
- To summarize or conclude:
- all in all, in conclusion, in other words, in short, in summary, on the whole, that is, therefore, to sum up
- To show time:
- after, afterward, as, as long as, as soon as, at last, before, during, earlier, finally, formerly, immediately, later, meanwhile, next, since, shortly, subsequently, then, thereafter, until, when, while
- To show place or direction:
- above, below, beyond, close, elsewhere, farther on, here, nearby, opposite, to the left (north, etc.)
- To indicate logical relationship:
- accordingly, as a result, because, consequently, for this reason, hence, if, otherwise, since, so, then, therefore, thus
Produced by Writing Tutorial Services, Indiana University, Bloomington, IN |
NASA's Hubble Space Telescope has identified 18 "tiny" galaxies which existed 9 billion years ago and are brimming with star birth.
The galaxies - located in a field known as the Great Observatories Origins Deep Survey (GOODS) - are among 69 dwarf galaxies found in the GOODS and other fields.
While dwarf galaxies are the most common type of galaxy in the universe, the rapid star-birth observed in these newly found examples may force astronomers to reassess their understanding of the ways in which galaxies form.
To be sure, the galaxies are a hundred times less massive, on average, than the Milky Way, yet churn out stars at such a furious pace that their stellar content would double in just 10 million years. By comparison, the Milky Way would take a thousand times longer to double its star population.
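As a rough check on those ratios, the doubling time is simply the existing stellar mass divided by the star-formation rate. The sketch below uses invented, round numbers chosen only to illustrate the arithmetic; they are not the measured values from the Hubble study.

```python
# Back-of-the-envelope stellar-mass doubling times.
# t_double = current stellar mass / star-formation rate.
# All input numbers are illustrative assumptions, not data from the study.

def doubling_time_myr(stellar_mass_msun, sfr_msun_per_yr):
    """Time to double the stellar mass at a constant formation rate, in Myr."""
    return stellar_mass_msun / sfr_msun_per_yr / 1e6

dwarf = doubling_time_myr(stellar_mass_msun=5e8, sfr_msun_per_yr=50)      # assumed dwarf starburst
milky_way = doubling_time_myr(stellar_mass_msun=5e10, sfr_msun_per_yr=5)  # assumed Milky Way

print(f"dwarf starburst: ~{dwarf:,.0f} Myr to double")      # ~10 Myr
print(f"Milky Way:       ~{milky_way:,.0f} Myr to double")  # ~10,000 Myr, about 1000x longer
```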
The universe is estimated to be 13.7 billion years old, and these newly discovered galaxies are extreme even for the young universe - when most galaxies were forming stars at higher rates than they are today.
Astronomers using Hubble's instruments could spot the galaxies because the radiation from young, hot stars has caused the oxygen in the gas surrounding them to light up like a bright neon sign.
"The galaxies have been there all along, but up until recently astronomers have been able only to survey tiny patches of sky at the sensitivities necessary to detect them," explained Arjen van der Wel of the Max Planck Institute for Astronomy in Heidelberg, Germany. "We weren't looking specifically for these galaxies, but they stood out because of their unusual colors."
In addition to the images, Hubble also captured spectra that show the oxygen in a handful of galaxies and confirmed their extreme star-forming nature.
"Spectra are like fingerprint," said Amber Straughn at NASA's Goddard Space Flight Center in Greenbelt, Md. "They tell us the galaxies' chemical composition."
Interestingly enough, the resulting observations are somewhat at odds with recent detailed studies of the dwarf galaxies that are orbiting as satellites of the Milky Way.
"Those studies suggest that star formation was a relatively slow process, stretching out over billions of years," noted Harry Ferguson of the Space Telescope Science Institute (STScI) in Baltimore, Md.
"[The discovery] that there were galaxies of roughly the same size forming stars at very rapid rates at early times is forcing us to re-examine what we thought we knew about dwarf galaxy evolution."
Indeed, the observations suggest that the newly discovered galaxies were very common 9 billion years ago. However, it is a mystery why the newly found dwarf galaxies were making batches of stars at such a high rate.
Computer simulations show star formation in small galaxies may be episodic. Gas cools and collapses to form stars, which then reheat the gas and blow it away, as in supernova explosions. After some time, the gas cools and collapses again, producing a new burst of star formation, continuing the cycle.
"While these theoretical predictions may provide hints to explain the star formation in these newly discovered galaxies, the observed bursts are much more intense than what the simulations can reproduce," added van der Wel. |
Often called "ladybugs" or "ladybird beetles", lady beetles (Coccinellidae) are the most familiar insect predator to most people. Although dozens of species occur in Colorado, they are all typically a round-oval shape. Most are also brightly colored and often spotted.
Females periodically lay masses of orange-yellow eggs. The eggs are quite distinctive, although they somewhat resemble those produced by elm leaf beetle. Eggs are usually laid near colonies of insects (aphids, scales, etc.) which will later be fed on by the larvae.
During the summer eggs hatch in about five days. The immature or larval stages look very different from the more familiar adults and often are overlooked or misidentified. Lady beetle larvae are elongated, usually dark colored and flecked with orange or yellow. They can crawl rapidly over plants, searching for food.
Adult and larval lady beetles feed on large numbers of small soft-bodied insects such as aphids. Lady beetles also eat eggs of many insects. Pollen, nectar and honeydew are other common foods.
One group of very small black lady beetles, aptly dubbed the "spider mite destroyers" (Stethorus), is also very important in controlling spider mites. Another unusual group is Coccidophilus spp., which are important predators of scales. Larvae of some lady beetles, e.g., those which specialize on aphids within leaf curls or feed on mealybugs, produce waxy threads which cover their body.
Lady beetles reproduce rapidly during the summer and can complete a generation in less than four weeks under favorable conditions. As a result, they often overtake a pest outbreak, controlling many potential insect problems.
Unfortunately, lady beetles tend to be 'fair weather' insects that are slow to arrive in the spring and often leave the plants by late summer. (A few kinds along the Front Range even 'head for the hills', spending the cool seasons at high elevations, protected under the snow.) As a result, late-season 'blooms' of aphids sometimes occur, as the aphids continue to feed after escaping their natural enemies.
The information herein is supplied with the understanding that no discrimination is intended and that listing of commercial products, necessary to this guide, implies no endorsement by the authors or the Extension Services of Nebraska, Colorado, Wyoming or Montana. Criticism of products or equipment not listed is neither implied nor intended. Due to constantly changing labels, laws and regulations, the Extension Services can assume no liability for the suggested use of chemicals contained herein. Pesticides must be applied legally complying with all label directions and precautions on the pesticide container and any supplemental labeling and rules of state and federal pesticide regulatory agencies. State rules and regulations and special pesticide use allowances may vary from state to state: contact your State Department of Agriculture for the rules, regulations and allowances applicable in your state and locality. |
Difference Between Heterogeneous and Homogeneous
We come across homogeneous and heterogeneous products in our everyday lives. Basically, we subdivide mixtures into homogeneous and heterogeneous mixtures. In simple words, in a homogeneous mixture you cannot easily differentiate its components, while the components of a heterogeneous mixture can be easily differentiated. An example of a homogeneous mixture is a salt solution: we cannot see its components, water and salt, separately. An example of a heterogeneous mixture is a mix of water and petrol, in which the different components are separately visible.
In a homogeneous mixture, the appearance is uniform and so is the composition. Most homogeneous mixtures are solutions. In solutions, if one of the components is a powder or a liquid that dissolves with the other liquid, then we will not be able to see the different components separately. We see it as one liquid. Air without any visible particles also is a homogeneous mixture. But in fact air is made up of various gases.
In a heterogeneous product, we can easily distinguish the different components in it. If you try to mix sand and water, it will not dissolve. So you can easily distinguish between the sand and the water.
We also have homogeneous and heterogeneous catalysts. A homogeneous catalyst is used in the same phase as the reactants and is in the same state of matter as the other reactants. A heterogeneous catalyst is used in a different phase from the reactants and can be in a different state of matter from the reactants.
The components of heterogeneous mixtures can be separated very easily, but separating the components of a homogeneous mixture can be difficult. If you have a heterogeneous mixture of sand and water, you can just allow the sand to settle and then separate the components. If you have a homogeneous mixture like a sugar solution, you will have to separate it by the process of evaporation. Other methods of physically separating the components of a mixture are filtration, using a magnet, crystallization, distillation, decantation, and sieving. If you have a mixture of sand and iron filings, use a magnet to separate it. Spread out the mixture on a surface and move the magnet over it. The iron filings start clinging to the magnet.
You can say that an atom is heterogeneous as it is comprised of protons, neutrons, and electrons. But a molecule can be homogeneous as it is comprised of atoms.
Activity 4: Calming Beads
Activity time: 15 minutes
Materials for Activity
- Waxed linen, or elastic cord and large needles for all participants
- Assorted beads, at least 16 for each participant, and shallow trays
- One large wooden or metal bead for each participant
Preparation for Activity
- Cut 14-inch lengths of cord, one for each participant. Waxed linen cord is best, because children can string beads without using a needle.
- Place smaller beads in trays and set them at work tables.
- Place the special, larger beads in another tray.
- Make a sample set of Calming Beads using the directions below.
Description of Activity
The children create a strand of beads they can use to calm themselves when they feel angry and learn a tactile counting practice to use with it.
Affirm that it is okay to get angry sometimes. However, when we stay angry we hurt ourselves. This activity is about making something we can hold onto instead of holding onto our anger.
Invite each child to choose a large bead. Distribute the lengths of cord. Point out the smaller beads on the work tables.
Explain, demonstrating as needed:
1. String one end of the cord through the starter bead.
2. Then string at least ten smaller beads on the cord, while keeping hold of the end.
3. Pull the cord through the starter bead again. Keep the cord loose enough so the beads can move back and forth, then knot the cord.
4. If the starter bead has a hole too large for the knot, tie another small bead on the end: String both ends of the cord through the smaller bead and tie a knot. You can add even more beads to this "tail."
When everyone is done, call children into a circle and invite them to admire one another's work. Show participants how to take the Calming Beads into their hands, looped over their fingers, and how to move along them, bead to bead. Ask the children to sit quietly, closing their eyes if they wish, while you repeat these centering words:
As you breathe in, feel your body opening up with air. As you breathe out, feel yourself relaxing.
Breathe in and out as each bead passes through your finger.
Repeat the centering words a few times. Then say, in your own words:
Just like you can put yourself to sleep at night, you can also calm yourself when you are angry. These beads are a great way to help you. Keep them in your pocket or around your wrist to use whenever you need.
Invite participants to continue holding the beads and using them to count quietly. Ask the group:
- Do you think your Calming Beads might be helpful to you?
- Who can try in the coming week to use your Calming Beads when you get angry?
- What other ways do you use to help you remain calm?
Have children make extra sets of Calming Beads to include in the Fidget Objects basket.
Including All Participants
This activity requires manual dexterity and in some cases, patience. Large beads are much easier to string. |
What's the Latest Development?
A new study in the journal Cognition demonstrates how children are born scientists and given to empirical investigation, i.e. playing, to interrogate the world around them. In one experiment, two groups of children were shown four beads that could activate a music box. In the first group, any of the beads would make the box play; for the other group, only two of the four beads worked. The second group, left with uncertainty, investigated the beads by making different combinations. In another study, children showed a greater tendency to investigate when the answer to a problem wasn't given to them directly.
What's the Big Idea?
Children's play is actually a form of learning, a way to investigate the causal mechanisms in the world around them. "Exploratory play is a complex phenomenon," write the authors of the new study, "presumably subserving a range of functions other than generating informative evidence...However, to the extent that children acquire causal knowledge through exploration, the current results begin to bridge the gap between scientific inquiry and child's play." The task for parents and teachers is to present knowledge while preserving a sense of uncertainty. |
The importance of grammar and punctuation to the curriculum
Grammar is concerned with the way in which sentences are used in spoken language, in reading and in writing. Sentences are the constructs which help give words their sense. The purpose of grammar teaching is to enable pupils to become conscious of patterns of language which they can apply in their own work to enhance meaning.
The purpose of punctuation is to clarify the meaning of texts. Readers use punctuation to help make sense of written texts while writers use punctuation to help communicate intended meaning to the reader.
Teachers work hard to enhance pupils’ vocabulary through opportunities that arise naturally from their reading and writing. Throughout all literacy lessons, pupils are shown how to understand the relationship between words, how to understand nuances in meaning and how to develop their use of figurative language. They are also taught how to work out and clarify the meaning of unknown words and words with more than one meaning. In addition to this, new words and phrases are introduced each week. The pupils explore the meaning of the word or phrase and begin experimenting and perfecting using these in their writing. Regular work on synonyms and antonyms also helps broaden the pupils’ vocabulary.
Teaching explicit knowledge of grammar is important to enable the pupils to have a more conscious control and choice of their language. Building this knowledge is taught through focused activities and within the teaching of reading, writing and speaking. Once the pupils are familiar with a grammatical concept their teacher encourages them to apply and explore this concept in the grammar in their own speech and writing.
Each week the pupils have a set of spellings to revise. These spellings are linked to the spelling focus taught that week in their literacy lesson. |
Footage of the newly invented machines that helped launch the Industrial Revolution is a fitting beginning to this program. The economic upheaval that swept the Western world, coupled with the political turmoil that saw its first fruits in the French Revolution, were to remake European society and, as a direct consequence, to refashion the role of the Jews within the countries of Europe. Organized religion was under attack; it was no longer seen as containing the ultimate truth for mankind. Rather, human reason, as celebrated by the philosophers of the Enlightenment, appeared to be a fitting guide to contemporary society.
Jews both participated in and were affected by Enlightenment ideas. A major figure of the Jewish Enlightenment was Moses Mendelssohn, a friend of Gotthold Lessing (mentioned in the last chapter) and a prominent philosopher in his own right. In Source Reader Selection 1, Mendelssohn attempts a synthesis of Jewish teachings, especially Jewish law, and the demands of the general society.
Enlightenment ideas led to calls for a new political order in Europe. No longer would divine right be accorded kings, or a nobility or aristocracy claim any greater hold on society's benefits and honors. Society, the theorists argued, should be governed by those who showed themselves to be most fit and capable, not by those who received their mandate through heredity. The question for the Jews was where they would fit into this newly ordered society. Previously, they had been seen as a people apart. But if all legal distinctions between men were to disappear, where did that leave the Jews? Would they be able to maintain their group identity if they merged with the general society?
There were economic motivations for this new reordering of society. If each government benefited by maximally utilizing their citizens, it made no sense to discriminate against individuals on the basis of their religion. The Jews welcomed the opening of vast opportunities but wondered what their role would be in this new political and economic system.
The French Revolution erupted in 1789 and on August 27 its leaders declared that all men were equal. But did this statement include the Jews? Deputies of the National Assembly long debated the issue and came down on both sides of the question. (See Source Reader Selection 2.) For some, the Jews would always be a people apart, never taking their place in the French mainstream; for others, the Jews were to be accepted if they converted to Christianity; for still others, whose ideas eventually carried the day, the Jews had to be emancipated and granted equal rights, for only then would they be able to develop into worthwhile Frenchmen.
The Jews, however, were not all emancipated at once. The Jews who lived in southern France, who were of Sephardic origin, were given equal rights in January 1790. (See Source Reader Selection 3.) But the Ashkenazic Jews of eastern France, who were poorer and mainly Yiddish-speaking, were not included in this declaration. When the French Constitution (see Source Reader Selection 4) declared liberty for all in September 1791, the Ashkenazim had to be emancipated. (See Source Reader Selection 5.)
Not all the Jews were sure that emancipation was a blessing. But for Berr Isaac Berr, whose words we hear in the video, it was a time for great rejoicing, as Jews were allowed to join their compatriots as Frenchmen. (Professor Michael Stanislawski has included a large excerpt from Berr's remarks in Source Reader Selection 6.)
But these revolutionary ideas affected more than French Jewry. When Napoleon began to conquer territories in Western Europe, he brought French revolutionary ideals with him, including those that maintained that the Jews were full citizens of the modern state. (See Source Reader Selection 7.) And some countries were simply affected by the French ideals: Prussia, for example, granted the Jews full political rights. (See Source Reader Selection 8.) Napoleon, while emancipating the Jews and destroying the ghetto walls that surrounded them, raised the Jewish question at home in France. He was vexed by the economic tensions that continued to erupt between the Ashkenazic Jews and the other inhabitants of northeastern France. While he struggled with this problem, the Jews were still treated as a group apart from the rest of society. Napoleon's stance on Jewish issues became more obvious when he called an assembly of Jews and asked them to reply to various questions, including whether Jews considered themselves Frenchmen. Even after legal emancipation, Jewish status was still being called into question.
Although emancipation was not revoked in France, it was turned back elsewhere in Western and Central Europe after Napoleon's defeat in 1815. The Congress of Vienna sought to reestablish the old political and social order, and the Jews' freedoms were curtailed by this reactionary program. The Jews in these countries were frustrated and sought, through political means, to put their homelands back on the liberal track. They were briefly successful during the 1848 revolution, but as explained in the TV series, the conservative forces won the day and the new reforms were rescinded.
England presented a different situation for the Jews. They were never formally emancipated, but restrictions on their political and social behavior were gradually discontinued. By 1858 Jews could be seated in Parliament, eleven years after the first Jew was elected. (See Source Reader Selection 9.) The debate over Lionel Rothschild's admission to the House of Commons is extensively treated in the video.
Modernity also affected the Jews socially and intellectually. Jews began to acculturate into the societies in which they lived as a means of economic and social advancement. The new ideas also affected Jews' views of Judaism. We have briefly mentioned Moses Mendelssohn and his ideals. But his synthesis did not appeal to those Jews who questioned whether to remain Jewish at all. Especially in Western European countries, where political emancipation had not been granted to the Jews, some Jews attempted to refashion their Judaism to make it palatable to their compatriots. Many simply left their Jewish heritage behind, ever mindful of the poet Heine's advice, quoted in the video, that baptism was the ticket to European society.
But not all Jews wished to take this step. The movement for religious reform in Judaism argued that Jews no longer needed to follow specific rituals and observances. Rather, the core of their religion was ethical monotheism, and if a person lived according to those precepts he was a fine Jew. Students should realize that this refashioning of Judaism was both ideological and practical. (See Source Reader Selection 10.) The Jew would feel an obligation to spread his truth to others but had no connection to other Jews throughout the world; he could and wished to be a full participating member of the society in which he lived.
In their desire to reform Judaism, other Jews sought to investigate the Jewish past to determine what was the core of their religion and what could be considered peripheral. Another group, the Positive-Historical school, thought that the Reform movement had gone too far and argued for a more moderate approach to the reorganization of Judaism. (For more on this group, see Source Reader Selection 11.)
A more conservative approach was adopted by Rabbi Samson Raphael Hirsch and his Neo-Orthodox adherents. Willing to accept the benefits and responsibilities of emancipation, they nevertheless argued that they would not change any of the traditions enjoined upon the Jews. They adopted some of the philosophy and rhetoric of the Reform movement, yet they adhered closely to Jewish laws. (See Source Reader Selection 12.).
Slowly, the Jews were emancipated in the rest of Western Europe: in united Germany in 1869, and in unified Italy in 1870. (See Source Reader selections 13 and 14.) The majority of European Jews, though, were in the eastern half of the continent, and for these Jews the political, social and economic climate differed radically from that of their co-religionists in the West. Jews lived in what was formerly eighteenth-century Poland: in Western Poland, now under Prussian rule; in Galicia, now part of Austria; and in the semi-independent kingdom of Poland. Mainly they resided in Lithuania, Belorussia and the Ukraine, all part of the greater Russian Empire. Although the Russian government had not allowed Jews to live within its borders, it acquired almost one million Jews when it took over Polish lands.
The Russian Jews were Yiddish-speaking and religiously observant; they were engaged in small-scale commerce and crafts. Many of them by the nineteenth century were aligned with Hasidic groups. (See Source Reader Selection 15.) In the video, voice-overs offer the words of typical individuals in a shtetl (town) of Eastern Europe. For a contrast to the members of the Hasidic groups, listen especially for the student of the Talmud and his concerns.
Politically the Russian Empire was ruled by a monarch whose power was unlimited. Socially, serfdom was the reality for the majority of the peasants. The Jews were limited in the area of their residence to approximately the same territories they had lived in while under Polish rule. They could not reside where they wanted within Russia. (See Source Reader Selection 16.) In such a society, Enlightenment ideas hardly made any headway. Those who were attracted to aspects of modern Jewish culture were in the distinct minority.
In the first half of the nineteenth century, the Tsar began to intrude more and more into Jewish communal affairs. By 1844, Jewish communal boards were abolished. In the next decade, the government tried to encourage Russian culture among the Jewish population. While the Westernized minority among the Jews welcomed this new turn of affairs, most Russian Jews distrusted the Tsar's educational "reforms." (See Source Reader Selection 17, which is very revealing of the nature of Russian Jewish attitudes towards these foreign ideas.)
The recalcitrance of the masses did not stop the Russian Jewish Enlightenment from plowing ahead, creating new literary works in Hebrew and Yiddish and translating Western thinkers into the Jews' own languages. A call for a new Jewish intellectual consciousness was issued by Judah Leib Gordon, whose poem "Awake My People" (reproduced in Source Reader Selection 18) assures Jews that progress is at hand if they will only join hands with their Russian compatriots.
When Alexander II ascended to the throne in 1855, the Jews in Russia were filled with hope. He emancipated the serfs in 1861 and generally appeared to be modernizing Russian society. But these hopes for the future, such as those expressed by J. L. Gordon, were soon to be dashed.
By the 1870's any observer of the Jewish scene across the European continent would argue that things were getting much better for this oft-oppressed minority. Jews were emancipated in the West, and ideas of reason and progress appeared to be constantly gaining ground.
But what this view overlooked was that, for many other groups in society, the influence of modern ideas had been detrimental to their economic and political status. Agricultural laborers, clergymen, small shopkeepers and factory workers all looked upon the developments of the past century as destructive to their lives. What emerged from their distress was the belief that the Jews who appeared to benefit from the changes of modernity were the prime movers behind all of their misfortunes. Theological anti-Judaism was rediscovered, cleansed of its religious overtones and combined with racist ideas. This modern anti-Semitic ideology was employed by people on both ends of the political spectrum and was used as a basis for the founding of new political parties. A full discussion of modern anti-Semitism is presented in the television program.
In Eastern Europe, the hopes for reform, fueled by the policies of Alexander II, abruptly ended when the Tsar was assassinated in 1881. Soon after, hundreds of Jews were injured and killed and Jewish stores were destroyed as pogroms broke out in Russia. It was widely known that government officials often stood by while the Jews were attacked. The government enacted the infamous May laws, which further restricted Jewish economic life in the wake of the destruction. (See Source Reader Selection 19.)
At the same time in Western Europe, anti-Semitism was on the rise. In France, the home of the revolution, a Jewish army official was falsely accused of conspiring with Germany against France. Anti-Semites used the Dreyfus Affair to call the whole issue of Jewish patriotism into question. After many years of much protest, Dreyfus was exonerated.
One of the leaders in his behalf was the writer Emile Zola, whose famous call, "J'accuse," is found in Source Reader Selection 20. His words are also included in the TV show.
Shaping a Future
After these events, in both Eastern and Western Europe, many Jews rethought their attitude towards emancipation and acculturation. For some Jews, among them the Vienna-born journalist Theodor Herzl, the Dreyfus case was the last straw leading to the realization that the Jews would never be accepted within modern European society.
Herzl, affected by European nationalism, founded the Zionist movement, an organization pledged to create a separate national homeland for the Jewish people. (See Source Reader Selection 21, which contains an excerpt from Herzl's "The Jewish State.")
Some Zionist thinkers did not see this new Jewish state as simply a political solution to the Jewish problem. They envisioned a national entity that would nourish world Jewry intellectually and spiritually. The new Hebrew culture to be created in this homeland would be of great consequence for oppressed Jewry. Western notions of emancipation were "slavery in freedom" to Ahad Ha'am. (See Source Reader Selection 22.)
Some Jews within Russia did not long for a separate Jewish homeland because they deemed the Zionist dream impractical. Rather, they imagined that Jews should possess national autonomy within the European societies in which they lived. One of the spokesmen for this Diaspora nationalism was the historian Simon Dubnov; an excerpt from his writings is found in Source Reader Selection 23. For many Russian Jews, however, a separate national entity was no solution. They felt that Jews should fight for a social revolution to create a totally new society in which no forms of prejudice would be recognized.
Alongside this frenzied Jewish political activity of all stripes, Yiddish and Hebrew writers in Eastern Europe were creating gems of literature attuned to modern sensibilities. The video often refers to the corpus of Jewish literature that developed at this time. In Source Reader Selection 24 you will find H. N. Bialik's "In the City of Slaughter," one of the most moving poems in modern Hebrew literature. Bialik excoriates traditional Jewish society for creating a type of Jew who could not resist his attackers during the Kishinev pogroms of 1903. Whether or not Bialik was correct in his analysis, his poem stands as a monument to the feelings of many Jews in Eastern Europe who were impatient with the oppressive status quo.
A New Century
This period of modern Jewish history came to an end in 1914 with the outbreak of World War I. In the aftermath of the "war to end all wars," new options would be created and new dangers would loom over Jewry throughout the world. |
Confucius (or Kongzi) was a Chinese philosopher who lived in the 6th century BCE and whose thoughts, expressed in the philosophy of Confucianism, have influenced Chinese culture right up to the present day. Confucius has become a larger than life figure and it is difficult to separate the reality from the myth. He is considered the first teacher and his teachings are usually expressed in short phrases which are open to various interpretations. Chief among his philosophical ideas is the importance of a virtuous life, filial piety and ancestor worship. Also emphasised is the necessity for benevolent and frugal rulers, the importance of inner moral harmony and its direct connection with harmony in the physical world and that rulers and teachers are important role models for wider society.
Confucius' Early life
Confucius is believed to have lived from c. 551 to c. 479 BCE in the state of Lu (now Shandong or Shantung). However, the earliest written record of him dates from some four hundred years after his death in the Historical Records of Sima Qian (or Si-ma Ts‘ien). Raised in the city of Qufu (or K‘u-fou), Confucius worked for the Prince of Lu in various capacities, notably as the Director of Public Works in 503 BCE and then the Director of the Justice Department in 501 BCE. Later, he travelled widely in China and met with several minor adventures including imprisonment for five days due to a case of mistaken identity. Confucius met the incident with typical restraint and was said to have calmly played his lute until the error was discovered. Eventually, Confucius returned to his hometown where he established his own school in order to provide students with the teachings of the ancients. Confucius did not consider himself a ‘creator’ but rather a ‘transmitter’ of these ancient moral traditions. Confucius’ school was also open to all classes, rich and poor.
It was whilst he was teaching in his school that Confucius started to write. Two collections of poetry were the Book of Odes (Shijing or Shi king) and the Book of Documents (Shujing or Shu king). He also compiled the Spring and Autumn Annals (Lin Jing or Lin King), which told the history of Lu, and the Book of Changes (Yi Jing or Yi king), a collection of treatises on divination. Unfortunately for posterity, none of these works outlined Confucius' philosophy. Confucianism, therefore, had to be created from second-hand accounts, and the most reliable documentation of the ideas of Confucius is considered to be the Analects, although even here there is no absolute evidence that the sayings and short stories were actually said by him, and often the lack of context and clarity leave many of his teachings open to individual interpretation. The other three major sources of Confucian thought are Mencius, Great Learning and Mean. With Analects, these works constitute the Four Books of Confucianism, otherwise referred to as the Confucian Classics. Through these texts, Confucianism became the official state religion of China from the second century BCE.
Chinese philosophy, and particularly Confucianism, has always been concerned with practical questions of morality and ethics. How should man live in order to master his environment, provide suitable government and achieve moral harmony? Central to Confucianism is that the moral harmony of the individual is directly related to cosmic harmony; what one does, affects the other. For example, poor political decisions can lead to natural disasters such as floods. An example of the direct correlation between the physical and the moral is evidenced in the saying, ‘Heaven does not have two suns and the people do not have two kings’. A consequence of this idea is that, just as there is only one cosmic environment, there is only one true way to live and only one correct political system. If society fails it is because sacred texts and teachings have been misinterpreted; the texts themselves contain the Way but we must search for and find it.
Another important facet of Confucius’ ideas was that teachers, and especially rulers, must lead by example. They must be benevolent in order to win the affections and respect of the populace and not do so by force, which is futile. They should also be models of frugality and high moral upstanding. For this reason, Chinese education has often favoured the cultivation of moral sensibilities rather than specific intellectual skills. Further, under Confucian influence, Chinese politics principally focussed on the intimacy of relationships rather than institutions.
Mencius & Xunzi
The thoughts of Confucius were further developed and codified by two important philosophers, Mencius (or Mengzi) and Xunzi (or Hsun Tzu). Whilst both believed that man’s sense of morality and justice separated him from the other animals, Mencius expounded the belief that human nature is essentially good whilst Xunzi, although not of an opposite position, was slightly more pessimistic about human nature and he, therefore, stressed the importance of education and ritual to keep people on the right moral track.
Confucianism, therefore, expounded the importance of four virtues which we all possess: benevolence (jen), righteousness (i), observance of rites (li) and moral wisdom (te). A fifth was later added - faith - which neatly corresponded to the five elements (in Chinese thought) of earth, wood, fire, metal and water. Once again, the belief that there is a close link between the physical and moral spheres is illustrated. By stating that all men have such virtues, two ideas are consequent: education must nurture and cultivate them and all men are equal - ‘Within the four seas all men are brothers’. With suitable application, anyone can become a sage (sheng). It is not innate talent which is important but one’s will to mould one’s character into the most virtuous possible.
Following his death in 479 BCE, Confucius was buried in his family’s tomb in Qufu (in Shandong) and, over the following centuries, his stature grew so that he became the subject of worship in schools during the Han Dynasty (206 BCE-220 CE) and temples were established in his name at all administrative capitals during the Tang Dynasty (618-907 CE). Throughout the imperial period an extensive knowledge of the fundamental texts of Confucianism was a necessity in order to pass the civil service selection examinations. Educated people often had a tablet of Confucius’ writings prominently displayed in their houses and sometimes also statues, most often seated and dressed in imperial costume to symbolise his status as ‘the king without a throne’. Portrait prints were also popular, especially those taken from the lost original attributed to Wu Daozi (or Wu Taoutsi) and made in the 8th century CE. Unfortunately, no contemporary portrait of Confucius survives but he is most often portrayed as a wise old man with long grey hair and moustaches, sometimes carrying scrolls.
The teachings of Confucius and his followers have, then, been an integral part of Chinese education for centuries and the influence of Confucianism is still visible today in contemporary Chinese culture with its continued emphasis on family relationships and respect, the importance of rituals, the value given to restraint and ceremonies, and the strong belief in the power and benefits of education.
Leishmaniasis is an infectious disease spread by the bite of the female sandfly.
Kala-azar; Cutaneous leishmaniasis; Visceral leishmaniasis; Old world leishmaniasis; New world leishmaniasis
Leishmaniasis is caused by a tiny parasite called leishmania protozoa. Protozoa are one-celled organisms.
There are different forms of leishmaniasis.
- Cutaneous leishmaniasis affects the skin and mucous membranes. Skin sores usually start at the site of the sandfly bite. In a few people, sores may develop on mucous membranes.
- Systemic, or visceral, leishmaniasis affects the entire body. This form occurs 2 to 8 months after a person is bitten by the sandfly. Most people do not remember having a skin sore. This form can lead to deadly complications. The parasites damage the immune system by decreasing the numbers of disease-fighting cells.
Cases of leishmaniasis have been reported on all continents except Australia and Antarctica. In the Americas, leishmaniasis can be found in Mexico and South America. Leishmaniasis has been reported in military personnel returning from the Persian Gulf.
Symptoms of cutaneous leishmaniasis depend on where the lesions are located and may include:
- Breathing difficulty
- Skin sores, which may become a skin ulcer that heals very slowly
- Stuffy nose, runny nose, and nosebleeds
- Swallowing difficulty
- Ulcers and wearing away (erosion) in the mouth, tongue, gums, lips, nose, and inner nose
Systemic visceral infection in children usually begins suddenly with:
Other symptoms of systemic visceral leishmaniasis may include:
- Abdominal discomfort
- Fever that lasts for weeks; may come and go in cycles
- Night sweats
- Scaly, gray, dark, ashen skin
- Thinning hair
- Weight loss
Exams and Tests
Your health care provider will examine you and may find that your spleen, liver, and lymph nodes are enlarged. You will be asked if you recall being bitten by sandflies or if you've been in an area where leishmaniasis is common.
Tests that may be done to diagnose the condition include:
- Biopsy of the spleen and culture
- Bone marrow biopsy and culture
- Direct agglutination assay
- Indirect immunofluorescent antibody test
- Leishmania-specific PCR test
- Liver biopsy and culture
- Lymph node biopsy and culture
- Montenegro skin test (not approved in the United States)
- Skin biopsy and culture
Other tests that may be done include:
Medicines called antimony-containing compounds are the main drugs used to treat leishmaniasis. These include:
- Meglumine antimoniate
- Sodium stibogluconate
Other drugs that may be used include:
- Amphotericin B
Plastic surgery may be needed to correct the disfigurement caused by sores on the face (cutaneous leishmaniasis).
Cure rates are high with the proper medicine, especially when treatment is started before the immune system is damaged. Cutaneous leishmaniasis may lead to disfigurement.
Death is usually caused by complications (such as other infections), rather than from the disease itself. Death often occurs within 2 years.
Leishmaniasis may lead to the following:
- Bleeding (hemorrhage)
- Deadly infections due to immune system damage
- Disfigurement of the face
When to Contact a Medical Professional
Contact your provider if you have symptoms of leishmaniasis after visiting an area where the disease is known to occur.
Taking measures to avoid sandfly bites can help prevent leishmaniasis:
- Putting fine mesh netting around the bed (in areas where the disease occurs)
- Screening windows
- Wearing insect repellent
- Wearing protective clothing
Public health measures to reduce sandflies are important. There are no vaccines or drugs that prevent leishmaniasis.
Boelaert M, Sundar S. Leishmaniasis. In: Farrar J, Hotez PJ, Junghanss T, Kang G, Lalloo D, White NJ, eds. Manson's Tropical Diseases. 23rd ed. Philadelphia, PA: Elsevier Saunders; 2014:chap 47.
Magill AJ. Leishmania species. In: Bennett JE, Dolin R, Blaser MJ, eds. Mandell, Douglas, and Bennett's Principles and Practice of Infectious Diseases, Updated Edition. 8th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 277.
Reviewed By: Jatin M. Vyas, MD, PhD, Assistant Professor in Medicine, Harvard Medical School; Assistant in Medicine, Division of Infectious Disease, Department of Medicine, Massachusetts General Hospital, Boston, MA. Also reviewed by David Zieve, MD, MHA, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team. |
Immunoglobulins (Ig), or antibodies, serve as key detection molecules in the immune system. Antibodies are proteins produced by the B-lymphocytes of the body's immune system in response to foreign substances or infections, including bacteria or viruses.
There are five major classes (isotypes) of immunoglobulins:
- IgA (immunoglobulin A)
- IgD (immunoglobulin D)
- IgE (immunoglobulin E)
- IgG (immunoglobulin G)
- IgM (immunoglobulin M)
IgM is the largest immunoglobulin of all, with a size of about 970 kDa, and the first immunoglobulin to develop during human fetal development, at around 20 weeks. In addition, IgM antibodies are the first to be produced in response to any infection, and they account for only about 10% of the immunoglobulin in serum.
IgG is the most abundant type of antibody (about 75% of the immunoglobulin in serum); it is found in all body fluids and protects against bacterial and viral infections.
IgG antibodies are smaller (about 150 kDa) and are the only type of antibody that can cross the placenta, passing the mother's immunity to the infant while the infant's own humoral response is still inefficient.
Why is my Doctor checking for antibodies?
An important difference between the two antibodies is related to timing of exposure. IgM antibodies are the first antibodies produced to fight an attack by viruses or bacteria, so they are usually found in the body right after it has been exposed to a disease. Afterward, IgM antibodies disappear within 2 or 3 weeks and are replaced by IgG antibodies, which last for life, providing lasting immunity.
Therefore, doctors check for antibodies because IgM is an indicator of a current infection, while the presence of IgG antibodies against a certain antigen indicates a recent or past exposure to the illness.
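That reasoning can be summarized as a small decision rule. The sketch below encodes only the simplified logic of this article (IgM suggesting current infection, IgG suggesting past exposure); real serology interpretation depends on the pathogen, the assay and the timing, so treat it purely as an illustration.

```python
def interpret_serology(igm_positive: bool, igg_positive: bool) -> str:
    """Very simplified reading of paired IgM/IgG results for a single antigen."""
    if igm_positive and not igg_positive:
        return "Likely early or current infection (IgM appears first, before IgG)."
    if igm_positive and igg_positive:
        return "Likely recent infection transitioning to lasting IgG immunity."
    if igg_positive:
        return "Past exposure (or vaccination); IgG typically persists long-term."
    return "No serological evidence of exposure, or too early to detect antibodies."

print(interpret_serology(igm_positive=True, igg_positive=False))
```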
Folic acid helps the body build and maintain DNA and is important in helping the body make new cells, especially red blood cells. [American Cancer Institute, 2011]
Vitamin B9 can be called either "folate" or "folic acid"; the two terms are often used interchangeably, although they are slightly different. Folate is the proper name for the vitamin when it is obtained from food. Folic acid is the proper term when it is taken as a supplement, in either tablet or powder form.
Low levels of folic acid in the blood have been linked with higher rates of colorectal cancer and some other types of cancer, as well as with certain birth defects. It is not clear whether consuming recommended (or higher) amounts of folic acid-from foods or in supplements can lower cancer risk in some people. These issues are being studied. [American Cancer Institute, 2011]
How folic acid might affect cancer risk is not exactly clear. Cells need folic acid to make and repair DNA when they divide to create new cells. Folic acid may also be involved in how cells turn certain genes on and off. Scientists believe low levels of folic acid can lead to changes in the chemicals that affect DNA, which may alter how well cells can repair themselves or divide without making mistakes. These changes might in turn lead to cancer. Further research is needed. [American Cancer Institute, 2011]
Whether folic acid works against cancer may also depend on when it is taken. Some researchers think that folic acid may not be helpful, and could even be harmful, in people who already have cancer or pre-cancerous conditions. For example, two randomized, controlled trials found that folic acid supplements had no effect on women who already had pre-cancerous conditions of the cervix. Along those same lines, drugs that block folic acid are routinely used to treat cancer. This seems contradictory, but folic acid is used to make DNA and RNA. [American Cancer Institute, 2011]
High doses of folic acid can interfere with the action of some chemotherapy drugs, such as Methotrexate. [American Cancer Institute, 2011] Interactions with high levels of citrus fruits are also an issue with rapamycin-class treatments, including Everolimus (Afinitor), more commonly known as RAD001, as well as Sirolimus.
Folic acid can also mask symptoms of vitamin B12 deficiency by correcting the anemia caused by low vitamin B12 levels. But vitamin B12 deficiency can still cause nervous system damage, which folic acid cannot correct. In fact, high doses of folic acid can worsen the nervous system damage, and continued B12 deficiency can allow the damage to become permanent. [American Cancer Institute, 2011] |
This lesson is about entrepreneurship and its place in society. It develops speaking and writing skills and the use of context-specific vocabulary and idiomatic language. The students’ own experiences and opinions form the basis of all discussions and written work.
- To identify what it means to be an entrepreneur and discuss the importance of entrepreneurship to individuals and society
- To identify the meaning of and use vocabulary in the context of entrepreneurship
- To identify the meaning of and use idioms in the context of entrepreneurship
- To write a narrative about an entrepreneur’s life and achievements
The lesson plan and worksheets/resources below are downloadable and in pdf format - right click on the attachment and save it on your computer.
Copyright - please read
All the materials on these pages are free for you to download and copy for educational use only. You may not redistribute, sell or place these materials on any other web site without written permission from the BBC and British Council. If you have any questions about the use of these materials please email us at: [email protected] |
The damage that salt crystallization can cause in ancient stone monuments has long been recognized by conservators and conservation scientists. Recent scientific research at the GCI has increased understanding of the ways in which salts can harm cultural heritage. Because of this new work, a wider range of potential mitigation strategies can now be considered.
As part of an ongoing research project, Carlos Rodriguez-Navarro, GCI research fellow, and Eric Doehne, GCI associate scientist, used tools such as time-lapse video and the environmental scanning electron microscope to investigate in detail the behavior of salt-laden stone. The findings from their research help explain why certain salts are more damaging than others and what parameters are likely to be important in determining the extent of the damage. How concentrated the salt solution becomes before the salts crystallize appears to be a key damage factor. Rapid cooling or drying—as the result of wind, for example—was shown to greatly increase damage. Changes in the internal surface roughness of the material may also delay the onset of crystallization. The work also found that the extent of the damage greatly depends on the properties of the solution, such as its surface tension.
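One way to see why the degree of supersaturation matters is the classical crystallization-pressure relation often cited in the salt-weathering literature (the Correns equation). It is given here only as general background and is not necessarily the specific model used in the GCI study:

```latex
% Pressure exerted by a salt crystal growing in a confined pore (Correns relation).
% R: gas constant, T: absolute temperature, V_m: molar volume of the crystalline salt,
% C/C_s: ratio of actual concentration to saturation concentration (supersaturation).
P = \frac{R\,T}{V_m} \ln\!\left(\frac{C}{C_s}\right)
```

The logarithmic dependence on C/C_s fits the observation above: conditions that drive the solution to high concentration before crystallization, such as rapid drying or cooling, produce the largest pressures and hence the most damage.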
This research opens the way to considering the use of certain materials, such as some surfactants, that might modify the salt solutions and thus reduce the damage they cause. In the meantime, buffering the site of a monument as much as possible from rapid cooling or rapid drying—the result of wind, sun, or low relative humidity—can help reduce the amount of damage from salts. Researchers in Australia, for example, have proposed reducing salt and thermal damage by planting a row of trees in front of a sun-exposed, cliff rock-art site. Another technique is the use of temporary, porous, sacrificial material placed in the area of current damage; this may help move the site of crystallization to above the layer to be preserved.
Much more research is needed to provide a solid foundation of knowledge regarding this complex phenomenon, but future work by the GCI research team and other groups can be expected to shed additional light on these problems. Inquiries regarding the GCI's work can be made directly to Eric Doehne ([email protected]). Scientific articles on the research will be added to "Research Webstracts" on the GCI's Web site (www.getty.edu/conservation) as soon as they become available. |
The federal government has taken many stances on the administration of the Native American peoples. From the concept of manifest destiny from which spawned acts of genocide, dishonored treaties, and cultural annihilation, to the present policy upholding the sovereign rights of tribes, the federal government has proven that it is a political rather than moral body.
And politics has been at the center of California Indian matters during the 1990s. In 1988, Congress passed the Indian Gaming Regulatory Act, which was an attempt to balance interests between tribal sovereignty and the states, and required the states to negotiate "in good faith" to reach agreements with the tribes for the operation of casinos. During California Gov. Pete Wilson's tenure, several disputes between the state and tribes led to the shuttering of tribal casinos.
The conflict between the state and the tribes led to the Indian Self-Reliance Initiative, known as Proposition 5, which was put on the ballot to allow tribes to open casinos that offered Nevada-style gaming. California voters approved Proposition 5 by a two-to-one margin in 1998. The approval of this proposition could be deemed a small victory for California Indians, and a sign of the voters' awareness of the need for some form of redress.
Proposition 5 was immediately challenged by Nevada-backed interests and California card room owners. The California Supreme Court ruled in August 1999 that Proposition 5 violated the state constitution. This led to the passage of Proposition 1A, which amended the state constitution to allow for compacts between the tribes and the state.
Casino gaming does not benefit all the tribes, and the California Indians are among the most economically deprived groups in the nation and one of the lowest socioeconomic groups in Indian Country. They are the most land-poor Indians in the country, and federal funding for California Indians is the lowest per capita of any state. While the economic power of some Indian tribes has increased, it is in large part due to the ability of the remaining California Indians to preserve their traditions, and thus their identity as a proud society that has suffered at the hands of conquerors. The sustenance of their cultural identity has allowed them to develop a political identity, which is a requirement for their survival as a people.
3.2 Students describe the American Indian nations in their local region long ago and in the recent past. (3.2.3)
12.7 Students analyze and compare the powers and procedures of the national, state, tribal, and local governments. |
Supercomputer simulations of blood moving through the vasculature in the brain account for even the plasticity of individual blood cells. Take a look at this video by researchers from Brown University.
( Blood Flow: Multi-scale Modeling and Visualization )
Healthy red blood cells are smooth and elastic; they need to squeeze and bend through tiny capillaries to deliver oxygen to all areas of the brain. But malaria-infected cells stiffen and stick to the walls, creating blockages in arteries and vessels. Malaria victims die because their brain tissues are deprived of oxygen. A more complete picture of how blood moves through the brain would allow doctors to understand the progression of diseases that affect blood flow, like malaria, diabetes and HIV.
"Previous computer models haven't been able to accurately account for, say, the motion of the blood cells bending or buckling as they ricochet off the walls," said Joe Insley, a principal software developer at Argonne who is working with the team. "This simulation is powerful enough to incorporate that extra level of detail."
Take a look at just the first few seconds of the following trailer from Fantastic Voyage and see how close we were in 1966 to simulating the flow of blood. |
Like Supreme Court Justice Potter Stewart said about pornography, with wealth we can say, “I know it when I see it.”
However, it too is not so easy to quantify.
Where are we going? Let’s see what we really know when we discuss wealth inequality.
Measuring the Wealth Gap
Defining wealth as the value of a family's assets (like a car, a house, jewelry, and investments) minus its debt, Pew Research based its conclusions about the wealth gap on the Fed's Survey of Consumer Finances.
First they had to define upper and middle income.
Then they could show that the wealth gap between the two groups was getting bigger.
It all looks pretty clear cut.
The Wealth Gap Debate
Because it all depends on your yardstick, economists do not agree on the extent of the wealth gap.
Instead of the Survey of Consumer Finances (SCF), other studies are based on estate records, and a third group focuses on tax data that relate to capital. The SCF depends on a survey for which participants volunteer, the second looks at inheritance, and the third at returns from capital. Complicating the comparison even more, the SCF only dates back to 1989, estate records to 1916, and the capital approach to the income tax, which takes us to 1913.
Researchers using the different data pretty much share three conclusions. 1) Wealth has always been concentrated at the top; 2) the share of the top 1% peaked during the 1920s; 3) the share of the top 1% plunged with the Great Depression.
Once we reach the 1980s, though, the metrics diverge. The statistics that relate to capital indicate the most affluent are accumulating considerably more wealth. By contrast, the estate records and the Survey of Consumer Finances lead to different conclusions.
From: “What Do We Know About the Evolution of Top Wealth Shares in the United States?”
Each data set has its own problems. And we have not even asked if income disparities rather than wealth might be a better metric for understanding inequality.
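To make the yardstick concrete, here is a minimal sketch (in Python) of how a survey-based wealth measure is built: net worth is a family's assets minus its debt, and the gap is summarized by comparing group medians. The family records are invented for illustration; they are not figures from the Survey of Consumer Finances or the Pew study.

    # Minimal sketch of a survey-style wealth-gap calculation.
    # The family records below are invented for illustration only.
    from statistics import median

    families = [
        # (income tier, assets, debt) in dollars
        ("middle", 250_000, 180_000),
        ("middle", 140_000, 95_000),
        ("middle", 310_000, 210_000),
        ("upper", 900_000, 300_000),
        ("upper", 1_400_000, 250_000),
        ("upper", 2_100_000, 400_000),
    ]

    # Wealth (net worth) = value of assets minus debt.
    net_worth = {"middle": [], "upper": []}
    for tier, assets, debt in families:
        net_worth[tier].append(assets - debt)

    median_middle = median(net_worth["middle"])
    median_upper = median(net_worth["upper"])
    print(f"median middle-income wealth: ${median_middle:,.0f}")
    print(f"median upper-income wealth: ${median_upper:,.0f}")
    print(f"wealth gap ratio (upper/middle): {median_upper / median_middle:.1f}x")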
Our Bottom Line: Incentives
Whatever data we use, as economists, we should always remember incentives. And that takes us to an entirely different perspective.
In The Haves and the Have-Nots, economist Branko Milanovic suggests we use three questions to assess the impact of inequality.
- Identify the cause of inequality. For example, determine whether income inequality increases or decreases as the economy grows.
- Identify the impact of inequality. For example, does inequality create positive or negative economic incentives?
- Identify the ethical implications of inequality. For example, are there good and bad ways to have ascended to affluence?
So, even if we do know wealth when we see it, perhaps it is most important to look at its cause, impact and ethical implications. |
Bubble Chamber Site
Photo © CERN
This spiral track, recorded in a bubble chamber--a particle detector used in the 1970s and 1980s--was made by an electron that was knocked out of a hydrogen atom by one of the upward-moving particles. To learn more about this track, visit the CERN Gallery of Bubble Chamber (BC) Pictures, go to "Gallery of BC Pictures," and click on "electron."
Through Einstein's Eyes
Visit this site for a simulation of what you would see if travel were possible at light speed. You can take a rollercoaster ride, observe a tram moving near the speed of light, and take a tour of the solar system. Elsewhere in the site, you can explore the physics of special relativity. To learn more about Einstein, visit the Center for the History of Physics' online exhibits and go to "Albert Einstein--Image and Impact."
Focus on Learning to Read
After decades of controversy about how to teach children to read, American universities conducted extensive research at multiple sites over a thirty-year period to determine the most effective techniques. The conclusions were that there are three "non-negotiables" of early reading. That means they are so strongly indicated in replicated research studies that they are beyond discussion. We need to give children these three things:
1. Phonemic Awareness. Phonemes are the smallest units of sound in our language. So if you say the word “cat” you hear three phonemes: /c/ /a/ /t/. If you say the word “stop” you hear four sounds or phonemes: /s/ /t/ /o/ /p/. Since English is an alphabetic language in which letters stand for sounds, in order to read and write effectively, children need to be able to detect the discrete sounds. This is phonemic awareness. It can be taught and Age of Montessori has developed a lovely sequence of steps to teach it in our early reading sequence.
Here is an example of concrete materials to work with rhyming. You take the picture on the left and say it aloud. Find another picture in the row that rhymes with it and mark them both, so in the top row we have “nail” and “veil.” (The fact that they are not spelled the same way is not important; remember, this is sound work and the child does not yet know how the words are spelled. What the adult knows about spelling is not relevant at this stage.) In the bottom row, we have “box” and “fox.”
We have activities like this for all the major phonemic awareness skills.
2. Alphabetic Principle. This is the fundamental understanding that letters represent sounds. Because this is so important, we teach the sounds the letters make rather than their names. This is not the same thing as the child learning his alphabet. Children often learn their ABCs by rote memory without truly understanding the alphabetic principle. In Montessori, this is not the case. This little fellow is tracing, seeing, hearing, and saying the sound /h/ with the sandpaper letter.
3. Application. The third non-negotiable is applying the first two essentials to reading and writing. Montessori does this with simple elegance with the movable alphabet. Here you see a little boy who has taken the object “cat.” He knows the sounds of the word: /c/ /a/ /t/, and he knows which letters represent those sounds. He gets the letters, puts them in order, and he has just formed the word “cat.” In this picture, his big sister is helping him learn to write the word he has just built.
Given that this research is from the last two decades, it is quite interesting to see what Maria Montessori was saying more than eighty years ago. Montessori’s own words, which follow, were written down by Elisabeth Caspari during her course with Maria Montessori in 1943.
Dr. Maria Montessori on Phonemic Awareness
It is only by representing formations of sounds by signs that we are able to capture language that would otherwise be lost with the sound. In the written word history springs to life; the subtlest philosophical thought waits patiently for us to solve and understand its ideal; the contemplations of poets remain for us to return to them time and again; and the trials and discoveries of scientists are treasured for posterity.
The alphabet is a sign language. This sign language is a language of frozen sound. By having analyzed each sound element, isolated it from others and given it a particular sign, we have created an alphabet. Then comes the study of the spoken word, the elements of sound which constitute each word. When we have examined a word and know its sound parts, we need only take these parts and substitute for them the corresponding signs. And lo, the sound stands a graven image before us.
This then is what we must help the child to do: 1.) Name certain signs by sound, 2.) Dissect words into sounds and 3.) Recompose the word, substituting signs for sounds.
It is so simple and so effective, yet it has eluded so many educators for so long, and American children have paid a fearsome price in soaring illiteracy rates that simply do not need to be. |
The United States declared war on Spain in 1898 to support Cuba in its struggle against Spanish control. The United States sent a fleet to the Philippines to defeat the Spanish navy in what became known as the "Battle of Manila Bay." The Spanish navy was defeated, and the Filipinos had the impression that they were liberated, free to rule their own country. However, the outcome of the Spanish-American War was the signing of the Treaty of Paris. This agreement not only ended Spanish control of Cuba but also ceded Puerto Rico, Guam, and the Philippines to the United States. The continued presence of American forces implied to the Filipinos that the United States was still there to take sovereignty over the country.
On February 4, 1899, the Philippine-American War began. On January 23, 1899, the Filipinos had proclaimed an independent republic and elected longtime nationalist Emilio Aguinaldo president.1 Aguinaldo then organized a Filipino revolutionary government and coordinated resistance throughout the country. Thinking strategically, President McKinley, along with others, believed that the Philippines were too important to the U.S. to allow Filipinos to govern themselves. President McKinley declared his intention to "educate the Filipinos, and uplift and civilize and Christianize them," and mobilized 20,000 U.S. troops to get the job done.2 Although there were forces there already, this war required more effort and lasted longer than the Americans anticipated.
The war was fought in two phases. President Aguinaldo dominated the first phase with ill-fated attempts to fight a conventional war against the better-trained and better-equipped American troops. The American troops held the advantage in many ways over the Filipino forces: better-trained soldiers, a dependable supply of military equipment, and control of the country's waterways. Fighting against these advantages cost Aguinaldo's forces critical losses in men and supplies.
It was apparent from the beginning that, because the Filipinos faced long-term shortages of weapons and ammunition and a lack of outside support, going head-on against the Americans without a plan was a serious mistake. After the ten months of phase one were over, the second phase of the war began. This time President Aguinaldo had his forces take the field using guerrilla warfare tactics. Guerrilla warfare is a form of warfare fought by irregulars using tactics such as ambushes, raids, and the element of surprise, carried out in fast-moving, small-scale actions against larger but less mobile armies.
About SSA: What Works
"Outside In" Is Not Enough
A quick scan of schools' efforts in the past decade shows that most have responded to the issue of school violence and school safety first and foremost by preparing for a disaster. When they do turn toward prevention, it is typically from the "outside in" - increasing supervision and surveillance, setting stricter policies with tougher consequences for violation, setting up tip-lines and boxes, putting up signs and conducting assemblies.
This Outside-In approach has limited impact. There just are not enough adults to be in every hotspot.
A recent survey by the National Association of Attorneys General found that while students generally appreciate the new measures, the changes do not increase their sense of safety (and thus do not help increase academic achievement and student performance) because they have little impact on school climate (the way people treat each other on campus).
While we can force students to leave their colors, knives and guns at the school door, they still bring inside their prejudices, cliques, grudges, and attitudes. Their words become their weapons, and the casualties mount. Clearly, safe schools must be built in a different way.
The "Inside Out" Approach
Research and field experience indicate that safe schools must be built from the "Inside Out".
How to harness students' power? Research indicates that at any given time the vast majority of students (as high as 85%) are neither aggressors nor targets; they are involved only as bystanders. Most bystanders respond to the mistreatment they witness in one of two ways:
Usually they don't intervene, either because they fear retaliation or because they don't know what to do.
But when bystanders DO step in, the bullying stops. Research shows that when a peer speaks up, the length of the bullying incident is cut by 75%.
How to Mobilize the Bystanders
Since bystanders hold the key to stopping mistreatment, how can they be motivated to act? The answer: by their leaders.
As popularized in The Tipping Point and confirmed in scientific research, the social norms of a community change when a select few individuals, its "opinion leaders," change their behavior and use their status to influence others. Gather the opinion leaders, motivate them to change, and the community norms follow suit quickly.
Breaking the Code of Silence
The second part of the answer revolves around "empowerment." For example, most of the talk about "breaking the code of silence" omits one simple fact: students will only bring information forward if they feel empowered. Telling them they'll save a life, maybe their own, is like a hook without bait. When they are truly valued by the adults at their school, engaged in meaningful dialogue, heard and respected, and invited to play meaningful roles ... then it will feel natural to share important information with the adults who are their partners in making their school a safe place. Empowering students is more than enlightened educational practice; it is an essential component of building safe schools.
Download a 2-page summary of the research on which the SSA program is based.
Read the Research Report that describes the logic model at the core of the Safe School Ambassadors program.
See the core components of the Safe School Ambassadors Program Model. |
Houghton Mifflin Social Studies
Across the Centuries
What Your Child is Learning in Unit 6 "Europe: 1300-1600"
In this unit, your child will learn about medieval western Europe. He or she will compare medieval societies in Japan and Europe, and learn why feudalism lasted longer in Japan than in Europe. Your child will then examine the effect of religion on Europeans in the Middle Ages, and see how disagreements over religious authority led to a split in the Christian church. Lastly, he or she will learn about the Crusades, a series of wars between Christians and Muslims for control of the Holy Land.
Activities You Can Do at Home to Support Your Child's Learning
Chapter 12 The Renaissance
- Guidelines from an old Italian Book of Manners explained that people should not carry toothpicks behind their ears, clean their teeth with napkins, or wear bright stockings that call attention to fat, thin, or crooked legs. Help your child develop a list of guidelines for good manners today.
Chapter 13 Reformation and the Scientific Revolution
- In school, your child will learn of Newton's discovery that all bodies fall at the same rate, no matter what size or weight. Test this scientific fact with your child. Hold up two objects, such as a quarter and a piece of paper crumpled into a ball. Have your child watch carefully as you drop both objects at the same time. Look for other objects that are different weights and sizes and take turns testing those.
- In school, your child will read about the fear people felt when Edward Jenner performed experiments to test a vaccine against smallpox. Talk about new medical discoveries or changes made in your lifetime. Work with your child to list discoveries or changes in health and dental care. What are the pros and cons of each discovery or change on your list?
Chapter 14 The Age of Exploration
- Reports of the travels of Ibn Battuta and Marco Polo encouraged others to venture into new regions. With your child, make a list of places either of you have heard about from someone else who has traveled to them.
You may download, print, and make copies of Home/School Connection pages for use in your classroom, provided that you include the copyright notice shown below on all such copies.
Copyright © 1998 Houghton Mifflin Company. All Rights Reserved. |
Via Satellite, November 1999
The Global Positioning System
A National Resource
by Robert A. Nelson
On a recent trip to visit the Jet Propulsion Laboratory, I flew from Washington, DC to Los Angeles on a new Boeing 747-400 airplane. The geographical position of the plane and its relation to nearby cities was displayed throughout the flight on a video screen in the passenger cabin. When I arrived in Los Angeles, I rented a car that was equipped with a navigator. The navigator guided me to my hotel in Pasadena, displaying my position on a map and verbally giving me directions with messages like "freeway exit ahead on the right followed by a left turn." When I reached the hotel, it announced that I had arrived at my destination. Later, when I was to join a colleague for dinner, I found the restaurant listed in a menu and the navigator took me there.
This remarkable navigation capability is made possible by the Global Positioning System (GPS). It was originally designed jointly by the U.S. Navy and the U.S. Air Force to permit the determination of position and time for military troops and guided missiles. However, GPS has also become the basis for position and time measurement by scientific laboratories and a wide spectrum of applications in a multi-billion dollar commercial industry. Roughly one million receivers are manufactured each year and the total GPS market is expected to approach $ 10 billion by the end of next year. The story of GPS and its principles of measurement are the subjects of this article.
EARLY METHODS OF NAVIGATION
The shape and size of the earth have been known since the time of antiquity. The fact that the earth is a sphere was well known to educated people as long ago as the fourth century BC. In his book On the Heavens, Aristotle gave two scientifically correct arguments. First, the shadow of the earth projected on the moon during a lunar eclipse appears to be curved. Second, the elevations of stars change as one travels north or south, while certain stars visible in Egypt cannot be seen at all from Greece.
The actual radius of the earth was determined within one percent by Eratosthenes in about 230 BC. He knew that the sun was directly overhead at noon on the summer solstice in Syene (Aswan, Egypt), since on that day it illuminated the water of a deep well. At the same time, he measured the length of the shadow cast by a column on the grounds of the library at Alexandria, which was nearly due north. The distance between Alexandria and Syene had been well established by professional runners and camel caravans. Thus Eratosthenes was able to compute the earth's radius from the difference in latitude that he inferred from his measurement. In terms of modern units of length, he arrived at the figure of about 6400 km. By comparison, the actual mean radius is 6371 km (the earth is not precisely spherical, as the polar radius is 21 km less than the equatorial radius of 6378 km).
The ability to determine one's position on the earth was the next major problem to be addressed. In the second century AD, the Greek astronomer Claudius Ptolemy prepared a geographical atlas, in which he estimated the latitude and longitude of principal cities of the Mediterranean world. Ptolemy is most famous, however, for his geocentric theory of planetary motion, which was the basis for astronomical catalogs until Nicholas Copernicus published his heliocentric theory in 1543.
Historically, methods of navigation over the earth's surface have involved the angular measurement of star positions to determine latitude. The latitude of one's position is equal to the elevation of the pole star. The position of the pole star on the celestial sphere is only temporary, however, due to precession of the earth's axis of rotation through a circle of radius 23.5 degrees over a period of 26,000 years. At the time of Julius Caesar, there was no star sufficiently close to the north celestial pole to be called a pole star. In 13,000 years, the star Vega will be near the pole. It is perhaps not a coincidence that mariners did not venture far from visible land until the era of Christopher Columbus, when true north could be determined using the star we now call Polaris. Even then the star's diurnal rotation caused an apparent variation of the compass needle. Polaris in 1492 described a radius of about 3.5 degrees about the celestial pole, compared to 1 degree today. At sea, however, Columbus and his contemporaries depended primarily on the mariner's compass and dead reckoning.
The determination of longitude was much more difficult. Longitude is obtained astronomically from the difference between the observed time of a celestial event, such as an eclipse, and the corresponding time tabulated for a reference location. For each hour of difference in time, the difference in longitude is 15 degrees.
Columbus himself attempted to estimate his longitude on his fourth voyage to the New World by observing the time of a lunar eclipse as seen from the harbor of Santa Gloria in Jamaica on February 29, 1504. In his distinguished biography Admiral of the Ocean Sea, Samuel Eliot Morison states that Columbus measured the duration of the eclipse with an hour-glass and determined his position as nine hours and fifteen minutes west of Cadiz, Spain, according to the predicted eclipse time in an almanac he carried aboard his ship. Over the preceding year, while his ship was marooned in the harbor, Columbus had determined the latitude of Santa Gloria by numerous observations of the pole star. He made out his latitude to be 18 degrees, which was in error by less than half a degree and was one of the best recorded observations of latitude in the early sixteenth century, but his estimated longitude was off by some 38 degrees.
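The fifteen-degrees-per-hour rule makes Columbus's arithmetic easy to reproduce. The Python sketch below simply converts a time difference into degrees of longitude; the nine-hour-fifteen-minute figure is the one quoted above, and the conversion factor is just the earth's 360-degree rotation divided by 24 hours.

    # Longitude from a time difference: the earth turns 360 degrees in 24 hours,
    # so each hour of time difference corresponds to 15 degrees of longitude.

    def longitude_from_time(hours, minutes=0.0):
        return (hours + minutes / 60.0) * 15.0

    # Columbus's estimate: 9 h 15 min west of Cadiz.
    print(longitude_from_time(9, 15), "degrees west of Cadiz")  # 138.75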
Columbus also made legendary use of this eclipse by threatening the natives with the disfavor of God, as indicated by a portent from Heaven, if they did not bring desperately needed provisions to his men. When the eclipse arrived as predicted, the natives pleaded for the Admiral’s intervention, promising to furnish all the food that was needed.
New knowledge of the universe was revealed by Galileo Galilei in his book The Starry Messenger. This book, published in Venice in 1610, reported the telescopic discoveries of hundreds of new stars, the craters on the moon, the phases of Venus, the rings of Saturn, sunspots, and the four inner satellites of Jupiter. Galileo suggested using the eclipses of Jupiter’s satellites as a celestial clock for the practical determination of longitude, but the calculation of an accurate ephemeris and the difficulty of observing the satellites from the deck of a rolling ship prevented use of this method at sea. Nevertheless, James Bradley, the third Astronomer Royal of England, successfully applied the technique in 1726 to determine the longitudes of Lisbon and New York with considerable accuracy.
Inability to measure longitude at sea had the potential of catastrophic consequences for sailing vessels exploring the new world, carrying cargo, and conquering new territories. Shipwrecks were common. On October 22, 1707 a fleet of twenty-one ships under the command of Admiral Sir Clowdisley Shovell was returning to England after an unsuccessful military attack on Toulon in the Mediterranean. As the fleet approached the English Channel in dense fog, the flagship and three others foundered on the coastal rocks and nearly two thousand men perished.
Stunned by this unprecedented loss, the British government in 1714 offered a prize of £20,000 for a method to determine longitude at sea within a half a degree. The scientific establishment believed that the solution would be obtained from observations of the moon. The German cartographer Tobias Mayer, aided by new mathematical methods developed by Leonhard Euler, offered improved tables of the moon in 1757. The recorded position of the moon at a given time as seen from a reference meridian could be compared with its position at the local time to determine the angular position west or east.
Just as the astronomical method appeared to achieve realization, the British craftsman John Harrison provided a different solution through his invention of the marine chronometer. The story of Harrison’s clock has been recounted in Dava Sobel’s popular book, Longitude.
Both methods were tested by sea trials. The lunar tables permitted the determination of longitude within four minutes of arc, but with Harrison's chronometer the precision was only one minute of arc. Ultimately, portions of the prize money were awarded to Mayer’s widow, Euler, and Harrison.
In the twentieth century, with the development of radio transmitters, another class of navigation aids was created using terrestrial radio beacons, including Loran and Omega. Finally, the technology of artificial satellites made possible navigation and position determination using line of sight signals involving the measurement of Doppler shift or phase difference.
Transit, the Navy Navigation Satellite System, was conceived in the late 1950s and deployed in the mid-1960s. It was finally retired in 1996 after nearly 33 years of service. The Transit system was developed because of the need to provide accurate navigation data for Polaris missile submarines. As related in an historical perspective by Bradford Parkinson, et al. in the journal Navigation (Spring 1995), the concept was suggested by the predictable but dramatic Doppler frequency shifts from the first Sputnik satellite, launched by the Soviet Union in October, 1957. The Doppler-shifted signals enabled a determination of the orbit using data recorded at one site during a single pass of the satellite. Conversely, if a satellite's orbit were already known, a radio receiver's position could be determined from the same Doppler measurements.
The Transit system was composed of six satellites in nearly circular, polar orbits at an altitude of 1075 km. The period of revolution was 107 minutes. The system employed essentially the same Doppler data used to track the Sputnik satellite. However, the orbits of the Transit satellites were precisely determined by tracking them at widely spaced fixed sites. Under favorable conditions, the rms accuracy was 35 to 100 meters. The main problem with Transit was the large gaps in coverage. Users had to interpolate their positions between passes.
The success of Transit stimulated both the U.S. Navy and the U.S. Air Force to investigate more advanced versions of a space-based navigation system with enhanced capabilities. Recognizing the need for a combined effort, the Deputy Secretary of Defense established a Joint Program Office in 1973. The NAVSTAR Global Positioning System (GPS) was thus created.
In contrast to Transit, GPS provides continuous coverage. Also, rather than Doppler shift, satellite range is determined from phase difference.
There are two types of observables. One is pseudorange, which is the offset between a pseudorandom noise (PRN) coded signal from the satellite and a replica code generated in the user’s receiver, multiplied by the speed of light. The other is accumulated delta range (ADR), which is a measure of carrier phase.
The determination of position may be described as the process of triangulation using the measured range between the user and four or more satellites. The ranges are inferred from the time of propagation of the satellite signals. Four satellites are required to determine the three coordinates of position and time. The time is involved in the correction to the receiver clock and is ultimately eliminated from the measurement of position.
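A minimal sketch of that position computation is shown below (Python, using NumPy). It assumes the satellite positions and corrected pseudoranges are already in hand and solves for the three receiver coordinates plus the receiver clock bias by iterating a linearized least-squares fit; the satellite geometry and measurement values are invented for illustration, and a real receiver adds weighting, atmospheric corrections, and motion models omitted here.

    # Sketch of a pseudorange position fix: solve for (x, y, z, clock bias)
    # from four or more satellites by iterating a linearized least-squares fit.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def solve_position(sat_pos, pseudoranges, iterations=10):
        """sat_pos: (N, 3) satellite positions [m]; pseudoranges: (N,) [m]."""
        x = np.zeros(4)  # initial guess: earth's center, zero clock bias
        for _ in range(iterations):
            rho = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
            predicted = rho + x[3]                          # clock bias kept in meters
            residual = pseudoranges - predicted
            # Jacobian: unit vectors pointing from the satellites toward the
            # receiver, plus a column of ones for the clock-bias term.
            H = np.hstack([-(sat_pos - x[:3]) / rho[:, None], np.ones((len(rho), 1))])
            dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
            x += dx
        return x[:3], x[3] / C   # position [m], receiver clock bias [s]

    # Synthetic test: invent a receiver position and clock bias, generate
    # consistent pseudoranges from four GPS-like satellite positions, and
    # check that the solver recovers the assumed values.
    truth = np.array([1_113_000.0, -4_876_000.0, 3_994_000.0])   # near the earth's surface
    bias_m = 45.0                                                # clock bias expressed in meters
    sats = 26_562_000.0 * np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                                    [0.0, 0.0, 1.0], [0.577, 0.577, 0.577]])
    pr = np.linalg.norm(sats - truth, axis=1) + bias_m
    pos, bias_s = solve_position(sats, pr)
    print(np.round(pos), bias_s * C)   # should reproduce the assumed position and bias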
High precision is made possible through the use of atomic clocks carried on-board the satellites. Each satellite has two cesium clocks and two rubidium clocks, which maintain time with a precision of a few parts in 10^13 or 10^14 over a few hours, or better than 10 nanoseconds. In terms of the distance traversed by an electromagnetic signal at the speed of light, each nanosecond corresponds to about 30 centimeters. Thus the precision of GPS clocks permits a real time measurement of distance to within a few meters. With post-processed carrier phase measurements, a precision of a few centimeters can be achieved.
The design of the GPS constellation had the fundamental requirement that at least four satellites must be visible at all times from any point on earth. The tradeoffs included visibility, the need to pass over the ground control stations in the United States, cost, and sparing efficiency.
The orbital configuration approved in 1973 was a total of 24 satellites, consisting of 8 satellites plus one spare in each of three equally spaced orbital planes. The orbital radius was 26,562 km, corresponding to a period of revolution of 12 sidereal hours, with repeating ground traces. Each satellite arrived over a given point four minutes earlier each day. A common orbital inclination of 63 degrees was selected to maximize the on-orbit payload mass with launches from the Western Test Range. This configuration ensured between 6 and 11 satellites in view at any time.
As envisioned ten years later, the inclination was reduced to 55 degrees and the number of planes was increased to six. The constellation would consist of 18 primary satellites, which represents the absolute minimum number of satellites required to provide continuous global coverage with at least four satellites in view at any point on the earth. In addition, there would be 3 on-orbit spares.
The operational system, as presently deployed, consists of 21 primary satellites and 3 on-orbit spares, comprising four satellites in each of six orbital planes. Each orbital plane is inclined at 55 degrees. This constellation improves on the "18 plus 3" satellite constellation by more fully integrating the three active spares.
There have been several generations of GPS satellites. The Block I satellites, built by Rockwell International, were launched between 1978 and 1985. They consisted of eleven prototype satellites, including one launch failure, that validated the system concept. The ten successful satellites had an average lifetime of 8.76 years.
The Block II and Block IIA satellites were also built by Rockwell International. Block II consists of nine satellites launched between 1989 and 1990. Block IIA, deployed between 1990 and 1997, consists of 19 satellites with several navigation enhancements. In April 1995, GPS was declared fully operational with a constellation of 24 operational spacecraft and a completed ground segment. The 28 Block II/IIA satellites have exceeded their specified mission duration of 6 years and are expected to have an average lifetime of more than 10 years.
Block IIR comprises 20 replacement satellites that incorporate autonomous navigation based on crosslink ranging. These satellites are being manufactured by Lockheed Martin. The first launch in 1997 resulted in a launch failure. The first IIR satellite to reach orbit was also launched in 1997. The second GPS 2R satellite was successfully launched aboard a Delta 2 rocket on October 7, 1999. One to four more launches are anticipated over the next year.
The fourth generation of satellites is the Block II follow-on (Block IIF). This program includes the procurement of 33 satellites and the operation and support of a new GPS operational control segment. The Block IIF program was awarded to Rockwell (now a part of Boeing). Further details may be found in a special issue of the Proceedings of the IEEE for January, 1999.
The Master Control Station for GPS is located at Schriever Air Force Base in Colorado Springs, CO. The MCS maintains the satellite constellation and performs the stationkeeping and attitude control maneuvers. It also determines the orbit and clock parameters with a Kalman filter using measurements from five monitor stations distributed around the world. The orbit error is about 1.5 meters.
GPS orbits are derived independently by various scientific organizations using carrier phase and post-processing. The state of the art is exemplified by the work of the International GPS Service (IGS), which produces orbits with an accuracy of approximately 3 centimeters within two weeks.
The system time reference is managed by the U.S. Naval Observatory in Washington, DC. GPS time is measured from Saturday/Sunday midnight at the beginning of the week. The GPS time scale is a composite "paper clock" that is synchronized to keep step with Coordinated Universal Time (UTC) and International Atomic Time (TAI). However, UTC differs from TAI by an integral number of leap seconds to maintain correspondence with the rotation of the earth, whereas GPS time does not include leap seconds. The origin of GPS time is midnight on January 5/6, 1980 (UTC). At present, TAI is ahead of UTC by 32 seconds, TAI is ahead of GPS by 19 seconds, and GPS is ahead of UTC by 13 seconds. Only 1,024 weeks were allotted from the origin before the system time is reset to zero because 10 bits are allocated for the calendar function (1,024 is the tenth power of 2). Thus the first GPS rollover occurred at midnight on August 21/22, 1999. The next GPS rollover will take place at midnight on April 6/7, 2019.
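The week-number arithmetic is simple enough to check directly. The sketch below (Python) counts whole weeks from the GPS origin and reduces the count modulo 1,024, the capacity of the ten-bit broadcast week counter, to show when the rollovers fall.

    # GPS week-number arithmetic: the broadcast week counter has 10 bits,
    # so it rolls over every 1,024 weeks (2**10) counted from the GPS origin.
    from datetime import date, timedelta

    GPS_ORIGIN = date(1980, 1, 6)      # midnight January 5/6, 1980 (UTC)
    ROLLOVER_WEEKS = 1024

    def gps_week(d):
        """Full week count and the 10-bit (broadcast) week number for a given date."""
        weeks = (d - GPS_ORIGIN).days // 7
        return weeks, weeks % ROLLOVER_WEEKS

    def rollover_date(n):
        """Date on which the n-th rollover of the 10-bit counter occurs."""
        return GPS_ORIGIN + timedelta(weeks=n * ROLLOVER_WEEKS)

    print(rollover_date(1))                 # 1999-08-22 (midnight August 21/22, 1999)
    print(rollover_date(2))                 # 2019-04-07 (midnight April 6/7, 2019)
    print(gps_week(date(1999, 11, 1)))      # full weeks since the origin, and modulo 1,024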
The satellite position at any time is computed in the user’s receiver from the navigation message that is contained in a 50 bps data stream. The orbit is represented for each one hour period by a set of 15 Keplerian orbital elements, with harmonic coefficients arising from perturbations, and is updated every four hours.
This data stream is modulated by each of two code division multiple access, or spread spectrum, pseudorandom noise (PRN) codes: the coarse/acquisition C/A code (sometimes called the clear/access code) and the precision P code. The P code can be encrypted to produce a secure signal called the Y code. This feature is known as the Anti-Spoof (AS) mode, which is intended to defeat deception jamming by adversaries. The C/A code is used for satellite acquisition and for position determination by civil receivers. The P(Y) code is used by military and other authorized receivers.
The C/A code is a Gold code of register size 10, which has a sequence length of 1023 chips and a chipping rate of 1.023 MHz and thus repeats itself every 1 millisecond. (The term "chip" is used instead of "bit" to indicate that the PRN code contains no information.) The P code is a long code of length 2.3547 x 10^14 chips with a chipping rate of 10 times the C/A code, or 10.23 MHz. At this rate, the P code has a period of 38.058 weeks, but it is truncated on a weekly basis so that 38 segments are available for the constellation. Each satellite uses a different member of the C/A Gold code family and a different one-week segment of the P code sequence.
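The C/A Gold codes themselves are easy to generate. The Python sketch below uses the conventional architecture: two 10-stage linear feedback shift registers, G1 and G2, with the standard feedback taps, whose outputs are combined; the satellite-specific delay is set by tapping two G2 stages. The tap pair (2, 6) shown here is the commonly published choice for PRN 1; the pairs for other satellites should be taken from the interface specification (ICD-GPS-200) rather than assumed.

    # Sketch of a GPS C/A (Gold) code generator. The C/A code is
    # G1 XOR (a delayed replica of G2), where the delay is selected per
    # satellite by tapping two stages of G2 ("phase selectors").
    def ca_code(phase_taps=(2, 6), length=1023):
        g1 = [1] * 10            # both registers are initialized to all ones
        g2 = [1] * 10
        chips = []
        for _ in range(length):
            g2_out = g2[phase_taps[0] - 1] ^ g2[phase_taps[1] - 1]
            chips.append(g1[9] ^ g2_out)                   # output chip
            # Feedback taps: G1 uses stages 3 and 10; G2 uses stages 2, 3, 6, 8, 9, 10.
            g1_fb = g1[2] ^ g1[9]
            g2_fb = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
            g1 = [g1_fb] + g1[:9]                          # shift; the new bit enters stage 1
            g2 = [g2_fb] + g2[:9]
        return chips

    prn1 = ca_code()
    print(len(prn1), prn1[:10])   # 1023 chips; the first ten for PRN 1 should read 1100100000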
The GPS satellites transmit signals at two carrier frequencies: the L1 component with a center frequency of 1575.42 MHz, and the L2 component with a center frequency of 1227.60 MHz. These frequencies are derived from the master clock frequency of 10.23 MHz, with L1 = 154 x 10.23 MHz and L2 = 120 x 10.23 MHz. The L1 frequency transmits both the P code and the C/A code, while the L2 frequency transmits only the P code. The second P code frequency permits a dual-frequency measurement of the ionospheric group delay. The P-code receiver has a two-sigma rms horizontal position error of about 5 meters.
The single frequency C/A code user must model the ionospheric delay with less accuracy. In addition, the C/A code is intentionally degraded by a technique called Selective Availability (SA), which introduces errors of 50 to 100 meters by dithering the satellite clock data. Through differential GPS measurements, however, position accuracy can be improved by reducing SA and environmental errors.
The transmitted signal from a GPS satellite has right hand circular polarization. According to the GPS Interface Control Document, the specified minimum signal strength at an elevation angle of 5 degrees into a linearly polarized receiver antenna with a gain of 3 dB (approximately equivalent to a circularly polarized antenna with a gain of 0 dB) is -160 dBW for the L1 C/A code, -163 dBW for the L1 P code, and -166 dBW for the L2 P code. The L2 signal is transmitted at a lower power level since it is used primarily for the ionospheric delay correction.
The fundamental measurement in the Global Positioning System is pseudorange. The user equipment receives the PRN code from a satellite and, having identified the satellite, generates a replica code. The phase by which the replica code must be shifted in the receiver to maintain maximum correlation with the satellite code, multiplied by the speed of light, is approximately equal to the satellite range. It is called the pseudorange because the measurement must be corrected by a variety of factors to obtain the true range.
The corrections that must be applied include signal propagation delays caused by the ionosphere and the troposphere, the space vehicle clock error, and the user’s receiver clock error. The ionosphere correction is obtained either by measurement of dispersion using the two frequencies L1 and L2 or by calculation from a mathematical model, but the tropospheric delay must be calculated since the troposphere is nondispersive. The true geometric distance to each satellite is obtained by applying these corrections to the measured pseudorange.
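Because the ionospheric group delay scales as the inverse square of the carrier frequency, a dual-frequency receiver can remove it to first order by forming the so-called ionosphere-free combination of the L1 and L2 pseudoranges. A minimal sketch (Python), with invented pseudorange values:

    # First-order ionospheric correction from dual-frequency pseudoranges.
    # The group delay is proportional to 1/f**2, so the weighted combination
    # below cancels it to first order. Pseudorange values are invented.
    F_L1 = 1575.42e6   # Hz
    F_L2 = 1227.60e6   # Hz

    def ionosphere_free(p1, p2):
        g = (F_L1 / F_L2) ** 2
        return (g * p1 - p2) / (g - 1)

    p1 = 22_000_004.0   # L1 pseudorange, m
    p2 = 22_000_006.6   # L2 pseudorange, m (larger delay at the lower frequency)
    print(ionosphere_free(p1, p2))   # close to the 22,000,000 m geometric range assumed here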
Other error sources and modeling errors continue to be investigated. For example, a recent modification of the Kalman filter has led to improved performance. Studies have also shown that solar radiation pressure models may need revision and there is some new evidence that the earth’s magnetic field may contribute to a small orbit period variation in the satellite clock frequencies.
Carrier phase is used to perform measurements with a precision that greatly exceeds those based on pseudorange. However, a carrier phase measurement must resolve an integral cycle ambiguity whereas the pseudorange is unambiguous.
The wavelength of the L1 carrier is about 19 centimeters. Thus with a cycle resolution of one percent, a differential measurement at the level of a few millimeters is theoretically possible. This technique has important applications to geodesy and analogous scientific programs.
The precision of GPS measurements is so great that it requires the application of Albert Einstein’s special and general theories of relativity for the reduction of its measurements. Professor Carroll Alley of the University of Maryland once articulated the significance of this fact at a scientific conference devoted to time measurement in 1979. He said, “I think it is appropriate ... to realize that the first practical application of Einstein’s ideas in actual engineering situations are with us in the fact that clocks are now so stable that one must take these small effects into account in a variety of systems that are now undergoing development or are actually in use in comparing time worldwide. It is no longer a matter of scientific interest and scientific application, but it has moved into the realm of engineering necessity.”
According to relativity theory, a moving clock appears to run slow with respect to a similar clock that is at rest. This effect is called “time dilation.” In addition, a clock in a weaker gravitational potential appears to run fast in comparison to one that is in a stronger gravitational potential. This gravitational effect is known in general as the “red shift” (only in this case it is actually a “blue shift”).
GPS satellites revolve around the earth with a velocity of 3.874 km/s at an altitude of 20,184 km. Thus on account of its velocity, a satellite clock appears to run slow by 7 microseconds per day when compared to a clock on the earth's surface. But on account of the difference in gravitational potential, the satellite clock appears to run fast by 45 microseconds per day. The net effect is that the clock appears to run fast by 38 microseconds per day. This is an enormous rate difference for an atomic clock with a precision of a few nanoseconds. Thus to compensate for this large secular rate, the clocks are given a rate offset prior to satellite launch of -4.465 parts in 10^10 from their nominal frequency of 10.23 MHz so that on average they appear to run at the same rate as a clock on the ground. The actual frequency of the satellite clocks before launch is thus 10.22999999543 MHz.
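Both rates can be reproduced with a few lines of arithmetic. The Python sketch below uses the orbital radius and speed quoted in this article together with standard constants; it yields roughly the 7- and 45-microsecond-per-day effects, the 38-microsecond-per-day net rate, and a fractional offset close to the -4.465 parts in 10^10 factory setting (the small difference from the official value comes from neglecting the geoid potential).

    # Relativistic rate of a GPS satellite clock relative to a ground clock:
    # time dilation (velocity) makes it run slow, while the weaker gravitational
    # potential at orbital altitude makes it run fast.
    C = 299_792_458.0        # speed of light, m/s
    GM = 3.986004418e14      # earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6        # mean earth radius, m
    R_ORBIT = 2.6562e7       # GPS orbital radius, m
    V_SAT = 3874.0           # orbital speed, m/s

    time_dilation = -V_SAT**2 / (2 * C**2)                   # about -0.8e-10 (runs slow)
    gravitational = GM * (1 / R_EARTH - 1 / R_ORBIT) / C**2  # about +5.3e-10 (runs fast)
    net = time_dilation + gravitational                      # about +4.5e-10

    day = 86_400
    print(f"velocity effect:      {time_dilation * day * 1e6:+.1f} microseconds/day")
    print(f"gravitational effect: {gravitational * day * 1e6:+.1f} microseconds/day")
    print(f"net:                  {net * day * 1e6:+.1f} microseconds/day")
    print(f"pre-launch clock frequency: {10.23e6 * (1 - net):.5f} Hz")   # about 10,229,999.9954 Hz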
Although the GPS satellite orbits are nominally circular, there is always some residual eccentricity. The eccentricity causes the orbit to be slightly elliptical, and the velocity and altitude vary over one revolution. Thus, although the principal velocity and gravitational effects have been compensated by a rate offset, there remains a slight residual variation that is proportional to the eccentricity. For example, with an orbital eccentricity of 0.02 there is a relativistic sinusoidal variation in the apparent clock time having an amplitude of 46 nanoseconds. This correction must be calculated and taken into account in the GPS receiver.
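The size of that periodic term can be checked against the standard broadcast correction, delta-t = F e sqrt(A) sin(E), where A is the semi-major axis, E is the eccentric anomaly, and F = -2 sqrt(GM)/c^2. Evaluated at sin(E) = 1, the sketch below (Python) reproduces the 46-nanosecond amplitude quoted for an eccentricity of 0.02.

    # Amplitude of the relativistic eccentricity correction dt = F * e * sqrt(A) * sin(E),
    # evaluated at sin(E) = 1.
    from math import sqrt

    C = 299_792_458.0
    GM = 3.986004418e14
    F = -2 * sqrt(GM) / C**2        # about -4.4428e-10 s per sqrt(meter)

    e = 0.02                        # orbital eccentricity
    A = 2.6562e7                    # semi-major axis, m

    amplitude = abs(F) * e * sqrt(A)
    print(f"{amplitude * 1e9:.0f} ns")   # about 46 ns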
The displacement of a receiver on the surface of the earth due to the earth’s rotation in inertial space during the time of flight of the signal must also be taken into account. This is a third relativistic effect that is due to the universality of the speed of light. The maximum correction occurs when the receiver is on the equator and the satellite is on the horizon. The time of flight of a GPS signal from the satellite to a receiver on the earth is then 86 milliseconds and the correction to the range measurement resulting from the receiver displacement is 133 nanoseconds. An analogous correction must be applied by a receiver on a moving platform, such as an aircraft or another satellite. This effect, as interpreted by an observer in the rotating frame of reference of the earth, is called the Sagnac effect. It is also the basis for a laser ring gyro in an inertial navigation system.
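That 133-nanosecond figure follows directly from the numbers in the paragraph above: during the 86-millisecond flight of the signal, a receiver on the equator is carried eastward by the earth's rotation, and the extra path length divided by the speed of light is the correction. A quick check (Python):

    # Earth-rotation (Sagnac) correction, worst case: receiver on the equator,
    # satellite on the horizon, signal time of flight about 86 ms.
    OMEGA_EARTH = 7.2921151467e-5   # earth rotation rate, rad/s
    R_EQUATOR = 6.378137e6          # equatorial radius, m
    C = 299_792_458.0               # speed of light, m/s

    flight_time = 0.086                                      # s
    displacement = OMEGA_EARTH * R_EQUATOR * flight_time     # distance the receiver moves, m
    print(f"{displacement:.0f} m  ->  {displacement / C * 1e9:.0f} ns")   # about 133 ns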
In 1996, a Presidential Decision Directive stated the president would review the issue of Selective Availability in 2000 with the objective of discontinuing SA no later than 2006. In addition, both the L1 and L2 GPS signals would be made available to civil users and a new civil 10.23 MHz signal would be authorized. To satisfy the needs of aviation, the third civil frequency, known as L5, would be centered at 1176.45 MHz, in the Aeronautical Radio Navigation Services (ARNS) band, subject to approval at the World Radio Conference in 2000. According to Keith McDonald in an article on GPS modernization published in the September, 1999 GPS World, with SA removed the civil GPS accuracy would be improved to about 10 to 30 meters. With the addition of a second frequency for ionospheric group delay corrections, the civil accuracy would become about 5 to 10 meters. A third frequency would permit the creation of two beat frequencies that would yield one-meter accuracy in real time.
A variety of other enhancements are under consideration, including increased power, the addition of a new military code at the L1 and L2 frequencies, additional ground stations, more frequent uploads, and an increase in the number of satellites. These policy initiatives are driven by the dual needs of maintaining national security while supporting the growing dependence on GPS by commercial industry. When these upgrades would begin to be implemented in the Block IIR and IIF satellites depends on GPS funding.
Besides providing position, GPS is a reference for time with an accuracy of 10 nanoseconds or better. Its broadcast time signals are used for national defense, commercial, and scientific purposes. The precision and universal availability of GPS time has produced a paradigm shift in time measurement and dissemination, with GPS evolving from a secondary source to a fundamental reference in itself.
The international community wants assurance that it can rely on the availability of GPS and continued U.S. support for the system. The Russian Global Navigation Satellite System (GLONASS) has been an alternative, but economic conditions in Russia have threatened its continued viability. Consequently, the European Union is considering the creation of a navigation system of its own, called Galileo, to avoid relying on the U.S. GPS and Russian GLONASS programs.
The Global Positioning System is a vital national resource. Over the past thirty years it has made the transition from concept to reality, representing today an operational system on which the entire world has become dependent. Both technical improvements and an enlightened national policy will be necessary to ensure its continued growth into the twenty-first century.
Dr. Robert A. Nelson, P.E. is president of Satellite Engineering Research Corporation, a satellite engineering consulting firm in Bethesda, Maryland, a Lecturer in the Department of Aerospace Engineering at the University of Maryland and Technical Editor of Via Satellite magazine. Dr. Nelson is the instructor for the ATI course Satellite Communications Systems Engineering. Please see our Schedule for dates and |
Natterer’s Bat, Myotis nattereri
Natterer’s bat (Myotis nattereri) is a small bat that is native to Europe. Typically, this bat bears brown fur on the back, wing, and leg membranes, with the wings appearing to be lighter in color. The underbelly is white in color. Its name is derived from Johann Natterer, an Austrian naturalist.
As is typical with bats, Natterer’s bat uses echolocation to navigate and find food. The frequencies of its calls range between 23 and 115 kHz and can last up to 3.8 milliseconds. During the hot months of summer, it will roost in coniferous or deciduous trees and even buildings, but these roosts must be close to a food source.
Although Natterer's bat has a large range, it is rare to actually see the bat. Despite this, it is listed on the IUCN Red List as a species of "Least Concern". It is also protected by the European Habitats Directive, and because of the bat's scarcity, its UK woodland habitats have been considered for designation as Sites of Special Scientific Interest or Special Areas of Conservation.
Image Caption: Natterer’s bat (Myotis nattereri) in Bensheim (Germany). Credit: Armin Kübelbeck/Wikipedia(CC BY-SA 3.0) |
When describing the storability of a potato we often use the term dormancy. A dormancy value or duration gives insight into how long the potato will store before it initiates sprout development. For process and fresh market potatoes, detrimental quality concerns develop once sprouting begins, such as changes in carbohydrate status, an increase in respiration rate, additional weight loss, and storage management issues such as impeded airflow. Seed growers may need to accelerate or retard sprout development depending upon the time of year and intended seed market.
The biological advantage of a dormancy period in a plant is survival of the species. The inherent dormancy of potatoes allows most varieties to over-winter, barring any freezing conditions, and re-sprout in the spring, thereby reproducing and perpetuating the species. Tuber dormancy keeps the potatoes from sprouting in the fall, thereby reducing the chances of the species being killed by unfavorable winter conditions. There are three classes or types of dormancy that can be described in potatoes.
1) "Endodormancy" occurs after harvest and is due to the internal or physiological status of the tuber. Even if tubers are placed in conditions favorable for sprout development, sprouting will not occur.
2) "Ecodormancy" is when sprouting is prevented or delayed by environmental conditions: for example, potatoes stored at lower temperatures have a longer dormancy period than potatoes stored at warmer temperatures. Table 1 illustrates the differences in days to dormancy break observed within a variety as storage temperature is lowered.
3) "Paradormancy" is comparable to endodormancy, although the physiological signal for dormancy originates in a different area of the plant than where the dormancy occurs. An example of this is apical dominance in a tuber: the apical meristem, or dominant bud/eye, impedes the development of secondary buds or sprouts. Some varieties have stronger paradormancy than others. The growing season or pre-harvest conditions can also affect dormancy length, along with post-harvest conditions such as temperature and light.
Initiation of dormancy break actually begins before there is visible sprout development. Researchers continue to examine the physiological processes associated with dormancy and subsequent sprout development. It is believed that the five major plant hormones are involved in the process. Abscisic acid and ethylene are involved in the induction of dormancy, cytokinins are involved in dormancy break, and gibberellins and auxins are involved in sprout development.
Ideally, chlorpropham (CIPC) should be applied prior to bud activity for greatest sprout suppression. Table 1 shows a three-year average dormancy length of several russet type varieties. Our definition of dormancy break is when 80 percent of potatoes have at least one sprout greater than or equal to 5 mm in length. Typically, peeping of the buds occurs two to four weeks prior to this defined loss of dormancy. Therefore, depending upon storage temperature, some varieties may need a CIPC application soon after the curing period has ended to maximize sprout suppression potential. Other sprout suppression products, such as clove oil, are best applied when bud activity is visible.
Summary of Activity
By building and using two different, but simple, physical models students will investigate why we see different phases of the moon here on Earth. (Either model, or both models, can be used.)
- A bright lamp (e.g. an overhead projector) to represent the Sun
- A ball (e.g. tennis ball, cricket ball) to represent the Moon
- A rotating desk chair or a stool
- A shoe box with its lid
- Black paper or black paint
- A squash ball - to act as the moon
- A short pencil or a wooden dowel
- A small bright torch - to act as the Sun
In groups of 3 (see figure 2):
- One student is to sit on the chair (or stool) - he/she sees the view of the moon as from Earth.
- Another student should hold the ball in different positions around the person on the chair - he/she is the moon orbiting the Earth
- The third student should observe and record - he/she will write the name of each phase next to the correct picture on this worksheet.
Further information on how to proceed is in the teacher's presentation
Making the model:
- Line the shoe box with black paper or paint with black paint (leave to dry)
- Make one larger hole in the middle of one short side of the box (about the same diameter as the torch you will use in this experiment)
- Make three or four viewing holes along each long side of the box and one viewing hole in the other short side of the box - each hole should be about the size of a 5p piece.
- The pencil/dowel is a little stand for the squash ball - you need to attach the squash ball to one end of the pencil/dowel and the other end needs to be stuck in the centre of the shoe box (see figure 4)
Using the model:
- The torch represents the Sun, the squash ball the Moon, and your eye the position of the Earth.
- Place the torch at the larger hole so that it shines on the squash ball inside.
- Look at the squash ball through one of the 5p sized holes along the side of the box - you should see a bright part and shadow on the "moon" squash ball.
- Students should draw and write on this worksheet to record the position of the Sun, Moon and Earth for the different phases of the moon.
Things to find out
What do we mean by the following terms (answers in the teacher's presentation)?
- New moon
- First quarter
- Last quarter
- A waxing crescent moon
- A waning crescent moon
- A waxing gibbous moon
- A waning gibbous moon
Resources for the activities
Student worksheet (pdf).
Teachers answers for the worksheet (pdf).
Presentation (ppt). Contains notes for the teacher and explanations of why we see different phases of the moon.
A flick book to cut out and staple to show the Moon's orbit and phases.
Other related activities
Observing the phases of the moon.
How big is the moon? |
About 60 percent of California is experiencing “exceptional drought,” the U.S. Drought Monitor’s most dire classification. Without enough water in the soil, seeds can’t sprout roots, leaves can’t perform photosynthesis, and agriculture can’t be sustained.
Currently, there is no global network monitoring soil moisture at a local level. Farmers, scientists and resource managers can place sensors in the ground, but these only provide spot measurements and are rare across some critical agricultural areas in Africa, Asia and Latin America. The European Space Agency’s Soil Moisture and Ocean Salinity mission measures soil moisture at a resolution of 50 kilometers, but because soil moisture can vary on a much smaller scale, its data are most useful in broad forecasts.
Enter NASA’s Soil Moisture Active Passive (SMAP) satellite. The mission, scheduled to launch this winter, will collect the kind of local data agricultural and water managers worldwide need.
SMAP monitors the top 5 centimeters of soil on Earth’s surface. It creates soil moisture estimates with a resolution of about 9 kilometers, mapping the entire globe every two or three days. Although this resolution cannot show how soil moisture might vary within a single field, it will give the most detailed maps yet made.
“If farmers of rain-fed crops know soil moisture, they can schedule their planting to maximize crop yield,” said Narendra Das, a water and carbon cycle scientist on SMAP’s science team at NASA’s Jet Propulsion Laboratory in Pasadena, California. “SMAP can assist in predicting how dramatic drought will be, and then its data can help farmers plan their recovery from drought.” |
Groundbreaking experiment aims to create matter from light
In what could be a landmark moment in the history of science, physicists working at the Blackett Physics Laboratory in Imperial College London have designed an experiment to validate one of the most tantalizing hypotheses in quantum electrodynamics: the theory that matter could be created using nothing more than pure light.
The idea grew out of a discussion over a few cups of coffee one day, when the three physicists – two from Imperial College and one visiting from the Max Planck Institute in Heidelberg, Germany – recognized that their work on fusion energy also offered a way to test light-to-matter creation, proposed 80 years ago by two American physicists, Breit and Wheeler. They had reasoned that because annihilating electron-positron pairs produce two or more photons, colliding photons should, in turn, produce electron-positron (or “Breit-Wheeler”) pairs.
In devising an experiment aimed at producing these Breit-Wheeler pairs, the physicists working at Imperial College propose a two-step process. Firstly, a beam of electrons accelerated in a vacuum to close to the speed of light would be fired into a target of pure gold several millimeters thick. Via a process called “Bremsstrahlung” (German for “braking radiation”), the high-energy electrons bombarding the target would lose kinetic energy and, in so doing, release gamma-ray photons.
Secondly, a magnetic field within the apparatus would collimate and direct this gamma-ray photon beam into a hohlraum (German for “cavity”). The hohlraum would simultaneously be heated by a high-energy laser, effectively turning it into a black-body thermal radiation chamber, while a further magnetic containment field would separate out any subsidiary electron-positron pairs created at this stage. As the beam of high-energy gamma-ray photons entered the cavity, it would collide en masse with the thermal photons generated by the laser and, all going to plan, hundreds of thousands of Breit-Wheeler pairs would stream out of the cavity. In fact, it is anticipated that using a 2-GeV electron beam and a hohlraum radiation field of around 400 eV would yield in excess of 100,000 electron-positron pairs.
If this experiment comes to fruition it would represent not only the first realization of a pure photon–photon collider, but a method of achieving light to matter transformation at power levels orders of magnitude lower than previously thought possible. And, without the requirement for a massive particle accelerator, it could be easily achieved in a modestly-equipped laboratory.
Given the potential to open up a relatively low-energy, simple way to investigate a cornerstone of quantum electrodynamics, this proposal should allow many more researchers access to this field. As a result, this could help add to our knowledge of the processes that took place in the first 100 seconds of the universe and possibly shed more light on those mysterious denizens of deep-space: gamma-ray bursts emanating from exploding massive stars.
Lastly, validating the Breit-Wheeler theory would also provide the seventh and final entry in the line of theories describing the simplest ways in which light and matter interact. These include Dirac's 1930 theory on the annihilation of electrons and positrons, Einstein's 1905 theory on the photoelectric effect, and Blackett and Occhialini’s single-photon annihilation. Those theories are all associated with Nobel Prize-winning research.
Details of the research were published this week in the journal Nature Photonics.
Source: Imperial College London |
FAIRBANKS, ALASKA — Ever since humans first settled Alaska they have been awed by the aurora borealis -- towering, rippling, multicolored curtains of light that often hang above Fairbanks on clear, dark nights. Yet only recently have scientists been able to demonstrate the forces that control these ''northern lights.''
The scientists believe they have discovered, through computer simulations and laboratory experiments, the cause of the strikingly uniform folds that often appear in the aurora.
The latest effort to explore the aurora from above is to begin this month with the launching of an Ariane rocket from French Guiana. It is to carry a high-resolution Earth-surveying satellite and a Swedish satellite whose task is to make observations above auroral displays.
The Swedish satellite, Viking, will test speculations by Hannes Alfven of Sweden, an early researcher into plasma behavior, regarding the force that accelerates electrons as they plunge earthward to produce auroras.
Study of the aurora borealis has been aided by a supersensitive, ground- based video camera that can make extended recordings of the displays.
The recordings have documented the many forms in which auroras occur, including the rhythmic undulations that led early observers to see them as some form of glowing celestial snake. When Fairbanks is directly under a display, one can look up into a towering cathedral of light whose multicolored walls are more than 100 miles high.
An explanation for the remarkably uniform ''pleated'' auroral curtains has emerged from theoretical speculation followed up by computer simulations in Fairbanks and by laboratory tests at the General Electric Co. in Schenectady, N.Y., under the near-vacuum conditions in which auroras form.
The auroral curtains are produced by sheets of high-energy electrons plunging from space into the thin traces of atmosphere between 60 and 200 miles aloft. Oxygen molecules glow red or yellow when struck by the electrons, depending on the electron energy. Hydrogen molecules also glow red. Individual oxygen atoms glow green. Nitrogen atoms emit a purple light and, at lower levels, two-atom nitrogen molecules glow pink.
Dr. Syun-Ichi Akasofu of the University of Alaska's Geophysical Institute in Fairbanks explained that the plunging electrons also induce an electric field in the electrified air, or plasma, on either side of the auroral sheet. Because the electrons are negatively charged, north of the sheet they generate an electric field directed toward the south whereas on the opposite side of the sheet the field is directed north.
This causes plasma on opposite sides of the sheet to flow in opposite directions at about 1,400 mph. It is this counterflow, according to Akasofu, that produces the folds, like swirling eddies along the boundary between streams of water flowing in opposite directions.
The auroral record was further enriched last May when astronauts aboard the space shuttle flew close to the southern auroral zone, perhaps even through some of the ''southern lights'' between Antarctica and Australia. Photographs taken by the astronaut Don Lind show a snaking wall of light, reddish at the top, white in the middle and bluish at the base. Others show a narrow, glowing white ''highway'' across the southern sky.
From observations of solar eruptions that led to enhanced auroral displays, supplemented by information on magnetic fields in intervening space and data from spacecraft, Dr. Akasofu said in a recent interview, it may soon be possible to warn the Air Force of auroral disturbances that could affect space missions, military radars or missile performance.
The warnings could spare astronauts from excessive radiation exposure while on a spacewalk at high latitudes. American spacewalks to date have all been conducted under the Earth's magnetic shield. The oval zone of maximum auroral occurrence on the average lies over central Alaska, Hudson Bay, southern Greenland and the Arctic Ocean near Scandinavia.
It marks the southern edge of the polar zone within which the Earth's magnetic field provides no protection against radiation from space. An unprotected astronaut within that hole in the magnetic shield would be exposed to bursts of high energy radiation from the sun that, at lower latitudes, would be deflected by the Earth's magnetism. The zone, however, as recorded by more than 100 all-sky cameras in both polar regions and later confirmed from Air Force and NASA research aircraft and spacecraft, has proved remarkably mobile.
Normally the aurora, at any one moment, forms a narrow, globe-encircling band that remains fixed relative to the direction of the sun as the Earth rotates beneath it. While invisible in daylight to human eyes the entire oval can be recorded in ultraviolet light from spacecraft.
The auroral oval is centered at the geomagnetic pole in the Canadian Arctic rather than the geographic pole. At local midnight its southernmost part often lies over Fairbanks. Twelve hours later, when it is midnight in Europe, the oval has rotated eastward and that same sector lies over Iceland and the Norwegian coast.
When a magnetic storm, after a solar flare, disrupts the Earth's magnetic field, the auroral zones, north and south, can swell rapidly, forming in each polar region a far larger area of radiation hazard. |
Understanding Blood pH And Its Critical Role In The Prevention Of Cancer
Remember back in high school chemistry when you learned about acid/alkaline balance, also referred to as the body’s pH (“potential Hydrogen” or “powers of Hydrogen”)? pH is measured on a scale from 0 to 14: a pH of 7 is chemically neutral, below 7 is acidic (with 0 being the most acidic) and above 7 is alkaline (with 14 being the most alkaline). Normal blood pH is slightly alkaline, at roughly 7.35 to 7.45.
A hydrogen atom consists of a proton and an electron; if the electron is stripped off, the resulting positive ion is simply a proton. In short, it is important to note that alkaline substances (also called “bases”) are proton “acceptors,” while acids are proton “donors” of hydrogen ions (H+). Since bases have a higher pH, they have a greater potential to absorb hydrogen ions, and vice versa for acids.
In chemistry, we know that water (H2O) dissociates into hydrogen ions (H+) and hydroxyl ions (OH-). When a solution contains more hydrogen ions than hydroxyl ions, it is said to be acidic. When it contains more hydroxyl ions than hydrogen ions, it is said to be alkaline. As you may have guessed, a pH of 7 is neutral because the solution contains equal amounts of hydrogen ions and hydroxyl ions.
Over 70% of our bodies are water. When cells create energy via aerobic respiration, they burn oxygen and glucose. In simple terms, in order for the body to create energy it requires massive amounts of hydrogen. As a matter of fact, each day your body uses about ½ pound of pure hydrogen. Even our DNA is held together by hydrogen bonds and since the pH of bases is higher, they have a greater potential to absorb hydrogen, which results in more oxygen delivered to the cells.
The hydrogen ion concentration varies over 14 powers of 10, thus a change of one pH unit changes the hydrogen ion concentration by a factor of 10. The pH scale is a common logarithmic scale. For those of you who never liked math, what this means is that a substance which has a pH of 5.2 is 10 times more acidic than a substance with a pH of 6.2, while it is 100 (10 squared) times more acidic than a substance with a pH of 7.2, and it is 1,000 (10 cubed) times more acidic than a substance with a pH of 8.2, etc…
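To make that arithmetic concrete, here is a minimal Python sketch (not part of the original article; it simply assumes the standard definition pH = -log10 of the hydrogen ion concentration) that converts pH values into concentrations and compares them:
```python
# Illustrative sketch: how the logarithmic pH scale translates into
# hydrogen ion concentrations, assuming pH = -log10([H+]).

def hydrogen_ion_concentration(ph):
    """Return the hydrogen ion concentration [H+] in mol/L for a given pH."""
    return 10 ** (-ph)

def times_more_acidic(ph_a, ph_b):
    """How many times more acidic a solution at ph_a is than one at ph_b."""
    return hydrogen_ion_concentration(ph_a) / hydrogen_ion_concentration(ph_b)

print(times_more_acidic(5.2, 6.2))  # ~10     (one pH unit lower = 10x more acidic)
print(times_more_acidic(5.2, 7.2))  # ~100    (two units = 100x)
print(times_more_acidic(5.2, 8.2))  # ~1000   (three units = 1,000x)
```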
Our blood must always maintain a pH of approximately 7.35 so that it can continue to transport oxygen. Thus, God has made our bodies resilient with the ability to self-correct in the event of an imbalanced pH level through a mechanism called the buffer system. In chemistry, a buffer is a substance which resists changes in pH, keeping the pH of a solution relatively constant despite the addition of considerable amounts of acids or bases. However, the standard American diet, full of junk foods, fast foods, processed foods, and sodas, puts the body through “the wringer” in order to maintain the proper pH in the blood. Although our bodies typically maintain alkaline reserves which are utilized to buffer acids in these situations, it is safe to say that many of us have depleted our reserves.
When our buffering system reaches overload and we are depleted of reserves, the excess acids are dumped into the tissues. As more and more acid is accumulated, our tissues begin to deteriorate. The acid wastes oxidize (“rust”) the veins and arteries and begin to destroy cell walls and organs. Having an acidic pH is like driving your car with the “check engine” light on. It’s a sign that something is wrong with the engine and if we don’t get it fixed, then eventually the car will break down.
According to Keiichi Morishita in his book, Hidden Truth of Cancer, as the blood becomes acidic, the body deposits acidic substances into cells to remove them from the blood. This allows the blood to remain slightly alkaline. However, it causes the cells to become acidic and toxic. Over time, many of these cells increase in acidity and some die. However, some of these acidified cells adapt to the new environment. In other words, instead of dying (as normal cells do in an acidic environment) some cells survive by becoming abnormal cells. These abnormal cells are called “malignant” cells, and they do not correspond with brain function or the DNA memory code. Therefore, malignant cells grow indefinitely and without order. This is cancer.
Putting too much acid in your body is like putting poison in your fish tank. Several years ago, we purchased a fish tank and a couple of goldfish for our children. After killing both goldfish, we quickly learned that the key factor in keeping fish alive is the condition of the water. If their water isn’t balanced, then they die quickly. We also learned that you can kill a fish rapidly if you feed it the wrong foods! Now, compare this to the condition of our internal “fish tank.” Many of us are filling our fish tanks with chemicals, toxins, and the wrong foods which lower our pH balance, and an acidic pH results in oxygen deprivation at the cellular level.
So, what other things can we do to keep our tissue pH in the proper range? The easiest thing is to eat mostly alkaline foods. The general rule of thumb is to eat 20% acid foods and 80% alkaline foods. Fresh fruit juice also supplies your body with a plethora of alkaline substances. You can take supplements, such as potassium, cesium, magnesium, calcium, and rubidium, which are all highly alkaline.
Some excellent alkaline-forming foods are as follows: most raw vegetables and fruits, figs, lima beans, olive oil, honey, molasses, apple cider vinegar, miso, tempeh, raw milk, raw cheese, stevia, green tea, most herbs, sprouted grains, sprouts, wheat grass, and barley grass.
Foods such as yogurt, kefir, and butter are basically neutral. Several acid-forming foods are as follows: sodas, coffee, alcohol, chocolate, tobacco, aspartame, meats, oysters, fish, eggs, chicken, pasteurized milk, processed grains, sugar, peanut butter, beans, and pasta. |
Americanization, term used to describe the movement during the first quarter of the 20th cent. whereby the immigrant in the United States was induced to assimilate American speech, ideals, traditions, and ways of life. As a result of the great emigration from E and S Europe between 1880 and the outbreak of World War I (see immigration), the Americanization movement grew to crusading proportions. Fear and suspicion of the newcomers and of their possible failure to become assimilated gave impetus to the movement. Joined by social workers interested in improving the slum conditions surrounding the immigrants, and by representatives of the business and industrial world, organizations were formed to propagandize and to agitate for municipal, state, and federal aid to indoctrinate the immigrants into American ways. The coming of World War I with the resultant heightening of U.S. nationalism strengthened the movement. The Federal Bureau of Education and the Federal Bureau of Naturalization joined in the crusade and aided the private Americanization groups. Large rallies, patriotic naturalization proceedings, and Fourth of July celebrations characterized the campaign. When the United States entered into the war, Americanization was made an official part of the war effort. Many states passed legislation providing for the education and Americanization of the foreign-born. The anti-Communist drive conducted by the Dept. of Justice in 1919–20 stimulated the movement and led to even greater legislative action on behalf of Americanization. Virtually every state that had a substantial foreign-born population had provided educational facilities for the immigrant by 1921. The passage of this legislation and the quota system of immigration caused the Americanization movement to subside; private groups eventually disbanded.
See J. Higham, Strangers in the Land (1963).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
|
How to Graph Polynomials When the Roots Are Imaginary Numbers — An Overview
In pre-calculus and in calculus, certain polynomial functions have non-real roots in addition to real roots (and some of the more complicated functions have all imaginary roots). When you must find both, start off by finding the real roots, using techniques such as synthetic division. If you’re lucky, you’re left with a depressed quadratic polynomial to solve, but one that can’t be solved using real-number answers. No fear! You just have to use the quadratic formula, and you’ll end up with a negative number under the square root sign. Therefore, you express the answer as a complex number.
For instance, the polynomial g(x) = x^4 + x^3 – 3x^2 + 7x – 6 has non-real roots. Follow these basic steps to find all the roots for this (or any) polynomial:
Classify the real roots as positive and negative by using Descartes’s rule of signs.
Three changes of sign in the g(x) function reveal that you could have three or one positive real root(s). One change of sign in the g(–x) function reveals that you have exactly one negative real root.
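As an illustrative aside (a sketch, not part of the original lesson), the sign-change counting behind Descartes's rule can be checked in a few lines of Python using the coefficients of g(x) and g(–x):
```python
# Sketch: counting coefficient sign changes for Descartes's rule of signs,
# using g(x) = x^4 + x^3 - 3x^2 + 7x - 6 from the example above.

def sign_changes(coeffs):
    """Count sign changes in a sequence of polynomial coefficients (zeros skipped)."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

g = [1, 1, -3, 7, -6]                                            # coefficients of g(x), highest power first
g_neg = [c * (-1) ** (len(g) - 1 - i) for i, c in enumerate(g)]  # coefficients of g(-x)

print(sign_changes(g))      # 3 -> three or one positive real root(s)
print(sign_changes(g_neg))  # 1 -> exactly one negative real root
```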
Find how many roots are possibly imaginary by using the fundamental theorem of algebra.
The theorem reveals that, because the polynomial has degree four, it has exactly four roots when complex roots are counted, so in this case up to four roots could be non-real. Combining this fact with Descartes’s rule of signs gives you several possibilities:
One real positive root and one real negative root means that two roots aren’t real.
Three real positive roots and one real negative root means that all roots are real.
List the possible rational roots, using the rational root theorem.
The possible rational roots include ±1, ±2, ±3, and ±6 (the factors of the constant term 6 divided by the factors of the leading coefficient 1).
Determine the rational roots (if any), using synthetic division.
Utilizing the rules of synthetic division, you find that x = 1 is a root and that x = –3 is another root. These roots are the only real ones.
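If you want to double-check the synthetic-division result numerically, this small illustrative sketch (again, not from the original article) simply evaluates each rational-root candidate:
```python
# Sketch: verifying the rational-root candidates by direct evaluation.
def g(x):
    return x**4 + x**3 - 3*x**2 + 7*x - 6

candidates = [1, -1, 2, -2, 3, -3, 6, -6]     # +/- factors of the constant term 6
print([c for c in candidates if g(c) == 0])   # [1, -3] -> the only real (rational) roots
```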
Use the quadratic formula to solve the depressed polynomial.
Having found all the real roots of the polynomial, divide the original polynomial by x – 1 and the resulting polynomial by x + 3 to obtain the depressed polynomial x^2 – x + 2. Because this expression is quadratic, you can use the quadratic formula to solve for the last two roots. In this case, you get x = (1 ± i√7)/2.
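As a quick check of that answer, the sketch below (illustrative only) solves the depressed quadratic x^2 – x + 2 = 0 with the quadratic formula, using Python's cmath module so the complex roots come out directly:
```python
# Sketch: solving x^2 - x + 2 = 0, whose discriminant is negative.
import cmath

a, b, c = 1, -1, 2
disc = b**2 - 4*a*c                                      # -7, so both roots are complex
roots = [(-b + s * cmath.sqrt(disc)) / (2*a) for s in (1, -1)]
print(roots)                                             # [(0.5+1.3229j), (0.5-1.3229j)], i.e. (1 ± i*sqrt(7))/2
```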
Graph the results.
[Figure: Graphing the polynomial g(x) = x^4 + x^3 – 3x^2 + 7x – 6.]
The leading coefficient test reveals that the graph points up in both directions. The intervals determined by the real roots include the following: (–∞, –3), (–3, 1), and (1, ∞).
The preceding figure shows the graph of this function. |
The persistent buzz of a mosquito in the home is infuriating. Even more infuriating is the itchy bite it leaves you if it manages to escape your clutches. In some parts of the world, mosquito bites can be more harmful than a nuisance. Some mosquitos are vectors for debilitating diseases and mosquito bites can be deadly. Malaria is one of these diseases.
What is malaria?
Malaria is a serious disease spread by mosquitos and not contagious between humans. It takes one bite to become infected and if untreated, it can be fatal.
Where does malaria occur?
With over 1 million deaths reported every year, malaria is widespread. Found in over 100 countries, it is most prevalent in tropical regions of Africa, Asia, South America and the Pacific Islands.
What are the symptoms of malaria?
Typically, malaria symptoms develop fast, beginning to surface between 10 and 15 days after an infected mosquito bites. Rarely, the symptoms can take up to a year to show. Common symptoms include:
- a high temperature (at least 38°C)
- nausea and vomiting
- fever, sweats and chills
- muscle aches and pains
If untreated, more serious complications can develop, including jaundice, kidney failure, coma and in worst cases, death.
What causes malaria?
Malaria is caused by a parasite called Plasmodium, which is carried by female Anopheles mosquitos. The mosquitos harbour the parasite and inject it into humans through their bites, infecting them. After entering the body, the parasite travels through the bloodstream to the liver, where it multiplies.
Tip: you can identify a female Anopheles mosquito by the way its body tilts up at an angle from its resting surface!
How do you prevent malaria?
Mosquito bites are hard to avoid in infested areas, but not impossible if you follow these simple tips.
- Prepare your environment
Malaria-carrying mosquitos are mainly active during twilight periods. Use an effective insect repellent to help keep them at bay when dusk or dawn breaks.
- Dress to prevent
Wear loose, light-coloured clothing that covers your arms and legs and don’t wear perfumes or aftershaves that could attract unwanted attention. Cover any exposed skin with body insect repellent.
- Anti-malaria medication
There are no malaria vaccines to prevent infection. However, ask your doctor about anti-malarial medication tablets a few months before you travel to high-risk areas. If prescribed, you must complete the whole course for the medication to be fully effective.
How do you treat malaria?
Whilst serious, almost all malaria sufferers make a full recovery provided that the infection is treated promptly. If you develop malaria symptoms, consult your doctor or your nearest medical centre immediately. |
Pears In the Southeast
History, Types, Disease ….
Most gardeners think of pears as an easy-to-grow fruit that isn’t worth investing much effort in. No doubt the hard sand pear varieties most often grown in the South have encouraged this attitude.
Pears were first introduced into the US around 1630. The European pear is noted for its quality but is also known for its susceptibility to fire blight (a disease that damages the tree). Asian varieties introduced later, like the sand pear (Pyrus pyrifolia), enjoy increased resistance to fire blight, but their fruit is much harder than the European pear’s. Asian pears, erroneously first called “pear apples,” are increasingly in demand. The fruit is mostly round like an apple, but that’s where the similarity ends. The flavor is very mild (some might say bland) and the fruit is very juicy. In fact, you need a bib to eat one.
Fire blight isn’t the only ailment that damages pears. Pears in the South suffer from fungal leaf spot diseases that often defoliate trees by mid-summer, causing them to set a new crop of leaves and often causing them to bloom in the fall. Eventually this begins to reduce tree vigor and hurt spring production. Appropriately timed applications of fungicide will limit this disease.
Healthy pear trees can be extremely productive. They can be expected to produce between 15 and 25 tons of fruit per acre!
Soils and Fertility
Pears are very tolerant of both soil condition and moisture problems. They grow and produce best in well-drained sandy loam soils. Pears will grow in clay soil, light sandy soils, dry soils with a little additional irrigation, and wet soils if mounded slightly. Fertilizing is easy when it comes to pears. Basically, the best fertilization program is no program. On very poor soils, one pound of 13-13-13, or a similar complete fertilizer, can be applied to young trees in February or March. Mature trees, even in poor soils, should not be fertilized.
Pruning and Training
Young pear trees should be trained using the modified central leader system. Pears have an upright, columnar growth habit. It is critical that you prop the branches at an angle between 45° and 60° as you train the young trees. This will encourage early production, strong branch angles, and a better open-growth habit.
Summer pruning can be used to direct growth and increase the development of fruiting spurs and branches. Vigorous shoots can be tipped to slow growth and stimulate the development of side branches. Removal of water sprouts and suckers should be done as soon as they are noticed. You’ll have less regrowth of these sprouts if you break them off as soon as they appear, so check your trees often.
Mature trees should be pruned as little as possible. Annual thinning of internal shoots, water sprouts, and an occasional older branch is recommended instead of a heavy top pruning.
Flowering, Pollination, and Fruit Thinning
Fruit buds or spur development on pears is similar to that on apples. A spur is a short, leafy shoot that terminates in a cluster of 5 - 7 flowers. Any given spur will generally fruit every other year for 7 or 8 years. This means that about half of the spurs are producing each year.
Also, pears need two or more varieties that bloom at the same time to promote good fruit development.
Pears have a tendency to overproduce. Heavy fruit loads can reduce vigor and either bend or break limbs. Before this happens some fruit should be removed (i.e. thinned). This will help increase fruit size and quality of the remaining crop.
Check out your local nursery for the best pear varieties in your area! |
Hello, class. So anxiety disorders are some of the most common mental disorders that we can encounter alongside things like depression or mood disorders and things like ADHD and learning disorders as well. Anxiety disorders appear in almost 20% of adults. So when we talk about anxiety, we're referring to any kinds of feelings of nervousness, or worry, or unease within a person.
And anxiety isn't necessarily all bad. In fact, anxiety can be helpful for people to identify things that are particularly important, or dangerous activities, or things like that. They're almost sort of a mental indicator or clue to the person that they need to particularly pay attention to something.
However, an anxiety disorder is any kind of disorder where a person feels this anxiety, this worry or unease, in a pervasive or particularly strong or unnecessary kind of way. And it impairs their life in some form. So oftentimes, people with an anxiety disorder can develop feelings of defensiveness and insecurity towards other people, as well as inferiority. A lot of times, they might feel like they're threatened, and they can't necessarily do anything about it. So you can see how that can impair people's lives in certain ways.
Remember, when we talk about mental disorders, we're referring to extremes of mental functions or behaviors. So when we say anxiety, we don't mean in its normal form. For example, when we say somebody is getting anxious, like when they're taking a test, that's normal. You should feel worried when it's something that's particularly important to you.
However, with an anxiety disorder, for example, when a person has a panic attack, it's not just that normal feeling of anxiety, but rather it's when a person feels like their life is actually physically in danger. They start choking. It's very hard for them to breathe. They might get nauseous. They lose control of their body and literally just drop to the ground. They could have chest pains.
And all of this can last for a period of minutes to hours. So you can see how this can be detrimental to a person to an extreme degree. So anxiety disorders can take a lot of different forms. And in this lesson, we're going to look over some of the most common of these anxiety disorders.
So the first one that we're going to be taking a look at today is generalized anxiety disorder, which is a feeling of being anxious or tense without any specific cause for a person. And this feeling of anxiety occurs in a person for at least six months in length. But oftentimes, it can happen for longer periods of time, where a person feels especially jittery or on edge constantly with physical symptoms that might go along with this, things like sweating or rapid heart rate, an upset stomach, dizziness, and trouble concentrating for these people.
So notice, it's not short, quick periods of increased anxiety. That would be something like a panic disorder, which we'll talk about next. Rather, it's long, and it's a pervasive feeling of anxiety that's constantly with a person, and it can vary in its intensity. It can be very intense, or it can be general and sort of in the background with the person for the majority of the time. But you can see how having these feelings can make a person feel very psychologically distraught or stressed out as a result.
Now, a panic disorder, on the other hand, is when a person has a constant feeling of anxiety, as we said before, with frequent periods of especially intense and often unexpected panic or very intense periods of that anxiety occurring. And this is what we call a panic attack. So that short period of time where suddenly they feel incredibly worried or nervous, almost without any kind of physical control over this, and this can often be for no reason. But it can also be because of specific causes, though the intensity is inappropriate for those causes. So, for example, if someone becomes especially stressed out by something at work, they can have a panic attack, which increases that feeling of anxiety to a degree that's really unnecessary for the thing that's actually causing it to occur.
Now, oftentimes, a person that's having a panic attack can feel like they're having a heart attack as well. Or they might feel like they're actually about to die. So that's a very stressful thing for a person to have. And panic disorder can appear either with or without agoraphobia, which, again, we'll talk about next. So we can have panic disorder with agoraphobia or panic disorder without agoraphobia.
Now, agoraphobia is any kind of anxiety or fear of being in an open space or an unfamiliar space where escape is especially difficult. Now, this is different from what we would call social anxiety disorder, which is a fear of being in social situations and interacting with others, because agoraphobic people would feel anxious in a crowded public place just like a person with social anxiety disorder. However, an agoraphobic person would also feel very anxious or afraid if they're out in the middle of the woods, if they're out in the middle of nowhere with no people present. And that's because they're in a situation where they feel exposed, and they aren't sure where they can escape or where they could run to if they needed some kind of safety.
So someone with a panic disorder with agoraphobia may have a panic attack as a result of being out in public, so they have a panic attack as a result of their agoraphobia. Or they could be so worried that they might have a panic attack that they would stay at home. So in other words, their agoraphobia might affect or might be affected because of their panic disorder. So you can see how these two can feed on each other if they're seen together.
So panic attacks, panic disorders, generalized anxiety disorders, any of these anxiety disorders we've talked about today, they can be treated with medication. We can use anxiolytics, which are drugs that are specifically designed to decrease anxiety and feelings of fear, as well as with antidepressants. But people can also be treated with these disorders by being taught strategies to reduce or cope with the anxiety, which can help them to better mentally deal with either their general or their specific anxieties that they have throughout their lives.
Anxiety disorder: a disorder in which people feel worry or unease in a pervasive or particularly strong or unnecessary way, which impairs their life in some way.
Generalized anxiety disorder: an anxiety disorder where a person feels anxious or tense without any specific cause for at least 6 months.
Agoraphobia: anxiety or fear of being in open spaces or unfamiliar ones where escape is difficult.
Panic disorder: an anxiety disorder where a person has a constant feeling of anxiety with frequent periods of intense, unexpected panic; can occur with or without agoraphobia. |
Definitions of sustainability and sustainable development
- Webster's New International Dictionary
- "Sustain - to cause to continue (as in existence or a certain state, or in force or intensity); to keep up, especially without interruption diminution, flagging, etc.; to prolong."
Webster's New International Dictionary. (Springfield, Mass.: Merriam-Webster Inc., 1986)
- Caring for the Earth
- "improving the quality of human life while living within the carrying capacity of supporting eco-systems."
IUCN/UNEP/WWF. Caring for the Earth: A Strategy for Sustainable Living. (Gland, Switzerland: 1991).(IUCN - The World Conservation Union, UNEP - United Nations Environment Programme, WWF - World Wide Fund for Nature).
- Sustainable Seattle
- Sustainability is the "long-term, cultural, economic and environmental health and vitality" with emphasis on long-term, "together with the importance of linking our social, financial, and environmental well-being"
- Friends of the Earth Scotland
- "Sustainability encompasses the simple principle of taking from the earth only what it can provide indefinitely, thus leaving future generations no less than we have access to ourselves."
- Thomas Jefferson Sustainability Council
- "Sustainability may be described as our responsibility to proceed in a way that will sustain life that will allow our children, grandchildren and great-grandchildren to live comfortably in a friendly, clean, and healthy world . that people:
- Take responsibility for life in all its forms as well as respect human work and aspirations;
- Respect individual rights and community responsibilities;
- Recognize social, environmental, economic, and political systems to be inter-dependent;
- Weigh costs and benefits of decisions fully, including long-term costs and benefits to future generations;
- Acknowledge that resources are finite and that there are limits to growth;
- Assume control of their destinies;
- Recognize that our ability to see the needs of the future is limited, and any attempt to define sustainability should remain as open and flexible as possible."
- The Natural Step Four System Conditions
- Substances from the Earth's crust must not systematically increase in nature. (This means that fossil fuels, metals, and other minerals can not be extracted at a faster rate than their re-deposit back into the Earth's crust).
Substances produced by society must not systematically increase in nature. (This means that things like plastics, ozone-depleting chemicals, carbon dioxide, waste materials, etc must not be produced at a faster rate than they can be broken down in nature. This requires a greatly decreased production of naturally occurring substances that are systematically accumulating beyond natural levels, and a phase-out of persistent human-made substances not found in nature.)
The physical basis for productivity and diversity of nature must not be systematically diminished. (This means that we cannot harvest or manipulate ecosystems in such a way as to diminish their productive capacity, or threaten the natural diversity of life forms (biodiversity). This requires that we critically examine how we harvest renewable resources, and adjust our consumption and land-use practices to fall well within the regenerative capacities of ecosystems.)
We must be fair and efficient in meeting basic human needs. (This means that basic human needs must be met with the most resource-efficient methods possible, including a just resource distribution.)
Adapted from http://www.naturalstep.org/
- Jerry Sturmer
Santa Barbara South Coast Community Indicators
- Sustainability is meeting the needs of all humans, being able to do so on a finite planet for generations to come while ensuring some degree of openness and flexibility to adapt to changing circumstances.
- Random House Dictionary of the English Language
- "Develop - v.t. - to bring out the capabilities or possibilities of, to bring to a more advanced or effective state"
Random House Dictionary of the English Language. (New York, NY: Random House: 1987).
- Our Common Future
- "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs."
Page 8, World Commission on Environment and Development. Our Common Future. (Oxford, Great Britain: Oxford University Press, 1987). (Frequently referred to as the Brundtland report after Gro Harlem Brundtland, Chairman of the Commission)
- Hamilton Wentworth Regional Council
- "Sustainable Development is positive change which does not undermine the environmental or social systems on which we depend. It requires a coordinated approach to planning and policy making that involves public participation. Its success depends on widespread understanding of the critical relationship between people and their environment and the will to make necessary changes."
- World Business Council on Sustainable Development
- "Sustainable development involves the simultaneous pursuit of economic prosperity, environmental quality and social equity. Companies aiming for sustainability need to perform not against a single, financial bottom line but against the triple bottom line."
"Over time, human and social values change. Concepts that once seemed extraordinary (e.g. emancipating slaves, enfranchising women) are now taken for granted. New concepts (e.g. responsible consumerism, environmental justice, intra- and inter-generational equity) are now coming up the curve."
- Interfaith Center on Corporate Responsibility (ICCR)
- "Sustainable development...[is] the process of building equitable, productive and participatory structures to increase the economic empowerment of communities and their surrounding regions.
Interfaith Center on Corporate Responsibility, 475 Riverside Drive, New York, NY 10115, 212-870-2295 |
Classical architecture refers to a style of buildings originally constructed by the Ancient Greeks and Romans, especially between the fifth century BCE in Greece and the third century CE in Rome. The style of classical architecture has been reproduced throughout architectural history whenever architects looked to the ancient past for illumination and inspiration, and in search of what they may have regarded as lost ideals.
The Renaissance is an obvious example, but so are the Greek revivals of the 19th century in Victorian Britain and other parts of Europe. Victorian architects sometimes created exact copies of classical forms but otherwise they adopted an eclectic approach that involved recombining classical forms and motifs to create a new style or typology. For example, a Greek temple could become the model for a church, a town hall or even a railway station.
In the US, the Classical Revival or Neoclassical Style (1895-1950) is one of the most common architectural styles. It was most often used for courthouses, banks, churches, schools and mansions. Later, Hitler’s architect Albert Speer designed his vision of the new post-War Berlin entirely in a pared-down, mostly unadorned neoclassical style.
Characteristics of classical architecture
Classical buildings in ancient Greek and Roman times were typically built from marble or some other attractive, durable stone, but since then, they have also been built in brick, concrete and stone. The architecture was primarily trabeated (post and beam) and evolved from timber origins.
Greek architecture followed a highly-structured system of proportions that related individual architectural components to the whole building. This system was developed according to three basic styles, or 'orders' – Doric, Ionic and Corinthian – that formed the heart of classical Greek architecture. The Romans also used these widely but added two of their own orders: Tuscan and Composite.
- Architectural history.
- Architectural Styles.
- Baroque architecture.
- Classical orders in architecture.
- Classical Revival style.
- Elements of classical columns.
- Italian rationalism.
- Italian Renaissance Revival style.
- Jacobean architecture.
- Neoclassical architecture.
- Nineteenth century building types.
- Palladian architecture.
- Origins of Classical Architecture.
- Roman Classical orders in architecture.
- Sir Christopher Wren.
|
At Ash Grove Academy, we believe that Geography should inspire pupils to be curious about the world, its processes and its people and recognise their place in the world and their role in its future.
Our spiral curriculum explicitly builds upon children’s prior learning in the way that it is structured and, like the history curriculum, each geography topic is presented as an ‘enquiry’ so the sequence of lessons is designed to answer a geographical question or investigate a theme.
Children revisit and build upon locational and place knowledge, fieldwork skills and human and physical geography in each year group. Each year is structured with a focus on local geography in the autumn term, national geography in the spring term and global geography in the summer term, mirroring the history curriculum. The curriculums for geography and history have been designed in parallel to enhance the natural links between them in a way that is mutually beneficial for each of these humanities subjects.
Teachers are encouraged to ‘start local’ with every geography topic so that children use their experiential knowledge as a starting point and build important fieldwork skills as well as developing their sense of scale. Each child’s unique sense of place knowledge is also developed in an informal way across the year by plotting the locations of books, news stories and topics encountered across the wider curriculum including literacy on classroom maps of the world and the UK.
Outcomes for each year group, including for subject-specific vocabulary, have been clearly identified and mapped out in a progression document. This enables teachers to understand what geographical knowledge children have already attained and what their next steps will be.
The Ash Grove subject-specific vocabulary progression document can be viewed on the school website. |
These multiplication worksheets for grade 3 provide an unlimited supply of practice on grade 3 multiplication topics, including skip counting, multiplication tables and missing factors. Exercises also include multiplying by whole tens and whole hundreds as well as some column-form multiplication.
You can use the quick links below to access them.
Topics covered include skip counting, individual multiplication tables, mixed-practice tables and multiplication word problems for grade 3 students. These sheets are part of a comprehensive collection of math worksheets for grade 3, organized by topics such as addition, subtraction, mental math, regrouping, place value, multiplication, division, clock, money, measuring and geometry.
Each worksheet has a number of word problems and an answer sheet.
All worksheets are pdf documents and can be printed.
Third grade multiplication worksheets get your child to practice the subject with color-by-numbers activities and more. Kids completing one of these worksheets multiply by 3 to solve each equation and also fill in a multiplication chart for the number 3.
Simply click on the download link to get your free and direct copy. The worksheets can be made in html or pdf format; both are easy to print. There are also other downloadable materials below which we think will be very helpful to your kids.
Our grade 3 multiplication worksheets emphasize the meaning of multiplication, basic multiplication and the multiplication tables. These are free math worksheets from K5 Learning. Students should be reasonably proficient at multiplication in columns before attempting the more difficult problems.
The grade 3 multiplication word problem worksheets cover simple multiplication, multiplication by multiples of 10 and multiplication in columns, as well as some mixed multiplication and division.
If you are in search of multiplication worksheets for grade 3, you are on the right site: here you will find multiplication worksheets for grade 3 which include basic multiplication questions and the meaning of multiplication, that multiplication is repeated addition, for example 5 × 10 = 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 and 3 × 9 = 3 + 3 + 3 + 3 + 3 + 3 + 3 + 3 + 3. The worksheets are available in both pdf and html formats and they are randomly generated, so each one is unique.
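As a rough illustration of what "randomly generated" means in practice, here is a minimal Python sketch (an assumption for illustration only; the function name make_sheet, the plain-text layout and the factor range 2 to 10 are not taken from the source) that produces a simple grade 3 multiplication practice sheet:
```python
# Minimal sketch of a randomly generated multiplication practice sheet.
# Assumptions (not from the source): plain-text output, factors from 2 to 10.
import random

def make_sheet(num_questions=12, max_factor=10, seed=None):
    rng = random.Random(seed)           # fix a seed to reproduce the same sheet
    lines = []
    for i in range(1, num_questions + 1):
        a, b = rng.randint(2, max_factor), rng.randint(2, max_factor)
        lines.append(f"{i:2d}.  {a} x {b} = ____")
    return "\n".join(lines)

print(make_sheet())                     # a different sheet every run unless seeded
```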
Worksheets are divided into simple multiplication, multiples of ten and multiplication in columns. Download for free the following multiplication worksheets for grade 3 learners. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
Detail at the nanoscale reveals secrets behind tooth decay
July 01, 2020
Northwestern University researchers have cracked one of the secrets of tooth decay. In a new study of human enamel, the materials scientists are the first to identify a small number of impurity atoms that may contribute to the enamel’s strength but also make the material more soluble. They also are the first to determine the spatial distribution of the impurities with atomic-scale resolution.
Dental caries — better known as tooth decay — is the breakdown of teeth due to bacteria. (“Caries” is Latin for “rottenness.”) It is one of the most common chronic diseases and a major public health problem, especially as the average life expectancy of humans increases.
The Northwestern discovery in the building blocks of enamel — with detail down to the nanoscale — could lead to a better understanding of human tooth decay as well as genetic conditions that affect enamel formation, which can lead to highly compromised or completely absent enamel.
Enamel, the human tooth’s protective outer layer, covers the entire crown. Its hardness comes from its high mineral content.
“Enamel has evolved to be hard and wear-resistant enough to withstand the forces associated with chewing for decades,” said Derk Joester, who led the research. “However, enamel has very limited potential to regenerate. Our fundamental research helps us understand how enamel may form, which should aid in the development of new interventions and materials to prevent and treat caries. The knowledge also might help prevent or ameliorate the suffering of patients with congenital enamel defects.”
The study was published on July 1 by the journal Nature.
Joester, the corresponding author, is an associate professor of materials science and engineering in the McCormick School of Engineering and an affiliated faculty member of the International Institute for Nanotechnology. Karen A. DeRocher and Paul J.M. Smeets, a Ph.D. student and a postdoctoral fellow, respectively, in Joester’s lab, are co-first authors.
One major obstacle hindering enamel research is its complex structure, with features across multiple length scales. Enamel, which can reach a thickness of several millimeters, is a three-dimensional weave of rods. Each rod, approximately 5 microns wide, is made up of thousands of individual hydroxylapatite crystallites that are very long and thin. The width of a crystallite is on the order of tens of nanometers. These nanoscale crystallites are the fundamental building blocks of enamel.
Perhaps unique to human enamel, the center of the crystallite seems to be more soluble, Joester said, and his team wanted to understand why. The researchers set out to test whether the composition of minor enamel constituents varies within single crystallites.
Using cutting-edge quantitative atomic-scale techniques, the team discovered that human enamel crystallites have a core-shell structure. Each crystallite has a continuous crystal structure with calcium, phosphate and hydroxyl ions arranged periodically (the shell). However, at the crystallite’s center, a greater number of these ions is replaced with magnesium, sodium, carbonate and fluoride (the core). Within the core, two magnesium-rich layers flank a mix of sodium, fluoride and carbonate ions.
“Surprisingly, the magnesium ions form two layers on either side of the core, like the world’s tiniest sandwich, just 6 billionths of a meter across,” DeRocher said.
Detecting and visualizing the sandwich structure required scanning transmission electron microscopy at cryogenic temperatures (cryo-STEM) and atom probe tomography (APT). Cryo-STEM analysis revealed the regular arrangement of atoms in the crystals. APT allowed the researchers to determine the chemical nature and position of small numbers of impurity atoms with sub-nanometer resolution.
The researchers found strong evidence that the core-shell architecture and resulting residual stresses impact the dissolution behavior of human enamel crystallites while also providing a plausible avenue for extrinsic toughening of enamel.
“The ability to visualize chemical gradients down to the nanoscale enhances our understanding of how enamel may form and could lead to new methods to improve the health of enamel,” Smeets said.
This study builds on an earlier work, published in 2015, in which the researchers discovered that crystallites are glued together by an extremely thin amorphous film that differs in composition from the crystallites.
The research was supported by the National Institute of Dental and Craniofacial Research at the National Institutes of Health (grants NIH-NIDCR R03 DE025303-01 and R01 DE025702-01) and the National Science Foundation (grant DMR-1508399).
The title of the paper is “Chemical Gradients in Human Enamel Crystallites.” In addition to Joester, DeRocher and Smeets, other authors of the paper are Linus Stegbauer, Michael J. Cohen, Lyle M. Gordon and James M. Rondinelli, of Northwestern; Berit H. Goodge, Michael J. Zachman and Lena F. Kourkoutis, of Cornell University; and Prasanna V. Balachandran, of the University of Virginia. |
Ancient Relations between Japan and Korea: from prehistory through the Eighth Century
The tragic late-nineteenth and early-twentieth centuries have obscured one of the most lasting influences in the history of two great nations of East Asia: Japan and Korea. The relationship between Japan and Korea had been dramatically different in the centuries before 1870, with the exception of Hideyoshi’s invasions in the 1590s. Tokugawa Japan maintained a cordial relationship with Joseon Korea. The links between Korea and Japan become stronger the further one goes back into history. For, in the ancient period, Koreans and Japanese were allies. Specifically, the Korean Kingdom of Baekje was the closest ally of the ancient Yamato State in Japan. Much of early Japanese culture came not from China but from the Korean peninsula. It was from Korea that Buddhism spread to Japan. Indeed, Japan’s oldest surviving temples (most notably Horyu-ji) were almost certainly built by Korean laborers and show clear influences of architectural styles from Baekje. The Japanese and Baekje royal families intermarried. The rich and endlessly fascinating civilizations of Japan and Korea are like two offshoots from a single tree trunk.
No maps of East Asia drawn from the perspective of either Japan or Korea survive from before the beginning of the fifteenth century. Therefore we will have to use the Kangnido Map of 1402 to gain some insight into how earlier Japanese and Koreans may have viewed the region. We can immediately see the centrality of China. We can also see that the shape of Korea is far more accurate in this map than that of Japan. Africa and the Arabian Peninsula (on the far left) are not nearly as accurate, though they were at least known to fifteenth-century Koreans. We can assume that the Korean and Japanese views of East Asia from a thousand years earlier were far more limited and centered entirely on East Asia, with only a vague understanding of places farther away like India.
To reconstruct a picture of what Japanese-Korean relations were like in the period from prehistory through the eighth century, I will be using the following sources: histories which cover the period (like the Nihongi and Samguk sagi), archaeological evidence, and surviving artistic and archaeological similarities between items and buildings found in Japan and Korea. Korean influence on the Japanese archipelago goes back before recorded history. This article will stop in the eighth century with the Japanese Emperor Kanmu. In 2001, then reigning Emperor Akihito made a statement about the Korean portion of his own ancestry — a subject of much controversy in Japan.
“I feel a connection to Korea through annotations made in the `Continued Record of Japan`, which stated that the mother of Emperor Kanmu was a descendant of Baekjae`s King Muryong.”
The controversial nature of the ethnic identity of Japan’s Imperial Family is largely a product of modern (post-1868) politics. Indeed, studying the subject has been somewhat difficult because the Japanese Imperial Household Agency stood in the way of excavating imperial tombs from the Meiji period (1868–1912) until 2018, when it began to allow excavations. Such excavations will be essential in understanding prehistoric Japan. Most of the large, keyhole-shaped tombs in the country were constructed before writing became widespread.
The people now known as Japanese are descended from a people called the Yayoi (also a period name, from c. 300 BCE to c. 300 CE). The original inhabitants of the northern portion of the Japanese archipelago were the Ainu, an indigenous people whose ancestors had migrated south from Siberia. Gradually, they were pushed north with an influx of immigrants from the Korean Peninsula. These immigrants brought with them wet rice agriculture and techniques in metallurgy. Chinese influences were also present, but influences from Korea were far more common.
The earliest writings of any kind in Japan come from the 400s, with many of these being inscriptions. Japanese books were not produced for another couple of centuries, and most of these early books are not extant. The earliest surviving historical (really mythological-historical) accounts from Japan are the Kojiki (712, 'Record of Ancient Matters') and the Nihongi (720, 'Chronicles of Japan from Earliest Times to 697'). Both of these works are products of politics originating in the previous decades, when the Yamato State, under Emperor Temmu and Empress Jito, was expanding its power. This nascent Japanese state did not control all of Japan or even all of the main island of Honshu. Instead it was built up of extended clans, growing in power and influence. Predecessors of these two mytho-historical accounts were created by order of the court in order to legitimize the power of the Yamato Clan over other clans. The Kojiki and Nihongi were ultimately the new and improved versions created just after the first permanent Japanese capital was established at Heijokyo (Nara) in 710.
The Kojiki and Nihongi both recount the lives and deeds of legendary emperors going back to Jimmu in 660 BCE. The earliest reigning emperor for whom there is historical evidence is Kimmei (r. 539–571). Kimmei's immediate predecessors probably did exist; however, the further back one goes, the less clear things become. Therefore, it is necessary to analyze these valuable but problematic records critically. The Nihongi is more helpful in painting a picture of ancient Japan, as the Kojiki is far more concerned with Shinto mythology as it relates to the origins of the Japanese Imperial Family. The Nihongi mentions the Korean Kingdom of Baekje and asserts that there was a Yamato outpost on the Korean peninsula, though this latter claim is contested. In the modern period, Japanese nationalists asserted that this was a colony, though this was not likely to be the case. There could have been a diplomatic or trading post but probably not more than that. Japan did not send a military force to the Korean Peninsula until 663, and even then it was to help the Kingdom of Baekje against the Tang-Silla coalition.
In the sixth century, Buddhism was brought to Japan by emissaries from the Kingdom of Baekje. The textbook date for this is 538, when a delegation sent by the King of Baekje presented the Japanese court with a Buddha statue. The integration of Buddhism into Japanese culture was far from smooth — supporters and opponents of Buddhism struggled for control at court throughout the sixth century before an amalgamation of Buddhism and Shinto began to take place. Japan’s early Buddhist temples were built with the aid of Korean labor and Korean architectural ideas imported from Baekje. This includes the buildings of Horyu-ji, a UNESCO World Heritage Site and home to the oldest freestanding wooden structures on the planet.
The oldest surviving chronicle of Korean history is called the Samguk sagi, and this text dates to 1145. Though it is much later than the Japanese records, it is still a valuable resource for understanding the Korean perspective of Japanese-Korean relations during the Three Kingdoms Period (Goguryeo, Silla, Baekje) of Korean history. Historian Gim Busik (1075–1151) compiled the Samguk sagi after analyzing a variety of historical records which existed in his own time to create a history of Korea. Recently, the Korean government recognized the Samguk sagi as a National Treasure of South Korea. In the Samguk sagi, Japan appears to be more of a peripheral state in association with Baekje. The linked account, from the 2018 Journal of Korean Studies, discounts the notion that Japan could have had anything like a colony on the Korean peninsula in ancient times. This is, in my view, quite correct. Even accounts of Japanese-Korean relations recorded in the Nihongi are likely to make Japan seem more of a dominant power than it was. Japan was the nation lucky enough to be outside the Korean peninsula when the Tang-Silla alliance defeated and conquered Baekje in 660. The Japanese had come to the aid of Baekje and were defeated in a massive naval battle. It seems quite likely that, before this battle, it was the Kingdom of Baekje that was the superior (in terms of cultural sophistication and technological innovation) nation. The Samguk sagi, however, has its shortcomings. It was written long after the events it recounts and probably makes Japan more peripheral than it really was for Baekje. After all, Baekje and Japanese royal families intermarried and Baekje refugees (including princes) fled to Japan after the defeat inflicted by the Tang-Silla alliance in 660.
Relations between Japan and Korea continued on and off over the course of the following centuries. During the Tokugawa period (1603–1868), formal diplomatic relations were maintained between the Korean Kingdom of Joseon and Japan as an important exception to the Japanese isolation policy. It was the Tokugawa Government that established a relation with Korea that would likely serve as a better example for diplomats to look back upon than practically anything that occurred between Japan and Korea during the dark decades between 1870 and about 1970. |
Throughout the middle 1700s, there was a buzz going through the astronomical community, as a number of compelling reports of sightings of objects traversing the surface of the sun had begun to accumulate.
Typically small, dark and circular in shape, these objects would sometimes take no more than seconds—or at most several hours—to pass across the sun. However, in either case, the motion seemed to indicate that these “objects” were more than mere sunspots, with which astronomers were already well acquainted.
As early as March 15, 1758, astronomer Tobias Mayer observed such a dark object passing before the sun, which he estimated to be 1/20th the diameter of the solar globe. Four years later in February 1762, J.C. Staudacher also observed such an object, which appeared as a black, round-shaped spot on the sun, which had vanished the following day.
Some of these observations had likely been the passings of known planets, such as Venus, as they passed before the sun. However, some of the accounts from this period are less easily explained. Perhaps the most notable is that of Monsieur Rostan, an astronomer and member of the Economic Society at Berne, as well as the Medico Physical Society at Basle, Switzerland. On the date of August 9th, 1762, Rostan had been using a quadrant to measure the sun’s altitudes at Lausanne. On the day in question, de Rostan noted that “the sun gave but a faint, pale light,” which he guessed had been a result of “the vapours of the Leman lake.”
However, upon directing a telescope toward the sun, he made a most unusual observation, the following description of which appeared in the Annual Register several years later:
“Happening to direct a 14ft. telescope, armed with a micrometer, to the sun, he was surprised to see the Eastern side of the sun, as it were, eclipsed about three digits, taking in a kind of nebulosity, which environed the opaque body, by which the sun was eclipsed. In the space of about two hours and a half the South side of said body, whatever it was, appeared detached from the limb of the sun; but the limb, or, more properly, the northern extremity of this body, which had the shape of a spindle, in breadth about three of the sun’s digits, and nine in length, did not quit the sun’s northern limb. This spindle kept continually advancing on the sun’s body, from East towards West, with no more than about half the velocity with which the ordinary solar spots move; for It did not disappear till the 7th of September, after having reached the sun’s western limb.”
Rostan further noted that the object had been visible almost every day during this period, which lasted for close to a month. He even managed to produce a rough idea of the object’s appearance, with the help of a camera obscura, which he sent along to the Royal Academy of Sciences at Paris.
It was reported that the same unusual object was observed passing before the sun at Basle, although observers in Paris were unable to discern it. Regardless, the events of September 1762 were perhaps the most significant in the midst of a wave of similar sightings of dark masses traversing the Sun; sightings of similar objects (although usually round, rather than spindle-shaped) would continue through to the end of the century, and well into the nineteenth century as well. On occasion, there would be more than one of the objects seen, and by the middle 1800s, compelling articles—quite sensational for astronomical literature—were asking whether an elusive, intramercurial “Planet Vulcan” might exist closer to the Sun than any other known planet.
There were other interpretations of the objects that would follow. Charles Fort, writing about Rostan’s 1762 observations, believed the object to have been a “super-Zeppelin,” which had been Fort’s own trademark expression for extra-planetary spacecraft from other worlds.
“Because of the spindle-like form,” Fort guessed, “I incline to think of a super-Zeppelin, but another observation, which seems to indicate that it was a world, is that, though it was opaque, and “eclipsed the sun,” it had around it a kind of nebulosity—or atmosphere? A penumbra would ordinarily be a datum of a sun spot, but there are observations that indicate that this object was at a considerable distance from the sun… As to us— Monstrator.”
“In Fort’s mind,” UFO and Fortean chronicler Loren Gross would later comment, “something was out there. Something had dropped anchor.”
It is certainly not inconceivable that there could have been a large, ephemeral object observed in space during this period—either orbiting the Earth, or the sun itself—which had been observed by Rostan and at least one other around that time. The idea that the object could not be seen from Paris raises the question of whether it was indeed much closer to Earth, in which case a natural satellite, or even what has been called a “moonlet” could have been the culprit.
Then again, most space rocks are anything but “spindle-shaped,” so perhaps more exotic possibilities shouldn’t be ruled out, either. |
A pair of American Goldfinches at McInnish Park in Carrollton, Texas.
Wikipedia has this to say about American Goldfinches:
The American Goldfinch (Carduelis tristis), also known as the Eastern Goldfinch, is a small North American bird in the finch family. It is migratory, ranging from mid-Alberta to North Carolina during the breeding season, and from just south of the Canadian border to Mexico during the winter.
The only finch in its subfamily that undergoes a complete molt, the American Goldfinch displays sexual dimorphism in its coloration; the male is a vibrant yellow in the summer and an olive color during the winter months, while the female is a dull yellow-brown shade which brightens only slightly during the summer. The male displays brightly colored plumage during the breeding season to attract a mate. |
The ability of baby fish to find a home, or other safe haven, to grow into adulthood will be severely impacted under predicted ocean acidification, University of Adelaide research has found.
Published today in the journal Proceedings of the Royal Society B, the researchers report that the interpretation of the normal ocean sound cues which help baby fish find an appropriate home is completely confused under the levels of CO2 predicted to be found in oceans by the end of the century.
“Locating appropriate homes is a crucial step in the life cycle of fish,” says Tullio Rossi, PhD candidate with the University’s Environment Institute. “After hatching in the open ocean, baby fish travel to reefs or mangroves as safe havens to feed and grow into adults.
“Baby fish can find those places through ocean noise: snapping shrimps and other creatures produce sounds that the baby fish follow.
“But when ocean acidity increases due to increased CO2, the neurological pathways in their brain are affected and, instead of heading towards those sounds, they turn tail and swim away.”
Mr Rossi conducted experiments with barramundi hatchlings, an important fisheries species. The study was in collaboration with other researchers including Professor Sean Connell (University of Adelaide), Dr Stephen Simpson (University of Exeter) and Professor Philip Munday (James Cook University).
He and his collaborators also found that high CO2 makes baby fish move slower and show more hiding behaviour compared to normal fish. This could make it more difficult for them to find food or habitat and to avoid predators.
Research leader Associate Professor Ivan Nagelkerken says marine researchers know that ocean acidification can change fish behaviours. But it hasn’t been known how high CO2 would affect such crucial hearing behaviour as finding somewhere to settle.
“Such misinterpretation of sound cues and changes in other behaviours could severely impact fish populations, with the number of young fish finding safe habitats dramatically reduced through their increased vulnerability to predators and reduced ability to find food,” Associate Professor Nagelkerken says.
There is still time to turn around this scenario, Mr Rossi says. “We have the capacity to steer away from that worst-case scenario by reducing CO2 emissions,” he says. “Business as usual, however, will mean a profound impact on fish populations and the industries they support.”
A Youtube video further explaining Mr Rossi’s research can be viewed here.
(Photo: Andreas Marz via flickr) |
“BEEP. BEEP. BEEP.” The alarm is blaring. Time to get up. Do you hit “snooze”? … What’s in a “Zzz”?
On average, we spend 33 percent of our lives … asleep. When assessing your overall health, have you considered your sleep habits?
Sleep hygiene, as researchers call it, involves a variety of different behavioral practices which are necessary for quality sleep and full alertness during waking hours. According to Maj. Jaime Harvey, chief, Human Factors and Operational Safety Issues, Headquarters Air Force Safety Center, “One of the most beneficial ways to ensure a healthy lifestyle is to prioritize your sleep, the same as you do your best eating and exercise habits –and one of the key ways you can do that is by trying your best to maintain a regular wake and sleep pattern, every day of the week.”
The ABCs of Zzzs:
Sleep allows our bodies to rest and refuel for the next day. The sleep process is complex and active. As we sleep, there is important internal restoration and recuperation taking place. A lot of the information we take in throughout the day is processed and stored while we sleep.
The sleep-wake cycle is regulated through two systems which interact and balance each other out. These two systems are known as the circadian rhythm and sleep-wake homeostasis.
The regulatory internal circadian biological clock controls the length of periods of wakefulness and sleepiness throughout the 24-hour cycle. The system of sleep/wake homeostasis helps the body track how much time we have spent awake and when it is time to sleep.
Sleep occurs in two states: NREM (non-rapid eye movement) and REM (rapid eye movement) sleep. During NREM sleep, there is a slowdown of physiological and mental activities. While in NREM, the body experiences physical restoration, hormone production and tissue repair. NREM sleep is divided into four stages. The deepest sleep occurs during stages three and four, when there is usually very little mental activity. Dreaming occurs during REM sleep, when the brain is extremely active.
“Circa,” meaning approximately, and “diem,” meaning day, are the roots of the term circadian. The 24-hour circadian rhythm follows a cycle of physical, mental and behavioral changes, in accordance with periods of natural light and dark in our environment.
Staying in synch with the circadian rhythm includes being exposed to light first thing in the morning and going to bed at the same time every night. Maj. Harvey explains, “The human body thrives on routine. When we incorporate a regular sleep/wake pattern, our bodies follow like a well-tuned orchestra, performing in synch. When sleep is off, our bodies behave like an orchestra warming up, with each component following its own rhythm, out of synch as a whole.”
Circadian highs and lows are based on the circadian rhythm, which has different peaks and dips throughout the day. On the assumption that the average person wakes up at 6 a.m. and goes to bed at 10 p.m., the circadian flow, goes like this:
- Circadian low: 12 a.m. to 6 a.m.
- Circadian high: 9 a.m. to 10 a.m.
- Post lunchtime dip: 1 p.m. to 3 p.m.
- “Happy hour high”: a 30 to 60-minute burst of energy around sunset
- Dip: around 6 p.m.
- Lowest dip: 3 a.m. to 5 a.m.
Steps to good sleep hygiene:
When considering Zzzs, remember RRR: routine, routine, routine!
Bring back bedtime. Bedtime is not just for children. Remaining cognizant of sleep time is crucial. Setting a routine bedtime can have immense effects on improving overall health.
Create a winding down routine. In preparation for bedtime, create a routine to help relax your mind. Try reading (something non-stimulating), journaling, showering or creating a to-do list for the next day.
Set a wake time. The flip side of maintaining a routine bedtime, is setting a regular wake time. A regular sleeping and waking pattern will help your body adjust to its natural circadian rhythm. Once awoken, avoid lying in bed. This helps maintain bed space as sleep space.
Use an alarm clock. Phone alarms work too but phones should be kept out of arm’s reach, and placed on “do not disturb” during sleep time. Make sure your phone is not disturbing your sleep.
Get in seven to nine hours. Adults require this amount of uninterrupted sleep each night and are only meant to be awake 16 hours a day. Lost sleep, or “sleep debt” accumulates. Unfortunately, we cannot “bank” sleep so the only way to reduce sleep debt is to get sufficient, quality rest every night.
Avoid electronics before bed. As a rule, 30 minutes before bed, avoid having “backlit” devices that give off blue light in front of your face. Blue light washes out melatonin, the natural hormone in the brain which triggers sleep. Each text answered, tweet posted and comment liked increases your exposure to blue light and contributes to disruption of melatonin. With loss of melatonin, we become more alert and enter a vicious cycle of returning to a state of wakefulness. Soon, eight hours of sleep goes down to seven, down to six and so forth.
Eat healthily, live actively. Keep in mind principles of healthy eating, active living. Maintaining a good balance of nutritious food and daily exercise can promote quality sleep.
Be aware of sleep-inducing and wakefulness-promoting foods.
Foods for sleep:
- White breads
Foods for alertness:
- Nuts and seeds
- Meats/cold cuts
- Peanut butter
Perform a self-check. If you find yourself experiencing difficulty getting a good night’s rest, ask yourself these questions:
- When did you last consume caffeine?
- Did you exercise before bed? How long before?
- Did you consume a large meal before bed?
- Did you not have enough to eat before retiring for the night?
- Are you taking over-the-counter medications, vitamins, etc.? Some products may have hidden caffeine, including some daily multivitamins
Find the culprit. Complete your self-check and take action accordingly:
- Consuming too much caffeine, or too close to bedtime? Give yourself a “caffeine cut-off” time and try to cut back by at least one caffeinated beverage.
- Exercising too close to bedtime? Exercise earlier in the day
- Having large meals before bedtime? Cut down on food intake before bed.
- Going to bed hungry? Have a light snack 30 minutes prior to bed (a light carb snack such as crackers or warm milk).
- Taking over-the-counter medications, vitamins, etc.? Be sure to discuss use with your health care provider.
Still experiencing difficulty sleeping?
- Learn the truth and log your sleep. Many free apps are available to provide a log of your sleep, track how restful your sleep has been and wake you up in a REM cycle so you are not groggy.
- Incorporate a meditative sound, such as “pink noise.” Pink noise layers sounds on top of each other, such as rain on a tin roof, and helps to relax your mind from the worries of the day.
- Get out of bed and do something boring. Find a monotonous activity which will not get you stimulated (i.e. load the dishwasher). Keep the lights dim and remember the importance of getting out of bed while not sleeping –this practice maintains the sacredness of the bed as a place for sleep.
Look out for symptoms of underlying health issues. Problems with sleep can be signs of other health issues, such as sleep apnea or restless leg syndrome. If you experience any of the following, or other ongoing symptoms, consult your physician.
Symptoms of Restless Legs Syndrome:
- Uncomfortable sensations in legs, arms or other parts of the body
- Irresistible urge to move legs to relieve sensations
- Discomfort in the legs, including an “itchy,” “pins and needles,” or “creepy crawly” sensation
Symptoms of sleep apnea:
- Chronic loud snoring
- Pauses during snoring, followed by choking or gasping
- Rapid onset of sleepiness during quiet, inactive moments of the day
Sleep deprivation is real. One out of three adults is sleep deprived. Inadequate sleep, or insufficient restorative sleep accumulated over time can cause physical or psychiatric symptoms and affect routine task performance. Sleep deprivation can cause memory problems, a weakening of your immune system and lead to depression. Long-term effects of sleep deprivation include a high risk of obesity, heart disease, hypertension, cancer, mental distress and stroke.
Sustained wakefulness affects performance. Going without sleep, or continuing in a sustained state of wakefulness, can have effects on performance similar to the effects of alcohol consumption on cognitive function. After 17 hours of sustained wakefulness, performance decreases to a level similar to performance under a .05 BAC (blood alcohol content). After spending a full 24 hours in a continued state of wakefulness, performance decreases to a level similar to performing with a .10 BAC. The legal BAC limit for operating a motor vehicle is .08.
Fatigue can be fatal. Persistent exhaustion is a constant state of weariness, or fatigue. Fatigue reduces concentration, energy and motivation. The state of fatigue decreases a person’s cognitive abilities by 20 to 50 percent. Cognitive abilities affect everything from attention to reaction time and judgement. According to Harvey, the chance of an accident occurring increases by 400 percent after a worker is on shift 12 hours. As many as 7,500 fatalities occur each year as the result of drowsy driving. Reduced cognitive abilities increase the risk of accidents and fatal hazard. The leading cause of fatigue is inadequate amounts of sleep.
Are you on nightshift?
Since we cannot flip-flop our circadian rhythm, it is important to maintain a routine. Some of the biggest accidents happen at night. Remember to remain vigilant:
- Double-check work
- Have a work buddy
- Utilize break times
Is your bed your sofa too?
Col. Tracy Neal-Walden, chief, Psychological Health, Office of the Air Force Surgeon General, recommends to Airmen living in the dorms that if your bed is also your sofa, be sure to make distinctions in your routine to separate sofa and bed functions. Keep the time your bed is used as a sofa separate from the time you use it as a bed, and create a routine you can stick with. As your sleep time approaches, your bed should function only as a bed.
Experiencing problems sleeping?
If you believe your sleep is disruptive and you are experiencing problems in your work or home life, see your primary care doctor and ask for a visit with your local Internal Behavioral Health consultant. Lt. Col. William Isler, director, Clinical Health Psychology Postdoctoral Fellowship, Wilford Hall Ambulatory Surgical Center, Lackland AFB, Texas, advises that the Behavioral Health Optimization Program (BHOP) is the frontline for Airmen experiencing sleep disruption.
Tools for better sleep:
- CBT-i Coach is an app for self-management of sleep habits. The app is also used to augment face-to-face care with a health care professional, by people engaged in cognitive behavioral therapy for insomnia (CBT-i). The CBT-I app guides users through the process of learning about sleep, developing positive sleep routines and improving their sleep environments.
- After deployment wellness resources for the military community: SLEEP
For more information on sleep hygiene, contact your local behavioral health clinic or base Aerospace and Operational physiologist.
Supplemental information for this article was retrieved from the National Sleep Foundation. |
Science, Technology, Engineering, Art, and Mathematics (STEAM) education has been a focus of development worldwide in recent years. Building upon conventional wisdom about STEM education, new research is inspiring teachers to add an artistic dimension to their math-and-science-based curricula. As little as one visit to a museum can yield “significant and measurable” changes in students, according to a study conducted by the University of Arkansas. The research went on to show students exposed to cultural institutions had overall higher levels of engagement with their studies, better critical thinking skills, more attention to details, and more empathy than their peers.
The link between art and cognitive abilities has been observed outside the classroom as well. Several studies show a direct correlation between artistic hobbies and success in scientific fields at the most elite levels. According to a study conducted by Michigan State University, Nobel Prize-winning scientists are 2.85 times more likely to have an artistic hobby than the general scientific population. The study concludes: “there exist functional connections between scientific talent and arts, crafts, and communications talents so that inheriting or developing one fosters the other.”
But art and creative thinking for scientists are just as important as STEM is for artists. Now more than ever, artists, designers, musicians, and writers are using STEM to find new mediums in which to work, inspiration to draw upon, and tools to hone their crafts. “As an artist working with technology I have found that the sciences dominate my subject matter,” says Jocelyn Klob-DeWitt, Assistant Professor of Art and Design at East Stroudsburg University of Pennsylvania. “My current work discusses biological organisms that have evolved attractive traits. As an artist I want people to be attracted to my work, looking at natural elements like sexual dimorphism and bioluminescence seemed like a logical inspiration.”
Darlene Farris-Labar, Associate Professor of Art and Design using 3D printing to bring Science to life
Likewise, Darlene Farris-LaBar, an Associate Professor of Art and Design also at East Stroudsburg University of Pennsylvania, uses science as inspiration in her art, and passes that passion on to her students. Her current focus is 3D printing microscopic plankton that are not only major producers of oxygen but also incredibly beautiful organisms. “My students have a lot more opportunities out there than I did when I was at their level. Today there’s a new appreciation for what artists can bring to a multi-disciplinary team,” Farris-LaBar says. “Having a STEM skill set not only makes them competitive, but makes them flexible depending on what doors open to them at what times.”
But how are science, technology, engineering, and mathematics being integrated into arts curricula and vice versa? One increasingly popular way instructors at every grade level are boosting student engagement, fostering interest in STEAM topics, and helping their students become more competitive in the job market is through 3D printing.
Ryan Erickson, MakerSpace Coordinator at Cedar Park STEM Elementary School in Apple Valley, MN, needed a way to help students understand challenging geology concepts. “A big part of the fifth grade science standard is how landforms are made through different processes,” says Erickson. “But understanding how natural processes take place over long periods of time can be difficult for students to grasp.”
Enter 3D printing. As part of Erickson’s class, he assigns each group of students a landform and asks them to research the impacts of erosion, deposition, and weathering. “A student might choose a meandering river, so he or she would study why it curves back and forth—as a result of erosion and deposition,” he explains.
Making ideas become reality
After some background research, his students use a digital sculpting tool that simulates a block of clay to recreate the natural processes they’ve learned about. Being able to 3D print their final product, Erickson says, has a huge impact on their grasp of the material. “They hold it in their hands. It becomes real for them because they’ve actually created it. They can touch it and showcase it. 3D printing is Lego for the digital age.”
Other projects Erickson has done with students as young as kindergarten can be seen here.
Motivating students is a constant challenge for Amber Smith, a teacher at Cowan Road Middle School, a Title 1 school in the suburbs of Atlanta, Georgia. Factors outside of the school such as poverty, gang activity, and difficult home lives detract from students’ ability to focus in the classroom. But thanks to a partnership with Georgia Tech, sixth, seventh, and eighth graders have access to a 3D printer right inside Smith’s classroom which, she says, has made an incredible difference.
“I have some students who don’t do any kind of work at all, but they’ll do 3D printing,” Smith says. “It might be a basic design, but they’ll do something. It’s engaging enough that they actually want to complete assignments and then show them off. They love it.”
One project that captivates Smith’s students entails creating their own cell phone cases. Students must measure their phones, and then design and 3D print personalized plastic cases to fit them. Students have been so excited by the project, they’ve competed for the chance to design a cell phone case for the school’s principal.
Project Lead The Way (PLTW) is a non-profit organization that helps K-12 classrooms throughout the United States integrate computer science, engineering, biomedical science and technology into their curricula. The organization emphasizes project-based learning as a way to build critical thinking and problem-solving skills.
In Colorado Springs, Colorado, PLTW state director Bill Lehman helped teachers identify a solution to a problem facing schools throughout the state: the need for a fast, easy and cost-effective way for young engineering students to prototype their designs. Rapid prototyping would allow students to iterate and learn more from the design process as they test ideas, make mistakes, solve design issues, and try again.
After evaluating different prototyping methods and researching available technology, Lehman realized 3D printing was exactly the solution they were looking for. “The price of the 3D printer had come down to a very reasonable level…compared to expensive high-end rapid prototyping systems,” he says. “We could hardly afford not to do it.”
By adding the technology to their classroom, teachers have been able to put more time into design, function, and computer-based skills, rather than traditional trade skills such as metal and woodshop. “The students get really into it,” Bryce McLean, PLTW teacher at Coronado High School says. “The printer allows us…generate a model that is to their exact standards.”
“Our assignment was to design, build and play a musical instrument,” says Joe Noble, a mechanical engineering undergraduate. With a team of two other students, Noble built a working, tunable ukulele. But the team didn’t stop there. Upon realizing they could 3D print with multiple colors, they designed a multi-color 3D version of their school mascot and 3D printed it on the body of the instrument. The instrument itself became a work of art.
“It was one of my favorite classes that I’ve taken at RIT,” Noble says. “I now have a whole other dimension to the possibilities of prototyping. It is a very intriguing field. If I ended up working in it — in the actual development of the processes — I’d be stoked.”
When a diagram doesn’t do a complex mathematical concept justice, Dr. Edward Hanson, professor of mathematics at SUNY New Paltz and volunteer math teacher at Frank McCourt High School in New York City, turns to 3D printing. “I have seen some projects that provide tangible examples of the types of solids (solids of revolution) that are constructed in a calculus II course,” Hanson says. “These are particularly exciting to me because they give a physical presence to theoretical objects that can often be difficult to describe or draw.”
Through his work at Frank McCourt High School, he has seen the incredible and immediate reaction students have to the technology. “Student response was amazing. In little time they were designing interesting objects using software. They were fascinated by the presence of the 3D printer in their classroom,” he says. The technology was so interesting to students, Hanson’s teaching opportunities expanded beyond his own classroom roster as other students stopped by regularly to ask questions and watch the 3D printer in action.
Ready, Steady, Practise! at
Thatcham Park Primary School
Mrs Sheen’s Year 5 lesson for multiplying and dividing by 10, 100 and 1000 using the Ready, Steady, Practise! Mental Arithmetic pupil books.
After Mrs Sheen models the concept on the place value grid, the children return to their desks and - depending on how confident they feel - pick which activity they’d like to try from the differentiated exercises in Ready, Steady, Practise!
“As a school we try to encourage the children to choose their own activity – this stretches them, as if they feel confident they can go for a more difficult exercise.”
Using the books
The children work on their exercises, comparing with their neighbours and getting help from Mrs Sheen or her Teaching Assistant. They then choose a new harder or easier exercise depending on how they’re getting along.
“It makes it easier to have ready to use exercises for the lessons. We often use the books in intervention groups too. They’re easy to work from and the children seem to really enjoy using them.”
Mrs Sheen puts the answers at the front of the class so the children can mark their own work. The children huddle around and compare marks, then write down how they feel about the exercise.
“We encourage the children to assess themselves and decide whether they’ve understood the concept or need more help.”
Mapping where animal agriculture competes with carbon dioxide removal.
Farmed animals emit greenhouse gases like methane, so consuming fewer animals is known to reduce these emissions. But what would happen with the land that used to feed them? Restoring native ecosystems could reduce carbon emissions further, but benefits vary widely.
On several trips to the Brazilian Amazon rainforest for my PhD research, I climbed a skinny tower 30 meters above the forest canopy to install equipment that measured weather and greenhouse gases. Our research contributed to a deeper understanding of how this important ecosystem, with its biodiversity, water cycling, and critical stores of carbon would be affected by climate change in the decades to come.
My team and I saw firsthand how recent fires near the research area had razed vast sections of rainforest, no less important than the ones we were studying. These were turned into pastures for the Brazilian state of Pará’s booming cattle industry, driven by rising affluence and a taste for red meat.
My colleagues and I understood that the land requirements for meat production are tremendous. Whether raising beans, beef, or papayas, growing these commodities in previously forested areas prevents trees from spreading their seeds and restoring forests. But beef is an especially land-hungry commodity, while fruit orchards can fetch producers a higher market value per hectare. An equal amount of protein in beans or nuts can grow on a small fraction of the land, sparing the remainder for forests and removing CO2 from the atmosphere in the process.
Recent calls to save the climate by planting a trillion trees seem to miss this important point: the most resilient swaths of forest like those in the Amazon or Eastern US will grow back without humans planting anything. Trees need more assistance from us in drier climates; dry regions can be especially vulnerable to wildfires, which are increasingly severe with rising temperatures in regions like the Western US.
To investigate where shifts in food production are suppressing natural ecosystem regrowth the most, we mapped areas where forests and other ecosystems would exist following human abandonment. We combined maps of this “potential vegetation” with maps of animal feed crops and pasture to see where ecosystems could work to remove CO2, if humanity shifted to less land-exhaustive forms of agriculture.
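As a rough illustration of that map-overlay step, the sketch below masks a hypothetical grid of potential ecosystem carbon with a grid of pasture and feed-crop fractions. The arrays, numbers and variable names are invented for illustration only; they are not the study's data or code.

import numpy as np

# Hypothetical per-cell values (tonnes of carbon per hectare), not real data
potential_carbon = np.array([[120.0, 80.0], [15.0, 60.0]])  # carbon if native vegetation regrew
current_carbon = np.array([[20.0, 25.0], [10.0, 18.0]])     # carbon stored under agriculture today
pasture_fraction = np.array([[0.9, 0.1], [0.5, 0.0]])       # share of each cell in pasture/feed crops
cell_area_ha = 100.0

# Carbon-removal potential per cell if the pasture/feed-crop share reverted to ecosystems
removal_potential = (potential_carbon - current_carbon) * pasture_fraction * cell_area_ha
print(removal_potential)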
We found that not all animal agricultural production is created equal: some areas, like parts of Europe and the South American Atlantic forests, have great potential to remove carbon by scaling back production. Although less than a quarter of ruminant livestock pastures are in naturally forested areas, two thirds of the potential ecosystem carbon removal lies in these places.
We also found that future shifts to more plant-focused diets, with far less meat consumption in developed countries, could spare enough land for ecosystems to remove the past 9 years of human fossil fuel CO2 emissions. At the farthest end of the spectrum, hypothetical vegan diets could remove 16 years’ worth.
Most of the opportunity for ecosystem restoration, and the CO2 removal that follows, exists in high- and upper-middle-income countries, where scaling back on land-hungry meat and dairy would have relatively minor impacts on food security. In such regions, shifts toward more plant-based diets could get us a long way toward increasing Earth’s rapidly shrinking carbon budget to stay under 1.5 degrees C of warming.
Although many have questioned its overall safety and cost benefits, nuclear power remains a major workhorse in low-carbon power generation today. The International Atomic Energy Agency (IAEA) has identified three technologies as the keys to the future success of nuclear energy: fast reactors, small modular reactors, and fusion reactors.
Fast reactors, also known as fast-neutron reactors, have been extensively investigated and widely deployed across the globe. Reactors that use this technology require no neutron-moderating medium such as light water, heavy water, or graphite, which conventional reactors need to transform fast neutrons (high in kinetic energy) into thermal neutrons (the kind that sustain fission in those designs). Although fast reactors consume fuels that are more enriched, they generate less toxic nuclear waste, dramatically reduce the decay time of that waste, and exhaust almost all of their fuel material (a high burnup rate).
Small modular reactors are smaller in size, manufactured at a central location, and ready for shipment by truck or plane to the application site. This technology requires less on-site construction and a smaller budget, can satisfy the electricity needs of remote communities, and comes with built-in passive safety measures to ensure meltdown-free operation.
Related reading: Small modular reactors to be a part of Canada's Low-Carb Energy Diet
Unlike the two more tangible options mentioned above, fusion-based power generation is far from maturity. Despite having many advantages over fission-based reactors, such as reduced radiation and waste, an almost endless fuel supply, and increased safety, fusion reactions in a controlled environment still face technical challenges such as plasma heating and stability, confinement and exhaust of energy and particles, and reactor safety and environmental compatibility, not to mention overblown budgets for construction and testing. Large-scale fusion projects include the International Thermonuclear Experimental Reactor (ITER) in France and the National Ignition Facility (NIF) in the US. A considerable amount of progress has been made since the middle of the last century, but so far no experimental design has managed to produce positive net energy in a meaningful time window to allow power generation.
- To specify a range of consecutive numbers, enter the first and last number separated by a hyphen. For example, to specify all numbers between 5 and 10 inclusive, enter 5-10.
- To specify several non-consecutive numbers, separate each one with a comma. For example, for the numbers 1, 5 and 9 only, enter: 1,5,9.
- Ranges can include combinations of these two elements. For example, to specify the numbers 1, 3, and all the numbers between 10 and 15, enter: 1,3,10-15.
- Negative numbers can also be entered. For example, to specify all numbers between -2 and 4, and all numbers between -10 and -5, enter: -2-4, -10--5. Don't worry if you end up with two hyphens in a row - Worksheet Genius will figure it out.
- Numbers do not need to be entered in ascending order. For example, 13,5:9,20,-5 is fine.
- You can specify a step size between each number in a range - this can be used to generate multiples. To generate all the multiples of 5 between 5 and 100, enter 5-100(+5). To generate all the odd numbers between 1 and 20, enter 1-20(+2). To generate all the two digit numbers ending in 6 between 16 and 106, enter 16-106(+10).
For those of you that preferred our older method of using a colon to define a range (5:10, for example), you can still use that too if you want to. |
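For anyone curious how range strings like these could be interpreted programmatically, here is a minimal Python sketch. It is not Worksheet Genius's actual parser; it simply expands hyphen (or legacy colon) ranges, comma-separated values, negative numbers and the optional (+step) suffix described above.

import re

def expand_range_spec(spec):
    """Expand a spec like '1,3,10-15', '-2-4, -10--5' or '5-100(+5)' into a sorted list."""
    numbers = set()
    for part in spec.split(","):
        part = part.strip().replace(":", "-")            # support legacy colon ranges, e.g. 5:9
        if not part:
            continue
        step = 1
        step_match = re.search(r"\(\+(\d+)\)$", part)    # optional step suffix, e.g. (+5)
        if step_match:
            step = int(step_match.group(1))
            part = part[:step_match.start()]
        range_match = re.fullmatch(r"(-?\d+)-(-?\d+)", part)   # first-last; either may be negative
        if range_match:
            first, last = int(range_match.group(1)), int(range_match.group(2))
            lo, hi = min(first, last), max(first, last)
            numbers.update(range(lo, hi + 1, step))
        else:
            numbers.add(int(part))                       # a single number
    return sorted(numbers)

print(expand_range_spec("1,3,10-15"))      # [1, 3, 10, 11, 12, 13, 14, 15]
print(expand_range_spec("-2-4, -10--5"))   # -10 through -5 and -2 through 4
print(expand_range_spec("5-100(+5)"))      # multiples of 5 from 5 to 100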
Arctic Ocean Currents
The majority of the world's population does not live in the Arctic. But even if you don't live in the Arctic, it is very important to understand how the Arctic Ocean works because it has an impact on surrounding areas and on global climate.
If you look at the map on this page, you'll see how water moves through the Arctic Ocean. Cold, relatively fresh water comes into the Arctic Ocean from the Pacific Ocean through the Bering Strait. This water meets more fresh water from rivers and is swept into the Beaufort gyre where winds force the water into clockwise rotation. When winds slack off and the gyre weakens, fresh water leaks out of the gyre and into the North Atlantic Ocean (follow the blue lines on the illustration toward the bottom of the map). Of course, water can go both ways, and water does come into the Arctic Ocean from the North Atlantic (red lines on the map). This water is warmer and relatively salty. Because of its increased salinity, it is denser and sinks below Arctic waters.
The production of sea ice is also important to the layering of water in the Arctic Ocean. As sea ice is made near the Bering Strait, salt is released into the remaining non-frozen water. This non-frozen water becomes very salty and very dense and so it sinks below the cold, relatively fresh Arctic water, forming a layer known as the Halocline. The Halocline layer acts as a buffer between sea ice that has been made and the warm, salty waters that have come in from the Atlantic. Without the Halocline layer, the Atlantic waters would enter the Arctic and would begin to melt existing sea ice.
Scientists are very concerned about melting sea ice due to a different process -- global warming. Some fresh water flowing out of the Beaufort gyre and into the North Atlantic is expected as a result of natural processes. But melting sea ice due to global warming will increase the amount of fresh water in the Beaufort Gyre. That means more and more fresh water will spill out into the Atlantic, and many scientists think this could be a big problem that could cause major climate shifts in North America and Western Europe. Normally, these regions have mild climates because a system of ocean currents called the Global Ocean Conveyor carries heat as well as matter around the globe, and as it passes North America and Western Europe it warms those regions by releasing heat that it picked up in the tropics. If, however, there is a larger-than-normal layer of fresh water on top of the North Atlantic Ocean, scientists believe that it might act as a barrier preventing the warmer water from releasing its heat (therefore causing cooler temperatures in North America and Europe!).
There has been a lot of Arctic research in the last decade. The Arctic region is a difficult place to do research, with high winds, very low temperatures, thick sea ice and even polar bears who probe (and try to eat!) instruments sitting on the ice. So in exploring Arctic currents, scientists have had to be creative. A very successful tool has been Ice-Tethered Profilers (ITPs). They measure temperature, salinity, and other water properties as they travel up and down a wire rope hanging down to 800 meters (~0.5 miles) into the Arctic Ocean. They can also measure surface currents as they drift through the Arctic. Although the floats won’t replace in-person measurements by scientists, they will allow year-round research in many areas that are too remote or dangerous for people (and they are polar bear proof!). In fact, ITPs make their measurements and send the data to computers via satellite so scientists can access the data from anywhere in the world. ITPs are just one more step forward in research that will hopefully shed more light on the Arctic Ocean, its currents, and its contributing role in regional and global climate.
ACT OF SETTLEMENT (1781)
The Act of Settlement 1781 was passed by the British Parliament on 5th July 1781 to remove the defects of Regulating Act of 1773. The key provision of this Act was to demarcate the relations between the Supreme Court and the Governor General in Council. This Act is popularly known as “ The Amending Act of 1781” or “Declaratory Act of 1781”.
The basic and fundamental aim of this Act was to establish a new system of Courts to remove the grievances against the Supreme Court and the failure of the Regulating Act’s aim of controlling administration through Judiciary.
The conflict between the Supreme Court and the Supreme Council reached its apex during 1779-1780. The Supreme Council then filed a petition against the improper working of the Supreme Court in Bengal. Similar petitions were filed by various zamindars, company servants and so on. Parliament therefore appointed a committee (the Touchet Committee) to enquire into this matter and report as quickly as possible. The committee submitted its report and, as a result, Parliament passed the Act of Settlement in 1781.
Reasons for passing of Act of Settlement 1781:-
- Though the Regulating Act of 1773 brought a major change in the system of government, it left certain loopholes that it failed to close. Those loopholes were later addressed by the “Act of Settlement 1781”.
- Some issues arose with the administration of Warren Hastings, which led to a lot of discontent and criticism among the people. Examples of such issues include the Patna Case, the Cossijurah Case and the Nand Kumar Case.
- There was a huge rift between the Supreme Court and the Governor-General in Council, which unbalanced the administration to a certain extent.
- There was agitation among the people because the Government interfered in the personal laws of the communities.
Aim of the Act of Settlement 1781:-
- To indemnify the Governor-General and the Officers of the Council who acted under their orders in undue resistance made to the process of the Supreme Court.
- To remove the doubts and difficulties of the Regulating Act and the Charter which basically created divisions between the Government and Court.
- To provide assistance to the Government of Bengal, Bihar and Orissa so that the revenue can be collected with certainty at any point of time.
- To protect the Rights, Usages, and Privileges of the Indigenous people.
Features of the Act of Settlement 1781
- The Governor-General and Council were exempted from the Jurisdiction of the Supreme Court for the acts done in official way.
- It excluded the matters related to revenue from the Jurisdiction of the Supreme Court.
- It even exempted the servants of the company from the Jurisdiction of the Supreme Court for their Official actions.
- It provided that the Supreme Court should have the Jurisdiction over all the Inhabitants. It also asked the Court to administer the personal law of the “Defendants”.
- It laid down that the appeals from the Provincial Courts could be taken to the Governor-General-In Council but not to the Supreme Court.
- It basically empowered the Governor-General-In-Council to frame Rules and Regulations for the Provincial Courts and Councils.
The above shows that the “Act of Settlement 1781” was the first attempt in India towards the separation of the Executive from the Judiciary by defining their respective areas of jurisdiction.
The “Act of Settlement 1781” tried to reconcile the differences and misunderstandings between the Supreme Court and the Supreme Council to ensure harmony in the working of these two vital organs of the government system. Let us now look at some more provisions of this Act.
They are as follows :-
- This Act provided that the Governor-General and Council were completely immune from the Jurisdiction of the Supreme Court for the orders passed or Act done by them in public.
- The Supreme Court had no Jurisdiction in the matters concerning the collection of revenue.
- The Jurisdiction of the Supreme Court was precisely defined in this Act. Section 9 of the Act of Settlement provided that persons who had any interest or control over lands and rents in the provinces of Bengal, Bihar and Orissa were immune from the Jurisdiction of the Supreme Court.
- This Act provided that persons who were employed by the Company or under the Governor-General were excluded from the Jurisdiction of the Supreme Court in matters relating to Inheritance, Succession and Contracts, except in cases of wrongs or trespasses.
- This Act provided that no action can be taken against any Judicial Officer in the Supreme Court for any act done by him/her in exercise of his/her Judicial functions.
- The British Parliament for the very first time gave recognition to the Civil and Criminal Jurisdiction of the Provincial Courts which were Independently existing. (Under this Act)
- This Act specified that the Civil and Religious Usages of the Natives and their Ancient Rites must be Protected, Preserved and Safeguarded.
- The framing of Rules and Regulations for conducting Civil Suits and Criminal Trials was only possible due to this Act, since it empowered the Governor-General and the Council for the same.
Landmark case :-
In the Bampton v. Petumber Mullick case, a dispute arose regarding the Jurisdiction of the Supreme Court over Indians in relation to a piece of land which was situated partly in Calcutta and partly outside it. Section 17 of the Act of Settlement explicitly provides that the Supreme Court shall have Jurisdiction over all the Indigenous people of Calcutta.
Later it was found that the Supreme Court shall have the power and Jurisdiction over the Indigenous people of Calcutta, but their cases relating to Inheritance, Succession, Rent, Goods and so on shall be decided according to their Personal law, i.e. Hindu law for Hindus and Mohammedan law for Muslims.
Though this Act benefited a large section of the people, it still had certain impacts on society.
- This Act completely favoured the Council and gave it superior authority over the Judiciary.
- The Executive was strongly strengthened to ensure that the British were able to maintain their hold on the Indian Empire.
- It was the very first attempt to separate the Executive branch of Government from the Judiciary.
Though this Act failed to remove all the flaws of the Regulating Act of 1773, it proved to be of great support in bringing changes to the system of administration and justice.
Author: Shreya Kaul,
STUDENT AT AMITY UNIVERSITY MP (2020-25) |
To understand how the human eye works, first imagine a photographic camera—since cameras were developed very much with the human eye in mind.
How do we see what we see?
Light reflects off of objects and enters the eyeball through a transparent layer of tissue at the front of the eye called the cornea. The cornea accepts widely divergent light rays and bends them through the pupil—the dark opening in the center of the colored portion of the eye.
The pupil appears to expand or contract automatically based on the intensity of the light entering the eye. In truth, this action is controlled by the iris—a ring of muscles within the colored portion of the eye that adjusts the pupil opening based on the intensity of light. (So when a pupil appears to expand or contract, it is actually the iris doing its job.)
The adjusted light passes through the lens of the eye. Located behind the pupil, the lens automatically adjusts the path of the light and brings it into sharp focus onto the receiving area at back of the eye—the retina.
An amazing membrane full of photoreceptors (a.k.a. the “rods and cones”), the retina converts the light rays into electrical impulses. These then travel through the optic nerve at the back of the eye to the brain, where an image is finally perceived.
A delicate system, subject to flaws.
It’s easy to see that a slight alteration in any aspect of how the human eye works—the shape of the eyeball, the cornea’s health, lens shape and curvature, retina problems—can cause the eye to produce fuzzy or blurred vision. That is why many people need vision correction. Eyeglasses and contact lenses help the light focus images correctly on the retina and allow people to see clearly.
In effect, a lens is put in front of the eye to make up for any deficiencies in the complex vision process.
The main parts of the human eye include:
- Cornea: transparent tissue covering the front of the eye that lets light travel through
- Iris: a ring of muscles in the colored part of the eye that controls the size of the pupil
- Pupil: an opening in the center of the iris that changes size to control how much light is entering the eye.
- Sclera: the white part of the eye that is composed of fibrous tissue that protects the inner workings of the eye
- Lens: located directly behind the pupil, it focuses light rays onto the retina
- Retina: membrane at the back of the eye that changes light into nerve signals
- Rods and cones: special cells used by the retina to process light
- Fovea: a tiny spot in the center of the retina that contains only cone cells. It allows us to see things sharply.
- Optic Nerve: a bundle of nerve fibers that carries messages from the eyes to the brain
- Macula: a small and highly sensitive part of the retina responsible for central vision, which allows a person to see shapes, colors, and details clearly and sharply.
Special thanks to the EyeGlass Guide, for informational material that aided in the creation of this website. Visit the EyeGlass Guide today! |
Write a Python program for counting sort.
According to Wikipedia “In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that have each distinct key value, and using arithmetic on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum and minimum key values, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. However, it is often used as a subroutine in another sorting algorithm, radix sort, that can handle larger keys more efficiently”.
def counting_sort(array1, max_val):
    m = max_val + 1
    count = [0] * m              # one counter for each possible key 0..max_val
    for a in array1:             # count occurrences of each key
        count[a] += 1
    i = 0
    for a in range(m):           # write each key back count[a] times, in ascending order
        for c in range(count[a]):
            array1[i] = a
            i += 1
    return array1

print(counting_sort([1, 2, 7, 3, 2, 1, 4, 2, 3, 2, 1], 7))
[1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 7]
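The Wikipedia description above also mentions using arithmetic on the counts to determine each key's position in the output, which is what makes counting sort stable and usable as a subroutine in radix sort. The variant below is an illustrative sketch of that idea and is not part of the original recipe: prefix sums turn the counts into starting positions, and equal keys keep their original order.

def counting_sort_stable(array1, max_val):
    counts = [0] * (max_val + 1)
    for a in array1:                                    # count occurrences of each key
        counts[a] += 1
    total = 0
    for k in range(max_val + 1):                        # prefix sums: counts[k] becomes the
        counts[k], total = total, total + counts[k]     # first output index for key k
    output = [0] * len(array1)
    for a in array1:                                    # place items; equal keys keep their order
        output[counts[a]] = a
        counts[a] += 1
    return output

print(counting_sort_stable([1, 2, 7, 3, 2, 1, 4, 2, 3, 2, 1], 7))
# [1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 7]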
Disclaimer: The information and code presented within this recipe/tutorial is only for educational and coaching purposes for beginners and developers. Anyone can practice and apply the recipe/tutorial presented here, but the reader is taking full responsibility for his/her actions. The author (content curator) of this recipe (code / program) has made every effort to ensure the accuracy of the information was correct at time of publication. The author (content curator) does not assume and hereby disclaims any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from accident, negligence, or any other cause. The information presented here could also be found in public knowledge domains. |
This game is presented as a single-player game. It would be most applicable in a computer lab or similar classroom setting, where each student can play along on his or her own. Additionally, teachers could encourage kids to play through the game at home and be prepared to discuss what they learned in class the next day.
Kids are presented with information about birds, their migration patterns, and general details about their ecosystem, as they play through mini-games that are loosely connected to this material. For example, there is an arcade-style mini-game where kids have to avoid crashing into airplanes and other hazards, which is not exactly a realistic depiction of the typical bird migration practice. However, these mini-games keep kids engaged and focused, which helps when the game interrupts the mini-games to present factual content.
The educational material in this game comes directly from the National Audubon Society, so it can be trusted and used alongside any relevant curriculum. It is not built specifically as a classroom tool, but if kids play through the game on their own, they are likely to learn something about the complex ecosystem and migration patterns of birds. Unfortunately, most of this learning is placed on top of the gaming, instead of being baked into the gameplay.
Key Standards Supported
Reading Informational Text
Ask and answer questions to demonstrate understanding of a text, referring explicitly to the text as the basis for the answers.
Determine the main idea of a text; recount the key details and explain how they support the main idea.
Refer to details and examples in a text when explaining what the text says explicitly and when drawing inferences from the text.
Determine the main idea of a text and explain how it is supported by key details; summarize the text. |
The very first mammals were reptile-like creatures that laid eggs.
It turns out the ability to nurse their young — a trait unique to mammals — could have led our distant ancestors away from egg-laying, as developing offspring were able to shift from a yolk to a milk diet.
All mammals have at least four physical traits in common. We all possess hair at some point — even whales and naked mole rats. We all have three bones in our middle ear that help amplify sound. We all possess a neocortex in our brains, a structure responsible for higher brain functions. And all mammal species can produce milk.
"The reason we're known as mammals is because of our mammary glands," explained researcher Henrik Kaessmann, an evolutionary biologist at the University of Lausanne in Switzerland. "Nourishment with milk is a key feature of mammals. It's at the center of our story. And we wanted to know how that came about, how we came about."
To better understand how the distant ancestors of humanity and other mammals evolved from reptile-like creatures that laid eggs, Kaessmann and his colleagues investigated genes linked with eggs and milk.
There are three major types of mammals alive today. These include humans and other placentals, kangaroos and other marsupials, and duck-billed platypuses and the scant few remaining egg-laying monotremes. The scientists compared the genes of representatives from these different mammal lineages with those of chickens (which are egg-laying and milkless, naturally).
DNA accumulates mutations over time, serving like a clock. The new findings suggest the genes for "casein" proteins found in milk arose in the common ancestor of all mammals between 200 million and 310 million years ago.
In contrast, genes for proteins called vitellogenins that provide the nutrients found in egg yolk were gradually lost in all mammals, except the monotremes, just 30 million to 70 million years ago. (Since monotremes still lay eggs, they naturally kept some yolk proteins.) The three genes for vitellogenins found in the chicken all became mutated, useless "pseudogenes" in placentals and marsupials, and just one functional vitellogenin gene is seen in monotremes.
The evolution of milk reduced the need that mammal offspring had for the nutrients in the yolk and therefore eggs, the researchers suggest. Eventually, marsupials and placentals abandoned egg-laying completely, leading genes linked with egg yolk to mutate and stop functioning over time.
Indeed, the evolution of milk "seemed to have triggered the chain of events behind the complete loss of egg yolk genes," Kaessmann explained.
"These findings shed light on the big question of when and how the transition from eggs happened in mammals," he said.
Kaessmann and his colleagues David Brawand and Walter Wahli detailed their findings online March 17 in the journal PLoS Biology. |
High-temperature superconductors (abbreviated high-Tc or HTS) are materials that behave as superconductors at unusually high temperatures. The first high-Tc superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller, who were awarded the 1987 Nobel Prize in Physics “for their important break-through in the discovery of superconductivity in ceramic materials”.
Whereas “ordinary” or metallic superconductors usually have transition temperatures (temperatures below which they are superconductive) below 30 K (−243.2 °C), and must be cooled using liquid helium in order to achieve superconductivity, HTS have been observed with transition temperatures as high as 138 K (−135 °C), and can be cooled to superconductivity using liquid nitrogen. Until 2008, only certain compounds of copper and oxygen (so-called “cuprates”) were believed to have HTS properties, and the term high-temperature superconductor was used interchangeably with cuprate superconductor for compounds such as bismuth strontium calcium copper oxide (BSCCO) and yttrium barium copper oxide (YBCO). Several iron-based compounds (the iron pnictides) are now known to be superconducting at high temperatures.
In 2015, hydrogen sulfide (H2S) under extremely high pressure (around 150 gigapascals) was found to undergo a superconducting transition near 203 K (−70 °C), making it the highest-temperature superconductor known to date. |
A1c stands for the Hemoglobin A1c test, and the letters A1c stand for the type of hemoglobin A that is associated with diabetes.
Here are some details about it.
Because the short name A1c has many different meanings, we focus here only on its meaning in medicine.
In the US it is known as A1c and Hemoglobin A1c.
Outside the US it has many different synonyms, such as glycated hemoglobin, glycohemoglobin, and glycosylated hemoglobin.
Other acronyms for A1c include: HA1c, HbA1c, Hgb, Hgb%, and hgba1c.
The A1c test is a blood analysis for people with diabetes, which measures how much sugar is attached to the hemoglobin in the red blood cells.
About 8% of hemoglobin A is made up of small fractions, which include hemoglobin A1c, A1b, A1a1, and A1a2.
Total hemoglobin A makes up about 90% of hemoglobin itself.
To know more about hemoglobin itself, visit the Hgb page.
Hemoglobin A1c is the type of hemoglobin that attaches glucose molecules as the red blood cells live and travel inside our blood vessels.
The cumulative sugar measure is produced when glucose binds to hemoglobin molecules in the red blood cells,
hemoglobin being the protein responsible for the transport of oxygen in the blood.
The hemoglobin becomes glycated when glucose binds to it.
As blood glucose increases, the glucose carried on hemoglobin (glycohemoglobin) also increases and remains there until the end of the red blood cells' life.
In 2009, an international expert committee recommended that the hemoglobin A1c test be used as a diagnostic test for type 2 diabetes and prediabetes.
Because red blood cells live for about 120 days, they retain this glycated hemoglobin for the same period.
The test therefore reflects an average over that whole period, because the red blood cell population is continuously regenerated.
The A1c blood test helps identify several health risks, including:
- Diabetes and prediabetes in adults and children
- Kidney inflammation due to increased blood sugar levels
Hgb% tells how much sugar is attached to the blood's hemoglobin protein.
The result tells how well your prescribed treatment controlled the amount of sugar in the blood over the past two to three months.
To find out what a good A1c level is for you, please visit the A1c Chart page. |
Shakespeare and the plague
Shakespeare and the plague
by William H. Benson
April 13, 2020
In Thomas Dekker’s first pamphlet, The Wonderfull Yeare, he highlights three events that occurred in England in the year 1603.
First, on March 24, 1603, Queen Elizabeth of England died in her bed of natural causes. Six months shy of her 70th birthday, she had ruled England for 44 years, since Nov. 17, 1558.
Dekker writes, “She dyed, resigning her Scepter to posterity, and her Soule to immortality. The report of her death (like a thunder clap) tooke away hearts from millions.”
Second, Dekker writes of the ascension of James I of Scotland to England’s throne.
“England and Scotland are now made sure together, and king James, his Coronation is the solemn day.”
Third, Dekker describes the plague that struck London that summer of 1603.
King James fled the city, but before leaving, he dictated a book of Orders, a list of procedures intended to halt the plague’s spread. In it, he ordered houses of the sick sealed for six weeks, and the sick were “restrained from resorting into company of others.”
Remedies he listed in his Orders included: eating vinegar, butter, cinnamon, and onions; purging and bloodletting; wearing over nose and mouth “a handkerchief dipped in vinegar;” and holding onto a bunch of herbs, such as “rosemary, juniper, bay leaves, frankincense, sage, and lavender.”
Dekker described the depressing sight of London’s streets, littered with ineffective dead herbs, lying alongside the sick and the dying.
“Where all the pavement should instead of green rushes, be strewed with blasted Rosemary, withered Hyacinthes, fatall Cipresse, thickly mingled with heapes of dead men’s bones.”
Dekker did not know that a bacteria caused the plague, Yersinia pestis, carried by fleas that would bite an infected animal, like a rat, a mouse, or a squirrel. Because London was filthy then, rats proliferated. Once the bacteria passed from rat to flea to people, it then spread person to person.
The bacteria caused buboes, or swollen and painful lymph nodes under the arms, around the neck, and in the groin; bleeding under the skin, or from the mouth and nose; severe belly pain, diarrhea and vomiting; or, in its worst form, pneumonia.
A usual first symptom was the red rash in the shape of a ring on the skin. “Ring around the rosy.” People would stuff their pockets and pouches with sweet-smelling herbs. “A pocket full of posies.” Mass cremations replaced burials. “Ashes, Ashes, we all fall down.”
One of the first London businesses to close its doors whenever a wave of the plague appeared was the theatre. After the Globe Theater closed its doors in the summer of 1603, the young playwright William Shakespeare fled the city to live for months in his hometown of Stratford-upon-Avon.
One writer said it best, “William Shakespeare lived the whole of his life under the terrible cloud of Death,” caused by repeated waves of the Bubonic Plague, that struck down young and old alike.
John and Mary Shakespeare’s first child Joan died after her baptism, as did their second child, Margaret. William, their third child, was born in April of 1564, and he lived, but then their sixth child, Anne, died in 1579, when just seven years old. The plague took three of William’s siblings.
William Shakespeare’s wife, Anne Hathaway, gave birth to three children, Susanna, and a set of twins, Hamnet, a son, and Judith, a daughter. At the age of 11, Hamnet died from the plague.
The plague struck down, not only family members, but also nephews, nieces, cousins, and actors in Shakespeare’s two acting companies—the Lord Chamberlain’s Men and the King’s Men.
Shakespeare used his time away from London wisely, to think about his next plays. In 1593, the plague forced London’s theatres to close their doors, and when they re-opened in 1594, he wrote “Romeo and Juliet.”
In “Romeo and Juliet,” Romeo did not receive the message that the friar’s drug would not kill Juliet, but only cause her to appear dead. The messenger explained, “The searchers of the town, suspecting that we both were in a house, where the infectious pestilence did reign, sealed up the doors and would not let us go forth.”
Once the Globe Theater re-opened its doors in 1604, Shakespeare began to write his greatest tragedies: “Othello” in 1605, “King Lear” in 1606, “Macbeth” in 1606, and “Antony and Cleopatra” in 1607.
A Columbia professor, Edward Taylor, once revealed his admiration for “King Lear.”
“This is the greatest thing written by anyone, anytime, anywhere, and I don’t know what to do with it.”
Life handed Shakespeare a lemon, a summertime plague, and he made lemonade, “King Lear.”
The plague of 1603 killed 33,347 of London’s citizens, a quarter of the city’s population. From then until 1665, only four years had no recorded cases of people dying from the plague.
Shakespeare died 404 years ago this month, on April 23, 1616, of mysterious causes. |
Cook, James (1728–1779), a British navigator. Captain Cook accurately charted vast regions of the South Pacific; provided a basis for England's claim to Australia and New Zealand; and developed a diet that prevented scurvy among seamen. Cook was born of farming parents in Yorkshire. He went to sea as a boy and joined the Royal Navy in 1755. His seamanship and diligence soon gained recognition, and four years later he was made master of a naval sloop. From 1763 to 1767 he explored the St. Lawrence River and the shores of Labrador and Newfoundland.
In 1768, with a group of scientists, Cook set out on his first expedition, sailing around Cape Horn. The immediate purpose was to observe the transit of the planet Venus from the vantage point of Tahiti. On this voyage, which continued until 1771, the party went on to explore the coasts of New Zealand and to chart the eastern coast of Australia.
As a result of this expedition, Cook was promoted to commander in the navy and was sent with two ships to determine whether there was a continent at the southern extremity of the earth. Although they did not sight Antarctica, the explorers were the first to cross the Antarctic Circle. During this expedition of 1772–75, Cook sailed around the world far to the south, mapping the South Pacific and other southern areas. By providing the crews with sufficient vegetables, Cook proved that scurvy, a disease caused by lack of vitamin C, need no longer plague men on long sea voyages.
Cook was promoted to captain and on his third voyage of discovery, 1776–78, undertook a search for the Northwest Passage—a linking of the Atlantic and Pacific oceans by way of Arctic regions. He approached from the Pacific side and discovered the Sandwich (Hawaiian) Islands. Although he found no passage through the ice, Cook explored the northwest coast up to the Bering Strait. After his return to Hawaii, he was killed by a native because of a misunderstanding over a missing boat. |
Source Program vs Object Program
Source program and object program are two types of programs found in computer programming. Source program is typically a program with human readable machine instructions written by a programmer. Object program is typically a machine executable program created by compiling a source program.
What is Source Program?
Source program is a code written by a programmer usually using a higher level language, which is easily readable by the humans. Source programs usually contain meaningful variable names and helpful comments to make it more readable. A source program cannot be directly executed on a machine. In order to execute it, the source program is compiled using a compiler (a program, which transforms source programs to executable code). Alternatively, using an interpreter (a program that executes a source program line by line without pre-compilation) a source program may be executed on the fly. Visual Basic is an example of a compiled language, while Java is an example of an interpreted language. Visual Basic source files (.vb files) are compiled to .exe code, while Java source files (.java files) are first compiled (using javac command) to bytecode (an object code contained in .class files) and then interpreted using the java interpreter (using java command). When software applications are distributed, typically they will not include source files. However, if the application is open source, the source is also distributed and the user gets to see and modify the source code as well.
What is Object Program?
Object program is usually a machine executable file, which is the result of compiling a source file using a compiler. Apart from machine instructions, they may include debugging information, symbols, stack information, relocation and profiling information. Since they contain instructions in machine code, they are not easily readable by humans. But sometimes, object programs refer to an intermediate object between source and executable files. Tools known as linkers are used to link a set of objects into an executable (e.g. C language). As mentioned above, .exe files and bytecode files are object files produced when using Visual Basic and Java respectively. .exe files are directly executable on the Windows platform, while bytecode files need an interpreter for execution. Most software applications are distributed with the object or executable files only. Object or executable files can be converted back to their original source files by decompilation. For example, java .class files (bytecode) can be decompiled using decompiler tools into their original .java files.
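As a rough illustration only (the article's own examples are Visual Basic and Java), the following Python sketch shows the same source-versus-object relationship using Python's standard library: the human-readable .py file plays the role of the source program, and the compiled .pyc bytecode file plays the role of the object program. The file name hello.py is made up for this demonstration.

import dis
import py_compile

source_file = "hello.py"                  # hypothetical source file name for this demo
with open(source_file, "w") as f:
    f.write("print('Hello, world')\n")    # human-readable source code

# Compile the source program into a bytecode "object program" (.pyc file)
object_file = py_compile.compile(source_file)
print("Source program:", source_file)
print("Object program:", object_file)     # e.g. __pycache__/hello.cpython-311.pyc

# Disassembling the compiled code shows why object code is not meant for human reading
with open(source_file) as f:
    dis.dis(compile(f.read(), source_file, "exec"))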
What is the difference between Source Program and Object Program?
Source program is a program written by a programmer, while an object program is generated by a compiler using one or more source files as input. Source files are written in higher level languages such as Java or C (so they are easily readable by humans), but object programs usually contain lower level languages such as assembly or machine code (so they are not human readable). Source files can be either compiled or interpreted for execution. Decompilers can be used to convert object programs back to their original source file(s). It is important to note that the terms source program and object program are used as relative terms. If you take a program transformation program (like a compiler), what goes in is a source program and what comes out is an object program. Therefore, an object program produced by one tool can become a source file for another tool. |
Let's look at a few basic concepts that underlie the entire discipline of economics.
First, economics assumes that people make rational choices in the face of scarcity. It assumes that people can and will rationally decide what they want and what they are willing to do without. In some cases, this may seem odd. Are economists saying that people deliberately choose jobs with low pay? Are they saying that people choose poorly made products over high quality ones?
Well, in a way, yes, they are saying that. While individuals may face a limited range of choices or make bad choices, they do make choices. Economics aims to explain and predict people's choices and the ways in which various conditions affect their choices. Of necessity, economists must assume that people are making rational rather than irrational economic decisions.
Markets are organized mechanisms or systems for exchanging money for goods and services (or in a barter system, goods and services for goods and services). They may be physical places, such as the New York Stock Exchange, or they may be located in cyberspace, like the international currency market. A market enables buyers and sellers to come together and engage in transactions.
Second, economics assumes that people have preferences that underlie their economic decisions. This concept resembles the idea of rational choice, but it focuses more on people's likes and dislikes and the trade-offs they are willing to make among these likes and dislikes. The assumption is that people know what they prefer and make choices that reflect these preferences.
Third, people's choices and preferences—their decisions—take the form of transactions in the marketplace. In these transactions they buy and sell labor, products, and services in exchange for money. Furthermore, these transactions collectively amount to economic activity. For example, someone who decides to work at a certain job in a certain place for a certain salary has engaged in a transaction. When we sum up all of the transactions that people in an economy have made regarding where they will work and for how much, the result is employment activity. That activity can be quantified, described, and studied.
Fourth, economic activity occurs in markets. A market is a place where goods and services and the factors of production—raw materials, labor, and plant and equipment—are bought and sold. Although most of us think of a market as a physical place, economists define markets more broadly. The market for financial securities, for example, is located not only on Wall Street but also in cyberspace where online brokerage services enable people to buy and sell securities. Economics delves deeply into markets—how they function and how people function within them.
Finally, some mechanism is required for a market to function properly. The mechanism used in most markets is the price mechanism, and prices are expressed in money. (Bet you were wondering when we would get around to talking about money.) In all modern economies, money is recognized as the medium of exchange. A medium of exchange is something people within an economy have agreed to use as a standard of value. Certain Native American tribes reportedly used wampum, the purple parts of clamshells, as money. Other forms of money have included gold, furs, and huge round stones—all of which make about as much sense as using pieces of paper displaying pictures of deceased leaders. Essentially, money is anything that people agree to use as a medium of exchange on a large scale.
These concepts—people making rational choices and expressing preferences in marketplace transactions valued in money—are indeed basic. They form the basis of all that follows in economics. Keep these concepts in mind as we examine matters like the law of supply and demand in Supply, Demand, and the Invisible Hand.
Excerpted from The Complete Idiot's Guide to Economics © 2003 by Tom Gorman. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc. |
Water: Monitoring & Assessment
Why is phosphorus important?
Both phosphorus and nitrogen are essential nutrients for the plants and animals that make up the aquatic food web. Since phosphorus is the nutrient in short supply in most fresh waters, even a modest increase in phosphorus can, under the right conditions, set off a whole chain of undesirable events in a stream including accelerated plant growth, algae blooms, low dissolved oxygen, and the death of certain fish, invertebrates, and other aquatic animals.
There are many sources of phosphorus, both natural and human. These include soil and rocks, wastewater treatment plants, runoff from fertilized lawns and cropland, failing septic systems, runoff from animal manure storage areas, disturbed land areas, drained wetlands, water treatment, and commercial cleaning preparations.
Forms of phosphorus
Phosphorus has a complicated story. Pure, "elemental" phosphorus (P) is rare. In nature, phosphorus usually exists as part of a phosphate molecule (PO4). Phosphorus in aquatic systems occurs as organic phosphate and inorganic phosphate. Organic phosphate consists of a phosphate molecule associated with a carbon-based molecule, as in plant or animal tissue. Phosphate that is not associated with organic material is inorganic. Inorganic phosphorus is the form required by plants. Animals can use either organic or inorganic phosphate.
Both organic and inorganic phosphorus can either be dissolved in the water or suspended (attached to particles in the water column).
The phosphorus cycle
The phosphorus cycle
Phosphorus changes form as it cycles through the aquatic environment.
Phosphorus cycles through the environment, changing form as it does so (Fig. 5.12). Aquatic plants take in dissolved inorganic phosphorus and convert it to organic phosphorus as it becomes part of their tissues. Animals get the organic phosphorus they need by eating either aquatic plants, other animals, or decomposing plant and animal material.
As plants and animals excrete wastes or die, the organic phosphorus they contain sinks to the bottom, where bacterial decomposition converts it back to inorganic phosphorus, both dissolved and attached to particles. This inorganic phosphorus gets back into the water column when the bottom is stirred up by animals, human activity, chemical interactions, or water currents. Then it is taken up by plants and the cycle begins again.
In a stream system, the phosphorus cycle tends to move phosphorus downstream as the current carries decomposing plant and animal tissue and dissolved phosphorus. It becomes stationary only when it is taken up by plants or is bound to particles that settle to the bottom of pools.
In the field of water quality chemistry, phosphorus is described using several terms. Some of these terms are chemistry based (referring to chemically based compounds), and others are methods-based (they describe what is measured by a particular method).
The term "orthophosphate" is a chemistry-based term that refers to the phosphate molecule all by itself. "Reactive phosphorus" is a corresponding method-based term that describes what you are actually measuring when you perform the test for orthophosphate. Because the lab procedure isn't quite perfect, you get mostly orthophosphate but you also get a small fraction of some other forms.
More complex inorganic phosphate compounds are referred to as "condensed phosphates" or "polyphosphates." The method-based term for these forms is "acid hydrolyzable."
Monitoring phosphorus is challenging because it involves measuring very low concentrations down to 0.01 milligram per liter (mg/L) or even lower. Even such very low concentrations of phosphorus can have a dramatic impact on streams. Less sensitive methods should be used only to identify serious problem areas.
While there are many tests for phosphorus, only four are likely to be performed by volunteer monitors.
- The total orthophosphate test is largely a measure of orthophosphate. Because the sample is not filtered, the procedure measures both dissolved and suspended orthophosphate. The EPA-approved method for measuring total orthophosphate is known as the ascorbic acid method. Briefly, a reagent (either liquid or powder) containing ascorbic acid and ammonium molybdate reacts with orthophosphate in the sample to form a blue compound. The intensity of the blue color is directly proportional to the amount of orthophosphate in the water.
- The total phosphorus test measures all the forms of phosphorus in the sample (orthophosphate, condensed phosphate, and organic phosphate). This is accomplished by first "digesting" (heating and acidifying) the sample to convert all the other forms to orthophosphate. Then the orthophosphate is measured by the ascorbic acid method. Because the sample is not filtered, the procedure measures both dissolved and suspended orthophosphate.
- The dissolved phosphorus test measures that fraction of the total phosphorus which is in solution in the water (as opposed to being attached to suspended particles). It is determined by first filtering the sample, then analyzing the filtered sample for total phosphorus.
- Insoluble phosphorus is calculated by subtracting the dissolved phosphorus result from the total phosphorus result.
All these tests have one thing in common: they all depend on measuring orthophosphate. The total orthophosphate test measures the orthophosphate that is already present in the sample. The others measure that which is already present and that which is formed when the other forms of phosphorus are converted to orthophosphate by digestion.
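The arithmetic relating these tests is simple; the small sketch below just restates it in code, using made-up results in mg/L as P: insoluble phosphorus is the total minus the dissolved fraction.

# Hypothetical results, in mg/L as P
total_phosphorus = 0.08        # unfiltered, digested sample
dissolved_phosphorus = 0.05    # filtered, digested sample

# Insoluble phosphorus is found by subtraction, as described above
insoluble_phosphorus = total_phosphorus - dissolved_phosphorus
print(f"insoluble phosphorus: {insoluble_phosphorus:.2f} mg/L as P")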
Sampling and equipment considerations
Monitoring phosphorus involves two basic steps:
- Collecting a water sample
- Analyzing it in the field or lab for one of the types of phosphorus described above. This manual does not address laboratory methods. Refer to the references cited at the end of this section.
Sample containers made of either some form of plastic or Pyrex glass are acceptable to EPA. Because phosphorus molecules have a tendency to "adsorb" (attach) to the inside surface of sample containers, if containers are to be reused they must be acid-washed to remove adsorbed phosphorus. Therefore, the container must be able to withstand repeated contact with hydrochloric acid. Plastic containers (either high-density polyethylene or polypropylene) might be preferable to glass from a practical standpoint because they will better withstand breakage. Some programs use disposable, sterile, plastic Whirl-pak® bags. The size of the container will depend on the sample amount needed for the phosphorus analysis method you choose and the amount needed for other analyses you intend to perform.
All containers that will hold water samples or come into contact with reagents used in this test must be dedicated. That is, they should not be used for other tests. This is to eliminate the possibility that reagents containing phosphorus will contaminate the labware. All labware should be acid-washed. The only form of phosphorus this manual recommends for field analysis is total orthophosphate, which uses the ascorbic acid method on an untreated sample. Analysis of any of the other forms requires adding potentially hazardous reagents, heating the sample to boiling, and using too much time and too much equipment to be practical. In addition, analysis for other forms of phosphorus is prone to errors and inaccuracies in a field situation. Pretreatment and analysis for these other forms should be handled in a laboratory.
Ascorbic Acid Method
In the ascorbic acid method, a combined liquid or prepackaged powder reagent, consisting of sulfuric acid, potassium antimonyl tartrate, ammonium molybdate, and ascorbic acid (or comparable compounds), is added to either 50 or 25 mL of the water sample. This colors the sample blue in direct proportion to the amount of orthophosphate in the sample. Absorbance or transmittance is then measured after 10 minutes, but before 30 minutes, using a color comparator with a scale in milligrams per liter that increases with the increase in color hue, or an electronic meter that measures the amount of light absorbed or transmitted at a wavelength of 700 - 880 nanometers (again depending on manufacturer's directions).
A color comparator may be useful for identifying heavily polluted sites with high concentrations (greater than 0.1 mg/L). However, matching the color of a treated sample to a comparator can be very subjective, especially at low concentrations, and can lead to variable results.
A field spectrophotometer or colorimeter with a 2.5-cm light path and an infrared photocell (set for a wavelength of 700-880 nm) is recommended for accurate determination of low concentrations (between 0.2 and 0.02 mg/L ). Use of a meter requires that you prepare and analyze known standard concentrations ahead of time in order to convert the absorbance readings of your stream sample to milligrams per liter, or that your meter reads directly as milligrams per liter.
How to prepare standard concentrations
Note that this step is best accomplished in the lab before leaving for sampling. Standards are prepared using a phosphate standard solution of 3 mg/L as phosphate (PO4). This is equivalent to a concentration of 1 mg/L as Phosphorus (P). All references to concentrations and results from this point on in this procedure will be expressed as mg/L as P, since this is the convention for reporting results.
Six standard concentrations will be prepared for every sampling date in the range of expected results. For most samples, the following six concentrations should be adequate:
0.00 mg/L, 0.04 mg/L, 0.08 mg/L, 0.12 mg/L, 0.16 mg/L, 0.20 mg/L
Proceed as follows:
- Set out six 25-mL volumetric flasks one for each standard. Label the flasks 0.00, 0.04, 0.08, 0.12, 0.16, and 0.20.
- Pour about 30 mL of the phosphate standard solution into a 50 mL beaker.
- Use 1-, 2-, 3-, 4-, and 5-mL Class A volumetric pipets to transfer corresponding volumes of phosphate standard solution to each 25-mL volumetric flask as follows:
mL of phosphate standard solution per 25-mL flask:
- 0.00 mg/L standard: 0 mL
- 0.04 mg/L standard: 1 mL
- 0.08 mg/L standard: 2 mL
- 0.12 mg/L standard: 3 mL
- 0.16 mg/L standard: 4 mL
- 0.20 mg/L standard: 5 mL
Note: The standard solution is calculated based on the equation: A = (B x C) ÷ D
A = mL of standard solution needed
B = desired concentration of standard
C = final volume (mL) of standard
D = concentration of standard solution
For example, to find out how much phosphate standard solution to use to make a 0.04-mg/L standard:
A = (0.04 x 25) ÷ 1 = 1 mL (see the short code sketch after this procedure)
Before transferring the solution, clear each pipet by filling it once with the standard solution and blowing it out. Rinse each pipet with deionized water after use.
- Fill the remainder of each 25 mL volumetric flask with distilled, deionized water to the 25 mL line. Swirl to mix.
- Set out and label six 50-mL Erlenmeyer flasks: 0.00, 0.04, 0.08, 0.12, 0.16, and 0.20. Pour the standards from the volumetric flasks to the Erlenmeyer flasks.
- List the standard concentrations (0.00, 0.04, 0.08, 0.12, 0.16, and 0.20) under "Bottle #" on the lab sheet.
- Analyze each of these standard concentrations as described in the section below.
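For readers who prefer a programmatic check, here is a minimal sketch of the dilution formula A = (B x C) ÷ D from the note above, reproducing the pipet volumes used in this procedure (25-mL standards made from the 1 mg/L as P stock solution):

def ml_of_stock(desired_conc, final_volume_ml=25.0, stock_conc=1.0):
    # A = (B x C) / D, concentrations in mg/L as P, volumes in mL
    return (desired_conc * final_volume_ml) / stock_conc

for conc in [0.00, 0.04, 0.08, 0.12, 0.16, 0.20]:
    print(f"{conc:.2f} mg/L standard: {ml_of_stock(conc):.0f} mL of phosphate standard solution")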
How to collect and analyze samples
The field procedures for collecting and analyzing samples for phosphorus consist of the following tasks:
TASK 1 Prepare the sample containers
If factory-sealed, disposable Whirl-pak® bags are used for sampling, no preparation is needed. Reused sample containers (and all glassware used in this procedure) must be cleaned (including acid rinse) before the first run and after each sampling run by following the procedure described in Method B on page 128. Remember to wear latex gloves.
TASK 2 Prepare before leaving for the sample site
Refer to section 2.3 - Safety Considerations for details on confirming sampling date and time, safety considerations, checking supplies, and checking weather and directions. In addition to sample containers and the standard sampling apparel, you will need the following equipment and supplies for total reactive phosphorus analysis:
- Color comparator or field spectrophotometer with sample tubes for reading the absorbance of the sample
- Prepackaged reagents (combined reagents) to turn the water blue
- Deionized or distilled water to rinse the sample tubes between uses
- Wash bottle to hold rinse water
- Mixing container with a mark at the recommended sample volume (usually 25 mL) to hold and mix the sample
- Clean, lint-free wipes to clean and dry the sample tubes
Note that prepackaged reagents are recommended for ease and safety.
TASK 3 Collect the sample
Refer to Task 2 in the Introduction to Chapter 5 for details on how to collect water samples using screw-cap bottles or Whirl-pak® bags.
TASK 4 Analyze the sample in the field (for total orthophosphate only) using the ascorbic acid method.
If using an electronic spectrophotometer or colorimeter:
- "Zero" the meter (if you are using one) using a reagent blank (distilled water plus the reagent powder) and following the manufacturer's directions.
- Pour the recommended sample volume (usually 25 mL) into a mixing container and add reagent powder pillows. Swirl to mix. Wait the recommended time (usually at least 10 minutes) before proceeding.
- Pour the first field sample into the sample cell test tube. Wipe the tube with a lint-free cloth to be sure it is clean and free of smudges or water droplets. Insert the tube into the sample cell.
- Record the bottle number on the field data sheet.
- Place the cover over the sample cell. Read the absorbance or concentration of this sample and record it on the field data sheet.
- Pour the sample back into its flask.
- Rinse the sample cell test tube and mixing container three times with distilled, deionized water. Avoid touching the lower portion of the sample cell test tube. Wipe with a clean, lint-free wipe. Be sure that the lower part of the sample cell test tube is clean and free of smudges or water droplets.
Be sure to use the same sample cell test tube for each sample. If the test tube breaks, use a new one and repeat step 1 to "zero" the meter.
If using a color comparator:
- Follow the manufacturer's directions. Be sure to pay attention to the direction of your light source when reading the color development. The light source should be in the same position relative to the color comparator for each sample. Otherwise, this is a source of significant error. As a quality check, have someone else read the comparator after you.
- Record the concentration on the field data sheet.
TASK 5 Return the samples (for lab analysis for other tests) and the field data sheets to the lab/drop-off point.
Samples for different types of phosphorus must be analyzed within a certain time period. For some types of phosphorus, this is a matter of hours; for others, samples can be preserved and held for longer periods. Samples being tested for orthophosphate must be analyzed within 48 hours of collection. In any case, keep the samples on ice and take them to the lab or drop-off point as soon as possible.
TASK 6 Analyze the samples in the lab.
Lab methods for other tests are described in the references below (APHA. 1992; Hach Company, 1992; River Watch Network, 1992; USEPA, 1983).
TASK 7 Report the results and convert to milligrams per liter
First, absorbance values must be converted to milligrams per liter. This is done by constructing a "standard curve" using the absorbance results from your standard concentrations.
- Make an absorbance versus concentration graph on graph paper:
- Make the "y" (vertical) axis and label it "absorbance." Mark this axis in 0.05 increments from 0 as high as the graph paper will allow.
- Make the "x" (horizontal) axis and label it "concentration: mg/L as P." Mark this axis with the concentration of the standards: 0, 0.04, 0.08, 0.12, 0.16, 0.20.
- Plot the absorbance of the standard concentrations on the graph.
- Draw a "best fit" straight line through these points. The line should touch (or almost touch) each of the points. If it doesn't, make up new standards and repeat the procedure.
Example: Suppose you measure the absorbance of the six standard concentrations as follows:
Absorbance of standard concentrations, when plotted, should result in a straight line
The resulting standard curve is displayed in Fig. 5.13.
- For each sample, locate the absorbance on the "y" axis, read horizontally over to the line, and then move down to read the concentration in mg/L as P.
- Record the concentration on the lab sheet in the appropriate column. NOTE: The detection limit for this test is 0.01 mg/L. Report any results less than 0.01 as "<0.01." Round off all results to the nearest hundredth of a mg/L.
Results can either be reported "as P" or "as PO4." Remember that your results are reported as milligrams per liter (weight per unit of volume). Since the PO4 molecule is three times as heavy as the P atom, results reported as PO4 are three times the concentration of those reported as P. For example, if you measure 0.06 mg/L as PO4, that's equivalent to 0.02 mg/L as P. To convert PO4 to P, divide by 3. To convert P to PO4, multiply by 3. To avoid this confusion, and since most state water quality standards are reported as P, this manual recommends that results always be reported as P.
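As a rough illustration of Task 7, the sketch below fits a straight-line standard curve to assumed absorbance readings (the values here are invented, not the manual's), converts a hypothetical sample absorbance to mg/L as P, and applies the P-to-PO4 factor of 3 described above.

standards = [0.00, 0.04, 0.08, 0.12, 0.16, 0.20]          # mg/L as P
absorbances = [0.000, 0.060, 0.115, 0.175, 0.235, 0.290]  # assumed example readings

# Fit the standard curve (concentration as a function of absorbance) by least squares
n = len(standards)
mean_a = sum(absorbances) / n
mean_c = sum(standards) / n
slope = (sum((a - mean_a) * (c - mean_c) for a, c in zip(absorbances, standards))
         / sum((a - mean_a) ** 2 for a in absorbances))
intercept = mean_c - slope * mean_a

def absorbance_to_conc(absorbance):
    # returns mg/L as P read off the standard curve
    return slope * absorbance + intercept

sample_absorbance = 0.150                  # hypothetical field sample reading
conc_p = absorbance_to_conc(sample_absorbance)
conc_po4 = conc_p * 3                      # to report as PO4, multiply by 3
reported = "<0.01" if conc_p < 0.01 else f"{conc_p:.2f}"
print(f"{reported} mg/L as P ({conc_po4:.2f} mg/L as PO4)")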
APHA. 1992. Standard methods for the examination of water and wastewater. 18th ed. American Public Health Association, Washington, DC.
Black, J.A. 1977. Water pollution technology. Reston Publishing Co., Reston, VA.
Caduto, M.J. 1990. Pond and brook. University Press of New England, Hanover, NH.
Dates, Geoff. 1994. Monitoring for phosphorus or how come they don't tell you this stuff in the manual? Volunteer Monitor, Vol. 6(1), spring 1994.
Hach Company. 1992. Hach water analysis handbook. 2nd ed. Loveland, CO.
River Watch Network. 1991. Total phosphorus test (adapted from Standard Methods). July 17.
River Watch Network. 1992. Total phosphorus (persulfate digestion followed by ascorbic acid procedure, Hach adaptation of Standard Methods). July 1.
USEPA. 1983. Methods for chemical analysis of water and wastes. 2nd ed. Method 365.2. U.S. Environmental Protection Agency, Washington, DC. |
In our last blog we wrote about how chemicals or VOCs can impact indoor air quality and today’s post will be about the role mold has on indoor air quality.
There are three fairly common types of indoor mold:
- Stachybotrys atra (also known as black mold)
- Cladosporium
- Aspergillus family of molds
Aspergillus is the most allergenic mold type, found on foods and in A/C systems. Cladosporium shows up as small green or black dots around toilets, sinks and bathtubs, on acrylic-painted surfaces and fiberglass air ducting. Stachybotrys atra is the black mold that generally grows in warm, humid areas where water has leaked.
You may have noticed a theme with the types of mold listed. Moisture. Warmth. Humidity. The EPA states right at the top of its webpage on mold: "The Key to Mold Control is Moisture Control." Below are the top ten things the EPA thinks you should know about mold.
- Potential health effects and symptoms associated with mold exposures include allergic reactions, asthma and other respiratory complaints.
- There is no practical way to eliminate all mold and mold spores in the indoor environment; the way to control indoor mold growth is to control moisture.
- If mold is a problem in your home or school, you must clean up the mold and eliminate sources of moisture.
- Fix the source of the water problem or leak to prevent mold growth.
- Reduce indoor humidity (to 30-60%) to decrease mold growth by:
- Venting bathrooms, dryers and other moisture-generating sources to the outside
- Using air conditioners and de-humidifiers
- Increasing ventilation
- Using exhaust fans whenever cooking, dishwashing and cleaning
- Clean and dry any damp or wet building materials and furnishings within 24-48 hours to prevent mold growth.
- Clean mold off hard surfaces with water and detergent, and dry completely. Absorbent materials such as ceiling tiles, that are moldy, may need to be replaced.
- Prevent condensation: Reduce the potential for condensation on cold surfaces (i.e., windows, piping, exterior walls, roof, or floors) by adding insulation.
- In areas where there is a perpetual moisture problem, do not install carpeting (i.e., by drinking fountains, by classroom sinks, or on concrete floors with leaks or frequent condensation).
- Molds can be found almost anywhere; they can grow on virtually any substance, providing moisture is present. There are molds that can grow on wood, paper, carpet, and foods.
Those are all great things to know, but what can an HVAC company like Air Quality Mechanical do to help with some of those issues? We can clean your whole HVAC system: your furnace, your A/C and your ductwork. By doing that we can make sure you have a clean filter. If you have a high-efficiency furnace, moisture can collect in the condensate tubes and collection box, and we clean that. Your A/C has coils where condensation builds up, and where condensation builds up, so do dirt, gunk and mold. We vacuum and clean all of the ductwork, removing dust, dirt and dander. In short, a clean system helps your indoor air quality and fights mold growth. An added benefit is that a clean system runs much more efficiently, saving you energy costs.
If you would like more information give us a call at the office at 406-721-7018. We will also be at the 36th Annual Missoula Home and Garden show on April the 2nd and 3rd. Come talk to us about your concerns and see a demo of our duct cleaning machine. |
A warming Earth has caused the volume and extent of mountain glaciers to decline globally for decades. At the same time, the debris cover of many glaciers is changing. So far, however, this debris coverage has rarely been recorded. A study by the scientist Dirk Scherler of the German Research Centre for Geosciences GFZ and two colleagues from Switzerland - one of them employed by Google - now shows a way to detect the extent of debris on mountain glaciers globally and automatically via satellite monitoring.
In their work, the scientists used the cloud computing platform "Google Earth Engine". This is a web-based development environment and database of satellite imagery from forty years of remote sensing that is freely accessible to researchers. The images for the study in the journal Geophysical Research Letters came from the satellites Landsat-8 and Sentinel-2 and have a spatial resolution of 30 by 30 and 10 by 10 meters respectively per pixel. The scientists compared the images from space with an electronic glacier catalog, the Randolph Glacier Inventory, to determine the debris coverage. For this they have developed an automatic method that makes pixel-by-pixel comparisons across the globe. "Our approach, in principle, allows rapid mapping of changes in debris coverage for any period for which satellite imagery is available," says Dirk Scherler.
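The per-pixel bookkeeping behind such a comparison can be sketched very simply. The toy example below uses tiny made-up masks rather than real satellite scenes or the actual Earth Engine workflow: it just intersects a glacier outline mask with a debris classification mask and reports the debris-covered share of the glacier area.

import numpy as np

# Tiny made-up masks standing in for classified satellite pixels (True = class present)
glacier_mask = np.array([[1, 1, 0],
                         [1, 1, 1],
                         [0, 1, 1]], dtype=bool)   # e.g. pixels inside a glacier inventory outline
debris_mask = np.array([[0, 1, 0],
                        [0, 0, 1],
                        [0, 0, 1]], dtype=bool)    # e.g. pixels classified as supraglacial debris

# Pixel-by-pixel comparison: debris pixels that fall inside the glacier outline
debris_on_glacier = np.logical_and(glacier_mask, debris_mask)
debris_fraction = debris_on_glacier.sum() / glacier_mask.sum()
print(f"debris-covered fraction of glacier area: {debris_fraction:.1%}")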
A manual review showed robust results. According to this, 4.4 percent of the glacier surface in mountains is covered with debris (the Greenland ice sheet and the Antarctic were not included in the study). The distribution is uneven: towards the poles, the debris coverage decreases, as the landscape there is rather flat. In steep mountain regions, such as the Himalayas, there is more debris on the glaciers. Moreover, the study showed that the coverage ratio is higher for smaller glaciers than for larger ones. With global glaciers shrinking, the percentage of debris coverage is expected to increase, making it more important to monitor debris coverage.
Mountain glaciers are of great importance for regions where their meltwater flows: it serves as drinking water, irrigates agricultural areas or drives turbines. According to the authors, the results of the study provide a basis for future modeling of the effects of debris on the ice, from the regional scale to the global scale.
Original Study: Scherler, D., Wulf, H., Gorelick, N. (2018): Global Assessment of Supraglacial Debris Cover Extents. Geophysical Research Letters, doi: 10.1029/2018GL080158.
Scientific contact person: Dirk Scherler ([email protected]) |
The UNI/O bus is an asynchronous serial bus created by Microchip Technology for low speed communication in embedded systems. The bus uses a master/slave configuration, requiring one signal to pass data between devices. The first devices supporting the UNI/O bus were released in May 2008.
The UNI/O bus requires one logic signal:
- SCIO — Serial Clock, Data Input/Output
Only one master device is allowed per bus, but multiple slave devices can be connected to a single UNI/O bus. Individual slaves are selected through an 8-bit to 12-bit address included in the command overhead.
Both master and slave devices use a tri-stateable, push-pull I/O pin to connect to SCIO, with the pin being placed in a high impedance state when not driving the bus. Because push-pull outputs are used, the output driver on slave devices is current-limited to prevent high system currents from occurring during bus collisions.
The UNI/O specification places certain rules on the bit period:
- It is determined by the master.
- It can be within 10 µs and 100 µs (corresponding to a bit rate of 100 kbit/s to 10 kbit/s, respectively).
- It is only required to be fixed within a single bus operation (for new bus operations, the master can choose a different bit period).
In accordance with Manchester encoding, the bit value is defined by a signal transition in the middle of the bit period. UNI/O uses the IEEE 802.3 convention for defining 0 and 1 values:
- A high-to-low transition signifies a 0.
- A low-to-high transition signifies a 1.
Bit periods occur back-to-back, with no delay between bit periods allowed.
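To make the encoding concrete, here is a small sketch that turns a byte into the sequence of half-bit-period signal levels implied by the convention above. Transmitting the most significant bit first is an assumption made for illustration, not a detail taken from the specification.

def manchester_encode_byte(value):
    # Returns the signal level for each half of the eight bit periods:
    # per the IEEE 802.3 convention above, 0 = high-to-low, 1 = low-to-high.
    halves = []
    for i in range(7, -1, -1):       # most significant bit first (assumed for illustration)
        bit = (value >> i) & 1
        if bit:
            halves += [0, 1]         # low then high: low-to-high transition signals a 1
        else:
            halves += [1, 0]         # high then low: high-to-low transition signals a 0
    return halves

# 0x55 alternates 0s and 1s, giving a regular waveform that a receiver can time against
print(manchester_encode_byte(0x55))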
To facilitate error detection, a 2-bit wide "acknowledge sequence" is appended to the end of every data byte transmitted. The first bit is called the "master acknowledge" (shortened to "MAK") and is always generated by the master. The second bit, called the "slave acknowledge" (shortened to "SAK"), is always generated by the slave.
The MAK bit is used in the following manner:
- The master transmits a 1 bit (a MAK) to indicate to the slave that the bus operation will be continued.
- The master transmits a 0 bit (a NoMAK) to indicate that the preceding byte was the last byte for that bus operation.
The SAK bit is used in the following manner:
- Once a full device address has been transmitted (and a valid slave has been selected), if the previous data byte and subsequent MAK bit were received correctly, the slave transmits a 1 bit (a SAK).
- If an error occurs, the slave automatically shuts down and ignores further communication until a standby pulse is received. In this scenario, nothing will be transmitted during the SAK bit period. This missing transition can be detected by the master and is considered a NoSAK bit.
UNI/O defines a signal pulse, called the "standby pulse", that can be generated by the master to force slave devices into a reset state (referred to as "standby mode"). To generate a standby pulse, the master must drive the bus to a logic high for a minimum of 600 µs.
A standby pulse is required to be generated under certain conditions:
- Before initiating a command when selecting a new device (including after a POR/BOR event)
- After an error is detected
If a command is completed without error, a new command to the same device can be initiated without generating a standby pulse.
The start header is a special byte sequence defined by the UNI/O specification, and is used to initiate a new command. The start header consists of the following elements:
- The master drives the bus low for a minimum of 5 µs.
- The master outputs a 0x55 data byte.
- Slave devices measure the time necessary to receive the 0x55 byte by counting signal transitions. This time is then used by the slaves to determine the bit period and synchronize with the master.
- The master outputs a 1 for the MAK bit.
- The slave devices do not respond during the SAK bit following the start header. This is to avoid bus collisions which would occur if all slave devices tried to respond at the same time.
After the start header has been transmitted, the master must transmit a device address to select the desired slave device for the current operation. Once the device address has been sent, any slave device with an address different from that specified is required to shut down and ignore all further communication until a standby pulse is received.
UNI/O allows for both 8-bit and 12-bit device addresses. 8-bit addressing offers better data throughput due to less command overhead, while 12-bit addressing allows for more slaves with a common family code to exist on a single bus. When a slave device is designed, the designer must choose which addressing scheme to use.
For 8-bit addressing, the entire device address is transmitted in a single byte. The most significant 4 bits indicate the "family code", which is defined by Microchip in the UNI/O bus specification. The least significant 4 bits indicate the device code. The device code allows multiple slave devices with a common family code to be used on the same bus. The device code can be fixed for a given slave or customizable by the user. Choosing a device code and how it can be customized (if necessary) are the responsibilities of the slave device designer.
The current family codes for 8-bit devices, as of November 22, 2009, are as follows:
- ′0100′: I/O port expanders
- ′1000′: Frequency/Quadrature/PWM encoders, real-time clocks
- ′1111′: 12-bit addressable devices
For 12-bit addressing, the device address is sent in two bytes. The most significant 4 bits of the first byte (which would correspond to the family code in 8-bit addressing), are set to ′1111′. The next 4 bits are the family code for the 12-bit address, and the second byte of the address is an 8-bit wide device code. The device code follows the same guidelines for definition as with 8-bit addressing.
Because the specified slave device is not selected until both bytes of the device address have been received, a NoSAK will occur during the acknowledge sequence following the first device address byte.
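A small sketch of how the two address bytes could be packed under this scheme follows; the family and device codes used here are made up purely for illustration.

def twelve_bit_address_bytes(family_code, device_code):
    # First byte: upper nibble fixed at 0b1111, lower nibble = 4-bit family code.
    # Second byte: the 8-bit device code.
    first = (0b1111 << 4) | (family_code & 0x0F)
    second = device_code & 0xFF
    return first, second

hi_byte, lo_byte = twelve_bit_address_bytes(0x3, 0xA7)             # made-up example codes
print(f"device address bytes: 0x{hi_byte:02X} 0x{lo_byte:02X}")    # -> 0xF3 0xA7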
The current family codes for 12-bit devices, as of November 22, 2009, are as follows:
After the master has transmitted the device address and selected an individual slave, the master must transmit the 8-bit value for the specific command to be executed by the slave. The available commands are determined by the designer of each slave device, and will vary from slave to slave, e.g. a serial EEPROM will likely have different commands than a temperature sensor. The slave device designer will also determine if and how many data bytes are necessary for the execution of a command. If any data bytes are necessary, they are transmitted by either the master or the slave (dictated by the command type) after the command byte.
Communication will continue until either the master transmits a 0 (NoMAK) during the acknowledge sequence, or an error occurs. Assuming no errors occur, this means that commands can continue indefinitely if the master chooses.
- UNI/O Bus Specification (PDF), retrieved 2009-11-22
- 1K-16K UNI/O Serial EEPROM Family Data Sheet (PDF), retrieved 2009-10-21
- AN1194, Recommended Usage of Microchip UNI/O Bus-Compatible Serial EEPROMs (PDF), retrieved 2009-10-21
- https://www.microchip.com/en-us/products/memory/serial-eeprom/single-wire-and-uniobus-serial-eeproms
- https://ww1.microchip.com/downloads/en/Appnotes/01213B.pdf
- https://ww1.microchip.com/downloads/en/DeviceDoc/22199b.pdf
- UNI/O Bus Specification - Microchip |
English Reading Books: Reading Extra (Pdf)
English Reading Books: Reading Extra (Pdf) is a resource book containing photocopiable materials for supplementary classroom work. The activities provide self-contained lessons for the busy teacher. Each activity consists of a page of clear, step-by-step instructions for the teacher and a photocopiable page for the students. The material is aimed at young adult (16+) and adult learners.
However, most activities can be easily adapted for the needs of younger students. Reading Extra offers teachers an exciting collection of topic-based skills activities from elementary to upper-intermediate level
The materials in Reading Extra aim to do two things.
Firstly, to give students practice in the reading skills they need in real life, e.g. scanning a TV schedule to find out what time a specific programme is on, skimming a magazine article to identify the writer’s opinion, intensive reading of instructions to find out how something works.
Secondly, and perhaps more importantly, to give students practice in dealing with unknown words – by using inference from context, general knowledge, morphology – so that they become sufficiently confident to tackle authentic texts, both inside and outside the classroom. While the material has not been written specifically for exam preparation classes, much of it will be suitable for such students.
There are two benefits from working with reading texts in the classroom. The more students read, the better they will read. Furthermore, their knowledge of the language will increase at the same time. For students who are keen to improve their English, reading is the best way forward.
Reading Extra (Pdf) Content
Unit 1 Personal information
Unit 2 The family
Unit 3 Daily activities
Unit 4 Homes
Unit 5 Town and country
Unit 6 Travel and tourism
Unit 7 Food and drink
Unit 8 Describing people
Unit 9 Describing things
Unit 10 Friends and relationships
Unit 11 Health and fitness
Unit 12 Leisure time
Unit 13 Education
Unit 14 The world of work
Unit 15 Money
Unit 16 Past experiences and stories
Unit 17 Science and technology
Unit 18 Social and environmental issues
How it works
The flu vaccine causes antibodies to develop in the body about two weeks after vaccination.
These antibodies provide protection against infection by the virus strains contained in the vaccine.
Why you should get a shot annually
The World Health Organisation (WHO) and the CDC monitor each new strain of the flu virus as it appears. They assess which may be the predominant virus in the following year's flu season and then, using this data, develop a vaccine to be used against the specific virus.
When to get vaccinated
Getting vaccinated before the flu season (June to September) starts will give your body a chance to build up full immunity, according to Clicks pharmacist Waheed Abdurahman.
“We make the vaccination available in all our clinics well before winter starts because it can take up to 10 days for the vaccination to reach its full effectiveness,” he said.
“You may experience mild flu-like symptoms as your immunity builds up,” Abdurahman added, “but most people have no problems with the vaccine at all.”
Keep hydrated by drinking six to eight glasses of water a day, and avoid alcohol as this not only slows down your metabolism but also dehydrates your system.
Wash your hands regularly to protect yourself from cold and flu germs.
Why an antibiotic does not work
Taking antibiotics when you have a virus will not help, and in some cases they do more harm than good. They only work on a bacterial infection, and if taken when they are not needed, your risk of developing antibiotic resistance is increased.
According to the National Institute for Communicable Diseases, these are the symptoms you should be looking out for:
* Sudden onset of fever;
* Acute upper respiratory symptoms: dry cough, sore throat;
* General symptoms: malaise, headache, fatigue, muscle pain and body aches, cold shivers and hot sweats;
* Some people may have vomiting and diarrhoea, though this is more common in children. |
The Importance of Breeding Elephants
The Saint Louis Zoo -- like all responsible, accredited zoos -- takes very seriously its role as a steward of a portion of the planet’s natural heritage. As such, we have an obligation to manage our Asian elephant population in a humane and scientific manner to help ensure the species’ survival.
Zoos play an important role in the conservation and study of elephants, as well as many other endangered species. Many scientists believe the Asian elephant could become extinct in the wild by the middle of the next century, if current habitat destruction trends continue. Given the uncertain future for elephants in the wild, captive management programs are becoming increasingly important to the survival of the species. These programs can create secure reservoirs of animals and their gene pools to re-establish wild populations that have become extinct. They can also reinforce remnant wild populations debilitated by genetic and demographic problems.
In addition, zoos also have a unique opportunity to contribute to the body of scientific knowledge about elephants, knowledge that is nearly impossible to gain from elephants in the wild. Because of our ready access to captive animals, and the trusting relationships built between elephants and their keepers, zoos have amassed valuable data on elephant physiology and diseases, nutritional requirements, behavior (including memory), genetics and reproduction. In addition, the captive breeding of both Asian and African elephants has been more successful in recent years because of increased efforts in natural reproduction and technical advances in assisted reproduction.
We are proud that Zoo observations and studies of captive elephants are adding to the growing body of zoological data on elephants, and are being used to contribute to global conservation efforts on behalf of the species as a whole.
Elephant Dating Games
So how do you go about breeding endangered species? When it comes to elephants, it's no simple matter. Not only does a Zoo need adequate facilities and staff, but it also needs to know which elephant to breed with which. This is important in preventing inbreeding and loss of genetic variability in the species.
That’s why the Saint Louis Zoo participates in Species Survival Plans (SSPs) for elephants and other endangered animals. Species survival plans are cooperative conservation programs developed by North American zoos and aquariums to manage the breeding of captive animal populations. The goal is to maintain healthy, self-sustaining populations that are genetically diverse and demographically stable. SSPs also include research, public education, reintroduction programs and field projects.
The SSP for Asian elephants helped us determine which female elephants in North American zoos would be potential candidates to breed with our bull, Raja. That’s what led to our Zoo acquiring Sri from the Woodland Park Zoo in Seattle, and Ellie and Rani from the Jacksonville Zoo in Florida.
Once we humans arranged these elephant matches, the animals themselves took over their own "dating games." Judging from the new additions to our elephant herd, the games were a success! |
To those who grew up in the long shadow of the Apollo program, the moon was a fixed goal in space. The triumphant Apollo 11 landing on July 20, 1969, set the stage for Wernher von Braun’s grand vision of the human conquest of space. Next would come a permanent moon base and a space station that together would serve as the jumping-off point to Mars. Three and a half years later, that dream faded as Eugene Cernan and Harrison Schmitt climbed into a frail-looking, foil-wrapped lunar module and blasted off the moon’s surface. No human has set foot there since. The Saturn V rocket became a museum piece, and NASA bet its money instead on a fleet of space shuttles circling in low Earth orbit.
"Had we pressed on with the Saturn V, we could have had a lunar base by 1975," says Paul Lowman, a geophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Public apathy toward the current manned space program, a $100 billion International Space Station that is dismissed by most of the scientific community, and the recent Columbia disaster seem to have just about killed the old spirit of exploration. Although a new rallying cry is building for going straight to Mars, many old-time space enthusiasts insist we should start by picking up where we left off: Go back to the moon, this time to stay.
"The difficulties of a manned Mars program are staggering. When you go to the moon, you are making an interplanetary trip. It’s a short one, but you are rehearsing cheaply and relatively safely for longer interplanetary trips," Lowman says. This thinking fits with NASA’s long-held philosophy of taking incremental steps. The Space Exploration Initiative, a 30-year NASA road map presented by President Bush in 1989, called for the creation of a moon base before setting off to the Red Planet. The Aurora Programme, a strategic plan endorsed by the European Union Council of Research two years ago, similarly envisions a return to the moon as a prelude to Mars.
As a bonus, a moon base would be an ideal site for cutting-edge science. "We would use the moon as a platform for astronomy and to study the space environment," says Wendell Mendell, manager of NASA’s Office for Human Exploration Science in Houston. The moon has no wind, clouds, light pollution, or atmospheric distortion. Seismic activity is insignificant, and the rotation rate of the moon is 1/500 that of the orbital
period of the Hubble Space Telescope. A lunar telescope could conduct unbroken observations for 14 days at a time.
NASA has been enticing scientists with talk about manned observatories on the moon for nearly 40 years. "There were many of us who were willing to go on a one-way trip to set up these stations," says astronomer William Tifft at the University of Arizona, who was a NASA astronaut finalist in 1965. Today 44-year-old John Grunsfeld is the only active professional astronomer-astronaut in the corps. He has clocked more than 45 days in space on four shuttle flights, three of them focused on astronomy. He hopes science on the moon will come next, following the model of research in Antarctica: "The National Science Foundation would have never established a base at the South Pole just to support astronomy. But once the base was established, astronomy followed, and some of the most exciting results have come from there."
By coincidence, the south polar region of the moon, a 1,500-mile-wide depression known as the Aitken Basin, is where the first permanent lunar base is most likely to be built. Mendell describes a concept that echoes NASA’s plans from the 1960s: "The initial missions will almost certainly be multiple and Apollo-like but longer, perhaps two weeks, with four people on the surface. These missions will in a short time lead to a longer-term mission at the lunar south pole. If we can get the thing rolling, do the technology right, manage the cost, and convince the nation that it’s the right thing to do, we could be landing on the moon again in 10 years."
A prime attraction of the moon’s south pole is its "mountain of eternal light," a peak that receives sunlight at least 70 percent of the time. Solar panels there could generate near-constant power for people and instruments. Equally enticing, some permanently shadowed craters near the lunar south pole seem to contain ice, which could provide water and air for the base. The south pole also intrigues planetary scientists who believe some of the rocks there may have originated deep within the lunar interior. A study of this region might reveal the moon’s true composition, and hence its origin. "One of the original rationales for going to the moon was to find out how it formed, but we still don’t know. If we could put people down again, we would have a definitive answer within 15 years," Lowman says. The prevailing theory holds that the moon was created from debris knocked loose when a Mars-size body collided with the embryonic Earth. This collision may also have triggered Earth’s plate tectonics, key to the recycling of carbon dioxide through our ecosystem.
If a permanent base is established, Lowman envisions constructing at least four optical astronomical sites-two 180 degrees apart on the moon’s equator and one each at the lunar north and south poles. Another possibility would be to deploy an optical interferometer, a device that combines the light from multiple telescopes to create a single superhigh resolution instrument. Both NASA and the European Space Agency (ESA) are contemplating interferometry missions in space, but servicing the instruments and maintaining the precise alignment of separate telescopes in the void is difficult. Near a moon base, neither access nor stability would be a problem.
Building a large-scale interferometer would signal a revolution in optical astronomy. "It would have several hundred times the resolution of the Hubble," says Mike Shao, an optical engineer and physicist at the Jet Propulsion Laboratory (JPL). "You would see what you could see with the James Webb Space Telescope, but with angular resolution that is a hundred times higher." On the moon, interferometry could also be applied to the submillimeter spectrum, halfway between radio and infrared wavelengths. Submillimeter emissions are typically produced by carbon and water molecules in distant galaxies and star-forming regions. Detecting these waves is difficult at best on Earth, due to interference from water in the atmosphere. The moon would offer astronomers a high-resolution window onto the submillimeter universe. "If anyone would offer us the opportunity to put a submillimeter array on the moon, we would grab it," says Tom Phillips, the director of the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. "Submillimeter has the ability to look at the cool, distant universe in a way that no other frequency can."
The lunar surface may also be the best spot in the solar system to tune in to very low frequency (VLF) radio waves. For all practical purposes, ground-based VLF astronomy does not exist. That is because almost all VLF waves are blocked by Earth’s ionosphere, and our planet itself is a natural source of emissions. Dayton Jones and Thomas Kuiper, radio astronomers at JPL, have sketched a plan for deploying a rover to build a VLF radio telescope-essentially a huge network of wires acting as radio-wave receivers-in a crater on the lunar farside, where the moon’s bulk blots out Earth’s radio noise. VLF waves might reveal "fossil" galaxies that were once highly active; they could also be used to map ancient supernova remnants in the Milky Way.
So far, neither NASA nor the ESA has finalized a specific plan for a moon base. Engineers on both sides of the Atlantic say that technology is not a stumbling block, however. The ESA’s Aurora Programme proposes using an enhanced Ariane 5 launcher to ferry a crew to lunar orbit, in the style of the old Saturn V; Mendell says Boeing’s Delta IV launcher could also be enhanced to carry humans to the moon. Further ahead, NASA’s Orbital Space Plane, planned for a 2012 launch, might be able to travel to lunar orbit, depending on the final design. Once there, the plane could dock with a lunar lander placed in orbit before it arrived. Gary Martin, named space architect at NASA in 2002, says that a manned Mars mission would most likely use a similar set of steps.
The monumental hurdle for NASA will be mustering funding and political support to revisit a world it explored more than a generation ago. Last year, Representative Nick Lampson, a Texas Democrat whose home district includes the Johnson Space Center, introduced a bill mandating manned missions to the moon within 15 years. It never even came to a vote.
In classic NASA style, Martin hedges his bets when discussing the agency’s human-exploration plans: "Our strategy is to build a very sustainable program where people are routinely going into deep space. If you rush it, we will go there for one time and we’ll have another hiatus for 30 years." |
Middle LEVEL SSAT SAMPLE QUESTION: Algebra
Here’s a SSAT sample question to help you practice the type of Algebra problem you may find on the quantitative section of the Middle Level SSAT.
What is an Algebra Problem?
Algebra uses numbers, symbols, and letter symbols to solve problems. The basic concept is that one side of the equal sign “=” is, in total, the same as the other. Another important item is the symbol ( ), which indicates that you make the calculations within the symbol (parentheses) before you make those outside the symbol. In algebra, a letter is used to represent an unknown number until the value of that number is discovered. For example, when 5 is added to some unknown number, the sum is 14 and may be written as follows:
5 + n = 14
SSAT Sample Question for Algebra:
Solve for x:
15(6 + 3) = x
First simplify the expression inside the parentheses on the left side of the = sign: 6 + 3 = 9. Then multiply:
15 × 9 = x
135 = x
Would you like to see more SSAT sample questions from the makers of the SSAT itself? Let us know what topics you’d like covered most. Email the blogger at: [email protected]
Ready to take the test? Register for the SSAT at SSAT.org. |
What is scarlet fever?
Scarlet fever is an uncommon infection caused by a type of bacteria called
Streptococcus pyogenes, also known as group A streptococci. As well as
scarlet fever, these bacteria can cause a range of other conditions, including
throat infections and tonsillitis, skin infections (impetigo), wound infections,
and acute rheumatic fever.
The disease most commonly presents in children or adults with a “strep throat” infection or tonsillitis, followed by the development of a skin rash. It is not considered highly contagious and the infection is very treatable with antibiotics. Most children will recover fully within a week or so. Deaths from scarlet fever are now extremely rare.
How is scarlet fever spread?
The bacterium is found in the nose and/or throat of infected persons and can be spread to other people by:
- coughing or sneezing (by breathing in droplets containing the bacteria)
- direct contact with an infected person, where bacteria may be transferred by kissing or on hands
- sharing food or drink with an infected person.
What are the signs and symptoms?
- sore throat and fever (high temperature) are the typical first symptoms
- a bright red (scarlet) rash then soon develops. This is caused by a toxin (poison) that is released into the blood stream by the bacteria
- the rash starts as small red spots, usually on the neck and upper chest. It soon spreads to many other parts of the body and may feel like sandpaper. The rash tends to blanch (go white) if you press on it. The face is usually spared by the rash, but may become quite flushed
- the tongue may become pale but coated with red spots ('strawberry tongue'). After a few days the whole tongue may look red
- other common symptoms include: headaches, nausea and vomiting, being off food, and feeling generally unwell.
Infected children should be excluded from child care or school until they are well, and at least 24 hours after starting antibiotic treatment.
How long does it take to develop?
The time between exposure (contact with the sick person) and getting sick is on average 1-3 days.
How long is it infective?
People with scarlet fever can spread the disease to others until around 24 hours after commencing antibiotic treatment.
Who is at risk?
- anyone can be infected with group A streptococci, but scarlet fever is more likely to occur in young and primary school-aged children
- people living in the same household as an infected person
- people in close contact with an infected person who is coughing or sneezing.
How is scarlet fever diagnosed?
A swab from the back of the throat is usually taken to confirm the diagnosis.
How is scarlet fever treated?
Treatment is important, and consists of a course of antibiotics (usually penicillin) to kill the bacterium and prevent serious complications that are sometimes associated with group A streptococcal infections, including heart (rheumatic fever) and kidney disease.
What you can do?
- paracetamol may be given to reduce high temperature (fever) and to relieve a sore throat
- see your doctor. If your doctor prescribes antibiotics it is important to complete the course.
How can spread of scarlet fever and “strep throat” be prevented?
- cover your mouth when coughing or sneezing
- wash your hands after wiping or blowing your nose, coughing and sneezing
- wash hands before preparing food
- see your doctor if you or your child have symptoms of sore throat and fever
- wash your hands after touching soiled tissues.
For further information – Contact your local Public Health Unit. |
The Kruskal-Wallis Test is a non-parametric test that examines whether two or more samples originate from the same distribution. Much like the ANOVA test, it is a one-way analysis of variance by ranks. The test does not identify which particular populations differ from one another; rather, it quantifies the overall degree of deviation among the sample groups and generates a p-value to test for significance.
Much like the Friedman Test, and unlike the ANOVA test, the Kruskal-Wallis Test does not assume a normal distribution of the underlying populations. However, it does assume that the groups being tested have distributions of the same shape.
The Kruskal-Wallis test statistic is commonly written in the following form:

K = (N-1) \frac{\sum_{i=1}^{g} n_i (\bar{r}_i - \bar{r})^2}{\sum_{i=1}^{g} \sum_{j=1}^{n_i} (r_{ij} - \bar{r})^2}

where:
g is the number of groups
n_i is the number of observations in group i
r_{ij} is the rank (among all N observations) of observation j from group i
N is the total number of observations across all groups
\bar{r}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} r_{ij} is the average rank of the observations in group i
\bar{r} = \frac{N+1}{2} is the average of all the ranks
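As a quick illustration, the statistic and its p-value can be computed with SciPy's implementation of the test (scipy.stats.kruskal); the three groups below are made-up numbers used purely for demonstration, and SciPy additionally applies a correction for tied ranks.

    # Minimal sketch: Kruskal-Wallis test on three made-up sample groups.
    # scipy.stats.kruskal ranks all observations, computes the statistic
    # described above (with a tie correction) and returns it with a p-value.
    from scipy.stats import kruskal

    group_a = [2.9, 3.0, 2.5, 2.6, 3.2]
    group_b = [3.8, 2.7, 4.0, 2.4]
    group_c = [2.8, 3.4, 3.7, 2.2, 2.0]

    stat, p_value = kruskal(group_a, group_b, group_c)
    print(f"K = {stat:.3f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Reject the null hypothesis of equal population medians.")
    else:
        print("Fail to reject the null hypothesis at the 5% level.")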
After deriving the value of K, one can look at distribution tables of the K-statistic to obtain the p-value. If the p-value is smaller than 0.05, the null hypothesis of equal population medians can be rejected. However, if the p-value is higher than 0.05, then one would fail to reject the null hypothesis at the 5% significance level. Thus, understanding the Kruskal-Wallis Test is crucial for examining differences among two or more sample populations. |
Using Primary Sources in the Classroom
The Creek War
Lesson 2: Geography Determines History
1. Learning Objectives:
Upon completion of this lesson, students should be able to:
1. Describe the geographic location of the Creek War.
2. Suggested Activity for Entire lesson
1. Make a copy of Documents 1, 2, 3 and 4 for each student.
2. Ask the students to arrange the maps in chronological order. They may use their textbooks or other references to help with this.
3. After students complete these tasks, organize the class into four groups. Assign one map to each group. Ask each group to use the general guidelines for analyzing a map and compile their observations and report their conclusions to the class.
3. Suggested Activity for Document 1:
Note that the orientation of the map is unusual - the title and names of sites, printed sideways, distort the traditional north-south orientation. You may need to bring this to the attention of younger children; older students should discover this in their analysis.
1. Use this map to locate forts, battles, towns, etc., mentioned in other documents relating to the Creek War.
2. Orientation activity-- use blank Alabama map and ask students to mark locations of battles mentioned in documents and write them in with correct orientation.
3. Discuss the importance of the rivers for transportation and also the problems associated with travel for militia (waterways too shallow, flooded, need for boats, etc.).
4. Suggested Activity for Document 2:
The Creek Indian attack on Fort Mims was one of the primary causes of the Creek War of 1813-14. Over five hundred people lost their lives in the battle. During this period, General Ferdinand Leigh Claiborne served as leader of the Mississippi volunteers who defended settlers along the Alabama River. He and his forces defeated the Creeks at the Battle of Holy Ground in December 1813, effectively ending the uprising between the Alabama River and Lake Tensaw.
This map of Fort Mims and its environs belonged to Gen. Claiborne. The map delineates, with sketches of trees and shrubs, the clearing in which Fort Mims stood, and it shows a layout of the fort with simple sketches of the buildings within the barricades. The main road to the fort from the Pensacola road is marked as well as the main ferry landing on the Alabama River.
Various homes and businesses are noted. The map also contains numerous notations about the fort, the massacre, and the surrounding area. Notes identify the directions from which the Creek Indians advanced on the fort, the placement of troops defending the fort, and the fate of the homes and businesses in the area around the fort. The map of Fort Mims was probably created after the massacre.
1. When do you think the map was drawn? Before or after the battle? Why?
5. Suggested Activity for Document 3:
The Battle of Talladega occurred on 9 November 1813 near present-day Talladega, Alabama. The forces of General Andrew Jackson attacked a large number of Creek Indians, hostile to the Americans, who had surrounded a fort containing a number of Creek Indians allied with the Americans. Jackson's men killed over two hundred warriors and won the battle. This map, which appears to be of the Battle of Talladega, is not dated, and its creator is unknown. It includes the names of the United States commanders, a list of their troops' positions, and the directions in which their forces moved against the hostile Creeks. The map also shows the location of a camp of hostile Indians and a fort of friendly Indians along a small stream. The location of the hostile Creeks is highlighted in red pencil. A legend is also on the map.
2. Why is the legend important in understanding the map?
3. Do you think this map was made before or after the battle?
4. How did General Jackson know "friendly" (those allied with the Americans) Creeks were in the fort?
5. What would you include in a map if you were a spy giving information to your commanding officer?
6. Why do you think this map was made?
6. Suggested Activity for Document 4:
Leonard Tarrant was an officer during the Creek Indian War of 1813-14. Later, President Andrew Jackson appointed Tarrant as Indian Agent. He was also a Methodist minister who would later serve as a member of the Alabama legislature. The map of the Battle of Horseshoe Bend was made for Captain Tarrant after the battle when the Creeks had been defeated. The map of the Battle of Horseshoe Bend shows the position of the United States forces and the opposing Creek Indians in the bend along the Tallapoosa River for which the battle was named. The map also shows the location of the Creek's fortifications in the bend and the positions taken during the battle by General Andrew Jackson's forces. The location of the baggage and stores of the United States forces is noted, as well as the site of the Indian village, Tohopeka, in the bend and a line of "cragge" hills opposite the bend.
2. How did General Jackson overcome the natural barriers as well as the man-made barriers found at this site?
3. Why did the Cherokee Indians help fight against the Creeks?
4. Why was it important to note the "cragge" hills opposite the bend?
5. Compare this battle site with the other battle sites in Documents 1, 2, and 3.
Document 1: Map of the War in South Alabama in 1813 and 1814, CB-47, Alabama Department of Archives and History, Montgomery, Alabama.
Document 2: Ferdinand Leigh Claiborne, Map of Fort Mims and Environs, CB-23, Alabama Department of Archives and History, Montgomery, Alabama.
Document 3: Map of the Battle of Talladega, A-43, Alabama Department of Archives and History, Montgomery, Alabama.
Document 4: Leonard Tarrant, Map of the Battle of Horseshoe Bend, A-44, Alabama Department of Archives and History, Montgomery, Alabama.
Updated: February 23, 2010 |
Introduction to Web Services Part 3: Understanding XML
This article of the series may seem like a diversion from our focus area of Web Services. Nonetheless, as we mentioned in Part 1, Web Services use XML as their base language. Given this large-scale use of XML in Web Services, it is useful to take a look at some basic and advanced concepts of XML. With this article, we aim to lay the foundation for a better understanding of what XML is all about.
About XML—a Primer
XML (eXtensible Markup Language) is a universally agreed markup meta-language primarily used for information exchange. A good example of a markup language is the Hyper Text Markup Language (HTML). The beauty of XML lies in the fact that it is extensible. Simply put, XML is a set of predefined rules (syntactical framework) that you need to follow when structuring your data. For a long time, programmers and application vendors have built applications and systems deployed in an enterprise that process data that can be interpreted only by that enterprise's own systems—essentially, data structured in a proprietary fashion. But as information exchange between applications and systems across enterprises became prevalent, it became very difficult to exchange data because the systems were never designed to accept data from external, unknown systems. XML provides a standard and common data structure for sharing data between disparate systems. Additionally, XML has built-in data validation, which guarantees that the structure of the data that is received is valid.
Let us take a look at how data is represented using XML:
    <employee>
      <shift id="counter" time="8-12">
        <phone id="1">
          All phone information
          <number>3444333</number>
        </phone>
      </shift>
      <shift id="help_desk" time="1-5">
        <phone id="2">
          All phone information
          <number>332333</number>
        </phone>
      </shift>
      ...
      <home-address>
        <street>3434 Norwalk street</street>
        <city>New York</city>
        <state>NY</state>
      </home-address>
    </employee>
This illustrates an employee who has more than one shift (for example, he works mornings at the counter and evenings at the help desk).
In the preceding example, we represent the personal information and shift data for an employee in an organization. Notice how XML uses the distinctive "<> </>" tags similar to the tags used in HTML? This is because XML is a markup language much like HTML. The two primary building blocks of XML used in the preceding example are elements and attributes.
Elements are tags, just like the ones used in HTML, and have values. Further, elements are structured as a tree. Hence, you have elements organized in a hierarchical fashion with a base element (the parent element) and child elements; child elements themselves can have further child elements, and so on. In the preceding example, <employee> is the root element and has <shift> as its child element; further down, <phone> is the child of <shift>.
Elements have certain characteristics. Some of these characteristics are:
- Elements can contain data, such as the <number> element in the example.
- On the flip side, elements may not contain data but just attributes, such as the <shift> element.
- Alternatively, elements may have both attributes as well as data, and may also contain child elements, like the <phone> element in the example.
There are many more features and rules associated with elements, such as what valid names an element tag can have, elements have to be properly nested, and so on.
Attributes help you to give more meaning and describe your element more efficiently and clearly. In the preceding example, the <shift> element has an attribute "id" with values "counter" and "help_desk". With the use of such attributes, you can easily know that an employee can be working at a counter or help desk. This helps make the data in the XML document self-describing. You should always remember that the core purpose of attributes is to provide more information about the element and should not be used to contain the data itself. Just as with elements, attributes have many rules associated with them.
Document Type Definition (DTD)
Just as you need to know a language's specification before you start coding in any programming language, a DTD is a specification that has to be followed when creating an XML document. And just as one of the tasks of a compiler is to check whether the specification was followed, there are parsers that use the DTD to check an XML document's validity.
A DTD helps you to define the structure of your XML document. It provides a strict framework and rules to be followed when creating XML documents. In addition, DTD can be used to check the validity and integrity of the data contained in an XML document. A few salient features of DTD are listed below:
- DTD is used to specify valid elements and attributes that can be used in the XML document.
- With a DTD, you can define a tree hierarchy of the elements.
- Sequential organization of a collection of child elements that can exist in an XML document can also be defined by using a DTD.
A DTD can be placed directly inside the XML source or can exist outside the XML document, with a link specified in the XML document pointing to that DTD.
Basically, DTD consists of these items:
- DTD Elements: metadata about an element. This specifies what kind of data the element will have, the number of occurrences of each element, relationships between elements, and so on.
- DTD Attributes: specify various rules and specifications associated with the data.
- DTD Entities: used to reference an external file or to provide shortcuts to common text.
Example: <!ELEMENT employee (shift+, home-address, hobbies*)>
An employee can have one or more shifts a day, must have exactly one home-address, and can have zero or more hobbies.
Example: <!ATTLIST shift id CDATA #REQUIRED>
Every shift element must have an id attribute.
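Putting these pieces together, a complete internal DTD for the employee document shown earlier might look like the sketch below. The content models here (for example, that a shift contains one or more phone elements, or that hobbies holds plain text) are assumptions made for illustration, since the article only shows fragments of the document and its DTD.

    <!DOCTYPE employee [
      <!ELEMENT employee (shift+, home-address, hobbies*)>
      <!ELEMENT shift (phone+)>
      <!ATTLIST shift id   CDATA #REQUIRED
                      time CDATA #REQUIRED>
      <!ELEMENT phone (#PCDATA | number)*>
      <!ATTLIST phone id CDATA #REQUIRED>
      <!ELEMENT number (#PCDATA)>
      <!ELEMENT home-address (street, city, state)>
      <!ELEMENT street (#PCDATA)>
      <!ELEMENT city (#PCDATA)>
      <!ELEMENT state (#PCDATA)>
      <!ELEMENT hobbies (#PCDATA)>
    ]>

A validating parser would then reject any employee document that breaks these rules.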
In short, a DTD defines a document's structure by specifying the details of all the elements that are to be used, and it can therefore be used to check the validity of any XML document that is supposed to follow the rules laid down by that DTD.
Lesson 1 (from Chapter 1, Cache Lake, Portage to Contentment)
Chapter 1, Cache Lake, Portage to Contentment
Over the following 30 days, students will read "Cache Lake Country: Life in the North Woods." The objective of this lesson is to stimulate interest in studying John J. Rowlands' book.
1) Research Topic: Biographical overview of John J. "Jim" Rowlands. Focus on his background concerning the outdoor topics addressed in the book. Consider the chronological calendar style employed.
2) Class Discussion: In writing "Cache Lake Country: Life in the North Woods", what background or personal history was likely to have influenced Jim? What were the author's aims? How important do students expect Henry B. Kane's illustrations to have been in achieving those goals? What have those with published reviews of "Cache Lake Country: Life in the North Woods" had to say? What sort of interest do the students anticipate? Was there anything they came across in their research, or...
Schools as Learning Playgrounds
"Play is our brain's favorite way of learning." - Diane Ackerman
Get your copy of Hacking Digital Learning, The 30 Goals Challenge, or Learning to Go. Ask me about training your teachers, [email protected]!
In Chapter 26, Goal: Encourage Play, of The 30 Goals for Teachers, I talk about the importance of play for our learning, development, creativity, and mental wellness. With so much standardized testing and stress on data, play is being taken away. Children are being tested as early as Kindergarten.
Strive to have your learners play outdoors a few times this year. Play is important for all ages. Below are ideas for getting your students to learn and play.
- Host outdoor board game challenges!
- Play sports! Host a field day, an Olympics games day, or teach them different sports popular in other countries, like curling.
- Students can work in groups to invent a sport. They decide the equipment, create the rules, then teach it to the class.
- Students can study the math and physics of the slides, swings, or other playground equipment.
- They can take what they learn and apply it into building their own playgrounds. You can have local engineers and contractors mentor them.
- Students can measure their shadows at different times of the day. Get them to bring in other objects and draw what they predict the shadows will be depending on the time and location.
- Get them to test different distances and angles with their bodies playing different sports to improve their game!
- Have fun learning with chalk! Learners can draw vocabulary their peers guess, create positive messages around the school then interview students the next day to determine the impact, learn math with hopscotch, or sketch out math word problems.
- Jump rope! Many chants teach literacy, vocabulary, grammar, and math. One example is this one: “A my name is ALICE, my brother/ sister's name is AL, we live in ALABAMA and we bring back APPLES. B my name is B___, my brother/ sister's name is B___, we live in B___ and we bring back B___.”
- Geocaching is where you find little treasures around the area people create. Others find it through free apps that list hints, the longitude, and latitude. Do a school version where students hide small containers of treasure and their peers find them via their longitude and latitude.
- Play and learn with a ball! Play ball Q&A where students catch a ball and answer questions. If a student doesn’t know the answer, he/she can throw the ball to another peer.
- Stick masking tape strips with icebreaker questions on a ball. Students catch the ball and answer the question touched. Then that student throws the ball to a peer.
- Makerspaces, learning stations, and genius hour projects allow students to explore hands-on and create.
- Send them on field research. In Texas, I'd take my students collecting water samples with SAWS engineers, bird watching with park rangers, fossil hunting with a paleontologist, and so forth.
- Take them on walks exploring the nature around them. They can create digital books classifying rocks, identifying bugs, naming plants and potential uses, or capturing the sounds of various birds.
- I recommend these tools and apps for creating their multimedia scrapbooks and posters- Bookcreator Canva Buncee Visme Tackk Thinglink Piktochart Biteslide Smore Glogster
- BookCreator iOS/Android App Redjumper.net/bookcreator
- Go on a scavenger hunt! Try these apps and web tools- KlikaKlu app, Goose Chase app, QRWild.com, and the Qr Treasure Hunt Generator.
- Send them on photo challenges. Get them taking pictures of fractions, vocabulary, etc. Give them digital badges for completing these challenges.
- Send them on an epic selfie adventure! Find a free template I created that your learners can adapt here.
Why Mars rover will be blasting its heat ray as it searches for life
The Mars rover Curiosity, which is due on the Red Planet next week, is outfitted with an infrared laser and telescope package called ChemCam that will vaporize bits of rock to study its chemical makeup.
In less than a week, a machine from another planet will arrive on an alien world, soon to start zeroing in on targets and zapping them with its heat ray.
War of the Worlds? Not quite.
It's the Mars rover Curiosity, the robotic star of NASA's $2.5-billion Mars Science Laboratory mission. Any zapping serves to answer a question that has captured the imagination of generations of scientists and the public: Has Mars ever hosted life?
Curiosity is slated to arrive on Mars early in the morning Eastern time on Aug. 6. If the landing goes well, Curiosity will explore the red planet's Gale Crater and its imposing Mt. Sharp. Both show tantalizing geological evidence that the dent in Mars' surface once might have sported environments capable of supporting at least simple forms of life.
The story is written in the chemical make-up of the rocks Curiosity examines. And a first cut at determining which rocks to drive to for analyzing in detail will be made from information gathered by ChemCam, an infrared laser and telescope package that sits atop Curiosity's extendable "neck."
The device, one of 10 science instruments on the rover, also will be hunting for water, either bound up in minerals or as ices in the soil Curiosity traverses. Researchers have identified water as a key requirement for the emergence and survival of life as they've come to know it on Earth.
ChemCam's approach, using a laser and mini telescope to identify atoms present in a distant object, already has found wide use on Earth in situations that would be dangerous for humans, says Darby Dyar, an astronomer at Mt. Holyoke College in Hadley, Mass., and a member of the ChemCam team.
Nuclear-power-plant operators use similar technology as a kind of fuel gauge for the uranium-oxide fuel rods in commercial nuclear reactors. The rods' composition changes as they are used up, she explains. Archaeologists have used the technique to identify the composition of artifacts. Scrap-metal recyclers use it to identify the types of steel they receive. And security specialists are eying it as a tool that could help screen for explosives at airports and along US borders.
The technology was adapted for space missions by a team led by Los Alamos National Laboratory geochemist Roger Wiens, ChemCam's lead scientist. The Mars Science Laboratory's mission marks the instrument's maiden flight.
On Mars, ChemCam represents the Annie Oakley among the rover's science packages. It can place its powerful laser beam on a spot the size of a period on a printed page at 23 feet – farther in the lab, Dr. Dyar acknowledges, but for Mars, 23 feet will do.
The beam plants 1-million-watt pulses on the spot for about five-billionths of a second each, heating the rock or dust it encounters to more than 3,500 degrees Fahrenheit, vaporizing the material.
From a hypothetical Martian's standpoint, the beam's encounter with rock looks like the spark from a butane barbecue lighter. But the spectrum from that tiny bit of light carries an enormous amount of information about the types of atoms present in the material vaporized and their relative abundance.
Indeed, the device is the only one aboard the rover that can identify atoms across the entire periodic table of elements, giving researchers more opportunity to test the makeup of rock types they didn't anticipate finding.
By comparing the results ChemCam delivers from Mars with the spectra of up to 2,000 so-called calibration samples on Earth, researchers will be able to identify the rocks and minerals ChemCam zaps.
And if the rock of interest is covered with dust? No worries. A series of pulses from ChemCam's laser becomes the high-tech whisk broom that exposes the rock surface scientists really want to analyze.
Yet even before ChemCam reaches the Martian surface, researchers are trying to tailor the approach to provide dates for rocks on other planets, much as geochemists date rocks on Earth.
The idea is to detect a sample's spectrum in even finer detail than ChemCam does, so that it picks up not just the signatures of atoms, but their variants, known as isotopes.
By comparing the relative abundance of specific isotopes, researchers will have a more precise tool for gauging the age of rock formations they encounter with future rovers. Currently, they get merely a qualitative estimate of age by counting craters or mapping the relative positions of different geologic features.
On Earth, even with state-of-the-art technology, dating rocks with a high degree of precision is still a difficult process, says Ralph Milliken, a planetary scientist at Brown University in Providence, R.I., and a member of the Mars Science Laboratory science team. It's highly unlikely for rover-based approaches to match the precision of measurements made in labs on Earth, he says.
Still, "even if you could get the absolute age of something plus or minus 500 million years, that would be huge for Mars," he says.
Gale Crater, Curiosity's landing site, "is a good example. There is a debate as to the age of the material inside that crater" – a debate that yields an estimate that ranges over a billion years.
The latter half of that estimate bumps up against the end of the planet's earliest, and presumably wettest, period. With rock ages uncertain to within a billion years, did the rocks of interest in the crater really come from a wetter period, or from a later, drier period?
It's a debate that bears directly on whether a young Mars – at least at this location – hosted potential habitats for life. It's a debate that the technology behind ChemCam eventually could help answer. |
Extinction is a word that refers to the disappearance of one or more species from the world. A great many organisms have appeared on Earth; unfortunately, around 95 percent of the organisms that have ever existed are already extinct, and only about 5 percent still exist today. Many things can trigger extinction. Careless human activity, environmental damage, pollution, climate change, and even natural disasters can trigger not only extinction but also mass extinction.
Because of extinction, countless organisms have disappeared from the world; more than 90 percent of the organisms that ever existed are already extinct. Many individual extinctions have occurred, but only five mass extinctions have happened so far: at the end of the Ordovician, at the end of the Devonian, at the end of the Permian, at the end of the Triassic, and at the end of the Cretaceous.
The largest of these was the mass extinction at the end of the Permian, which wiped out more than 90 percent of marine species and around 70 percent of terrestrial species, including insects; it is considered the largest extinction ever to happen on Earth. The other mass extinctions also wiped out a great share of the organisms alive at the time. The most well-known, however, is the mass extinction at the end of the Cretaceous, which caused the dinosaurs to disappear suddenly from the Earth around 65 million years ago. This famous mass extinction was caused by the impact of a giant meteor. The meteor struck the Earth's surface near the Yucatan Peninsula at a low angle, causing tsunamis. The impact also threw dust, gravel, rocks, and other material into the atmosphere; the dust cloud covered the sky for a long period, prevented sunshine from reaching the Earth's surface, and so triggered the mass extinction.
Nowadays, although no mass extinction is under way, many organisms are still in danger of extinction. Some of the endangered organisms are not even known yet. |
Adverbs are words that modify verbs. They can also be used to modify another adverb or an adjective, and can be created from adjectives. Both adjectives and adverbs can be used to create comparisons.
In the sentence “He is quick,” the adjective “quick” describes the pronoun “he.” If the sentence changes to describe something he does, such as “he works quickly,” the adverb “quickly” is used because it modifies the verb “works.” In English, many adverbs are created by adding the suffix “–ly” to an adjective. Many adverbs in Spanish are created by adding the suffix – mente to the end of an adjective. When you see a Spanish word that ends in – mente, try picturing “–ly” on the end of the word and you may recognize a simple cognate that looks very similar to its English equivalent.
In both languages, there are some adverbs that are simple, independent words, but many adverbs are based on an adjective. To create this type of adverb in Spanish, you must use the feminine form of the adjective, if it exists. For example, final, the adjective behind finalmente, does not have a distinct feminine form. The basic rules for creating the feminine form of adjectives are included with the examples in this section.
Add – mente to the end of the singular, feminine form (whenever possible) of an adjective, and you have an adverb. Adverbs do not vary in form even though you must use the feminine form of the adjective to create the adverb. Table 1 uses several examples to demonstrate how to create an adverb from an adjective that ends in – o.
An adjective that ends in an – e is the same in its feminine form, so you just need to add – mente to make it into an adverb, as shown in Table 2.
An adjective that ends in a consonant normally does not add an – a to the end to make it feminine (unless it is an adjective of nationality). Therefore, as you can see in Table 3, you just add – mente to an adjective that ends in a consonant to make the adverb form.
Notice in the following examples that an adverb created from an adjective that has a written accent mark will retain the same written accent.
A few specific adverbs have no suffix and are identical to the adjective. The following words can be used as adjectives or adverbs. When used as adverbs, they look like a singular masculine form of the adjective, but since they actually modify a verb, they do not have gender and will not change endings.
Table 4 shows those words in action as an adverb and as an adjective.
There is one irregular adverb that is troublesome in both languages. The adverb form of “good” is “well,” which is irregular in English. Not only is the adverb “well” formed irregularly, but the adjective form “good” is often used incorrectly to modify verbs. To describe a noun or a pronoun, you must use the adjective “good.” To describe a verb, you must use the adverb “well.”
For example, “The book is good” uses the adjective “good” to modify the noun “book.” In “The author writes well,” the adverb “well” modifies the verb “writes.” It is common to hear the incorrect sentence, “he writes good.” The same problem occurs in Spanish. The word bueno is the equivalent to the English adjective “good,” and the adverb form of bueno is irregular, also. The adverb bien is the equivalent to the English adverb “well.”
A similar phenomenon occurs with the adjective malo (bad) and the adverb mal (badly). It is somewhat easier to remember that malo is the adjective form because it ends in – o, so you would have to determine the gender of the noun it modifies in order to use the right form of the adjective. If you can't find a noun and realize that malo modifies a verb, you must use the adverb mal instead.
In the following examples, notice that mejor (better) and peor (worse) are extremely unusual. The same word can be used as an adjective or adverb, but it does not change endings when used as an adverb. |
Forward head posture is a postural problem in which the head is pushed forward and out of the body's centre of gravity, which increases the risk of cervical disc problems and degenerative problems of the cervical and thoracic spine. This posture is usually associated with thoracic spine kyphosis (‘hunch’ of the upper back) and flattening of the lower back.
Backpacks: Children are now using backpacks to carry school books weighing up to an alarming 30-40 lbs! This forces the head forward to counter balance the weight resulting in abnormal stress to the discs, joints and nerves of the neck, shoulders and lower back.
Computer Ergonomics: Positioning computer screens too low or too high, coupled with the repetitive motion of moving the head forward to read the screen. This may be due to something as simple as insufficient screen brightness or small text size.
Use of bifocal glasses: causes a person to look up when they try to read using the bottom half of their glasses.
Video games/TV: Most kids use poor posture when playing video games and watching TV. Repetitively sitting in one position for long periods of time causes the body to adapt to this bad posture.
Trauma: Falls and trauma can cause whiplash resulting in muscle imbalance. This pulls the spine out of alignment forcing the head forward.
Weak muscles: The lack of developed back muscle strength causes muscle imbalances which may include:
- Tightness of the chest muscles and the neck extensors (scalenes and levator scapula)
- Weakness of the deep neck flexors (at the front of the neck)
- Weakness of the shoulder blade and upper back muscles (rhomboids and erector spinae).
These people usually have symptoms such as tingling or numbness in the arm, pain and stiffness of the neck and shoulder muscles, and burning pain between the shoulder blades. There may also be pain and grinding problems at the jaw.
The first step is correcting the posture of the patient with a stretching program for the chest and neck muscles and a strengthening program for shoulder blades and upper back muscles.
Physio Savvy offers a comprehensive exercise program for both groups of muscles: those that need strengthening and those that need stretching. The program includes posture-correction training for different daily activities, along with supportive devices such as cushions and rollers to correct the patient's posture. We apply various taping techniques to stimulate the muscles to perform their normal functions, and manual therapy to release tight structures is always incorporated into treatment to help with pain.
Prevention can be achieved by having:
- Good ergonomics in our life style to maintain the proper posture and alignment of the body in daily activities:
- Using the right shoes to stabilise the body from its base
- Using a proper pillow.
- Strengthening our shoulder and neck muscles before pain starts can be helpful to avoid these problems |
Sea Ice Patterns
On July 20, the U.S. Coast Guard Cutter Healy steamed south in the Arctic Ocean toward the edge of the sea ice.
The ICESCAPE mission, or "Impacts of Climate on Ecosystems and Chemistry of the Arctic Pacific Environment," is a NASA shipborne investigation to study how changing conditions in the Arctic affect the ocean's chemistry and ecosystems. The bulk of the research took place in the Beaufort and Chukchi seas in summer 2010 and 2011. Credit: NASA/Kathryn Hansen
NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.
Sean R. Avent
Introduction - Goals - Data Types - Autocorrelation - Correlograms - Box-Jenkins Models - Frequency Analysis - References
A time series is defined as a collection of observations made sequentially in time. This means that there must be equal intervals of time in between observations.
This page is designed for those who have a basic knowledge of elementary statistics and need a short introduction to time-series analysis. Many references are included for those who need to probe further into the subject which is suggested if these methods are to be applied. This guide will hopefully help people to decide if these are the correct applications to use on their data and to give a quick summary of the basics involved. For analyzing the data there are a number of statistical packages available.
Goals of Time Series Analysis
Time series analysis can be used to accomplish different goals:
1) Descriptive analysis determines what trends and patterns a time series has by plotting or using more complex techniques. The most basic approach is to graph the time series and look at:
Overall trends (increase, decrease, etc.)
Cyclic patterns (seasonal effects, etc.)
Outliers - points of data that may be erroneous
Turning points - different trends within a data series
2) Spectral analysis is carried out to describe how variation in a time series may be accounted for by cyclic components. This may also be referred to as "Frequency Domain". With this an estimate of the spectrum over a range of frequencies can be obtained and periodic components in a noisy environment can be separated out.
Example: What is seen in the ocean as random waves may actually be a number of different frequencies and amplitudes that are quite stable and predictable. Spectral analysis is applied to the wave height vs. time record to determine which frequencies are most responsible for the patterns that are there but can't be readily seen without analysis (a short code sketch of this idea appears after this list).
3) Forecasting can do just that - if a time series has behaved a certain way in the past, the future behavior can be predicted within certain confidence limits by building models.
Example: Tidal charts are predictions based upon tidal heights in the past. The known components of the tides (e.g., positions of the moon and sun and their weighted values) are built into models that can be employed to predict future values of the tidal heights.
4) Intervention analysis can explain if there is a certain event that occurs that changes a time series. This technique is used a lot of the time in planned experimental analysis. In other words, 'Is there a change in a time series before and after a certain event?'
Example: 1. If a plant's growth rate before changing the amount of light it gets is different from that afterwards, an intervention has occurred - the change in light is the intervention. 2. When a community of goats changes its behavior after a bear shows up in the area, then there may be an intervention.
5) Explanative Analysis (Cross Correlation)
Using one or more variable time series, a mechanism that results in a dependent time series can be estimated. A common question to be answered with this analysis would be "What relationship is there between two time series data sets?" This topic is not discussed within this page although it is discussed in Chatfield (1996) and Box et al. (1994).
Example: Atmospheric pressure and seawater temperature affect sea level. All of these data are in time series and can relate how and to what degree pressure and temperature affect the sea level.
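As a small sketch of the spectral-analysis idea in goal 2, the following code builds a noisy synthetic "wave height" series from two sine waves and uses SciPy's periodogram to pick out the underlying frequencies; the sampling rate and frequencies are arbitrary choices for illustration.

    # Sketch of frequency-domain analysis: a noisy signal made of two sine
    # waves is passed to a periodogram, which shows peaks at the underlying
    # frequencies even though they are hard to see in the raw series.
    import numpy as np
    from scipy.signal import periodogram

    fs = 10.0                                  # samples per second (arbitrary)
    t = np.arange(0, 60, 1 / fs)               # one minute of "wave height" data
    signal = (1.5 * np.sin(2 * np.pi * 0.2 * t)     # 0.2 Hz swell
              + 0.8 * np.sin(2 * np.pi * 1.1 * t)   # 1.1 Hz chop
              + np.random.normal(scale=1.0, size=t.size))

    freqs, power = periodogram(signal, fs=fs)
    strongest = freqs[np.argsort(power)[-2:]]  # two largest spectral peaks
    print("Dominant frequencies (Hz):", np.round(np.sort(strongest), 2))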
Types of Time Series Data
Continuous vs. Discrete
Continuous - observations made continuously in time
1. Seawater level as measured by an automated sensor.
2. Carbon dioxide output from an engine.
Discrete - observations made only at certain times.
1. Animal species composition measured every month.
2. Bacteria culture size measured every six hours.
Stationary vs. Non-stationary
Stationary - Data that fluctuate around a constant value
Non-stationary - A series whose cyclical parameters (i.e., length, amplitude or phase) change over time
Deterministic vs. Stochastic
Deterministic time series - These data can be predicted exactly.
Stochastic time series - Data are only partly determined by past values, and future values have to be described with a probability distribution. This is the case for most, if not all, natural time series. So many factors are involved in a natural system that we cannot possibly account for all of them correctly.
Transformations of the Data
We can transform data to:
1. Stabilize the variance - use the logarithmic transformation
2. Make the seasonal effect additive - this makes the effect constant from year to year - use the logarithmic transformation.
3. Make data normally distributed - this reduces the skewness in the data so that we may apply appropriate statistics - use the Box-Cox (logarithmic and square root) transformation
Many more transformations are available for the different things we may want to do with time-series data; these are discussed in the various texts listed throughout this page. A minimal sketch of the common transformations appears below.
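As a rough illustration of these transformations, here is a minimal Python sketch (assuming NumPy and SciPy are installed, and using an invented series y with growing variance; it is an illustration only, not a prescription for any particular data set):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(120)
y = np.exp(0.02 * t) * (1.0 + 0.1 * rng.standard_normal(120))   # toy series whose variance grows over time

log_y = np.log(y)                 # logarithmic transform: stabilizes variance, makes seasonal effects additive
boxcox_y, lam = stats.boxcox(y)   # Box-Cox family (includes log and square root as special cases)
print(f"estimated Box-Cox lambda: {lam:.2f}")

The Box-Cox routine estimates its transformation parameter lambda from the data; values near 0 correspond to the logarithm and values near 0.5 to the square root.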
A series of data may have observations that are not independent of one another. A population density on day 8 depends on what that population density was on day 7, which in turn depends on day 6, and so forth. The order of these data has to be taken into account so that we can assess the autocorrelation involved.
To find out if autocorrelation exists:
Autocorrelation Coefficients measure correlations between observations a certain distance apart.
Based on the ordinary correlation coefficient r (see Zar for a full explanation), we can see if successive observations are correlated. The autocorrelation coefficient at lag k is the lag-k covariance of (xt, xt+k) divided by the variance of xt:
rk = Σ (xt − x̄)(xt+k − x̄) / Σ (xt − x̄)²
where the sum in the numerator runs over t = 1 to N − k and the sum in the denominator runs over all N observations.
An rk value outside ±2/√N denotes a significant difference from zero and signifies autocorrelation.
Also note that as k gets large, rk becomes smaller.
As another test for the presence or absence of autocorrelation, the Durbin-Watson d-statistic can be employed:
d = Σ (et − et−1)² / Σ et²
where the et are the residuals; values of d near 2 suggest no autocorrelation.
Fig. 1 shows the five regions of values in which autocorrelation is accepted or not.
Figure 1. The five regions of the Durbin-Watson d-statistic.
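As a minimal computational sketch of the two checks just described (plain Python with NumPy, on a fabricated example series x; the function names are ours, not from any particular package):

import numpy as np

def autocorr(x, k):
    # Lag-k autocorrelation: lag-k covariance divided by the variance.
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    num = np.sum((x[:-k] - xbar) * (x[k:] - xbar))
    den = np.sum((x - xbar) ** 2)
    return num / den

def durbin_watson(residuals):
    # Durbin-Watson d: sum of squared successive differences over the sum of squares.
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(200))       # toy series with strong short-term correlation
N = len(x)
print(f"r1 = {autocorr(x, 1):.2f}, roughly significant if outside +/- {2 / np.sqrt(N):.2f}")
print(f"Durbin-Watson d = {durbin_watson(x - x.mean()):.2f} (values near 2 suggest no autocorrelation)")

Plotting autocorr(x, k) against k for k = 1, 2, 3, ... gives the correlogram discussed below.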
A Note on Non-Stationary Data
As stated above, non-stationary data has the parameters of the cycle involved changing over time. This is a trend that must be removed before the calculation of rk and the resulting correlograms seen below. Without this trend removal, the trend will tend to dominate the other features of the data.
The autocorrelation coefficient rk can then be plotted against the lag (k) to develop a correlogram. This will give us a visual look at a range of correlation coefficients at relevant time lags so that significant values may be seen.
The correlogram in Fig. 2 shows a short-term correlation that is significant at low k and small correlation at longer lags. Remember that an rk value outside ±2/√N denotes a significant difference (α = 0.05) from zero and signifies autocorrelation. Some procedures may call for a stricter significance level, since at α = 0.05 one out of every twenty values in a truly random data series is expected to appear significant.
Figure 2. A time series showing short-term autocorrelation together with its correlogram.
Fig. 3 shows an alternating (negative correlation) time series.
The coefficient rk alternates in sign just as the raw data do (r1 is negative, r2 is positive, and so on); the series begins with a negative r1, indicating negative (alternating) autocorrelation.
Figure 3. An alternating time series with its correlogram.
A fuller discussion of correlograms and the associated periodograms can be found in Chatfield (1996), Naidu (1996), and Warner (1998).
Box-Jenkins Models (Forecasting)
Box and Jenkins developed the AutoRegressive Integrated Moving Average (ARIMA) model, which combines the AutoRegressive (AR) and Moving Average (MA) models developed earlier with a differencing factor that removes trend in the data.
This time series data can be expressed as: Y1, Y2, Y3, …, Yt−1, Yt
With random shocks (a) at each corresponding time: a1, a2, a3, …, at−1, at
In order to model a time series, we must state some assumptions about these 'shocks'. They have:
1. a mean of zero
2. a constant variance
3. no covariance between shocks
4. a normal distribution (although there are procedures for dealing with this)
An ARIMA (p,d,q) model is composed of three elements:
p: Autoregression
d: Integration or Differencing
q: Moving Average
A simple ARIMA (0,0,0) model without any of the three processes above is written as:
Yt = at
The autoregression process [ARIMA (p,0,0)] refers to how important previous values are to the current one over time. A data value at t1 may affect the data values of the series at t2 and t3, but its influence decays roughly exponentially as time passes, so the effect eventually approaches zero. Note that φ1 is constrained between −1 and 1, and as it becomes larger the effects at all subsequent lags increase.
Yt = φ1·Yt−1 + at
The integration process [ARIMA (0,d,0)] is differenced to remove the trend and drift of the data (i.e., it makes non-stationary data stationary). The first observation is subtracted from the second, the second from the third, and so on. So the final form without AR or MA processes is the ARIMA (0,1,0) model:
Yt = Yt-1 + at
The order of the process rarely exceeds one (d < 2 in most situations).
The moving average process [ARIMA (0,0,q)] is used for serial correlated data. The process is composed of the current random shock and portions of the q previous shocks. An ARIMA (0,0,1) model is described as:
Yt = at − θ1·at−1
As with the integration process, the MA process rarely exceeds the first order.
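As one possible illustration of fitting and forecasting with such a model (a sketch only, using Python's statsmodels package rather than SPSS, on an invented non-stationary series y):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(150)) + 0.05 * np.arange(150)   # toy series with drift

model = ARIMA(y, order=(1, 1, 1))    # p=1 autoregressive, d=1 difference, q=1 moving average
fit = model.fit()
print(fit.summary())

forecast = fit.get_forecast(steps=10)
print(forecast.predicted_mean)       # point forecasts
print(forecast.conf_int())           # confidence limits for the forecasts

The order (p,d,q) would normally be chosen by inspecting the correlogram and partial autocorrelations, not fixed in advance as it is here.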
Time Series Intervention Analysis
The basic question is "Has an event had an impact on a time series?"
The null hypothesis is that the level of the series before the intervention (bpre) is the same as the level of the series after the intervention (bpost), or
Ho: bpre − bpost = 0
After building the ARIMA model, an intervention term (It) can be added and the ARIMA equation is now a noise component (Nt):
Yt = f(It) + Nt
The intervention component can be of four different types that are described by their onset and duration characteristics (Fig. 4):
Figure 4. Types of intervention components. From McDowall et al. (1980).
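A bare-bones sketch of the abrupt, permanent case (again using Python's statsmodels rather than SPSS, on a fabricated series with a known intervention time t0) is to add a step indicator It as an extra regressor alongside the ARIMA noise model; the estimated coefficient on the step then estimates the change in level:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n, t0 = 200, 120                                   # t0 = time of the intervention
step = (np.arange(n) >= t0).astype(float)          # It: 0 before the event, 1 afterwards
y = 5.0 + 2.5 * step + rng.standard_normal(n)      # toy series whose level shifts at t0

fit = ARIMA(y, exog=step.reshape(-1, 1), order=(1, 0, 0)).fit()
print(fit.params)     # the exogenous ('x1') coefficient estimates the shift in level
print(fit.pvalues)    # a small p-value argues against Ho: bpre - bpost = 0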
Frequency analysis is used to decompose a time series into an array of sine and cosine functions, which can be plotted by their wavelengths. This spectrum of wavelengths can then be analyzed to determine which are most relevant (see Fig. 5). In Fig. 5 you can't tell what the major components of the raw data are, but once a spectral analysis is completed, you can pick out the relevant wavelengths.
In any one of these analyses, the data is considered to be stationary. If it is not, then a filter should be applied to the data before instituting the appropriate analysis. All angles are presented as radians.
Figure 5. Frequency analysis data sets. The top four plots are the raw data, while the bottom four are the periodograms for the top four (not shown in the same order).
A Harmonic Analysis (a type of regression analysis) is used to fit a model when the period or cycle length is known a priori. This can estimate the amplitude, cycle phase, and mean.
Xt = μ + A·cos(ωt) + B·sin(ωt) + et
ω = 2π/τ (the period τ is known)
t = observation time or number
A and B = coefficients
e = residuals that are uncorrelated
Given τ, we can use OLS regression methods to estimate the amplitude and the phase of the cycle.
Amplitude: R = (A² + B²)^1/2
Phase: φ = arctan(−B, A)
Using SPSS, we run a multiple regression with cos(ωt) and sin(ωt) as the predictor variables to obtain estimates of A and B. Once we have these, we can calculate the amplitude and phase, and the model is fit.
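For readers who want to see the arithmetic itself, here is a minimal sketch (plain NumPy least squares rather than SPSS, on an invented series with a known period τ) of fitting the harmonic model and recovering the amplitude and phase:

import numpy as np

rng = np.random.default_rng(4)
n, tau = 240, 12.0                    # known period tau (e.g., 12 observations per cycle)
t = np.arange(n)
w = 2 * np.pi / tau                   # omega = 2*pi / tau
x = 3.0 + 1.5 * np.cos(w * t + 0.8) + 0.5 * rng.standard_normal(n)   # toy series

# Design matrix for Xt = mu + A*cos(wt) + B*sin(wt) + e, fit by ordinary least squares
X = np.column_stack([np.ones(n), np.cos(w * t), np.sin(w * t)])
mu, A, B = np.linalg.lstsq(X, x, rcond=None)[0]

R = np.hypot(A, B)            # amplitude R = sqrt(A^2 + B^2)
phi = np.arctan2(-B, A)       # phase = arctan(-B, A)
print(f"mean = {mu:.2f}, amplitude = {R:.2f}, phase = {phi:.2f} rad")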
A Periodogram or Spectral Analysis is used if there is no reason to suspect a certain period or cycle length. These methods fit a suite of cycles of differing lengths or periods to the data.
To find which sinusoidals describe the data and to what degrees, a generalization of the harmonic analysis is applied to the residuals of the data. The overall SS variance is partitioned into N/2 periodic components each with df=2. Then a harmonic analysis is done on each component and summed in an ANOVA source table. From this, we get estimates of A and B (SSs) for each component and as they are additive to the SStotal, we can get estimates of variances for each component.
The null hypothesis is that the variances are all the same and this is indicative of white noise. This is plotted with intensity or SS on the Y-axis, while the X-axis is composed of the frequencies. A large peak represents a frequency that varies the data significantly.
Xt = μ + Σk [Ak·cos(ωk·t) + Bk·sin(ωk·t)]
Dependent variable: Xt = time series
Independent variables: A = cosine parameter (regression coefficient); B = sine parameter (regression coefficient)
A and B determine the degree to which each function is correlated with the data.
Since the sine and cosine functions are orthogonal (mutually independent), periodogram values (Pk) are created and plotted against the frequency. These values are interpreted as variances of the frequencies.
Pk = (Ak² + Bk²)·N/2
where Pk = the periodogram value at the k-th frequency ωk and N = overall length of the series.
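A brief sketch of computing a periodogram when no period is known in advance (assuming SciPy, on a made-up series containing two hidden cycles plus noise):

import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(5)
t = np.arange(512)
x = np.sin(2 * np.pi * t / 50) + 0.5 * np.sin(2 * np.pi * t / 11) + rng.standard_normal(t.size)

freqs, power = periodogram(x)            # frequencies are in cycles per observation
freqs, power = freqs[1:], power[1:]      # drop the zero frequency (the series mean)
top = freqs[np.argsort(power)[-2:]]      # frequencies of the two largest peaks
print("dominant periods (observations per cycle):", np.sort(1 / top))

The two largest peaks should fall near the hidden periods of 50 and 11 observations.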
Since the true data are not sampled continuously, a significant period peak may leak into adjacent frequencies. To alleviate this problem, remedies such as tapering or windowing the data (among others) are suggested.
These methods can be found in Warner (1998), Chatfield (1996), and Gardner (1988).
In order to get a power spectrum, we must smooth the data from the periodogram so that each periodogram intensity is replaced by an average that includes weighted neighboring values. This gives a better and more reliable picture of the distribution of power (or variance accounted for). Smoothing procedures can differ by window width and weighting function.
Fourier frequencies are chosen with the longest cycle equal to the length of the series and the shortest cycle having a period of two observations. All frequencies in between are equally spaced and don't overlap. A Fast Fourier Transform uses the Euler relation and complex numbers (Chatfield, 1996) and is too math-intensive to do practically by hand. SPSS has a fast Fourier transform built in for these analyses.
Spectrum analysis significance tests use upper and lower bounds of a confidence interval that are derived using a χ² distribution. The degrees of freedom depend on what kind of smoothing was used. This confidence interval can be superimposed on the power spectrum so that significant values may be seen.
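One common way to carry out this kind of smoothing (a sketch, assuming SciPy; Welch's method averages tapered, overlapping segments, which is just one possible choice of window width and weighting function):

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(6)
t = np.arange(2048)
x = np.sin(2 * np.pi * t / 64) + rng.standard_normal(t.size)

# Averaging windowed segments trades frequency resolution for a less noisy spectrum estimate.
freqs, power = welch(x, nperseg=256, window="hann")
print("peak near frequency", freqs[np.argmax(power)], "(true value 1/64 = 0.015625)")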
For a more complete description see any one of the spectral analysis books listed below, but especially Chatfield (1996) and Warner (1998).
There are many references available for time series analysis. Most refer to applications in econometrics or the social sciences, but most techniques can be applied to the biological sciences. Most of the web pages listed below involve very advanced theory.
Books held by the SFSU Library
Box, G.E.P., G.M. Jenkins, and G.C. Reinsel. 1994. Time Series Analysis: Forecasting and Control. 3rd ed. Prentice Hall, Englewood Cliffs, NJ, USA. (A great introductory section, although the rest of the book is very involved and mathematically in-depth.)
Chatfield, C. 1996. The Analysis of Time Series: An Introduction. 5th ed. Chapman and Hall, London, UK. (A very good and readable book that goes over most aspects of time series data. Highly recommended.)
Gardner, W.H. 1988. Statistical Spectral Analysis: A Nonprobabilistic Theory. Prentice-Hall Inc., Englewood Cliffs, NJ, USA. (An in-depth book with advanced features and methods.)
Harvey, H.C. 1981. Time Series Models. Halstead Press, New York, NY, USA. (A moderately involved book with some understandable sections on model building.)
McDowall, D., R. McCleary, E.E. Meidinger, and A.H. Richard Jr. 1980. Interrupted Time Series Analysis. Sage Publications, Inc., Thousand Oaks, CA, USA. (A good book in the Sage series on intervention analysis that covers the basics quite well. Very readable.)
Naidu, P.S. 1996. Modern Spectrum Analysis of Time Series. CRC Press Inc., Boca Raton, FL, USA. (A complete account of spectrum analysis, but very involved and assumes great comfort with basic statistics.)
Ostrom, C.W. 1978. Time Series Analysis: Regression Techniques. Sage Publications, Beverly Hills, CA, USA. (A good and short book in the Sage series that goes over the basics with decent ease.)
Warner, R.M. 1998. Spectral Analysis of Time-Series Data. Guilford Press, New York, NY, USA. (A very good book on spectral analysis that is especially good with experimental design and data collection/entry.)
SPSS for Beginners ($5.95): can be downloaded as a PDF (Acrobat Reader) file for a small fee. Chapter 17, Time Series Analysis, can be downloaded separately for free from the SPSS site.
Statsoft online textbook: covers most aspects of time series analysis. Very complete and readable.
Autobox tutorial: a rather bulky tutorial on ARIMA models.
Carnegie Mellon University datasets: a very wide range of datasets to play with.
Rob J Hyndman's Forecasting Pages: a set of pages with everything forecasting.
Time Series Analysis and Chaosdynamics (Rotating Fluids): a very in-depth page on advanced time series analysis.
Forum sci.stat.edu: if you get stuck, you can post a question to this forum.
Forum sci.stat.consult: another forum for statistics questions.
Forum comp.soft-sys.stat.spss: for any SPSS question, use this forum.
SPSS: lots of software; a great statistics package.
AFS Autobox: looks useful, but I haven't played with it. Starts at $400 and goes up from there. Forecasting and intervention analysis.
UCLA Statistics Bookmark Database: need software? Look here.
Page last updated 14 Dec 1999.
What is CCS?
CCS is a three-step process that involves capturing the CO2 from power plants and other industrial and energy-related sources, transporting it to storage points, and then storing it safely in depleted oil and gas fields and deep saline aquifers, as well as possible sites onshore.
CO2 capture is the process of removing CO2 (carbon dioxide) produced by hydrocarbon combustion (coal, oil and gas) before it enters the atmosphere. The process will be most cost effective when it is used on large point sources of CO2 such as power stations and industrial plants. These currently make up more than half of all man-made CO2 emissions.
There are currently three main methods of capturing CO2:
- Post-combustion capture - removing the dilute CO2 from flue gases after hydrocarbon combustion
- Pre-combustion capture - removal of CO2, prior to combustion, to produce hydrogen
- Oxy-fuel combustion capture - burning fossil fuels in pure oxygen as opposed to air resulting in a more complete combustion
CO2 capture is likely to be most economic at large point sources of CO2 such as power stations and large industrial plants. In most cases these will not be close to a suitable underground reservoir for storing the CO2 and therefore the CO2 will have to be transported.
Transport is currently the least complicated element in the CO2 capture and storage chain as the technology is already in existence and costs can be realistically estimated.
The main complication with CO2 transport is that CO2 behaves differently under varying pressures and temperatures and therefore transport of CO2 must be carefully controlled to prevent solidification and blockages occurring.
There are currently two methods used to transport large volumes of CO2 by industry:
- Pipeline Transport
- Ship Transport
CO2 storage is simply the process of taking captured CO2 and placing it in a location where it will not be in contact with the atmosphere for thousands of years. Storage of the CO2 in underground sites beneath a layer of impermeable rock (cap rock), which acts as a seal to prevent the CO2 from leaking out, is the most obvious option at present.
There are three main types of proposed underground storage sites:
- Depleted Oil and Gas Reservoirs
- Deep Saline Aquifers
- Deep Unminable Coal Seams
You can read more about CCS at The Scottish Centre for Carbon Storage: http://www.geos.ed.ac.uk/sccs
By using a supersonic nozzle more commonly found at the business end of rocket and jet engines, researchers at the University of Illinois at Chicago have devised a very simple and inexpensive way of producing high-quality, defect-free sheets of graphene on a range of substrates (materials). Other methods of producing large quantities of defect-free graphene have so far proved elusive; and for the purposes that we're interested in (replacing silicon in microelectronics), anything less than defect-free just won't do.
The method, developed by the University of Illinois in association with some researchers in South Korea, is painfully simple — which is usually a very good sign, when it comes to scaling a process up to industrial levels. Basically, the researchers take a commercially available graphene suspension (a fluid with low-quality graphene flakes dispersed in it), and then use a supersonic spray gun to deposit the graphene on a substrate. No further treatment is required, apparently.
Usually, this process would just lead to the substrate being covered unevenly in the suspension, with random aggregations of graphene flakes that lack the awesome properties that we’ve come to know and love. But, of course, this being graphene, something magical seems to happen when it’s sprayed at supersonic speeds: When the graphene hits the substrate, there’s enough kinetic energy that it spreads out perfectly into a thin, single-atom-thick layer of pristine graphene. “Imagine something like Silly Putty hitting a wall — it stretches out and spreads smoothly,” says Alexander Yarin, co-leader of the research. “That’s what we believe happens with these graphene flakes. They hit with enormous kinetic energy, and stretch in all directions. We’re tapping into graphene’s plasticity — it’s actually restructuring.”
The secret sauce, if there is any (it really is a depressingly simple approach), is the use of a de Laval nozzle. The de Laval nozzle is a stretched hourglass shape, with a pinch in the middle that forces fluids to accelerate to supersonic speeds and then shapes the exhaust. The nozzle is usually used in rocket and jet engines to accelerate the pressurized gases and generate more thrust. In this case, it's just about accelerating the graphene suspension so that it leaves with enough kinetic energy to trigger the Silly Putty effect. [DOI: 10.1002/adfm.201400732 – "Self-Healing Reduced Graphene Oxide Films by Supersonic Kinetic Spraying"]
Because this really is just supersonic spraying, and because no post-processing is required, the researchers say the method can be used to coat many different materials and shapes with high-quality graphene. I’m not aware of spraying being used in current chip fabrication processes, but I don’t think it would be hard to include it (it’s a lot simpler than current chemical vapor deposition techniques). [Read: Graphene aerogel is seven times lighter than air, can balance on a blade of grass.]
Moving forward, the Chicagoans and Koreans want to scale this method up, with the hope of fostering the development of industrial scale applications of graphene. Over the last few months, we've seen a few methods of high-quality graphene production that work on the small scale, and could potentially work on the industrial scale — but now it's time for a research lab to put its money where its mouth is and actually try some industrial-scale production. With the amount of research that's going into graphene, I wouldn't be surprised if that occurs this year.
- 1 Terminology
- 2 Distances and zones
- 3 Layout and structure
- 4 Formation
- 5 Sun
- 6 Interplanetary medium
- 7 Inner planets
- 8 Asteroid belt
- 9 Outer planets
- 10 Comets
- 11 Kuiper belt
- 12 Scattered disc
- 13 Heliosphere
- 14 Inner Oort cloud
- 15 Oort cloud
- 16 Boundaries
- 17 Galactic context
- 18 Discovery and exploration
- 19 Notes
- 20 See also
- 21 References
The Solar System is a planetary system containing a central star, the Sun, with gravitationally bound, orbiting members including the Earth, seven other planets with their moons, dwarf planets and their moons, and thousands of other small bodies: asteroids, comets, meteoroids, and interplanetary dust. The Solar System orbits the core of the Milky Way galaxy, along with billions of other stars.
It is divided into regions. They are, in order of proximity to the Sun: the four inner planets closest to the Sun, an inner belt of asteroids, the four giant outer planets, the Kuiper belt (a belt of asteroids and icy bodies), a region called the scattered disc, the heliopause, which marks the outer boundary of the Sun's solar wind, and a hypothetical region known as the Oort Cloud.
The planets orbit the Sun in the following order: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. Six of these planets have their own natural satellites (usually termed "moons" after Earth's Moon). In addition, the four giant planets are encircled by planetary rings of dust and other particles.
There are five dwarf planets: Pluto, the largest known Kuiper belt object; Haumea and Makemake, also in the Kuiper belt; Ceres, the largest object in the asteroid belt; and Eris in the scattered disc.
According to the International Astronomical Union (IAU), objects orbiting the Sun are divided into three classes: planets, dwarf planets, and small solar system bodies.
A planet is any body
- (a) in orbit around the Sun
- (b) that has enough mass to form itself into a spherical shape
- (c) that has cleared its immediate neighborhood of all smaller objects.
There are eight known planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. Pluto was originally classified as a planet but is now classified as a dwarf planet.
A dwarf planet meets two of the three IAU planetary requirements:
- (a) it is in orbit around the Sun
- (b) it has enough mass to form itself into a spherical shape
Given their much smaller mass, dwarf planets are not required to clear their neighborhood of other celestial bodies.
Small solar system bodies
All other objects orbiting the Sun, including most asteroids and comets, are classed as small solar system bodies.
Distances and zones
Astronomers most often measure distances within the Solar System in astronomical units (AU). One AU is the approximate distance between the Earth and the Sun or roughly 149 598 000 km (93,000,000 mi). Pluto is roughly 38 AU from the Sun while Jupiter lies at roughly 5.2 AU. One light year, the best known unit of interstellar distance, is roughly 63,240 AU.
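As a quick worked illustration of these units (a Python sketch using the approximate conversion figures quoted above), converting Pluto's rough distance into kilometres and light years:

KM_PER_AU = 149_598_000       # kilometres in one astronomical unit (approximate)
AU_PER_LIGHT_YEAR = 63_240    # astronomical units in one light year (approximate)

pluto_au = 38                 # Pluto's rough distance from the Sun, in AU
print(f"Pluto: about {pluto_au * KM_PER_AU:.2e} km from the Sun")
print(f"Pluto: about {pluto_au / AU_PER_LIGHT_YEAR:.5f} light years from the Sun")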
Informally, the Solar System is sometimes divided into separate zones. The inner Solar System includes the four terrestrial planets and the main asteroid belt. Some define the outer Solar System as comprising everything beyond the asteroids. Others define it as the region beyond Neptune, with the four gas giants considered a separate "middle zone".
Layout and structure
The principal component of the Solar System is the Sun, a main sequence G2 star that contains 99.86% of the system's known mass and dominates it gravitationally. Jupiter and Saturn, the Sun's two largest orbiting bodies, account for more than 90% of the system's remaining mass.[b] The currently hypothetical Oort cloud would also hold a substantial percentage were its existence confirmed.
Most objects in orbit around the Sun lie near the ecliptic, the plane of Earth's orbit. The planets are very close to the ecliptic, while comets and Kuiper belt objects are usually at significantly greater angles to it.
All of the planets and most other objects also orbit with the Sun's rotation in a counter-clockwise direction as viewed from a point above the Sun's north pole. There are exceptions, such as Halley's Comet. Objects travel around the Sun following Kepler's laws of planetary motion. Each object orbits along an ellipse with the Sun at one focus of the ellipse. The closer an object is to the Sun the faster it moves. The orbits of the planets are nearly circular, but many comets, asteroids and objects of the Kuiper belt follow highly elliptical orbits.
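To make the "closer means faster" rule concrete, here is a small Python sketch using Kepler's third law in its simple form (period in years squared equals semi-major axis in AU cubed, which holds for bodies orbiting the Sun); the planet values are rounded illustrative figures:

import math

semi_major_axes_au = {"Mercury": 0.39, "Earth": 1.0, "Jupiter": 5.2, "Neptune": 30.1}

for name, a in semi_major_axes_au.items():
    period_years = a ** 1.5                           # Kepler's third law: T = a^(3/2)
    mean_speed = 2 * math.pi * a / period_years       # rough mean orbital speed, in AU per year
    print(f"{name}: period about {period_years:.1f} yr, mean speed about {mean_speed:.1f} AU/yr")

Mercury comes out moving several times faster along its orbit than Neptune, in line with Kepler's laws.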
To cope with the vast distances involved, many representations of the Solar System show orbits the same distance apart. In reality, with a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between it and the previous orbit. For example, Venus is approximately 0.33 AU farther out than Mercury, while Saturn is 4.3 AU out from Jupiter and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a correlation between these orbital distances (see Bode's Law) but no such theory has been accepted.
The Solar System is believed to have formed according to the nebular hypothesis, first proposed in 1755 by Immanuel Kant and independently formulated by Pierre-Simon Laplace. This theory holds that 4.6 billion years ago the Solar System formed from the gravitational collapse of a giant molecular cloud. This initial cloud was likely several light-years across and probably birthed several stars. Studies of ancient meteorites reveal traces of elements only formed in the hearts of very large exploding stars, indicating that the Sun formed within a star cluster, and in range of a number of nearby supernovae explosions. The shock wave from these supernovae may have triggered the formation of the Sun by creating regions of overdensity in the surrounding nebula, allowing gravitational forces to overcome internal gas pressures and cause collapse.
The region that would become the Solar System, known as the pre-solar nebula, had a diameter of between 7000 and 20,000 AU and a mass just over that of the Sun (by between 0.1 and 0.001 solar masses). As the nebula collapsed, conservation of angular momentum made it rotate faster. As the material within the nebula condensed, the atoms within it began to collide with increasing frequency. The center, where most of the mass collected, became increasingly hotter than the surrounding disc. As gravity, gas pressure, magnetic fields, and rotation acted on the contracting nebula, it began to flatten into a spinning protoplanetary disk with a diameter of roughly 200 AU and a hot, dense protostar at the center.
Studies of T Tauri stars, young, pre-fusing solar mass stars believed to be similar to the Sun at this point in its evolution, show that they are often accompanied by discs of pre-planetary matter. These discs extend to several hundred AU and reach only a thousand kelvins at their hottest.
After 100 million years, the pressure and density of hydrogen in the centre of the collapsing nebula became great enough for the protosun to begin thermonuclear fusion. This increased until hydrostatic equilibrium was achieved, with the thermal energy countering the force of gravitational contraction. At this point the Sun became a fully fledged star.
From the remaining cloud of gas and dust (the "solar nebula"), the various planets formed. They are believed to have formed by accretion: the planets began as dust grains in orbit around the central protostar; then gathered by direct contact into clumps between one and ten kilometres in diameter; then collided to form larger bodies (planetesimals) of roughly 5 km in size; then gradually increased by further collisions at roughly 15 cm per year over the course of the next few million years.
The inner solar system was too warm for volatile molecules like water and methane to condense, and so the planetesimals which formed there were relatively small (comprising only 0.6% the mass of the disc) and composed largely of compounds with high melting points, such as silicates and metals. These rocky bodies eventually became the terrestrial planets. Farther out, the gravitational effects of Jupiter made it impossible for the protoplanetary objects present to come together, leaving behind the asteroid belt.
Farther out still, beyond the frost line, where more volatile icy compounds could remain solid, Jupiter and Saturn became the gas giants. Uranus and Neptune captured much less material and are known as ice giants because their cores are believed to be made mostly of ices (hydrogen compounds).
Once the young Sun began producing energy, the solar wind (see below) blew the gas and dust in the protoplanetary disk into interstellar space and ended the growth of the planets. T-Tauri stars have far stronger stellar winds than more stable, older stars.
The Sun is the Solar System's parent star, and far and away its chief component. Its large mass gives it an interior density high enough to sustain nuclear fusion, which releases enormous amounts of energy, mostly radiated into space as electromagnetic radiation such as visible light.
The Sun is classified as a moderately large yellow dwarf, but this name is misleading as, compared to stars in our galaxy, the Sun is rather large and bright. Stars are classified by the Hertzsprung-Russell diagram, a graph which plots the brightness of stars against their surface temperatures. Generally, hotter stars are brighter. Stars following this pattern are said to be on the main sequence; the Sun lies right in the middle of it. However, stars brighter and hotter than the Sun are rare, while stars dimmer and cooler are common.
It is believed that the Sun's position on the main sequence puts it in the "prime of life" for a star, in that it has not yet exhausted its store of hydrogen for nuclear fusion. The Sun is growing brighter; early in its history it was 75 percent as bright as it is today.
Calculations of the ratios of hydrogen and helium within the Sun suggest it is halfway through its life cycle. It will eventually move off the main sequence and become larger, brighter, cooler and redder, becoming a red giant in about five billion years.
The Sun is a population I star; it was born in the later stages of the universe's evolution. It contains more elements heavier than hydrogen and helium ("metals" in astronomical parlance) than older population II stars. Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the universe could be enriched with these atoms. The oldest stars contain few metals, while stars born later have more. This high metallicity is thought to have been crucial to the Sun's developing a planetary system, because planets form from accretion of metals.
Along with light, the Sun radiates a continuous stream of charged particles (a plasma) known as the solar wind. This stream of particles spreads outwards at roughly 1.5 million kilometres per hour, creating a tenuous atmosphere (the heliosphere) that permeates the Solar System out to at least 100 AU (see heliopause). This is known as the interplanetary medium. The Sun's 11-year sunspot cycle and frequent solar flares and coronal mass ejections disturb the heliosphere, creating space weather. The Sun's rotating magnetic field acts on the interplanetary medium to create the heliospheric current sheet, the largest structure in the solar system.
Earth's magnetic field protects its atmosphere from interacting with the solar wind. Venus and Mars do not have magnetic fields, and the solar wind causes their atmospheres to gradually bleed away into space. The interaction of the solar wind with Earth's magnetic field creates the aurorae seen near the magnetic poles.
Cosmic rays originate outside the solar system. The heliosphere partially shields the Solar System, and planetary magnetic fields (for planets which have them) also provide some protection. The density of cosmic rays in the interstellar medium and the strength of the Sun's magnetic field change on very long timescales, so the level of cosmic radiation in the solar system varies, though by how much is unknown.
The interplanetary medium is home to at least two disclike regions of cosmic dust. The first, the zodiacal dust cloud, lies in the inner Solar System and causes zodiacal light. It was likely formed by collisions within the asteroid belt brought on by interactions with the planets. The second extends from about 10 AU to about 40 AU, and was probably created by similar collisions within the Kuiper belt.
The four inner or terrestrial planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of minerals with high melting points, such as the silicates which form their solid crusts and semi-liquid mantles, and metals such as iron and nickel, which form their cores. Three of the four inner planets (Venus, Earth and Mars) have substantial atmospheres; all have impact craters and tectonic surface features such as rift valleys and volcanoes. The term inner planet should not be confused with inferior planet, which designates those planets which are closer to the Sun than the Earth is (i.e. Mercury and Venus).
- Mercury (0.4 AU) is the closest planet to the Sun and the smallest planet (0.055 Earth masses). Mercury has no natural satellites, and its only known geological features besides impact craters are "wrinkle ridges", probably produced by a period of contraction early in its history. Mercury's almost negligible atmosphere consists of atoms blasted off its surface by the solar wind. Its relatively large iron core and thin mantle have not yet been adequately explained. Hypotheses include that its outer layers were stripped off by a giant impact, and that it was prevented from fully accreting by the young Sun's energy.
- Mercury's sidereal orbital period is 87.97 days. Mercury's distance from the Sun is 0.31 AU at perihelion (its closest approach to the Sun) and 0.47 AU at aphelion (its farthest point from the Sun), and it has an orbital inclination of 7.0° relative to the solar plane.
- Mercury's surface temperature fluctuates from more than 400°C to -180°C. Planetary rotation means that alternating areas are exposed to the Sun's heat, so the surface heats and cools in alternating periods. Although Mercury does rotate about its axis, its rotational and orbital periods are coupled in a 3:2 spin-orbit resonance (it rotates three times for every two orbits). This coupling, together with its eccentric orbit, means that some places on Mercury's surface receive about 2.5 times more solar radiation than other areas. As the planet closest to the Sun, it receives the most intense solar radiation; solar radiation decreases with the inverse square of distance from the Sun. The Moon, by comparison, which also has no atmosphere but is much farther from the Sun than Mercury, reaches temperatures of only about 110°C.
- Venus (0.7 AU) is close in size to Earth (0.815 Earth masses), and, like Earth, has a thick silicate mantle around an iron core, a substantial atmosphere and evidence of internal geological activity. However, it is much drier than Earth and its atmosphere is about 90 times as dense as Earth's. Venus has no natural satellites. It is the hottest planet, with surface temperatures as high as 470 °C, most likely due to the amount of greenhouse gases, predominantly CO2, in the atmosphere, which retain solar radiation. No definitive evidence of current geological activity has been detected on Venus, but it has no magnetic field that would prevent depletion of its substantial atmosphere, which suggests that its atmosphere is regularly replenished by volcanic eruptions.
- Earth (1 AU) is the largest and densest of the inner planets, the only one known to have current geological activity, and the only planet known to have life. Its liquid hydrosphere, unique among the terrestrial planets, is probably the reason Earth is also the only planet where plate tectonics has been observed, because water acts as a lubricant for subduction. Earth's atmosphere is radically different from the other terrestrial planets, having been altered by the presence of life to contain 21 percent free oxygen. Earth has one satellite, the Moon, the only large satellite of a terrestrial planet in the Solar System.
- Mars (1.5 AU) is smaller than Earth and Venus (0.107 Earth masses). It possesses a tenuous atmosphere of carbon dioxide. Its surface, peppered with vast volcanoes such as Olympus Mons and rift valleys such as Valles Marineris, shows geological activity that may have persisted until very recently. Mars has two tiny moons, Deimos and Phobos, thought to be captured asteroids.
Asteroids are mostly small solar system bodies composed mainly of rocky and metallic non-volatile minerals.
The main asteroid belt occupies the orbit between Mars and Jupiter, between 2.3 and 3.3 AU from the Sun. It is thought to be remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter.
Asteroids range in size from hundreds of kilometers to microscopic. All asteroids save the largest, Ceres, are classified as small solar system bodies, but some asteroids such as Vesta and Hygiea may be reclassified as dwarf planets if they are shown to have achieved hydrostatic equilibrium.
The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometre in diameter. Despite this, the total mass of the main belt is unlikely to be more than a thousandth of that of the Earth. The main belt is very sparsely populated; spacecraft routinely pass through without incident. Asteroids with diameters between 10 m and 10⁻⁴ m are called meteoroids.
- Ceres (2.77 AU) is the largest body in the asteroid belt and its only dwarf planet. It has a diameter of slightly under 1000 km, large enough for its own gravity to pull it into a spherical shape. Ceres was considered a planet when it was discovered in the nineteenth century, but was reclassified as an asteroid in the 1850s as further observation revealed additional asteroids. It was again reclassified in 2006 as a dwarf planet.
- Asteroid groups
- Asteroids in the main belt are divided into asteroid groups and families based on their orbital characteristics. Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners. The asteroid belt also contains main-belt comets which may have been the source of Earth's water.
Trojan asteroids are located in either of Jupiter's L4 or L5 points (gravitationally stable regions leading and trailing a planet in its orbit), though the term is also sometimes used for asteroids at any other planetary Lagrange point. Hilda asteroids are a separate group whose orbits are in a 3:2 resonance with Jupiter; that is, they go around the Sun three times for every two Jupiter orbits.
The inner solar system is also dusted with rogue asteroids, many of which cross the orbits of the inner planets.
The four outer planets, or gas giants (sometimes called Jovian planets), collectively make up 99 percent of the mass known to orbit the Sun. Jupiter and Saturn's atmospheres are largely hydrogen and helium. Uranus and Neptune's atmospheres have a higher percentage of “ices”, such as water, ammonia and methane. Some astronomers suggest they belong in their own category, “Uranian planets,” or “ice giants.” All four gas giants have rings, although only Saturn's ring system is easily observed from Earth. The term outer planet should not be confused with superior planet, which designates planets outside Earth's orbit (the outer planets and Mars).
Jupiter (5.2 AU), at 318 Earth masses, has 2.5 times the mass of all the other planets put together. It is composed largely of hydrogen and helium. Jupiter's strong internal heat creates a number of semi-permanent features in its atmosphere, such as cloud bands and the Great Red Spot. Jupiter has sixty-three known satellites. The four largest, Ganymede, Callisto, Io, and Europa, show similarities to the terrestrial planets, such as volcanism and internal heating. Ganymede, the largest satellite in the Solar System, is larger than Mercury.
Saturn (9.5 AU), famous for its extensive ring system, has similarities to Jupiter, such as its atmospheric composition, but it is far less massive, being only 95 Earth masses. Saturn has fifty-six moons: two, Titan and Enceladus, show signs of geological activity, though they are largely made of ice. Titan is larger than Mercury and the only satellite in the solar system with a substantial atmosphere.
Uranus (19.6 AU), at 14 Earth masses, is the lightest of the outer planets. Uniquely among the planets, it orbits the Sun on its side; its axial tilt is over ninety degrees to the ecliptic. It has a much colder core than the other gas giants, and radiates very little heat into space. Uranus has twenty-seven satellites, the largest ones being Titania, Oberon, Umbriel, Ariel and Miranda.
Neptune (30 AU), though slightly smaller than Uranus, is more massive at 17 Earth masses, and therefore denser. It radiates more internal heat, but not as much as Jupiter or Saturn. Neptune has thirteen known moons. The largest, Triton, is geologically active, with geysers of liquid nitrogen. Triton is the only large satellite with a retrograde orbit. Neptune possesses a number of Trojan asteroids.
Comets are small solar system bodies, usually only a few kilometres across, composed largely of volatile ices. They have highly eccentric orbits, generally with a perihelion within the orbits of the inner planets and an aphelion far beyond Pluto. When a comet enters the inner Solar System, its proximity to the Sun causes its icy surface to sublimate and ionise, creating a coma and often a long tail of gas and dust visible to the naked eye.
Short-period comets have orbits lasting less than two hundred years. Long-period comets have orbits lasting thousands of years. Short-period comets, such as Halley's Comet, are believed to originate in the Kuiper belt, while long period comets, such as Hale-Bopp, are believed to originate in the Oort Cloud. Many comet groups, such as the Kreutz Sungrazers, formed from the breakup of a single parent. Some comets with hyperbolic orbits may originate outside the Solar System, but determining their precise orbits is difficult. Old comets that have had most of their volatiles driven out by solar warming are often categorized as asteroids.
The area beyond Neptune, often called the outer solar system or the "trans-Neptunian region", is still largely unexplored. It appears to consist of small bodies (the largest having a diameter only a fifth that of the Earth and a mass far smaller than that of the Moon) composed mainly of rock and ice.
The Kuiper belt is named for Gerard Kuiper, who postulated the existence of such a belt in 1951 while attempting to explain the origin of some comets.
The first Kuiper Belt Objects (KBOs) were discovered by Dave Jewitt (University of Hawaii) and Jane Luu (UC Berkeley) in 1992. Their first find was an object 44 AU from the Sun, outside the orbit of Pluto, now designated 1992 QB1.
There are an estimated 70,000 trans-Neptunian objects with diameters larger than 100 km in the radial zone that extends outward in a ring from the orbit of Neptune at 30 AU to 50 AU, orbiting the Sun near the ecliptic plane of the solar system.
The Kuiper Belt extends from about 4.5 to 7.5 billion km (2.8 billion to 4.6 billion miles), or 30 to 50 AU, from the Sun, with a total mass of only a tenth or even a hundredth the mass of the Earth. Many Kuiper belt objects have multiple satellites and most have orbits that take them outside the plane of the ecliptic.
Pluto and Charon
- Pluto (39 AU average), a dwarf planet, is the largest known object in the Kuiper belt. When discovered in 1930 it was considered to be the ninth planet; this changed in 2006 with the adoption of a formal definition of planet. Pluto has a relatively eccentric orbit inclined 17 degrees to the ecliptic plane and ranging from 29.7 AU from the Sun at perihelion (within the orbit of Neptune) to 49.5 AU at aphelion.
- It is unclear whether Charon, Pluto's largest moon, will continue to be classified as such or as a dwarf planet itself. Both Pluto and Charon orbit a barycenter of gravity above their surfaces, making Pluto-Charon a binary system. Two much smaller moons, Nix and Hydra, orbit Pluto and Charon.
- Pluto lies in the resonant belt, having a 3:2 resonance with Neptune (it orbits twice round the Sun for every three Neptunian orbits). Kuiper belt objects which share this orbit are called Plutinos.
The scattered disc overlaps the Kuiper belt but extends much further outwards. Scattered disc objects are believed to come from the Kuiper belt, having been ejected into erratic orbits by the gravitational influence of Neptune's early outward migration. Most scattered disc objects (SDOs) have perihelia within the Kuiper belt but aphelia as far as 150 AU from the Sun. SDOs' orbits are also highly inclined to the ecliptic plane, and are often almost perpendicular to it. Some astronomers consider the scattered disc to be merely another region of the Kuiper belt, and describe scattered disc objects as "scattered Kuiper belt objects."
- Eris (68 AU average) is the largest known scattered disc object and caused a debate about what constitutes a planet, since it is at least 5% larger than Pluto with an estimated diameter of 2400 km (1500 mi). It is the largest of the known dwarf planets. It has one moon, Dysnomia. Like Pluto, its orbit is highly eccentric, with a perihelion of 38.2 AU (roughly Pluto's distance from the Sun) and an aphelion of 97.6 AU, and steeply inclined to the ecliptic plane.
The Centaurs, which extend from 9 to 30 AU, are icy comet-like bodies that orbit in the region between Jupiter and Neptune. The largest known Centaur, 10199 Chariklo, has a diameter of between 200 and 250 km. The first centaur discovered, 2060 Chiron, has been called a comet since it develops a coma just as comets do when they approach the sun. Some astronomers classify Centaurs as inward scattered Kuiper belt objects along with the outward scattered residents of the scattered disc.
The heliosphere is divided into two separate regions. The solar wind travels at its maximum velocity out to about 75-90 AU, or three times the orbit of Pluto.
The edge of this region is the termination shock, the point at which the solar wind collides with the opposing winds of the interstellar medium and drops below the speed of sound. Here the wind slows, condenses and becomes more turbulent, forming a great oval structure known as the heliosheath that looks and behaves very much like a comet's tail, extending outward for a further 40 AU at its stellar-windward side, but tailing many times that distance in the opposite direction. The outer boundary of the heliosphere, the heliopause, is the point at which the solar wind ions and the galactic ions make contact at about 110 AU and the beginning of interstellar space.
The shape and form of the outer edge of the heliosphere is likely affected by the fluid dynamics of interactions with the interstellar medium, as well as solar magnetic fields prevailing to the south; e.g., it is bluntly shaped, with the northern hemisphere extending 9 AU (roughly 900 million miles) farther than the southern hemisphere. Beyond the heliopause, at around 230 AU, lies the bow shock, a plasma "wake" left by the Sun as it travels through the Milky Way.
No spacecraft have yet passed beyond the heliopause, so it is impossible to know for certain the conditions in local interstellar space. How well the heliosphere shields the Solar System from cosmic rays is poorly understood. A dedicated mission beyond the heliosphere has been suggested.
Inner Oort cloud
90377 Sedna is a large, reddish Pluto-like object with a gigantic, highly elliptical orbit that takes it from about 76 AU at perihelion to 928 AU at aphelion and takes 12,050 years to complete. Mike Brown, who discovered the object in 2003, asserts that it cannot be part of the scattered disc or the Kuiper Belt as its perihelion is too distant to have been affected by Neptune's migration. He and other astronomers consider it to be the first in an entirely new population, one which also may include the objects 2000 CR105, which has a perihelion of 45 AU, an aphelion of 415 AU, and an orbital period of 3420 years, and 2000 OO67, which has a perihelion of 21 AU, an aphelion of over 1000 AU, and an orbital period of 12,705 years. Brown terms this population the "Inner Oort cloud," as it may have formed through a similar process, although it is far closer to the Sun. Sedna is very likely a dwarf planet, though its shape has yet to be determined with certainty.
The hypothetical Oort cloud is a great mass of up to a trillion icy objects that is believed to be the source for all long-period comets and to surround the Solar System at around 50,000 AU, and possibly to as far as 100,000 AU. It is believed to be composed of comets which were ejected from the inner Solar System by gravitational interactions with the outer planets. Oort cloud objects move very slowly, and can be perturbed by infrequent events such as collisions, the gravitational effects of a passing star, or the galactic tide.
The vast majority of our Solar System is still unknown. Its extent is determined by that of the Sun's gravitational field, which is currently estimated to concede to the gravitational forces of surrounding stars at roughly two light years (125,000 AU) distant. Some astronomers contend that the outer extent of the Oort cloud, by contrast, may not extend farther than 50,000 AU. Despite discoveries such as Sedna, the region between the Kuiper belt and the Oort cloud, an area tens of thousands of AU in radius, is still virtually unmapped. There are also ongoing studies of the region between Mercury and the Sun. Objects may yet be discovered in the Solar System's uncharted regions.
The Solar System is located in the Milky Way galaxy, a barred spiral galaxy with a diameter of about 100,000 light years containing about 200 billion stars. Our Sun resides in one of the Milky Way's outer spiral arms, known as the Orion Arm or Local Spur. While the orbital speed and radius of the galaxy are not accurately known, estimates place the solar system at between 25,000 and 28,000 light years from the galactic center and its speed at about 220 kilometres per second, completing one revolution every 225-250 million years. This revolution is known as the Solar System's galactic year.
The Solar System's orbit appears unusual. It is both extremely close to being circular, and at nearly the exact distance at which the orbital speed matches the speed of the compression waves that form the spiral arms. Evidence suggests that the Solar System has remained between spiral arms for most of the existence of life on Earth. The radiation from supernovae in spiral arms could theoretically sterilize planetary surfaces, preventing the formation of complex life. The Solar System also lies well outside the star-crowded environs of the galactic center. There, gravitational tugs from nearby stars could perturb bodies in the Oort Cloud and send many comets into the inner Solar System, producing collisions with potentially catastrophic implications for life on Earth. Even at the Solar System's current location, some scientists have hypothesised that recent supernovae may have adversely affected life in the last 35,000 years, by flinging pieces of expelled stellar core towards the Sun in the form of radioactive dust grains and larger, comet-like bodies.
The Solar apex, the direction of the Sun's path through interstellar space, is near the constellation of Hercules in the direction of the current location of the bright star Vega. At the galactic location of the solar system, the escape velocity with regard to the gravity of the Milky Way is at least 500 km/s.
The immediate galactic neighborhood of the Solar System is known as the Local Interstellar Cloud or Local Fluff, an area of dense cloud in an otherwise sparse region known as the Local Bubble, an hourglass-shaped cavity in the interstellar medium roughly 300 light-years across. The bubble is suffused with high-temperature plasma that suggests it is the product of several recent supernovae.
There are relatively few stars within ten light years (95 trillion km) of the Sun. The closest is the triple star system Alpha Centauri, which is about 4.4 light years away. Alpha Centauri A and B are a closely tied pair of Sun-like stars, while the small red dwarf Alpha Centauri C (also known as Proxima Centauri) orbits the pair at a distance of 0.2 light years. The stars next closest to the Sun are the red dwarfs Barnard's Star (at 6 light years), Wolf 359 (7.8 light years) and Lalande 21185 (8.3 light years). The largest star within ten light years is Sirius, a bright main-sequence star roughly twice the Sun's mass and orbited by a white dwarf called Sirius B. It lies 8.6 light years away. The remaining systems within ten light years are the binary red dwarf system UV Ceti (8.7 light years) and the solitary red dwarf Ross 154 (9.7 light years). Our closest solitary sunlike star is Tau Ceti, which lies 11.9 light years away. It has roughly 80 percent the Sun's mass, but only 60 percent its luminosity.
Discovery and exploration
For most of human history, people, with a few notable exceptions, did not believe the Solar System existed. The Earth was believed not only to be stationary at the centre of the universe, but to be categorically different from the divine or ethereal objects that moved through the sky. While Nicholas Copernicus and his predecessors, such as the Indian mathematician-astronomer Aryabhatta and the Greek philosopher Aristarchus of Samos, had speculated on a heliocentric reordering of the cosmos, it was the conceptual advances of the 17th century, led by Galileo Galilei, Johannes Kepler, and Isaac Newton, which led gradually to the acceptance of the idea not only that Earth moved round the Sun, but that the planets were governed by the same physical laws that governed the Earth, and therefore could be material worlds in their own right, with such earthly phenomena as craters, weather, geology, seasons and ice caps.
- Telescopic observations
The first exploration of the solar system was conducted by telescope, when astronomers first began to map those objects too faint to be seen with the naked eye.
Galileo Galilei was the first to discover physical details about the individual bodies of the Solar System. He discovered that the Moon was cratered, that the Sun was marked with sunspots, and that Jupiter had four satellites in orbit around it. Christiaan Huygens followed on from Galileo's discoveries by discovering Saturn's moon Titan and the shape of the rings of Saturn. Giovanni Domenico Cassini later discovered four more moons of Saturn, the Cassini division in Saturn's rings, and the Great Red Spot of Jupiter.
Edmund Halley realised in 1705 that repeated sightings of a comet were in fact recording the same object, returning regularly once every 75 to 76 years. This was the first evidence that anything other than the planets orbited the Sun.
New planetary discoveries
In 1781, William Herschel was looking for binary stars in the constellation of Taurus when he observed what he thought was a new comet. In fact, its orbit revealed that it was a new planet, Uranus, the first ever discovered.
Giuseppe Piazzi discovered Ceres in 1801, a small world between Mars and Jupiter that was initially considered a new planet. However, subsequent discoveries of thousands of other small worlds in the same region led to their eventual separate reclassification: asteroids.
By 1846, discrepancies in the orbit of Uranus led many to suspect a large planet must be tugging at it from farther out. Urbain Le Verrier's calculations eventually led to the discovery of Neptune. The excess perihelion precession of Mercury's orbit led Le Verrier to postulate the intra-Mercurian planet Vulcan in 1859, but that would turn out to be a red herring.
Further apparent discrepancies in the orbits of the outer planets led Percival Lowell to conclude that yet another planet, "Planet X", must still be out there. After his death, his Lowell Observatory conducted a search, which ultimately led to Clyde Tombaugh's discovery of Pluto in 1930. Pluto was, however, found to be too small to have disrupted the orbits of the outer planets, and its discovery was therefore coincidental. Like Ceres, it was initially considered to be a planet, but after the discovery of many other similarly sized objects in its vicinity it was reclassified in 2006 as one of three then-designated dwarf planets.
In 1992, astronomers David Jewitt of the University of Hawaii and Jane Luu of the Massachusetts Institute of Technology discovered 1992 QB1. This object proved to be the first of a new population, which came to be known as the Kuiper Belt, an icy analogue to the asteroid belt, of which such objects as Pluto and its large moon Charon were deemed a part.
Mike Brown, Chad Trujillo and David Rabinowitz announced the discovery of Eris in 2005, a scattered disc object more massive than Pluto and the largest object discovered in orbit around the Sun since Neptune.
Observations by spacecraft
All planets in the solar system have now been visited to varying degrees by spacecraft launched from Earth. Through these unmanned missions, humans have been able to get close-up photographs of all of the planets and, in the case of landers, perform tests of the soils and atmospheres of some.
The first successful probe to fly by another solar system body was Luna 1 (part of Project Luna), which sped past the Moon in 1959. Mariner 2 was the first probe to fly by another planet, Venus, in 1962. The first successful flyby of Mars was made by Mariner 4 in 1965, and Mercury was first encountered by Mariner 10 in 1974.
The first probe to explore the outer planets was Pioneer 10, which flew by Jupiter in 1973. Pioneer 11 was the first to visit Saturn, in 1979. The Voyager probes performed a grand tour of the outer planets following their launch in 1977, with both probes passing Jupiter in 1979 and Saturn in 1980 and 1981. Voyager 2 then went on to make close approaches to Uranus in 1986 and Neptune in 1989. The Voyager probes are now far beyond Neptune's orbit, and are on course to study the heliosheath and, eventually, the heliopause. According to NASA, both Voyager probes have encountered the termination shock, Voyager 1 at approximately 94 AU from the Sun and Voyager 2 at approximately 84 AU.
No Kuiper belt object has yet been visited by a spacecraft. Launched on January 19, 2006, the New Horizons probe is currently en route to becoming the first man-made spacecraft to explore this area. This unmanned mission is scheduled to fly by Pluto in July 2015. Should it prove feasible, the mission will then be extended to observe a number of other Kuiper belt objects.
In 1966, the Moon became the first solar system body beyond Earth to be orbited by an artificial satellite (Luna 10), followed by Mars in 1971 (Mariner 9), Venus in 1975 (Venera 9), Jupiter in 1995 (Galileo, which also made the first asteroid flyby, 951 Gaspra, in 1991), the asteroid 433 Eros in 2000, and Saturn in 2004 (Cassini–Huygens). The MESSENGER probe is currently en route to commence the first orbit of Mercury in 2011, while the Dawn spacecraft is currently set to orbit the asteroid Vesta in 2011 and the dwarf planet Ceres in 2015.
The first probe to reach the surface of another solar system body was the Soviet Union's Luna 2 probe, which impacted the Moon in 1959. Since then, increasingly distant bodies have been reached, with probes landing on or impacting the surfaces of Venus in 1966 (Venera 3), Mars in 1971 (Mars 3, although a fully successful landing didn't occur until Viking 1 in 1976), the asteroid 433 Eros in 2001 (NEAR Shoemaker), and Saturn's moon Titan in 2005 (Huygens). The Galileo orbiter also dropped a probe into Jupiter's atmosphere in 1995; since Jupiter has no solid surface, the probe was designed to burn up as it descended and does not count as a landing.
- ^Capitalization of the name varies. The IAU, the authoritative body regarding astronomical nomenclature, specifies capitalizing the names of all individual astronomical objects (Solar System). However, the name is commonly rendered in lower case (solar system) including in the Oxford English Dictionary, Merriam-Webster's 11th Collegiate Dictionary, and Encyclopædia Britannica.
- ^The mass of the Solar System excluding the Sun, Jupiter and Saturn can be determined by adding together all the calculated masses for its largest objects and using rough calculations for the mass of the Kuiper Belt (estimated at roughly 0.1 Earth mass) and the asteroid belt (estimated to be 0.0005 Earth mass)
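As an illustration only, here is a minimal sketch of the tally described in the note, in Earth masses. The individual planetary values are approximate textbook figures, the belt values are the estimates quoted in the note itself, and moons and dwarf planets are omitted for brevity; none of the exact numbers below come from this document.

```python
# Rough version of the footnote's calculation, in Earth masses.
masses_earth = {
    "Mercury": 0.055, "Venus": 0.815, "Earth": 1.0, "Mars": 0.107,
    "Uranus": 14.5, "Neptune": 17.1,
    "Kuiper belt": 0.1,        # estimate quoted in the note
    "asteroid belt": 0.0005,   # estimate quoted in the note
}
total = sum(masses_earth.values())
print(round(total, 1))  # roughly 33.7 Earth masses, excluding the Sun, Jupiter and Saturn
```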
- List of solar system objects: By orbit—By mass—By radius—By name
- Attributes of the largest solar system bodies
- Astronomical symbols
- Geological features of the solar system
- Numerical model of solar system
- Table of planetary attributes
- Timeline of discovery of solar system planets and their natural satellites
- Solar system model
- Space colonization
- Solar System in fiction
- Celestia - Space-simulation on your computer (OpenGL)
- Family Portrait (Voyager)
To construct a figure similar to the figure P, and equivalent to the figure Q.
Find M, the side of a square equivalent to the figure P, and N, the side of a square equivalent to the figure Q. Let X be a fourth proportional to the three given lines, M, N, AB; upon the side X, homologous to AB,
describe a figure similar to the figure P; it will also be equivalent to the figure Q.
For, calling Y the figure described upon the side X, we have P : Y :: AB² : X²; but by construction, AB : X :: M : N, or AB² : X² :: M² : N²; hence P : Y :: M² : N². But by construction also, M²=P and N²=Q; therefore P : Y :: P : Q; consequently Y=Q; hence the figure Y is similar to the figure P, and equivalent to the figure Q.
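As a quick numerical check of this construction (not part of the original text), the short Python sketch below uses hypothetical areas for P and Q and a hypothetical side AB; it computes the fourth proportional X and confirms that the similar figure described upon X has the area of Q.

```python
import math

# Hypothetical inputs: area of figure P, area of figure Q, and a side AB of P.
P_area, Q_area, AB = 12.0, 27.0, 4.0

M = math.sqrt(P_area)   # side of a square equivalent to P
N = math.sqrt(Q_area)   # side of a square equivalent to Q
X = N * AB / M          # fourth proportional to M, N, AB (M : N :: AB : X)

# Similar figures are as the squares of their homologous sides,
# so the figure described upon X has area P * (X / AB)^2.
Y_area = P_area * (X / AB) ** 2
print(X, Y_area)        # Y_area comes out equal to Q_area (27.0), as proved above
```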
To construct a rectangle equivalent to a given square, and having the sum of its adjacent sides equal to a given line.
Let C be the square, and AB equal to the sum of the sides of the required rectangle.
Upon AB as a diameter, describe a semicircle; draw the line DE parallel to the diameter, at a distance AD from it, equal to the side of the given square C; from the point E, where the parallel cuts the circumference, draw EF perpendicular to the diameter; AF and FB will be the sides of the rectangle required.
For their sum is equal to AB; and their rectangle AF.FB is equivalent to the square of EF, or to the square of AD; hence that rectangle is equivalent to the given square C.
Scholium. To render the problem possible, the distance AD must not exceed the radius; that is, the side of the square C must not exceed the half of the line AB.
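Algebraically, this construction finds two segments whose sum is AB and whose rectangle equals the given square; the sketch below (with hypothetical lengths of my own choosing) solves the equivalent quadratic and checks the Scholium's condition of possibility.

```python
import math

# Hypothetical inputs: AB is the required sum of the sides, c the side of the square C.
AB, c = 10.0, 4.0

# AF and FB satisfy AF + FB = AB and AF * FB = c^2, i.e. they are the roots
# of t^2 - AB*t + c^2 = 0. Real roots require c <= AB / 2 (the Scholium).
disc = AB ** 2 - 4 * c ** 2
if disc < 0:
    raise ValueError("impossible: the side of square C exceeds half of AB")

AF = (AB - math.sqrt(disc)) / 2
FB = (AB + math.sqrt(disc)) / 2
print(AF, FB, AF + FB, AF * FB)  # sum is 10.0, product is 16.0 = c^2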
To construct a rectangle that shall be equivalent to a given square, and the difference of whose adjacent sides shall be equal to a given line.
Suppose C equal to the given square, and AB the difference of the sides.
Upon the given line AB as a diameter, describe a semicircle: at the extremity of the diameter draw the tangent AD, equal to the side of the square C; through the point D and the centre O draw the secant DF; then will DE and DF be the adjacent sides of the rectangle required.
For, first, the difference of these sides is equal to the diameter EF or AB; secondly, the rectangle DE.DF is equal to AD² (Prop. XXX.): hence that rectangle is equivalent to the given square C.
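In the same spirit as the preceding problem, the sketch below (again with hypothetical lengths) solves the quadratic that this construction embodies and verifies both conditions.

```python
import math

# Hypothetical inputs: AB is the required difference of the sides, c the side of the square C.
AB, c = 6.0, 4.0

# DE and DF satisfy DF - DE = AB and DE * DF = c^2 (the tangent-secant
# relation DE * DF = AD^2), so DE is the positive root of t^2 + AB*t - c^2 = 0.
DE = (-AB + math.sqrt(AB ** 2 + 4 * c ** 2)) / 2
DF = DE + AB
print(DE, DF, DF - DE, DE * DF)  # difference is 6.0, product is 16.0 = c^2
```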
To find the common measure, if there is one, between the diagonal and the side of a square.
Let ABCG be any square whatever, and AC its diagonal.
We must first apply CB upon CA, as often as it may be contained there. For this purpose, let the semicircle DBE be described, from the centre C, with the radius CB. It is evident that CB is contained once in AC, with the remainder AD; the result of the first operation is therefore the quotient 1, with the remainder AD, which latter must now be compared with BC, or its equal AB.
We might here take AF=AD, and actually apply it upon AB; we should find it to be contained twice with a remainder: but as that remainder, and those which succeed it, continue diminishing, and would soon elude our comparisons by their minuteness, this would be but an imperfect mechanical method, from which no conclusion could be obtained to determine whether the lines AC, CB, have or have not a common measure. There is a very simple way, however, of avoiding these decreasing lines, and obtaining the result, by operating only upon lines which remain always of the same magnitude.
The angle ABC being a right angle, AB is a tangent, and AE a secant drawn from the same point; so that AD : AB :: AB : AE (Prop. XXX.). Hence in the second operation, when AD is compared with AB, the ratio of AB to AE may be taken instead of that of AD to AB; now AB, or its equal CD, is contained twice in AE, with the remainder AD; the result of the second operation is therefore the quotient 2 with the remainder AD, which must be compared with AB.
Thus the third operation again consists in comparing AD with AB, and may be reduced in the same manner to the comparison of AB or its equal CD with AE; from which there will again be obtained 2 for the quotient, and AD for the remainder.
Hence, it is evident that the process will never terminate; and therefore there is no common measure between the diagonal and the side of a square: a truth which was already known by arithmetic, since these two lines are to each other as √2 : 1 (Prop. XI. Cor. 4.), but which acquires a greater degree of clearness by the geometrical investigation.
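The argument above is, in modern terms, the Euclidean algorithm applied to the diagonal and side of a square. The sketch below (not part of the original text) runs it numerically on √2 and 1 and shows the quotients 1, 2, 2, 2, ... continuing without end, which is why no common measure exists.

```python
from math import sqrt

# Euclid's algorithm on the diagonal and side of a unit square. Floating point
# only approximates the lengths, so the loop is stopped after a few steps.
a, b = sqrt(2.0), 1.0       # diagonal AC and side CB
quotients = []
for _ in range(8):
    q = int(a // b)         # how often b is contained in a
    r = a - q * b           # the remainder (AD at the first step)
    quotients.append(q)
    a, b = b, r
print(quotients)            # [1, 2, 2, 2, 2, 2, 2, 2]
```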
REGULAR POLYGONS, AND THE MEASUREMENT OF THE CIRCLE.
A POLYGON, which is at once equilateral and equiangular, is called a regular polygon.
Regular polygons may have any number of sides: the equilateral triangle is one of three sides; the square is one of four.
PROPOSITION I. THEOREM.
Two regular polygons of the same number of sides are similar figures.
Suppose, for example, that ABCDEF, abcdef, are two regular hexagons. The sum of all the angles is the same in both figures, being in each equal to eight right angles (Book I. Prop. XXVI. Cor. 3.). The angle A is the sixth part of that sum; so is the angle a: hence the angles A and a are equal; and for the same reason, the angles B and b, the angles C and c, &c. are equal.
Again, since the polygons are regular, the sides AB, BC, CD, &c. are equal, and likewise the sides ab, bc, cd, &c. (Def.); it is plain that AB : ab :: BC : bc :: CD : cd, &c.; hence the two figures in question have their angles equal, and their homologous sides proportional; consequently they are similar (Book IV. Def. 1.).
Cor. The perimeters of two regular polygons of the same number of sides, are to each other as their homologous sides, and their surfaces are to each other as the squares of those sides (Book IV. Prop. XXVII.).
Scholium. The angle of a regular polygon, like the angle of an equiangular polygon, is determined by the number of its sides (Book I. Prop. XXVI.).
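In modern notation, the Scholium and the Corollary can be checked directly; the sketch below (an illustration of my own, with hypothetical homologous sides) computes the interior angle of a regular n-gon and the ratios of perimeters and surfaces.

```python
# The Scholium: the angle of a regular polygon depends only on the number of
# sides n; the sum of the interior angles is (2n - 4) right angles, so each
# angle is that sum divided by n.
def interior_angle(n: int) -> float:
    return (n - 2) * 180.0 / n

for n in (3, 4, 6, 8):
    print(n, interior_angle(n))       # 60.0, 90.0, 120.0, 135.0

# The Corollary: for two regular n-gons with homologous sides s1 and s2,
# the perimeters are as s1 : s2 and the surfaces as s1^2 : s2^2.
s1, s2 = 2.0, 5.0                     # hypothetical homologous sides
print(s2 / s1, (s2 / s1) ** 2)        # 2.5 (perimeters), 6.25 (surfaces)
```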
PROPOSITION II. THEOREM.
Any regular polygon may be inscribed in a circle, and circumscribed about one.
Let ABCDE, &c. be a regular polygon: describe a circle through the three points A, B, C, the centre being O, and OP the perpendicular let fall from it to the middle point of BC; draw AO and OD.
If the quadrilateral OPCD be placed upon the quadrilateral OPBA, they will coincide; for the side OP is common; the angle OPC=OPB, each being a right angle; hence the side PC will apply to its equal PB, and the point C will fall on B: besides, from the nature of the polygon, the angle PCD=PBA; hence CD will take the direction BA; and since CD=BA, the point D will fall on A, and the two quadrilaterals will entirely coincide. The distance OD is therefore equal to AO; and consequently the circle which passes through the three points A, B, C, will also pass through the point D. By the same mode of reasoning, it might be shown, that the circle which passes through the three points B, C, D, will also pass through the point E; and so of all the rest: hence the circle which passes through the points A, B, C, passes also through the vertices of all the angles in the polygon, which is therefore inscribed in this circle.
Again, in reference to this circle, all the sides AB, BC, CD, &c. are equal chords; they are therefore equally distant from the centre (Book III. Prop. VIII.): hence, if from the point O with the distance OP, a circle be described, it will touch the side BC, and all the other sides of the polygon, each in its middle point, and the circle will be inscribed in the polygon, or the polygon described about the circle.
Scholium 1. The point O, the common centre of the inscribed and circumscribed circles, may also be regarded as the centre of the polygon; and upon this principle the angle AOB is called the angle at the centre, being formed by two radii drawn to the extremities of the same side AB.
Since all the chords AB, BC, CD, &c. are equal, all the angles at the centre must evidently be equal likewise; and therefore the value of each will be found by dividing four right angles by the number of sides of the polygon. |
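As a short numerical companion to this proposition: for a regular n-gon of hypothetical side s, the sketch below computes the angle at the centre, the circumradius OA, and the apothem OP. The trigonometric formulas are a modern restatement, not part of the original text.

```python
import math

# For a regular n-gon of side s: the angle at the centre is four right angles
# divided by n, the circumradius OA is s / (2 sin(pi/n)), and the apothem OP
# (radius of the inscribed circle) is s / (2 tan(pi/n)).
def circles_of_regular_polygon(n: int, s: float):
    central_angle = 360.0 / n
    R = s / (2 * math.sin(math.pi / n))   # circumradius OA
    r = s / (2 * math.tan(math.pi / n))   # apothem OP
    return central_angle, R, r

print(circles_of_regular_polygon(6, 1.0))  # hexagon: (60.0, 1.0, ~0.866)
```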