Dracula may not have been able to tolerate sunlight, but some nocturnal bat species may actually rely on the sun for navigation. A study this week in PNAS shows that the greater mouse-eared bat (Myotis myotis) calibrates its internal geomagnetic compass using cues from the setting sun.
At sunset, the researchers exposed bats to a magnetic field rotated 90° east from its normal orientation. The bats, which were displaced from their home range and released in a completely new setting, flew in a direction 77.6° counterclockwise from the actual direction of their roost. Control bats, which had been exposed to a normal geomagnetic field, flew directly toward home.
In the second phase of the study, another experimental group of bats was exposed to a similarly-skewed geomagnetic field, but only after the glow from the setting sun had disappeared. In this treatment, both experimental and control bats flew directly toward their roosts. Taken together, these two experiments show that the timing of the calibration is vital, and the bats must be taking some cue from the setting sun.
Greater mouse-eared bats are nocturnal and generally emerge from their roosts after the sun sinks below the horizon, but before the residual light in the sky disappears. Their nocturnal strategy makes it rather surprising that the sun—rather than the stars, as previously thought—calibrates their navigational systems. The next step in the scientists' research plan: to figure out what aspect of the sunset the bats are using as a cue.
Objective: To determine reaction time and to assess the accuracy of the measured time.
Equipment: Timer, pencil or pen, meter stick, coffee filter, and metal ball
Discussion: The significance of this experiment is to help us measure and understand our reaction time. Reaction time is the amount of time between something happening and your response to it. We also performed free-fall experiments with a ruler, a coffee filter, and a steel ball. In free fall, an object is dropped from rest, with nothing attached to it, so that it moves only under the influence of gravity. Since there is a short delay in starting and stopping the timer after the object begins to drop and after it hits the floor, we used the measurements
There is no real solution for this except to practice trying to improve our reaction time. Another error could come from dropping the ball from a different height each time: the higher the drop, the longer the ball takes to hit the floor, and the lower the drop, the shorter the time. We could try to make sure we drop the ball from the same location each time. Then there is air friction, which keeps the objects from falling in a straight line and can cause them to take longer to come to rest. You can't get rid of air friction, but you can try to minimize it. The ball's rotational inertia could also play a small role in the error percentage. The time accuracy for the coffee filter and the metal ball was good because the average deviation was fairly low. The coffee filter had an average deviation of ±0.463 s, which is acceptable because about 60% of the drops fell within the first deviation. The metal ball had an average deviation of ±0.076 s, and about 70% of the values fell within the first deviation. So I would take the times to be in the ballpark they should be for the falling objects. There was a 3% error due to all the factors that might have played a part in the falling speed.
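Average deviation figures like those quoted above can be reproduced with a few lines of code. A minimal sketch; the drop times below are hypothetical, not the lab's actual data:

```python
# Hypothetical drop times (seconds) for the metal ball; the actual
# lab measurements are not reproduced here.
times = [0.52, 0.61, 0.48, 0.55, 0.70, 0.50, 0.58, 0.62, 0.45, 0.59]

mean = sum(times) / len(times)

# Average deviation: the mean of the absolute differences from the mean.
avg_dev = sum(abs(t - mean) for t in times) / len(times)

# Fraction of drops falling within one average deviation of the mean.
within = sum(1 for t in times if abs(t - mean) <= avg_dev) / len(times)

print(f"mean = {mean:.3f} s, average deviation = ±{avg_dev:.3f} s")
print(f"{within:.0%} of drops fall within one deviation")
```

Comparing the fraction inside one deviation against the roughly 60-70% seen in the lab is a quick sanity check on the spread of the data.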
Narration is a verbal or written description of a series of events. It is structured like storytelling, but often includes factual and other information. Narration is widely used in media to support visual information and to provide more detail about a subject. Narrators are usually trained speakers working from scripts tailored to an interested audience.
Examples of Narration:
Being able to store terabytes of data on a hard drive is pretty easy nowadays: all you need to do is hook up a hard drive to your computer and you can start storing content immediately. But keeping that data intact for longer than several years can be quite a challenge, especially if you regularly use the hard drive. It looks like that will be a thing of the past, as a team of researchers has built a hard drive that can reportedly store data for over one million years.
The researchers hail from the University of Twente in the Netherlands and first looked into the theory of how information ages before making their discovery. What they found is that data needs to be stored in a way where it is distinct from other pieces of data. The researchers then put their finding to use by building a disk that would do just that: a thin tungsten disk etched with a series of fine lines and coated in a protective layer of silicon nitride.
If what the researchers built is actually usable by the common everyday person, we'd certainly be interested to see what they think civilizations millions of years in the future should know about us. If I had to make one suggestion, it'd be the invention of the Shake Weight.
Tide, the rhythmic rising and falling of the surface of the oceans, seas, and other bodies of water. (Similar rhythmic movements, also called tides, occur in the earth's crust and atmosphere, but these movements can be detected only with sensitive scientific instruments.)
When the water flows in toward the land, it is at flood tide; the highest level it reaches is high tide. The water recedes during ebb tide; the lowest level it reaches is called low tide. The part of the shore between the high-tide line and the low-tide line is the intertidal zone. Here live plants and animals adapted to survive both on land and underwater. Barnacles, kelp, and some forms of crabs and clams live in the intertidal zone.
In most areas, there are two high tides and two low tides in approximately a day. Tidal movement occurs over the entire area of any large body of water. It is noticeable, however, only where the water's rise and fall can be measured against the land.
Tides have been carefully observed because of their importance in navigation. Most of the world's harbors are affected by noticeable tides. The depth of submerged hazards, the amount of water in channels, and the direction of the current, which all depend at least in part on the tide, affect the safety of ships passing into and out of the harbors.
Tides vary greatly from place to place around the world. Tide predictions for a specific area must therefore be based on an extensive series of observations and measurements in that area. Even so, tide predictions are not completely accurate because variable factors, such as weather conditions, have a measurable effect on the tides. Tidal information is published in the newspapers of coastal cities, in almanacs, and in tide tables used aboard ships.
The rising and falling water of the tides possesses great kinetic energy (the energy of motion). Part of this energy is expended against the shores. The erosion of shorelines is due in part to the tides. In a few countries, including France and Canada, the tides have been used to generate electricity.
In some rivers the tide produces a tidal bore. Tidal waves are not in any way related to tides. These destructive waves are caused by undersea earthquakes and volcanic eruptions or by storms at sea.
Urine formation takes place in the nephrons of the kidneys and proceeds in three main steps: glomerular filtration, tubular reabsorption, and tubular secretion.

Glomerular filtration is the first step. Blood is filtered in the renal corpuscle (the glomerulus and Bowman's capsule): the glomerular capillary walls are thin, and blood pressure is higher inside the capillaries than in Bowman's capsule, so water and small solutes are forced out of the plasma under pressure. This filtration under pressure is called ultrafiltration.

Tubular reabsorption follows. As the filtrate passes along the renal tubule, water and useful solutes such as glucose, amino acids, and ions are reclaimed from the filtrate and returned to the blood.

Tubular secretion is the final step. Substances such as H+ ions are secreted from the blood into the tubular fluid, which helps regulate blood pH and eliminates additional wastes.

The fluid that remains trickles through the collecting ducts into the kidney pelvis as urine. Urine may be a waste product, but it is a carefully created one: in a healthy human the kidneys receive between 12 and 30% of cardiac output, and through these three stages they convert blood plasma into urine.
When we make the effort to eat a balanced diet, it’s easy to assume we’ll get all the nutrition we need from our meals. But, once food is eaten, there’s potential for nutrient loss to occur during the different stages of digestion.
Much of our nutrition depends on how well foods are broken down and absorbed into our bodies. A majority of this absorption happens in the small and large intestines. It’s here where things get complex and trouble can brew, resulting in nutrient loss.
We depend on the microbes in our intestines to produce digestive enzymes that help break down food. These enzymes target a variety of carbohydrates (simple and complex), some fibers (soluble and digestion-resistant oligosaccharides), and various fats and proteins.
Many of us lack the right digestive enzymes to get the most nutrition out of our food. Deficiencies can occur due to age and other factors. Stress and prescription drugs, for instance, can affect the release of digestive enzymes. Additionally, a general lack of microbial diversity can impact our ability to digest certain types of foods, which can lead to stomach upset.
Loss of Microbial Diversity
A Western-style diet composed largely of processed foods high in saturated fats and refined sugars could be to blame for poor microbial diversity. Most recently, a study found that immigrants moving from a non-Western nation to the United States experienced a loss of gut microbiome diversity. Among other things, these study participants lost bacteria that produce enzymes known for helping digest plant-based foods high in fiber (1). This includes a variety of high-fiber fruits and vegetables.
Without these bacterial enzymes for digestive support, these individuals are at increased risk for digestive concerns like uncomfortable gas and bloating after consuming higher-fiber foods. A depleted gut microbiome may also prevent the absorption of nutrients such as vitamins and minerals due to anti-nutritive components like phytates and hemicelluloses in plant foods.
The disappearance of these microbes might also predispose the immigrants who were part of this study to the same types of chronic health problems linked to Western-style diets (1). Topping the list is the risk of weight gain that can lead to excess visceral fat and central obesity.
The Role of Digestive Enzymes
Digestive enzymes may help counter the effects of a Western-style diet, but they should not be considered the sole method of attaining long-term digestive and overall health. Including a variety of fiber-rich, healthy foods in our diets is a key step toward promoting diverse gut microbiota.
Taking digestive enzymes daily with meals can assist in restoring digestive strength and protecting against potential nutrient loss. Some digestive enzymes such as hemicellulase, beta-glucanase, and phytase can help break down hard-to-handle antinutrient compounds that are present in some plant foods. Healthy digestion paired with added support from digestive enzymes can increase availability of nutrients such as folic acid, vitamin C, magnesium, and calcium.
Additional digestive enzymes, such as lipases, work to break down fats and are helpful for supporting absorption of fat-soluble nutrients, including lutein and lycopene (2). Proteases that break down proteins in foods can boost absorption of vitamins like B12 and minerals like iron. Many protein digestion issues have to do with the loss of digestive enzymes and microbial diversity that often occurs with age (3,4).
These examples show how digestive enzymes can improve nutrient availability from foods, but enzymes may also increase the likelihood of eating a diet with a variety of healthful foods. It shouldn’t be surprising that individuals are more likely to avoid fiber-rich foods if these foods cause them to experience uncomfortable gas and bloating. If anything, digestive enzymes can help to make eating the healthy foods you love more enjoyable.
- Vangay P, Johnson AJ, Ward TL, Al-Ghalith GA, Shields-Cutler RR, Hillmann BM, Lucas SK, Beura LK, Thompson EA, Till LM, et al. US Immigration Westernizes the Human Gut Microbiome. Cell [Internet]. Elsevier Inc.; 2018;175:962–972.e10. Available from: https://doi.org/10.1016/j.cell.2018.10.029
- Kopec RE, Gleize B, Borel P, Desmarchelier C, Caris-Veyrat C. Are lutein, lycopene, and β-carotene lost through the digestive process? Food Funct. 2017;
- O’Toole PW, Jeffery IB. Gut microbiota and aging. Science. 2015.
- Saltzman JR, Russell RM. The Aging Gut: Nutritional issues. Gastroenterology Clinics of North America. 1998.
The so-called “Black Marble” images display all the human and natural matter that glows and can be sensed from space. What appear most prominently are city lights.
“Nothing tells us more about the spread of humans across the Earth than city lights,” said NOAA’s Chris Elvidge.
This enlightened imagery is made possible through the “day-night” band of the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument on the Suomi NPP satellite.
“Unlike a camera that captures a picture in one exposure, the day-night band produces an image by repeatedly scanning a scene and resolving it as millions of individual picture elements or pixels,” NOAA writes.
This nighttime imagery has multiple uses. NASA explains:
Social scientists and demographers have used night lights to model the spatial distribution of economic activity, of constructed surfaces, and of populations. Planners and environmental groups have used maps of lights to select sites for astronomical observatories and to monitor human development around parks and wildlife refuges. Electric power companies, emergency managers, and news media turn to night lights to observe blackouts.
See this Black Marble animation:
In addition to the spherical “Black Marble” imagery, NOAA and NASA have also released flat maps of city lights, including the global and U.S. views shown below.
The Moon via Earth observing satellite
At 4:15 p.m. yesterday, the GOES-East weather observing satellite - which usually just displays weather systems on Earth in its imagery - captured a rare appearance of the moon in the same view as the clouds.
Capital Weather Gang’s tropical weather expert Brian McNoldy, who noticed this so-called “lunar photobomb”, posted images on Facebook.
“A fun tidbit: the clouds and the surface of the earth are about 22,200 miles away from the “camera”, while the moon at this time is about 12x further away (around 268,000 miles),” McNoldy wrote.
Sighting the moon in weather imagery can only happen under certain circumstances. The University of Wisconsin’s CIMSS Satellite blog explains:
As it turns out, the Moon can actually be seen on GOES images a handful of times every year, depending on the viewing angle of the satellite in relation to the position of the Moon.
Related: Halloween Moon on GOES-13 imagery (CIMSS)
Definition of Firewall: A firewall is a network security system that monitors and controls all your incoming and outgoing network traffic based on an advanced, defined set of security rules.
Broadly speaking, a computer firewall is a software program that prevents unauthorized access to or from a private network. Firewalls are tools that can be used to enhance the security of computers connected to a network, such as a LAN or the Internet. They are an integral part of a comprehensive security framework for your network.
A firewall absolutely isolates your computer from the Internet using a "wall of code" that inspects each individual "packet" of data as it arrives at either side of the firewall — inbound to or outbound from your computer — to determine whether it should be allowed to pass or be blocked.
Firewalls have the ability to further enhance security by enabling granular control over what types of system functions and processes have access to networking resources. These firewalls can use various types of signatures and host conditions to allow or deny traffic. Although they sound complex, firewalls are relatively easy to install, set up, and operate.
Most people think of a firewall as a device installed on the network that controls the traffic passing through the network segment.
However, you can also have host-based firewalls, which run on the systems themselves, such as ICF (Internet Connection Firewall). The work of both kinds of firewall is the same: to stop intrusion and provide a strong method of access control policy. Put simply, firewalls are systems that safeguard your computer; they are access control policy enforcement points.
What Do Firewalls Do?
Basically, firewalls need to be able to perform the following tasks:
- Defend resources
- Validate access
- Manage and control network traffic
- Record and report on events
- Act as an intermediary
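The traffic-management task above can be illustrated with a toy rule matcher. This is a sketch only: the rule fields, names, and rule set are invented for the example, and real firewalls inspect raw packets in the kernel or on dedicated hardware rather than Python dictionaries.

```python
# A toy illustration of rule-based packet filtering, not a real firewall.
# Rules are checked in order; the first match wins.

RULES = [
    {"action": "allow", "dst_port": 443, "protocol": "tcp"},   # HTTPS
    {"action": "allow", "dst_port": 53,  "protocol": "udp"},   # DNS
    {"action": "deny",  "dst_port": 23,  "protocol": "tcp"},   # Telnet
]
DEFAULT_ACTION = "deny"  # deny anything no rule explicitly allows

def filter_packet(packet, rules=RULES):
    """Return the action of the first rule matching the packet."""
    for rule in rules:
        if (rule["dst_port"] == packet["dst_port"]
                and rule["protocol"] == packet["protocol"]):
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet({"dst_port": 443, "protocol": "tcp"}))   # allow
print(filter_packet({"dst_port": 23, "protocol": "tcp"}))    # deny
print(filter_packet({"dst_port": 8080, "protocol": "tcp"}))  # deny (default)
```

The default-deny fallback mirrors the access control policy enforcement described above: traffic is blocked unless a rule explicitly permits it.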
What Is a Personal Firewall?
It is important to understand why we need a firewall and how it helps us in the world of secure computing. We need to understand the goals of information security because it helps us to understand how a firewall may address those needs.
Why You Need a Personal Firewall
In the age of high-speed Internet access, you electronically connect your computer to a broad network over which, unless you have installed a personal firewall, you have limited control and from which you have limited protection. Until recently, high-speed access was available mainly to people who worked for organizations that provided it.
Like anything, the high-speed connection has its own drawbacks. Ironically, the very feature that makes a high-speed connection attractive is also the reason that makes it vulnerable. In a way, connecting to the internet via high-speed connection is like leaving the front door of your house open and unlocked. This is because high-speed Internet connections have the following features:
- A constant IP address - makes it easy for an intruder who has discovered your computer on the internet to find it again and again.
- High-Speed Access - Means that the intruder can work much faster when trying to break into your computer.
- An always-active connection - means that your computer is vulnerable whenever it is connected to the internet.
Defending yourself with a Personal Firewall
So now you have an idea of how you are vulnerable whenever you are online on a high-speed Internet connection, compared to an ordinary 56 Kbps connection. What you now need to know is how you can defend yourself against the threats posed by this type of connection.
A personal firewall is important when:
- You surf the internet at home using an 'always on' broadband connection
- You connect to the internet via a public WiFi network in a park, cafe or airport
- You run a home network which needs to be kept isolated from the internet
- You wish to be kept informed when any program on your computer attempts to connect to the internet
Most personal firewalls are highly configurable, so you can easily create security policies to suit your individual needs.
Lab: Cell respiration
Cellular respiration is the set of metabolic reactions and processes that take place in the cells of organisms to convert biochemical energy from nutrients into adenosine triphosphate (ATP), and then release waste products. The reactions involved in respiration are catabolic reactions that involve redox reactions (the oxidation of one molecule and the reduction of another). Respiration is one of the key ways a cell gains useful energy to fuel cellular activity.
Aerobic respiration requires oxygen in order to generate energy (ATP). The equation below shows the complete oxidation of glucose:
- C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
yielding a maximum of 38 ATP molecules per oxidised glucose molecule, although due to certain inefficiencies the yield is estimated at between 29 and 30 ATP per glucose.
This lab offers an opportunity for students to observe evidence for respiration in seeds. A seed is a living organism. It is a small embryonic plant, enclosed in a covering called the seed coat, usually along with some stored food. A seed is considered dormant until the necessary conditions for germination are met. Germination "involves the reactivation of the metabolic pathways that lead to growth." As a seed moves into germination the rate of cellular respiration greatly increases.
There are a few possible ways that the results of respiration might be observed. Look back at the glucose oxidation equation to consider how we might measure respiration: we could measure the glucose consumed, the O2 consumed, or the CO2 produced.
In this lab you will compare the relative volume of oxygen consumed by germinating and non-germinating (dry) pea seeds and investigate the effect of temperature on the rate of consumption (one of a number of factors that could affect the respiration process). Respirometers will be used to measure the change in gas volume for germinating and non-germinating pea seeds at two different temperatures.
In preparation for the experiment:
- Review cell respiration in a textbook, using the The Biology Place BioCoach or using another resource.
- Review the basics of how this experiment works at Cell Respiration, one of the labs at The Biology Place Lab Bench.
The ideal gas law is fundamental to the understanding of how the respirometer works. The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation is:
- [math]pV = nRT\,[/math]
- p is the absolute pressure of the gas;
- V is the Volume of the gas;
- n is the number of molecules of gas;
- R is the ideal, or universal, gas constant; and
- T is the absolute temperature of the gas (in kelvins, K).
The following principles follow from the relationship denoted in the ideal gas law:
- If temperature and pressure are constant, then the volume of the gas is directly proportional to the number of molecules of gas.
- If the temperature and volume are constant, then the pressure of the gas is in direct proportion to the number of molecules of gas present.
- If the number of gas molecules and the temperature are constant, then the pressure is inversely proportional to the volume.
- If the temperature changes and the number of gas molecules are constant, then either pressure or volume (or both) will change in direct proportion to the temperature.
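The proportionality principles above can be checked numerically. A minimal sketch in SI units; the function name is ours, not part of the lab:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def moles(p, V, T):
    """Solve pV = nRT for n (p in Pa, V in m^3, T in K)."""
    return p * V / (R * T)

# At constant T and p, volume is directly proportional to the amount of gas:
p, T = 101_325, 298.0           # ~1 atm, ~25 C
n1 = moles(p, 0.001, T)         # 1 L of gas
n2 = moles(p, 0.002, T)         # 2 L of gas
print(f"doubling V multiplies n by {n2 / n1:.1f}")

# In the respirometer, O2 consumed at constant T and p therefore shows up
# as a proportional decrease in gas volume.
```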
In this lab, we will use sodium hydroxide (NaOH) as an agent to combine with the CO2, produced as a result of cellular respiration, to form solid sodium carbonate (Na2CO3) according to the following reaction:
- CO2 + 2 NaOH → Na2CO3 + H2O
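The balanced equation above fixes the proportions: each mole of CO2 consumes two moles of NaOH. A small sketch to get a sense of scale; the 0.10 g figure is illustrative, not a lab value:

```python
# Rough stoichiometry for the CO2 scrubber (illustrative numbers only).
# CO2 + 2 NaOH -> Na2CO3 + H2O, so 1 mol CO2 consumes 2 mol NaOH.
M_CO2, M_NAOH = 44.01, 40.00   # molar masses, g/mol

def naoh_needed(grams_co2):
    """Grams of NaOH required to absorb a given mass of CO2."""
    mol_co2 = grams_co2 / M_CO2
    return 2 * mol_co2 * M_NAOH

print(f"{naoh_needed(0.10):.3f} g NaOH absorbs 0.10 g CO2")
```

In practice the cotton is saturated with excess NaOH solution, so the scrubber capacity is not the limiting factor in the experiment.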
Question: Knowing that O2 gas is consumed during cellular respiration, and that the CO2 gas is removed, which of the above principles related to the ideal gas law will apply to the gas in the respirometer?

The relevant principle is the first: with temperature and pressure constant, the volume of the gas is directly proportional to the number of gas molecules, so the gas volume in the respirometer decreases as O2 is consumed.
A respirometer will be set up for each of the experimental conditions: 3 sources of respiration (germinating peas, dry peas, plastic beads) and 2 temperatures (room temperature and 10°C). Study the diagram of the respirometer setup at BioTopics, The Respiration Process. (Scroll down to near the bottom, "a simple respirometer".)
Print out the materials and procedure for reference during the lab.
- Masking tape
- Peas, half germinating and half dry
- Beads (or other small non-floating objects)
- 2 Containers for water baths
- Food coloring, 1-2 dark colors
- Cotton balls
- KOH (potassium hydroxide) or NaOH (sodium hydroxide)*
- Vaseline (or putty or non-hardening clay....used to seal the respirometers)
- *Potassium hydroxide and sodium hydroxide are harmful if brought into contact with the eye, skin or if swallowed. Goggles should be worn when using these solutions. See links for further safety information:
Before beginning the experiment, prepare a data table to record the results for the 6 experimental conditions.
Use the following procedure as a guide in performing the experiment:
- Prepare a room-temperature bath (approx. 25°C) and a cold-water bath (approx. 10°C).
- Prepare two containers of dark-colored water.
- Prepare the contents of the respirometers which will go in the room temperature bath. Set the beads aside.
- Determine the number of germinating peas that fit into the 9-dram vial in the respirometer. Find the volume of these germinating peas, using water displacement.
- Measure the volume of the same number of dry peas and add beads to attain an equal volume to the germinating peas.
- Measure an amount of beads to the same volume as the germinating peas.
- Repeat the previous step to prepare contents for the respirometers which will go in the 10°C bath.
- Number each of 6 respirometers (a 9-dram vial with tubing inserted and caulked through lid) using tape/sharpie.
- Place a small wad of absorbent cotton in the bottom of each vial and, using a pipette, saturate the cotton with 15% NaOH (sodium hydroxide). It is important that the same amount of NaOH be used for each respirometer.
- Place a small wad of dry, nonabsorbent material on top of the saturated cotton.
- Place the first set of germinating peas, dry peas and beads in vials 1-3, respectively.
- Place the next set of germinating peas, dry peas and beads in vials 4-6, respectively.
- Recap the vials. Seal the edge of the lid with vaseline or non-hardening clay.
- Weight each vial by attaching something heavy to the outside.
- Place the vials in their respective baths for 7 minutes.
- After 7 min, put the free ends of the tubing into the beakers of colored water. A little water should enter the tubing and then stop. If the water continues to enter the tubing, check for leaks in the respirometer.
- Allow the respirometers to equilibrate for 3 more minutes and then mark the initial position of the water in each tube (time 0) with a piece of tape.
- Check the temperature in both baths and record.
- Every 5 minutes for 20 minutes, measure the water level (in centimeters from the starting position) in the six tubes. Record measurements in your data table.
Some ideas for results/conclusion/discussion
Use the following ideas to further your understanding of the results:
- The beads are included as a control. How can the results for the beads be used to "correct" the results for the peas?
- Graph the results for the germinating peas and dry peas for each temperature.
- How does the amount of O2 consumed change over time in the different conditions?
- Determine the rate of O2 consumption (the slope of the line) of germinating and dry peas during the experiments at each temperature.
- What is the effect of germination on the respiration in peas?
- Why did the vial have to be completely sealed around the lid and around the tubing/lid connection?
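The rate of O2 consumption asked for above is the slope of a best-fit line through the readings. A sketch with made-up data; the readings below are invented, and in a real analysis you would first subtract the bead-control readings, as the control question suggests:

```python
# Hypothetical respirometer readings: water moves toward the vial as O2
# is consumed. Times in minutes, distances in cm (invented example data,
# already corrected with the bead control).
times = [0, 5, 10, 15, 20]
germinating = [0.0, 0.9, 1.8, 2.7, 3.6]
dry         = [0.0, 0.1, 0.2, 0.3, 0.4]

def slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(f"germinating peas: {slope(times, germinating):.3f} cm/min")
print(f"dry peas:         {slope(times, dry):.3f} cm/min")
```

Comparing the two slopes quantifies the effect of germination on respiration rate; repeating the fit for the 10°C data shows the effect of temperature.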
- Seed. In Wikipedia, accessed 27 Mar 2011.
- Steane, Richard. The Respiration Process. BioTopics, accessed 27 Mar 2011.
- Billingsley, J. and Miller, L. Cellular Respiration Lab, EDHS Green Sea, accessed 27 Mar 2011.
- Nuño, J. S. Laboratory 5: Cell Respiration, jdenuno.com, accessed 27 Mar 2011.
- Ideal gas law. In Wikipedia, accessed 27 Mar 2011.
Recently, NASA released images captured by the Hubble Space Telescope that show off what scientists are describing as a "Cosmic Caterpillar" that stretches across the universe for nearly 6,000,000,000,000 miles. The so-called Caterpillar is actually a massive cloud of space dust and gas that is collapsing in on itself to form a new star.
Unfortunately for the would-be new star, there are roughly 65 very large and extremely hot stars lurking nearby, visible on the right side of the image. These stars produce what is said to be a powerful stellar wind, which is doing its best to disperse the cloud of gas and dust, forming the long tail-like structure seen in the image. Additionally, 500 less bright stars in the vicinity add to the destructive forces at large.
At the moment, it is unclear whether the "caterpillar cloud"--or IRAS 20324+4057--will be able to fight back by gathering enough mass to counteract the erosion. It could eventually collect enough material to collapse into a very bright and quite large star, but everyone reading this will have long been dead and forgotten before that happens.
The Civil War changed a lot of things in the United States—slavery was abolished, new battlefield medicine was perfected, the West was opened up to railroads and the nation was united. It also changed our money. Before the war, there were 8,000 different kinds of money being used in the United States. It wasn’t until after the war that the U.S. started to really use the dollar.
Banks printed their own paper money. And, unlike today, a $1 bill wasn’t always worth $1. Sometimes people took the bills at face value. Sometimes they accepted them at a discount (a $1 bill might only be worth 90 cents, say). Sometimes people rejected certain bills altogether.
Those dollar bills looked quite different from today’s bills, whose design dates only to 1963, says The Dollar Bill Collector:
The current design of the United States one dollar bill ($1) technically dates to 1963 when the bill became a Federal Reserve Note as opposed to a Silver Certificate. However, many of the design elements that we associate with the bill were established in 1929 when all of the country’s currency was changed to its current size. Collectors call today’s notes “small size notes” to distinguish them from the older, larger formats. The most notable and recognizable element of the modern one dollar bill is the portrait of the first president, George Washington, painted by Gilbert Stuart.
That design means so much to us that we like our money spotless, rather than dirty. As Smart News has reported:
People like their cash fresh and clean, like OutKast’s wardrobe, and they’re more likely to hold on to those neat bills than spend them quickly. Dirty cash, on the other hand, encourages fast spending. At least that’s the conclusion of a new study published in the Journal of Consumer Research.
More from Smithsonian.com: |
Going On a Shape Hunt: Integrating Math and Literacy
Integrating mathematics and literacy allows students to develop an understanding of the place of mathematics in their world. Students are introduced to the idea of shapes through a read-aloud session with an appropriate book. They then use models to learn the names of shapes, work together and individually to locate shapes in their real-world environment, practice spelling out the names of shapes they locate, and reflect in writing on the process. This lesson provides opportunities to engage students using many different learning modalities. |
On June 28, NASA tested the LDSD, which stands for Low Density Supersonic Decelerator. Behind this name lies a new type of “supersonic” parachute, much larger than those currently in use. The project aims to enable exploration missions to Mars to land heavy (and delicate) payloads on the surface of the Red Planet, for which conventional parachutes — a tried and tested technology, but one dating back to projects of the 1970s — are no longer sufficient. The experimental launch used the U.S. Navy’s Pacific Missile Range Facility, near the island of Kauai, Hawaii, at 8:40 local time.
The LDSD is 30.5 meters long and is paired with two devices resembling flying saucers, the Supersonic Inflatable Aerodynamic Decelerators (SIADs). One SIAD is 6 meters wide, while the other measures 8 meters. The test vehicle (weighing 3,175 kg) was attached to a helium-filled balloon, which rose to 36.5 kilometers up in the stratosphere. At that point the vehicle detached from the balloon and used its thrusters to climb to 55 km, reaching a speed of Mach 4 (four times the speed of sound). The altitude was chosen to simulate the Martian atmosphere, which has only about 1% of the density of Earth’s atmosphere at sea level.
During the return to the surface, the SIADs were supposed to inflate and slow the test vehicle to Mach 2.5, with the parachute then deploying to allow a safe splashdown in the Pacific Ocean. During the test, however, the system did not work perfectly: the SIADs inflated correctly, but the parachute did not open properly. Despite this, the landing was successful (at 23:40 Italian time). The scientists were satisfied: even tests that do not go as planned are considered very valuable, because they make it possible to understand the mistakes.
Lesson 1: Explorative search
“Somewhere, something incredible is waiting to be known…” – Carl Sagan.
Listen to what the librarian says:
When you are looking for or maybe already have an idea for a research topic it might be the right time to make an explorative search. An explorative search is a broad search, which gives you an overview of what literature and data is available within your topic. It helps you to clarify your mind and later it will serve as your foundation to describe a rationale – the scientific background for the study, and to write your problem formulation.
Begin your explorative search by, for example, reading trade journals, encyclopaedias and reference books, and by browsing the Internet. This can help ensure you focus on exactly what you want to research, and it can serve as an early review to establish the context and rationale for your study. You will also find out whether it is possible to get hold of literature and other information about your topic idea, and whether the topic is interesting at all.
Whether your research topic or your idea is rooted in a wonder at something you have observed in practice or arises out of the literature, the explorative search should give you sufficient knowledge of your area to determine if the idea is good enough as basis for a problem formulation.
Start by exploring whether your idea meets the following criteria:
- Is my topic novel?
- Is my topic feasible?
- Is my topic relevant?
To save both your own and your supervisor’s time, ask yourself the questions above before you start to spend time on writing your problem formulation, and before you present your idea to your supervisor.
Find out how to check for the above-mentioned criteria on the following page. |
HYMS Session V
For the gateway programme
Critical and reflective writing
Part A: Academic Integrity [30 mins]
Plagiarism is the use of ideas, works or words of another person and presenting them as if they were your own. Plagiarised ideas can come from any source including articles, books, online sources, television programmes, lectures or any other information source. Writing with integrity is the best way to avoid plagiarism. This requires you to reference the ideas of others, and properly quote and reference the words of others. See the links at the bottom of this page for further advice on referencing.
Plagiarism often occurs accidentally when students don't reference properly. However, it can also be done purposefully if you are presenting someone else's work as your own. This doesn't mean you should not use the ideas of others! Using the ideas and work of others is how you provide academic evidence for your thinking. To avoid plagiarism, however, you need to reference these ideas correctly.
Remember: Plagiarism is easy to avoid! All you need to do is ensure you are referencing sources correctly. This does not just involve the use of in-text citations or footnotes but also requires the use of appropriate punctuation to indicate when you are quoting the work of others.
Plagiarism is a serious issue for academic integrity. If you do not reference a source properly, such as paraphrasing it without acknowledging it, or not mentioning it at all, then the true origin of the material is hidden from the marker. This is counter to academic approaches to writing, where evidence should be clearly referenced and traceable back to the original source.
Plagiarism may take the form of direct copying, reproducing or paraphrasing ideas, sentences, drawings, graphs, internet sites or any other source and submitting them for assessment without appropriate acknowledgement. Plagiarism can also include copying another student’s work without their knowledge or submitting work that has already been published in another language. The latter relates to the copying of translated material, copying and re-arranging material, or taking the ideas and findings of the material without attribution.
Part B [15 mins]
2.9 billion people still live under laws which criminalise same-sex relationships. That’s nearly half of the world’s population.
In this article, we remind ourselves why we need to uphold the human rights of lesbian, gay, bisexual and transgender (LGBT) people and combat homophobic discrimination around the world.
How have we done at home?
We’ve come a long way since Henry VIII passed a law making male homosexual activity punishable by death. In 1835 the last two men were hanged for this, but prosecutions continued. It was not until 1967 that the law was changed in England and Wales to decriminalise consensual homosexual acts taking place in private between men over 21 years old. (Lesbianism has never been criminalised.)
Throughout the 1900s, many gay people were subjected to violent hate crime and other forms of discrimination. A notable example is the persecution of war-time code-breaker Alan Turing, who was prosecuted in 1952 for ‘gross indecency’ and given a choice between imprisonment or hormonal treatment to ‘reduce’ his homosexuality. Turing chose hormone treatment. An inquest later found that he had committed suicide. In 2009, British Prime Minister Gordon Brown made an official public apology for “the appalling way he was treated”. In 2013, Queen Elizabeth II granted Turing a posthumous pardon.
The growing influence of human rights underpinned several changes in the law. In 1981, the European Court of Human Rights declared that Northern Ireland’s criminalisation of homosexual acts between consenting adult males violated the right to respect for private life. As a result, the law was changed.
But this growing trend didn’t prevent Margaret Thatcher’s government passing ‘Section 28‘ in 1988, which banned schools from ‘promoting’ homosexuality. Section 28 was repealed in 2003. Current Prime Minister David Cameron has since admitted that voting to keep Section 28 was a mistake and has apologised for the policy, acknowledging that it was offensive.
What’s happening now?
Recently there have been several significant advances in equality, including the equal right to adopt, protections against sexual orientation discrimination in the workplace, access to civil partnerships and same-sex marriage.
Despite these advances, the need to address homophobia remains urgent:
- Between 2010 and 2013, one in six lesbian, gay and bisexual people was the victim of homophobic hate crime.
- A National Union of Students survey revealed that one in five lesbian, gay and bisexual students experienced bullying or harassment at university.
- Lesbian, gay and bisexual people are twice as likely as heterosexual people to have suicidal thoughts or to make suicide attempts.
- Homophobia remains a real issue in certain professions. It has been identified as one of the most prevalent forms of discrimination in sport.
- Prominent UK-based gay rights organisation Stonewall has reported worrying attitudes to homosexuality in health and social care services, as well as significant levels of homophobic bullying in UK primary and secondary schools.
What about outside the UK?
Our domestic courts have the power to stop people being deported to countries where they would live in fear because of their sexual orientation. A bisexual man, Orashia Edwards, feared he would be killed if he were deported to his home country of Jamaica. Sexual activity between men is a criminal offence in Jamaica, punishable by up to 10 years imprisonment. In 2014, Human Rights Watch reported that LGBT people were subject to “unchecked violence” in Jamaica. Orashia was permitted to stay in the UK after a three-and-a-half year legal battle with the Home Office.
Nonetheless, the UK regularly returns LGBT asylum seekers to their country of origin. In some cases, this can have devastating consequences. Jackie Nanyonjo was persecuted in Uganda for being a lesbian. She sought asylum in Britain but was forcibly deported in January 2013. Ugandan authorities held Jackie for many hours, despite the fact that she was in pain and vomiting blood. Jackie went into hiding for fear that she had been exposed as a lesbian. Without medical attention, Jackie’s health deteriorated, and she died on 8 March 2013.
It’s a shocking picture. And it shows the importance of human rights in combatting homophobia both at home and abroad. The fight goes on.
Take a look at our explainers on how human rights have overturned homophobic laws and promoted equality, including the prohibition on discrimination and the right to private and family life.
September is sepsis awareness month
Sepsis can kill, but recognising sepsis early means the right treatment can be given, which can save lives. So around the world, people are starting to run campaigns to raise awareness of this life-threatening condition. In England there are around 123,000 cases of sepsis each year, and an estimated 37,000 deaths are attributed to the condition. Sepsis can be hard to diagnose, but we have put a few tips together to help you recognise it.
What is sepsis?
Sepsis is also sometimes known as septicaemia or blood poisoning. It is a life-threatening reaction to infection.
It is fairly common for people to get infections and often these are easy for the body to deal with. Normally the body reacts to infection by mounting an immune response, which clears the infection but can make us feel unwell while we are fighting it. If the infection is cleared quickly the reaction can be quite mild, for example in the case of a cold, where fighting a virus can give us a sore throat, swollen glands and headache. However, sometimes the reaction to infection is severe and in an attempt to fight the infection the immune system begins to damage our own tissues and organs. This makes the person very unwell and is called sepsis.
Can I get sepsis?
It is possible for anyone of any age to get sepsis. However, it is more common in people with a weakened immune system. Those at highest risk of sepsis include babies under 1 year of age, people over 75 years, anyone with conditions affecting the immune system or some chronic conditions such as diabetes, people who have recently had surgery or women who are pregnant or have just given birth.
How do I know if it is sepsis?
It can be difficult to tell if you are developing sepsis so if in doubt seek medical attention. Here are some checklists to help you recognise the signs, which are different in adults and children:
In adults think SEPSIS:
S: Slurred speech / confusion
E: Extreme shivering / muscle pain
P: Passing no urine in a day
S: Severe breathlessness
I: It feels like you’re going to die
S: Skin mottled / discoloured
In babies and young children symptoms suggesting an emergency are:
Blue, pale or blotchy skin
A rash that doesn’t disappear when you roll a glass over it
Difficulty breathing / breathing very fast
A weak high-pitched cry unlike their normal cry
Not responding like they normally do
Being sleepier than normal
What should I do if I think someone has sepsis?
If you recognise the signs and symptoms above and you believe someone has sepsis the best thing is to help them seek medical advice urgently.
If they seem confused it would be good for someone who knows what medications they are on and any allergies they have to accompany them to see the doctor, but only if this won’t cause any delays.
What will happen when I see a doctor for sepsis?
If you or your child see a doctor for these symptoms they will want to do some tests such as:
- blood pressure
- rate of breathing
- heart rate
- level of oxygen (using a monitor on the end of your finger)
and they may ask you some questions to assess your level of consciousness.
This is because there are some early warning signs of sepsis, which flag to the doctor that you are developing sepsis. If you are diagnosed with sepsis you may need treatment in hospital including antibiotics.
Watching out for the signs and symptoms of sepsis can save lives. Don’t delay – be sepsis aware today! |
Electro nickel plating, also known as nickel electro-deposition, is becoming an increasingly popular process for a variety of different manufacturing applications. Electro nickel plating is a process that uses an electrical current to coat a conductive material, typically made of metal, with a thin layer of nickel. Other metals used for electroplating include stainless steel, copper, zinc, and platinum.
Benefits of Electro Nickel Plating
In general, electro nickel plating improves a wide range of characteristics not inherently present in the base material. Some of these benefits include:
- Increased resistance to corrosion
- Improved hardness
- Superior strength
- Resistance to wear
- Improved ductility
How Electro Nickel Plating Works
To transfer nickel onto the surface of a product properly, a negative charge must be applied to the base material. To achieve this, the product is typically attached to a rectifier, battery, or other power supply via a conductive wire. A rod made of nickel is then connected in similar fashion to the positive side of the rectifier or power source. Once these steps are completed, the base material is submerged in a solution of a salt containing the electroplating metal; for electro nickel plating, this solution consists of water and nickel chloride salt. Under the electric current present in the solution, the nickel chloride salt dissociates into negative chloride anions and positive nickel cations. The negative charge of the base metal attracts the positive nickel cations, while the positive charge of the nickel rod attracts the negative chloride anions. Through this chemical reaction, the nickel in the rod oxidizes and dissolves into the solution. From there, the oxidized nickel is attracted to the base material, and subsequently coats the product.
Current Density in the Electro Nickel Plating Process
Electro nickel plating involves a wide range of current density levels. Current density directly determines the deposition rate of nickel to the base material—specifically, the higher the current density, the quicker the deposition rate. Current density, however, also affects plating adherence and plating quality, with higher current density levels delivering poorer results. Therefore, the optimal level of current density depends on the type of base material and specific type of results the final product requires.
One way to avoid working at lower current densities is by employing a discontinuous direct current to the electroplating solution. By allowing between one and three seconds of break time between every eight to fifteen seconds of electrical current, high current densities can produce a higher level of quality. A discontinuous current is also beneficial for avoiding over-plating of specific sections on the base material. Another solution to the current density issue involves incorporating a strike layer to the initial electro nickel plating process. |
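Since deposition rate scales with current density, a rough plating-thickness estimate can be made with Faraday's law of electrolysis. The sketch below is illustrative only: it assumes divalent nickel (Ni²⁺) and 100% current efficiency, which real plating baths only approximate, and the function name is our own.

```python
# Estimate nickel deposition thickness from current density using
# Faraday's law of electrolysis. Simplified sketch: assumes 100%
# current efficiency and divalent nickel (Ni2+).

F = 96485.0        # Faraday constant, C/mol
M_NI = 58.693      # molar mass of nickel, g/mol
RHO_NI = 8.908     # density of nickel, g/cm^3
Z = 2              # electrons transferred per Ni2+ ion

def deposit_thickness_um(current_density_a_dm2: float, minutes: float) -> float:
    """Thickness (micrometres) deposited at a given current density (A/dm^2)."""
    j = current_density_a_dm2 / 100.0          # A/dm^2 -> A/cm^2
    charge = j * minutes * 60.0                # C per cm^2 of surface
    mass = charge * M_NI / (Z * F)             # g per cm^2 (Faraday's law)
    return mass / RHO_NI * 1e4                 # cm -> micrometres

# A typical bath at 4 A/dm^2 for 30 minutes deposits roughly 25 micrometres:
print(round(deposit_thickness_um(4.0, 30.0), 1))
```

The linear dependence on current density in this model is exactly why plating time, not just current, must be tuned when higher densities degrade plating quality.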
The following is an article that originally appeared on Russian7.ru (Русская Семерка). The original can be read here. The following translation to English has been provided by SRAS Home and Abroad Scholar Lindsey Greytak.
Sharing common roots, Russian and Ukrainian at first glance look very similar. This is not so: in reality they have more differences than similarities.
From the Same Roots
It is well known that Ukrainian and Russian belong to the same group of Eastern Slavic languages. They have a common alphabet, similar grammar, and significant lexical uniformity. However, the particularities in the development of cultures of the Ukrainian and Russian people led to noticeable differences in their language systems.
This first difference between Russian and Ukrainian is apparent in the alphabet. The Ukrainian alphabet took shape at the end of the 19th century, and it differs from Russian in that Ukrainian does not use the letters Ёё, Ъъ, Ыы, Ээ, but does have Ґґ, Єє, Іі, Її, which are not present in Russian.
Consequently, uncharacteristic in Russian pronunciation are some sounds of the Ukrainian language. Thus, absent in Russian is the letter “Ї”, which sounds closer to “ЙИ”, the “Ч” is pronounced more harshly in Ukrainian, like in Belarusian or Polish, and “Г” is produced with a guttural, fricative sound.
Modern research shows that the Ukrainian language is closer to other Slavic languages: Belarusian (29 common characteristics), Czech and Slovak (23), Polish (22), Croatian and Bulgarian (21), and only 11 common characteristics with Russian.
Some linguists, on the basis of these facts, even place doubt that Russian and Ukrainian should be placed in a single language group.
Statistics show that only 62% of words shared between Russian and Ukrainian have common characteristics. Therefore, in terms of similarity to Ukrainian, Russian sits in fifth place behind Polish, Czech, Slovak, and Belarusian. For comparison, English and Dutch are lexically more similar, sharing 63% of common characteristics, which is more than Russian and Ukrainian share.
Divergence of Paths
Fact: Only 62% of words have common characteristics between Russian and Ukrainian.
The differences between the Russian and Ukrainian languages are largely due to the particularities between the formation of two nations. The Russian nation centralized its formation around Moscow, which led to the dilution of its lexicon with Finno-Ugric and Turkic words. The Ukrainian nation was formed by the unification of the South Russian ethnic groups, and therefore the Ukrainian language to a considerable degree has retained its ancient origins.
By the middle of the 16th century, Ukrainian and Russian already had significant differences.
Texts of that time in the Old Ukrainian language are generally understandable to modern Ukrainians, but, for example, documents from the epoch of Ivan the Terrible are more difficult for modern Russians to understand.
Even more noticeable discrepancies between the two languages began to develop with the beginning of the formation of the Russian literary language in the first half of the 18th century. The abundance of Church Slavonic words in the new Russian language made it hard to understand for Ukrainians.
For example, Russian took the Church Slavonic word «благодарỳ/blagodaru» (thank you), known to Russians today as «благодарю/blagodaryu». The Ukrainian language, on the other hand, preserved the old Russian word «дáкую» for thank you, which appears as «дякую» in modern Ukrainian.
From the end of the 18th century, the Ukrainian literary language began to take shape, which being on course with other pan-European languages, gradually broke away from its ties to the Russian language.
In particular, a rejection of Church Slavonicisms occurred; instead, there was a focus on folk dialects, as well as word borrowing, primarily from other Eastern European languages.
The following table visually shows how much closer the basic vocabulary of Modern Ukrainian is to other Eastern European languages and how far it is from Russian.
Also important, especially in Ukrainian, is its dialectical diversity. This is a result of individual regions in Western Ukraine having been part of states such as Austria-Hungary, Romania, Poland, and Czechoslovakia. So, the dialect of a resident of the Ivano-Frankivsk Region in western Ukraine may not always be understood by someone from Kyiv, while a Muscovite and a Siberian speak one and the same language.
False Friends in Russian and Ukrainian
Fact: In Ukrainian the word «жаль» (pity/sorry) may also be used as a noun. «Ой настала жаль туга да по всій Україні» (It was such a pity for all of Ukraine).
Despite the fact that Russian and Ukrainian have many common words, and even more words similar in pronunciation and spelling, they nevertheless often possess different semantic nuances.
Take, for example, the Russian word «иной» (different/other) and its related Ukrainian word «iнший» (another). The sound and spelling of these words are similar, but their meanings have noticeable differences.
A more accurate correspondence to the Ukrainian word «iнший» in Russian would be «другой» (other), but it is somewhat more formal and does not bear such emotional and artistic expressiveness as the other Russian word «иной».
Another word, «жаль» (pity/sorry), is identical in both languages in spelling and pronunciation, but differs in its semantic meaning. In Russian the word is used as a predicative adverb. Its main task is to express regret for something, or pity for someone.
In the Ukrainian language, when used as an adverb, the word «жаль» has a similar meaning to Russian. However, it can also be a noun, and as a noun its semantic nuances are noticeably enhanced, taking on shades of words such as grief, bitterness, and pain: «Ой настала жаль туга да по всій Україні» (It was such a pity for all of Ukraine). In this context, the Russian version of the word «жаль» is not used.
Divergent Grammar in Russian and Ukrainian
It is often possible to hear from foreign students that Ukrainian is more closely related to other European languages than to Russian. It has long been noticed that translating from French or English into Ukrainian is in some respects simpler and easier than translating into Russian.
Everything has to do with certain grammatical structures. Linguists have a certain joke that in European languages one says «поп имел собаку» (The pope had a dog), and only in Russian «у попа была собака» (With the pope was a dog). Indeed, in Ukrainian in similar cases, along with the verb «есть» (is), the verb «иметь» (have) is also used. For example, the English phrase “I have a younger brother” in Ukrainian can be said like «У мене є молодший брат» (With me is a younger brother), but also, with equal meaning, «Я маю молодшого брата» (I have a younger brother).
Fact: The name «Українська мова» (Ukrainian language), as a common language throughout the entire ethnic territory, was spread and consolidated only in the 20th century.
The Ukrainian language, in contrast to the Russian language, adopted modal verbs from other European languages. So, in the phrase «Я маю це зробити» (I have to do it), the modal verb is used with the meaning of obligation – like in the English “I have to do it.” In Russian, this function of the verb «иметь» “to have” has long since disappeared from use.
Another indicator in the differences in grammar is that the Russian verb «ждать» (to wait) is transitive, but in Ukrainian «чекати» (to wait) is not transitive. Thus, in Ukrainian the verb is used with a preposition «чекаю на тебе» (I wait for you) – like in English. In Russian the verb is used without a preposition «жду тебя» (literally: “I wait you”).
However, there are cases when Russian borrows from other European languages and Ukrainian does not. Thus, the names of the months in Russian are copied from Latin: for example, «март» is March in English, martii in Latin, März in German, mars in French. Here, the Ukrainian language has retained the connection with its original Slavic vocabulary using the word «березень» (berezyen’) for the month. This name is actually taken from the word for “birch tree.” |
Strategy: To find the de Broglie wavelength and the slit separation, note that the wavelength of the electrons must equal the wavelength of the photons to give the same diffraction pattern. The wavelength of a photon is related to its energy by E = hc/λ, and the wavelength of the electrons is given by the de Broglie relation λ = h/p. Energy is released in quanta equal to Planck's constant times the frequency of the photon: E = hν = (6.626 × 10^-34 J/Hz)(3.747 × 10^16 Hz).
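The strategy above (matching an electron's de Broglie wavelength to a photon's wavelength) can be sketched numerically. This is a non-relativistic sketch; the function names are our own, and standard values are used for the constants.

```python
# De Broglie wavelength of an electron vs. the wavelength of a photon,
# per the strategy above. Non-relativistic: lambda = h / sqrt(2 m K).

import math

H = 6.626e-34      # Planck's constant, J*s
M_E = 9.109e-31    # electron mass, kg
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def de_broglie_wavelength(kinetic_energy_ev: float) -> float:
    """Wavelength (m) of an electron with the given kinetic energy (eV)."""
    k = kinetic_energy_ev * EV
    return H / math.sqrt(2.0 * M_E * k)

def photon_wavelength(energy_ev: float) -> float:
    """Wavelength (m) of a photon with the given energy (eV), lambda = h c / E."""
    return H * C / (energy_ev * EV)

# A 100 eV electron beam, as in the textbook problem below:
print(de_broglie_wavelength(100.0))   # ~1.23e-10 m, about 1.2 angstroms
```

Note the contrast: a 100 eV photon would have a wavelength a hundred times longer, which is why electrons, not photons, are used for atomic-scale diffraction.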
- The theory of the energy distribution of blackbody radiation was developed by Planck and first appeared in 1901. Planck postulated that energy can be absorbed or emitted only in discrete units, or photons, with energy E = hν = ħω. The constant of proportionality is h = 6.626 × 10^-34 J·s.
- Find the frequency of light which corresponds to an energy equal to 3E. An electron and a photon have the same energy E; find the ratio of the de Broglie wavelength of the electron to the wavelength of the photon, given that the mass of the electron is m and the speed of light is C. The energy of a photon of light with wavelength λ is approximately E = hc/λ.
With this Wien's law calculator, you can easily estimate the temperature of an object based on the peak wavelength or frequency of its thermal emission spectrum. Read about Wien's displacement law, learn the Wien's law formula, and evaluate the temperature of the Sun's surface, lava, or any hot body yourself. Useful equations: E = hν, with h = 6.63 × 10^-34 J·s; c = λν, with c = 3.00 × 10^8 m/s; 1 m = 1 × 10^9 nm; 1 kJ = 1000 J. Example: light with a wavelength of 525 nm is green. To calculate the energy in joules of a green-light photon, first find the frequency: ν = c/λ.
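A minimal sketch of the Wien's-law calculation described above, using the displacement law λ_max = b/T. The function names are our own; b is the standard Wien displacement constant.

```python
# Wien's displacement law: peak blackbody emission wavelength vs.
# temperature, lambda_max = b / T, and the inverse relation.

B_WIEN = 2.898e-3   # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak emission wavelength (nm) for a blackbody at T kelvin."""
    return B_WIEN / temperature_k * 1e9

def temperature_from_peak(wavelength_nm: float) -> float:
    """Blackbody temperature (K) whose emission peaks at the given wavelength (nm)."""
    return B_WIEN / (wavelength_nm * 1e-9)

# The Sun's surface (~5778 K) peaks near 502 nm, in the green:
print(round(peak_wavelength_nm(5778)))
```

The same two-line relation covers lava (~1300 K peaks in the infrared) or any other hot body, which is all the online calculator does under the hood.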
Calculate the energy of a photon of radiation whose wavelength is 421 nm, using the formula c = λν (wavelength and frequency are inversely proportional). The de Broglie wavelength for an electron beam of 100 eV kinetic energy, with m = 9.1 × 10^-28 g, is about 1.2 Å. Calculate the wavelength of radiation with a frequency of 8.0 × 10^14 Hz. Calculating energy and frequency of EM radiation, defining variables: frequency (Hz), energy (joules), Planck's constant h = 6.626 × 10^-34 J·s. Given the formula E_photon = hν, what is the formula for calculating ν?
The energy associated with a single photon is given by E = hν, where E is the energy (SI unit J), h is Planck's constant (h = 6.626 × 10^-34 J·s), and ν is the frequency of the radiation (SI unit s^-1, or hertz, Hz) (see figure below). Frequency is related to wavelength by λ = c/ν, where c, the speed of light, is 2.998 × 10^8 m/s. Equivalently, E = hc/λ: for a wavelength of 7.5 × 10^-7 m, E = (6.626 × 10^-34)(2.998 × 10^8)/(7.5 × 10^-7) ≈ 2.65 × 10^-19 J.
The hotter an object is, the more energy its atoms have. Since atoms in hotter objects have more energy, they can emit photons with more energy than cooler objects can. (When an atom emits a photon, the photon energy comes from the atom, so an atom can't emit a photon with more energy than the atom had.) So hot objects emit high-energy photons, or short-wavelength light. Red light with a wavelength of 700.0 nm has a frequency of 4.283 × 10^14 s^-1. Substituting this frequency into the Planck-Einstein equation shows that a single photon of red light carries an insignificant amount of energy, but a mole of these photons carries about 171,000 joules of energy, or 171 kJ/mol.
Listed below are the approximate wavelength, frequency, and energy limits of the various regions of the electromagnetic spectrum: wavelength (m), frequency (Hz), energy (J). For light of 600 nm wavelength, how many photons does a given amount of energy correspond to? 1.) Determine the energy of one photon: E_photon = hc/λ = (6.626 × 10^-34 J·s)(2.998 × 10^8 m/s)/(600 × 10^-9 m) = 3.31 × 10^-19 J/photon. 2.) Calculate the number of photons needed to produce the given amount of energy.
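The two-step photon count above can be written as a short sketch. The function names are our own; the constants are the ones quoted in the worked example.

```python
# Photon energy from wavelength, E = h*c/lambda, reproducing the worked
# example above (a 600 nm photon carries about 3.31e-19 J).

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_j(wavelength_nm: float) -> float:
    """Energy (J) of a single photon with the given wavelength in nm."""
    return H * C / (wavelength_nm * 1e-9)

def photons_for_energy(total_j: float, wavelength_nm: float) -> float:
    """Number of photons of a given wavelength needed to carry total_j joules."""
    return total_j / photon_energy_j(wavelength_nm)

print(photon_energy_j(600.0))   # ~3.31e-19 J per photon
```

Step 2 of the recipe is then a single division: for example, one joule of 600 nm light corresponds to about 3 × 10^18 photons.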
- The formula for energy involving wavelength is E = hc/λ, where E is the energy of the system in joules (J), h is Planck's constant (6.626 × 10^-34 J·s), c is the speed of light in a vacuum (3.0 × 10^8 m/s), and λ is the wavelength in meters (m).
- In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called the "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on, farther and farther from the nucleus.
- Appendix interview transcript apaComment/Request. Calculate the energy in Gigawatts. Sending completion. To improve this 'Frequency and Wavelength Calculator', please fill in questionnaire. Male or Female ?
- 5th grade mathematics conversion chartThis free BMR calculator estimates basal metabolic rate based on well-known formulas. Learn more about variables that affect BMR, and explore hundreds of other calculators addressing topics such as fitness, health, math, and finance, among others.
- Dandd rules redditdiscovered that radiation of an appropriate wavelength can promote electron transitions between the different energy levels within an atom. More specifically, the energy levels of the hydrogen atom were found to be given by the equation 13.60
- Truth table generator javananomaterial industries. The band gap energy of insulators is large (> 4eV), but lower for semiconductors (< 3eV). The band gap properties of a semiconductor can be controlled by using different semiconductor alloys such as GaAlAs, InGaAs, and InAlAs. A table of materials and bandgaps is given in Reference 1. UV/Vis/NIR Spectrometer application ...
- Novelas turcas subtituladas youtubeConvert wavelength to frequency using this online RF calculator. Enter the Wavelength to Calculate the Frequency.
- Best bitcoin mining pool reddit(b) The wavelength of the given particle is of the same order of magnitude as which type of electromagnetic radiation? 3) Determine the frequency of a photon whose energy is 3.00 x 10-19 joule. 4) A photon has a wavelength of 9.00 x 10-10 meter. Calculate the energy of this photon in joules. [Show all work, including the
- Ford motor company ll5 salaryCalculate Photon Energy using Planck's Constant Calculator that calculate the photon energy using Plancks constant. A photon is characterized by either a wavelength, denoted by λ or equivalently an energy, denoted by E. There is an inverse relationship between the energy of a photon and the wavelength of the light given by the equation.
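These relationships are easy to check numerically. The sketch below (a minimal illustration using the same rounded constants as the text; the function names are my own) reproduces the worked values for 700.0 nm and 600 nm light:

```python
# Photon energy from wavelength: E = h*c/lambda (Planck-Einstein relation)
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light in vacuum, m/s
N_A = 6.022e23  # Avogadro's number, photons per mole

def photon_energy(wavelength_m: float) -> float:
    """Energy in joules of one photon with the given wavelength (in meters)."""
    return H * C / wavelength_m

def photons_needed(total_energy_j: float, wavelength_m: float) -> float:
    """Number of photons of the given wavelength that carry total_energy_j."""
    return total_energy_j / photon_energy(wavelength_m)

# Red light at 700.0 nm: ~2.84e-19 J per photon, ~171 kJ per mole of photons.
e_red = photon_energy(700.0e-9)
print(f"{e_red:.3e} J/photon")
print(f"{e_red * N_A / 1000:.0f} kJ/mol")

# 600 nm light: ~3.31e-19 J per photon, as in the worked example above.
print(f"{photon_energy(600e-9):.3e} J/photon")
```

Dividing any required total energy by `photon_energy` then gives the photon count, as in step 2 of the worked example.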
|
NEW YORK (February 13, 2017) – A team of scientists reporting in the journal Nature Climate Change say that negative impacts of climate change on threatened and endangered wildlife have been massively underreported.
In a new analysis, authors found that nearly half of the mammals and nearly a quarter of the birds on the IUCN Red List of Threatened Species are negatively impacted by climate change, with nearly 700 species affected. Previous assessments said only seven percent of mammals and four percent of birds on the Red List were impacted.
The paper reviewed 130 studies, making it the most comprehensive assessment to date on how climate change is affecting our most well-studied species.
Impacts for mammals are wide ranging and include a lower ability to exploit resources and adapt to new environmental conditions. For example, primates and marsupials, many of which have evolved in stable tropical areas, are vulnerable to rapid changes and extreme events brought on by climate change. In addition, primates and elephants, which are characterized by very slow reproductive rates that reduce their ability to adapt to rapid changes in environmental conditions, are also vulnerable. On the other hand, rodent species that can burrow, and thus avoid some extreme conditions, will be less vulnerable.
For birds, negative responses in both breeding and non-breeding areas were generally observed in species that experienced large changes in temperatures in the past 60 years, live at high altitudes, and have low temperature seasonality within their distributions. Many impacted species inhabit aquatic environments, which are considered among the most vulnerable to temperature increase due to habitat loss, fragmentation, and harmful algal blooms. In addition, changes in climate in tropical and subtropical forest areas, already exacerbated by habitat degradation, may threaten forest-dependent species.
Said lead author Michela Pacifici of the Global Mammal Assessment Program at Sapienza University of Rome: “It is likely that many of these species have a high probability of being very negatively impacted by expected future changes in the climate.”
Said co-author Dr James Watson of the Wildlife Conservation Society and University of Queensland: “Our results clearly show that the impact of climate change on mammals and birds to date is currently greatly under-estimated and under-reported. We need to greatly improve assessments of the impacts of climate change on species right now, we need to communicate this to the wider public, and we need to ensure key decision makers know that something significant needs to happen now to stop species going extinct. Climate change is not a future threat anymore.”
The authors recommend that research and conservation efforts give greater attention to the 'here and now' of climate change impacts on life on Earth. This also has significant implications for intergovernmental policy fora such as the Convention on Biological Diversity and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, and the revision of the strategic plan of the United Nations Framework Convention on Climate Change. |
Aortic valve disease is a condition in which the valve between the main pumping chamber of your heart (left ventricle) and the main artery to your body (aorta) doesn't work properly. Aortic valve disease may be a condition present at birth (congenital heart disease), or it may result from other causes.
Types of aortic valve disease include:
Aortic valve stenosis
In this condition, the flaps (cusps) of the aortic valve may become thickened and stiff, or they may fuse together. This causes narrowing of the aortic valve opening. The narrowed valve isn't able to open fully, which reduces or blocks blood flow from your heart into your aorta and the rest of your body.
Aortic valve regurgitation
In this condition, the aortic valve doesn't close properly, causing blood to flow backward into the left ventricle.
Your treatment depends on the type and severity of your aortic valve disease. In some cases you may need surgery to repair or replace the aortic valve.
Aortic valve disease care at Mayo Clinic
Some people with aortic valve disease may not experience symptoms for many years. Signs and symptoms of aortic valve disease may include:
- Abnormal heart sound (heart murmur) heard through a stethoscope
- Shortness of breath, particularly when you have been very active or when you lie down
- Chest pain or tightness
- Irregular heartbeat
- Fatigue after being active or having less ability to be active
- Not eating enough (mainly in children with aortic valve stenosis)
- Not gaining enough weight (mainly in children with aortic valve stenosis)
When to see a doctor
If you have a heart murmur, your doctor may recommend that you visit a cardiologist or have a test called an echocardiogram (ultrasound of the heart). If you develop any symptoms that may suggest aortic valve disease, see your doctor.
Your heart has four valves that keep blood flowing in the correct direction. These valves include the mitral valve, tricuspid valve, pulmonary valve and aortic valve. Each valve has flaps (cusps or leaflets) that open and close once during each heartbeat. Sometimes, the valves don't open or close properly, disrupting the blood flow through your heart and potentially impairing the ability to pump blood to your body.
In aortic valve disease, the aortic valve between the lower left heart chamber (left ventricle) and the main artery that delivers blood from the heart to the body (aorta) doesn't work properly. It may not be closing properly, which causes blood to leak backward to the left ventricle (regurgitation), or the valve may be narrowed (stenosis).
Aortic valve disease may be caused by a heart defect present at birth (congenital). It can also be caused by other conditions, including age-related changes to the heart, infections, high blood pressure or injury to the heart.
Risk factors of aortic valve disease include:
- Older age
- Certain heart conditions present at birth (congenital heart disease)
- History of infections that can affect the heart
- Chronic kidney disease
- History of radiation therapy to the chest
Aortic valve disease can cause complications, including:
- Heart failure
- Blood clots
- Heart rhythm abnormalities |
The vampire bat wants to suck your blood, but how does he find it? New research shows that the bat uses specialized sensors near its nose that are extremely sensitive to heat.
"What the vampire bat has done is through some specialized genetic machinery, it has changed the structure of it [the heat sensor], so it changes the temperature at which it is activated," study researcher David Julius, of the University of California, San Francisco, told LiveScience. "It allows it to pick up the signal of changing body temperatures due to blood flow."
These receptors are very similar to human receptors that sense heat, but also those that sense pain. Figuring out how adaptations to these sensors change their properties in nature can help us treat things like chronic pain and inflammation.
The vampire bat feeds off sleeping animals, including birds and mammals (yes, even humans). To get its blood fix, the bat first needs to find an animal, and then determine if it is sleeping. Previous research showed that these bats have special brain cells that are sensitive to the deep breathing sounds of snoozing animals. [Image Gallery: Bats of the World]
Once they find a sleeping animal, they need to feed on it without waking it. There are no second chances when it comes to feeding off an animal's blood. Their special heat sensors enable them to distinguish between areas of skin that cover vessels full of delicious, hot, wet blood and areas covered in unpalatable hair. They then use their razor-sharp teeth to make a 0.2 inch by 0.2 inch (5 mm by 5 mm) square divot in the skin and suck out the sleeping animal's blood without waking it.
The bat uses a receptor found in all mammals, which we use to sense heat on our skin and to sense capsaicin, the "heat" factor in chili peppers. The bat's receptor is modified to be able to detect much lower levels of heat, around 86 degrees Fahrenheit (30 degrees Celsius) from about 8 inches (20 centimeters) away.
Our heat sensors are tripped at around 110 degrees Fahrenheit (43 degrees Celsius) and in all but the most extreme cases (say, a burner on a stove) we require physical contact to feel heat from an object.
By analyzing the genetics and expression of heat receptors in the noses of fruit and vampire bats, the researchers discovered that the vampire bat's heat receptor on its nose and lips is different than the receptor of the fruit bat. These modified heat receptors are expressed in a special pit on the animals' face, which has lots of connections to the vampire bat's brain.
The vampire bat's receptor is extra sensitive, because of changes to its structure. These changes come from an intermediate step in protein production, not at the level of a genetic change (like a mutation), which lets the bats still express the receptor normally in the rest of their bodies.
Most animals sense heat in very similar ways: Their receptors detect higher temperatures mostly through touch. Extra-sensitive heat receptors like the bats' have only been discovered in a few types of snakes before, never in a mammal. It's likely that other vampire bats also use similar sensory organs to "see" blood, though that hasn't been studied.
The study was published today (Aug. 3) in Nature. |
Competencies: Social Studies, History
Social Studies Skills 5.1: Uses critical reasoning skills to analyze and evaluate positions.
Social Studies Skills 5.2.2: Uses a graphic organizer to organize main ideas and supporting details from visuals and literary, narrative, informational, and expository texts.
Social Studies Skills 5.4: Creates a product that uses social studies content to support a thesis and presents the product in an appropriate manner to a meaningful audience.
CBA: Cultural Contributions
Objective: Students watch the PowerPoint slides about the Bill Haddon mural, and are introduced to various perspectives and historical biases. Students make their own mural of Issaquah history, each student contributing a portion.
Materials: Bill Haddon mural pictures or Bill Haddon power point, pamphlet explaining the mural, large butcher paper, crayons, pens or other coloring materials
Note to teacher:
There are many biases shown in the mural. This lesson provides an opportunity to teach how those who record history have an effect on how people and events are portrayed in history. It is an excellent opportunity to point out biases.
- Show the Bill Haddon Mural Power Point images, discussing what is shown in each section of the mural.
- Explain that this mural is how one person saw Issaquah’s history.
- Using the pictures of the mural, discuss which people and which events this artist chose to represent Issaquah’s history. Pose the question, “What people or events would you choose if you were to illustrate the history of Issaquah?”
- Discuss what a bias is, and how biases can affect the history that is recorded. Point out the following biases in the mural:
- There are far more men represented.
- Specific people represented tend to be wealthy businessmen. The large portraits of the Native American, lumberjack and miner are “typical” samples.
- The Casto incident (sometimes referred to as a “massacre”) depicts Native Americans in a way that is probably inaccurate. The Native Americans in this area did not wear loincloths; by this time in Issaquah history they were probably in western dress, and they were probably not carrying torches. A total of three white people and two Native Americans died. The Native Americans who killed the Castos were actually employed by Mr. Casto.
- The illustration of the Chinese men being shot is labeled as the “Chinese Riot” or “Chinese Massacre.” The Chinese people were not rioting or massacring anyone. The white settlers were the ones running the Chinese people out of town. This was actually an anti-Chinese incident. There were four Chinese people that died. Again, the clothing/hairstyle depiction is probably not accurate.
- Use these inaccuracies to point out the importance of careful research, attention to detail, and consideration of all perspectives when portraying history.
- Inform the class that they will be given an opportunity to create their own mural, depicting what they view as the most important people and events in Issaquah’s history.
- Review the timeline and photos in the history kit. Feel free to expand beyond these resources for ideas.
- On the board, list the people and events they wish to illustrate in their mural.
- In small groups, students illustrate a portion of the mural.
- Display the final product in the hallways, library or cafeteria |
What is autism?
Autism, or Autism Spectrum Disorder, is how we describe a number of conditions that can affect how a person experiences and perceives the world around them, including how they interact with other people. We use the word ‘spectrum’ to describe the disorder because there is huge variation in how autism can affect a person.
People with autism often experience social and communication difficulties, such as not recognising body language or facial expressions, or finding it difficult to start and maintain conversations. They can also display repetitive, inflexible patterns of behaviour, particularly at times of stress. Some people with autism will have learning disabilities, or other conditions while others are sometimes classed as high-functioning.
While most people will be identified as having autism as young children (typically between the ages of two and three), there have been many cases of people who do not receive a diagnosis until much later in life. In addition, more boys/men are diagnosed as having autism than girls/women – though some of this may be due to missed diagnosis in girls and women leading to missed support.
Autism is not an illness, or a disease. It is a lifelong condition and there is no ‘cure’. However, people with autism can be supported to live the best life possible.
Signs of autism
As autism is a spectrum disorder, people with autism may display any combination of the following: –
- Difficulty making eye contact – or in acknowledging new people
- Difficulty in interpreting/recognising body language, facial expressions or other people’s emotions and feelings
- Difficulty in understanding abstract ideas, including jokes and sarcasm
- Inability to speak, or reduced speech. Sometimes speech takes longer to develop than in ‘neurotypical’ children, while some people with autism will prefer to use forms of sign language
- Sensory overload – people with autism will sometimes experience the world around them more intensely, making trips to busy, noisy places difficult
- Repetitive, inflexible patterns of behaviour, such as switching lights on and off, or twisting hands
- ‘Tantrums’ or ‘meltdowns’ when exposed to change or when upset, or when overloaded by the world around them
- Lack of interest in peers, or in role-play games
While many people with autism function well in society – and may even go undiagnosed for many years – others will have lower than average IQs. Of those with lower IQs, many will also have severe learning difficulties.
Help and support
- Hft supports adults with learning disabilities and autism nationally, and has a number of specialist autism-focused services for adults. Contact your local service to discuss support options.
- Hft’s Family Carer Support Service can help with guidance and advice on support options and some of the benefits that are available to people with autism.
- The National Autistic Society provides information and guidance on everything from diagnosis and health to benefits and social care support.
- NHS Choices provides a range of information and links to further useful resources. |
Kelp are large brown algae, seaweeds that can grow faster than tropical bamboo. There are roughly 30 different kelp genera. They live in shallow oceans, forming what are often called underwater forests. Kelp first appeared in the Miocene, 23 to 5 million years ago. Kelp can grow about 3-5 inches per day in open water and 10-20 inches per day in bays; under ideal conditions, giant kelp may grow up to 2 feet per day.

Kelp are very simple organisms, consisting only of blades, a stipe, and a holdfast. At the bottom is a root-like structure called the holdfast, which anchors the kelp to rocks and other material on the sea floor. Young kelp must compete for space to settle and grow, as the rocky bottom is already occupied by small algae and invertebrates.

The stipe is similar to a plant's stem: strong yet flexible, it allows the kelp to sway in ocean currents. Many fishes use this area as a hiding place when they hunt for prey. The blades contain a special gas that acts like a float, keeping the kelp close to the surface of the water, where it absorbs energy from the sun.

Giant kelp normally grows in turbulent water, which brings renewed supplies of nutrients and allows these plants to grow up to 175 feet. Kelp require nutrient-rich water between 6 and 14 degrees C (43 and 57 degrees F). They are famous for their high growth rate: Macrocystis and Nereocystis can even grow half a meter per day and can reach 30-80 meters in length. Some kelp can live more than 7 years.
Kelp normally grows along rocky coastlines, from about 2 m down to 30 m below the surface. Kelp needs clear water with plenty of sunshine. It grows best where ocean layers overturn, bringing up cool, nutrient-rich water; this condition is commonly found off southern California. Kelp also needs a strong substrate: anchored to a large, solid rock, kelp will successfully survive.

Kelp attaches to the sea floor and grows toward the water's surface, relying on sunlight to generate food and energy. In general, kelp lives at higher latitudes than coral reefs, mangrove forests, and warm-water sea-grass beds, so these ecosystems rarely overlap. Kelp are home to thousands of marine species, including invertebrates, fishes, and other algae.

Here are some of the types of kelp that live in the ocean:

1. Laminaria

Laminaria is a kelp genus native to Japan. These kelp contain iodine, an element that the body needs to make thyroid hormones. They are characterized by long, leathery laminae, and most species are large.

Although native to Japan, they can also be found in the Atlantic Ocean and the northern Pacific Ocean. They become sticky and thick on contact with water.

They live 8 to 30 m below the surface, though in the Mediterranean Sea and off Brazil they can grow down to 120 m, where the water is warmer. Laminaria is home to many fish and invertebrates, and it is used for medicine, energy, food, and more.
2. Kelp Forest
Kelp forests can be seen along the west coast of North America, and they are among the most beautiful and astonishing marine habitats. They are normally found in shallow open coastal waters below 20°C. In southern California, kelp can grow around 30 cm per day.

These underwater towers of kelp provide food and shelter for marine creatures. Many use them as shelter, as a place to hunt, or as protection from storms.

Kelp forests have a greater variety and diversity of plants and animals than almost any other ocean community, and a healthy kelp forest maintains the wider ecosystem, supporting stocks of fishes, plants, and animals.
3. Giant Kelp
Giant kelp commonly lives along the coast of the eastern Pacific Ocean, from northern Baja California to southeast Alaska, and it is also found in the southern oceans near South America, South Africa, Australia, and New Zealand. It does best where the water remains below 21°C.

Giant kelp is among the fastest-growing organisms in the world. It may grow up to 45 meters (150 feet) long, adding as much as 60 cm per day.

Giant kelp grows in dense stands known as kelp forests, which provide home and food for many ocean creatures. Humans sometimes harvest this species in limited amounts as food: it is rich in iodine, potassium and other minerals, and it is used in cooking, particularly in bean dishes.
4. Winged Kelp
Winged kelp is well known in Ireland. It is a common alga that grows in coastal areas exposed to severe wave action. A member of the kelp order Laminariales, it can grow to a maximum length of 2 m. The body is brown, with a distinct midrib and a wavy, membranous lamina more than 7 cm wide on either side.

Its color may turn green in spring and yellow-brown in summer, although it varies and can be dark green or almost black. Winged kelp is normally harvested in summer and then dried: first cut it into ribbons, then dry them in the sun. Do not store them while they are still wet.

An important note when harvesting: leave the holdfast intact along with about 20 cm of stipe. This way, the winged kelp can be harvested again.
5. Feather Boa Kelp
Feather boa kelp, or Egregia menziesii, is a kelp species living in western North America from Alaska to Baja California, normally in rocky areas. It is dark brown or olive in color with a shiny, bumpy texture, and it may grow up to 4 meters long.

It grows a branched stipe from a thick holdfast, with small blades every few centimeters along its body. Its tissue contains more nitrogen and phosphate than that of giant kelp.

This kelp is a natural fertilizer: in the past, people collected it off the beaches and used it to fertilize their crops. Feather boa kelp is also a source of alginic acid, a material widely used in making detergents, cosmetics, and food products.
6. Kombu

Kombu, or Saccharina japonica, is a kelp native to Japan in the family Laminariaceae. It is a marine species of brown algae cultivated mostly in Japan and Korea, where it is grown on ropes in the sea. Kombu is a common food in East Asia.

In China the species is called haidai, in Japan ma-konbu, and in Korea dasima. It is widely cultivated in China, Russia, France, Korea and Japan. China alone produces more than 10 thousand tonnes of kombu every single year, and about 90% of Japan's kombu is cultivated, mostly in Hokkaido.

Kombu is a good natural source of glutamic acid and also contains a high level of iodine. In normal doses, the minerals it contains support growth, but having too much can cause an overdose.
7. Alaria

Alaria is a genus of brown algae, a member of the Laminariales. Its blades grow to around 15 cm wide and up to 15 m in length. Alaria mostly lives in the Pacific and Atlantic oceans, typically in the sublittoral zone. The most important factor for the growth of Alaria is temperature: it normally grows at 16 degrees or less, and because of this temperature requirement its range is shrinking.

This kelp is commonly eaten in the Far East (China, Japan, Korea), where consumption of seaweeds is high.

As mentioned before, seaweed is a highly nutritious food, typically low in fat but rich in vitamins and minerals. Alaria itself is an excellent natural source of protein and iodine.
8. Oarweed

Oarweed is typically found on shores in the littoral zone, where it may form extensive meadows and become the dominant algae species. It has a very high growth rate compared to other algae, around 5.5% per day, and it may reach about 4 m in length. Its distribution is limited by salinity, wave exposure, temperature and general stress.

Oarweed was mostly used as a fertilizer in the past, spread on the land. In the 19th century it was used for the extraction of iodine, but this industry died out once cheaper sources became available.

Nowadays, oarweed is still used as a fertilizer and for the extraction of alginic acid, which is commonly used in the manufacture of cosmetics and toothpaste. In Japan and China it remains a favorite ingredient for making dashi, a soup stock, and for many other culinary purposes.
9. Wakame

Wakame is an edible brown seaweed and the most famous seaweed in Japan and Korea. Wakame is high in nutrients yet very low in calories and fat, with about 5 calories per serving, which can help you burn fat. Wakame may help prevent heart disease, cancer, diabetes, and obesity.

Wakame is also beneficial against stroke, high blood pressure, tumors, and inflammation, and it promotes a good immune system. Wakame is a great addition to any diet, as there are few things we can eat that are so replete with nutrients and health benefits.

Wakame contains magnesium, which helps your muscles relax and supports protein production; iodine, which is needed for a strong cell metabolism that converts food into energy; and calcium, which your body can absorb easily.
10. Thongweed

Thongweed is a brown alga that normally grows on the lower shore of rocky coasts exposed to extreme waves. It may grow up to a meter in length. Young thongweed looks like a small button or mushroom only 2 cm across; by the end of the year, a frond emerges. Thongweed is typically found in the lower zone of a rocky seashore.

Thongweed is found on all such shores thanks to its wide tolerance for extreme wave action. Early in the year it begins to grow on the shore, and the buttons can be found on any rocky seashore.

Thongweed is mostly found in the Baltic Sea, the North Sea and the northeast Atlantic Ocean, from Scandinavia to Portugal. The buttons are about 30 mm wide and 25 mm high, and the alga has two morphological stages. Thongweed commonly lives around 2-3 years and reproduces once before it finally dies.
Because of its variety of nutrients, kelp has been a staple of Asian cultures for centuries, and it is now gaining popularity in the West. As more people discover the many benefits this vegetable can give, here are briefly several amazing benefits of kelp:
1. Lose Weight
Kelp is one of the most nutrient-rich foods and is beneficial for any diet. This vegetable also contains specific fat-fighting materials: according to research from the University of Newcastle, kelp contains alginates, fibers that significantly reduce fat digestion and absorption.

2. Rich in Nutrients

We cannot deny that kelp are rich in nutrients. They are a natural source of vitamins A, B1, B2, C, D and E, as well as minerals including zinc, iodine, magnesium, iron, potassium, copper and calcium.

3. Slow Aging

The iodine content of kelp offers many benefits. One study showed that the iodine in kelp may effectively remove free radicals, chemicals that accelerate aging, from human blood cells. Try a kelp face mask and see the result for yourself.
4. Avoid Cancer
Kelp cannot cure cancer, but it may help slow its growth. Kelp contains fucoxanthin, a material that can help cancer patients overcome drug resistance. Undergoing chemotherapy is a dangerous process, so reducing the number of harmful drugs in a patient's system can help in treating cancer.

Kelp play an important role in the oceans: they are food and shelter for thousands of species of marine creatures. So although kelp has become a commercial food product, harvesting it in large amounts is prohibited. Kelp are needed not only by humans but also by other creatures. Be considerate to the other creatures on the planet. |
By: Tom Jeltes, Eindhoven University of Technology
The Internet of Things (IoT) consists of billions of sensors and other devices connected to each other via internet, all of which need to be protected against hackers with malicious purposes. A low-cost and energy efficient solution for the security of IoT devices uses the unique characteristics of the built-in memory chips. Ph.D. candidate Lieneke Kusters investigated how to make optimal use of the chip's digital fingerprint to generate a security key.
The higher the number of devices connected to each other via the Internet of Things, the greater the risk that malicious hackers might gain access to important information, or even take over entire systems. Quite apart from all kinds of privacy issues, it's not hard to imagine that someone who, for example, has control over temperature sensors in a chemical or nuclear plant could cause serious damage.
There is a different way: namely by deducing the security key from a unique physical characteristic of the memory chip (Static Random-Access Memory, or SRAM) that can be found in practically every IoT device. Depending on the random circumstances during the chip's manufacturing process, the memory locations have a random default value of 0 or 1.
"That binary code which you can read out when activating the chip, constitutes a kind of digital fingerprint of the device," says Kusters, who gained her doctorate at the Information and Communication Theory Laboratory at the TU/e department of Electrical Engineering. This fingerprint is known as a Physical Unclonable Function (PUF). "The Eindhoven-based company Intrinsic ID sells digital security based on SRAM-PUFs. I collaborated with them for my doctoral research, during which I focused on how to generate, in a reliable way, a key from that digital fingerprint that is as long as possible. The longer, the safer."
The major advantage of security keys based on SRAM-PUFs is that the key exists only at the moment when authentication is required. "The device restarts itself to read out the SRAM-PUF and in doing so creates the key, which subsequently gets erased immediately after use. That makes it all but impossible for an attacker to steal the key."
Noise and reliability
But that's not the entire story, because some bits of the SRAM do not always have the same value during activation, Kusters explains. Ten to fifteen percent of the bits turn out not to be determined, which makes the digital fingerprint a bit fuzzy. How do you use that fuzzy fingerprint to make a key of the highest possible complexity that nevertheless still fits into the receiving lock practically every time?
"What you want to prevent is that the generated key won't be recognized by the receiving party as a consequence of the 'noise' in the SRAM-PUF," Kusters explains. "It's alright if that happens one in a million times perhaps, preferably less often." The probability of error is smaller with a shorter key, but such a key is also easier to guess for people with bad intentions. "I've searched for the longest reliable key, given a certain amount of noise in the measurement. It helps if you store extra information about the SRAM-PUF, but that must not be of use to a potential attacker. My thesis is an analysis of how you can reach the optimal result in different situations with that extra information."
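A standard construction for this problem, of which the thesis discusses optimized variants, is the code-offset "fuzzy extractor": at enrollment, the key is encoded with an error-correcting code and XOR-masked with the reference SRAM readout to produce public helper data; at reconstruction, a fresh noisy readout cancels out the mask up to the bit errors, which the code then corrects. The sketch below uses a simple repetition code purely for illustration (real schemes use stronger codes, and all parameters here are assumptions, not values from the research):

```python
import random

REP = 7  # repetition factor: each key bit is stored 7 times, correcting up to 3 flips


def encode(key_bits, rep=REP):
    # Repetition code: repeat every key bit `rep` times.
    return [b for b in key_bits for _ in range(rep)]


def decode(code_bits, rep=REP):
    # Majority vote inside each block of `rep` bits.
    return [int(sum(code_bits[i * rep:(i + 1) * rep]) > rep // 2)
            for i in range(len(code_bits) // rep)]


def enroll(reference_readout, key_bits, rep=REP):
    # Helper data = codeword XOR reference SRAM readout.
    # It can be stored publicly: without a readout it reveals little about the key.
    return [c ^ r for c, r in zip(encode(key_bits, rep), reference_readout)]


def reconstruct(helper_data, noisy_readout, rep=REP):
    # helper XOR noisy readout = codeword XOR error pattern; decoding removes the errors.
    return decode([h ^ r for h, r in zip(helper_data, noisy_readout)], rep)
```

As long as each block of the readout flips fewer than half of its bits, the same key comes out every time, even though the raw fingerprint is never identical twice.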
Originally posted here.
Sea turtles use their flippers as hands to eat food
Even though sea turtles have flippers for the purpose of guiding their movement, a new study has revealed that they also use their flippers to handle prey.
The researchers found that this behavior, which was thought to be unlikely in marine tetrapods, is very widespread. Furthermore, sea turtles may have begun co-opting their flippers as far back as 70 million years ago.
Study co-author Dr. Kyle Van Houtan is the Director of Science at Monterey Bay Aquarium.
“Sea turtles don’t have a developed frontal cortex, independent articulating digits or any social learning,” said Dr. Van Houtan. “And yet here we have them ‘licking their fingers’ just like a kid who does have all those tools. It shows an important aspect of evolution – that opportunities can shape adaptations.”
The research team used crowd-sourced photographs and videos to search for various ways that sea turtles use their limbs. While this type of behavior has been observed in other marine animals, such as walruses, it had not previously been documented in sea turtles.
The study revealed that sea turtles use their flippers for a variety of foraging tasks. The scientists found many informative images, including a loggerhead rolling a scallop on the floor of the ocean and a green turtle holding a jelly.
“Sea turtles’ limbs have evolved mostly for locomotion, not for manipulating prey,” said lead author Jessica Fujii. “But that they’re doing it anyway suggests that, even if it’s not the most efficient or effective way, it’s better than not using them at all.”
The experts were surprised to find that sea turtles were co-opting their flippers, primarily due to the fact that they are considered to have simple brains and simple flippers. It raises the question of whether marine mammals are learning new behaviors through observation.
“We expect these things to happen with a highly intelligent, adaptive social animal,” said Dr. Van Houtan. “With sea turtles, it’s different; they never meet their parents. They’re never trained to forage by their mom. It’s amazing that they’re figuring out how to do this without any apprenticing, and with flippers that aren’t well adapted for these tasks.”
The study is published in the journal PeerJ.
Image Copyright Fujii et al. shared under Creative Commons CC BY |
X chromosomes are very special genetic material. They differ in number between men and women. To achieve equality between the sexes, one of the two X chromosomes in women is silenced. In flies, the opposite happens: in male flies, the only available X chromosome is highly activated, to compensate for the absence of the second X chromosome. Researchers from the Max Planck Institute of Immunobiology and Epigenetics (MPI-IE) in Freiburg have now shown how the RNA molecules and proteins involved in the activation find and stick to each other. Similar to a monkey that grabs a liana with hands and feet, one of the proteins holds on to the RNA. Then it moulds the molecular liana with its hands and thus generates a dynamic RNA-protein meeting place.
Just a few years ago, they were assumed to be genetic trash: DNA sequences that are not translated into proteins. But this view has changed rapidly in recent years. Nowadays, it is widely accepted among scientists that much of the DNA is transcribed into RNA that, in turn, can act as a gene regulator and structural element. RNA also plays a central role in the regulation of sex chromosomes. In both female humans and male flies, one X chromosome is covered by a protein-RNA complex. In humans, this leads to chromosome silencing, while in flies it results in a double activation of the chromosome. Misregulation is lethal. Although known for many years, the interaction between the central proteins and the distinct role of the RNA strand was unclear.
Asifa Akhtar of the MPI-IE and her team now unravelled the function of the RNA and the interaction of the proteins. The protein MLE that is known to be a central player in X chromosome activation binds to the RNA in a very special manner. Like a monkey that grabs a liana with hands and feet, the protein grabs the RNA in two different ways. While one site is a simple anchor (the feet), the other (the hands) changes the form of the RNA. “The protein MLE moulds the RNA strand. This allows MLE to bind the RNA in a dynamic manner”, says Asifa Akhtar, head of the study. Like one monkey helping the other to catch the liana MLE could thus help other proteins to grab the RNA strand. Thus, the whole X chromosome can be covered by the RNA-protein complex.
During his PhD work, first author Ibrahim Ilik investigated why MLE was found at the same places on the X chromosome but did not directly interact with other proteins. “The biochemical and the biological results seemed to point in different directions in the beginning”, says Ilik. “But when we realised that the proteins bind highly specifically to certain regions of the very long RNA, this was a very exciting moment.”
The researchers also found that individual mutations in the RNA hardly harm the protein-RNA binding. Only multiple mutations lead to a non-functional RNA and thus to lethality of male flies. “The system is very robust for evolutionary influences. This shows how important it is for the survival of the animals. In this, RNA could provide the necessary plasticity”, says Akhtar. The scientists now want to explore the evolutionary conservation of the RNA-protein system and its equivalent in mammals.
Scientists at the Max Planck Institute of Immunobiology and Epigenetics (MPI-IE) in Freiburg investigate the development of the immune system over the course of evolution and during lifetime. They analyse genes and molecules that are important for immune cells maturation and activation. Researchers in the field of epigenetics investigate the inheritance of traits that are not caused by changes in the DNA sequence. Epigenetic research is expected to lead to a better understanding of many complex diseases, such as cancer and metabolic disorders. |
So what’s with the “a” in front of familiar words like beam, stern and aft?
Adding the “a-” prefix turns the noun into an adverb. Nouns, you’ll recall, name things. They answer the What? or Who? in a sentence. Adverbs, on the other hand, modify a verb. They answer questions like When? Where? Why? and How?
Adding the “a-” to parts of a boat tells us where something else is in relation to our boat. This use of the a- prefix comes from the Old English use of a- to mean on, in, or at.
- A + Beam = On our beam, or beside our boat.
- A + Stern = At our stern, or behind us.
- A + Baft = To the rear (baft is Old English for behind). “Abaft the beam” is a phrase commonly used when practicing the Quick Stop Crew Overboard procedure.
Pete’s class sneaks up astern of our class.
After my quick “paparazzi tack,” Pete is abeam of our boat.
In photography, exposure is the amount of light per unit area (the image plane illuminance times the exposure time) reaching a photographic film or electronic image sensor, as determined by shutter speed, lens aperture and scene luminance. Exposure is measured in lux seconds, and can be computed from exposure value (EV) and scene luminance in a specified region.
In photographic jargon, an exposure is a single shutter cycle. For example: a long exposure refers to a single, protracted shutter cycle to capture enough low-intensity light, whereas a multiple exposure involves a series of relatively brief shutter cycles, effectively layering a series of photographs in one image. For the same film speed, the accumulated photometric exposure (Hv) should be similar in both cases.
"Correct" exposure may be defined as an exposure that achieves the effect the photographer intended.
A more technical approach recognises that a photographic film (or sensor) has a physically limited useful exposure range, sometimes called its dynamic range. If, for any part of the photograph, the actual exposure is outside this range, the film cannot record it accurately. In a very simple model, for example, out-of-range values would be recorded as "black" (underexposed) or "white" (overexposed) rather than the precisely graduated shades of colour and tone required to describe "detail". Therefore, the purpose of exposure adjustment (and/or lighting adjustment) is to control the physical amount of light from the subject that is allowed to fall on the film, so that 'significant' areas of shadow and highlight detail do not exceed the film's useful exposure range. This ensures that no 'significant' information is lost during capture.
The photographer may carefully overexpose or underexpose the photograph to eliminate "insignificant" or "unwanted" detail; to make, for example, a white altar cloth appear immaculately clean, or to emulate the heavy, pitiless shadows of film noir. However, it is technically much easier to discard recorded information during post processing than to try to 're-create' unrecorded information.
In a scene with strong or harsh lighting, the ratio between highlight and shadow luminance values may well be larger than the ratio between the film's maximum and minimum useful exposure values. In this case, adjusting the camera's exposure settings (which only applies changes to the whole image, not selectively to parts of the image) only allows the photographer to choose between underexposed shadows or overexposed highlights; it cannot bring both into the useful exposure range at the same time. Methods for dealing with this situation include: using some kind of fill lighting to gently increase the illumination in shadow areas; using a graduated ND filter or gobo to reduce the amount of light coming from the highlight areas; or varying the exposure between multiple, otherwise identical, photographs (exposure bracketing) and then combining them afterwards in some kind of HDRI process.
A photograph may be described as overexposed when it has a loss of highlight detail, that is, when important bright parts of an image are "washed out" or effectively all white, known as "blown-out highlights" or "clipped whites". A photograph may be described as underexposed when it has a loss of shadow detail, that is, when important dark areas are "muddy" or indistinguishable from black, known as "blocked-up shadows" (or sometimes "crushed shadows", "crushed blacks", or "clipped blacks", especially in video). As the adjacent image shows, these terms are technical ones rather than artistic judgments; an overexposed or underexposed image may be "correct" in the sense that it provides the effect that the photographer intended. Intentionally over- or underexposing (relative to a standard or the camera's automatic exposure) is casually referred to as "exposing to the right" or "exposing to the left" respectively, as these shift the histogram of the image to the right or left.
In manual mode, the photographer adjusts the lens aperture and/or shutter speed to achieve the desired exposure. Many photographers choose to control aperture and shutter independently because opening up the aperture increases exposure, but also decreases the depth of field, and a slower shutter increases exposure but also increases the opportunity for motion blur.
"Manual" exposure calculations may be based on some method of light metering with a working knowledge of exposure values, the APEX system and/or the Zone System.
A camera in automatic exposure or autoexposure (usually abbreviated as AE) mode automatically calculates and adjusts exposure settings to match (as closely as possible) the subject's mid-tone to the mid-tone of the photograph. For most cameras, this means using an on-board TTL exposure meter.
Aperture priority (commonly abbreviated as A, or Av for aperture value) mode gives the photographer manual control of the aperture, whilst the camera automatically adjusts the shutter speed to achieve the exposure specified by the TTL meter. Shutter priority (often abbreviated as S, or Tv for time value) mode gives manual shutter control, with automatic aperture compensation. In each case, the actual exposure level is still determined by the camera's exposure meter.
The purpose of an exposure meter is to estimate the subject's mid-tone luminance and indicate the camera exposure settings required to record this as a mid-tone. In order to do this it has to make a number of assumptions which, under certain circumstances, will be wrong. If the exposure setting indicated by an exposure meter is taken as the "reference" exposure, the photographer may wish to deliberately overexpose or underexpose in order to compensate for known or anticipated metering inaccuracies.
Cameras with any kind of internal exposure meter usually feature an exposure compensation setting which is intended to allow the photographer to simply offset the exposure level from the internal meter's estimate of appropriate exposure. Frequently calibrated in stops, also known as EV units, a "+1" exposure compensation setting indicates one stop more (twice as much) exposure and "–1" means one stop less (half as much) exposure.
Exposure compensation is particularly useful in combination with auto-exposure mode, as it allows the photographer to bias the exposure level without resorting to full manual exposure and losing the flexibility of auto exposure. On low-end video camcorders, exposure compensation may be the only manual exposure control available. |
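The stop arithmetic described above is easy to make concrete. Under the standard definition used by the APEX system mentioned earlier, the exposure value is EV = log2(N²/t) for f-number N and shutter time t in seconds, and each stop of exposure compensation scales the exposure by a factor of two. A small sketch:

```python
import math


def exposure_value(f_number, shutter_seconds):
    # EV = log2(N^2 / t); a higher EV means less light reaches the film or sensor.
    return math.log2(f_number ** 2 / shutter_seconds)


def compensation_factor(ev_steps):
    # A "+1" exposure compensation setting doubles exposure; "-1" halves it.
    return 2.0 ** ev_steps
```

For example, f/16 at 1/125 s works out to roughly EV 15, and opening the aperture from f/4 to f/2.8 at the same shutter speed lowers the EV (more light).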
You will need
- Access to a computer
- Battery packs
- USB A to USB B cables
Before you begin
- Don’t panic if you’ve never used (or have never even heard of) a micro:bit before. There’s plenty of info below, and this activity’s a great way for everyone to learn. It’s a good idea to spend some time reading the information and practising before you lead the activity, though.
- If you don’t have enough computers for everyone to work in pairs, you could run this activity as one base, so groups visit one at a time. You could also look in to visiting a local library or school to use their computers.
- If your meeting place has internet access, everyone can use the online editor here.
- Don’t worry if you don’t have internet access. Just download and install the Mu editor before you begin. You can download it here.
- You may already know people with digital skills who’d love to help run this activity. Why not reach out to parents, carers, and others in your community?
Get to know micro:bits and Mu
- The person leading the activity should introduce the micro:bit. They may want to use the information in ‘What is a micro:bit?’ to help them explain what it is, what it can do, and what everyone will be using it for in this activity.
- Everyone should get into small groups. Each small group should gather around a computer with a micro:bit, a battery pack, and a USB A to USB B cable.
- The person leading the activity should give each group a copy of the ‘Notes and handout’ pack.
- Everyone should follow the instructions in the ‘Notes and handout’ pack to connect their micro:bit to the computer, write and check the simple Python code, and send it to the micro:bit so it displays ‘Hello world’.
Code the step counter
- Once each group’s comfortable that they can write, check, and transfer simple code, they should follow the instructions in the ‘Notes and handout’ pack to make a step counter.
- Each group should continue to follow the instructions until they have a step counter that counts steps and displays at least one ready-made and one custom icon.
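On the device itself, the groups' code would read the micro:bit's accelerometer (for example by computing the magnitude of `accelerometer.get_values()` in MicroPython) and show the count on the LED display. The counting logic itself can be practised on any computer; the sketch below is an illustrative version of it, not the code from the handout pack, and the threshold value is an assumption (micro:bit accelerometer readings are in milli-g, about 1000 at rest):

```python
def count_steps(magnitudes, threshold=1200):
    # Count rising edges: a sample that goes from at-or-below the threshold
    # to above it is treated as one step. A shake while walking pushes the
    # acceleration magnitude above the resting value of ~1000 milli-g.
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:
            steps += 1
            above = True
        elif m <= threshold:
            above = False
    return steps
```

Tracking whether the previous sample was already above the threshold stops one long shake from being counted as many steps.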
- Everyone should come together and explain their code. Anyone who found and fixed any bugs could share their experiences with the rest of the group.
This activity was all about developing skills. People only needed to use a few lines of code to show how a step counter works – they didn’t need to learn absolutely loads to prototype a single project. Does anyone know what prototyping is? It’s about testing an idea with a small amount of code, before improving and developing it further. As people learn new skills, they could add more to the project. When else might people use digital devices with sensors in outdoor activities?
How did people feel as they worked through the instructions? Was it easy, or were some parts tricky? Did they feel pressured to create more and more complicated designs, or did they take time to celebrate every time something worked? Sometimes it can feel difficult to work on a big project – breaking it down, learning one thing at a time, and celebrating every success can make it easier. When else might it be useful to break a big project into smaller chunks?
- Online safety
Supervise young people when they’re online and give them advice about staying safe.
For more support around online safety or bullying, check out the NSPCC website. If you want to know more about specific social networks and games, Childnet has information and safety tips for apps. You can also report anything that’s worried you online to the Child Exploitation and Online Protection command.
As always, if you’ve got concerns about a young person’s welfare (including their online experiences), follow the Yellow Card reporting processes. |
Daily Math Fluency Centers Kit is a resource of 40 center activities and materials used to reinforce and support newly introduced skills and strategies, as well as the use of models that promote the development of number sense. These are the same skills, strategies, and models that are seen throughout the Number Strings and Math Talks in the Daily Math Fluency program.
Basic Addition - Doubles, Near Doubles, Use 5, Make 10, 10 and Adjust
Basic Subtraction - Take from 10, Get to 10, Remove 10
100 Classifying Counters
50 Two-Color Counters
0-100 Number Cards
12 Number Cubes, 4 Colors
12 Dice, 3 Colors
100 UniLink Cubes
2 Sets of 14, 5-Group Demonstration Cards
4 Mini Rekenreks
6 Number Paths
200 Color Tiles
Centers Activity Flipbook
CHOKING HAZARD! Small parts. Not for children under 3 yrs. |
Building Your Own Robots: Design and Build Your First Robot!
Fun robotics projects that teach kids to make, hack, and learn!
There's no better way for kids to learn about the world around them than to test how things work. Building Your Own Robots presents fun robotics projects that children aged 7–11 can complete with common household items and old toys. The projects introduce core robotics concepts while keeping tasks simple and easy to follow, and the vivid, full-colour graphics keep your kid's eyes on the page as they work through the projects.
Brought to you by the trusted For Dummies brand, this kid-focused book offers your child a fun and easy way to start learning big topics! They'll gain confidence as they design and build a self-propelled vehicle, hack an old remote control car to create a motorized robot, and use simple commands to build and program a virtual robot—all while working on their own and enjoying a sense of accomplishment!
- Offers a kid-friendly design that is heavy on eye-popping graphics
- Focuses on basic projects that set your child on the road to further exploration
- Boasts a small, full-color, accessible package that instills confidence in the reader
- Introduces basic robotics concepts to kids in a language they can understand
If your youngster loves to tinker, they'll have a whole lot of fun while developing their creative play with the help of Building Your Own Robots.
About the Author:
Gordon McComb has written over 65 books and several thousand magazine and newspaper articles. His speciality is teaching young minds about new technology.
The Ediacaran began 635 million years ago, and it ended 541 million years ago. It was a period when abundant fossils of large multicellular life appeared for the first time. These organisms were totally different from the ones which live today. Before the Ediacaran, most lifeforms were single-celled or chains of cells and very tiny. There were blue algae which built reefs and produced a huge amount of oxygen. This made the development of higher animal life possible. However, Ediacaran fossils are often difficult to place in the tree of life; scientists often disagree on which fossils are plants, animals, fungi, or possibly none of the above! Ediacaran lifeforms tended to be fragile, with shallow roots dug into mats of bacteria. The Ediacaran ended when worms evolved and destroyed the mats that Ediacaran life depended upon. The Ediacaran was a period of fluctuating temperatures, with the climate getting warmer after a very long ice age known as a snowball Earth event. While there was a brief glaciation 7 million years after complex life appeared, most of this period was ice free.
On the picture, you can see typical animals from the Ediacaran like Charnia, Dickinsonia and Spriggina. In the background there are the reefs which were built by the blue algae. These blue algae are also called cyanobacteria and their reef structures are called stromatolites.
DRAWINGS BY PATRICK HÄNSEL FOR MORE QUESTIONS ABOUT THE DRAWINGS, PLEASE CONTACT: [email protected] |
Vaccines are successful in preventing pandemic flu and reducing the number of patients hospitalised as a result of the illness, a study led by academics at The University of Nottingham has found.
The work, published in the journal Vaccine, was led by Professor Jonathan Van Tam and Dr Louise Lansbury in the University’s Health Protection and Influenza Research Group in collaboration with other scientists in the UK, Japan, Bosnia and the Netherlands.
Professor Van Tam said: “”The 2009 swine flu pandemic was the first in human history when pandemic vaccines have been available worldwide. It’s therefore really important to pull all of these data together and ask the question: did these vaccines really work?
“We found that the vaccines produced against the swine flu pandemic in 2009 were very effective in both preventing influenza infection and reducing the chances of hospital admission due to flu. This is all very encouraging in case we encounter a future pandemic, perhaps one that is more severe. Of course, we recognise that it took five to six months for pandemic vaccines to be ready in large quantities; this was a separate problem. However, if we can speed up vaccine production times, we would have a very effective strategy to reduce the impact of a future flu pandemic.”
In early 2009, a novel influenza A(H1N1) virus appeared in humans, containing a unique combination of influenza genes which had not previously been identified in animals or people. The first cases were reported in the United States in March 2009 but the new virus spread rapidly to other countries and in June 2009 the WHO declared a pandemic caused by this strain, known as influenza A(H1N1)pdm09, or ‘swine flu’.
An estimated 61 million people were infected worldwide. Vaccines against the new strain were developed and rolled out across the world from September to December 2009. The majority of vaccines available contained inactivated A(H1N1)pdm09 influenza virus rather than live virus. Some formulations also contained an ‘adjuvant’ to strengthen the body’s immune response to the vaccine and allow smaller doses of antigen to be used (adjuvanted vaccines).
Many individual studies have looked at how effective the available vaccines were at preventing illness and hospitalisation caused by the pandemic influenza strain but up until now no-one has summarised all the available data. This systematic review and meta-analysis is the most comprehensive summary and offers insight into the relative effectiveness of both adjuvanted and non-adjuvanted vaccines in different age groups.
The researchers found 38 studies published between June 2011 and April 2016 that measured the effectiveness of the inactivated pandemic influenza vaccines, covering a population of more than 7.6 million people. Twenty-three of these studies reported results that were suitable for meta-analysis, a statistical method used to combine the results from multiple individual studies that are broadly similar in terms of the vaccine used and the types of people studied. A meta-analysis is statistically more powerful and can provide a more precise estimate of the effect of vaccination than any individual study contributing to it.
Overall, pandemic influenza vaccines were found to be 73 per cent effective at preventing laboratory-confirmed influenza illness and 61 per cent effective at preventing hospitalisation in the population as a whole. However, when the vaccines’ effectiveness was examined in different age groups, they were shown to be less effective in adults over 18 years than in children, and effectiveness was lowest in adults over 50 years of age. Adjuvanted vaccines in particular were found to be more effective in children than in adults against laboratory confirmed illness (88 per cent in children versus 40 per cent in adults) and hospitalisation (86 per cent in children versus 48 per cent in adults).
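To interpret figures like "73 per cent effective": vaccine effectiveness is conventionally computed as one minus the relative risk of illness in vaccinated versus unvaccinated people, expressed as a percentage. The study pooled adjusted estimates across trials; the sketch below only illustrates the basic definition, and the illness rates in the example are made up, not taken from the paper:

```python
def vaccine_effectiveness(rate_vaccinated, rate_unvaccinated):
    # VE (%) = (1 - relative risk) * 100, where relative risk is the illness
    # rate among vaccinated people divided by the rate among unvaccinated people.
    relative_risk = rate_vaccinated / rate_unvaccinated
    return (1 - relative_risk) * 100
```

So if (hypothetically) 1% of vaccinated people fell ill versus 4% of unvaccinated people, the vaccine would be 75 per cent effective; identical rates in both groups would mean 0 per cent effectiveness.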
Overall the inactivated pandemic influenza vaccines used in the 2009 pandemic were effective in preventing laboratory-confirmed illness and hospitalisation. Adjuvanted vaccines tended to be more effective than non-adjuvanted vaccines but only in children. The lower effectiveness in older people may be due to them having pre-existing antibodies against A(H1N1)pdm09 from previous exposure to a similar virus, with corresponding lower incidence of the infection in this age group.
The results showed that pandemic influenza vaccines produced globally during the 2009-10 pandemic were largely effective in reducing illness and hospitalisation. The results from the study could be used to help public health officials to plan a more effective response to future pandemics, such as rolling out vaccines at a much earlier time and targeting specific types of vaccines at different age groups.
- The Top 10 Biggest Vaccine Stories Of 2016
- Anti-vaccine movement: Hayflick’s thoughts
- Vaccines: 450K deaths in the US prevented thanks to 1962 breakthrough, according to study
- High-dose flu vaccine appeared more effective at preventing deaths in older adults
- Vaccines are safe. Vaccines are effective. Vaccines save lives. –Letter to President Trump |
You can talk UP to children in your own books:
Ask Open Ended Questions
Open-ended questions are questions that require a detailed response. For example, instead of asking, “Is it good to listen to your parents?”, ask, “How does listening to your parents make them feel?” The two questions will guide your story in different directions. An open-ended question may also develop your story in more complex ways.
Let Your Characters Speak for Themselves
Get to know your characters well by developing them in your story. Make sure they do things they would do and say things they would say. Don't let adult characters control the kid characters or run the story.
Avoid Stereotypical Labels
Children are changeable, as are adults. As such, there is no need to limit them. A part of talking UP to children is to recognize that they grow and help them recognize it too.
Show, Don't Tell
If you have to share a message, don't tell your audience outright. Show them by example in the story you write and in the characters’ actions.
Get Down to Their Level
Maurice Sendak was known to say, “I don't write for children. I write and someone says it's for children.”
I appreciate the sentiment, but it is not exactly right. Kids are not dumb and they are not simple, but they are an audience with special interests. The way I see it, you must become an honest consumer of children’s literature. That way, the subjects you are interested in will also interest your audience.
- Member Games
- Free Games
- Game Information
- Game Help Page
- Support Edheads
|Design a Cell Phone Game Information|
Design a Cell Phone
Recommended Grade Levels: 5-8 (ages 11-14)
Run Time: All of the students we observed went right past the research section and started designing right away. This usually results in a failed design. If students design, test, go back to the research section and then design, test and get their sales results, this takes about 40 minutes of time. If you want to cut down on the time, you can have them start with the research section first.
Story Line: Elena is a project manager who is also an engineer. She assigns projects to engineers that work at Edheads' Engineering Design Area. Today's project is to design a cell phone for senior citizens for our client, Mr. Tomasko. Students have access to market research on cell phones, but have the choice of checking out the research first or designing first. After they design, they will take their specifically designed phone into a test setting where five senior citizens will comment on it. Spoiler alert: There's an outlier in the group! The senior citizens will let the students know if the design is great, better than before but not perfect, or not that good. Students have the option of going back to check out the research information, going back to redesign, or building the phone. They can design and test as many times as they like. Once they build the phone, they will be given sales results by the client, Mr. Tomasko who will be thrilled, somewhat satisfied or very disappointed in the sales figures, depending on how well the students met the expectations of the senior citizens. Students will use skills reading charts and graphs, learn about engineering design, and use critical thinking skills.
Technical: This is a Flash game, so you will need the Puffin Academy Browser if you are going to play this game on a mobile device. We also recommend having ear buds to play the activity in class or in public areas. Speakers are fine for home use.
Educational Standards: (this will be updated to Common Core and NGSS soon)
Abilities To Do Technological Design
2. Revise an existing design used to solve a problem based on peer review.
3. Explain how the solution to one problem may create other problems.
2. Evaluate observations and measurements made by other people and identify reasons for any discrepancies.
3. Use evidence and observations to explain and communicate the results of investigations.
6. Explain why results of an experiment are sometimes different (e.g., because of unexpected differences in what is being investigated, unrealized differences in the methods used or in the circumstances in which the investigation was carried out, and because of errors in observations).
1. Explain how technology influences the quality of life.
2. Explain how decisions about the use of products and systems can result in desirable or undesirable consequences (e.g., social and environmental).
5. Design and build a product or create a solution to a problem given one constraint (e.g., limits of cost and time for design and production, supply of materials and environmental effects).
· Understandings about scientific Inquiry.
· Abilities of technological design and understandings about science and technology.
· Personal health risks and benefits, science and technology in society.
· Understandings about scientific inquiry.
· Abilities of technological design, understandings about science and technology.
· Natural and human-induced hazards, science and technology in local, national, and global challenges.
· Understanding of the nature of scientific knowledge and science as a human endeavor. |
In 2002, India had massive grain stocks of 63 million tonnes, much of which was exported at a loss. But today wheat stocks are almost zero, and the government has decided to import three million tonnes.
Worse, the Met Office forecasts a bad monsoon, with precipitation likely to be just 93% of normal. This could mean a major drought. So, food imports could gallop upward.
Why has India gone from food surpluses to deficits? Because its population has risen while food production has stagnated or fallen for six years (see chart). Grain output in 2005-06 is estimated to be no higher than the 209 million tonnes reached six years ago. Wheat production peaked at 76.4 million tonnes in 1999-00, and then declined (the current harvest is estimated at 72-73 million tonnes). Rice production peaked at 93.3 million tonnes in 2001-02, and is estimated at 87-88 million tonnes this year. The output of pulses has stagnated at 13-14 million tonnes. Only coarse grain production has risen a bit, thanks to new hybrid varieties.
Why has grain production stagnated? One reason is the failure of agricultural universities to come up with new, higher-yielding varieties. But a more important reason is environmental degradation. Free rural power has encouraged farmers to pump groundwater to grow water-guzzling crops in environmentally incorrect low-rainfall areas (rice in Punjab, sugar cane in Maharashtra). This unsustainable pumping has sent water tables plummeting, lakhs of wells and tubewells have run dry, and farmers with dry wells have committed suicide. Yet Ministers say it is politically impossible to charge farmers for power, let alone groundwater depletion. The richest farmers with the deepest tubewells still have water, but many others face a water crisis. One consequence has been stagnant or falling food production.
This sounds like a tragedy, but it also contains the seeds of an opportunity. Indian farmers have little future if they stick to growing foodgrains on plots that grow ever smaller as the population rises. India’s countryside has 600 million people dependent on 250 m. hectares of farmland, an average of less than half a hectare per person. This limited land cannot yield prosperity if sown with cereals. But it could bring prosperity if sown with high-value crops such as fruits, vegetables, flowers and medicinal herbs.
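The half-hectare figure follows directly from the two numbers quoted above; a quick sketch of the arithmetic:

```python
# Per-capita farmland, using the figures quoted in the text.
rural_population = 600_000_000   # people dependent on farmland
farmland_hectares = 250_000_000  # total farmland, in hectares

per_capita = farmland_hectares / rural_population
print(f"{per_capita:.2f} hectares per person")  # 0.42 -- less than half a hectare
```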
India must switch from cereals to high-value crops. This will improve incomes and reduce environmental degradation at the same time. Fruit, medicinal herbs, bio-diesel crops (jatropha and jojoba) and most flowers and vegetables need much less irrigation than rice or sugarcane. India’s wide variety of climates can be harnessed to grow a wide range of high-value crops. These will have a good export market, apart from meeting the domestic demand of a growing middle class.
Diverting land from cereals to high-value crops will mean that India has to import food. Back in the 1960s this would have been viewed as a disaster. But if we import food while exporting high-value commodities, we will be replicating successful economies like Chile, Israel and Italy, which export farm products (fruit, wine, olives, speciality seeds), and import food.
However, in many ways, China may be a more appropriate comparison. China has greatly improved incomes by moving people out of agriculture into industry and services. India must do the same.
As China prospers, its 1.2 billion people are increasing their consumption of every commodity from oil and copper to food and fibres. Once a major oil exporter, China is now the fastest-growing oil importer. Even so, its per capita consumption of all commodities is a small fraction of Japan’s or America’s. China’s population is so high relative to its area that, as its household consumption rises, it cannot possibly grow or mine all that it needs. Its fast-rising appetite for commodities has sparked a commodity boom that has greatly benefited exporters in Africa, Latin America and Australia. China pays for these comfortably through manufactured exports.
Like China, India has a very high population relative to its area. By 2020, India too will be unable to grow or mine most of the commodities it needs. Like China, it will begin swallowing the commodity output of the whole world to meet its needs. This year’s wheat imports are just the beginning of the trend. You ain’t seen anything yet.
Foodgrain output (million tonnes) |
What is synovitis?
Your joints are held together by a “capsule” of tissues and ligaments. The innermost tissue of the capsule is a membrane called the synovium.
The synovial membrane secretes a clear fluid called synovial fluid that lubricates joint surfaces and provides the cartilage with nutrients. Sometimes this capsule becomes inflamed. The result is the painful condition called synovitis.
Here’s what happens when synovitis occurs:
- Disease (such as rheumatoid arthritis) or injury causes white blood cells to move from your blood stream into your synovium
- The synovium cells grow and divide abnormally. Fluid collects as the synovium becomes thickened and inflamed
- The synovial cells release enzymes
- The enzymes may eventually destroy joint cartilage and bone, as well as surrounding muscles, ligaments, and tendons
What causes synovitis? Who’s at Risk?
Synovitis is associated with certain diseases that raise the risk of inflammation. They include:
- rheumatoid arthritis
- lupus, a chronic inflammatory disease that can affect parts of the body including the joints
- gout, a buildup of uric acid crystals in the joints of the body, causing inflammation, swelling, and pain
Synovitis can also be caused by injury to the joints, which respond with inflammation. Sometimes, the cause is unknown.
How is Synovitis Diagnosed and Treated?
Warm, swollen joints can be an indicator of synovitis. Your joints may be painful both at rest and with movement. If synovitis is suspected, our doctor may withdraw a sample of your synovial fluid from the joint to send to a laboratory to test for infection or the crystals that indicate gout.
Treatment depends on the cause of the synovitis. It is likely to include anti-inflammatory medications or injections.
In some cases, our doctor may recommend surgical removal of the inflamed synovium. If appropriate, the surgery may be minimally invasive arthroscopic surgery, e.g., knee arthroscopy or shoulder arthroscopy.
What can I expect long term?
Conservative treatment with anti-inflammatories may help symptoms and give your joint a chance to heal. Individuals with long-lasting synovitis, including individuals with rheumatoid arthritis, may need further care. |
Students explore the inhalation/exhalation process that occurs in the lungs during respiration. Using everyday materials, each student team creates a model pair of lungs.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standard Network (ASN), a project of JES & Co. (www.jesandco.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
Click on the standard groupings to explore this hierarchy as it applies to this document.
- Colorado: Science
- International Technology and Engineering Educators Association: Technology
- Next Generation Science Standards: Science
- Define a simple design problem reflecting a need or a want that includes specified criteria for success and constraints on materials, time, or cost. (Grades 3 - 5) ...show
- Describe the function of the respiratory system.
- Create a model of the lungs and explain what happens to them when you inhale and exhale.
- Give examples of engineering advancements that have helped with respiratory systems.
- 2-liter empty plastic bottle with cap
- 2 plastic drinking straws (available inexpensively at restaurant supply stores or donated by fast-food chains; do not use the flexible drinking straws)
- 2 9-inch balloons
- 1 larger balloon (for example, for a punch ball)
- 2 rubber bands
- Lung Worksheet, one per student
|bronchi:||The two large tubes connected to the trachea that carry air to and from the lungs.|
|diaphragm:||A shelf of muscle extending across the bottom of the ribcage.|
|lungs:||Spongy, saclike respiratory organs that occupy the chest cavity, along with the heart; provide oxygen to the blood and remove carbon dioxide from it.|
Before the Activity
- Gather materials and make copies of the Lung Worksheet.
- Drill 2 holes (just big enough for a straw to fit through) in each of the caps of the 2-liter bottles. (Note: make sure to drill the holes far enough apart that the holes do not become one big hole!)
- Using a pair of scissors, cut off the bottoms of each of the 2-liter bottles.
With the Students
- Peel off the label, if any, on the 2-liter bottle.
- Tell students that the 2-liter bottle represents the human chest cavity.
- Stick the two straws through the two holes of the bottle cap.
- Place one 9-inch balloon on the end of each straw, and secure them with rubber bands, as shown in Figure 2.
- Tell students that the straws represent the bronchi and the balloons represent the lungs.
- Stick the balloon ends of the straws through the bottle opening and screw the lid on tightly.
- Stretch out the larger balloon and place it over the open bottom of the bottle.
- Tell students that this larger balloon represents the diaphragm. They now have a finished model of the lungs (see Figure 3); now it's time to make the lungs work!
- Pull the diaphragm (balloon) down (that is, away from the lungs) in order to inflate the lungs. (Note: This makes the chest cavity larger and decreases the pressure.)
- Push the diaphragm (balloon) in (towards the lungs) in order to deflate the lungs. (Note: This makes the chest cavity smaller and increases the pressure.)
- Have students complete the Lung Worksheet.
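The physics behind the inhale/exhale steps above is Boyle's law: at constant temperature, pressure and volume are inversely related. A minimal sketch of that relationship (the 10% volume change is an illustrative assumption, not a measured value):

```python
# Boyle's law: at constant temperature, P1 * V1 = P2 * V2.
# Pulling the "diaphragm" balloon down enlarges the chest cavity (volume up),
# so the pressure inside drops below atmospheric and air flows into the "lungs".

def new_pressure(p1, v1, v2):
    """Pressure after the cavity volume changes from v1 to v2."""
    return p1 * v1 / v2

atmospheric = 101.3  # kPa at sea level

# Enlarging the cavity by 10% (diaphragm pulled down) lowers the pressure:
inhale = new_pressure(atmospheric, 1.0, 1.1)
print(f"{inhale:.1f} kPa")  # below 101.3 kPa, so air rushes in
```

Pushing the diaphragm balloon in reverses the ratio (`v2 < v1`), raising the internal pressure and deflating the lungs.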
- How do the lungs work? How do you inhale and exhale?
- Does your breathing change when you exercise? How?
Activity Embedded Assessment
U.S. Department of Health and Human Services, National Institutes of Health, National Heart, Lung and Blood Institute, Diseases and Conditions Index, "How is Asthma Treated?" Accessed May 23, 2006. http://training.seer.cancer.gov/anatomy/respiratory
U.S. National Cancer Institute's Surveillance, Epidemiology and End Results (SEER) Program, Training Website, Bronchi, Bronchial Tree, and Lungs, "Bronchi and Bronchial Tree." Accessed May 23, 2006. http://training.seer.cancer.gov/anatomy/respiratory/passages/bronchi.html
Wikipedia, The Free Encyclopedia, "Respiratory system." Accessed May 23, 2006. www.wikipedia.org
Teresa Ellis, Malinda Schaefer Zarske, Janet Yowell
© 2006 by Regents of the University of Colorado.
Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
Last modified: July 3, 2015 |
Today’s students are preparing to enter a world in which data literacy – the ability to find, analyze, interpret, and describe data – is absolutely essential for academic and career success. The Roper Center’s archive of public opinion data offers educators the opportunity to integrate data into curriculum in multiple ways by offering understandable, relevant quantitative data on a broad range of topics in history, health, culture, government, and media studies. To support educators at the graduate, undergraduate, and high school level in their work, the Roper Center offers the following materials to facilitate the use of polling data in the classroom. For an overview of polling concepts, methods and analysis, please see Polling Fundamentals and Analyzing Polls.
Introductory-level lesson plans appropriate to an advanced high school or introductory college curriculum.
This section provides educators with sample teaching assignments helpful in the classroom for getting students acquainted with polling data. Assignments utilize several Roper Center resources to support learning of fundamental polling principles.
These hands-on group exercises can be used to familiarize anyone new to polling – students, librarians, researchers from other fields, educators – with the basics of understanding public opinion polls and navigating the Roper Center archives. |
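One of the fundamentals such exercises typically cover is a poll's margin of error. A minimal sketch of the standard 95%-confidence formula for a proportion (the 52%-of-1,000 poll here is a made-up example, not Roper Center data):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% support among 1,000 respondents.
moe = margin_of_error(0.52, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 3.1
```

Real polls also involve weighting and design effects, so published margins can differ from this simple-random-sample figure.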
The Common Core State Standards for Mathematics build on the best of existing standards and reflect the skills and knowledge students will need to succeed in college, career, and life. Understanding how the standards differ from previous standards—and the necessary shifts they call for—is essential to implementing them.
The following are the key shifts called for by the Common Core:
Greater focus on fewer topics
The Common Core calls for greater focus in mathematics. Rather than racing to cover many topics in a mile-wide, inch-deep curriculum, the standards ask math teachers to significantly narrow and deepen the way time and energy are spent in the classroom. This means focusing deeply on the major work of each grade as follows:
- In grades K–2: Concepts, skills, and problem solving related to addition and subtraction
- In grades 3–5: Concepts, skills, and problem solving related to multiplication and division of whole numbers and fractions
- In grade 6: Ratios and proportional relationships, and early algebraic expressions and equations
- In grade 7: Ratios and proportional relationships, and arithmetic of rational numbers
- In grade 8: Linear algebra and linear functions
This focus will help students gain strong foundations, including a solid understanding of concepts, a high degree of procedural skill and fluency, and the ability to apply the math they know to solve problems inside and outside the classroom.
Coherence: Linking topics and thinking across grades
Mathematics is not a list of disconnected topics, tricks, or mnemonics; it is a coherent body of knowledge made up of interconnected concepts. Therefore, the standards are designed around coherent progressions from grade to grade. Learning is carefully connected across grades so that students can build new understanding onto foundations built in previous years. For example, in 4th grade, students must “apply and extend previous understandings of multiplication to multiply a fraction by a whole number” (Standard 4.NF.4). This extends to 5th grade, when students are expected to build on that skill to “apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction” (Standard 5.NF.4). Each standard is not a new event, but an extension of previous learning.
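The 4th-to-5th-grade progression in that example can be made concrete with Python's `fractions` module:

```python
from fractions import Fraction

# 4.NF.4: multiply a fraction by a whole number.
print(Fraction(2, 3) * 4)               # 8/3

# 5.NF.4: extend that understanding to fraction-by-fraction multiplication.
print(Fraction(2, 3) * Fraction(3, 4))  # 1/2
```

The second computation reuses exactly the machinery of the first, mirroring how each standard extends, rather than replaces, prior learning.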
Coherence is also built into the standards in how they reinforce a major topic in a grade by utilizing supporting, complementary topics. For example, instead of presenting the topic of data displays as an end in itself, the topic is used to support grade-level word problems in which students apply mathematical skills to solve problems.
Rigor: Pursue conceptual understanding, procedural skills and fluency, and application with equal intensity
Rigor refers to deep, authentic command of mathematical concepts, not making math harder or introducing topics at earlier grades. To help students meet the standards, educators will need to pursue, with equal intensity, three aspects of rigor in the major work of each grade: conceptual understanding, procedural skills and fluency, and application.
Conceptual understanding: The standards call for conceptual understanding of key concepts, such as place value and ratios. Students must be able to access concepts from a number of perspectives in order to see math as more than a set of mnemonics or discrete procedures.
Procedural skills and fluency: The standards call for speed and accuracy in calculation. Students must practice core functions, such as single-digit multiplication, in order to have access to more complex concepts and procedures. Fluency must be addressed in the classroom or through supporting materials, as some students might require more practice than others.
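A fluency drill of the kind described, such as single-digit multiplication practice, could be sketched as follows (the function name and question format are illustrative, not from any curriculum):

```python
import random

def drill(n_questions=5, seed=None):
    """Generate single-digit multiplication questions with their answers."""
    rng = random.Random(seed)
    questions = []
    for _ in range(n_questions):
        a, b = rng.randint(2, 9), rng.randint(2, 9)
        questions.append((f"{a} x {b} = ?", a * b))
    return questions

# A short, reproducible practice set:
for prompt, answer in drill(3, seed=1):
    print(prompt, "->", answer)
```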
Application: The standards call for students to use math in situations that require mathematical knowledge. Correctly applying mathematical knowledge depends on students having a solid conceptual understanding and procedural fluency. |
The tented roof, a feature of medieval ecclesiastical architecture, was commonly employed to create a high, conical roof structure on churches. It had the shape of a polygonal spire but served a different purpose: it was usually employed to roof the main internal area of a church rather than an ancillary building. The word "tent" here means a large circular or polygonal room with a domed ceiling. Medieval churches used such rooms for various purposes, including as baptisteries, tombs, and refectories.
The word "church" here refers to the main body or place of worship of a Christian congregation; it may be defined as "the place where Christians meet to hear God's Word preached and to celebrate the sacraments." In ancient times no distinction was made between church and chapel, but nowadays the term "chapel" is generally applied to smaller buildings set aside for religious purposes, while the term "church" denotes a larger institution with other functions attached to it. However, in early Christianity chapels were often built into the walls of homes where prayers could be said directly facing heaven; these became known as "tombs" because the occupants were saying farewell to life itself. Thus, a tent roof chapel is a small structure set next to a tomb which served as a place of prayer for those who lived there.
In order to preserve heat during cold seasons and protect worshipers from rain, some medieval churches had tent-shaped roofs.
Timber was employed in the Dome of the Rock because wooden domes had proven quite useful in churches. Wood made the building lighter and more flexible, but it needed to be weatherproofed with copper or lead. The timber used in the construction of the dome was pine.
Domes are particularly useful for church buildings because they allow in light while protecting the interior space from the elements. They also function as high-quality windows when used in large quantities. There are several types of domes used in architecture, including: shell domes, tent domes, and cupolas. Churches that use domes as their roof design include some of the most famous buildings in the world. These include the Great Mosque of Mecca, the Dome of the Rock in Jerusalem, Saint Peter's Basilica in Rome, and Nipponzack Church near New York City.
In conclusion, churches use domes as a means of protection from the elements while allowing in light. They are particularly useful for church buildings because they allow in light while protecting the interior space from the elements.
Towers were gradually incorporated into church construction and crowned with increasingly complex roofs until the spire was fully developed. Towers are a prevalent feature of religious architecture across the world, and they are often considered attempts to strive aloft toward the skies and the holy.
There are many reasons why churches would need towers. The most obvious is defense: the tower provides protection for people inside the walls of the church when attacked from without or threatened from within. It can also be used as a lookout post, or even as an antenna for sending messages over long distances. There are other ways in which towers find use in religion that aren't readily apparent today. For example, before mechanical clocks were invented, priests used to let people know the time by sounding bells at specific points in the day. Since people needed to be reminded of these times, the bells had to be easy for them to hear from outside the church walls.
The first churches had simple structures with little or no decoration. Over time, builders began to incorporate decorative features into their churches, most notably stained glass windows. These additions to religious buildings are called "afterpieces" and include detailed paintings on the wall or ceiling in place of glass. The artists who created these works often based their designs on biblical stories or historical events related to Christianity. For example, one artist might paint a series of scenes from the life of Christ while another paints pictures for a festival celebrating Easter.
A basic church can be constructed using mud brick, wattle and daub, split logs, or rubble. Its roof might be made of thatch, shingles, corrugated iron, or banana leaves. However, beginning in the fourth century, church groups attempted to build church buildings that were both lasting and artistically attractive. These early churches used concrete for their foundations and walls. They often had flat roofs covered with tiles or slates.
Concrete was used on a large scale by the Romans about 2,000 years ago, but the technique was largely lost after the fall of the Roman Empire and was not revived in Europe until the modern era (Portland cement was patented in 1824). Concrete has been used for large structures such as bridges, but for centuries it was not considered suitable for building small houses. In the 17th century, the French scientist Blaise Pascal formulated the principle of pressure transmission in enclosed fluids (Pascal's law); practical pneumatic tube systems built on such pressure principles appeared in the 19th century.
Mechanical air conditioning came much later still, with Willis Carrier's system of 1902. It was the Great Fire of 1666, rather than any single invention, that forced the large-scale rebuilding of London in brick and stone.
In the United States, some churches are built with wood, while others are built with concrete. The Church of Jesus Christ of Latter-day Saints (LDS Church) uses steel instead.
Their roofs were mostly thatched, but they might also be built of wood or clay on occasion. During the Middle Ages, lumber was an essential component of the majority of constructions. Essentially, wood was used for the majority of a house's framework as well as the roof structure. Oak was commonly utilized in England owing to its high resistance to humidity. The main advantage of using wood instead of metal for these structures is price: it is much cheaper to build a house with wood than with metal. A disadvantage of this method is that wood can expand and contract over time, which could cause problems with the stability of the building.
Clay was also used during this period of time. It was easy to work with and available in most areas. However, it didn't last very long so it wasn't very effective at preventing leaks.
Metal has many advantages over wood and clay. It is stronger and more durable, and it does not decay like wood does. Metal can also take many different shapes and styles, such as flat sheets, tubes, wires, and all kinds of other designs. With these advantages, it is no surprise that metal became popular instead of wood or clay.
During the Middle Ages, most houses were made of wood. However, because of its advantages, metal later began to replace wood. In fact, many buildings constructed in Europe today use metal framing because it is easier to repair when needed.
The United States is a huge country. It spans an entire continent from the Atlantic Ocean in the east to the Pacific Ocean in the west. Its geographic area occupies 3.794 million square miles. That’s a lot!
It is not surprising that the country is often divided into regions when we talk about economics, weather, language mannerisms and so on. The US Census Bureau, which is responsible for counting population and tracking demographics, officially divides the nation into four major regions. The region names are easy to remember because they use map directions: Northeast, Midwest, South and West.
Furthermore, each region contains divisions, or groups of states. For example, the Northeast region contains the New England and Mid-Atlantic divisions. The New England division consists of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Visit the podcast blog site at www.SlowAmericanEnglish.net to see a color map of the Census Bureau regions and a list of the divisions within each region.
People who live in the different regions can be described by adding -er to a region name. So, people are called Westerners, Easterners, Northerners, Midwesterners or Southerners.
Although they are good general guides, the Census Bureau designations are not the only ways Americans refer to geographic areas of the country. Region names and meanings vary by location and have evolved because of cultural differences and geographic features, not governmental units. Therefore, people in one region may use different names for another region, reflecting differences in dialect and attitude.
Here are some examples:
The Mid South states are farther north than the Deep South states. Southerners say that people up North (in the Northeast and Midwest) are Yankees. Easterners may travel out West or to the Pacific Northwest (Oregon and Washington). The East Coast is often referred to as the Eastern Seaboard. A slang phrase for the West Coast is the Left Coast. Arizona, Colorado, Nevada, New Mexico and Utah are considered the Southwest. States in the lower right section on a map of the US are thought of as the Southeast.
Geographic features also are used for region names. Appalachia covers roughly the region of the Appalachian Mountains, stretching from the southern part of New York State to parts of the Deep South. Upper Plains States are located in the northern part of the Midwest because of the flat land there. The Great Lakes region consists of the states near the Great Lakes on the northern border of the US.
Sometimes the term “belt” is used to refer to a continuous geographic area with a common characteristic. For example, the Corn Belt is the region of the Midwest where corn is the primary crop. The Bible Belt refers to southern states where fundamentalist religion is prevalent. The Sun Belt is the southern, hot-weather portion of the country from coast to coast. Conversely, the Frost Belt is the northern area prone to very cold weather. The Rust Belt is the northern area where industrial factories were common but are now unused and decaying.
Another way to refer to America’s regions is by time zone. Listen to Episode 1508 of this podcast for details about that topic.
There are many, many more ways Americans refer to the regions of their country. The ones presented here are just a few of them, but they are common and may help you understand regional phrases in American English better.
US Census Bureau Map and Regions:
Region 1: Northeast
Division 1: New England
Division 2: Mid-Atlantic
Region 2: Midwest
Division 3: East North Central
Division 4: West North Central
Region 3: South
Division 5: South Atlantic
Division 6: East South Central
Division 7: West South Central
Region 4: West
Division 8: Mountain
Division 9: Pacific
Note: Puerto Rico, island areas and territories are not part of any census region.
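The region/division/state hierarchy above lends itself to a simple lookup table. A sketch that fills in only the New England division, whose member states are listed earlier in the transcript:

```python
# Census hierarchy: region -> division -> states.
# Only New England is filled in here; the other eight divisions follow the
# same pattern.
CENSUS = {
    "Northeast": {
        "New England": ["Connecticut", "Maine", "Massachusetts",
                        "New Hampshire", "Rhode Island", "Vermont"],
    },
}

def find_region(state):
    """Return (region, division) for a state, or None if it is not listed."""
    for region, divisions in CENSUS.items():
        for division, states in divisions.items():
            if state in states:
                return region, division
    return None

print(find_region("Vermont"))  # ('Northeast', 'New England')
```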
### End of Transcript ###
Click here to buy additional study materials, such as Exercise Worksheets, Workbooks and Natural-Speed Recordings for this and all Slow American English podcast episodes:
Slow American English Shop |
A popular way to look for planets these days is to measure the amount of light a star is giving off. When a planet passes in front of its host star, it causes a small but detectable drop in brightness. By measuring the frequency and depth of these dips, it is possible to determine much about the nature of the planet, such as whether it is potentially habitable and thus a possible home for alien life. Sometimes, however, the telescopes doing the observing see things that are harder to explain.
KIC 8462852 is a star in the Cygnus constellation approximately 1400 light years away from Earth, slightly brighter than the Sun. Unlike a star with a planet in orbit, this star displayed brightness dips of up to 20%, and they definitely weren't regular. One explanation was a cloud of comet fragments that found their way into a tight orbit around the star, but another theory proposes something a lot more concerning.
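The depth of a transit dip scales with the square of the planet-to-star radius ratio, which is why a 20% dip is so hard to explain with a planet; a quick sketch:

```python
import math

def transit_depth(r_planet, r_star):
    """Fractional brightness drop when an opaque body crosses the star's disk."""
    return (r_planet / r_star) ** 2

# Jupiter crossing a Sun-like star: radius ratio ~0.10, so only a ~1% dip.
print(f"{transit_depth(0.10, 1.0):.1%}")

# Inverting the formula: a 20% dip implies an occulter roughly 45% of the
# stellar radius -- far larger than any planet.
print(f"{math.sqrt(0.20):.2f} stellar radii")
```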
In 1960, physicist Freeman Dyson proposed a theory that an intelligent alien civilization might grow to a point where it required more energy than could be generated on a single planet. He theorised that such an advanced civilization might be able to construct a massive orbiting structure called a Dyson Sphere, that would be able to capture a significant proportion of the solar energy of a system's star and make it available to the population. Such a megastructure would capture most of the visible light of the star, but would still emit some infrared radiation, and would therefore be identifiable.
A variation of this theory, known as a Dyson Swarm, has been proposed as an explanation for what's happening around KIC 8462852. In this scenario, the civilization is building a swarm of orbiting satellites to achieve a similar goal to the sphere, but without the complications of trying to actually build a ball around a star.
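The energy argument can be sketched numerically: the fraction of the star's output a swarm captures is simply the fraction of the surrounding sphere its collectors cover. The luminosity and orbit values below assume a Sun-like star and a roughly 1 AU orbit; they are illustrative, not measurements of KIC 8462852.

```python
import math

L_STAR = 3.8e26        # watts, Sun-like luminosity (assumed)
ORBIT_RADIUS = 1.5e11  # metres, roughly 1 AU (assumed)

def power_captured(collector_area):
    """Power intercepted by collectors at ORBIT_RADIUS, ignoring losses."""
    sphere_area = 4 * math.pi * ORBIT_RADIUS ** 2
    return L_STAR * (collector_area / sphere_area)

# A swarm covering just one millionth of the sphere still dwarfs
# humanity's total power consumption of roughly 2e13 W:
area = 4 * math.pi * ORBIT_RADIUS ** 2 / 1_000_000
print(f"{power_captured(area):.1e} W")  # 3.8e+20 W
```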
Any civilization that is capable of building even a Dyson Swarm would be so far ahead of us technologically, we can't even imagine what they are capable of. And while NASA has found no evidence of radio emissions coming from that part of the sky, if they are capable of constructing Dyson Swarms, they have probably found a quicker way to communicate over large distances than electromagnetic radiation, not to mention quick ways to eradicate inferior galactic neighbors. |
Earth's atmosphere has gone through multiple distinct phases throughout its life, from a hydrogen-rich early period to the modern oxidizing chemistry. The first atmosphere Earth had was chemically very similar to the composition of the primordial dust and gas cloud from which the solar system formed. This chemistry can be seen in some asteroids, and it is a combination of hydrogen, helium and complex organic molecules.
That first atmosphere didn't last long. It was composed almost exclusively of light gases and exposed to the solar wind. Early in its history, Earth lacked a differentiated core. For this reason, the planet lacked a strong magnetic field to deflect charged particles ejected by the Sun. This, combined with the tendency for light gases to waft away into space, depleted Earth's first atmosphere.
The second atmosphere was composed mainly of compounds that were outgassed by Earth's many active volcanoes. This atmosphere was rich in water vapor, carbon dioxide, sulfur dioxide, sulfur and chlorine. Hydrogen was also present in this environment, as was molecular nitrogen. By this time, Earth's core had differentiated, and a strong magnetic field permitted the retention of a large atmosphere.
Eventually, life evolved to convert sunlight into chemical energy. Molecular oxygen, which is not found in volcanic gases, is a byproduct of photosynthesis and was released in large amounts between 2 and 2.8 billion years ago, causing characteristic banded-iron formations in rocks of that age. |
Government is defined as the governing body of a nation, state or community. Politics are the activities associated with the governance of a country or other area, especially the debate or conflict among individuals or parties having or hoping to achieve power.
Politics is essentially the art and science of governing: the actions performed by the governing body in ruling over the people. The two words, however, are not interchangeable. Government is used in the sense of a body that prescribes the rules and regulations pertaining to governing a country. Politics is used in the sense of a branch of knowledge that deals with affairs of state. The government is a group of people, while politics is more of an idea.
Another major difference between the two is that the government is run by a select group of people, and citizens are not often involved in its affairs. Little information about the workings of the government is released, and only those within have any pull over what decisions are made. However, anyone may get involved in politics, and what is going on within the country politically is open for citizens to observe and take part in. |
Four-fold increase in Greenland ice sheet melt contributes to global sea level rise
A new report suggests that over the past two decades, ice loss from the Greenland Ice Sheet increased four-fold, contributing to one-quarter of global sea level rise.
However, the chain of events and physical processes that contributed to it has remained elusive. One likely trigger for the speed up and retreat of glaciers that contributed to this ice loss is ocean warming.
A review paper by physical oceanographers Fiamma Straneo at Woods Hole Oceanographic Institution (WHOI) and Patrick Heimbach at MIT explains what scientists have learned from their research on and around Greenland over the past 20 years and describes the measurements and technology needed to continue to move the science forward.
The Greenland Ice Sheet is a 1.7 million-square-kilometer, 2-mile thick layer of ice that covers Greenland. At its edge, glaciers that drain the ice sheet plunge into coastal fjords that are over 600 meters deep - thus exposing the ice sheet edges to contact with the ocean.
The waters of the North Atlantic Ocean, which surround southern Greenland, are presently the warmest they have been in the past 100 years. This warming is due to natural climate variability and human-induced climate change, and climate models project that it will keep getting warmer.
Therefore, it is important to understand if the present ocean warming has contributed to ice loss from the Greenland Ice Sheet and how future warming may result in even more ice loss.
The paper describes the mechanisms causing the melting of the ice sheet, particularly at its margin, where the glaciers extend into the ocean.
This so-called "submarine melting" has increased as the ocean and atmosphere have warmed over the past two decades.
The findings are published in the journal Nature.
(Posted on 07-02-2014) |
- foxglove - Student Encyclopedia (Ages 11 and up)
Foxglove is any of about 20 species of herbaceous plants of the genus Digitalis (family Plantaginaceae). The most important plant is the common, or purple, foxglove (Digitalis purpurea), which is cultivated commercially as the source of the drug digitalis. Digitalis is used in medicine to strengthen contractions of the heart muscle. |
Demand Analysis: Utility
The term utility is used often in economics. Utility is a concept used to help explain the choices that consumers make. This helps to explain demand, especially the downward slope of the demand curve.
Utility refers to the amount of satisfaction, or happiness, that individuals receive from the choices that they make. Each person is unique. Different people receive satisfaction for different reasons. They will make different choices even if the circumstances are the same. Because of this, the concept of utility explains how making different choices can still be consistent with the basic assumption in economics that people behave rationally.
Utility (satisfaction) comes from more than just financial wealth and material possessions. Sure, people can derive satisfaction from wealth accumulation. But people also derive satisfaction in other ways. People may be satisfied knowing that they are helping others. Such charity takes many forms, and involves many choices as well. People may gain satisfaction from accumulating many friends, or maybe just having a few very close friends. Some people gain more satisfaction from leisure than others. Many people enjoy hobbies, and different people are willing to devote different amounts of their time and incomes to an unlimited number of potential hobbies. Some people prefer current consumption while others prefer future financial security.
All of these considerations, and more, contribute to different rational choices being made by different people.
Since utility comes from the choices that people make, a cost is always involved. If there were no costs, then people wouldn't have to make choices. But when someone decides what to do with a given block of time or sum of money, they give up the chance to use that time or money for a different activity.
The cost of a choice is more than just the amount of money or time which has to be spent on that choice. The cost of a choice is the amount of satisfaction (utility) that has to be given up because another choice was not made instead.
This means that the true cost of any choice is its opportunity cost. Since choices often involve more than two options, and only one of the options can be chosen, the opportunity cost is a measure of the utility of the best alternative to any given choice. It would not make sense to add up utilities for several options when only one of them can be chosen.
This concept of measuring opportunity cost brings up another concept in economics: the concept that utility can somehow be measured. It is true that people everywhere do not go around all the time computing a number in order to make the best choice every time that a choice is made. But people do make choices because of the way they envision the benefits and costs involved. Without even realizing that they are doing so, people are making measurements regarding utility. Whichever options are available, people choose the one in which the perceived benefits most outweigh the perceived costs.
Another concept in economics involving utility and choice is that choices do not occur in a vacuum. Every choice relates to choices that have been made previously. Suppose, for example, that you have a favorite song that you haven't heard in a long time. Now, suddenly, it has been made available to you so that you can listen to it as often as you like. Since you haven't heard it for a long time, you might make it a priority to listen to it immediately. All other options that you have for using this time frame will have to wait.
So you listen to it once, and it makes you very happy (you receive a high utility value from this). You want to hear it again. You listen to it a second time immediately after the first time, and the utility is still high. But not quite as high as the first time, because you had to wait so long to hear it the first time. In fact, every time that you listen to it consecutively, the utility is going to be less than the previous time.
This is the concept of diminishing marginal utility. Every time the same option is chosen within a specified time frame, the utility will be less than the previous time. Somewhere along the line, with diminishing marginal utility, the utility received from making the same choice over again will be less than the utility that could be received from something else, even if that 'something else' initially had a lower utility value. You will 'spend' your next choice on something else instead.
As the same choice continues to be made, with less utility received each time, eventually you could actually decrease your total utility: the last choice gave you a negative amount of utility. Perhaps, for example, you eat so much of a favorite food that it makes you sick. When marginal utility becomes negative, this is called disutility.
When making choices, the available options might be complicated somewhat because different choices might involve a different (explicit) cost outlay. One option might take up more of your time than another option. One option might require you to spend more of your income than another option.
For example, suppose you decide to go out to dinner with friends. You will enjoy going out, enjoying the company of your friends, so you will be happy (have positive utility) with whatever restaurant choice is made. You could go to a fast food restaurant and be happy. You would be even happier (have more total utility) going to a fancy restaurant, but it would cost more. If you spent more for dinner at a fancy restaurant, you would have less money left over for other things. So, which option would be better, fast food or fancy restaurant?
The answer is that you would choose whichever one gives you the most utility per unit of cost. The value you place on a fast food meal, divided by its cost, can be compared to the value you place on dining in a fancy restaurant, divided by its cost. You would choose the one that gives you the highest value (utility) per unit of cost.
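This comparison can be made concrete with a small sketch. The utility values and prices below are invented for illustration; real satisfaction values are subjective and differ from person to person:

```python
# Hypothetical utility values (arbitrary "satisfaction units") and prices.
options = {
    "fast food":        {"utility": 40, "price": 8},    # 40 / 8  = 5.0 per dollar
    "fancy restaurant": {"utility": 90, "price": 30},   # 90 / 30 = 3.0 per dollar
}

# Choose the option with the highest marginal utility per unit of cost.
best = max(options, key=lambda name: options[name]["utility"] / options[name]["price"])
print(best)  # -> fast food
```

With these invented numbers, the fancy restaurant gives more total utility, but the fast food meal gives more utility per dollar, so it is the better use of a limited budget.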
This concept - maximizing marginal utility per unit of cost - applies as well to a series of choices. Each choice depends in part on previous choices. Each subsequent time that the same choice is made will provide less total utility than the previous time that particular choice was made (diminishing marginal utility). But presumably, the cost will be the same: lower benefit, same cost.
Therefore, the marginal utility per unit of cost will change over a series of choices.
In order to maximize total utility from a given budget, the budget will be spent over a series of choices, each one based on maximizing utility per cost, and each choice giving less utility per cost than the previous time that choice was made, until the income is spent with the marginal utility per cost for each option being equal.
This is called consumer equilibrium, or the equimarginal principle. Consumer equilibrium can be shown mathematically as:
MU(A)/P(A) = MU(B)/P(B) = … = MU(X)/P(X)
where A = one option, B = another option, and X represents each additional option to be considered.
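The equimarginal principle can be sketched as a greedy allocation: spend the budget one purchase at a time, always on the option with the highest remaining MU/P. The marginal-utility schedules below are invented for illustration, and the one-purchase-at-a-time loop is a simplification, not a claim about how real consumers compute:

```python
# Each good has a price and a schedule of marginal utilities:
# the n-th purchase of a good yields its n-th marginal utility
# (diminishing as n grows).
goods = {
    "A": {"price": 1, "mu": [10, 8, 6, 4, 2]},
    "B": {"price": 2, "mu": [16, 12, 8, 4, 2]},
}
budget = 7
bought = {name: 0 for name in goods}

while True:
    # Affordable goods that still have marginal-utility schedule left.
    candidates = [
        name for name, g in goods.items()
        if g["price"] <= budget and bought[name] < len(g["mu"])
    ]
    if not candidates:
        break
    # Equimarginal rule: pick the highest marginal utility per unit of cost.
    pick = max(candidates,
               key=lambda n: goods[n]["mu"][bought[n]] / goods[n]["price"])
    bought[pick] += 1
    budget -= goods[pick]["price"]

print(bought)  # -> {'A': 3, 'B': 2}
```

After the budget is spent, the next unit of either good would yield the same MU/P (4 per unit of cost here), which is exactly the consumer-equilibrium condition in the formula above.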
All of this talk about utility leads to the reasons behind a downward sloping demand curve. Each individual consumer will allocate income among the various consumer choices in such a way that the ratio MU/P for each consumer good will be equal.
If the price of one good increases relative to other goods, then the ratio MU/P will decrease, and the consumer will purchase this good in a smaller quantity. If the price of one good decreases relative to other goods, then the ratio MU/P will increase, and the consumer will purchase this good in a greater quantity. For each individual consumer and each individual good, more will be purchased as the price decreases. This means that an individual's demand curve for each good is downward sloping. Notice that this involves price changes relative to the prices of other goods.
The market demand curve is simply the sum of all individual demand curves. With decreasing marginal utility and consumer equilibrium, the result is a downward sloping demand curve. |
By Lucy Bull
The aftermath of a meteorite that crashed into Earth around three million years ago has been discovered beneath one of Earth's continental ice sheets.
Researchers concluded that the giant crater, revealed a great distance below the ice sheet in northwest Greenland, was the result of a meteorite impact. This is an exciting revelation in science, as an impact crater has never before been found in this location.
Members of the Centre for GeoGenetics at the University of Copenhagen found the enormous crater more than two years ago, while analysing a previously unknown ‘circular depression’ at the base of the Hiawatha Glacier. The collision between the meteorite and Earth was then confirmed when examining an advanced map of the topography underneath Greenland's ice sheets.
Using new state-of-the-art technology, a plane was flown over the glacier to measure the crater, revealing the extraordinary size left behind when the kilometre-wide iron meteorite collided with Earth.
Dr Iain McDonald, Reader at the School of Earth and Ocean Sciences at Cardiff University, played a pivotal role as co-author of the research into the colossal crater left behind.
A comprehensive chemical analysis was accomplished at Cardiff University, enabling researchers to understand how the crater and resulting obliteration underneath the ice sheets came about. Signs of various metals within the analysis indicated it was in fact the consequence of a meteorite. This was the evidence the scientists needed to confirm their predictions.
A large meteorite had previously been discovered at Cape York, close to the Hiawatha site in northern Greenland. This suggested that an impact in this region could explain the finding below the Hiawatha Glacier.
However, Dr McDonald stated that the signature found at the Hiawatha site was not the same as that found at Cape York, and he played a vital role in further research to explain the crater revealed beneath Greenland's ice sheets. |
Prevention of infections that cause diarrhea
The best ways to prevent a bacterial, parasitic, or viral gastrointestinal infection are to not drink water or eat food that may be contaminated and to be careful with sanitation measures such as hand washing. Food that might be contaminated, such as raw meats and eggs, should be cooked thoroughly. Cooked foods and foods that are served raw should not touch any surfaces that may have been contaminated.
If someone in a household has a diarrheal infection, careful hand washing by all family members is recommended. It is best to have the infected person avoid preparing food or drink for others until their infection is over.
When traveling to developing nations, it is best to stick to bottled water, carbonated drinks, and hot cooked foods. Avoid fresh fruits and vegetables unless you can peel them yourself. Food from food vendors is generally not considered safe.
Cases of diarrhea that are caused by foodborne illnesses are monitored on a community and state level. Other than travel-related cases, health officials want to try to determine where the infection came from so that they can address any potential public health concerns. For instance, if someone's infection is due to contaminated food served at a restaurant or due to a contaminated community water supply, then steps will need to be taken to prevent the spread of the infection. |
Dracaena fragrans "Massangeana," more commonly called the corn plant, thrives as an outdoor shrub in U.S. Department of Agriculture plant hardiness zones 10 and 11, but gardeners in cooler climates often grow it as a flowering houseplant. This corn plant cultivar generally ranges from 5 to 15 feet in height and features fragrant, white blossoms and scented, variegated leaves. The rugged plant rarely suffers from serious problems, but occasionally experiences fungus gnat infestations if grown in excessively moist environments. Cultural, biological and chemical treatments easily control most gnat outbreaks.
Fungus gnats belong to the Diptera fly family, making them related to houseflies and mosquitoes. These tiny, flying insects thrive in areas full of damp, decaying organic materials and often infest corn plants growing in containers. Adult gnats are dark, mosquito-like insects with long antennae. Although adults only live about 8 days, the prolific females each lay 30 to 200 tiny, yellow-white eggs on moist soils or plant debris. The larvae reach about 1/4-inch long with translucent, white bodies and black heads. They live in the soil and prefer feeding on algae, fungi and other decaying plant material.
Adult fungus gnats are minor nuisance insects that don't pose any threat to corn plants, people or pets, but the larvae are an entirely different story. Living in the top 2 or 3 inches of growing media, fungus gnat larvae feed on corn plant roots and any leaves that touch the soil's surface. Although the larvae's dietary preference causes little damage to outdoor plants, it can quickly cause wilted or yellow foliage, leaf loss, stunted plant growth and loss of vigor on houseplants.
Because fungus gnats are attracted to moisture and decaying plant material, you can often control populations simply by providing the proper cultural conditions. Corn plants tolerate moderate drought, so allowing the top 2 inches of growing medium to dry completely between watering sessions helps kill off any larvae living in the soil. Avoid overwatering your plant and make sure the soil has good drainage. Removing all decaying or dead plant matter near your corn plant makes the area unattractive to adult females looking for a place to lay their eggs.
Bacillus thuringiensis (Bt) is a soil-borne bacterium that naturally controls fungus gnat larvae. Following the instructions on the product's label, mix 1 to 8 teaspoons of Bt product into 1 gallon of water and drench the top inch of soil. The bacterium only remains toxic for 2 days, so repeat applications when you notice more pests.
Adult gnats are attracted to yellow sticky traps, so laying a few of those across the soil can help reduce populations. Check the traps three times a week and replace as needed. Purchasing and releasing predatory mites directly onto your corn plant can control fungus gnat larvae. Mites belonging to the Hypoaspis family typically offer the best results, according to the Missouri Botanical Garden.
Insecticides aren't usually recommended for fungus gnat problems in home settings, but a ready-to-use, permethrin-based pesticide can help treat very severe or persistent infestations. Follow the safety precautions and wear any recommended safety gear. Keep all insecticides away from children and under lock and key. Follow the manufacturer's instructions, and thoroughly spray corn plant foliage and the surrounding soil. Repeat applications every 7 to 10 days until you achieve control. Take indoor corn plants outdoors before treatment to avoid releasing chemicals into your home environment. |
If you are interested in inquiry learning, and finding out strategies for trying the approach with students…or maybe you are already using inquiry learning but would like to find out more – then this session by Jill Hammonds would be a useful starting point.
Jill Hammonds facilitated a Webinar session this afternoon (21st Sept 2011) entitled “Inquiry Learning: getting kids out of the box” (recording can be accessed here, and you can download the PowerPoint from the session here). Jill started by opening with a view of how schools sometimes implement inquiry learning, which is about discovering and understanding. So who drives? The student or the teacher, or is it a partnership? It is worth thinking of it as a continuum where, when students first start out with inquiry, it is the teacher that does most of the driving. As students develop their inquiry skills, the learning becomes much more student directed.
Jill compared teaching to lighting a fire: if you build a pile of sticks and chuck a match on, nothing much is likely to happen other than the match dying. Inquiry, Jill asserts, is a disposition, and teachers need to adjust their teaching to enable inquiry to happen.
Participants were invited to advise what they felt inquiry to be, and some of the suggestions included:
- ‘allowing opportunities to discover, collaborate, bounce off others, apply the new learning..”
- “Student directed interest”
- “a process of trial and reflection”
- “questioning and thinking and reflecting on information to be sure it answers the questions”
Jill also showed a Wordle where the key words that popped out were: students, hypotheses, investigate and learning. The importance of being a life-long learner is critical in this day and age, and Jill stressed that this is a reason that might shape the focus on inquiry and how it is implemented in schools.
The picture of a tap with a droplet with the world in it was used as a catalyst for conversation, as well as to demonstrate how to encourage people to think and work out what is happening based on prior knowledge. The next image was of several taps, along with questions of how they function, and illustrates how to put new challenges in front of learners. The scenario was expanded out to include wells, then oil wells, and finally notions of electricity and power.
Jill mentioned her own experience of learning a language, which was mainly rote and missed the main purpose of language: to communicate. Inquiry provides opportunities for students to communicate with each other and to develop meaningful conversations – within language learning, but also in other disciplines. Inquiry in mathematics, for example, might be based on a wide question such as “Suppose you want to climb on the roof of your treehouse…” and use a number of strategies, such as Pythagoras’s theorem, to work out a solution to a ‘real life’, accessible problem – one that means experimenting with things, refining approaches, and working out a workable solution.
Literacy is not just about reading and writing, and we have a lot more to think about today; all of these aspects are opportunities for using … and presenting inquiry. It’s finding out how things work, and how they link together.
If we don’t have a model, how can we structure the learning in our class? It is about the way we plan, but how much planning can, or should, we do? Jill presented a sample, and also has an inquiry template to help scaffold the planning process, which is available for anyone to download and use. One of the participants asked if the plan is co-constructed with students – a good question…and maybe, again, it depends on how familiar with inquiry learning the students are.
A key point Jill made was that the best questions often happen at the end of the inquiry process, and she mentioned thinking tools in general, and De Bono’s thinking hats specifically. The thinking tools can be one way of helping students to work through and develop questions – to challenge themselves to take things further, to move away from narrow statements, and to come to a wide variety of understandings. Margaret McPherson also suggested the parallel curriculum (Carol Ann Tomlinson). The core strand is where the main focus of teaching is, and then the other strands are more flexible and student shaped and directed. It is about co-construction and enabling differentiation.
Jill emphasised that the deliberate acts of teaching were essential to scaffold the inquiry process, and students could not be dropped into an inquiry approach and expected to work with it meaningfully. Likewise, the inquiry has to stretch the students, so that it actually develops the ability to think things through. Jill also cautioned against over-structuring the inquiry process, or following a model rigidly because learning is messy, and it is important to remain agile and responsive. A participant also pointed out that “inquiry needs to be inclusive throughout all curriculum and linking the learning makes it more meaningful – not just at TOPIC time!” |
Why are pandas black and white? Science finds clues.
The giant panda's distinct coloring is loved by wildlife enthusiasts, but researchers have never had a satisfactory explanation as to what made their coloring pattern so unique – until now.
Pandas, known all over the world for their unique black-and-white pattern, are one of nature's most unusual creatures. Their extremely limited bamboo diet and famed reluctance to breed put them on the brink of extinction for decades, leaving the endangered species lists only last year.
But despite the increased scientific scrutiny that comes with ecological vulnerability, the panda's strange color pattern remained a mystery. Other kinds of bears tend to sport solid, non-patterned colors, and scientists did not have a satisfactory explanation as to why the panda bear was so different. Now, however, researchers from University of California, Davis, and California State University, Long Beach think they may have solved the mystery, once and for all.
"Understanding why the giant panda has such striking coloration has been a long-standing problem in biology that has been difficult to tackle because virtually no other mammal has this appearance, making analogies difficult," said lead author Tim Caro, a professor in the UC Davis Department of Wildlife, Fish and Conservation Biology, in a statement. "The breakthrough in the study was treating each part of the body as an independent area."
The researchers found that while some areas of the panda were black instead of white, the black areas were not always black for the same reasons. In order to figure out what each part of the fur was for, they compared the panda to the coloring of 195 other carnivore species and 39 bear subspecies the bear is related to.
The researchers found "no compelling support for their fur color being involved in temperature regulation, disrupting the animal's outline, or in reducing eye glare," according to their study published in the journal Behavioral Ecology, which were all explanations put forward as potential solutions to the coloring problem. Instead, researchers discovered that the coloring could be tied to two main functions: crypsis (camouflage) and communication.
The researchers realized that, unlike some other types of bears, pandas have to be active year-round without hibernation. The scarceness of bamboo, the only food pandas are capable of digesting, means that pandas need to be able to cross a wide range of habitats in search of a meal, from dense, warm rainforests to snowy, mountainous regions. The white parts of their fur helps pandas to blend in with snowy surroundings, and the black portions help them hide in shady forests.
But the researchers also found that the black portions on the heads of the bears were not directly related to crypsis. Instead, their dark ears communicate aggressive warnings to potential predators, and their black eye patches may help them to be recognized by other bears as well as being signals warning against fellow panda competitors.
This isn't the first time this team of scientists has helped crack the code of a black-and-white fur coat. Last year, the researchers were involved in studies to challenge the longstanding notion that a zebra's stripes help it hide from predators. As Jason Thomson previously reported for The Christian Science Monitor:
Results from the work indicate that beyond 50 meters (about 164 feet) in daylight or 30 meters (about 98 feet) at twilight, when most predators hunt, the stripes are difficult for large carnivores to distinguish. On moonless nights, the distance drops to a mere 9 meters (about 29 feet).
In addition, the research concluded that on open plains, where zebras spend most of their time, lions could see the outline of zebras just as easily as that of similar-sized prey with fairly solid shading patterns.
If the conclusions of this research are correct, and stripes confer no advantage against predation, where did the idea come from, and is there any evidence to support it?
“The idea that stripes allowed zebras to blend into a background composed of tall stem enriched grasses is an old one that emerged from casual observations,” says leading zebra expert Daniel Rubenstein of Princeton University, in an email interview with The Christian Science Monitor. “Until this study most evidence in support of this hypothesis has been anecdotal”.
The researchers involved in that study determined that the reason for the zebra's stripes likely has to do (at least partially) with the pattern's ability to repel flies.
But as far as the panda's coloring is concerned, it's case closed – at least for now, says panda study co-author Ted Stankowich, an assistant professor at CSU Long Beach.
"This really was a Herculean effort by our team, finding and scoring thousands of images and scoring more than 10 areas per picture from over 20 possible colors," said Dr. Stankowich in the statement. "Sometimes it takes hundreds of hours of hard work to answer what seems like the simplest of questions: Why is the panda black and white?" |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2011 April 23
Explanation: What is it? It's a multi-temporal illumination map, of course. To make it, the wide angle camera on the Lunar Reconnaissance Orbiter spacecraft collected 1,700 images over a period of 6 lunar days (6 Earth months), repeatedly covering an area centered on the Moon's south pole. Converted to binary values (shadowed pixels set to 0, illuminated pixels set to 1) the images were stacked to produce a map representing the percentage of time each spot on the surface was illuminated by the Sun. Remaining convincingly in shadow, the floor of the 19 kilometer diameter Shackleton crater is seen near the center of the map. The lunar south pole itself is at about 9 o'clock on the crater's rim. Since the Moon's axis of rotation stays almost perpendicular to the ecliptic plane, crater floors near the lunar south and north poles can remain in permanent shadow and mountain tops in nearly continuous sunlight. Useful to future outposts, the shadowed crater floors could offer reservoirs of water ice, and the sunlit mountain tops ideal locations for solar power arrays.
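The mapping technique described above – threshold each image into shadowed/illuminated pixels, then average the stack – can be sketched with NumPy. The array shapes, the brightness threshold, and the random stand-in data below are assumptions for illustration, not details of the actual LRO processing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 1,700 co-registered wide-angle-camera images of the same
# polar region: each pixel holds a brightness value (random here).
n_images, height, width = 1700, 64, 64
images = rng.random((n_images, height, width))

# Convert each image to binary values: shadowed pixels -> 0, illuminated -> 1.
threshold = 0.5                      # assumed brightness cutoff
binary = (images > threshold).astype(np.uint8)

# Stack and average: each pixel becomes the percentage of time it was lit.
illumination_map = binary.mean(axis=0) * 100.0

print(illumination_map.shape)        # -> (64, 64)
```

On real data, pixels inside a permanently shadowed crater floor would sit near 0%, while a continuously sunlit mountain top would sit near 100%.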
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. |
Since the birth of space flight in 1957, the number of man-made objects orbiting the Earth has grown every year. There are now more than 15,000 such objects larger than 10cm, at least those that we know of. Even very small particles can pose a risk to spacecraft, because of the high relative velocities at which they travel. Not only can space debris affect critical equipment such as communications satellites, but it can also endanger manned space flights.
A dramatic illustration of the dangers of space debris is given in the film “Gravity”. It may have taken some artistic license with science to craft a good story, but its main premise is plausible. What Gravity showed was the worst case scenario, known as the Kessler syndrome, where a collision between two objects generates a cloud of smaller debris, which triggers a chain reaction of further catastrophic collisions, thereby rapidly increasing the amount of debris. This could make low Earth orbit unusable for spacecraft.
Most of these orbiting objects are useless fragments of once-useful ones, created by explosions, collisions or missile tests. For instance, an accidental collision between the Iridium-33 and Kosmos-2251 satellites in 2009 caused them to shatter into 2,200 (recorded) fragments. Smaller space debris is much harder to track, but NASA estimates that up to 500,000 objects larger than 1cm, and 135 million particles over 1mm in size, may now be orbiting the Earth.
Space debris is becoming a serious issue, and many space agencies have started working on solutions. One approach being taken by JAXA, Japan’s space agency, is to use a magnetically charged 700m-wide net made from aluminium and steel wires. If used at the right height it will attract floating space debris to it. When enough has been caught, the system can be ordered to fall out of its orbit back to Earth. During that process the debris, along with the net, will burn up as it enters Earth’s dense atmosphere. JAXA will be doing a test launch of the system next month.
Another approach is to remove existing inactive satellites from orbit. A prime target for this experiment would be the European ENVISAT satellite, which stopped functioning in 2012 and now drifts uncontrolled in orbit. At an altitude of 800km and with a mass of more than 8,000kg, the ENVISAT satellite would take more than 150 years to deorbit – that is, drop out of its orbit – naturally.
Throughout that time the satellite would be at risk of colliding with other objects and generating further debris. A more sustainable solution is to remove future satellites from orbit after they have served their purpose, thereby mitigating the growth of the amount of space debris. This is why international guidelines have been proposed which will restrict post-mission deorbiting time to 25 years for all new satellites.
Most satellites designed today will take longer than that to deorbit, and new technical solutions are necessary to meet the guidelines. This is why the Surrey Space Centre (SSC), working with the European Space Agency (ESA), has developed a Gossamer Sail for Satellite Deorbiting. The idea is to attach a large and very light, or gossamer, sail to a satellite, which can be deployed after its mission is over.
Low Earth orbit retains a thin residual atmosphere, which enables the large sail to generate enough aerodynamic drag to slow the satellite down and deorbit it more rapidly. Unlike existing deorbiting systems based on chemical or electrical propulsion, the gossamer sail system is relatively simple and does not require propellant or electrical power during its deorbiting phase.
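The scale of the effect can be sketched with the standard drag equation F = ½ρv²C·A. All numbers below (air density, orbital speed, drag coefficient, mass, areas) are illustrative assumptions for a small satellite, not SSC or ESA figures:

```python
def drag_deceleration(rho, v, cd, area, mass):
    """Aerodynamic deceleration: F = 0.5 * rho * v^2 * Cd * A, divided by mass."""
    return 0.5 * rho * v**2 * cd * area / mass

# Illustrative values (assumptions, not mission data):
rho = 1e-13    # kg/m^3, rough air density near 600 km altitude
v = 7600.0     # m/s, typical low-Earth-orbit speed
cd = 2.2       # drag coefficient commonly assumed for satellites
mass = 50.0    # kg, a small satellite

a_bare = drag_deceleration(rho, v, cd, 0.1, mass)   # ~0.1 m^2 bare body area
a_sail = drag_deceleration(rho, v, cd, 25.0, mass)  # 5x5 m gossamer sail

print(a_sail / a_bare)  # the sail multiplies drag by roughly 250x
```

Because drag scales linearly with area, a 5×5m sail on a small satellite increases the deceleration by a factor of a few hundred, which is why deorbit times shrink from centuries to years.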
The gossamer deorbiting system is designed to automatically orient the sail in the direction where maximum drag can be achieved, ensuring quicker deorbiting. Furthermore, the sail is made reflective, which allows it to make use of the solar radiation pressure to manoeuvre; solar sailing, so to speak. This enables the satellite to be lowered to an orbit where the aerodynamic drag takes over, allowing the satellites to be placed in higher orbits and still meet the deorbiting requirements.
Developing and testing the SSC gossamer deorbiting sail was quite an engineering challenge. The 5×5m sail and the four deployable masts that support it have to be packaged inside a space measuring approximately 10×10×20cm. To achieve this, the sail is made of an ultra-thin membrane and the special carbon-fibre masts can be coiled up tightly (much like a tape measure).
The SSC gossamer sail is expected to be tested within the next year. After the technology has been successfully demonstrated in space, the system can then be fitted to much larger satellites as an end-of-life deorbiting system. This will provide satellite operators with a means to meet the 25-year deorbiting guidelines, which in turn will help safeguard the possibility of space flight for future generations. |
Of all the various forms of life that have existed on the Earth, less than 1% have left a fossil record. Once an organism dies, various agents of decay quickly destroy all traces of it. Luckily for us, some environments are conducive to the preservation of organic remains, e.g. rivers, estuaries, turbidity flows and sand storms.
Some of the types of fossilization that will be discussed are:
- soft body preservation (e.g. Burgess Shale)
- tar (e.g. La Brea tar pits)
- preservation in permafrost (e.g. Woolly Mammoth)
- permineralization (e.g. 65 million year old termite nest)
- replacement fossil (e.g. Petrified Wood)
- track ways (e.g. dinosaur foot prints)
- and many others
We will also examine how fossils can shed light on the evolutionary history of various organisms.
Graham Beard is Director / Curator of the Vancouver Island Paleontology Museum and co-author of "West Coast Fossils" |
Megabytes (MB), which measure storage capacity, and megabits per second (Mbps), which measure data transfer speed, are frequently interchanged, usually without serious misunderstanding. Nevertheless, we decided to clarify the precise difference between a megabyte (MB) and a megabit per second (Mbps) in this blog post for you.
"Mega Bytes" (or sometimes written as megabytes) are usually referred to the memory capacity in a hardware like a memory card or a cell phone. "Megabits" (or sometimes written as mega bits) are usually referred to the speed of data being transferred wirelessly as in Mega bits per second (Mbps). Want to know why? Read on to find out more.
Do you tend to overestimate your Internet speeds? Are you tired of complaining to your ISP about poor downloading speeds?
The Internet, and everything related to it, is as vast as an ocean. As an end user, all you want is for your video not to buffer, your image to load properly, and your server not to time out while fetching a web page. Knowing the difference between megabits and megabytes gives you enough consumer power to speak credibly when talking to others or when negotiating the quality of Internet that you're paying for.
Without further ado, let us get started on debunking the perpetual confusion surrounding Mbps and MBps, or megabits and megabytes.
If you're downloading a 250 MB movie file from the Internet, you're looking at 250 megabytes of data stored somewhere on a server. Through your Internet connection, all of this data will end up occupying 250 megabytes of space on your local disk. Note that we always measure the size of files (media, text, web pages, documents) in terms of bytes, KB or kilobytes (1 KB = 1024 bytes), MB or megabytes (1 MB = 1024 kilobytes), and GB or gigabytes (1 GB = 1024 megabytes).
If you scrutinize closely, each of these units of measure is a larger measure for a byte. A byte is stored on a piece of physical memory (chips, hard disks, etc.) as 8 electrical signals. Each of these 8 electrical signals can either be in an on or an off state. Each of these electrical signals is known as a bit.
Hence they say, 1 byte = 8 bits.
For storage, it CANNOT get smaller than a bit. In fact, it only gets bigger.
- 1 byte = 2^0 bytes = 1 byte = 8 bits.
- 1 kilobyte = 2^10 bytes = 1024 bytes = 8192 bits.
- 1 Megabyte = 2^10 kilobytes = 2^20 bytes = 1,048,576 bytes = 8,388,608 bits!
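The arithmetic above can be checked in a couple of lines (a quick Python sketch):

```python
KB = 2**10       # 1 kilobyte = 1024 bytes
MB = 2**10 * KB  # 1 megabyte = 1024 kilobytes = 2^20 bytes

print(MB)        # 1048576 bytes in a megabyte
print(MB * 8)    # 8388608 bits, since 1 byte = 8 bits
```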
I hope it is clear by now as to why Megabytes, or an M with a capital B, is the worthwhile unit to measure the size of a file or data storage.
Coming to the speed of the very Internet connection that will get you that file from the server: it is measured in megabits per second. Your internet speed has always been measured, displayed, and marketed as megabits per second, an M with a small b. (Notice the uppercase B in MB for megabytes and the lowercase b in Mb for megabits.) Back in the 1970s, modems with a network transmission capacity of 300 bits per second were sold. This quickly escalated to 10 Megabits per second (Mbps) Ethernet in the 80s and 1.54 Mbps T1 lines in the early 2000s. When we say Mbps, we're still talking about megabits per second.
Why rely on different units of measurement when both of them are interconvertible?
Because storage capacities and devices are, and always will be, manufactured in powers of 2 – such as a 10-kilobyte chip, a 500-megabyte pen drive, or a 1-gigabyte hard disk.
Internet speeds, on the other hand, have always been denoted in powers of 10, from 10 megabits per second (Mbps) to 100 megabits per second. Keeping speeds in the lowest unit of measurement makes it easier to denote and present them, and to apply them to different aspects of networking interchangeably.
Actual Difference between Megabits and Megabytes.
It should now be easier to understand that, as stated on so many internet resources, the difference between megabits and megabytes goes beyond the lowercase b versus the uppercase B.
Essentially, 1 megabit = 10^6 bits, or one million bursts of electrical signal.* On the other hand, 1 megabyte = 2^20 bytes = 8 × 2^20 bits, or 8,388,608 bursts of electrical signal.*
In simple terms, if you divide Megabits by 8, you get Megabytes. If you multiply Megabytes by 8, you get Megabits.**
How do you calculate the time it will take to download a file at your current internet speed?
It is simple. Just take your file size in megabytes (MB) and multiply it by 8. This gives you your file size in megabits (Mb). Now divide this figure by your current internet speed (which is in megabits per second) and voila! You get the number of seconds it will take to get your file.
For instance, for a file of 250 MB, you take 2000 Mb (250 * 8) as the total file size. Assuming your internet download speed is 50 Mbps, you will have the file on your system in 2000/50 or 40 seconds!
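That calculation is easy to wrap in a small helper function (the function name is ours, not a standard API):

```python
def download_seconds(file_size_mb, speed_mbps):
    """Convert a file size in megabytes to megabits, then divide by speed in Mbps."""
    file_size_megabits = file_size_mb * 8
    return file_size_megabits / speed_mbps

print(download_seconds(250, 50))  # 40.0 seconds for a 250 MB file at 50 Mbps
```

(This is the ideal figure; real downloads are slowed further by protocol overhead and network congestion.)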
How to put this knowledge to good use?
The next time your ISP, or any hotshot working in networking, tries to impress you with a 50 Mbps connection by claiming it will download your 250 MB movie in less than 5 seconds, tell them they are not selling their internet in megabytes – they're selling it in megabits.
* Anderson, Benedetti, Head First Networking: A Brain-Friendly Guide, O’Reilly.
**Carpenter, CWNA Certified Wireless Network Administrator, McGraw-Hill Education.
Share this post |
Rapid changes in the ultraviolet radiation of the Sun can cause outages in radio
communications and affect satellites orbiting the Earth. Increases in solar
ultraviolet radiation from flares heat Earth's upper atmosphere, causing it to
expand. The expansion makes the air denser at low-Earth-orbit altitudes, where many satellites fly. The denser air increases the drag on these satellites, slowing them down and causing them to burn up prematurely in the lower atmosphere if there is no fuel left onboard to give them a boost.
EVE will take measurements of the Sun's ultraviolet brightness as often as every 10 seconds, providing space weather forecasters with warnings of communications and navigation outages.
The Sun's extreme ultraviolet output constantly changes. The small solar flares that happen almost every day can double the output, while the large flares that happen about once a month can increase ultraviolet radiation many times in minutes. This harmful ultraviolet radiation is completely absorbed in the atmosphere, which means we can only observe it from satellites.
"The Laboratory for Atmospheric and Space Physics (LASP) is very excited about delivering the state-of-the-art EVE instrument to measure the solar extreme ultraviolet irradiance with best ever spectral resolution and time cadence," said
Tom Woods, SDO EVE Principal Investigator. "These future SDO EVE measurements are important for many different space weather applications such as how solar storms can degrade or even disrupt our navigation and communications."
After launch, SDO will study how solar activity is created and how space weather comes from that activity. SDO is designed to help us understand the Sun's influence on Earth and near-Earth space by studying the solar atmosphere on small scales of space and time and in many wavelengths simultaneously. SDO's other instruments include the Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA). These instruments are expected to arrive at Goddard by the end
"These three instruments together will enable scientists to better understand
the causes of violent solar activity, and whether it's possible to make accurate
and reliable forecasts of space weather," said Liz Citrin, SDO Project
Manager at Goddard. "SDO will provide a full disk picture of the Sun in super
SDO is the first mission of NASA's "Living With a Star" program, which
seeks to understand the causes of solar variability and its impacts on Earth.
SDO is being designed, managed, and assembled at Goddard. HMI is being
built by Stanford University, Stanford, Calif. AIA is being built by the Lockheed
Martin Solar Astrophysics Laboratory (LMSAL), Palo Alto, Calif. EVE is
being built by the University of Colorado.
SDO is expected to launch no earlier than August 2008.
For more information and related images, please visit on the Web:
For more information about the SDO mission, please visit on the Web: |
In this game you throw two dice and find their total, then move the appropriate counter to the right. Which counter reaches the purple box first? Is this what you would expect?
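Why one counter tends to win: the totals of two dice are not equally likely. A quick sketch counting the 36 equally likely rolls shows the distribution:

```python
from collections import Counter
from itertools import product

# Count how many of the 36 equally likely rolls give each total
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in range(2, 13):
    print(total, counts[total])  # 7 can be made 6 ways; 2 and 12 only 1 way each
```

So the counter on 7 moves most often on average, which is usually not what players expect before trying the game.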
Does a graph of the triangular numbers cross a graph of the six times table? If so, where? Will a graph of the square numbers cross the times table too?
Tim's class collected data about all their pets. Can you put the animal names under each column in the block graph using the
Ideas for practical ways of representing data such as Venn and
Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the two numbers inside is written on each envelope. What numbers could be inside the envelopes?
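One way to explore the envelope puzzle is to brute-force it. The sketch below assumes the cards are numbered 1 to 10, which the description leaves open:

```python
def pairings(cards):
    """Yield every way to split the cards into unordered pairs."""
    if not cards:
        yield []
        return
    first, rest = cards[0], cards[1:]
    for partner in rest:
        remaining = [c for c in rest if c != partner]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

cards = list(range(1, 11))  # assumption: cards are numbered 1-10
sum_sets = {tuple(sorted(a + b for a, b in p)) for p in pairings(cards)}

print(len(sum_sets))  # number of distinct sets of envelope sums
```

Whatever the pairing, the five envelope sums must add up to 1 + 2 + … + 10 = 55, which is the key constraint pupils can discover for themselves.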
Anna and Becky put one purple cube and two yellow cubes into a bag to play a game. Is the game fair? Explain your answer.
How many different sets of numbers with at least four members can you find in the numbers in this box?
In this language arts worksheet, students read about the use of alliterations in writing. After reading the examples, students write 5 alliterations about food.
Comprehension During Independent Reading
Ideal for a language arts class, literary unit, or independent reading assignment, a set of reading worksheets address a wide array of skills. From poetic elements to nonfiction text features, you can surely find a valuable resource in...
2nd - 5th English Language Arts CCSS: Adaptable
Practice Book O
Whether you need resources for reading comprehension, literary analysis, phonics, vocabulary, or text features, an extensive packet of worksheets is sure to fit your needs. Based on a fifth-grade curriculum but applicable to any level of...
3rd - 6th English Language Arts CCSS: Adaptable
Poetry Beyond Words: Creating Poetry with Linguistically Diverse Students
Models of and directions for how to write 20 different types of poems are featured in an NCTE resource. The introduction to each form highlights the embedded concepts. For example, tongue twisters encourage poets to use alliteration and...
3rd - 6th English Language Arts
Magical Musical Tour: Using Lyrics to Teach Literary Elements
Language arts learners don't need a lecture about poetry; they listen to poetry every day on the radio! Apply skills from literary analysis to famous songs and beautiful lyrics with a lesson about literary devices. As class...
3rd - 8th English Language Arts CCSS: Adaptable
Fairy Tales and Tall Tales - Read-Aloud Anthology
Enrich a unit on fairy tales and tall tales with a set of read-aloud lessons. Second graders hone writing, vocabulary, comprehension, and literary analysis skills as they read classic stories. Complete with extension projects, discussion...
2nd - 4th English Language Arts CCSS: Designed
Informative/Explanatory: Informative Article Unit Introduction
Here is everything you need to teach fourth graders to write informative/explanatory articles in one unit plan. This carefully crafted resource packet includes lessons, exercises, and activities based on the writing process, that enable...
4th English Language Arts CCSS: Designed
Very Voracious Animal Voices
Students draw a picture of an original animal and then write a poem with words that begin only with the same first letter as the animal. Individually, they must decide how it might sound, where it lives and eats. To end the lesson, they...
1st - 12th Visual & Performing Arts |
Figure 8-22.--Motor-generator set simplified block diagram.

The voltage regulator controls the static exciter output. The static exciter output, in turn, supplies dc (excitation current) to the generator field of the proper magnitude so as to maintain the generator output voltage within specified limits under all load conditions.

The static exciter consists of the following:

1. An SCPT
2. Three linear reactors (chokes)
3. A three-phase bridge rectifier unit

The SCPT contains (1) a primary winding consisting of both voltage and current windings, (2) a dc control winding, and (3) a secondary winding. The voltage primary windings are connected in series with the chokes across the generator output. The current primary windings are connected in series with the load, and thus carry load current. The secondary winding output is connected to the bridge rectifier unit, which supplies the dc for the generator field. The SCPT control winding is connected to the output of the voltage regulator.

The voltage regulator consists of the following:

1. A detector circuit
2. A preamp and reference section
3. A power section

The detector circuit includes a sensing circuit and a three-phase bridge rectifier. The sensing circuit consists of transformers with primary windings connected to the generator output and their secondary windings connected to the bridge rectifier. The bridge rectifier provides a dc output voltage that is proportional to the average of the three-phase voltage outputs from the generator. This dc voltage is filtered and fed to a Zener reference bridge in the preamp and reference section.

The dc output from the detector is compared with a constant Zener voltage in the reference bridge. The difference (error) voltage output from the bridge is fed to a unijunction transistor circuit, which provides the pulses to trigger the SCRs in the power section. The SCR output from the power section is fed to the control winding of the SCPT in the static exciter.

During starting, generator field current is supplied by a field flashing circuit, which is cut out after the generator builds up an output voltage. At no-load voltage, the primary windings of the SCPT are energized through the choke coils and induce a voltage in the SCPT secondary windings. The rectified output of the secondary windings supplies the generator field. This is the no-load field excitation.
The Palaeolithic (or Paleolithic) refers to the prehistoric period when stone tools were made by humans. They are found in the Great Rift Valley of Africa from about 2.6 million years ago. They are found in Europe somewhat later, from about 1 mya (0.7 mya for Britain). The Palaeolithic is by far the longest period of human existence, spanning about 99% of human history. The geological period which corresponds to the Palaeolithic is the Pleistocene.
Stone tools were not only made by our own species, Homo sapiens. They were made by all previous members of the genus, starting with relatively crude tools made by Homo habilis and Homo erectus. In Europe, the large-brained Neanderthal Man (Homo neanderthalensis) made tools of high quality, and was in turn outshone by the many tools made by our own species. These tools are the first cultural products which have survived.
The Palaeolithic dates from about 2.6 million years ago and ended around 15,000 BC with the Mesolithic in Western Europe, and with the Epipalaeolithic in warmer climates such as Africa. The Palaeolithic age began when hominids (early humans) started to use stones as tools for bashing, cutting and scraping. The age ended when humans began to make small, fine tools (Mesolithic) and, finally, when they began to plant crops and practise other types of agriculture (Neolithic). In some areas, such as Western Europe, the way that people lived was affected by the Ice Age. The move towards agriculture started in the Middle East.
During the Palaeolithic Age humans grouped together in small bands. They lived by gathering plants and hunting wild animals. As well as using stone tools, they used tools of wood and bone. They probably also used leather and vegetable fibers but these have not lasted from that time.
Cultures
Oldowan
The Oldowan is the archaeological term used to refer to the stone tool industry that was used by Hominids during the earliest Palaeolithic period. The Oldowan is the earliest stone tool industry in prehistory, from 2.6 million years ago up until 1.7 million years ago. It was followed by the more sophisticated Acheulean industry. Oldowan tools were therefore the earliest tools in human history, and mark the beginning of the archaeological record. The term "Oldowan" is taken from the site of Olduvai Gorge in Tanzania, where the first Oldowan tools were discovered by the archaeologist Louis Leakey in the 1930s.
It is not known for sure which species actually created and used Oldowan tools. Their use reached its peak with early species of Homo such as H. habilis and H. ergaster. Early Homo erectus appears to have inherited Oldowan technology, refining it into the Acheulean industry beginning 1.7 million years ago. Oldowan tools are sometimes called pebble tools, so named because the blanks chosen for their production already resemble, in pebble form, the final product. Oldowan tools are sometimes subdivided into types, such as choppers, scrapers and pounders, as these appear to have been their main uses.
Acheulean
Acheulean is the industry of stone tool manufacture by early humans of the Lower Palaeolithic era in Africa and much of West Asia and Europe. Acheulean tools are typically found with Homo erectus remains. The industry first developed out of the more primitive Oldowan technology some 1.8 million years ago, in the hands of Homo habilis.
It was the dominant technology for the vast majority of human history. More than a million years ago Acheulean tool users left Africa to colonize Eurasia. Their distinctive oval and pear-shaped hand axes have been found over a wide area. Some examples attained a very high level of sophistication. Although it developed in Africa, the industry is named after the type site of Saint-Acheul, now a suburb of Amiens in northern France where some of the first examples were identified in the 19th century.
John Frere is generally credited as being the first to suggest a very ancient date for Acheulean hand-axes. In 1797 he sent two examples to the Royal Academy in London from Hoxne in Suffolk. He had found them in prehistoric lake deposits along with the bones of extinct animals and concluded that they were made by people "who had not the use of metals" and that they belonged to a "very ancient period indeed, even beyond the present world". His ideas were ignored by his contemporaries however, who subscribed to a pre-Darwinian view of human evolution.
Dating the Acheulean
Radiometric dating, often potassium-argon dating, of deposits containing Acheulean material is able to broadly place Acheulean techniques from around 1.65 million years ago to about 100,000 years ago. The earliest accepted examples of the type, at 1.65 million years old, come from the West Turkana region of Kenya, although some have argued for its emergence from as early as 1.8 million years ago.
In individual regions, this dating can be considerably refined; in Europe for example, Acheulean methods did not reach the continent until around 400 thousand years ago and in smaller study areas, the date ranges can be much shorter. Numerical dates can be misleading however, and it is common to associate examples of this early human tool industry with one or more glacial or interglacial periods or with a particular early species of human. The earliest user of Acheulean tools was Homo ergaster who first appeared about 1.8 million years ago. Not all researchers use this formal name however and instead prefer to call these users early Homo erectus. Later forms of early humans also used Acheulean techniques and are described below.
There is considerable chronological overlap in early prehistoric stone-working industries and in some regions Acheulean tool-using groups were contemporary with other, less sophisticated industries such as the Clactonian and then later, with the more sophisticated Mousterian also. The Acheulean was not a neatly defined period, but a tool-making technique which flourished especially well in early prehistory. The term Acheulean does not represent a common culture in the modern sense, rather it is a basic method for making stone tools that was shared across much of the Old World.
Clactonian
The Clactonian is an industry of European flint tool manufacture that dates to the early part of the interglacial period 400,000 years ago. Clactonian tools were made by Homo erectus rather than modern humans. Early, crude flint tools from other regions using similar methods are called either Clactonian or core & flake technology.
The Clactonian is named after finds made at Clacton-on-Sea in the English county of Essex in 1911. The artefacts found there included flint chopping tools, flint flakes and the tip of a worked wooden shaft along with the remains of a giant elephant and hippopotamus. Further examples of the tools have been found at sites in Swanscombe, Kent, and Barnham in Suffolk; similar industries have been identified across Northern Europe.
The Clactonian industry involved striking thick, irregular flakes from a core of flint, which was then employed as a chopper. The flakes would have been used as crude knives or scrapers. Unlike the Oldowan tools from which Clactonian ones derived, some were notched implying that they were attached to a handle or shaft.
The Clactonian industry may have co-existed with the Acheulean industry (which used handaxes). However, in 2004 there was an excavation of a butchered Pleistocene elephant near Dartford, Kent. Archaeologists recovered numerous Clactonian flint tools, but no handaxes. Since handaxes would be more useful than choppers to dismember an elephant carcass, this is evidence of the Clactonian being a separate industry. Flint of sufficient quality was available in the area, so probably the people who carved up the elephant did not have the knowledge to make handaxes.
Mousterian
The Mousterian is an industry of stone tools associated with Neanderthal Man, Homo neanderthalensis. It dates from about 300,000 years to about 30,000 years ago. There are up to thirty types of tools in the Mousterian as contrasted with about six in the Acheulean.
The Mousterian was named after the type site of Le Moustier, a rock shelter in the Dordogne region of France. Similar flintwork has been found all over unglaciated Europe and also the Near East and North Africa. Handaxes, long blades and points typify the industry. Overall, the items are more perfectly finished than any previous work. The method used to get the blades and flakes is called the Levallois technique. It is a prepared-core technique: the core is worked on so that a long, fine blade can be struck off. For this quality of work, a 'soft' hammer made of something like deer antler is necessary, rather than a stone hammer. The extra brain size of the Neanderthals is probably relevant to these advances.
The cultures which follow the Mousterian are all cultures of modern humans, Homo sapiens. It is characteristic of our species to produce many more tools, all specialised for particular tasks. There are at least 100 types of tools in the Upper Palaeolithic compared to a maximum of 30 tools in the Mousterian.
Chronology of Palaeolithic and following periods
The Palaeolithic is sometimes divided into three (somewhat overlapping) periods which mark technological and cultural advances in different human communities:
Overview of the main features of these periods
| Age | Period | Tools | Economy | Dwellings | Society | Religion |
|---|---|---|---|---|---|---|
| Stone Age | Palaeolithic | Sharpened flint or stone tools: hand axes, scrapers, wooden spears | Hunting and gathering | Mobile lifestyle – caves, huts, tooth or skin hovels, mostly by rivers and lakes | Tribes of plant gatherers and hunters (25–100 people) | Evidence for belief in the afterlife in the Upper Palaeolithic: appearance of burial rituals and ancestor worship. Priests and sanctuary servants appear in prehistory. |
| Stone Age | Mesolithic (known as the Epipalaeolithic in areas with no trend towards agricultural lifestyles) | Fine small tools: bow and arrow, harpoons, fish-basket, boats | | | Tribes and bands | |
| Stone Age | Neolithic | Chisel, hoe, plough, reaping-hook, grain pourer, barley, loom, pottery and weapons | Agriculture, hunting and gathering, fishing and domestication | Farmsteads during the Neolithic and the Bronze Age; formation of cities during the Bronze Age | Tribes and chiefdoms in some Neolithic societies at the end of the Neolithic; states and civilisations during the Bronze Age | |
| Bronze Age | | Writing; copper and bronze tools, potter's wheel | Agriculture; cattle-breeding; crafts, trade | | | |
| Iron Age | | Iron tools | | | | |
Venus figurines
Possibly among the earliest traces of art are Venus figurines. These are figurines (very small statues) of women, mostly pregnant with visible breasts. The figurines were found in areas from Western Europe to Siberia. Most are between 20,000 and 30,000 years old. Two figurines have been found that are much older: the Venus of Tan-Tan, dated to 300,000 to 500,000 years ago, was found in Morocco. The Venus of Berekhat Ram was found on the Golan Heights. It has been dated to 200,000 to 300,000 years ago. It may be one of the earliest objects that shows the human form.
Today it is not known what the figurines meant to the people who made them. There are two basic theories:
- They may be representations of human fertility, or they may have been made to help it.
- They may represent (fertility) goddesses.
Scientists have ruled out the idea that these figurines were linked to the fertility of fields, because agriculture had not been developed at the time the figurines were made.
The two figurines that are older may have been formed mostly by natural processes. The Venus of Tan-Tan was covered with a substance that could have been some kind of paint. The substance contained traces of iron and manganese. The figurine of Berekhat Ram shows traces that someone worked on it with a tool. A study done in 1997 states that these traces could not have been left by nature alone.
Cave paintings
Cave paintings are paintings that were made on the walls or roofs of caves. Many cave paintings belong to the Palaeolithic Age, and date from about 15,000 to 30,000 years ago. Among the most famous are those in the caves of Altamira in Spain and Lascaux in France (p. 545). There are about 350 caves in Europe where cave paintings have been found. Usually, animals have been painted, such as aurochs, bison or horses. Why these paintings were done is not known. They are not simply decorations of places where people lived: the caves they were found in usually do not show signs that someone lived in them.
One of the oldest painted caves is that of Chauvet in France. Paintings in the cave fall into two groups: one has been dated to around 30,000 to 33,000 years ago, the other to 26,000 or 27,000 years ago (p. 546). The dates are based on radiocarbon dating of "black from drawings, from torch marks and from the floors". As of 1999, the dates of 31 samples from the cave had been reported. The oldest paintings have been dated to 32,900±490 years ago.
Some archaeologists have questioned the dating. Züchner believes the two groups date from 23,000–24,000 and 10,000–18,000 years ago. Pettitt and Bahn believe the dating is inconsistent: they say that people painted things differently at those periods. They also note that it is not known where the charcoal used for some of the drawings came from, or how big the painted area is.
People from the Palaeolithic era drew well. They knew about perspective, and they knew of different ways to draw things. They also were able to observe the behaviour of animals they painted. Some of the paintings show how the painted animals behaved. The paintings may have been important for rituals.
Footnotes
- Ancient Greek: palaios = old; and lithos = stone. Coined by John Lubbock in 1865.
- Nicholas Toth and Kathy Schick (2007). Handbook of Paleoanthropology. Springer. pp. 1963. ISBN 978-3-540-32474-4 (Print) 978-3-540-33761-4 (Online). http://www.springerlink.com/content/u68378621542472j/.
- Klein, Richard G. 2009. The human career: human biological and cultural origins. 3rd ed, Chicago.
- Hosfield R.T., Wenban-Smith F.F. & Pope M.I. 2009. Great prehistorians: 150 years of Palaeolithic research, 1859–2009. Lithics 30.
- Grolier Incorporated (1989). The Encyclopedia Americana. University of Michigan: Grolier Incorporated. p. 542. ISBN 0-7172-0120-1.
- Mesolithic Period. 2008. In Encyclopædia Britannica. Retrieved April 10, 2008, from Encyclopædia Britannica Online.
- "Stone Age," Microsoft® Encarta® Online Encyclopedia 2007 © 1997-2007 Microsoft Corporation. Contributed by Kathy Schick and Nicholas Toth
- Grolier Incorporated (1989). The Encyclopedia Americana. University of Michigan: Grolier Incorporated. p. 542. ISBN 0-7172-0120-1. http://books.google.com/books?id=eRQaAAAAMAAJ&q=the+paleolithic+began+2.6+million+years+ago.&dq=the+paleolithic+began+2.6+million+years+ago.&pgis=1.
- McClellan (2006). Science and Technology in World History: An Introduction. Baltimore, Maryland: JHU Press. ISBN 0-8018-8360-1. http://books.google.com/books?id=aJgp94zNwNQC&printsec=frontcover#PPA11. Page 6-12
- Napier, John. 1960. Fossil hand bones from Olduvai Gorge. Nature, December 17th.
- Vrba E. & Y.H.-Selassie 1994. African Homo erectus: old radiometric ages and young Oldowan assemblages in the middle Awash Valley, Ethiopia. Science 264: 1907-1909.
- known as the Hoxnian, the Mindel-Riss or the Holstein stage
- Smithsonian: Middle Stone Age tools.
- Miller, Barbra; Bernard Wood, Andrew Balansky, Julio Mercader, Melissa Panger (2006). Anthropology. Boston Massachusetts: Allyn and Bacon. pp. 768. ISBN 0205320244. http://www.ablongman.com/html/productinfo/millerwood/MillerWood_c08.pdf.
- "'Oldest sculpture' found in Morocco". BBC News online. 23 May 2003. http://news.bbc.co.uk/1/hi/sci/tech/3047383.stm.
- Alexander Marshack (1997). "The Berekhat Ram figurine: a late Acheulian carving from the Middle East" (pdf). http://www.utexas.edu/courses/classicalarch/readings/Berekhat_Ram.pdf.
- Quotes from Clottes 2003b p214.
- Archaeologists sometimes use the phrase "B.P." (before the present day) to mean "years ago"
- Clottes 2003b p33. The oldest is sample Gifa 99776 from "zone 10". See also Chauvet (1996 p131, for a chronology of dates from various caves. Bahn's foreword and Clottes' epilogue to Chauvet 1996 discuss dating.
- Züchner, Christian (September 1998). "Grotte Chauvet Archaeologically Dated". Communication at the International Rock Art Congress IRAC ´98. http://www.rupestre.net/tracce/12/chauv.html. Retrieved 2007-12-23.
Clottes (2003b), pp. 213-214, has a response by Clottes.
- Pettitt, Paul; Paul Bahn (March 2003). "Current problems in dating Palaeolithic cave art: Candamo and Chauvet". Antiquity 77 (295): 134–141. http://www.antiquity.ac.uk/ant/077/Ant0770134.htm.
References
- Christopher Boehm 1999. "Hierarchy in the forest: the evolution of egalitarian behavior" page 198 Harvard University Press.
- Leften Stavros Stavrianos 1991. A global history from prehistory to the present. New Jersey, USA: Prentice Hall. ISBN 0-13-357005-3
- Bahn, Paul 1996. The atlas of world archeology. The Brown Reference Group PLC.
Other websites
- Early voices: the leap to language, by Nicolas Wade
- Human Evolution, Microsoft® Encarta® Online Encyclopedia 2007 © 1997-2007 Microsoft Corporation. Contributed by Richard B. Potts.
- Stone Age, Microsoft® Encarta® Online Encyclopedia 2007 © 1997-2007 Microsoft Corporation. Contributed by Kathy Schick and Nicholas Toth.
- Middle and Upper Paleolithic hunter-gatherers the emergence of modern humans: the Mesolithic
- Map of Earth during the late Upper Paleolithic, by Christopher Scotese |
The cholinesterase test is a significant diagnostic tool in both clinical and occupational medicine.
Cholinesterase Test: An Introduction
Cholinesterase is an enzyme that plays a pivotal role in nerve function in both humans and insects. It’s responsible for breaking down the neurotransmitter acetylcholine, which is vital for the transmission of nerve impulses. There are two primary types of cholinesterase in the human body: “true” or acetylcholinesterase, found mainly in nerve tissue, and pseudocholinesterase or butyrylcholinesterase, primarily found in the blood.
The cholinesterase test is commonly utilized to measure the levels of these enzymes in the blood. Its significance in medicine arises from several key areas:
- Organophosphate Poisoning: Organophosphate compounds, which are widely used as insecticides, inhibit cholinesterase activity. Prolonged or high exposure can lead to a buildup of acetylcholine, causing overstimulation of the nerves. The cholinesterase test can assist in diagnosing such poisonings and monitoring recovery.
- Liver Function: Pseudocholinesterase is produced in the liver. Therefore, abnormally low levels of this enzyme can indicate liver dysfunction or disease.
- Genetic Variations: Some individuals inherit a variant form of pseudocholinesterase that functions less effectively than the typical form. These individuals might be at risk for prolonged paralysis or respiratory depression when given specific anesthetic drugs, like succinylcholine.
- Other Uses: While less common, cholinesterase tests may also be employed in diagnosing and monitoring other conditions, including exposure to certain nerve agents or specific therapeutic drugs that inhibit cholinesterase.
Test Result, Unit, Reference Range, and Test Methods
The cholinesterase test measures the activity of the cholinesterase enzyme in the blood, predominantly focusing on pseudocholinesterase (also known as butyrylcholinesterase or plasma cholinesterase). The specifics of test results, units, reference ranges, and methods can vary based on the laboratory and the population, but here’s a general overview:
- Test result: It indicates the activity of the cholinesterase enzyme in the sample. Reduced activity may suggest exposure to cholinesterase inhibitors, liver disease, or a genetic deficiency.
- Unit: The activity of cholinesterase is typically reported in units per liter (U/L) or equivalent units based on the methodology employed.
- Reference range: The reference range varies among laboratories and is based on the population they serve and the methods they use. Generally, for pseudocholinesterase in adults:
- Male: Approximately 5,300 to 12,900 U/L
- Female: Approximately 4,500 to 11,200 U/L
- Remember, these are approximate values, and actual reference ranges may differ across labs.
- Ellman Method: One of the most widely used methods, the Ellman method involves adding a substrate (acetylthiocholine or butyrylthiocholine) that reacts with cholinesterase to produce a yellow-colored product. The rate of color change, measured spectrophotometrically, is directly proportional to the enzyme activity in the sample.
- Titrimetric Method: This method involves titrating acetylcholine or butyrylcholine against cholinesterase activity until a certain endpoint, usually detected via an indicator or pH change.
- Electrometric Method: It measures the change in pH or potential due to the hydrolysis of acetylcholine or butyrylcholine by the enzyme.
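The interpretation logic described above can be sketched in a few lines of code. This is only an illustration using the approximate reference ranges quoted in this article; any real interpretation must use the reporting laboratory's own sex- and method-specific ranges.

```python
# Sketch: classifying a pseudocholinesterase result against the approximate
# adult reference ranges quoted above. These ranges are illustrative only;
# actual ranges vary between laboratories and methods.

REFERENCE_RANGES_U_PER_L = {
    "male": (5300, 12900),
    "female": (4500, 11200),
}

def classify_cholinesterase(activity_u_per_l: float, sex: str) -> str:
    """Return 'low', 'normal', or 'high' relative to the quoted range."""
    low, high = REFERENCE_RANGES_U_PER_L[sex]
    if activity_u_per_l < low:
        # possible inhibitor exposure, liver disease, or a variant enzyme
        return "low"
    if activity_u_per_l > high:
        return "high"
    return "normal"

print(classify_cholinesterase(3200, "male"))    # low
print(classify_cholinesterase(7000, "female"))  # normal
```

A "low" flag on its own is not diagnostic; as the article notes, results must be read alongside exposure history and clinical findings.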
The cholinesterase test holds clinical significance in various scenarios, particularly in the realms of toxicology, hepatology, and anesthesiology. Here’s a breakdown of its importance in different clinical contexts:
- Toxicology – Organophosphate and Carbamate Poisoning:
- Organophosphates and carbamates are chemicals used in insecticides, herbicides, and some medications. They inhibit cholinesterase, leading to an accumulation of the neurotransmitter acetylcholine in synapses and neuromuscular junctions. This results in overstimulation of the nervous system.
- Symptoms of poisoning can range from miosis (pinpoint pupils), salivation, muscle twitching, respiratory failure, to even death.
- The cholinesterase test can help diagnose and monitor individuals with suspected or known exposure to these compounds. A significant reduction in cholinesterase activity is a hallmark of acute poisoning.
- Hepatology – Liver Function:
- Pseudocholinesterase (also known as butyrylcholinesterase) is produced by the liver. Abnormally low levels can indicate liver dysfunction or severe liver disease.
- In cases of acute liver damage or chronic liver disease, cholinesterase levels may be used as part of a panel of tests to assess the synthetic function of the liver.
- Anesthesiology – Response to Certain Anesthetics:
- Some individuals inherit a variant form of pseudocholinesterase that functions less effectively than the usual form.
- These individuals may experience prolonged paralysis or respiratory depression when administered specific drugs, such as the muscle relaxant succinylcholine or the anesthetic mivacurium.
- Testing for cholinesterase activity can help identify these individuals and guide anesthetic choices.
- Neurology – Myasthenia Gravis:
- While acetylcholinesterase inhibitors are a treatment for myasthenia gravis, monitoring cholinesterase levels can be helpful to adjust doses or assess compliance.
- Pharmacology – Monitoring Therapeutic Drugs:
- Certain medications, like those used to treat Alzheimer’s disease (e.g., donepezil), work by inhibiting cholinesterase. Monitoring levels can be part of therapeutic drug monitoring in specific contexts.
- Occupational Health:
- Individuals working in industries that use organophosphates (e.g., agriculture) might be monitored regularly using cholinesterase tests to ensure they aren’t being overexposed to these chemicals.
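Occupational monitoring of the kind described above usually compares each worker's current activity with their own pre-exposure baseline rather than a population range. The sketch below illustrates that calculation; the 20% and 30% action thresholds are assumptions chosen for illustration, not universal regulatory values.

```python
# Sketch of baseline-comparison monitoring for workers handling
# organophosphates: each result is compared with the worker's own
# pre-exposure baseline. Thresholds here are illustrative assumptions.

def percent_depression(baseline: float, current: float) -> float:
    """Percentage drop in cholinesterase activity relative to baseline."""
    return 100.0 * (baseline - current) / baseline

def action_for(depression_pct: float) -> str:
    if depression_pct >= 30.0:
        return "remove from exposure and investigate"
    if depression_pct >= 20.0:
        return "retest and review work practices"
    return "no action"

drop = percent_depression(baseline=8000, current=5200)
print(round(drop, 1), action_for(drop))  # 35.0 remove from exposure and investigate
```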
Here are the keynotes on the cholinesterase test:
- Definition: The cholinesterase test measures the activity of cholinesterase enzymes in the blood.
- Types of Cholinesterase:
- Acetylcholinesterase: Primarily found in nerve tissue.
- Pseudocholinesterase (Butyrylcholinesterase): Produced by the liver and found mainly in the blood.
- Clinical Significance:
- Toxicology: Diagnoses and monitors organophosphate and carbamate poisoning.
- Liver Health: Assesses liver function, with low levels indicating liver dysfunction.
- Anesthesiology: Identifies patients at risk of prolonged paralysis with certain anesthetics, especially those with a genetic variant of pseudocholinesterase.
- Occupational Health: Monitors individuals in industries using organophosphates to ensure safety from overexposure.
- Symptoms of Organophosphate Poisoning: Miosis, salivation, muscle twitching, and potentially, respiratory failure.
- Test Units: Usually reported in units per liter (U/L) or equivalent units based on methodology.
- Reference Range: Varies among laboratories; generally:
- Male: Approx. 5,300 to 12,900 U/L
- Female: Approx. 4,500 to 11,200 U/L
- Test Methods:
- Ellman Method: Spectrophotometric measurement of color change.
- Titrimetric Method: Titrating against cholinesterase activity.
- Electrometric Method: Measures changes in pH or potential.
- Treatment Implications: Can guide therapeutic choices, especially in toxic exposures and anesthesia.
- Prevention: Essential in occupational settings to prevent overexposure to inhibiting agents.
- Interpretation: Results should be viewed in conjunction with clinical symptoms, exposure history, and other relevant tests.
Further Reading
- Books:
- “Casarett & Doull’s Toxicology: The Basic Science of Poisons” by Curtis Klaassen: This comprehensive toxicology textbook provides insights into the effects of various toxins, including organophosphates, on the human body and the role of cholinesterase testing.
- “Clinical Laboratory Medicine” by Kenneth D. McClatchey: This book offers details about various laboratory tests, including the cholinesterase test, and their clinical significance.
- Scientific Articles:
- “Cholinesterase Testing: A Review of the Past, Present, and Future” – This article, available in the journal “Clinical Laboratory Science,” delves into the historical and contemporary relevance of cholinesterase testing.
- “Butyrylcholinesterase: Overview, Structure, and Function” in the journal “Human Genomics”: This review article focuses on the molecular aspects of one of the cholinesterases and its significance.
- Government and Professional Resources:
- Centers for Disease Control and Prevention (CDC): The CDC’s NIOSH (National Institute for Occupational Safety and Health) offers resources on organophosphate poisoning and the role of cholinesterase monitoring, especially in occupational settings.
- World Health Organization (WHO) Guidelines: WHO has publications on pesticide poisoning, including a detailed guide on the clinical and analytical aspects of organophosphate poisoning and cholinesterase testing.
- Online Platforms:
- Lab Tests Online: This patient-friendly resource explains various lab tests, including the cholinesterase test, in simple terms. It’s a project by the American Association for Clinical Chemistry.
- Medscape: This medical website often features articles and clinical guidelines related to various tests and conditions, including the cholinesterase test.
- University Websites:
- Many universities with medical schools or programs in clinical chemistry or toxicology will have online resources, lecture notes, or publications that can provide insights into the cholinesterase test. |
Gene fusions, GUS, GFP and microscopy.
Answer the following questions:
1. Draw a diagram of a typical protein-coding plant gene. The annotation should include the following listed elements, and indicate their typical positions within the gene. Briefly describe the properties of the elements.
upstream regulatory sequences
RNA polymerase initiation site
2. What are transcription factors, and how do they interact with genes?
3. What is a reporter gene, and how is it detected?
4. Draw a schematic view of the differences between plant transformation vectors that might be used to produce protein fusions, transcriptional fusions and for enhancer detection in plants.
5. Why are these different gene fusions useful?
6. Both green fluorescent protein (GFP) and ß-glucuronidase (GUS) are widely used as reporter genes in plants. Describe major advantages of GFP over GUS, and vice versa.
7. As a keen plant biologist, you wish to construct your own fluorescent microscope for work with a variant of green fluorescent protein. The excitation and emission spectra of the protein are shown below. You have access to a box of filters that transmit or reflect light in the following bands:
Bandpass Filter 1: 350-460nm; Filter 2: 450-490nm; Filter 3: 515-560nm, Filter 4: 550-570nm
Beamsplitter Mirror 1: 460nm; Mirror 2: 500nm; Mirror 3: 580nm; Mirror 4: 595nm
Longpass Filter 1: >470nm; Filter 2: >530nm; Filter 3: >580nm; Filter 4: >635nm
You need to construct a suitable filter block for imaging GFP. What are the roles of the different types of filter in the microscope lightpath? What filters will you choose for the (i) excitation filter, (ii) beamsplitter and (iii) emission filter. Explain why you chose this combination? |
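One way to reason about question 7 is to encode the filter inventory and search for combinations in which the excitation band passes the excitation peak, the beamsplitter cutoff sits between the excitation and emission bands, and the emission filter overlaps the emission spectrum. The spectra referred to in the question are not reproduced here, so the peak and emission-range values below are assumptions based on typical (E)GFP (~488 nm excitation, emission from roughly 495 to 570 nm).

```python
# Sketch: brute-force search for a valid GFP filter block from the quiz's
# filter inventory. EXC_PEAK and EMISSION_RANGE are assumed typical GFP
# values, since the actual spectra are not reproduced in the text.

EXC_PEAK = 488                 # assumed excitation peak (nm)
EMISSION_RANGE = (495, 570)    # assumed span of the emission spectrum (nm)

bandpass = {"BP1": (350, 460), "BP2": (450, 490), "BP3": (515, 560), "BP4": (550, 570)}
mirrors = {"M1": 460, "M2": 500, "M3": 580, "M4": 595}

def overlap(a, b):
    """Width (nm) of the overlap between two wavelength bands."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

candidates = []
for en, exc in bandpass.items():
    if not (exc[0] <= EXC_PEAK <= exc[1]):
        continue                       # excitation filter must pass the peak
    for mn, cut in mirrors.items():
        if exc[1] > cut:
            continue                   # dichroic must reflect the excitation band
        for fn, emi in bandpass.items():
            if emi[0] >= cut and overlap(emi, EMISSION_RANGE) > 0:
                candidates.append((overlap(emi, EMISSION_RANGE), en, mn, fn))

print(max(candidates))  # (45, 'BP2', 'M2', 'BP3')
```

Under these assumptions the best-scoring block is excitation 450–490 nm, 500 nm beamsplitter, and 515–560 nm emission filter, which matches the conventional layout: the excitation filter selects the illumination band, the beamsplitter reflects it toward the sample while transmitting the longer-wavelength fluorescence, and the emission filter blocks residual excitation light.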
Unlike humans, plants are not able to eat food to meet their energy needs; instead, they have to make their own energy by photosynthesis.
As every GCSE student can tell you, photosynthesis is the process through which light energy is converted into chemical energy in the form of sugar. That sugar is in turn used by the plant for things like respiration, growth and reproduction. Some of the sugar is also stored for later use, by being converted into starch.
Plants make, and store temporary supplies of starch in their leaves, which they use during the night when there is no light available for photosynthesis. Many plants, including crop plants like wheat and potatoes, also make starch in their seeds and storage organs (their grains and tubers), which is used for germination and sprouting.
But what exactly is starch? Starch consists of chains of glucose molecules bound together into larger molecules called polysaccharides. There are two types of polysaccharide in starch:
- Amylose – a linear chain of glucose
- Amylopectin – a highly branched chain of glucose
Depending on the plant, starch is made up of between 20-25% amylose and 75-80% amylopectin.
As well as being important for plants, starch is also extremely important to humans. Starchy food for example is the main source of digestible carbohydrates in our diet.
The structure of starch can affect digestibility, with high amylose being more resistant to degradation. As such, foods with high levels of amylose are an important source of ‘resistant starch’, which has the potential to provide a range of health benefits by lowering elevated blood glucose levels and insulin response to carbohydrate-based meals that are low in fibre.
Starch also has many non-food applications, including use within the papermaking industry (providing strength to paper), manufacturing of adhesives, the textile industry (as a stiffener), and the production of bioplastics.
The many varied uses of starch depend on its structure, with granule shape and size affecting the properties of starch, and therefore its uses. For this reason, it is important for us to understand more about starch granules; including how starch polymer growth is directed, how different shaped and sized granules are formed, and how the plant controls the number of granules made.
A lot of our understanding about starch initiation and formation in leaves has come from work on the model plant Arabidopsis thaliana.
However, there is still a lot of work to do to understand granule initiation and formation within cereal grains. As cereals are one of the major food crops, and a major source of starch for industrial processes, understanding granule initiation and formation in grains is crucial.
At the John Innes Centre we are using a large mutant collection of wheat to investigate the granule initiation and have already isolated several promising mutants with radically altered starch granules. By investigating these further, we hope to identify key candidates involved in granule initiation in wheat, which may allow the development of new tools to improve crop quality and tailor starch production properties for different uses. |
It is 19 years since the world first met to discuss global early warning for natural hazards in Potsdam, Germany. Since then events have only underlined how important early warning systems are for saving lives across the globe.
The infrequency of tsunami events in the Indian Ocean explains why no such system was in place there when the 2004 Indian Ocean tsunami struck, while one had been installed in the Pacific Ocean 55 years earlier, in 1949, following the Aleutian Islands earthquake, which resulted in 165 casualties.
Filling obvious gaps can justify the expense of a single hazard early warning system as in the case of Bangladesh which has lost hundreds of thousands of lives to cyclones but has reduced the death toll significantly in the last two decades thanks to an effective community-based cyclone preparedness programme.
However, it is often the case, particularly in developing countries, that a multi-hazard approach makes more sense economically, and operationally, especially in parts of the world exposed to many different types of hazard.
That is one key reason why the Mexican Government, the UN Office for Disaster Risk Reduction (UNISDR), and the World Meteorological Organization (WMO), with many partners, are organizing the first-ever Multi-Hazard Early Warning Conference.
This is timely in the context of extreme weather events that have doubled over the last 40 years and continue to claim many lives and cause huge economic losses particularly in countries which struggle to maintain viable climate and weather information services.
Multi-hazard early warning systems are essential to achieving reductions in loss of life, the numbers of people affected by disasters, economic losses and damage to critical infrastructure – targets which Governments have agreed to under the global plan for reducing such losses, the Sendai Framework for Disaster Risk Reduction. |
Sustainability refers to the protection of natural resources and the conduct of activities that meet the needs of today without compromising the ability of future generations to meet their own needs.
The concept of sustainability is a concept with economic, social and environmental dimensions. Therefore, being sustainable requires people to establish a balance between their economic activities and social lives and the protection of natural resources.
The first dimension to consider is economic sustainability. This concept implies that while people engage in production activities to improve their quality of life, the rate at which they consume natural resources should still allow future needs to be met. Economic sustainability therefore requires that activities are carried out not only to meet the needs of today, but also those of future generations.
Social sustainability means that people have equality and justice in society, their basic needs are met and human rights are protected. Ensuring social sustainability ensures that people’s quality of life is improved and the general welfare of society is increased.
Environmental sustainability is related to the use of natural resources. The consumption of natural resources can cause environmental problems such as damaging the resources of our planet, reducing biodiversity and climate change. Environmental sustainability aims to protect natural resources so that future generations can use them by controlling the consumption of natural resources.
If we do not carry out sustainable activities on Earth, we may face many negative consequences in the future. Firstly, the depletion of natural resources can cause environmental problems such as climate change, reduced biodiversity and damage to the resources of our planet. The depletion of natural resources may therefore lead to an inability to meet basic needs and reduce the overall well-being of society.
Furthermore, if sustainable activities are not realised, economic and social problems may also arise. For example, income inequality may increase, unemployment rates may rise and health problems may arise. These reduce the overall well-being of society and lower people’s quality of life.
Carrying out sustainable activities benefits today’s society as well as leaving a better world for future generations. Sustainability helps to create a healthier environment, a stronger economy and a fairer society. However, to achieve sustainability, people need to change their behaviour and habits. Therefore, a range of efforts at individual, organisational and political level are required to achieve sustainability.
Ultimately, to be sustainable is to act in a way that meets the needs of today while preserving the ability of future generations to meet theirs.
Pyrolysis is the thermal decomposition of organic compounds in an inert atmosphere. (A decomposition reaction is a thermally induced reaction of a chemical compound forming solid and/or gaseous products.)
During pyrolysis, solid, liquid or gaseous products can be generated. If gases are released from the sample during pyrolysis, the changes in mass can be detected by TGA (thermogravimetry). The evolved gases can be identified by EGA (evolved gas analysis). Pyrolysis studies are often carried out on polymers, coal, biomass or organic compounds. Residual masses can be indicative of additional components.
The figure below shows the pyrolysis of PVC in a nitrogen atmosphere. The Gram-Schmidt signal depicts the change in intensity caused within the FT-IR by the infrared-active gases released. Individual gaseous products can then be identified by database comparison. Measurement conditions: RT–800°C, 10 K/min, 40 ml/min nitrogen, sample mass 10 mg
An electromagnetic wave is a phenomenon whereby an electric field and a magnetic field mutually interact and travel through space in the manner of a wave. Electromagnetic waves, expressed as frequency (i.e., the number of waves generated per second), are classified in order of descending frequency as radiation, light, and radio waves.
In places where electricity is flowing, electromagnetic waves are inevitably generated; for example, typical household electrical appliances generate electromagnetic waves with frequency of 50 to 60 Hz. Moreover, electrical devices like microwave ovens and mobile phones generate electromagnetic waves called "microwaves." |
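The frequencies mentioned above translate directly into wavelengths via λ = c / f. The sketch below illustrates this; the 2.45 GHz figure is an assumption (a typical microwave oven operating frequency), since the text itself only says "microwaves".

```python
# Converting the frequencies in the text to wavelengths with lambda = c / f.
# 2.45 GHz is an assumed typical microwave oven frequency.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz: float) -> float:
    return C / frequency_hz

print(f"{wavelength_m(50):.0f} m")           # 50 Hz mains: ~6,000 km
print(f"{wavelength_m(2.45e9) * 100:.1f} cm")  # microwave oven: ~12 cm
```

The enormous wavelength at 50–60 Hz is why household appliances radiate very inefficiently at mains frequency, while centimetre-scale microwaves couple readily to antennas and oven cavities.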
Arts media are the materials and tools used by an artist, composer or designer to create a work of art, for example, "pen and ink" where the pen is the tool and the ink is the material. The following lists types of art and the media each uses.
Further information: List of woods
Main article: Ceramic art
Main article: Outline of drawing and drawings
Film, as a form of mass communication, is itself also considered a medium in the sense used by fields such as sociology and communication theory (see also mass media). These two definitions of medium, while they often overlap, are different from one another: television, for example, utilizes the same types of artistic media as film, but may be considered a different medium from film within communication theory.
Main article: Culinary art
A chef's tools and equipment, including ovens, stoves, grills, and griddles. Specialty equipment may be used, including salamanders, French tops, woks, tandoors, and induction burners.
Glassblowing, Glass fusing, colouring and marking methods.
Main article: Installation art
Further information: Lighting designer
Installation art is a site-specific form of sculpture that can be created with any material. An installation can occupy a large amount of space, create an ambience, or transform or disrupt the space it exists in. One way to distinguish an installation from a sculpture (though this may not apply to every installation) is to try to imagine it in a different space. If the object presents difficulties in a space other than the original, it is probably an installation.
Main article: Outline of painting
Muralists use many of the same media as panel painters, but due to the scale of their works, use different techniques. Some such techniques include:
Main article: Graphic narrative
Comics creators use many of the same media as traditional painters.
The performing arts are a form of entertainment created with the artist's own body, face and presence as a medium. There are many skills and genres of performance, with dance, theatre and re-enactment being examples. Performance art is a performance that may not present a conventional formal linear narrative.
Main article: Outline of photography
In photography a photosensitive surface is used to capture an optical still image, usually utilizing a lens to focus light. Some media include:
In the art of printmaking, "media" tends to refer to the technique used to create a print. Common media include:
Main article: Outline of sculpture
In sculpting, a solid structure and textured surface is shaped or combined using substances and components to form a three-dimensional object. A sculpted work can be built very large, to the point of being considered architecture, although a large statue or bust is more common; it can also be crafted very small and intricate, as jewellery, ornaments and decorative reliefs.
The art of sound can be singular or a combination of speech or objects and crafted instruments, to create sounds, rhythms and music for a range of sonic hearing purposes. See also music and sound art.
The use of technical products as an art medium is a merging of applied art and science, that may involve aesthetics, efficiency and ergonomics using various materials.
In the art of textiles a soft and flexible material of fibers or yarn is formed by spinning wool, flax, cotton, or other material on a spinning wheel and crocheting, knitting, macramé (knotting), weaving, or pressing fibres together (felt) to create a work. |
You should not be surprised if someone tells you that the mains voltage could fluctuate anywhere from 160 volts to 270 volts. Although the majority of our electrical and electronic appliances have some kind of voltage stabilization built in, more than 90 per cent of the faults in these appliances occur due to these power fluctuations. This simple AC mains voltage indicator circuit gives a visual indication of AC mains voltage from 160 volts to 270 volts in steps of 10 volts.
There are twelve LEDs numbered LED1 to LED12 to indicate the voltage level. For input AC mains voltage of less than 160 volts, all the LEDs remain off. LED1 glows when the voltage reaches 160 volts, LED2 glows when the voltage reaches 170 volts and so on. The number of LEDs that glow keeps increasing with every additional 10 volts. When the input voltage reaches 270 volts, all the LEDs glow.
AC Mains Voltage Indicator
The circuit basically comprises three LM339 quad comparators (IC1, IC2 and IC3) and a 12V regulator (IC4). It is powered by regulated 12V DC. For power supply, mains 230V AC is stepped down to 15V AC by step-down transformer X1, rectified by a bridge rectifier comprising diodes D1 through D4, filtered by capacitor C4 and regulated by IC4. The input voltage of the regulator is also fed to the inverting inputs of comparators N1 through N12 for sensing the mains voltage level.
The LED-based display circuit is built around quad op-amp comparators IC1 through IC3. The inverting input of all the comparators is fed with the unregulated DC voltage, which is proportional to mains input, whereas the non-inverting inputs are derived from regulated output of IC4 through a series network of precision resistors to serve as reference DC voltages.
Resistors R13 to R25 are chosen such that the reference voltage at points 1 to 12 is 0.93V, 1.87V, 2.80V, 3.73V, 4.67V, 5.60V, 6.53V, 7.46V, 8.40V, 9.33V, 10.27V and 11.20V, respectively. When the input voltage varies from 160V AC to 270V AC, the DC voltage at the anode of ZD1 also varies accordingly. With input voltage varying from 160V to 270V, the output across filter capacitors C1 and C2 varies from 14.3V to 24.1V approximately. Zener ZD1 is used to drop a fixed 12V and apply proportional voltages to all comparator stages (inverting pins). Whenever the voltage at the inverting input of a comparator exceeds the reference at its non-inverting input, the LED connected at its output glows.
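The reference ladder and the display scale described above follow two simple linear rules: the reference at point n is n twelfths of 11.20 V, and LED n lights when the mains reaches 150 + 10n volts. The sketch below reproduces the table of values stated in the article from those rules.

```python
# Sketch of the comparator reference ladder and display scale from the
# article: twelve equal reference steps up to 11.20 V, with LED n mapped
# to a mains threshold of 150 + 10*n volts (LED1 = 160 V ... LED12 = 270 V).

TOP_REFERENCE_V = 11.20   # reference voltage at point 12
STEPS = 12

for n in range(1, STEPS + 1):
    ref = n * TOP_REFERENCE_V / STEPS   # reference at point n
    mains = 150 + 10 * n                # AC voltage at which LED n turns on
    print(f"point {n:2d}: {ref:5.2f} V reference -> LED{n} on at {mains} V AC")
```

Running this reproduces the listed values (0.93 V, 1.87 V, ... 11.20 V), confirming that R13 to R25 form an equal-step divider across the regulated 12 V rail.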
Construction & testing
Assemble the circuit on a general purpose PCB such that all the LEDs make a bar graph. In the bar graph, mark LED1 for minimum level of 160V, then LED2 for 170V and so on. Finally, mark LED12 for maximum level of 270V.
Now your test gadget is ready to use. For measuring the AC voltage, simply plug the gadget into the mains AC measuring point, press switch S1 and observe the bar graph built around the LEDs. Let's assume that LED1 through LED6 glow. The measured voltage in this case is between 210V and 220V. Similarly, if all the LEDs glow, it means that the voltage is 270V or more.
The article was first published in May 2006 and has recently been updated. |
Once roofed by ice for millennia, a 10,000 square km portion of the Antarctic seabed represents a true frontier, one of Earth’s most pristine marine ecosystems, made suddenly accessible to exploration by the collapse of the Larsen A and B ice shelves, 12 and five years ago respectively. Now it has yielded secrets to some 52 marine explorers who accomplished the seabed’s first comprehensive biological survey during a 10-week expedition aboard the German research vessel Polarstern.
While their families at home in 14 countries were enjoying New Year’s dinners, experts on the powerful icebreaking research ship were logging finds from icy waters as deep as 850 meters off the Antarctic Peninsula – an area rapidly changing in fundamental ways. The recent report of the Intergovernmental Panel on Climate Change shows nowhere on Earth warming more quickly than this corner of Antarctica, a continent 1.5 times the size of continental USA.
The expedition forms part of the Census of Antarctic Marine Life (http://www.caml.aq), which has 13 upcoming voyages scheduled during International Polar Year, to be launched in Paris March 1. A project of the global Census of Marine Life (http://www.coml.org) collaboration, CAML is responsible for the synthesis of taxonomic data and supports the efforts of national programs the world over.
Says CAML leader Michael Stoddart of Australia: “What we learned from the Polarstern expedition is the tip of an iceberg, so to speak. Insights from this and CAML’s upcoming International Polar Year voyages will shed light on how climate variations affect ice-affiliated species living in this region.”
Leaving South Africa Nov. 23, the research icebreaker Polarstern, operated by the Alfred Wegener Institute for Polar and Marine Research, criss-crossed the northwest Weddell Sea. The cruise included the Larsen A and B zones, an area about the size of Jamaica (half the size of New Jersey, or a third the size of Belgium). The voyage ended Jan. 30.
With sophisticated sampling and observation gear, including a camera-equipped, remotely-operated vehicle, experts on the Polarstern have returned with revealing photography of life on a seabed uncapped by the disintegration of Larsen A and B. The expedition uncovered a wealth of new insights and brilliant images of unfamiliar creatures among an estimated 1,000 species collected, several of which may prove new to science.
The Polarstern’s mission included charting the environmental impact of history’s largest known ice shelf collapses. Polarstern’s team set out to find what indigenous forms of marine life existed under Larsen A and B, and what new organisms now are opportunistically moving in, redefining the ecosystem.
“The breakup of these ice shelves opened up huge, near pristine portions of the ocean floor, sealed off from above for at least 5,000 years, and possibly up to 12,000 years in the case of Larsen B,” says Julian Gutt, a marine ecologist at Germany’s Alfred Wegener Institute for Polar and Marine Research and chief scientist on the Polarstern expedition.
“The collapse of the Larsen shelves may tell us about impacts of climate-induced changes on marine biodiversity and the functioning of the ecosystem. Until now, scientists have glimpsed life under Antarctica’s ice shelves only through drill holes. We were in the unique position to sample wherever we wanted in a marine ecosystem considered one of the least disturbed by humankind anywhere on the planet.”
“This knowledge of biodiversity is fundamental to understanding ecosystem functioning,” he adds. “The results of our efforts will advance our ability to predict the future of our biosphere in a changing environment.”
When Antarctic glaciers reach the coast of the continent, they begin to float and become ice shelves, from which icebergs calve. Since 1974, a total of 13,500 square km of ice shelves have disintegrated in the Antarctic Peninsula, a phenomenon linked to regional temperature increases in the past 50 years. Growing numbers of scientists worry that similar break-ups in other areas could lead to increases in ice flow and cause sea levels to rise.
Polarstern Discoveries and Insights
Larsen zone seafloor sediments were extremely varied, ranging from bedrock to pure mud. As a result, animals living on the sediment (epifauna) were highly varied as well, though far less abundant in the Larsen A and B areas – perhaps only 1% animal abundance compared to sea beds in the eastern part of the Weddell Sea.
In the relatively shallow waters of the Larsen zone, scientists were intrigued to find abundant deep sea lilies (members of a group called crinoids) and their relatives, sea cucumbers and sea urchins.
These species are more commonly found at depths of around 2,000 meters, where they have adapted to life with far scarcer resources – conditions similar to those under an ice shelf.
Apparent newcomers found colonizing the Larsen zone include fast-growing, gelatinous sea squirts. The scientists found dense patches of sea squirts and say they were likely able to colonize the Larsen B area only after the ice shelf broke up in 2002.
Very slow-growing animals called glass sponges were discovered, with greatest densities in the Larsen A area, where life forms have had seven more years to re-colonize than Larsen B. The high number of juvenile forms of glass sponges observed probably indicates shifting species composition and abundance in the past 12 years.
Biodiversity in the Antarctic Peninsula
Among many hundreds of animal specimens collected on the voyage:
Extensive analyses will be conducted to determine whether candidate specimens are in fact new species. Confirmed new species will be logged in the Census of Marine Life OBIS (Ocean Biogeographic Information System) database and its Antarctic component SCAR-MarBIN (the Marine Biodiversity Information Network), which to date has recorded some 5,957 marine life forms, with an estimated 5,000 to 11,000 species yet to be discovered.
The remotely operated vehicle (ROV) used on Polarstern revealed less scouring damage than anticipated from icebergs that broke away from the Larsen shelves. In shallower depths to about 220 metres, the scientists found considerable richness of species variety.
“Iceberg disturbance was much more obvious north of the Larsen A and B areas where icebergs more typically run aground,” says Dr. Gutt. “In those outer areas, at depths of roughly 100 meters, we observed fresh ice scour marks everywhere and early stages of marine life re-colonization but no mature community. At around 200 meters depth we discovered a mosaic of life in different stages of re-colonization.”
A potentially far-reaching find by the Polarstern ROV: small clusters of dead clamshells littering an area on the dark ocean floor and pointing to the presence of a very rare "cold seep" – essentially a sea floor vent spewing methane and sulphide. Seeps can create a temporary habitat for animal life in otherwise barren, inhospitable terrain for many years before extinguishing, abruptly starving the community.
The first-ever cold vent on Antarctica’s continental shelf was discovered at roughly 830 metres depth two years ago by a U.S. research team. The ROV located it and sampled the soil sediments, the first analysis of which revealed concentrated methane and sulphide. Clamshells found will be studied to determine their age and the life span of the colony.
In all, the Polarstern and its helicopter crews dedicated some 700 and 8,000 nautical miles, respectively, to recording the presence and behaviour of marine mammals, which included Minke whales close to the pack ice edge and very rare beaked whale species near Elephant Island.
“It was surprising how fast such a new habitat was used and colonized by Minke whales in considerable densities,” says specialist Dr. Meike Scheidat of Germany. “They indicate that the ecosystem in the water column changed considerably.”
Fisheries investigations were carried out at islands west and north of the Antarctic Peninsula. The results of 85 hauls over 19 days show the biomass of two Antarctic cod species has increased since a survey in 2003, while stocks of Blackfin and Mackerel Icefish have decreased. The results will contribute to fish stock monitoring and assessment ongoing under the Convention on the Conservation of Antarctic Marine Living Resources (http://www.ccamlr.org).
Preliminary findings from the voyage will be confirmed by detailed analysis at the scientists’ home institutes over the next few years.
According to Dr. Stoddart, a significant consequence of rising temperatures in the Antarctic Peninsula is the slow decrease of sea ice and of the planktonic algae that grow underneath it. These algae feed krill, small shrimp-like creatures, and therefore represent the bottom rung on a marine food chain that eventually sustains the iconic large Antarctic species: penguins, whales and seals. An adult blue whale alone eats about 4 million individual krill per day.
“Algae is a source of abundant, high-quality winter food and is utterly central to the health of the whole ecosystem,” says Dr. Stoddart, adding that recent research by colleagues from the U.K. shows krill stocks decreasing significantly around the Antarctic Peninsula.
However, cautions Dr. Gutt: “Predicting the future of higher levels in the food chain, e.g. animals living at the sea-floor or fish, is very difficult. It is for example clear that in the Larsen zone a major biodiversity shift will happen and the unique under-ice shelf system will disappear in this limited area, but we have to analyze carefully our raw data to provide, as a first step, a basis for such predictions. Besides modeling, further observations and ecological field studies are necessary.”
“This is virgin geography. If we don’t find out what this area is like now following the collapse of the ice shelf, and what species are there, we won’t have any basis to know in 20 years’ time what has changed, and how global warming has altered the marine ecosystem,” says Gauthier Chapelle, outreach officer for the expedition and biologist at the Brussels-based International Polar Foundation.
Says Tarik Chekchak, Program Manager of the Cousteau Society: “The Southern Ocean spans 35 million square km – 10% of Earth’s ocean surface, and ice shelves cover 1.5 million square km of it. When Captain Cousteau explored Antarctica aboard the Calypso in 1972-73, the Larsen B ice shelf was 3,250 square km bigger and krill abundance in the Peninsula was much higher than today. The annual local temperature has risen 2.5 °C since the 1940’s.
“Impacts of these changes on the Southern Ocean ecosystem are substantial. Interplay between ocean circulation, sea ice extent, ice shelf cover and the iceberg’s mechanical action on the sea bed seem to determine the characteristics of some key planktonic and benthic communities. In a changing environment, the results of the CAML efforts are key to advancing our ability to understand our biosphere, inform public debate and allow decision-makers to lead us into a more sustainable future.”
The above post is reprinted from materials provided by Alfred Wegener Institute for Polar and Marine Research. Note: Materials may be edited for content and length.
The arctic fox is an example of a complex animal that has adapted to its environment and illustrates the relationships between an animal’s form and function. The structures of animals consist of primary tissues that make up more complex organs and organ systems. Homeostasis allows an animal to maintain a balance between its internal and external environments.
Animals vary in form and function. From a sponge to a worm to a goat, an organism has a distinct body plan that limits its size and shape. Animals’ bodies are also designed to interact with their environments, whether in the deep sea, a rainforest canopy, or the desert. Therefore, a large amount of information about the structure of an organism’s body (anatomy) and the function of its cells, tissues and organs (physiology) can be learned by studying that organism’s environment.
Adding and Subtracting with Negative Numbers
Materials: Large number line up front for the class to see, student number lines
- Say: We've been learning how to add and subtract with positive and negative numbers on a number line. Do the following additions on your number line: -3 + (-4); -6 + (-3); -2 + (-4).
- Ask: Is there anything that is the same about all of the answers?
Students will respond that all the answers are negative.
- Ask: Do you think that when we add two negative numbers, the sum will always be negative?
The answer is yes.
Have students give some examples to support their answers, such as walking backwards 3 steps then backwards 4 steps more or borrowing 5 dollars and then borrowing 3 dollars more.
- Ask: When we add two positive numbers, do we always get a positive number? (Yes)
If students ask about the sign of the sum when adding a positive and a negative number, you can tell them that it will vary depending on the numbers being added. If your students seem adept with the basic concepts, point out that the number which is farther from zero determines the sign of the answer.
- Ask: If the sum of two negative numbers is -8, what could those two numbers be?
If the sum of a positive and a negative number is -3, what could those two numbers be?
If the sum of a positive and a negative number is +4, what could those two numbers be?
A good homework project would be to find all the pairs of numbers between -8 and 8 whose sum is -3.
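That homework (all integer pairs between -8 and 8 whose sum is -3) can be checked with a short script, purely as a teacher's answer key:

```python
# List every pair of integers between -8 and 8 (inclusive) whose sum is -3.
# Each pair is reported once, with the smaller number first.
pairs = [(a, b) for a in range(-8, 9) for b in range(a, 9) if a + b == -3]
print(pairs)
# -> [(-8, 5), (-7, 4), (-6, 3), (-5, 2), (-4, 1), (-3, 0), (-2, -1)]
```

There are seven such pairs, which makes a manageable target for students to find and verify on their number lines.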
- Ask: How does the sum of 5 + (-9) compare to the sum of (-9) + 5?
How does the sum of -6 + (-4) compare to the sum of -4 + (-6)?
What property do these illustrate?
Make sure that students recognize that the commutative property for addition is true for integers. Similarly, emphasize that the associative property for addition is true for integers.
- Have the students do the following subtraction problems using their number lines:
-7 - (+2) and -7 + (-2)
-3 - (+4) and -3 + (-4)
-6 - (+1) and -6 + (-1)
-5 - (+4) and -5 + (-4).
- Ask: What was true about the answers for each pair of problems?
It should be stated by them or by you that subtracting a positive number from a negative number always gives a difference that is negative, and that subtracting a positive number is like adding the opposite of that number.
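This equivalence (subtracting a positive number equals adding its opposite) is easy to verify exhaustively over a small range with a few lines of Python:

```python
# Verify a - (+b) == a + (-b) for every integer a in [-10, 10]
# and every positive integer b in [1, 10].
for a in range(-10, 11):
    for b in range(1, 11):
        assert a - b == a + (-b)
print("Subtracting a positive number always equals adding its opposite.")
```

Of course the identity holds for all numbers, not just this range; the exhaustive check simply mirrors what students do case by case on their number lines.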
- Ask: If the difference between a positive and a negative number is 7, what could those two numbers be?
If the difference between a positive and a negative number is -4, what could those two numbers be?
Wrap-Up and Assessment Hints
Students should be allowed to create and use number lines during assessment. You may want to encourage more advanced students to practice adding and subtracting with negative numbers without using a number line. Also, give students problems that test their understanding of the properties of addition using negative numbers.
Terrorists have frequently used explosive devices as one of their most common weapons. Terrorists do not have to look far to find out how to make explosive devices; the information is readily available in books and other information sources. Explosive devices can be highly portable, using vehicles and humans as a means of transport. They are easily detonated from remote locations or by suicide bombers.
Conventional bombs have been used to damage and destroy financial, political, social, and religious institutions. Attacks have occurred in public places and on city streets with thousands of people around the world injured and killed.
Learn what to do if you receive a bomb threat or get a suspicious package or letter.
Devastating acts, such as the terrorist attacks in Oklahoma City and on September 11th, have left many concerned about the possibility of future incidents in the United States.
Nevertheless, there are things you can do to prepare for the unexpected. Preparing for such events will reduce the stress that you may feel now, and later, should another emergency arise.
Taking preparatory action can reassure you and your children that you can exert a measure of control even in the face of such events.
Before an Explosion
The following are things you can do to protect yourself, your family and your property in the event of an explosion.
- Build an Emergency Supply Kit, which includes items like non-perishable food, water, a battery-powered or hand-crank radio, extra flashlights and batteries. You may want to prepare a kit for your workplace and a portable kit to keep in your car in case you are told to evacuate. This kit should include:
- Copies of prescription medications and medical supplies.
- Bedding and clothing, including sleeping bags and pillows.
- Copies of important documents: driver’s license, Social Security card, proof of residence, insurance policies, wills, deeds, birth and marriage certificates, tax records, etc.
- Make a Family Emergency Plan. Your family may not be together when disaster strikes, so it is important to know how you will contact one another, how you will get back together and what you will do in case of an emergency.
- Plan places where your family will meet, both within and outside of your immediate neighborhood.
- It may be easier to make a long-distance phone call than to call across town, so an out-of-town contact may be in a better position to communicate among separated family members.
- You may also want to inquire about emergency plans at places where your family spends time: work, daycare and school. If no plans exist, consider volunteering to help create one.
- Know your community's warning systems and disaster plans, including evacuation routes.
- Notify caregivers and babysitters about your plan.
- Make plans for your pets
If you receive a telephoned bomb threat, you should do the following:
- Get as much information from the caller as possible. Try to ask the following questions:
- When is the bomb going to explode?
- Where is it right now?
- What does it look like?
- What kind of bomb is it?
- What will cause it to explode?
- Did you place the bomb?
- Keep the caller on the line and record everything that is said.
- Notify the police and building management immediately.
Be wary of suspicious packages and letters. They can contain explosives, chemical or biological agents. Be particularly cautious at your place of employment.
Some typical characteristics postal inspectors have detected over the years, which ought to trigger suspicion, include parcels that:
- Are unexpected or from someone unfamiliar to you.
- Have no return address or a return address that can’t be verified as legitimate.
- Are marked with restrictive endorsements such as “Personal,” “Confidential,” or “Do not X-ray.”
- Have protruding wires or aluminum foil, strange odors or stains.
- Show a city or state in the postmark that doesn’t match the return address.
- Are of unusual weight given their size or are lopsided or oddly shaped.
- Are marked with threatening language.
- Have inappropriate or unusual labeling.
- Have excessive postage or packaging material, such as masking tape and string.
- Have misspellings of common words.
- Are addressed to someone no longer with your organization or are otherwise outdated.
- Have incorrect titles or titles without a name.
- Are not addressed to a specific person.
- Have hand-written or poorly typed addresses.
With suspicious envelopes and packages other than those that might contain explosives, take these additional steps against possible biological and chemical agents.
- Refrain from eating or drinking in a designated mail handling area.
- Place suspicious envelopes or packages in a plastic bag or some other type of container to prevent leakage of contents. Never sniff or smell suspect mail.
- If you do not have a container, then cover the envelope or package with anything available (e.g., clothing, paper, trash can, etc.) and do not remove the cover.
- Leave the room and close the door or section off the area to prevent others from entering.
- Wash your hands with soap and water to prevent spreading any powder to your face.
- If you are at work, report the incident to your building security official or an available supervisor, who should notify police and other authorities without delay.
- List all people who were in the room or area when this suspicious letter or package was recognized. Give a copy of this list to both the local public health authorities and law enforcement officials for follow-up investigations and advice.
- If you are at home, report the incident to local police.
During an Explosion
- Get under a sturdy table or desk if things are falling around you. When they stop falling, leave quickly, watching for obviously weakened floors and stairways. As you exit from the building, be especially watchful of falling debris.
- Leave the building as quickly as possible. Stay low if there is smoke. Do not stop to retrieve personal possessions or make phone calls.
- Do not use elevators.
- Check for fire and other hazards.
- Once you are out, do not stand in front of windows, glass doors or other potentially hazardous areas.
- Move away from sidewalks or streets to be used by emergency officials or others still exiting the building.
- If you are trapped in debris, use a flashlight, if possible, to signal your location to rescuers.
- Tap on a pipe or wall so rescuers can hear where you are.
- If possible, use a whistle to signal rescuers.
- Shout only as a last resort. Shouting can cause a person to inhale dangerous amounts of dust.
- Avoid unnecessary movement so you don’t kick up dust.
- Cover your nose and mouth with anything you have on hand. (Dense-weave cotton material can act as a good filter. Try to breathe through the material.)
After an Explosion
As we learned from the events of September 11, 2001, the following things can happen after a terrorist attack:
- There can be significant numbers of casualties and/or damage to buildings and the infrastructure. So employers need up-to-date information about any medical needs you may have and on how to contact your designated beneficiaries.
- Heavy law enforcement involvement at local, state and federal levels follows a terrorist attack due to the event's criminal nature.
- Health and mental health resources in the affected communities can be strained to their limits, maybe even overwhelmed.
- Extensive media coverage, strong public fear and international implications and consequences can continue for a prolonged period.
- Workplaces and schools may be closed, and there may be restrictions on domestic and international travel.
- You and your family or household may have to evacuate an area, avoiding roads blocked for your safety.
- Clean-up may take many months.
If you require more information about any of these topics, the following resources may be helpful.
- IED Attack Fact Sheet: Improvised Explosive Devices. Document providing preparation guidance for a terrorist attack or similar emergency.
- Terrorism, Preparing for the Unexpected. Document providing preparation guidance for a terrorist attack or similar emergency.
Find additional information on how to plan and prepare for an explosion and learn about available resources by visiting the following websites: |
HPO option is available for this course.
Thermal physics deals with large numbers of particles: anything big enough to see with a conventional microscope. From understanding the greenhouse effect to the blackbody radiation left over from the Big Bang, no other physical theory is used more widely throughout science.
This course begins with classical thermodynamics to introduce the fundamental concepts of temperature, energy, and entropy. These concepts are then used to explore free energy, heat, and the fundamental behaviour of heat engines and refrigerators. The physical and mathematical bases of statistical mechanics, in which the laws of statistics are used to make the connection between the quantum behaviour of 1 atom and the behaviour of bulk matter made up of 10^23 atoms, are then introduced. This leads to the statistical physics concepts of temperature, entropy, Boltzmann and Gibbs factors, partition functions, and distribution functions. These concepts are applied to both classical and quantum systems, including phase transformations, blackbody radiation, and Fermi gases.
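As a small taste of the statistical machinery listed above, the Boltzmann factor and partition function for a two-level system can be computed in a few lines. The 0.02 eV level splitting and 300 K temperature below are illustrative values, not taken from the course:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
eV = 1.602176634e-19  # 1 electron-volt in joules

def boltzmann_probs(energies_j, temperature_k):
    """Occupation probabilities p_i = exp(-E_i/kT) / Z for each level,
    where Z, the sum of the Boltzmann factors, is the partition function."""
    weights = [math.exp(-e / (k_B * temperature_k)) for e in energies_j]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

# Two-level system split by 0.02 eV, at room temperature (300 K)
p_ground, p_excited = boltzmann_probs([0.0, 0.02 * eV], 300.0)
print(f"ground: {p_ground:.3f}, excited: {p_excited:.3f}")
```

Since 0.02 eV is comparable to kT at room temperature (about 0.026 eV), both levels carry appreciable probability, with the ground state favoured; making the splitting much larger than kT drives the excited-state probability toward zero.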
Upon successful completion, students will have the knowledge and skills to:
1. Identify and describe the statistical nature of concepts and laws in thermodynamics, in particular: entropy, temperature, chemical potential, free energies and partition functions.
2. Use statistical physics methods, such as the Boltzmann distribution, Gibbs distribution, and Fermi-Dirac and Bose-Einstein distributions, to solve problems in physical systems.
3. Apply the concepts and principles of black-body radiation to analyze radiation phenomena in thermodynamic systems.
4. Apply the concepts and laws of thermodynamics to solve problems in thermodynamic systems such as gases, heat engines and refrigerators etc.
5. Analyze phase equilibrium conditions and identify types of phase transitions of physical systems.
6. Make connections between applications of general statistical theory in various branches of physics.
7. Design, set up, and carry out experiments; analyse data recognising and accounting for errors; and compare with theoretical predictions.
Assessment will be based on:
- Weekly problem sheets and/or quizzes to assess abilities to analyse problems, identify approaches to solutions, and apply the concepts and mathematical formalisms of thermal physics (35%; LO 1-6)
- An extended research assignment resulting in a paper and a presentation, providing an opportunity to focus on a chosen aspect of thermal physics, thus allowing students to gain a deeper appreciation of the structure and applications of thermal physics (15%; LO 1-6)
- Laboratory component to evaluate understanding of the significance of particular experimental results and the ability to integrate theoretical and experimental work (20%; LO 2, 3, 4, 7)
- Final exam (30%; LO 1-6)
The ANU uses Turnitin to enhance student citation and referencing techniques, and to assess assignment submissions as a component of the University's approach to managing Academic Integrity. While the use of Turnitin is not mandatory, the ANU highly recommends Turnitin is used by both teaching staff and students. For additional information regarding Turnitin please visit the ANU Online website.
A total of approximately twenty-eight lectures and thirty hours of tutorials and laboratory work.
Requisite and Incompatibility
An Introduction to Thermal Physics, Daniel V Schroeder. Published by Addison Wesley Longman, 2000.
Assumed Knowledge
It is desirable that students take MATH2305 or MATH2405 simultaneously with PHYS2013 unless they have previously completed MATH2023, but it is not a course requirement.
Tuition fees are for the academic year indicated at the top of the page.
If you are a domestic graduate coursework or international student you will be required to pay tuition fees. Tuition fees are indexed annually. Further information for domestic and international students about tuition and other fees can be found at Fees.
- Student Contribution Band:
- Unit value: 6 units
If you are an undergraduate student and have been offered a Commonwealth supported place, your fees are set by the Australian Government for each course. At ANU 1 EFTSL is 48 units (normally 8 x 6-unit courses). You can find your student contribution amount for each course at Fees. Where there is a unit range displayed for this course, not all unit options below may be available. |
In the last few chapters, we've looked at the Earth's crust—its mineral composition, its lithospheric plates, and the landforms created by volcanic and tectonic activity. Now let's examine the shallow surface layer in which life exists. We'll look first at how rocks are softened and how they break up. Later, we'll see how the resulting rock materials move downhill under the force of gravity.
Weathering describes the combined action of all processes that cause rock to disintegrate physically and decompose chemically because of exposure near the Earth's surface. There are two types of weathering.
In physical weathering, rocks are fractured and broken apart. In chemical weathering, rock minerals are transformed from types that were stable when the rocks were formed to types that are now stable at the temperatures and pressures of the Earth's surface. Weathering produces regolith—a surface layer of weathered rock particles that lies above solid, unaltered rock—and also creates a number of distinctive landforms.
One of the most important physical weathering processes in cold climates is frost action. Unlike most liquids, water expands when it freezes. If you've ever left a bottle of water chilling in the freezer overnight only to find a mass of ice surrounded by broken glass the next morning, you've seen this phenomenon first-hand. As water in the pore spaces of rocks freezes and thaws repeatedly, expansion can break even extremely hard rocks into smaller fragments.
Water penetrates fractures in bedrock. These fractures, called joints, are created when rocks are exposed to heat and pressure, then cool and contract. Joints typically occur in parallel and intersecting planes, creating natural surfaces of weakness in the rock. Frost action then causes joint-block separation. Water invades sedimentary rocks along their stratification planes, or bedding planes. Joints often cut bedding planes at right angles, and relatively weak stresses will separate the joint blocks. Water can also freeze between mineral grains in igneous rocks, separating the grains to create a fine gravel or coarse sand of single mineral particles. This process is called granular disintegration.
All climates that have a winter season with cycles of freezing and thawing show the effects of frost action. On high mountain summits and in the arctic tundra, large angular rock fragments can accumulate in a layer that completely blankets the hard rock underneath. The result is a rock sea, or rock glacier, that is constantly churned by frost action. On cliffs of bare rock, frost action can detach angular blocks that fall to the base of the cliff. These loose fragments are called talus, and if block production is rapid, a talus slope of coarse rubble forms.
A similar physical weathering process occurs in dry climates. Salt-crystal growth in rock pores can disintegrate rock, and this process carves out many of the niches, shallow caves, rock arches, and pits seen in sandstones of arid regions. During long drought periods, ground water moves to the rock surface by capillary action—a process in which the water's surface tension causes it to be drawn through fine openings and passages in the rock. The same surface tension gives water droplets their round shape. The water evaporates from the sandstone pores, leaving behind tiny crystals of minerals like halite (sodium chloride), calcite (calcium carbonate), or gypsum (calcium sulfate). Over time, the force of these growing crystals breaks the sandstone apart, grain by grain.
Rock at the base of cliffs is especially susceptible to salt-crystal growth. Salt crystallization also damages masonry buildings, concrete sidewalks, and streets. Salt-crystal growth occurs naturally in arid and semiarid regions, but in humid climates, rainfall dissolves salts and carries them downward to ground water.
OTHER PHYSICAL WEATHERING PROCESSES
Unloading, or exfoliation, is another widespread process that weathers rocks. Rock that forms deep beneath the Earth's surface is compressed by the rock above. As the upper rock is slowly worn away, the pressure is reduced, so the rock below expands slightly. This expansion makes the rock crack in layers parallel to the surface, creating a sheeting structure. In massive rocks like granite or marble, thick, curved layers or shells of rock peel free from the parent mass below, producing an exfoliation dome. Thermal expansion from hot fires can also generate or enhance exfoliation.
Although first-hand evidence is lacking, it seems likely that daily temperature changes also break up surface layers of rock that have already been weakened by other weathering agents. Most rock-forming minerals expand when heated and contract when cooled, so intense heating by the Sun during the day alternating with nightly cooling exerts disruptive forces on the rock.
Plant roots can also break up rock as they wedge joint blocks apart. You've probably seen concrete sidewalk blocks that have been fractured and uplifted by the growth of tree roots. This process also happens when roots grow between rock layers or joint blocks.
Chemical reactions can turn rock minerals into new minerals that are softer and bulkier and therefore easier to erode. And some acids can dissolve minerals, washing them away in runoff. These processes are examples of chemical weathering.
Chemical reactions proceed more rapidly at warmer temperatures, so chemical weathering is most effective in the warm, moist climates of the equatorial, tropical, and subtropical zones. There, hydrolysis and oxidation, working over thousands of years, have decayed igneous and metamorphic rocks down to depths as great as 100 m (about 300 ft). The decayed rock material is soft, clay-rich, and easily eroded. In dry climates, oxidation and hydrolysis weather exposed granite to produce many interesting boulder and pinnacle forms.
Acid action is another form of chemical weathering. Carbonic acid is a weak acid formed when carbon dioxide dissolves in water. It is found in rainwater, soil water, and stream water, and it slowly dissolves some types of minerals. Carbonate sedimentary rocks, such as limestone and marble, are particularly susceptible to carbonic acid action, producing many interesting surface forms. Carbonic acid in ground water dissolves limestone, creating underground caverns and distinctive landscapes that form when these caverns collapse.
In urban areas, sulfur and nitrogen oxides pollute the air. When these gases dissolve in rainwater, we get acid precipitation, which rapidly dissolves limestone and chemically weathers other types of building stones, stone sculptures, building decorations, and tombstones. Soil acids that form as microorganisms digest organic matter also rapidly dissolve basaltic lava in the wet low-latitude climates. |
In the human body, various organs secrete natural fluids that clean and maintain them.
Some of them are:
- Tears from the eyes
- Saliva from the mouth
- Mucus from the throat and nose
- Sweat from the skin
- Wax from the ears
Whenever these fluids are secreted in excess, they must be cleaned away.
Earwax is an important part of the human body’s cleaning mechanism, protecting the eardrums and the ear canal from dirt, bacteria, and small insects. When too much wax accumulates, it may be necessary to clean the ears.
Cleaning the ear canals with cotton swabs, cotton buds, safety pins, hairpins, sticks, chicken feathers, and the like is not advisable. It is dangerous and risky to clean the ear canals with any foreign object; the eardrum may be damaged, causing hearing problems.
We can clean the ear canals using a hydrogen peroxide solution and an ear syringe obtained from a drug store:
- Fill the ear syringe with hydrogen peroxide.
- Tilt your head back and to the side, so the ear faces upward.
- Carefully insert the tip of the syringe into the ear and squirt a few drops.
- You will hear and feel a fizzing or popping as the hydrogen peroxide reacts with the wax.
- After a short time, tilt your head to let the liquid drain out onto a tissue.
- Repeat the same process for the other ear.
Hydrogen peroxide cleaning should not be done more than once or twice a week. People with a perforated eardrum or other ear problems must not use hydrogen peroxide cleaning. If earwax blockage or leakage is abnormal, it is advisable to consult an ENT specialist instead of self-medicating.
National Women’s History Month is one of the outcomes of a countywide movement in Sonoma County, California, in the 1970s that brought a focus on women into school curricula as well as into the general public’s consciousness. In 1978, the Educational Task Force of the Sonoma County (California) Commission on the Status of Women initiated a “Women’s History Week.” The week of March 8 was chosen since March 8 is International Women’s Day. As word of the movement spread, State Departments of Education across the U.S. initiated similar changes to their curricula and encouraged celebrations of women’s history as a means of achieving equity in classrooms. In 1987 the National Women’s History Project petitioned the United States Congress to recognize the whole month of March as National Women’s History Month. Since then, every year the House of Representatives and the United States Senate approve the designation.
March is celebrated with special programs and activities in schools, workplaces, and communities. Besides recognizing women’s achievements in such areas as science, math, politics, arts, and athletics, a common topic in school curricula is the women’s suffrage movement in the United States. Before 1920, women did not have the right to vote under the Constitution. In the decade between 1910 and 1920, women organized and were involved in political demonstrations and marches across the United States. Though the measure was brought before Congress several times, it failed to pass. Finally in 1919, after years of picketing, petitioning, and protesting, it passed, resulting in the ratification of the Nineteenth Amendment to the U.S. Constitution on August 26, 1920. In
November 1920, women voted for the first time in a national election.
outcome(s): n. a result or the effect of an action
consciousness: n. knowledge or awareness
initiate(d): v. to begin
equity: n. justice or fairness
designation: n. something chosen for a particular reason or purpose
suffrage: n. the right to vote in an election
right: n. a legal claim
decade: n. a period of ten years
picket(ing): v. to stand or demonstrate outside a building or place of work to prevent people from entering and working, as a means of political protest
petition(ing): v. to demand or request some action from a government or other authority
amendment: n. a change in a law
Industrialization sparked a series of social changes as people poured into the cities. The new capitalist elite flaunted its wealth and political might, and class divisions increased. ‘1866–1900: Industrialization and its consequences’ examines the profound effects of the industrial revolution in America and the consequences for society. These decades saw a rise in racism. While some commentators viewed America's laissez-faire capitalist system as the path to progress, others felt uneasy. Industrialization had an impact on American life at all levels. In just fifty years, America changed from an agrarian society to an urban-industrial one with a strong agricultural hinterland. As industrialization proceeded, Americans began to reassess America's global role.
Since the 1920s, psychologists have explored ways to automate teaching.
Grade Range: K-12
Resource Type(s): Artifacts, Primary Sources
Date Posted: 5/3/2012
This apparatus was designed by Catherine Stern, a physicist by training and the founder of a Montessori school in her native Germany. Stern and her husband were of Jewish descent and emigrated to New York City in 1938 to avoid persecution by the Nazis. There she developed these materials, described in her 1949 book Children Discover Arithmetic. The equipment was first used in preschools and then in primary schools.
The kit includes diverse wooden cubes, rods, and cases, as well as paper cards and covers. The painted cubes are 11/16” (1.8 cm.) on a side, and the rods are of integer multiples of this length. The rods are painted green (1), violet (2), white (3), brown (4), yellow (5), red (6), light blue (7), orange (8), black (9) and dark blue (10). There is also a unit cube in each of these colors.
A counting board, designed to introduce the names of numbers, has grooves of length 1 through 10 that hold rods of appropriate length. At the top of each groove is an indentation that holds a wooden number marker, that is to say a block marked with a digit. A flat wooden board known as a number guide, marked with the numbers from 1 through 10, fits across the back of the counting board.
The kit also includes 10 so-called pattern boards, boards indented with holes that hold a single cube. The holes are arranged in two columns. There is a pattern board for each number from 1 through 10. These are designed to teach the distinction between even and odd numbers, as well as addition and subtraction of 0, 1, and 2. A set of 10 yellow cardboard cards known as pattern board slides shows the arrangement of cubes for each number.
Also included is a set of 10 number cases, square boxes that hold from 1x1 through 10x10 cubes. Two further 10x10 number cases (known as “unit boxes”) contain a set of 100 cubes and a set of 19 rods (one rod of length 10 and two of each of the shorter lengths). There is also a “number track” that holds up to 10 cubes.
A series of folding paper “subtraction shields,” representing integer lengths, can be placed over cubes to indicate subtraction. One set of 9 of these is made up, another of 8 is uncut and in a wrapper. Finally, there is a set of 10 yellow cards, each marked with a digit from 1 to 10, as well as a card marked with a subtraction sign and another with an equals sign. These “number slides” fit in a folding “number stand.” Also present is a manual of instructions dated 1966. |
Alternate, palmately lobed leaves. Mottled bark that sheds in oblong scales, leaving patches of smooth, white bark.
Sycamores are tolerant of wet soils. They grow in flood plains, along rivers and streams, and in lowlands (Peattie 1964). Hairs on the seeds act as parachutes, and the wind distributes them. However, many seeds may be carried by water and deposited on mud flats (a suitable place for growth). The wood of the sycamore is very hard and difficult to split. It has been used for crates, barrels, and boxes. Hollow trunks of old giant trees are homes for chimney swifts (Burns and Honkala 1990). |
What is Prediabetes?
Prediabetes is characterized by elevated blood sugar. Patients’ blood sugar may be higher than normal, yet not high enough to meet the diagnostic threshold for Type 2 Diabetes.
Prediabetes is sometimes referred to as “impaired fasting glucose” (IFG) or “impaired glucose tolerance” (IGT).
Every year, 200,000 Americans receive a tough diagnosis of Type 2 Diabetes (T2D).
This condition used to be known as “Adult-Onset Diabetes” to distinguish it from the related disease Type 1 Diabetes, an autoimmune disorder of the pancreas that typically appears in childhood. “Adult-Onset Diabetes” is now a misnomer, since the deteriorating quality of the American diet has led to an epidemic of T2D diagnoses among minors too.
In healthy bodies, the pancreas produces insulin, a hormone that helps move the simple sugar glucose from the bloodstream into muscle, liver, and fat cells. Those cells can then process glucose for energy.
The various forms of diabetes make the body insulin-resistant or insulin-impaired: it either stops producing insulin or fails to use it effectively. As a result, glucose never makes it to the cells that need it and circulates unused in the blood. As insulin resistance develops, patients experience elevated blood sugar and an increased risk of heart disease, stroke, blindness, and kidney failure.
A diagnosis of Type 2 Diabetes correlates with an average loss of ten years of the patient’s lifespan.
Less well-known, however, is the matter of some 84 million American adults who are prediabetic. According to the CDC, 90% of people living with this condition don’t even know they have it, or that it represents an urgent window of opportunity to avert Type 2 Diabetes.
Patients with prediabetes exhibit early symptoms of insufficient insulin use within the body. As a result, blood glucose levels begin to rise.
Prediabetes is a significant risk factor for the development of T2D. Whether these prediabetics know they have it or not, most patients diagnosed with T2D do present with prediabetes first.
A CDC study of 5,800 patients revealed a higher prevalence of prediabetes in males than in females:
- 22.5% of adolescent boys had prediabetes, compared to 13.4% of adolescent girls
- 29.1% of young adult males had prediabetes, compared to 8.8% of young adult females.
Prediabetes Risk Factors and Causes
Despite its stealthy nature, prediabetes has a number of well-known risk factors.
Obese or overweight people are likely to be prediabetic, especially people with pronounced belly fat. Excess fat cells make the body more resistant to insulin. A waist measurement over 40” in a man, or 35” in a woman, correlates strongly with prediabetes, as does a body mass index (BMI) over 25.
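The BMI and waist thresholds above amount to a simple check. A minimal sketch in Python, using the standard BMI formula (weight in kilograms divided by height in meters squared); the function names are illustrative, and a real risk assessment belongs with a clinician:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def weight_related_risk(weight_kg: float, height_m: float,
                        waist_in: float, male: bool = True) -> bool:
    """Flag the two weight-related risk factors described above:
    BMI over 25, or waist over 40 in. (men) / 35 in. (women)."""
    waist_limit = 40 if male else 35  # inches
    return bmi(weight_kg, height_m) > 25 or waist_in > waist_limit

# Example: 90 kg at 1.75 m gives a BMI of about 29.4, over the 25 cutoff
print(round(bmi(90, 1.75), 1))  # -> 29.4
```

The same check fires on waist size alone: someone with a normal BMI but a 42-inch waist would still be flagged.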
A history of gestational diabetes is another risk factor. This type of diabetes occurs in pregnant women when pregnancy changes in the body impede insulin uptake and result in high blood sugar. Gestational diabetes can also affect the baby’s health.
Other risk factors include:
- Having given birth to a baby over nine pounds in birth weight
- A lifestyle including little or no exercise
- Latino, Native American, African American, or Pacific Islander heritage
- Aged 45 or older
- High LDL cholesterol
- Low HDL cholesterol
- High triglycerides
- A diet high in sugars, sugary drinks, red meat and/or processed meats
- A diet low in whole grains, vegetables, fruits, nuts, and unsaturated fats
- Sleep apnea or other sleep disorders
- Constant or occasional night shifts
You should definitely get tested for prediabetes if you:
- Have heart disease
- Exhibit abnormal blood sugar readings
- Exhibit signs of insulin resistance
How to Screen for Prediabetes
Prediabetes can be detected by a blood test. Indicators of prediabetes revealed by examining the blood include:
Elevated Hemoglobin A1c
Also known as Glycated Hemoglobin or HbA1c. Hemoglobins are iron-containing proteins in red blood cells that carry oxygen. HbA1c is the fraction of hemoglobin that has bonded with sugars.
- A blood percentage of 4%-5.6% HbA1c is considered normal.
- A blood percentage of 6.5% HbA1c or higher indicates diabetes.
- A blood percentage of 5.7%-6.4% HbA1c is considered prediabetic.
HbA1c is quick to test for and requires no fasting before lab appointments.
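These HbA1c cutoffs reduce to a simple range check. A sketch (the function name is illustrative, and the bands follow the percentages listed above):

```python
def classify_hba1c(pct: float) -> str:
    """Classify an HbA1c percentage into the bands described above."""
    if pct < 5.7:
        return "normal"       # roughly 4%-5.6%
    if pct < 6.5:
        return "prediabetic"  # 5.7%-6.4%
    return "diabetic"         # 6.5% and above

print(classify_hba1c(6.0))  # -> prediabetic
```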
Elevated Fasting Plasma Glucose Levels
When we talk about “high blood sugar,” we mean the simple sugar glucose and its prevalence in the plasma, the liquid component of blood in which blood cells are suspended.
- Healthy fasting plasma glucose (FPG) levels will fall below 100 mg/dL.
- FPG of 126 mg/dL or above indicates diabetes.
- Any FPG result between 100 and 125 mg/dL does not meet the clinical definition of diabetes but could indicate prediabetes.
As the name indicates, this lab test requires patients to fast before their lab appointment.
Glucose Tolerance Tests
An FPG screening may be paired with a glucose tolerance test to determine if the patient is impaired in his/her ability to absorb glucose.
The most common test is the two-hour 75-gram oral glucose tolerance test (OGTT).
After the fasting blood sample is obtained, the patient is then instructed to drink a solution consisting of 75 grams of glucose dissolved in water. Two hours after drinking the glucose solution, another blood sample is taken.
- If, after two hours, plasma glucose remains below 140 mg/dL, this indicates normal uptake of glucose.
- If it has risen to 200 mg/dL or above, this level is consistent with a diagnosis of diabetes.
- A two-hour result between 140 and 199 mg/dL indicates prediabetes: an impairment of glucose uptake, but not severe enough to diagnose diabetes yet.
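Taken together, the fasting and two-hour readings map onto diagnostic bands. A minimal sketch of that mapping (function names illustrative; an actual diagnosis requires a clinician):

```python
def classify_fpg(mg_dl: float) -> str:
    """Fasting plasma glucose bands, in mg/dL."""
    if mg_dl < 100:
        return "normal"
    if mg_dl < 126:
        return "prediabetic"
    return "diabetic"

def classify_ogtt_2h(mg_dl: float) -> str:
    """Two-hour post-load plasma glucose bands, in mg/dL."""
    if mg_dl < 140:
        return "normal"
    if mg_dl < 200:
        return "prediabetic"
    return "diabetic"

# A fasting reading of 110 and a two-hour reading of 160 both land
# in the prediabetic band
print(classify_fpg(110), classify_ogtt_2h(160))  # -> prediabetic prediabetic
```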
At-Home Blood Tests For Prediabetes
Medicine has adapted to the digital age to bring health screening tools directly into the hands of consumers. This includes home blood screening options like imaware™. The imaware™ home screening kit for prediabetes allows patients to collect their own blood samples quickly, safely, and painlessly.
They then submit the samples to confidential labs and receive results by a secure online portal in as few as seven days. As of its January 6 relaunch, imaware™ allows patients to check their HbA1c and FPG levels and discover if they fall within prediabetic ranges, without making a clinic or lab appointment.
What are the Symptoms of Prediabetes?
Prediabetes goes undiagnosed so often because it has few to no obvious symptoms—at least, nothing to cause immediate alarm.
If checked, patients might present with:
- Higher-than-normal systolic blood pressure
- Higher-than-normal non-HDL cholesterol
- Low insulin sensitivity or glucose tolerance.
All of these internal indicators might go completely unnoticed if not tested for.
Particularly sensitive individuals might notice:
- Excessive thirst
- Excessive urination
- Blurry vision
- Difficulty healing from cuts or sores
Associated Conditions with Diabetes
Associated conditions might indicate prediabetic status, even if the ensuing symptoms are related to the associated condition and not to prediabetes. Examples include:
Acanthosis Nigricans
This condition manifests as dark, velvety patches of skin, typically grouped around the elbows, knees, armpits, knuckles, and neck. Acanthosis nigricans is associated with insulin resistance.
Polycystic Ovarian Syndrome
Another condition associated with insulin resistance, polycystic ovarian syndrome (PCOS) affects post-pubescent women and is characterized by an excess of the male hormone androgen and by enlarged ovaries surrounded by follicles. These follicles inhibit the release of eggs.
PCOS is also characterized by long menstrual cycles with infrequent periods, as few as nine a year or one every 35 days. Unexpected acne breakouts can help flag PCOS in women who normally have no menstrual cycle because they use a coil (IUD) or other contraceptives that interfere with menstruation. While acne can appear for many reasons, if a patient has never had it before and it suddenly shows up, especially later in life, it is worth discussing PCOS (and the possibility of prediabetes) with a doctor.
What are the Prediabetes Blood Sugar Numbers?
Prediabetes is characterized by the following blood sugar metrics:
- Fasting plasma glucose (FPG) between 100 and 125 mg/dL.
- After a 75-gram two-hour Oral Glucose Tolerance Test, plasma glucose levels between 140 and 199 mg/dL.
- Glycated hemoglobin (hemoglobin A1c or HbA1c) between 5.7% and 6.4%. Note that pregnant women or persons with a hemoglobin variant may return inaccurate results on an HbA1c test.
These ranges apply to both adults and children.
How to Reverse Prediabetes
Not everyone who meets the definition of prediabetes will progress to Type 2 Diabetes. However, many patients diagnosed with T2D have probably been prediabetic for years without even knowing it. It is a strong correlative and significant risk factor.
Prediabetic blood sugar and hemoglobin test results should serve as a call to arms. Patients have a rare opportunity to reverse this damaging course and protect themselves from a debilitating, extremely life-shortening condition that brings a plethora of complications.
Many prediabetics are overweight or obese. A reduction of body weight by 5%-7% (usually losing 10-20 pounds) strongly correlates with a reversal of prediabetic blood sugar levels.
The best way to reverse prediabetes is to adopt a healthy lifestyle, including:
Building more physical activity into your daily routine can contribute significantly to the reversal of prediabetes. Physical activity lowers blood glucose levels and reduces body weight.
Thirty minutes of exercise a day, five days a week, is a proactive approach to reversing prediabetes. If this is too much right off the bat, consider starting slowly and building up exercise as a habit before increasing the intensity. Activities to consider include jogging, walking, swimming, cycling, weight training, and yoga. Make sure to check with your doctor before commencing an exercise routine.
Healthy Food Choices
Prediabetics don’t necessarily need to eat less food if they can transition to higher-quality foods.
- Someone with prediabetes can eat as much as they like of non-starchy vegetables such as carrots, broccoli, green beans, spinach, and other leafy greens, and should aim for at least three servings per day.
- Fruit, on the other hand, contains a lot of sugar, and prediabetics do not need more. Try to limit servings of fruit to between one and three. Fruit can be an appropriate substitute for high-fat or high-sugar snacks like chips or candy, however, as long as intake is limited. Other good snack choices include seeds, nuts, and whole-grain crackers.
- Replace enriched, bleached, or processed grains (white bread, white rice, etc.) with whole grains, sprouted grains, and brown rice.
- Vegetables, fruits, and whole grains are excellent sources of fiber, helping to balance the appetite and control cravings for high-calorie, high-fat foods.
- Limit the amount of red or fatty meats, while emphasizing lean meats like chicken, or meats rich in good fatty acids such as fish or other seafood.
- Avoid fatty milk and cheeses. Skimmed milk and low-fat cheeses are acceptable.
- Avoid sugary drinks like soda, fruit juices, sweetened coffee or tea, or sugary sports drinks like Gatorade as much as possible.
Not every food recommendation is appropriate for every person, as body chemistry is quite individual. Consider consulting a nutritionist for more personalized recommendations.
Lack of sleep makes it nearly impossible to lose weight. When the body loses sleep, it responds by lowering leptin, the hormone that signals fullness, and overproducing ghrelin, the hormone that drives hunger.
Losing sleep also reduces your body’s ability to use insulin effectively, resulting in elevated blood sugar. It can be a vicious cycle, too, because excessive body weight also puts you at risk of sleep disorders like insomnia and sleep apnea.
Certain behavioral habits can help normalize the sleep cycle, even for overweight persons. These habits include:
- Set a schedule. Go to bed and wake up at the same time every day to establish a stable circadian rhythm (24-hour physiological cycle)
- Relax before bed. Try lying down and reading an hour before bedtime
- Avoid caffeine after lunchtime. If you consume caffeinated coffee, tea, or other products, restrict consumption to the morning
- Turn off all screens. Don’t watch TV, look at your smartphone, or look at your computer screen for a full hour before bedtime
- Sleep in a darkened room. Make sure no lights are on in your sleeping room, including indicator lights on devices. Consider blackout blinds if you live on a lighted street
- Set the proper temperature for your sleeping room. Between 60 and 67 degrees Fahrenheit is optimal for the sleep cycle. Let in some fresh air if you tend to sleep restlessly in a warmer room.
Many diabetes-prevention resources, both in-person and online, can help patients with prediabetes. The Centers for Disease Control and Prevention (CDC) operates the National Diabetes Prevention Program (NDPP) throughout the US, offering year-long, in-person lifestyle change programs. DPS Health, Noom Health, and Omada Health also offer year-long lifestyle change programs via their online platforms.
Prediabetics should make regular doctor visits, every three to six months at least, to track their progress.
The anti-diabetic medication metformin (trade names Glucophage, Glucophage XR, Glumetza) may help prediabetics achieve normal blood sugar numbers, the same as it does for diabetics.
It does this by increasing muscle cells’ sensitivity to insulin and reducing the amount of sugar produced by the liver.
Metformin is available by prescription only and must be taken as directed by your doctor. Patients taking metformin should not drink alcohol.
What to Do if You Think You Have Prediabetes
If you suspect that you may be prediabetic, take immediate steps to safeguard your health. Type 2 Diabetes is a serious condition, and prediabetes is a priceless window of opportunity to nip it in the bud.
1. Verify—take the imaware™ test
As of January 6, 2019, imaware™ offers do-it-yourself home blood testing kits that screen for prediabetes indicators, including Hemoglobin A1c and fasting plasma glucose levels.
You can submit your test confidentially to a lab and receive your results by a secure online portal in as few as seven days. It’s the easiest, most affordable, and most convenient way to confirm if your blood sugar metrics rise to prediabetic levels.
2. Be proactive—make lifestyle changes
Don’t wait for a diagnosis of diabetes or prediabetes to make changes. As of 2019, 76% of Americans do not get enough exercise. Most Americans also eat too much sugar, fat, and sodium, while eating too few vegetables, fruits, and unsaturated fats.
Take steps now to join the minority with healthy habits. Add exercise, a healthy sleep cycle, and healthy eating choices to your routine. Become educated about the science of fitness, nutrition, and weight loss. Remember, a reduction of 5%-7% of body weight often correlates with a reversal of prediabetes. If you’re unsure how to calculate it, divide your total body weight by 20 to get the minimum loss to aim for. Especially keep an eye on dangerous excess abdominal fat.
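That 5%-7% target is easy to compute directly; dividing total weight by 20 gives the 5% lower bound. A small sketch (function name illustrative):

```python
def weight_loss_target(weight_lb: float) -> tuple:
    """Return the (5%, 7%) weight-loss range, in pounds."""
    return round(weight_lb * 0.05, 1), round(weight_lb * 0.07, 1)

# A 200 lb person: 200 / 20 = 10 lb minimum, up to 14 lb at the 7% end
print(weight_loss_target(200))  # -> (10.0, 14.0)
```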
Make a meal plan and an exercise plan. Feel free to start slow—a light walk every other day, or cutting out sugary beverages. If you try to do too much exercise too soon or completely overhaul your diet, you run the risk of burning yourself out. Focus instead on the gradual adoption of manageable, sustainable habits.
If necessary, consult a personal trainer, nutritionist, or sleep specialist.
3. Seek support.
Find out if the National Diabetes Prevention Program (NDPP) has offices near you. If so, stop by. Seek out diabetes support groups, in person or online. Join lifestyle groups like running clubs or fitness groups that align with your lifestyle goals.
Share your concerns with friends, family, colleagues, or therapists if you think it appropriate. Rally them to your cause and ask them to stop encouraging your unhealthier eating and drinking habits. This is about avoiding a serious health problem; if everyone is on board, you’ll get there more easily. A strong support group is invaluable in the face of a health challenge.
Prediabetes is both a warning sign and an opportunity. Most people who get diagnosed with Type 2 Diabetes were initially prediabetic, possibly for years, without ever knowing it.
Discovering you are prediabetic is a cloud with a silver lining—you have an opportunity to make healthy decisions that avoid Type 2 Diabetes with its gamut of severe symptoms and very bleak prognosis.
Prediabetes is characterized by elevated blood sugar metrics like Hemoglobin A1c and fasting plasma glucose levels. These levels are higher than normal, but not high enough to indicate a diagnosis of diabetes.
They do, however, indicate abnormalities in your body’s ability to process insulin, a precursor for Type 2 Diabetes.
Risk factors for prediabetes include:
- Obesity or excess body weight, particularly an excess of belly fat
- High LDL cholesterol, low HDL cholesterol
- High triglycerides
- Unhealthy habits like lack of exercise or excess calories in the diet
- Disrupted sleep cycle by night shift work or sleep apnea
- African American, Native American, Latino, or Pacific Islander heritage.
Prediabetes typically carries no symptoms, though observant patients may notice thirst, frequent urination, fatigue, or blurry vision.
You can discover if you have prediabetes with blood tests, including:
- Non-fasting tests for Hemoglobin A1c
- Fasting plasma glucose screening
- A possible follow-up plasma glucose screening with a 75-gram two-hour oral glucose tolerance test.
Blood screening for prediabetes can be performed at a clinic or lab, or at home using an imaware™ home screening kit.
If you suspect or discover that you have prediabetes, the condition can often be reversed before it progresses to T2D. Steps you can take to reverse prediabetes include:
- Exercise regularly. Join workout buddies, fitness groups, or hire a personal trainer if necessary.
- Adopt healthy sleep patterns. Go to bed and wake up at the same time; relax and read before sleeping; avoid screens like your TV, computer or phone before bed; sleep in a darkened room, cooled to 60-67 degrees Fahrenheit and with plenty of fresh air if possible.
- Adopt a healthy diet. Cut out sugary drinks, limit processed grains, fatty dairy products, and red meats; eat fruits in moderation; add fiber, unsaturated fats, and whole grains to your diet. A loss of 5%-7% of total body weight also goes a long way to alleviating the prediabetic condition. Then be sure to keep it off and especially avoid accruing belly fat.
- Seek support for prediabetes. Check out the CDC’s National Diabetes Prevention Program (NDPP) or the help of online health platforms.
- Quit smoking
- Consider medication. Your doctor may prescribe metformin to help you control your blood sugar levels.
With effort, determination, support, and the right mindset, patients with prediabetes can reverse the course of the disease and lead a healthy, active life. |
Whether your child is in junior primary or beyond, parents often ask, “Do I need to help my child with language now they are at school?” The answer is “yes!”
When children start school their language skills still continue to develop and the stronger their language skills are, the better they will progress, especially with subjects such as reading, writing and spelling.
Children between the ages of five and seven have developed the basics of speech and language but their communication continues to develop. Vocabulary develops all through life and children beginning school learn lots of new words. They also learn new concepts and develop their ability to listen to, remember and understand more complex information. While most basic grammar is learnt in the first five years, older children continue to learn to understand and use more complex sentences and to link sentences together into larger units such as stories and procedures. There’s lots to learn and the things that parents do at home are still vitally important to a child’s development.
1. Keep on reading to your child. Don’t stop reading to them just because they are starting to read themselves. Children can’t read at the level of their understanding until around 10 years of age, so keep reading to them as often as you can, but read more complex books than they can read themselves. Ask your local library staff to recommend books your child may enjoy. Well written children’s stories and novels are fun for adults to read too. Kids love funny stories such as those by Roald Dahl, Paul Jennings and Morris Gleitzman, fantasy such as the books of Emily Rodda, and there are lots of great kids’ classics. Why not share a book you loved as a child?
2. Show your interest in what your child is learning. Talk to your child about what they learn each day. Look at the things they make and talk about them together. Put your child’s work on display and encourage them to show it to others. Your interest tells your child their learning is important and explaining information to another person helps with recall and understanding.
3. Build on what your child is learning at school. Many classrooms have theme-based lessons. Ask your child’s teacher what your child is learning about and extend this. If your child is learning about dinosaurs you could borrow some dinosaur books, find some dinosaur websites or go to the museum and see some real dinosaur bones.
4. Connect language to your daily activities. When you do activities at home, use these as an opportunity to develop your child’s language skills. For example, when making a cake you could:
- read a recipe together
- talk about the ingredients and any other words your child might not be familiar with, to develop vocabulary
- write a shopping list together, to develop literacy skills
- go to the shops and buy the ingredients
- look, touch, taste and talk about the ingredients, to develop concepts and descriptive language
- follow the steps to make the cake, to develop procedural (steps in a sequence) language
- share the cake and the steps to making it with someone else, to reinforce the procedural language.
5. Allow your child to see you using language in a range of ways and involve your child when you can. Let your child see you opening mail, paying bills, filling in forms, writing cards and invitations, reading the newspaper, books, newsletters and magazines, searching for information online, and using recipes, instructions and maps. One of the key language skills of school is to understand the structure of different types of texts and how to make them. Texts are larger units of language such as procedures, stories, and recounts. Building these into your daily life is a great way to help kids learn how they work. For more ideas on texts for school kids click here.
6. Play language and listening games together. Play games like “eye spy” and “I went shopping and I bought…”. Make cards with your child’s sight words on them and use them to play matching and memory games. Here are some more language games to develop descriptive language. Another great language activity is barrier games. You can find out how to use them here, with some more ideas here, and download some to print and play here.
7. Use technology to make your own language activities. Kids love things that are about themselves. Use a digital camera to make your own books, card games and PowerPoint displays. Take a series of photos of your child doing something interesting and help your child add words to make a book or PowerPoint presentation about what they did. You can do this using daily activities, outings or craft or cooking activities. Make a book to keep about something special such as a birthday party or school concert.
In all the activities remember to keep things fun and positive. Make it feel like a special sharing time together, not like hard work. Introduce new words and ideas gradually and repeat them lots of times. Model the correct way of doing or saying things if your child makes a mistake but be encouraging so they keep on trying and learning.
Developing your child’s language skills will prepare them well for learning at school and also help them develop a lifelong love of language and learning.
If you are concerned about your child’s development including speech, language, play skills, social communication skills, social skills or learning check our website to see how Talking Matters may be able to help. For more ideas and resources check the resources section on our website and our extensive Pinterest page. Like us on Facebook and follow us on Twitter so you don’t miss out on what’s happening.
If you are concerned about your child’s skills Talking Matters provides speech pathology and occupational therapy. To find out more about Talking Matters and our services and resources check our website or call our office on (08) 8255 7137.
If you are a professional looking to work with children in an exciting, fast paced setting with a dynamic multi-disciplinary team follow us on LinkedIn to find out about our team and any opportunities available.
Want to read more posts on the Talk with me theme for Speech Pathology Week? Click here to see what others are writing about.
Definition - What does Virtual Routing and Forwarding (VRF) mean?
Virtual routing and forwarding (VRF) is a technology within IP-based routers that enables them to create and operate multiple instances of a routing table simultaneously. In effect, VRF lets a single router act as multiple virtual routers, each of which operates separately and maintains its own distinct routing table, so different instances can use the same or overlapping IP address ranges without conflict.
Techopedia explains Virtual Routing and Forwarding (VRF)
VRF is primarily implemented to make better use of a single router and to segregate network traffic. Each VRF instance works like a typical router, with its own routing table, table entries and routing protocols, and it operates independently of the core router and the other VRF instances. VRF is similar to a virtual router, but a virtual router uses only one routing table whereas VRF maintains multiple routing tables. VRF is also used to create VPN tunnels that are solely dedicated to a single client or network.
VRF is also referred to as a routing table that has multiple instances on a VPN provider edge (PE) router.
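To make the idea of per-instance routing tables concrete, here is a minimal Python sketch. This is a toy model, not a real router API; the class, VRF names and next-hop addresses below are invented for illustration. It shows why two clients can reuse the same private prefix when each VRF resolves lookups against its own table only.

```python
import ipaddress

class VrfRouter:
    """Toy model of VRF: each instance keeps its own routing table,
    so identical or overlapping prefixes can coexist across instances."""

    def __init__(self):
        self.vrfs = {}  # VRF name -> list of (network, next_hop)

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, []).append(
            (ipaddress.ip_network(prefix), next_hop))

    def lookup(self, vrf, dst):
        # Longest-prefix match, consulting only this VRF's table.
        addr = ipaddress.ip_address(dst)
        matches = [(net, nh) for net, nh in self.vrfs.get(vrf, [])
                   if addr in net]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]

router = VrfRouter()
# Two clients reuse the same 10.0.0.0/24 prefix without conflict.
router.add_route("client-a", "10.0.0.0/24", "192.0.2.1")
router.add_route("client-b", "10.0.0.0/24", "198.51.100.1")
print(router.lookup("client-a", "10.0.0.5"))  # 192.0.2.1
print(router.lookup("client-b", "10.0.0.5"))  # 198.51.100.1
```

Both lookups for 10.0.0.5 succeed but return different next hops; on real hardware this same isolation is what lets a provider carry overlapping customer address space over dedicated VPN tunnels.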
Birth asphyxia is the name for when a baby is deprived of oxygen during birth. While in many cases this oxygen deprivation is relatively mild and will not have long-term negative consequences for the child, where the problem is more severe it can result in serious life-long conditions.
Fortunately, the major risk factors for birth asphyxia are well known and can therefore be carefully watched for during a birth. This means that if a problem occurs, or is about to occur, the midwife and medical team handling the birth should be able to take swift action to prevent major complications.
In this article, we look at the most common causes of birth asphyxia, what can be done to deal with them and what happens if the right action is not taken promptly.
Common Causes of Birth Asphyxia
There are various different factors that can lead to a baby being deprived of oxygen during their birth. Some of the most common are:
- Low oxygen levels in the mother’s blood
- High or low blood pressure in the mother
- Long or difficult delivery
- The placenta separating from the womb too early
- The umbilical cord becoming wrapped around the baby’s neck during birth
- The baby’s airways becoming blocked
- The baby’s airways not being properly formed
- Anaemia or other conditions meaning the baby’s blood cells cannot carry enough oxygen
- A serious infection in the mother or baby
Avoiding Birth Asphyxia
Midwives, obstetricians and other healthcare professionals dealing with childbirth will have extensive training in the potential problems that can occur during a birth. It is their job to mitigate any risks and prevent issues before they arise.
By carefully monitoring the mother and baby, they should be able to take appropriate preventative action or initiate medical intervention where necessary. For example, if a delivery is taking too long, the medical team should be prepared to proceed with a caesarean section promptly to ensure no harm comes to the mother or child.
By taking the right measures, serious oxygen deprivation can almost always be avoided in modern hospitals.
Treatment for Oxygen Deprivation during Birth
Where birth asphyxia has occurred, there are various treatments that can be used to minimise any potential damage. This includes using breathing apparatus to ensure the newborn gets enough oxygen, cooling therapy, getting the baby to inhale nitric oxide and placing them on a heart-lung machine.
With appropriate treatment provided quickly enough, babies who experience mild or moderate oxygen deprivation will often make a full recovery. Those who suffer more severe birth asphyxia may need on-going treatment, potentially for the rest of their lives.
Effects of Birth Asphyxia
Oxygen deprivation during birth can have a number of effects, including causing brain damage and damage to other organs, including the heart, lungs, kidneys and bowels. This can then lead to life-long conditions, including cerebral palsy, learning difficulties, Attention Deficit Hyperactivity Disorder (ADHD) and impaired vision or even complete blindness.
If your child is left with lasting medical consequences due to birth asphyxia, it may be worth looking into making a medical negligence claim. If the problem was caused by mistakes made by the medical team handling the birth, you may be able to win a financial settlement that could help to ensure your child gets the support and treatment they need to live a full, happy life.
IBB Claims are a law firm specialising in all types of birth injury claims so can offer expert advice and support through the entire process of claiming compensation for birth asphyxia or any other type of birth injury. |
Arrows, Guns, and Buffalo
by Dana Dick, Student Conservation Association Intern
The bow and arrow was an indispensable tool for American Indians living on the Great Plains by CE 250 at the latest. When European emigrants founded Jamestown in 1607, the Plains Indian peoples had long ago perfected their bows and arrows into powerful weapons for hunting game and waging war. The bow and arrow worked so well, in fact, that American Indians relied on this traditional weapon long after they adopted firearms from the Europeans. Despite popular belief, they preferred the bow to the gun even into the late 1800s. How could that be?
The Gun’s Popularity
On the Northern Plains, American Indians obtained the gun through exchange at posts such as Fort Union. Imported from England, Belgium, France, and the American Colonies (later, the states), the gun became a popular trade item for tribal members. Possibly the most iconic fur trade firearm was the Northwest Trade Gun. This weapon was manufactured specifically for the fur trade and marketed to American Indians. For nearly two centuries after its introduction, this gun was sold by almost every company involved in the fur trade. Fort Union’s inventories, correspondence, and orders indicate that the post kept large quantities in stock and placed large orders for more every year. Edwin T. Denig, the Fort Union Bourgeois, or manager (1848–1854), specifically mentioned the Assiniboine people’s use of the Northwest Gun in his 1850s report on the Plains Indians for the Federal government. “The bow and arrow is used altogether by all these tribes when hunting buffalo on horseback,” Denig wrote, “and the Northwest shotgun is the only arm employed in killing any and all game on foot.” Once guns became available, American Indians—who could not make their own—wanted ones that were cheap, light, serviceable, and reliable in any season. The Northwest Trade Gun supplied just that. But even with the popularity of firearms, American Indians still depended on their bows and arrows.
Their Weapons and Trade
Up to the time of the fur trade, hunters and warriors had made bows and arrows from locally sourced or traded natural materials. For arrowheads, or projectile points, they relied on a variety of resources, primarily stone, bone, and antler. Chert, flint, and obsidian were the types of rock most often obtained to manufacture these early lithic points. Because they could not be found everywhere in North America, these rocks were traded across vast distances over well-established trade routes. Knife River flint is one point-making stone that traveled great distances. For the most part quarried in central-western North Dakota, Knife River points have been found in archeological sites across North America – from southern Canada to the Texas Panhandle and from the Rocky Mountains to Ohio. Another lithic material traded over great distances was obsidian from the Obsidian Cliff in Yellowstone National Park. Archeological evidence indicates that American Indians quarried that site for more than 10,000 years. Archeologists have found that obsidian as far north as Canada, south into Colorado, west into Washington, and east into the Ohio River Valley. The intense mining activities at these two sites, as well as the great distance the stones traveled, indicate the importance of both projectile points and the existing trade networks used by North America’s earliest peoples.
Complex systems of intertribal exchange flourished as a result. People used these trade systems for thousands of years, if not longer, and the concept was nothing new. The Mandan and Hidatsa’s semi-permanent villages along the Missouri River in central North Dakota had thrived as trading centers long before Europeans arrived. At these farming villages, nomadic Northern Plains tribes could trade items of the hunt for ones produced by the village agriculturists. In the early 1800s, the American Fur Company and individual posts like Fort Union tapped into these existing trade systems. Along these ancient routes, new fur trade–supplied goods spread rapidly across the continent. Guns came west with the French and English from the Great Lakes and farther east, while horses came north from Spanish territories in today’s Southwest. The Assiniboine traded guns to the Mandan as early as 1738, by which time nomadic tribes from the south possessed horses. In this way, distant tribes acquired European goods before the American Fur Company established its Upper Missouri River trading posts.
What Does Change?
Trade items Europeans brought, including guns, had long-lasting impacts on North America’s existing tribal cultures. In what some scholars call a cultural exchange, items made by American Indians began to be replaced by European goods. The arrival of metal kettles, for instance, lessened the need for pottery making. American Indian women’s acquisition of colorful beads, meanwhile, led to a decline in porcupine quill work and a rise of bead embroidery. Even the bow and arrow changed with metal’s introduction.
By the middle 1800s, posts such as Fort Union were a major source for metal arrowheads. However, unlike the Hudson’s Bay Company, which placed orders for factory-made metal points, Fort Union’s clerks appear not to have ordered pre-made metal points. Although known fort invoices omit mention of metal arrowheads, Fort Union’s archeological excavations unearthed several metal points. How could that be? The discovery of the one bone projectile point at Fort Union (FOUS 787) may provide an answer. This bone point looks almost identical to a number of metal ones that archeologists found. These similarities suggest that the bone point may have been a template for making metal ones. A Fort Union blacksmith could have accomplished this task by placing the bone template on top of a piece of barrel hoop and then cutting around it with a chisel.
Why They Favored the Bow and Arrow
Why did the guns fur traders introduced not eliminate the use of bows and arrows until repeating rifles like the Winchester carbine appeared in the 1870s? Up until then, young men and poor warriors used the bow almost exclusively, while even those who could afford guns still used the bow. A powerful weapon, the bow and metal-pointed arrow could kill a man or buffalo as easily as an early gun. On the northern plains, however, the bow was the weapon of choice for hunting buffalo. Early guns were difficult to load while on horseback and could not be fired with the same speed and agility as the bow and arrow. George Catlin, an American painter who visited Fort Union to observe and record the ways of American Indians, participated in several buffalo hunts and was able to see the power of the bow first hand. “Such is the training of men and horses in this country,” Catlin tells us, “that this work of death and slaughter is simple and easy . . . [W]hen the arrow is thrown with great ease and certainty to the heart; and instances sometimes occur, where the arrow passes entirely through the animal’s body. An Indian, therefore, mounted on a fleet and well-trained horse, with his bow in his hand, and his quiver slung on his back, containing an hundred arrows, of which he can throw fifteen or twenty in a minute, is a formidable and dangerous enemy.” The hunter’s ease and ability to discharge arrows rapidly was a clear advantage over the early single-shot long arm, or musket. This was one reason why American Indians kept the bow and arrow in their arsenal until the arrival of more advanced firearms like the repeating rifle.
The tribes’ eventual adoption of the metal-tipped arrow and gun is just one aspect of a larger cultural exchange that occurred in North America following Europeans’ arrival. Such cultural exchanges can often be described as cultural reciprocity, with ideas and goods moving back and forth between two or more cultural groups. This idea of a cultural exchange is one commonly used to explain the relationship between Euro-Americans and American Indians. Tribes’ adoption of metal-tipped arrows and firearms is but one example of how Euro-American culture influenced and at times changed American Indian cultures. For a true reciprocity to exist, however, American Indians also had to affect Euro-American cultures. What influences did American Indian societies and cultures have on Euro-Americans? What knowledge and technologies have Euro-Americans gained from the nation’s first peoples?
Last updated: April 19, 2017 |
Common lilacs (Syringa vulgaris) are hardy shrubs that suffer few disease or pest problems. They grow best when planted in full sun with well-draining, slightly alkaline soil. Sudden dropping leaves, though, probably indicates an insect pest or other problem. In most cases, lilacs can be revived through proper care and annual pruning.
Oystershell scales are tiny, motionless insects that form colonies on the lilac's branches. They suck the juices from young stems, killing them and causing defoliation. To control oystershell scales, wrap black plastic tape around new growth in the spring so the sticky side faces out. As the young insects emerge from eggs laid on the branches the previous fall, they become stuck to the tape. Insecticidal soaps or oils also control scale. Apply these products on cool, dry days and coat the leaves and stems thoroughly. Finally, annual pruning to remove old, dead or diseased wood can eliminate scale colonies. Destroy any pruned wood.
Ash-lilac borer is the larva of a wasplike moth. In spring, these larvae emerge from their eggs and bore into the wood of both lilac and ash trees. As they bore through the wood, they kill the stems, causing leaves to wilt and drop. Look for a telltale oval entrance hole at the base of a stem to identify these pests. To control ash-lilac borers, cut back infested branches to the ground and burn or discard them. Do not leave them on your property. Pruning is usually sufficient to control them, but in severe infestations, apply a pesticide containing permethrin in early spring as new growth emerges, according to package directions. Make a second application three weeks later, recommends the University of Nebraska-Lincoln.
Lilacs are sensitive to herbicides, especially formulas containing dicamba, a selective herbicide found in some lawn products and in products designed to remove all vegetation. Signs of herbicide injury include leaf cupping, distortion, slow growth, browning or blackening of the leaves and defoliation. In severe cases, the lilac might die. If you suspect herbicide injury, water your lilac well and take a wait-and-see approach.
Diseases might cause sudden defoliation, although in most cases, you'll see other symptoms first. For example, bacterial and shoot blight cause blackened and distorted stems and leaves, while powdery mildew causes a white film on the leaves. If left untreated, the shrubs eventually become defoliated. To treat blights, cut out infected parts and destroy them. Avoid overhead watering, which can spread the diseases. In some cases, remove the entire shrub. Powdery mildew is rarely serious and usually appears at the end of the growing season. Treat severe cases with a fungicide.
In 1675 Antonie van Leeuwenhoek asked “what’s out there?” before he discovered the existence of microorganisms. Now, over 300 years later, researchers at Lawrence Berkeley National Laboratory (LBNL) are able to accurately and quickly test for over 8,000 bacterial species with a device that fits into a person’s hand – a microbial detection power previously unknown.
This new technology, called PhyloChip, enables scientists to study bacterial communities, their interactions, and how they change over time. This capability is important because deep, sudden changes in the structure of a bacterial community could represent dangers in the form of an airborne biological terrorist attack, an epidemic caused by contaminated water or soil, or hazardous atmospheric alterations caused by climate change.
Invented by Gary Andersen, Todd DeSantis, and colleagues at LBNL, PhyloChip is a DNA microarray unique in its ability to identify multiple bacterial species and organisms from complex microbial samples. Because PhyloChip produces results within hours, numerous samplings of a specific environment can be conducted on a daily basis, enabling scientists to track the progress of a certain microorganism over a short period of time.
PhyloChip was the environment category winner of The Wall Street Journal's 2008 Technology Innovation Awards and Affymetrix, Inc. is currently distributing the technology to 28 beta-test sites under a limited license agreement.
Recently, LBNL’s PhyloChip was tested by cataloging the bacteria in air samples taken from San Antonio and Austin, Texas. Over 1,800 types of bacteria were found! Before this study, no one comprehended the diversity of airborne microbes. By identifying microbial communities typically inhaled by inhabitants of large U.S. cities, PhyloChip can help monitor air quality. The bacterial census from this study will help the Department of Homeland Security differentiate between normal and suspicious fluctuations in airborne microbes.
Formerly, microbiologists have relied on bacterial cultures to identify the microbes present in an environmental or medical sample, but most organisms—up to 99% of the bacteria in a sample—don’t survive in a culture. PhyloChip is a much more rapid, comprehensive, and accurate means for sample testing without culturing. As reported in the Journal of Clinical Microbiology (2007), PhyloChip was key to discovering that a loss of bacterial diversity due to antibiotic treatments was directly associated with the development of pneumonia in ventilated patients exposed to a certain common strain of bacteria.
The LBNL technology has also proven valuable in helping preserve a healthy environment. As published in Applied And Environmental Microbiology (2006) PhyloChip could prevent a less-soluble form of uranium from converting to a soluble form, thus forestalling the migration of this radioactive material and optimizing site remediation efforts. Monitoring contaminated sites where the existing bacteria were naturally immobilizing uranium, the PhyloChip was able to identify several synergistically acting microbes. By creating conditions more favorable for these bacteria it may be possible to increase the efficiency of immobilization.
The Berkeley Lab Phylochip makes possible discoveries that may change all disciplines touched by microbiology, including medicine, immunology, and environmental biology and takes Leeuwenhoek’s initial discovery to a whole new level.
Developed by: Gary Andersen, Todd DeSantis, Eoin Brodie, and Yvette Piceno
Amblyopia, also known as lazy eye, is a vision disorder characterized by indistinct or poor vision in an eye that is physically normal. It is caused by a limitation in the transmission of the visual world through the optic nerve to the brain for a sustained period of childhood, resulting in diminished vision. Amblyopia normally affects only one eye, but it is also possible to be amblyopic in both eyes if both are similarly deprived of clear retinal images.
When this condition is detected early, the chances of successful treatment are much improved. While the term “lazy eye” is commonly used to refer to amblyopia, it is misleading: there is no actual “laziness” of the eye, and “lazy brain” would be a more accurate description of the disorder. Strabismus, also incorrectly called lazy eye, is a condition in which the eyes don’t point straight ahead. Strabismus usually results in normal vision in the preferred sighting (or “fellow”) eye, but may cause abnormal vision in the deviating eye due to the discrepancy between the images projecting to the brain from the two eyes. Adult-onset strabismus usually causes double vision (diplopia), since the two eyes are not fixated on the same object. Children’s brains, however, are more neuroplastic, and therefore can more easily adapt by suppressing images from one of the eyes, eliminating the double vision. This plastic response of the brain, however, interrupts the brain’s normal development, resulting in amblyopia.
Form-deprivation amblyopia (amblyopia ex anopsia) results when the ocular media become opaque, such as is the case with cataracts or corneal scarring from forceps injuries during birth. These opacities prevent adequate visual input from reaching the eye, and therefore disrupt development. If not treated in a timely fashion, amblyopia may persist even after the cause of the opacity is removed. Sometimes, drooping of the eyelid (ptosis) or some other problem causes the upper eyelid to physically occlude a child’s vision, which may cause amblyopia quickly.
Treatment of individuals from age 9 through adulthood is possible through applied perceptual learning. Form-deprivation amblyopia is treated by removing the opacity as soon as possible, followed by patching or penalizing the good eye to encourage use of the amblyopic eye.
Every good Literacy Program should be backed up by Reading Mentors. The aim of Reading Mentors is to support students in their literacy journey. It is natural for students to become distracted, stressed out, or unmotivated as they do something for the first time. In the early stages of your Literacy Program, it is important to have Reading Mentors – teacher support – who are able to reflect, question, and monitor the students’ progress.
Canada Royal Arts High School has been implementing its first Literacy Program over the last few weeks. Our school has been working hard to create its own library and get students excited through the Reading Cafe. To support the progress of our program, as well as to motivate students to succeed, we have assigned each teacher a group of students to guide.
How to use Reading Mentors to support your Literacy Program:
- Survey the Students. Let the students voice their own opinion and choose a few teacher mentors that they would be comfortable with. Later, assign a teacher mentor to the student that you believe would be most appropriate.
- Check in with the Student. The teacher mentor should check in with the student at least once a week for a short reading session. The mentor will read a passage with the student out loud, ask them to summarize what they have read, go through unknown vocabulary, and answer any questions the students might have.
- Provide Supplies. Each student should have a small pack of slim sticky notes that they keep in their books. The sticky notes are used for students to mark unfamiliar vocabulary and other interesting passages. It is a good way for the teacher mentor to see that the student has been reading.
- Encourage the Student. Teacher mentors can encourage the student by providing them with rewards when they finish their books.
- Review the Book. When the student has finished reading the book, they will take a big sticky note and write a short ‘review’ or opinion on the sticky note for the next reader to see.
Note: Make sure that the level of the book is appropriate for the student. Use the ‘5-Finger Rule’ when deciding what book to read. |
The Japanese battleship Yamato, built in the shipyards at Kure in 1941, was 263 meters long, with 12 Kanpon boilers turning four turbine screws 6 meters (20 ft.) in diameter, driving her at 27 knots (50 km/h), and she displaced an astonishing 72,800 tonnes. Along with her sister ship the Musashi, she was one of the largest, heaviest and most powerfully armed battleships ever constructed.
Sent to “fight until destroyed” while protecting Okinawa, the battleship never made it that far, and its great size may have hindered it when it was spotted by American air forces. The massive ship was bombed by American planes in April 1945. As the ship rolled over and began to sink, the ship’s explosives detonated and the blast created a mushroom cloud over 4 miles high, visible from 100 miles away. The Yamato lost most of her crew, some 3,055 out of 3,332 sailors.
The sinking of the Musashi (in 1944) and the spectacular sinking and explosion of the Yamato in 1945 represented a major psychological blow to the Japanese. Both ships had represented the apex of Japanese naval engineering, and were potent symbols of the power of the empire itself. Today the story of the Yamato serves as a sort of shorthand metaphor for the ending of the Japanese empire itself.
The Battleship Yamato Museum, dedicated to the ship and her history, is located by the harbor in Kure, near Hiroshima, and opened in 2005. It features a 26.3 meter long scale model of the battleship, and other fascinating items including a Japanese Type 62 Zero aircraft and a “Kaiten,” a one-man human driven torpedo, used by the Japanese as a suicide weapon.
The battleship has also inspired much in Japanese pop culture, such as the anime series Space Battleship Yamato, and models and robots from that series are on display in the museum.
What You Need to Know About the 25th Amendment
(Bloomberg) -- Quick: How could a sitting U.S. president be legally removed from office? Most people have heard of impeachment, a power granted to (and rarely used by) the U.S. Congress. But there’s also the 25th amendment to the U.S. Constitution, which provides an avenue for a president to be removed under extraordinary circumstances by his or her own leadership team. Critics of President Donald Trump have cited the amendment approvingly, even wishfully, over the past two years while reviewing what they consider his erratic behavior. More recently there are indications that deploying the amendment has even been discussed within Trump’s own government.
1. What does the 25th amendment say?
It provides that a president can be removed if the vice president and a majority of the cabinet determines he or she is “unable to discharge the powers and duties” of the office. If the president contests the finding, and the vice president and cabinet persist, Congress can order the president’s removal by a two-thirds vote in both chambers. The amendment also clarifies that the vice president is the successor if a president leaves office in midterm, and that the vice president becomes acting president when, say, a president undergoes major surgery.
2. Why does this even exist?
To address some questions about presidential and vice presidential succession that the Constitution didn’t specifically answer. For instance, when President William Henry Harrison died in office in 1841, there was a debate over whether Vice President John Tyler would become acting president, or president, or officially remain vice president. (Tyler decided on his own to have a judge administer the presidential oath of office.) The 25th amendment was introduced in Congress, and ratified by the requisite three-quarters of U.S. states, after the 1963 assassination of President John F. Kennedy. In the immediate confusion following the shooting of Kennedy, there were tense questions about who would run the country should he survive but only in a semiconscious or otherwise grievously wounded condition.
3. Has the amendment been used before?
Never to remove a sitting president, but twice to fill a vacant vice presidency. (Before the amendment took effect, the U.S. occasionally went long periods without any vice president.) In 1973, after Spiro Agnew was forced to resign because of tax-evasion charges, President Richard Nixon nominated Representative Gerald Ford to become vice president. He was approved by the House and Senate. After Nixon resigned the following year, Ford became president and nominated Nelson Rockefeller, a former governor of New York, as vice president. He was confirmed by Congress.
4. Why is the amendment coming up now?
The New York Times and ABC News reported that the deputy attorney general, Rod Rosenstein, last year discussed recruiting cabinet members to invoke the amendment to remove Trump from office. (Rosenstein denied the account and said in a statement to the Times that he sees "no basis" to invoke the amendment.) Weeks earlier, on Sept. 5, the Times published an op-ed by a person identified only as “a senior official in the Trump administration” who wrote, “Given the instability many witnessed, there were early whispers within the cabinet of invoking the 25th amendment, which would start a complex process for removing the president. But no one wanted to precipitate a constitutional crisis. So we will do what we can to steer the administration in the right direction until -- one way or another -- it’s over.”
The Reference Shelf
- The op-ed by the senior Trump administration official.
- The National Constitution Center’s history of the 25th amendment.
- A 25th amendment timeline courtesy of the Gerald R. Ford Presidential Library and Museum.
- QuickTake explainers on impeachment, Trump’s legal risks and the Trump-Russia saga.
©2018 Bloomberg L.P.
An electric arc, or arc discharge, is an electrical breakdown of a gas that produces an ongoing plasma discharge, resulting from a current through normally nonconductive media such as air. An arc discharge is characterized by a lower voltage than a glow discharge, and relies on thermionic emission of electrons from the electrodes supporting the arc. An archaic term is voltaic arc, as used in the phrase "voltaic arc lamp".
The phenomenon was first described by Sir Humphry Davy, in an 1801 paper published in William Nicholson's Journal of Natural Philosophy, Chemistry and the Arts. In the same year Davy publicly demonstrated the effect, before the Royal Society, by transmitting an electric current through two touching carbon rods and then pulling them a short distance apart. The demonstration produced a "feeble" arc, not readily distinguished from a sustained spark, between charcoal points. The Society subscribed for a more powerful battery of 1000 plates, and in 1808 he demonstrated the large-scale arc. He is credited with naming the arc. He called it an arc because it assumes the shape of an upward bow when the distance between the electrodes is not small. This is due to the buoyant force on the hot gas. Independently, the phenomenon was subsequently rediscovered and described as a "special fluid with electrical properties" by Vasily V. Petrov, a Russian scientist experimenting with a copper-zinc battery consisting of 4,200 discs.
An electric arc is the form of electric discharge with the highest current density. The maximum current through an arc is limited only by the external circuit, not by the arc itself. The voltage across an arc decreases as the current increases, giving it a dynamic negative resistance characteristic. Where a sustained arc is required, this characteristic requires some external circuit element to stabilize current, which would otherwise increase bounded only by the supply limit.
An arc between two electrodes can be initiated by ionization and glow discharge as the current through the electrodes is increased. The breakdown voltage of the electrode gap is a function of the pressure and the type of gas surrounding the electrodes. When an arc starts, its terminal voltage is much lower than that of a glow discharge, and the current is higher. An arc in gases near atmospheric pressure is characterized by visible light emission, high current density, and high temperature. An arc is distinguished from a glow discharge partly by the approximately equal effective temperatures of both electrons and positive ions; in a glow discharge, ions have much less thermal energy than the electrons.
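The dependence of breakdown voltage on gas pressure and gap distance described above is commonly modeled by Paschen's law. The sketch below is an illustrative implementation, assuming widely quoted textbook constants for air (A ≈ 15 (Torr·cm)⁻¹, B ≈ 365 V/(Torr·cm), secondary-emission coefficient γ ≈ 0.01); real values vary with the gas and the electrode material.

```python
import math

def paschen_breakdown_voltage(pd_torr_cm, A=15.0, B=365.0, gamma=0.01):
    """Breakdown voltage (V) of a gas gap from Paschen's law.

    pd_torr_cm -- product of pressure (Torr) and gap distance (cm).
    A, B       -- empirical gas constants (defaults: textbook values for air).
    gamma      -- secondary electron emission coefficient of the cathode.
    Returns None where the formula has no physical solution (pd too small).
    """
    denom = math.log(A * pd_torr_cm) - math.log(math.log(1.0 + 1.0 / gamma))
    if denom <= 0:
        return None  # left of the Paschen-minimum asymptote: no breakdown
    return B * pd_torr_cm / denom

# Breakdown of a 1 cm air gap at atmospheric pressure (pd = 760 Torr·cm):
v_1cm = paschen_breakdown_voltage(760.0)   # on the order of tens of kV

# Locate the Paschen minimum numerically over pd = 0.40 ... 9.99 Torr·cm:
candidates = [(pd, paschen_breakdown_voltage(pd))
              for pd in (x / 100.0 for x in range(40, 1000))]
pd_min, v_min = min((c for c in candidates if c[1] is not None),
                    key=lambda c: c[1])
print(f"1 cm gap at 1 atm: ~{v_1cm/1000:.1f} kV; "
      f"minimum ~{v_min:.0f} V near pd = {pd_min:.2f} Torr·cm")
```

With these constants the model reproduces the familiar behavior: kilovolt-scale breakdown at atmospheric pressure, and a minimum of a few hundred volts at low pressure-distance products.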
A drawn arc can be initiated by two electrodes initially in contact and drawn apart; this can initiate an arc without the high-voltage glow discharge. This is the way a welder starts to weld a joint, momentarily touching the welding electrode against the workpiece then withdrawing it till a stable arc is formed. Another example is separation of electrical contacts in switches, relays and circuit breakers; in high-energy circuits arc suppression may be required to prevent damage to contacts.
Electrical resistance along the continuous electric arc creates heat, which ionizes more gas molecules (the degree of ionization being determined by temperature); following the sequence solid-liquid-gas-plasma, the gas is gradually turned into a thermal plasma. A thermal plasma is in thermal equilibrium, which is to say that the temperature is relatively homogeneous across the heavy particles (i.e. atoms, molecules and ions) and the electrons. This is because, when thermal plasmas are generated, electrical energy is given to electrons, which, owing to their great mobility and large numbers, are able to disperse it rapidly to the heavy particles through elastic collisions.
Current in the arc is sustained by thermionic emission and field emission of electrons at the cathode. The current may be concentrated in a very small hot spot on the cathode; current densities on the order of one million amperes per square centimetre can be found. Unlike a glow discharge, an arc has little discernible structure, since the positive column is quite bright and extends nearly to the electrodes on both ends. The cathode fall and anode fall of a few volts occurs within a fraction of a millimetre of each electrode. The positive column has a lower potential gradient and may be absent in very short arcs.
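The cited current density gives a feel for how small the cathode hot spot is. This back-of-the-envelope sketch (the 100 A arc current is an illustrative value, not a measurement) estimates the diameter of a circular spot carrying the full arc current:

```python
import math

def cathode_spot_diameter_um(current_a, j_a_per_cm2=1.0e6):
    """Diameter (micrometres) of a circular cathode spot carrying
    `current_a` amperes at current density `j_a_per_cm2` (A/cm^2)."""
    area_cm2 = current_a / j_a_per_cm2
    d_cm = math.sqrt(4.0 * area_cm2 / math.pi)
    return d_cm * 1.0e4  # cm -> micrometres

# A 100 A arc concentrated at 1 MA/cm^2 fits in a spot only ~0.1 mm across:
print(f"{cathode_spot_diameter_um(100.0):.0f} um")
```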
A low-frequency (less than 100 Hz) alternating current arc resembles a direct current arc; on each cycle, the arc is initiated by breakdown, and the electrodes interchange roles as anode and cathode as current reverses. As the frequency of the current increases, there is not enough time for all ionization to disperse on each half cycle and the breakdown is no longer needed to sustain the arc; the voltage vs. current characteristic becomes more nearly ohmic.
The various shapes of electric arcs are emergent properties of non-linear patterns of current and electric field. The arc occurs in the gas-filled space between two conductive electrodes (often made of tungsten or carbon) and it results in a very high temperature, capable of melting or vaporizing most materials. An electric arc is a continuous discharge, while the similar electric spark discharge is momentary. An electric arc may occur either in direct current circuits or in alternating current circuits. In the latter case, the arc may re-strike on each half cycle of the current. An electric arc differs from a glow discharge in that the current density is quite high, and the voltage drop within the arc is low; at the cathode, the current density can be as high as one megaampere per square centimeter.
An electric arc has a non-linear relationship between current and voltage. Once the arc is established (either by progression from a glow discharge or by momentarily touching the electrodes then separating them), increased current results in a lower voltage between the arc terminals. This negative resistance effect requires that some positive form of impedance—an electrical ballast—be placed in the circuit to maintain a stable arc. This property is the reason uncontrolled electrical arcs in apparatus become so destructive, since once initiated, an arc will draw more and more current from a fixed-voltage supply until the apparatus is destroyed.
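A classical empirical model of this falling characteristic is the Ayrton equation for carbon arcs, V = a + bL + (c + dL)/I, for arc length L and current I. The sketch below finds where such an arc intersects the load line of a supply with a series ballast resistor and checks which intersection is stable; the constants are Ayrton's often-quoted values for carbon arcs in air and are assumptions here, not general data.

```python
import math

def arc_operating_points(v_supply, r_ballast, arc_len_mm,
                         a=38.9, b=2.07, c=16.5, d=10.5):
    """Intersections of the load line V = Vs - I*R with the Ayrton
    arc characteristic V = a + b*L + (c + d*L)/I.

    Returns a list of (current_A, voltage_V, stable) tuples. A point is
    stable when the total differential resistance R + dV_arc/dI > 0.
    """
    k1 = a + b * arc_len_mm          # constant part of the arc voltage
    k2 = c + d * arc_len_mm          # hyperbolic part (divided by I)
    # Vs - I*R = k1 + k2/I  ->  R*I^2 + (k1 - Vs)*I + k2 = 0
    disc = (k1 - v_supply) ** 2 - 4.0 * r_ballast * k2
    if disc < 0:
        return []                    # supply cannot sustain this arc
    points = []
    for sign in (+1.0, -1.0):
        i = ((v_supply - k1) + sign * math.sqrt(disc)) / (2.0 * r_ballast)
        if i > 0:
            dv_di = -k2 / (i * i)    # slope of the arc characteristic
            points.append((i, k1 + k2 / i, r_ballast + dv_di > 0))
    return points

# 220 V supply, 10-ohm ballast, 4 mm carbon arc:
for current, voltage, stable in arc_operating_points(220.0, 10.0, 4.0):
    print(f"I = {current:6.2f} A, V_arc = {voltage:6.1f} V, "
          f"{'stable' if stable else 'unstable'}")
```

The two roots illustrate why the ballast is needed: only the higher-current intersection, where the ballast's positive resistance outweighs the arc's negative slope, is a stable operating point.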
Industrially, electric arcs are used in welding, plasma cutting and electrical discharge machining, and in arc lamps for movie projectors and followspots in stage lighting. Electric arc furnaces are used to produce steel and other substances. Calcium carbide is made this way, as its production requires a large amount of energy to drive an endothermic reaction (at temperatures of 2500 °C).
Spark plugs are used in petrol internal combustion engines of vehicles to initiate the combustion of the fuel in a timed fashion.
Spark gaps are also used in electric stove lighters (both external and built-in).
Carbon arc lights were the first electric lights. They were used for street lighting in the 19th century and for specialized applications such as searchlights until World War II. Today, low-pressure electric arcs are used in many applications. For example, fluorescent tubes and mercury, sodium, and metal halide lamps are used for lighting; xenon arc lamps are used for movie projectors.
Electric arcs have been studied for electric propulsion of spacecraft.
Undesired or unintended electric arcing can have detrimental effects on electric power transmission, distribution systems and electronic equipment. Devices which may cause arcing include switches, circuit breakers, relay contacts, fuses and poor cable terminations. When an inductive circuit is switched off, the current cannot instantaneously jump to zero; a transient arc will be formed across the separating contacts. Switching devices susceptible to arcing are normally designed to contain and extinguish an arc, and snubber circuits can supply a path for transient currents, preventing arcing. If a circuit has enough current and voltage to sustain an arc formed outside of a switching device, the arc can cause damage to equipment such as melting of conductors, destruction of insulation, and fire. An arc flash describes an explosive electrical event that presents a hazard to people and equipment.
Arcing can also occur when a low-resistance channel (foreign object, conductive dust, moisture...) forms between places at different potential. The conductive channel can then facilitate the formation of an electric arc. The ionized air has high electrical conductivity approaching that of metals, and can conduct extremely high currents, causing a short circuit and tripping protective devices (fuses, circuit breakers). A similar situation may occur when a lightbulb burns out and the fragments of the filament draw an electric arc between the leads inside the bulb, leading to an overcurrent that trips the breakers.
Electric arcing over the surface of plastics causes their degradation. A conductive carbon-rich track tends to form in the arc path, negatively affecting their insulation properties. Arc susceptibility is tested according to ASTM D495, using point electrodes and continuous and intermittent arcs; it is measured as the number of seconds required to form a conductive track under high-voltage, low-current conditions. Some materials are less susceptible to degradation than others; e.g. polytetrafluoroethylene has an arc resistance of about 200 seconds. Among thermosetting plastics, alkyds and melamine resins are better than phenolic resins. Polyethylenes have an arc resistance of about 150 seconds; polystyrenes and polyvinyl chlorides have a relatively low resistance of about 70 seconds. Plastics can be formulated to emit gases with arc-extinguishing properties; these are known as arc-extinguishing plastics.
Arcing over some types of printed circuit boards, possibly due to cracks in the traces or the failure of a solder joint, renders the affected insulating layer conductive, as the dielectric is carbonized by the high temperatures involved. This conductivity prolongs the arcing through cascading failure of the surface.
Arc suppression is a method of attempting to reduce or eliminate the electrical arc. There are several possible areas of use of arc suppression methods, among them metal film deposition and sputtering, arc flash protection, electrostatic processes where electrical arcs are not desired (such as powder painting, air purification, PVDF film poling) and contact current arc suppression. In industrial, military and consumer electronic design, the latter method generally applies to devices such as electromechanical power switches, relays and contactors. In this context, arc suppression refers to the concept of contact protection.
Part of the energy of an electrical arc forms new chemical compounds from the air surrounding the arc; these include oxides of nitrogen, and ozone, which can be detected by its distinctive sharp smell. These chemicals can be produced by high-power contacts in relays and motor commutators, and are corrosive to nearby metal surfaces. Arcing also erodes the surfaces of the contacts, wearing them down and creating high contact resistance when closed.
- "High Voltage Arcs and Sparks" Videos of 230 kV 3-phase "Jacobs Ladder" and unintentional 500 kV power arc
- High Voltage Arc Gap Calculator to calculate the length of an arc knowing the voltage or vice versa
- Ívkisülés ("Arc discharge"): electric arc between two carbon rods. Video on the portal FizKapu (in Hungarian)
- Unusual arcing photos
The Sustainability of Breastfeeding: Protecting Both Babies and Mother Earth
April 30, 2017
It has become widely recognized that breastfeeding is the cornerstone of a child’s healthy development. In addition to containing all of the vitamins and nutrients your baby needs, breast milk supports your baby’s immune system and brain development. However, breastfeeding is also a cornerstone for something just as important that we don’t generally think about: a healthy planet.
Breast milk is a naturally renewable resource, leaves zero ecological footprint, and provides long-term environmental advantages:
- Breastfeeding is the clean energy choice. Breast milk is not industrially manufactured or transported, so it generates zero greenhouse gas emissions or other pollutants. In addition, there is no need to heat up breast milk prior to feeding.
- Breastfeeding saves scarce water resources. Breastfeeding has a zero water footprint because no water is required to prepare breast milk or sanitize feeding bottles (if exclusively feeding at the breast). All that a baby needs for the first 6 months of life is breast milk.
- Breastfeeding has zero waste. Breast milk is biodegradable, and breastfeeding does not produce waste from packaging processes or from plastic feeding bottles. In addition, mothers who exclusively breastfeed have delayed menstruation, reducing the number of feminine hygiene products and plastic wrappers that end up in the landfill.
Francis and Mulford (2000) sum up the sustainability of breastfeeding quite nicely:
“Human milk is not skimmed, processed, pasteurized, stored, transported, repackaged, dried, reconstituted, sterilized, or wasted…It requires no fuel for heating, no refrigeration, and is always ready to serve at the right temperature. In short, it is the most environmentally friendly food available.”
Myelomeningocele is a birth defect in which the backbone and spinal canal do not close before birth. The condition is a type of spina bifida.
Myelomeningocele is the most common type of spina bifida. It is a neural tube defect in which the bones of the spine do not completely form, resulting in an incomplete spinal canal. This causes the spinal cord and meninges (the tissues covering the spinal cord) to stick out of the child’s back.
The cause of myelomeningocele is unknown. However, low levels of folic acid in a woman’s body before and during early pregnancy are thought to play a part in this type of birth defect. The vitamin folic acid (or folate) is important for brain and spinal cord development.
Exams and Tests
Prenatal screening can help diagnose this condition. During the second trimester, pregnant women can have a blood test called the quadruple screen. This test screens for myelomeningocele, Down syndrome, and other congenital diseases in the baby. Most women carrying a baby with spina bifida will have higher-than-normal levels of a protein called maternal alpha fetoprotein (AFP).
Myelomeningocele can be seen after the child is born. A neurologic examination may show that the child has loss of nerve-related functions below the defect. For example, watching how the infant responds to pinpricks at various locations may reveal where he or she can feel the sensations.
Tests done on the baby after birth may include x-rays, ultrasound, CT, or MRI of the spinal area.
After birth, surgery to repair the defect is usually recommended at an early age. Before surgery, the infant must be handled carefully to reduce damage to the exposed spinal cord. This may include special care and positioning, protective devices, and changes in the methods of handling, feeding, and bathing.
Children who also have hydrocephalus may need a ventriculoperitoneal shunt placed. This will help drain the extra fluid.
Source: MedlinePlus