An ingenious device designed to separate the user and their lunch
Alton Towers is full of cola roasters *roller coasters
by Ezekiel, March 9, 2005
17 more definitions
Top Definition
The funnest things since sex. Roller coasters are located at theme parks, and come in many forms. They all kick ass.
Cedar Point has a lot of roller coasters.
by adonkeyisaass, September 14, 2003
By pure definition, a roller coaster is anything that may seem rebellious or appear dangerous when, in fact, it's perfectly safe for family fun as long as you're over the 48-inch height requirement. Using this definition, it can be determined that Linkin Park is the typical example of a roller coaster.
Society's definition of the roller coaster is that it is a form of mass-transit system for small- and medium-sized cities. In principle, it is similar to a bus or monorail, in that passengers pay a small fee to board and be transported elsewhere. With roller coasters, however, passengers are also sent through double-backwards corkscrews, 85-degree death drops, underground tunnel plunges, and triple-twistback loop-the-loops, often at speeds of over 100 miles an hour and with G-forces approaching space shuttle launch (or crash) levels. Many cities are reconsidering the installation of coasters, due to the number of heart attacks, pregnant woman injuries, and scalding-hot-coffee-spill disfigurations, but they're just pussing out.
Roller coasters are also located in theme parks like Disneyland, Six Flags over Somewhere Really Flat and Boring, Fantazyland, Disneyland 2, Duff Gardens, Vekomaland, and Grue Park. The majority of them are made out of steel, and given names like "Smegma" or "Mind Eraser" or "Deathmachine" or "That thing over there". People love to ride them, for the simple reason that prostitution was made illegal. They tend to generate long lines and vomit.
"The Mind Eraser is my favorite roller coaster"
~ Paul Ruben on a roller coaster
by kodiac1, July 8, 2006
the thing that makes your heart rate rise to the same rate as it is during sex. no joke. it's on a video =]
if you had sex on a rollercoaster, would you die?!
by darkXangel, July 10, 2008
What I live for. Make sure to visit Cedar Point in Ohio and Six Flags Great America in Illinois PEACE.
I'll ride roller coasters until I die.
by CoasterMan, July 31, 2005
A roller coaster is a thrill ride in which a train with an average of 6 cars moves along smooth rails at high speed, using wheels on top of the rails, under the rails, and at the side of the rails connected to the train. You are held in on the train by either shoulder restraints, lap bar restraints, or both, and some roller coasters have other types of restraints. Types of roller coasters you can get are:
wooden roller coaster
steel roller coaster
floorless roller coaster
stand-up roller coaster
inverted roller coaster
flying roller coaster
4th dimension roller coaster
launch roller coaster
and many more
Roller coasters are said to be one of the best things ever in the world, even better than sex. "Air time" is the real name of "that feeling you get in your stomach"; you get it when the roller coaster you're on reaches a high speed in a short amount of time, for example going down the first drop, or launching on a launch roller coaster. On a roller coaster you get elements like a loop; here are some examples:
barrel roll
heartline roll
and MUCH more
By the time you have finished reading this, at least 5 new roller coaster designs will have been made.
by theNG, May 8, 2010
A girl to whom height is important.
Girl 1: "Look at that guy over there; he's so cute!"
Girl 2: "He's too short for me. You have to be over 6 feet for this ride."
Girl 1: "Ah, you're a roller coaster. I guess your carny greased you up for nothing tonight. He's all mine!"
by JL Patrick, January 25, 2014
Last updated: April 20. 2014 6:12PM - 1034 Views
By Tom Burns
For stargazers (this one, at least), spring begins when Arcturus rises again in the east in the early evening. This year, given the cold weather, we’re still waiting for spring.
Sadly, poor Arcturus used to be one of the most important stars in the heavens, but now few people can find it.
Let’s rectify that unjust situation. The easiest way to find Arcturus is to start with the Big Dipper, the brightest stars of Ursa Major, the Great Bear. You’ll find the “Big Dip” high in the north just after dark. Follow the handle as it arcs around, and keep going until you come to a bright, yellow-orange star in the east. “Arc to Arcturus,” as the old stargazer’s saying goes.
Because of its closeness to Ursa Major, the Greeks called it the “Guardian of the Bear.” They believed that its power drove the Bear across the sky. In fact, Arcturus is the brightest star in the kite-shaped constellation Bootes, the Herdsman. It may seem odd for him and his brightest star to be herding a bear, but there you go.
You are looking at the fourth brightest star in the sky. The ancient Egyptians worshiped it as a god. It was the Arabian “Keeper of Heaven.” To the Chinese, it was the golden “Palace of the Emperor.” By closely observing its rising and setting, the ancient Greeks used it to set the dates for their annual festival.
For most folks in the northern hemisphere, it did indeed herald the return of spring, even in places like Egypt where the winter is not nearly so bad as Ohio’s. The star foretold the rising of the Nile River and the subsequent restoration of their precious fertile land. The star quite literally meant life to them.
Others were not so sanguine. Sailors saw its rising as a harbinger of violent storms at sea, which is not surprising, since it rises in the year’s most turbulent month. Around 460 BCE, the Greek physician Hippocrates believed that its position in the sky influenced human health. At its rising in the spring, “diseases often prove critical.” One was apt to die at the rising of the star.
A word for the superstitious: I’ve been watching Arcturus rise for over 50 springs, and I feel fine, thank you very much.
For centuries, Arcturus has been the subject of close scientific scrutiny. We now know that it is about 37 light years, or about 220 trillion miles, away from us. At 20 million miles in diameter, Arcturus is 25 or more times wider than our sun. Yet it contains only about the same amount of “star stuff” as our yellow dwarf star, the sun. Its stellar material is spread very thinly indeed, with only 1/3,000 the density of the sun.
In effect, Arcturus is a harbinger of things to come for our sun. As an "orange-giant" star, it has reached the end of its life. Like Arcturus, the sun will eventually swell to enormous size and engulf its inner planets, perhaps even out to the orbit of Earth.
In a few hundred million years, Arcturus will collapse to a white dwarf and die, sending its outer shell hurtling into space. Our own sun will suffer the same fate, but not so soon: in five or six billion years.
Arcturus is a relatively cool star compared to the sun. It is only “orange-hot” at 7,000 degrees compared to our sun’s “yellow-hot” 10,000 degrees. Since Arcturus is relatively close to us as stars go, scientists can actually measure the heat we receive from it. It isn’t much — only about the amount you’d get from a candle at five miles away.
For a brief moment in 1933, Arcturus reached the pinnacle of its fame. For the opening of the “Century of Progress” Exposition in Chicago, the light from the star was focused through telescopes on a photoelectric cell. The energy thus generated was used to flip the switch that turned on a huge bank of floodlights, and the Chicago Exposition was for the first time ablaze with the light of a distant star.
Since then, it’s been all downhill for Arcturus. Let’s face it. People don’t look at the stars as much as they used to. The very outside lighting used at the Chicago Exposition now illuminates our cities, and the stars have grown dimmer as a result.
Over the long run, the prognosis is bad for Arcturus. It is moving in our general direction at 90 miles per second, and it will reach its closest point to us — at just a few hundredths of a light year closer — in about 4,000 years. However, its arcing path will eventually cause it to move away. In a few million years, its distance from us will cause it to fade from view, and the “Keeper of Heaven” will desert us for a million centuries.
Tom Burns is the director of Perkins Observatory. He can be reached at [email protected].
Monday, January 27, 2014
The Parable of Deadcatpooville
There was once a large barnyard. It was ruled by cats, who thought the best thing for the barnyard was dead mice. If you walked by the ruling cats, you would hear a chorus, “dead mice, dead mice, we need dead mice; that’s how we’ll fix this barnyard.”
There were lots of problems in the barnyard. In one corner young cats were killing each other because they were bored, maybe they were not being taught how to be clever, how to use their cat skills to make the barnyard a better place. “Dead mice,” said one of the fat cats. “Police and prisons,” said another, "that’s what we need." A few suggested teaching the young cats how to kill just enough mice and no more. But the loudest cats said, “dead mice, dead mice, that’s what we need, more dead mice.”
In spite of this doubtful tactic, there was also a lot of potential in the barnyard. In one corner creative young cats were getting together and asking how they could make the barnyard a better place. Sometimes they would make suggestions, but when they did the ruling cats just said, “bring us dead mice, and then we’ll talk.”
One day a cat came from the next barnyard over to make a name for himself. He was a beautiful cat, very big and sleek with a shiny grey coat. The ruling cats liked him, but he had mean eyes.
Mr. Sleek approached a few of the fattest ruling cats and said, "I will bring you 18,000 dead mice AND I will do you a favor. I will build a cat poo train through your barnyard. We will scrape up catpoo from all over and bring it through your barnyard. Then no one will be able to smell all these dead mice I will bring you; all they will smell is cat poo." The fat cats thought this was a great idea.
The creative young cats said, "Oh no, fatcats, this is a terrible idea, it will be stinky and unhealthy. Let us help you come up with a better plan." They held meetings, and learned that none of the ordinary cats liked this plan. They agreed that maybe it was a better idea to build cat gardens and teach kittens good cat skills like how to cover their messes or use them as fertilizer. But the fatcats could only think of dead mice.
Mr. Sleek got wind of this and started piling up the dead mice in earnest. Then he threatened to take away the dead mice. There was a bad smell in the barnyard. More cats started killing each other. They said, "No one cares about us, they only care about dead mice and cat poo. They don't care about the barnyard, or building gardens, or teaching kittens. Why should we care?"
If only the ruling cats had listened, today that barnyard wouldn’t be called Deadcatpooville.
Let those who have ears hear....
1 comment:
Pooper Scooper said...
The dead cats should leave town with their poo poo mouths and crawl back into their caves. The efforts to stop what they call the poo poo train are a classic example of the ends justifies the means. In other words, lie to those around you, the ignorant ones who don't understand or want to understand projects, refuse to educate themselves to the truth and reality of business and environment; instead they follow their leaders on the council through all the hate, scare tactics, and threats, destroying the city of Pomona slowly but surely in the name of a "righteous cause". Erin, you are the worst kind of hypocrite, a so called Professor of Religious Studies at the Claremont Colleges, obviously not letting the positive aspects of any religion rub off on you. May your poo poo mouth and disgusting antics finally leave our city and its citizens alone. | <urn:uuid:bd66c5db-e997-40da-85d7-3614fe678230> | 2 | 2.09375 | 0.050899 | en | 0.97201 | http://diversitown.blogspot.com/2014/01/the-parable-of-deadcatpooville.html |
This section describes the practical aspects of getting started using and modifying Jikes RVM. The Quick Start Guide gives a 10-second overview of how to get started, while the following sections give more detailed instructions.
1. Get The Source
2. Build the RVM
3. Run the RVM
4. Configure the RVM
5. Modify the RVM
6. Test the RVM
7. Debug the RVM
8. Evaluate the RVM
The Java EE 5 Tutorial
Building, Packaging, and Deploying confirmer Using Ant
To build and package the confirmer example, do the following:
1. In a terminal window, go to tut-install/examples/ejb/confirmer.
2. Enter the following command:
ant
This compiles the source code and creates an EAR file, confirmer.ear, in the dist directory.
To deploy confirmer.ear, type the following command in a terminal window:
ant deploy | <urn:uuid:6722f752-3450-4f9f-8058-20142baf1ffc> | 2 | 1.765625 | 0.998508 | en | 0.743721 | http://docs.oracle.com/cd/E19159-01/819-3669/bncjt/index.html |
Sun Cluster 3.1 Data Services Developer's Guide
Returning From svc_start
Even when svc_start returns successfully, it is possible the underlying application failed to start. Therefore, svc_start must probe the application to verify that it is running before returning a success message. The probe must also take into account that the application might not be immediately available because it takes some time to start up. The svc_start method calls svc_wait, which is defined in xfnts.c, to verify the application is running, as follows.
Example 8–7
/* Wait for the service to start up fully */
scds_syslog_debug(DBG_LEVEL_HIGH,
    "Calling svc_wait to verify that service has started.");
rc = svc_wait(scds_handle);
scds_syslog_debug(DBG_LEVEL_HIGH,
    "Returned from svc_wait");
if (rc == 0) {
    scds_syslog(LOG_INFO, "Successfully started the service.");
} else {
    scds_syslog(LOG_ERR, "Failed to start the service.");
}
The svc_wait method calls scds_get_netaddr_list(3HA) to obtain the network-address resources needed to probe the application, as follows.
Example 8–8
/* obtain the network resource to use for probing */
if (scds_get_netaddr_list(scds_handle, &netaddr)) {
    scds_syslog(LOG_ERR,
        "No network address resources found in resource group.");
    return (1);
}

/* Return an error if there are no network resources */
if (netaddr == NULL || netaddr->num_netaddrs == 0) {
    scds_syslog(LOG_ERR,
        "No network address resource in resource group.");
    return (1);
}
Then svc_wait obtains the start_timeout and stop_timeout values, as follows.
Example 8–9
svc_start_timeout = scds_get_rs_start_timeout(scds_handle);
probe_timeout = scds_get_ext_probe_timeout(scds_handle);
To account for the time the server might take to start up, svc_wait calls scds_svc_wait and passes a timeout value equivalent to three percent of the start_timeout value. Then svc_wait calls svc_probe to verify that the application has started. The svc_probe method makes a simple socket connection to the server on the specified port. If it fails to connect to the port, svc_probe returns a value of 100, indicating total failure. If the connection goes through but the disconnect to the port fails, then svc_probe returns a value of 50.
On failure or partial failure of svc_probe, svc_wait calls scds_svc_wait with a timeout value of 5. The scds_svc_wait method limits the frequency of the probes to every five seconds. This method also counts the number of attempts to start the service. If the number of attempts exceeds the value of the Retry_count property of the resource within the period specified by the Retry_interval property of the resource, the scds_svc_wait method returns failure. In this case, the svc_start method also returns failure.
Example 8–10
#define SVC_CONNECT_TIMEOUT_PCT 95
#define SVC_WAIT_PCT 3

if (scds_svc_wait(scds_handle, (svc_start_timeout * SVC_WAIT_PCT)/100)
    != SCHA_ERR_NOERR) {
    scds_syslog(LOG_ERR, "Service failed to start.");
    return (1);
}

do {
    /*
     * probe the data service on the IP address of the
     * network resource and the portname
     */
    rc = svc_probe(scds_handle,
        netaddr->netaddrs[0].hostnames[0],
        netaddr->netaddrs[0].port_proto.port, probe_timeout);
    if (rc == SCHA_ERR_NOERR) {
        /* Success. Free up resources and return */
        scds_free_netaddr_list(netaddr);
        return (0);
    }

    /*
     * Call scds_svc_wait() so that if the service fails too
     * many times, we return early instead of retrying forever.
     */
    if (scds_svc_wait(scds_handle, SVC_WAIT_TIME)
        != SCHA_ERR_NOERR) {
        scds_syslog(LOG_ERR, "Service failed to start.");
        return (1);
    }

    /* Rely on RGM to timeout and terminate the program */
} while (1);
Note –
Before it exits, the xfnts_start method calls scds_close to reclaim resources allocated by scds_initialize. See The scds_initialize Call and the scds_close(3HA) man page for details. | <urn:uuid:0b0297f9-70c8-4fda-be7c-6cbbdd7c0793> | 2 | 2.140625 | 0.023377 | en | 0.720889 | http://docs.oracle.com/cd/E19199-01/816-3385/ch8_dsdl_sample-48/index.html |
Thursday, May 20, 2004
Korea's philosophy on white collar crime
I used to be a pretty laissez-faire guy in the US. However, the more time I spend in Korea, the more I realize that some controls on the market are a good thing. For example, I thought OSHA was a bad thing in the States until I saw a Korean construction site. Further experience has shown me that not only should you have rules, you should also enforce them consistently. Going back to my example, one can complain about OSHA's regulations, but their application is at least consistent and predictable.
This all brings me to this analogy in an opinion piece by Moon Chang-keuk, the editor of the op-ed page of the Joongang Ilbo:
A market is a place where all sorts of people gather. There are merchants and speculators, food sellers and pickpockets. Money goes around and an order is formed naturally. If police officers with clubs are sent to clean the market because the authorities decide it is dirty, the market would lose its vigor.
What genius thinks it's a good idea to have "pickpockets"? Why can't the police clean up these people? What if the merchants have formed a cartel? What if somebody sells products that are, as Korean law always so eloquently puts it, immoral and dangerous to society?
(Speaking of "immoral" products in Korea, did you know that you can not patent a dildo in Korea, while the US patent office has issued at least 15 for such devices?)
This is troublesome to me, since it is an indication of how Korea sees its business market, and perhaps business in general. While one could quibble about the details of anti-competitive practices, I am troubled by the fact that "pickpockets" are not only considered part of the market, but a desirable part of the market.
You could say that I am overreacting; however, seeing how corporate, and even government, malfeasance is treated here, coupled with this, makes me wonder if crime does pay.
Tuesday, May 18, 2004
Fantasyland on overseas births to prevent military service
Yesterday's Joongang Ilbo had another snippet that had me rolling in the aisles of the subway, erupting with laughter. A piece on the phenomenon of mothers traveling overseas to get their children non-Korean citizenship had this snippet:
One particularly sensitive issue deals with compulsory military service, which every man must carry out...critics of birth tours predict that if rich people continue to buy an exemption for their children, the Korean Army will be made up of only people from the lower classes.
WHAT COUNTRY DO THESE "CRITICS" LIVE IN!?!? You mean to tell me that without a foreign passport a rich kid could not avoid military service by his dad putting 5-10 million won in the right pocket? Come on, stop fooling yourselves, "critics": it has always been relatively easy for the upper classes of Korean society to shirk their duties as citizens.
A few years ago when I was working with IT companies I ran across something related. As part of the governments targeted development program, a young man can be exempted from military service if he works for companies sanctioned by the government.
I visited such companies as a part of my job function once. As is the case with Korean companies I always got a thick packet of information about practically everyone working in the company. Most of them had very impressive well-decorated people on their rolls all with appropriate engineering degrees. However you almost always had one or two undistinguished people, usually young, with degrees in say "Sociology" or "Korean Art". When I would ask about these people I would be told they have "good contacts". Yeah, daddy always has "good contacts".
Thursday, May 06, 2004
The Famous Mexican Salad
Last week I went to an anju (appetizer) cooking class offered by OB. My wife went to a similar class a few weeks ago in Ichon, where she learned about draft beer storage and serving. All of these classes are offered for free by the OB people out of self-interest (better-run "hofs" mean more beer sold, it's that simple).
Anyway, the food section was interesting. One of the more interesting tidbits I picked up was that it takes about 10 years for a new anju dish to receive broad acceptance. This sounds about right: when I came almost five years ago it was rare to see a sausage anju; now it is common in the high- and medium-end places, and soon it will be standard everywhere.
However I worry about the fate of food in Korea based on this one episode. We were learning how to cut fruit stylishly for the popular fruit anju. He suddenly diced about three cups of fruit (melon, apple, pear, etc), covered it with about a quarter cup of cream, and then two table spoons of sugar. OK I thought, we are making some kind of salad. I was right in a way.
To the fruit mixture he then added TWO CUPS of mayonnaise, a bit of pepper, and a handful of Frosted Flakes cereal. With me about to retch from all that mayo on the fruit, he scooped it into a bowl and proudly announced the name of the dish: "MEXICAN SALAD!"
Dungeons and Dragons Wiki
Astor (3.5e Deity)
Created By
Ganteka Future (talk)
Date Created: 4 January 2010
Status: Nearly Complete
Editing: Please feel free to edit constructively!
Intermediate Deity
Symbol: A flower made of gray feathers
Home Plane: Nymnelia (Sosha)
Alignment: Neutral Evil
Portfolio: Hate, Hiding, Patience, Study
Clergy Alignments: Lawful Evil, Neutral Evil, Chaotic Evil
Domains: Evil, Magic, Strength
Favored Weapon: Mace (any)
Summary: A disgraced celestial warrior capable of channeling great divine energy. He spends his time hidden and in study. He is the second oldest of the ill-fated order of archangels (the sabayoh) created by Rejiksson during The First and Last War against Exaka. He aided his brothers and sister in the attack on Exaka's fortress in Mesaba. After their victory at the end of the war, the sabayoh were to be put on trial and judged by the remaining deities of the Sosha Genia, who would decide the fates of all great, powerful, and potentially dangerous beings. As his trial neared, he saw other servants of war set off to their fates, being either subjugated, re-forged, or sealed away as prisoners for untold millennia to come. He grew fearful and escaped, quickly going into hiding and solitude. Astor, created with free will, felt he had every right to pursue his own freedom and fate.
Over the ages, he gradually turned to hate and resent his father. He trained and harnessed the control of his celestial power and attained Godhood, eventually transforming into a Sosha Orsa. Unlike his brother Nril, Astor is a great source of divine magic, which is why He is known as The Divine Fount. His magic, raw and primordial divine energy, continually restores his body, granting him divine invulnerability.
He appears as a youthful male humanoid of celestial grace, with pale skin and flowing white hair. Great ivory wings with feathers like blades sprout from his back and float about him gently. A constant, world weary expression haunts his face. Despite his gentle, disarmingly average features, an immeasurable aura flows about him, as if he were a star made of pure divine energy.
• Divine Fount: A wave of divine energy constantly flows out of his body, continually healing him and restoring his strength. This ability generates enough power to make him difficult for even the Sosha Genia to battle against. This ability, however, protects only Astor himself.
• Fount Stream: Astor can focus his aura into a solid beam that rips through anything of less than Godly power.
There is no need to be punished for your actions so long as you can avoid punishment for them. The world, and your continuing existence, shall punish you enough simply for the station you were born into.
Clergy and TemplesEdit
Astor keeps no active temples or clergy, and has no need for worshippers, but he will provide aid to those who do know of him and pray to him, so long as they don't draw too much attention to themselves.
Many prisoners will praise Astor.
Back to Main Page3.5e HomebrewDeitiesDeities of Rom
Boobrie
In the Dungeons & Dragons role-playing game, the boobrie is a magical beast loosely based upon the boobrie of Scottish Highlands folklore. It is a giant relative of the stork (12 feet tall), and lives in swamps and marshes. It attacks by finding a tall patch of grass, hiding in it, and waiting for prey to come. The boobrie hardly ever attacks humanoids, instead sustaining itself on catfish, snakes, lizards, giant insects, and other wetland denizens. The boobrie, due to things it has been forced to eat in lean times, is immune to poison. It has some magical properties.
Boobries cannot speak, and they are Neutral in alignment.
Voluntary bond buybacks to the rescue?
On Friday, the board of the Institute of International Finance (IIF) suggested its membership of over 400 of the world’s largest banks will consider Greek government bond buybacks alongside the French bank proposal for a voluntary Greek debt rollover. Will this help return Greece to solvency?
I explained why the French bank proposal was likely to fall short of targets in a post last week. If we assume markets are efficient, a voluntary bond buyback programme is also unlikely to return Greece to solvency.
A voluntary debt buyback programme would involve the Greek government offering to repurchase its debt from the banks holding it at the current, severely depressed market prices. The Greek government could use the funding it gets from the European Financial Stability Facility (EFSF) for these transactions. Participating banks would accept a writedown on the debt they were holding, but in exchange would reduce their exposure to Greece in the event of a debt restructuring.
If markets are efficient, a voluntary bond buyback would not significantly reduce the overall debt stock, even if a number of banks were to want to participate. Let’s imagine the EFSF lends enough to Greece that it can buy its way back to solvency. The markets will bake this into bond prices. Prices will rise and yields will fall, at which point the funding from the EFSF will no longer stretch far enough for Greece to repurchase the debt necessary to restore fiscal solvency. Rather than reduce overall debt levels, a bond buyback scheme would only serve to depress bond yields and, in turn, do little to improve Greece’s fiscal situation. Once it became clear that the EFSF funding would not reach far enough to return Greece to solvency, prices would fall and yields would return to their current elevated levels.
One Response to Voluntary bond buybacks to the rescue?
1. Dom White says:
But markets aren’t efficient.
An ‘efficient’ market assumes perfect competition, i.e., no agent has market power. In this case, you have a single buyer of Greek bonds, meaning it can effectively dictate prices to potential sellers.
Yoga Pose of the Week: Warrior III
Get your Warrior III pose on.
Last week we explored Virabhadrasana II and the importance of building roots to find balance and stability. This week’s posture, Virabhadrasana III (Warrior 3) is a great continuation of these principles. Virabhadrasana III strengthens and tones the ankles, shins and thighs, while building strength in the back and shoulders as well. It is a difficult shape that takes focus and attention. When properly warmed up and aligned, you will be able to enjoy the freedom and lightness within the challenge of the shape.
To find Virabhadrasana III, let’s first come into Crescent Lunge. This shape will help you to find your roots and alignment so you can take flight safely and intentionally.
1. Begin at the top of your mat in Tadasana (Mountain Pose).
2. Inhale to lift your arms overhead.
3. Keep your spine long as you hinge forward to Uttanasana (Standing Forward Bend).
4. Place your hands lightly on the floor and use the strength in your belly to send your right foot behind you to the back of your mat.
5. Extend out through your right heel. Think about lifting the space behind your kneecap.
6. Root down into your left big toe mound and into your left pinky toe evenly.
7. Now, keeping your fingertips light on the floor, draw into your low belly and begin to find strength from your center. Once you have your foundation, lift your arms up overhead–maintaining the energy in your legs and in your center.
8. Let your attention rest in your center. From there, notice your energy lifting up toward the sky. Remember to keep the shoulders soft and relaxed even as the hands reach up. Hold here in Crescent Lunge for five full breaths.
To transition into Virabhadrasana III you will need to harness the energy in your center and keep your front leg strong and connected to the earth. The more focused and present you remain, the more balanced your shape will be.
1. To take flight, first find your breath.
2. Reach your arms forward alongside your ears. Your torso should stay strong as it hovers over your front thigh.
3. Next, soften into your front knee, and begin to bring your weight into your front leg. Be sure to spread the weight evenly so you are not leaning too far forward into your toes–but instead you are pressing down through your heel and toes evenly.
4. Keep the torso toned and the arms extended as you lift your back foot from the ground.
5. As you lengthen your spine and even out your back, think about a long line from your fingertips to your toes–so that energy from your center is moving through the length of your body evenly.
6. Soften the muscles in your face and be sure to keep the back of the neck soft and long. Stay here for 5-10 breaths.
There are several key principles to consider in Virabhadrasana III.
First, the hips should be level. It is common for the raised leg to draw the hip up (in this case, the right hip), when it should stay even. It helps to think about the inseam of your raised leg (the right leg) turning up toward the sky.
Also, try not to lose the attention in your right toes–the foot should stay flexed with toes pointing straight to the floor and energy extending out through the center of your arch.
As you root down through your toes and energize your standing leg, remember to keep your knee soft so it is not hyper-extended.
To release, you can move out of the shape the same way you came in, or just lower your right foot down to meet your left and soften into Uttanasana. Rest here and, when you're ready, switch sides. Notice the differences in your body from side to side. And notice the level of awareness and attention it takes to root down and find stable ground.
Emily Buchholtz is a yoga instructor in Portland, OR. She believes everyone can benefit from a little more yoga.
It's Elemental
The Element Polonium
Atomic Number: 84
Atomic Weight: 209
Melting Point: 527 K (254°C or 489°F)
Boiling Point: 1235 K (962°C or 1764°F)
Density: 9.32 grams per cubic centimeter
Phase at Room Temperature: Solid
Element Classification: Metal
Period Number: 6 Group Number: 16 Group Name: Chalcogen
What's in a name? Named for the country of Poland.
Say what? Polonium is pronounced as peh-LOW-nee-em.
History and Uses:
Polonium was discovered by Marie Sklodowska Curie, a Polish chemist, in 1898. She obtained polonium from pitchblende, a material that contains uranium, after noticing that unrefined pitchblende was more radioactive than the uranium that was separated from it. She reasoned that pitchblende must contain at least one other radioactive element. Curie needed to refine several tons of pitchblende in order to obtain tiny amounts of polonium and radium, another radioactive element discovered by Curie. One ton of uranium ore contains only about 100 micrograms (0.0001 grams) of polonium.
Due to its scarcity, polonium is usually produced by bombarding bismuth-209 with neutrons in a nuclear reactor. This forms bismuth-210, which has a half-life of 5 days. Bismuth-210 decays into polonium-210 through beta decay. Milligram amounts of polonium-210 have been produced by this method.
Polonium-210 is a very strong emitter of alpha particles. A single gram of polonium-210 creates 140 Watts of heat energy and is being considered as a lightweight heat source for thermoelectric power for spacecraft. Polonium-210 has a half-life of 138.39 days.
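The half-life figures above translate directly into decay arithmetic: after a time t, the fraction of a sample remaining is 0.5^(t / half-life), and heat output scales with it. A minimal sketch in Python, using only the 138.39-day half-life and 140 W/g figures quoted above (the one-year horizon is an arbitrary illustration):

```python
def fraction_remaining(t_days, half_life_days=138.39):
    """Fraction of a radioactive sample left after t_days."""
    return 0.5 ** (t_days / half_life_days)

# Heat output of an initial 1 g of Po-210 after one year,
# starting from the 140 W/g quoted above.
frac = fraction_remaining(365)
print(f"{frac:.1%} of the sample remains")            # about 16%
print(f"heat output is down to ~{140 * frac:.0f} W")  # about 22 W
```

By the same arithmetic, roughly 2.6 half-lives elapse in a year, which is why polonium-210 suits compact, short-duration heat sources rather than long missions.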
Polonium's most stable isotope, polonium-209, has a half-life of 102 years. It decays into lead-205 through alpha decay. Polonium-209 is available from Oak Ridge National Laboratory at the cost of about $3200 per microcurie.
Polonium can be used to eliminate static electricity in machinery, where it builds up through processes such as the rolling of paper, wire or sheet metal, although other materials that emit beta particles are more commonly used for this purpose. Polonium is also used in brushes for removing dust from photographic films, although the polonium must be carefully sealed to protect the user from contamination. Polonium is also combined with beryllium to form neutron sources.
Estimated Crustal Abundance: 2×10⁻¹⁰ milligrams per kilogram
Estimated Oceanic Abundance: 1.5×10⁻¹⁴ milligrams per liter
Number of Stable Isotopes: 0
Ionization Energy: 8.417 eV
Oxidation States: +4, +2
Electron Shell Configuration:
1s2
2s2 2p6
3s2 3p6 3d10
4s2 4p6 4d10 4f14
5s2 5p6 5d10
6s2 6p4
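A quick sanity check on the shell configuration: for a neutral atom, the subshell occupancies must sum to the atomic number (84, per the data at the top of the page). A minimal sketch, writing out the full configuration including the innermost 1s² shell:

```python
# Subshell occupancies for polonium, in the order listed above.
config = {
    "1s": 2,
    "2s": 2, "2p": 6,
    "3s": 2, "3p": 6, "3d": 10,
    "4s": 2, "4p": 6, "4d": 10, "4f": 14,
    "5s": 2, "5p": 6, "5d": 10,
    "6s": 2, "6p": 4,
}

total = sum(config.values())
print(total)  # 84 - matches polonium's atomic number
```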
The electoral statement of candidate Dr. Taniosn Ebraham Ayo (Category B)
Syria for us all
Qamishly, Syria: Syria is the cradle of civilization and the alphabet, and a home for religions. Throughout her long history she has provided a model for the integration and interaction of civilizations based on principles of national, cultural, linguistic and religious diversity, historically embodying the will for openness and communication between the different components of Syrian society and strengthening the values of brotherhood, tolerance, coexistence and national partnership, far from all forms of monopoly and exclusivity. These values have constituted the basis for national unity and the symbol of the civilized message carried by Syria to the whole world.
Syria is currently witnessing unprecedented tensions, frustrations, extremism and fanaticism, as well as a surge in national and sectarian extremism within Syrian society for different reasons; this is a retrograde step and a departure from the national and cultural enterprise the Syrian people were looking forward to achieving. The continuation of this condition is a warning signal of grave threats to the unity and cohesion of the society unless all the patriotic and democratic forces are aware of this deterioration and make every effort to take the initiative and work on the production of a new concept of the modern state, open to the world at large, through the formulation of new foundations for a state based on democracy, pluralism, human rights and the principles of citizenship: a state ruled by law, order and institutions, a state for all its citizens with their various national and religious affiliations under the umbrella of one unified national Syrian identity.
As a matter of fact, the only way out that ensures the unity and safety of our society and country, and prepares for the establishment of a genuine national unity that would enable our people to liberate the occupied Golan Heights and stave off the dangers surrounding us, as well as to put the country on the road to the bright future desired by the Syrian people, is to launch a comprehensive national reform project covering all political, economic and administrative aspects, given that it is a pressing and consistent societal need that must not be linked to any regional or international circumstances or any external pressure. Political reform is the key to initiating any serious reform able to spare the country the risk of chaos and potential shocks that could lead to instability and disturbance of the civil peace.
The most important step in this reform project begins with draft constitutional amendments based on the values of democracy, secularism and the principle of citizenship and real partnership in the homeland, away from all forms of exclusivity and monopoly. Second comes the recognition of national and religious diversity in the context of national unity and the unified national identity of Syria as a permanent home for all its children.
For all that, at the national level, I am going to work to achieve the following objectives:
1- The constitutional recognition of the Assyrian Chaldean Syriacs as an indigenous people deeply rooted in the national soil, and the guarantee of their rights alongside the national rights of all components of Syrian society, such as Arabs, Kurds, Armenians and others. Moreover, I will do my best to give prominence to the Syriac language and culture, to protect it and make it a living national culture, and to revive interest in it by including Syriac studies in the curricula of Syrian institutes and universities. I will also work with UNESCO to highlight its importance and have it declared a universal human heritage.
2- Bridging diasporas with the motherland and providing the Syrian expatriates with all the necessary facilities in order to consolidate their relations with their motherland and further engage them in the process of development and defense of their national issues.
3- Discontinuation of the Emergency Laws and abrogation of Exceptional Laws and Courts. The release of all political detainees and prisoners of conscience, in addition to closing, once and for all, the file of political detention.
4- Enacting a democratic law for Political Parties that takes into consideration the national and political diversity of our society; an Election Law that would ensure active participation and fair, genuine representation for all components of society; and a new Law for Press and Media guaranteeing free expression for all.
5- The independence of the Judiciary and the separation of powers in public life.
Moreover, I will strive to:
1- Modernize the laws to be in line with the spirit of the age, and apply them so that the law becomes the umbrella protecting all citizens without differentiation or discrimination.
2- Improve the living standards of citizens, reduce unemployment among young people, and combat all forms of corruption and waste.
3- Establish productive projects in al-Hasakah governorate and boost its development and growth at all levels: agricultural, industrial, educational and touristic.
4- Upgrade healthcare services in the governorate and build healthcare centers in all specializations.
5- Establish a state university in the governorate covering all specializations.
6- Protect and give importance to archaeological sites and work for the expansion of archaeological excavations.
Candidate Dr. Taniosn Ebraham Ayo (Category B)
© Assyrian Democratic Organization - Postfach 13 44 - D-65003 WIESBADEN - Fax:0049-0611/ 2050941
• Belgian Grand Prix
Reutemann win marred by death of mechanic
ESPN Staff
May 17, 1981
A sombre Carlos Reutemann on the podium © Sutton Images
Carlos Reutemann won the Belgian Grand Prix, but it was a weekend marred by endless bickering over the legality of cars, the death of a mechanic who fell from the pit wall, and a serious accident involving another.
Reutemann had been an innocent party in an accident in practice when an Osella mechanic fell from the tiny and cramped pit wall and landed between the wheels of the Argentine's car. The mechanic, Giovanni Amadeo, died of his injuries. In the race itself, the Arrows head mechanic was run over and suffered broken legs.
The FIA's continuing inability to explain clearly what was and was not legal just added to the sense of anarchy. The farce of new regulations over body height was evident when cars slowed half a lap from the pits to allow their suspension to decompress enough for them to pass the fixed centimetre ground clearance rule. Alan Jones failed to pass the test and found his fastest qualifying time disallowed.
Ferrari also made itself less than popular when it alone protested that rules limiting the field to 30 cars should be invoked, sidelining Patrick Tambay's Theodore. Ferrari then vetoed a suggestion that a pre-qualifying session be held.
The troubles didn't end there, as a bizarre start-line accident followed. As the cars lined up ready for the lights, several drivers suffered overheating water temperatures and waved their arms as a warning that they had stalled. The organisers started the race anyway, just as Arrows mechanic Dave Luckett leapt over the pit wall with an air line to fire up Riccardo Patrese's Arrows, which had stalled.
As he crouched behind the car, the yellow flag near the Arrows was withdrawn and the field blasted away. Those at the back of the grid were accelerating when the second Arrows of Siegfried Stohr ploughed into the back of Patrese, injuring Luckett in the process. Stohr was beside himself with grief, but the race carried on. The person who helped out most was Didier Pironi: he stopped his car on the grid and forced everyone else to stop. The organisers had no choice but to halt the race.
Forty minutes later the race was restarted and Reutemann took the win, with his championship rivals suffering problems. Alan Jones and Nelson Piquet tangled, and the Brabham spun into the catch-fencing; Jones missed a gear and went off later in the race. That allowed Jacques Laffite up to second in the Ligier, ahead of Nigel Mansell's Lotus. It also gave Reutemann a 12-point lead in the championship, but he was in no mood to celebrate.
A rare good news story came from Lotus. The team had missed the previous race after the FIA banned its new Lotus 88, but Colin Chapman dusted down the old Lotus 81 and was rewarded with Mansell's third and Elio de Angelis' fifth.
A shambolic weekend was not quite over. Rene Arnoux was arrested following an altercation as he left the circuit and spent several hours in a local jail.
© ESPN Sports Media Ltd.
Memory Alpha
There are only 6 links to "Hail", but it's been used in almost every episode of Star trek it seems. Is there a reason we've avoided making a page to Hail, if so what links here should be unlinked (as most seem to have been added recently, like Shrei's summaries) Could this be a redirect to Communication or Subspace communication or something? - AJHalliwell 19:48, 20 Aug 2005 (UTC)
You know, it's funny, as I was writing the summary for "These Are the Voyages...", I originally had a link in there for hail, but removed it b/c it didn't exist. Anyway, there's no reason some type of link to hail shouldn't exist, but it may be better as a redirect to Subspace communication, so long as info about "hail" and "hailing frequencies" is included in that article. --From Andoria with Love 19:53, 20 Aug 2005 (UTC)
James Warren (engineer)
From Wikipedia, the free encyclopedia
James Warren (1806–1908) was a British engineer who, in 1848 (along with Willoughby Theobald Monzani), patented the Warren-style truss bridge and girder design. This bridge design is constructed mainly from equilateral triangles, whose members can carry both tension and compression. The first suspension bridge to utilize a Warren truss in its design was the Manhattan Bridge in New York City.[1]
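The "equilateral triangles" description fixes the geometry completely: with panel length L, the top chord sits at height L·√3/2 and every diagonal comes out the same length as a panel. A minimal sketch generating node coordinates for such a truss (the panel count and length are illustrative values, not from the article):

```python
import math

def warren_truss_nodes(panels, panel_len=1.0):
    """Node coordinates for a Warren truss built from equilateral triangles.

    Bottom-chord nodes sit at y = 0, one per panel point; top-chord nodes
    sit midway between them at the height of an equilateral triangle.
    """
    height = panel_len * math.sqrt(3) / 2
    bottom = [(i * panel_len, 0.0) for i in range(panels + 1)]
    top = [((i + 0.5) * panel_len, height) for i in range(panels)]
    return bottom, top

bottom, top = warren_truss_nodes(4)
# Each diagonal spans a bottom node to the adjacent top node and has
# length equal to panel_len - the equilateral property described above.
print(math.dist(bottom[0], top[0]))  # ≈ panel_len
```

Because every chord segment and diagonal comes out the same length, the members are interchangeable, which simplifies fabrication.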
A Handley Page H.P.42 showing the Warren Truss diagonal interplane struts
The Warren Truss design was also used in early aviation when biplanes were dominant, the alternating diagonal truss being used for the interplane struts in aircraft such as the Handley Page H.P.42 airliner and the Fiat CR.42 fighter. The Warren Truss is one of the most widely used and known bridge styles worldwide.
1. ^ American Society of Civil Engineers Metropolitan Section (2014). "Manhattan Bridge". Historic Landmarks. New York City: American Society of Civil Engineers Metropolitan Section. Retrieved 2014-06-02.
Mosaic (genetics)
In genetics, mosaicism describes the presence of two or more populations of cells with different genotypes in one individual who has developed from a single fertilized egg. Mosaicism can result from various mechanisms including chromosome non-disjunction, anaphase lag and endoreplication.[2] Anaphase lagging appears to be the main process by which mosaicism arises in the preimplantation embryo.[2] Mosaicism may also result from a mutation during development which is propagated to only a subset of the adult cells.
Mosaics may be contrasted with chimerism, in which two or more genotypes arise from the fusion of more than one fertilized zygote in the early stages of embryonic development.
Mosaicism has been reported in as many as 70% of cleavage-stage embryos and 90% of blastocyst-stage embryos derived from in vitro fertilization.[2]
Different types of mosaicism exist, such as gonadal mosaicism (restricted to the gametes) and tissue (somatic) mosaicism.
Somatic mosaicism
Somatic mosaicism occurs when the somatic cells of the body are of more than one genotype. In the more common mosaics, different genotypes arise from a single fertilized egg cell, due to mitotic errors at first or later cleavages.
In rare cases, intersex conditions can be caused by mosaicism where some cells in the body have XX and others XY chromosomes (46, XX/XY).[3][4]
The most common form of mosaicism found through prenatal diagnosis involves trisomies. Although most forms of trisomy are due to problems in meiosis and affect all cells of the organism, there are cases where the trisomy occurs in only a selection of the cells. This may be caused by a nondisjunction event in an early mitosis, resulting in a loss of a chromosome from some trisomic cells.[5] Generally this leads to a milder phenotype than in non-mosaic patients with the same disorder.
An example of this is one of the milder forms of Klinefelter syndrome, called 46/47 XY/XXY mosaic wherein some of the patient's cells contain XY chromosomes, and some contain XXY chromosomes. The 46/47 annotation indicates that the XY cells have the normal number of 46 total chromosomes, and the XXY cells have a total of 47 chromosomes.
Around 30% of Turner's syndrome cases demonstrate mosaicism, while complete monosomy (45, X) occurs in about 50–60% of cases.
But mosaicism need not necessarily be deleterious. Revertant somatic mosaicism is a rare recombination event in which there is a spontaneous correction of a mutant, pathogenic allele.[6] In revertant mosaicism, the healthy tissue formed by mitotic recombination can outcompete the original, surrounding mutant cells in tissues like blood and epithelia that regenerate often.[6] In the skin disorder ichthyosis with confetti, normal skin spots appear early in life and increase in number and size over time.[6]
Other endogenous factors can also lead to mosaicism including mobile elements, DNA polymerase slippage, and unbalanced chromosomal segregation.[7] Exogenous factors include nicotine and UV radiation.[7] Somatic mosaics have been created in Drosophila using x‑ray treatment and the use of irradiation to induce somatic mutation has been a useful technique in the study of genetics.[8]
True mosaicism should not be mistaken for the phenomenon of X‑inactivation, where all cells in an organism have the same genotype, but a different copy of the X chromosome is expressed in different cells (such as in calico cats). However, all multicellular organisms are likely to be somatic mosaics to some extent.[9] Since the human intergenerational mutation rate is approximately 10⁻⁸ per position per haploid genome[10] and there are about 10¹⁴ cells in the human body,[9] it is likely that during the course of a lifetime most humans have acquired many of the known genetic mutations in their somatic cells,[9] and thus humans, along with most multicellular organisms, are all somatic mosaics to some extent. To extend the definition, the ends of chromosomes, called telomeres, shorten with every cell division and can vary from cell to cell, representing a special case of somatic mosaicism.[7]
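The two orders of magnitude quoted above make this claim easy to check with back-of-envelope arithmetic. A minimal sketch (the 10⁻⁸ per-position rate and the 10¹⁴ cell count come from the text; treating every cell as an independent draw is a deliberate simplification that ignores lineage structure and selection):

```python
MUTATION_RATE = 1e-8  # per position per haploid genome per generation (from the text)
N_CELLS = 1e14        # rough number of cells in a human body (from the text)

# Expected number of cells carrying a new mutation at any one genomic
# position, if each cell were an independent draw at this rate:
expected_mutant_cells = MUTATION_RATE * N_CELLS
print(f"~{expected_mutant_cells:,.0f} cells per position")
```

Even this crude estimate puts on the order of a million cells behind any single-position mutation, which is the sense in which essentially every human is a somatic mosaic.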
Somatic mutation leading to mosaicism is prevalent in the beginning and end stages of human life.[7] Somatic mosaics are common in embryogenesis due to retrotransposition of L1 and Alu transposable elements.[7] In early development, DNA from undifferentiated cell types may be more susceptible to mobile-element invasion due to long, unmethylated regions in the genome.[7] Further, the accumulation of DNA copy errors and damage over a lifetime leads to greater occurrences of mosaic tissues in aging humans. As human longevity has increased dramatically over the last century, the genome may not have had time to adapt to the cumulative effects of mutagenesis.[7] Thus, cancer research has shown that somatic mutations are increasingly present throughout a lifetime and are responsible for most leukemias, lymphomas, and solid tumors.[11]
Mitotic recombination
One basic mechanism which can produce mosaic tissue is mitotic recombination or somatic crossover. It was first discovered by Curt Stern in Drosophila in 1936. The amount of tissue which is mosaic depends on where in the tree of cell division the exchange takes place.[12]
Germline mosaicism
Main article: Germline mosaicism
Germline or gonadal mosaicism is a special form of mosaicism, where some gametes—i.e., sperm or oocytes—carry a mutation, but the rest are normal.[13][14]
Use in experimental biology
It is sometimes inconvenient to use negatively marked clones, especially when generating very small patches of cells, where it is more difficult to see a dark spot on a bright background than a bright spot on a dark background. It is possible to create positively marked clones using the so-called MARCM ("Mosaic Analysis with a Repressible Cell Marker", pronounced [mark-em]) system, developed by Liqun Luo, a professor at Stanford University, and his post-doc Tzumin Lee, who now leads a group at Janelia Farm Research Campus. This system builds on the GAL4/UAS system, which is used to express GFP in specific cells. However, a globally expressed GAL80 gene is used to repress the action of GAL4, preventing the expression of GFP. Instead of using GFP to mark the wild-type chromosome as above, GAL80 serves this purpose, so that when it is removed by mitotic recombination, GAL4 is allowed to function and GFP turns on. This results in the cells of interest being marked brightly in a dark background.[15]
The phenomenon was discovered by Curt Stern. In the 1930s, he demonstrated that genetic recombination, normal in meiosis, can also take place in mitosis.[16][17] When it does, it results in somatic (body) mosaics. These are organisms which contain two or more genetically distinct types of tissue.[18] The term "somatic mosaicism" was used by C. W. Cotterman in 1956 in his seminal paper on antigenic variation.[7]
References
1. ^ Strachan, Tom; Read, Andrew P. (1999). "Glossary". Human Molecular Genetics (2nd ed.). New York: Wiley–Liss. ISBN 1-85996-202-5. PMID 21089233.
2. ^ a b c Taylor, T. H.; Gitlin, S. A.; Patrick, J. L.; Crain, J. L.; Wilson, J. M.; Griffin, D. K. (2014). "The origin, mechanisms, incidence and clinical consequences of chromosomal mosaicism in humans". Human Reproduction Update 20 (4): 571–581. doi:10.1093/humupd/dmu016. ISSN 1355-4786.
3. ^ Marchi, M. De et al. (2008). "True hermaphroditism with XX/XY sex chromosome mosaicism: Report of a case". Clinical Genetics 10 (5): 265–72. doi:10.1111/j.1399-0004.1976.tb00047.x. PMID 991437.
4. ^ Fitzgerald, P. H.; Donald, R. A.; Kirk, R. L (1979). "A true hermaphrodite dispermic chimera with 46,XX and 46,XY karyotypes". Clinical genetics 15 (1): 89–96. doi:10.1111/j.1399-0004.1979.tb02032.x. PMID 759058.
5. ^ Strachan, Tom; Read, Andrew P. (1999). "Chromosome abnormalities". Human Molecular Genetics (2nd ed.). New York: Wiley–Liss. ISBN 1-85996-202-5. PMID 21089233.
6. ^ a b c Jongmans, M. C. J. et al. (2012). "Revertant somatic mosaicism by mitotic recombination in Dyskeratosis Congenita.". American Journal of Human Genetics 90 (3): 426–433. doi:10.1016/j.ajhg.2012.01.004.
7. ^ a b c d e f g h De, S. (2011). "Somatic mosaicism in healthy human tissues". Trends in Genetics 27 (6): 217–223. doi:10.1016/j.tig.2011.03.002.
8. ^ Blair, S. S. "Genetic mosaic techniques for studying Drosophila development". Development 130 (21): 5065–5072. doi:10.1242/dev.00774.
9. ^ a b c Hall, J. G. (1988). "Review and hypotheses: Somatic mosaicism, observations related to clinical genetics". American Journal of Human Genetics 43 (4): 355–363.
10. ^ Roach, J. C.; Glusman, G. et al (2010). "Analysis of genetic inheritance in a family quartet by whole-genome sequencing". Science 328 (5978): 636–639. doi:10.1126/science.1186802. PMC 3037280. PMID 20220176.
11. ^ Jacobs, K. B. et al. (2012). "Detectable Clonal Mosaicism and Its Relationship to Aging and Cancer". Nature Genetics 44 (6): 651–658. doi:10.1038/ng.2270.
12. ^ King, R. C.; Stansfield, W. D.; Mulligan, P. K. (2006). A Dictionary of Genetics (7th ed.). Oxford University Press. p. 282.
13. ^
14. ^ Schwab, Angela L. et al. (2007). "Gonadal mosaicism and familial adenomatous polyposis". Familial Cancer 7 (2): 173–7. doi:10.1007/s10689-007-9169-1. PMID 18026870.
15. ^ Lee, Tzumin; Luo, Liqun (1999). "Mosaic analysis with a repressible cell marker for studies of gene function in neuronal morphogenesis". Neuron 22 (3): 451–61. doi:10.1016/S0896-6273(00)80701-1. PMID 10197526.
16. ^ Stern, C. and K. Sekiguti 1931. Analyse eines Mosaikindividuums bei Drosophila melanogaster. Bio. Zentr. 51, 194–199.
17. ^ Stern C. 1936. "Somatic crossing-over and segregation in Drosophila melanogaster". Genetics 21, 625–730.
18. ^ Stern, Curt 1968. "Genetic mosaics in animals and man". pp27–129, in Stern, C. Genetic Mosaics and Other Essays. Harvard University Press, Cambridge, MA.
Russian chanson
Russian chanson (Russian: Русский шансон, tr. Russkiy shanson) (from French "chanson") is a neologism for a musical genre covering a range of Russian songs, including city romance songs, author song performed by singer-songwriters, and blatnaya pesnya or "criminals' songs" that are based on the themes of the urban underclass and the criminal underworld.
The Russian chanson originated in the Russian Empire. The songs sung by serfs and political prisoners of the Tsar are very similar in content to the songs sung in the Soviet Union and the Russian Federation today. However, during the Soviet Union, the style changed, and the songs became part of the culture of samizdat and dissent.[1]
During the Khrushchev thaw, the Soviet Union released millions of prisoners from the gulag. When the former prisoners returned from the gulags back to their homes in the 1950s, the songs that they had sung in the camps became popular with Soviet students and nonconformist intelligentsia.[2] Then, in the second half of the 1960s, the more conservative Leonid Brezhnev and Aleksey Kosygin made a slight reversal to this process, albeit never reaching the tight, stringent controls experienced during the Stalin era. This, combined with the influx of cheap and portable magnetic tape recorders led to an increase in the popularity and consumption of the criminal songs.[3] These songs were performed by Soviet bards; folk singers who sung with simple guitar accompaniment. Since Soviet culture officials did not approve of the songs, many of the bards initially became popular playing at small, private student parties.[4] The attendees at these gatherings would record the concert with a tape recorder. The songs of the bards spread through the sharing and recopying of these tapes.[5]
After the fall of the Soviet Union and the establishment of the Russian Federation, the musical style of the songs began to shift, although the content did not. Modern artists affiliated with the chanson genre often sing not in the traditional style used even by the Khrushchev-era performers, but more professionally, borrowing musical arrangements from pop, rock, and jazz. Although the strict cultural control of the Soviet Union has ended, many Russian officials still publicly denounce the genre. Russia's prosecutor general, Vladimir Ustinov, referred to the songs as "propaganda of the criminal subculture".[6] The official disapproval of chansons has led to an absence of the songs from Russian radio; they are usually played only late at night, if they are played at all. Still, many politicians are fans of the genre, and one of the popular modern chanson singers, Alexander Rosenbaum, was a member of the Duma as part of the United Russia Party.[7] Rosenbaum was also awarded the title of People's Artist of Russia by a decree of Vladimir Putin.
Soviet officials
Many of the Soviet bards also worked as writers and actors for the Soviet state. These artists were required to submit their works to government censors for approval. When bards performed uncensored pieces which fans would then distribute, they risked their official jobs.[8] In December 1971 a popular Soviet bard, Alexander Galich, was expelled from the Union of Soviet Writers for publishing uncensored works abroad and making his views known to large groups of people in the Soviet Union, which Galich claims happened after a Politburo member heard a tape of Galich's uncensored songs at his daughter's wedding reception.[9] Galich describes the official backlash following his expulsion from the Union of Soviet Writers in an open letter to the International Committee on Human Rights that he wrote after being denied permission to travel abroad: "I am deprived of...the right to see my work published, the right to sign a contract with a theater, film studio, or publishing house, the right to perform in public".[10] Other bards who were not official Soviet artists still risked their jobs by performing uncensored songs. In 1968 Yuli Kim, a Russian language and literature teacher at a boarding school attached to Moscow State University, was dismissed for performing uncensored songs critical of the Soviet Union.[11] Although the official stance of the Soviet Union towards these songs was intolerant, many Soviet officials enjoyed the uncensored tapes. Bulat Okudzhava, a bard often criticized by Soviet officials, was invited to give a concert at the Soviet embassy in Warsaw.[12]
In addition to active repression from the state, Soviet bards also faced criticisms on the literary merit of their songs from Soviet officials. Even songs that were not openly critical of the Soviet union, like the songs of Vladimir Vysotsky, came under attack for their content and the way they were performed. The transgression was not anti-Soviet content, like the songs of Galich, but content that was considered “un-soviet”, and contributed the denigration of the Soviet people.[13] During a meeting of 140 writers, artists and film workers in 1962, Leonid Ilyichev, chairman of the Ideological Commission of the Soviet Communist Party’s Central Committee, criticized the songs of Okudzhava. Ilyichev called them “vulgar songs...designed to appeal to low and cheap tastes” and said they were “out of keeping with the entire structure of [Soviet] life”.[14] Artists in Soviet service also criticized the bards that sung unapproved songs.[15] The newspaper Sovetskaia Rossiia (Soviet Russia) attacked Vysotsky for offering “Philistinism, vulgarity, and immorality” under the “guise of art”.[16] Although Vysotsky was often criticized by officials, he never faced imprisonment or exile like other bards. This was in part due to his use of sarcasm as opposed to criticism, his lack of political activity, but mainly due to his immense popularity among the Soviet People.[17]
Gradually, Soviet authorities eased their reactions to the bards who sang outlaw songs. In 1981, after Vysotsky’s death, the state allowed the publication of a collection of his poetry (although official state poets still attacked Vysotsky’s poems).[18] Under Gorbachev, the policy of glasnost made the outlaw songs officially acceptable. The songs that previously had to be distributed unofficially through personally copied tapes could now be purchased in stores.[19] In 1987, Vysotsky was posthumously awarded the state literary prize.[20] Authorities, however, largely ignored the songs that were more directly critical of the Soviet Union.[21]
Soviet public
The public appeal of the outlaw songs in the Soviet Union was fueled by the contrast between the outlaw songs and state-sanctioned music. The outlaw songs did not carry the civic-minded messages of their official counterparts, and were instead much more personal.[22] They touched on subjects taboo in Soviet society, like anti-semitism, the growing class divide and the power abuses of the political elite.[23] The more personal nature of the music, both in content and style, gave it a sense of authenticity that led to the mass appeal of the songs.[24] The songs were often very crude, an aspect heavily criticized by the state and echoed by some Soviet citizens outside the government.[25]
Alexander Rosenbaum is known as both a bard and a performer of Russian chanson
Grigory Leps mixed chanson with rock music
Lyube is reputed as Vladimir Putin's favourite band
Lyrically, Chanson songs are usually narrative-driven and are more similar to ballads than pop songs. In fact, this is one of the reasons for naming the genre after the French Chanson (the other being musical similarity).
Chanson themes vary greatly depending on the time in which the songs were written and the places in which they are set. For example, songs set in the Odessa of the 1910s tend to be more cheerful, and are sharply contrasted by the dark, depressing, and violent songs set in the Stalinist era. Interestingly, it is common for a Chanson artist, regardless of the time in which he writes his songs, to include songs of all periods in his repertoire, and to write songs set in an era different from his own. This often leads to confusion: for example, the bard Alexander Gorodnitsky reports being beaten up once after claiming authorship of one of his songs, which was attributed to a Gulag inmate living over 30 years earlier.
Recurring themes in Chanson songs include:
• Military and patriotic themes. There is a subgenre of Chanson known as Military chanson.
• White Guard (anticommunist side of the Russian Civil War)
• The execution of a traitor to a criminal gang (the first such song is probably "Murka"). This is usually in the context of the Russian criminals' law, which punishes betrayal very harshly.
• Being sent to, or released from, a labor camp.
• Love in the context of criminal life, the conflict usually being either betrayal or separation due to imprisonment.
• Glorification of the 'merry thief' archetype. These songs are often set in the city of Odessa, where the Jewish Mafia was characterized as being particularly cheerful and colorful. Odessa Couplets often depict the rich and glorious life before Stalin's regime, when Odessa was among the only cities in the young Soviet Union to have free trade. These songs are often narrations of weddings and parties, sometimes based on real events.
• Political satire of different forms.
• Emotional appeals to relatives or loved ones, often by characters leading unlawful or morally controversial lives.
As seen above, chanson is rooted in prison life and criminal culture, but some chanson performers insist that the genre transcends mere criminal songs, and look upon Alexander Vertinsky and Alla Bayanova as their precursors.
Musical style
The musical style of the older Russian criminal songs, much like that of the Russian bard songs, is heavily influenced by the classical Russian romance genre of the 19th century, more specifically a subgenre known as the City or Urban Romance. Romance songs are almost always divided into four-line rhymed couplets, rarely have a chorus, and follow a fairly consistent chord progression (Am, Dm, and E, sometimes with C and G added). The strumming pattern is also predictable: it is either a march or a slow 3/4 waltz pattern, often using fingerpicking rather than strumming. Romance songs were traditionally played on a Russian guitar, since its tuning makes playing these chords easier (most of them are played as a single-finger bar chord).
Criminal songs were prominently performed by artists like Arkady Severny, Vladimir Vysotsky, Alexander Gorodnitsky, and Alexander Rosenbaum. Note that, with the exception of Severny, these performers are usually better known for their bard songs. Severny was one of the rare performers who focused exclusively on collecting and performing old criminal songs.
Modern chanson performers include the band Lesopoval, Spartak Arutyunyan-Belomorkanal Band, Boris Davidyan or BOKA (Armenian Shanson), Ivan Kuchin, Butyrka, Aleksandr Novikov, Willi Tokarev, Mikhail Shufutinsky, and Mikhail Krug (murdered in 2002 at his villa in Tver).
A more recent artist who plays chanson with Rock music is Grigory Leps. Elena Vaenga, another recently popularized singer, actress and songwriter, sings in the styles of Russian shanson, folk music and folk rock.
British singer Marc Almond is the only western artist to receive acclaim in western Europe as well as Russia for singing English versions of Russian romances and Russian chanson on his albums Heart On Snow and Orpheus in Exile (the songs of Vadim Kozin).
1. ^ Sophia Kishkovsky, “Notes from a Russian Musical Underground: The Sounds of Chanson,” New York Times, July 16, 2006, accessed May 5, 2013, 2.
2. ^ Christopher Lazarski, “Vladimir Vysotsky and His Cult,” Russian Review 51 (1992): 60.
3. ^ Gene Sosin, “Magnitizdat: Uncensored Songs of Dissent,” in Dissent in the USSR: Politics, Ideology, and People, ed. Rudolf L. Tokes. (Baltimore: Johns Hopkins University Press, 1975), 276.
4. ^ Lazarski, “Vladimir Vysotsky,” 60.
5. ^ Sosin, “Magnitizdat,” 278.
6. ^ Kishkovsky, “The Sounds of Chanson,” 1.
7. ^ Kishkovsky, “The Sounds of Chanson,” 2.
8. ^ Rosette C. Larmont, “Horace’s Heirs: Beyond Censorship in the Soviet Songs of the Magnitizdat,” World Literature Today 53 (1979): 220.
9. ^ Sosin, “Magnitizdat,” 299.
10. ^ Sosin, “Magnitizdat,” 301.
11. ^ Sosin, “Magnitizdat,” 286.
12. ^ Sosin, “Magnitizdat,” 284.
13. ^ Lazarski, “Vladimir Vysotsky,” 65.
14. ^ Sosin, “Magnitizdat,” 282.
15. ^ Lazarski, “Vladimir Vysotsky,” 66.
16. ^ Sosin, “Magnitizdat,” 303.
17. ^ Lazarski, “Vladimir Vysotsky,” 65.
18. ^ Lazarski, “Vladimir Vysotsky,” 67–68.
19. ^ Lazarski, “Vladimir Vysotsky,” 68.
20. ^ Lazarski, “Vladimir Vysotsky,” 69.
21. ^ Lazarski, “Vladimir Vysotsky,” 69.
22. ^ Sosin, “Magnitizdat,” 283.
23. ^ Larmont, “Horace’s Heirs,” 223.
24. ^ Lazarski, “Vladimir Vysotsky,” 62.
25. ^ Lazarski, “Vladimir Vysotsky,” 61.
External links
From Wikipedia, the free encyclopedia
This article is about the theoretical planetary engineering process. For the Shellac album, see Terraform (album). For the Knut album, see Terraformer (album).
An artist's conception shows a terraformed Mars in four stages of development.
Terraforming (literally, "Earth-shaping") of a planet, moon, or other body is the theoretical process of deliberately modifying its atmosphere, temperature, surface topography or ecology to be similar to the biosphere of Earth to make it habitable by Earth-like life.
The term "terraforming" is sometimes used more generally as a synonym for planetary engineering, although some consider this more general usage an error.[citation needed] The concept of terraforming developed from both science fiction and actual science. The term was coined by Jack Williamson in a science-fiction story (Collision Orbit) published during 1942 in Astounding Science Fiction,[1] but the concept may pre-date this work.
Based on experiences with Earth, the environment of a planet can be altered deliberately; however, the feasibility of creating an unconstrained planetary biosphere that mimics Earth on another planet has yet to be verified. Mars is usually considered to be the most likely candidate for terraforming. Much study has been done concerning the possibility of heating the planet and altering its atmosphere, and NASA has even hosted debates on the subject. Several potential methods of altering the climate of Mars may fall within humanity's technological capabilities, but at present the economic resources required are far beyond what any government or society is willing to allocate. The long timescales and practicality of terraforming are the subject of debate. Other unanswered questions relate to the ethics, logistics, economics, politics, and methodology of altering the environment of an extraterrestrial world.
History of scholarly study
Carl Sagan, an astronomer, proposed the planetary engineering of Venus in an article published in the journal Science in 1961.[2] Sagan imagined seeding the atmosphere of Venus with algae, which would convert water, nitrogen and carbon dioxide into organic compounds. As this process removed carbon dioxide from the atmosphere, the greenhouse effect would be reduced until surface temperatures dropped to "comfortable" levels. The resulting carbon, Sagan supposed, would be incinerated by the high surface temperatures of Venus, and thus be sequestered in the form of "graphite or some involatile form of carbon" on the planet's surface.[3] However, later discoveries about the conditions on Venus made this particular approach impossible. One problem is that the clouds of Venus are composed of a highly concentrated sulfuric acid solution. Even if atmospheric algae could thrive in the hostile environment of Venus's upper atmosphere, an even more insurmountable problem is that its atmosphere is simply far too thick—the high atmospheric pressure would result in an "atmosphere of nearly pure molecular oxygen" and cause the planet's surface to be thickly covered in fine graphite powder.[3] This volatile combination could not be sustained through time. Any carbon that was fixed in organic form would be liberated as carbon dioxide again through combustion, "short-circuiting" the terraforming process.[3]
Sagan also visualized making Mars habitable for human life in "Planetary Engineering on Mars" (1973), an article published in the journal Icarus.[4] Three years later, NASA addressed the issue of planetary engineering officially in a study, but used the term "planetary ecosynthesis" instead.[5] The study concluded that it was possible for Mars to support life and be made into a habitable planet. The first conference session on terraforming, then referred to as "Planetary Modeling", was organized that same year.
In March 1979, NASA engineer and author James Oberg organized the First Terraforming Colloquium, a special session at the Lunar and Planetary Science Conference in Houston. Oberg popularized the terraforming concepts discussed at the colloquium to the general public in his book New Earths (1981).[6] Not until 1982 was the word terraforming used in the title of a published journal article. Planetologist Christopher McKay wrote "Terraforming Mars", a paper for the Journal of the British Interplanetary Society.[7] The paper discussed the prospects of a self-regulating Martian biosphere, and McKay's use of the word has since become the preferred term. In 1984, James Lovelock and Michael Allaby published The Greening of Mars.[8] Lovelock's book was one of the first to describe a novel method of warming Mars, where chlorofluorocarbons (CFCs) are added to the atmosphere.
Motivated by Lovelock's book, biophysicist Robert Haynes worked behind the scenes to promote terraforming, and contributed the neologism Ecopoiesis, forming the word from the Greek οἶκος, oikos, "house",[9] and ποίησις, poiesis, "production".[10] Ecopoiesis refers to the origin of an ecosystem. In the context of space exploration, Haynes describes ecopoiesis as the "fabrication of a sustainable ecosystem on a currently lifeless, sterile planet". Ecopoiesis is a type of planetary engineering and is one of the first stages of terraformation. This primary stage of ecosystem creation is usually restricted to the initial seeding of microbial life.[11] As conditions approach that of Earth, plant life could be brought in, and this will accelerate the production of oxygen, theoretically making the planet eventually able to support animal life.
Aspects and definitions
Beginning in 1985, Martyn J. Fogg began publishing several articles on terraforming. He also served as editor for a full issue on terraforming for the Journal of the British Interplanetary Society in 1992. In his book Terraforming: Engineering Planetary Environments (1995), Fogg proposed the following definitions for different aspects related to terraforming:[12]
• Planetary engineering: the application of technology for the purpose of influencing the global properties of a planet.
• Geoengineering: planetary engineering applied specifically to Earth. It includes only those macroengineering concepts that deal with the alteration of some global parameter, such as the greenhouse effect, atmospheric composition, insolation or impact flux.
• Terraforming: a process of planetary engineering, specifically directed at enhancing the capacity of an extraterrestrial planetary environment to support life as we know it. The ultimate achievement in terraforming would be to create an open planetary biosphere emulating all the functions of the biosphere of Earth, one that would be fully habitable for human beings.
• Astrophysical engineering: taken to represent proposed activities, relating to future habitation, that are envisaged to occur on a scale greater than that of "conventional" planetary engineering.
Fogg also devised definitions for candidate planets of varying degrees of human compatibility:[13]
• Habitable Planet (HP): A world with an environment sufficiently similar to Earth as to allow comfortable and free human habitation.
• Biocompatible Planet (BP): A planet possessing the necessary physical parameters for life to flourish on its surface. If initially lifeless, then such a world could host a biosphere of considerable complexity without the need for terraforming.
• Easily Terraformable Planet (ETP): A planet that might be rendered biocompatible, or possibly habitable, and maintained so by modest planetary engineering techniques and with the limited resources of a starship or robot precursor mission.
Fogg suggests that Mars was a biologically compatible planet in its youth, but is not now in any of these three categories, because it can only be terraformed with greater difficulty.[citation needed] Mars Society founder Robert Zubrin produced a plan for a Mars return mission called Mars Direct that would set up a permanent human presence on Mars and steer efforts towards eventual terraformation.[14]
Requirements for sustaining terrestrial life
An absolute requirement for life is an energy source, but the notion of planetary habitability implies that many other geophysical, geochemical, and astrophysical criteria must be met before the surface of an astronomical body is able to support life. Of particular interest is the set of factors that has sustained complex, multicellular animals in addition to simpler organisms on this planet. Research and theory in this regard is a component of planetary science and the emerging discipline of astrobiology.
In its astrobiology roadmap, NASA has defined the principal habitability criteria as "extended regions of liquid water, conditions favorable for the assembly of complex organic molecules, and energy sources to sustain metabolism."[15]
Preliminary stages
Once conditions become more suitable for life of the introduced species, the importation of microbial life could begin.[12] As conditions approach that of Earth, plant life could also be brought in. This would accelerate the production of oxygen, which theoretically would make the planet eventually able to support animal life.
Prospective planets
Artist's conception of a terraformed Mars
Main article: Terraforming of Mars
In many respects, Mars is the most like Earth of all the other planets in the Solar System.[16] Indeed, it is thought that Mars once did have a more Earth-like environment early in its history, with a thicker atmosphere and abundant water that was lost over the course of hundreds of millions of years.[17]
The exact mechanism of this loss is still unclear, though three mechanisms in particular seem likely: First, whenever surface water is present, carbon dioxide reacts with rocks to form carbonates, thus drawing atmosphere off and binding it to the planetary surface. On Earth, this process is counteracted when plate tectonics works to cause volcanic eruptions that vent carbon dioxide back to the atmosphere. On Mars, the lack of such tectonic activity worked to prevent the recycling of gases locked up in sediments.[18]
Second, the lack of a magnetosphere surrounding the entire surface of Mars may have allowed the solar wind to gradually erode the atmosphere.[19] Convection within the core of Mars, which is made mostly of iron,[20] originally generated a magnetic field. However, the dynamo ceased to function long ago,[21] and the magnetic field of Mars has largely disappeared, probably due to "... loss of core heat, solidification of most of the core, and/or changes in the mantle convection regime."[22] Mars does still retain a limited magnetosphere that covers approximately 40% of its surface. Rather than uniformly covering and protecting the atmosphere from solar wind, however, the magnetic field takes the form of a collection of smaller, umbrella-shaped fields, mainly clustered together around the planet's southern hemisphere.[23] It is within these regions that chunks of atmosphere are violently "blown away", as astronomer David Brain explains:
The joined fields wrapped themselves around a packet of gas at the top of the Martian atmosphere, forming a magnetic capsule a thousand kilometres wide with ionised air trapped inside... Solar wind pressure caused the capsule to 'pinch off' and it blew away, taking its cargo of air with it.[23]
Finally, between approximately 4.1 and 3.8 billion years ago, asteroid impacts during the Late Heavy Bombardment caused significant changes to the surface environment of objects in the Solar System. The low gravity of Mars suggests that these impacts could have ejected much of the Martian atmosphere into deep space.[24]
Terraforming Mars would entail two major interlaced changes: building the atmosphere and heating it.[25] A thicker atmosphere of greenhouse gases such as carbon dioxide would trap incoming solar radiation. Because the raised temperature would add greenhouse gases to the atmosphere, the two processes would augment each other.[26]
Artist's conception of a terraformed Venus
Main article: Terraforming of Venus
Terraforming Venus requires two major changes: removing most of the planet's dense 9 MPa carbon dioxide atmosphere and reducing the planet's 450 °C (723.15 K) surface temperature. These goals are closely interrelated, because Venus's extreme temperature is thought to be due to the greenhouse effect caused by its dense atmosphere. Sequestering the atmospheric carbon would likely solve the temperature problem as well.
Europa (moon)
Europa, a moon of Jupiter, is a potential candidate for terraforming.[citation needed] One advantage to Europa is the presence of liquid water, which could be extremely helpful for the introduction of any form of life.[27][not in citation given] The difficulties are numerous: Europa lies within a huge radiation belt around Jupiter,[28] which would require the building of radiation deflectors, currently impractical. Additionally, this satellite is covered in ice and would have to be heated, and there would need to be a supply of oxygen,[29] though this could, at sufficient energy cost, be manufactured locally by electrolysis of the copious water available.
Artist's conception of what the Moon might look like terraformed
Other bodies in the Solar System
Other possible candidates for terraforming (possibly only partial or paraterraforming) include Titan, Callisto, Ganymede, the Moon, and even Mercury, Saturn's moon Enceladus and the dwarf planet Ceres. Most, however, have too little mass and gravity to hold an atmosphere indefinitely (although it may be possible, though not certain, that an atmosphere could remain for tens of thousands of years or be replenished as needed). In addition, aside from the Moon and Mercury, most of these worlds are so far from the Sun that adding sufficient heat would be much more difficult than it would be for Mars. Terraforming Mercury would present different challenges, but in certain aspects would be easier than terraforming Venus. Though not widely discussed, the possibility of terraforming Mercury's poles has been presented. Saturn's moon Titan offers several unique advantages, such as an atmospheric pressure similar to Earth's and an abundance of nitrogen and frozen water. Jupiter's moons Europa, Ganymede, and Callisto also have an abundance of water ice.
Also known as the "worldhouse" concept, or domes in smaller versions, paraterraforming involves the construction of a habitable enclosure on a planet which eventually grows to encompass most of the planet's usable area.[30] The enclosure would consist of a transparent roof held one or more kilometers above the surface, pressurized with a breathable atmosphere, and anchored with tension towers and cables at regular intervals. Proponents claim worldhouses can be constructed with technology known since the 1960s. The Biosphere 2 project built a dome on Earth that contained a habitable environment. The project encountered difficulties in operation, including unexpected population explosions of some plants and animals,[31][32] and a lower than anticipated production of oxygen by plants, requiring extra oxygen to be pumped in.[33]
Paraterraforming has several advantages over the traditional approach to terraforming. For example, it provides an immediate payback to investors (assuming a capitalistic financing model). Although it starts out in a small area (a domed city for example), it quickly provides habitable space. The paraterraforming approach also allows for a modular approach that can be tailored to the needs of the planet's population, growing only as fast and only in those areas where it is required. Finally, paraterraforming greatly reduces the amount of atmosphere that one would need to add to planets like Mars to provide Earth-like atmospheric pressures. By using a solid envelope in this manner, even bodies which would otherwise be unable to retain an atmosphere at all (such as asteroids) could be given a habitable environment. The environment under an artificial worldhouse roof would also likely be more amenable to artificial manipulation. Paraterraforming is also less likely to cause harm to any native lifeforms that may hypothetically inhabit the planet, as the parts of the planet outside the enclosure will not normally be affected unlike terraforming which affects the entire planet.
It has the disadvantage of requiring massive amounts of construction and maintenance activity. It also would likely not have a completely independent water cycle: although rainfall might develop under a high enough roof, it would probably not be efficient enough for agriculture or a full water cycle. The extra cost might be offset somewhat by automated manufacturing and repair mechanisms.[citation needed] A worldhouse might also be more susceptible to catastrophic failure if a major breach occurred, though this risk might be reduced by compartmentalization and other active safety precautions. Meteor strikes are a particular concern, because without an external atmosphere they would reach the surface before burning up.
Ethical issues
There is a philosophical debate within biology and ecology as to whether terraforming other worlds is an ethical endeavor. From the point of view of a cosmocentric ethic, this involves balancing the need for the preservation of human life against the intrinsic value of existing planetary ecologies.[34]
On the pro-terraforming side of the argument, there are those like Robert Zubrin, Martyn J. Fogg, Richard L. S. Taylor and the late Carl Sagan who believe that it is humanity's moral obligation to make other worlds suitable for life, as a continuation of the history of life transforming the environments around it on Earth.[35][36] They also point out that Earth would eventually be destroyed if nature takes its course, so that humanity faces a very long-term choice between terraforming other worlds or allowing all terrestrial life to become extinct. Terraforming totally barren planets, it is asserted, is not morally wrong as it does not affect any other life.
The opposing argument posits that terraforming would be an unethical interference in nature, and that given humanity's past treatment of Earth, other planets may be better off without human interference. Still others strike a middle ground, such as Christopher McKay, who argues that terraforming is ethically sound only once we have completely assured that an alien planet does not harbor life of its own; if it does, we should not try to reshape it to our own use, but should instead engineer its environment to artificially nurture the alien life and help it thrive and co-evolve, or even co-exist, with humans.[37] Even this would be seen as a type of terraforming by the strictest of ecocentrists, who would say that all life has the right, in its home biosphere, to evolve without outside interference.
Economic issues
The initial cost of such projects as planetary terraforming would be gargantuan, and the infrastructure of such an enterprise would have to be built from scratch. Such technology is not yet developed, let alone financially feasible at the moment. John Hickman has pointed out that almost none of the current schemes for terraforming incorporate economic strategies, and most of their models and expectations seem highly optimistic.[38] Access to the vast resources of space may make such projects more economically feasible, though the initial investment required to enable easy access to space will likely be tremendous (see Asteroid mining, solar power satellites, In-Situ Resource Utilization, bootstrapping, space elevator).
Political issues
Further information: Outer Space Treaty
National pride, rivalries between nations, and the politics of public relations have in the past been the primary motivations for shaping space projects.[39][40] It is reasonable to assume that these factors would also be present in planetary terraforming efforts.
In popular culture
Terraforming is a common concept in science fiction, ranging from television, movies and novels to video games.
The concept of changing a planet for habitation precedes the use of the word 'terraforming', with H. G. Wells describing a reverse terraforming in his story The War of the Worlds, in which aliens change Earth for their own benefit. Olaf Stapledon's Last and First Men (1930) provides the first example in fiction in which Venus is modified, after a long and destructive war with the original inhabitants, who naturally object to the process. The word itself was coined in fiction by Jack Williamson, but features in many other stories of the 1950s and 60s, such as Poul Anderson's The Big Rain and James Blish's "Pantropy" stories. Recent works involving terraforming of Mars include the Mars trilogy by Kim Stanley Robinson and The Platform by James Garvey. In Isaac Asimov's Robot Series, fifty planets have been colonized and terraformed by the powerful race of humans called Spacers, and when Earth is allowed to attempt colonization once more, the Settlers begin the process of terraforming their new worlds immediately. Twenty thousand years in the future, all the habitable planets in the galaxy have been terraformed and form the basis of the Galactic Empire in Asimov's Foundation Series. In the Star Wars series, the planet Manaan uses a paraterraforming-like infrastructure, with all buildings built above the water as the habitable land of the planet; there is no natural land on the planet. In the Star Wars Expanded Universe, the planet Taris is restored to its former state after a Sith bombardment through aggressive terraforming.
Terraforming has also been explored on television and in feature films, including the "Genesis device", developed to quickly terraform barren planets, in the movie Star Trek II: The Wrath of Khan. A similar device exists in the animated feature film Titan A.E. which depicts the eponymous ship Titan, capable of creating a planet. The word 'terraforming' was used in James Cameron's Aliens to describe the act of processing a planet's atmosphere through nuclear reactors over several decades in order to make it habitable. The 2000 movie Red Planet also uses the motif: after humanity faces heavy overpopulation and pollution on Earth, uncrewed space probes loaded with algae are sent to Mars with the aim of terraforming and creating a breathable atmosphere. The television series Firefly and its cinematic sequel Serenity (circa 2517) are set in a planetary system with about seventy terraformed planets and moons. In the 2008 video game Spore, the player is able to terraform any planet by using either terraforming rays or a "Staff of Life" that completely terraforms the planet and fills it with creatures. Doctor Who episode "The Doctor's Daughter" also references terraforming, where a glass orb is broken to release gases which terraform the planet the characters are on at the time. One crew member in Ridley Scott's 2012 Prometheus bets another that the purpose of their visit is terraforming.
In the video game Halo (2001), the main setting is an ancient ring-shaped structure whose radius is nearly that of Earth; the structure is terraformed to support an Earth-like ecosystem. The rings are created using Forerunner technology, and terraformed during their construction by an extra-galactic construct known as The Ark or Installation 00. Various works of fiction based on Halo also mention the terraforming of planets.[41]
John Christopher's "Tripods" trilogy has a twist on terraforming. Aliens have conquered Earth. They live in three domed cities located in Germany, China, and Panama where they breathe an atmosphere poisonous to Earth life (probably containing chlorine). As the plot unfolds, the protagonist determines the aliens are awaiting the arrival of another ship from their home star containing the equipment for them to terraform (or alienscape) Earth. If this occurs, all Earth life will be wiped out by the poisoned atmosphere. In M. Night Shyamalan's After Earth, the planet Nova Prime has been terraformed to be adaptable for human life because Earth has lost all properties of being adjustable for humanity (e.g. violent thermal shifts)
In Zack Snyder's Man of Steel, General Zod attempts to use terraforming to revive the environment of planet Krypton on Earth.
See also
1. ^ "Science Fiction Citations: terraforming". Retrieved 2006-06-16.
2. ^ Sagan, Carl (1961). "The Planet Venus". Science 133 (3456): 849–58. Bibcode:1961Sci...133..849S. doi:10.1126/science.133.3456.849. PMID 17789744.
3. ^ a b c Sagan 1997, pp. 276–7.
4. ^ Sagan, Carl (1973). "Planetary Engineering on Mars". Icarus 20 (4): 513. Bibcode:1973Icar...20..513S. doi:10.1016/0019-1035(73)90026-2.
5. ^ Averner& MacElroy, 1976
6. ^ Oberg, James Edward (1981). New Earths: Restructuring Earth and Other Planets. Stackpole Books, Harrisburg, Pennsylvania.
7. ^ McKay, Christopher (1982). "Terraforming Mars". Journal of the British Interplanetary Society.
8. ^ Lovelock, James and Allaby, Michael (1984). The Greening of Mars.
9. ^ οἶκος. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project.
10. ^ ποίησις in Liddell and Scott.
11. ^ Fogg, Martyn J. (1995). Terraforming: Engineering Planetary Environments. SAE International, Warrendale, PA.
12. ^ a b Fogg 1995.
13. ^ Fogg, 1996
14. ^ Zubrin, Robert (1 November 1996). "Building a Solid Case". SpaceViews. Archived from the original on 2007-09-11. Retrieved 2006-09-26.
16. ^ Read and Lewis 2004, p.16; Kargel 2004, pp. 185–6.
17. ^ Kargel 2004, 99ff
18. ^ Forget, Costard & Lognonné 2007, pp. 80–1.
19. ^ Forget, Costard & Lognonné 2007, p. 82.
20. ^ Dave Jacqué (2003-09-26). "APS X-rays reveal secrets of Mars' core". Argonne National Laboratory. Retrieved 2009-06-10.
21. ^ Schubert, Turcotte & Olson 2001, p. 692
22. ^ Carr 2007, p. 318
23. ^ a b Solar Wind, 2008
24. ^ Forget, Costard & Lognonné 2007, pp. 80.
25. ^ Faure & Mensing 2007, p. 252.
26. ^ Zubrin, Robert M. & McKay, Christopher P. (1997). Technological Requirements for Terraforming Mars. Journal of the British Interplanetary Society, 50, 83. Accessed 2009-06-09.
27. ^ Brody, Dave (2005). Terraforming: Human Destiny or Hubris?. Ad Astra (National Space Society). Spring 2005. Accessed 2012-09-19.
28. ^ ScienceDaily (2001, Mar. 29). Jupiter Radiation Belts Harsher Than Expected.
29. ^ "Humans on Europa: A Plan for Colonies on the Icy Moon". Retrieved 2006-04-28. [dead link]
30. ^ Taylor, 1992
31. ^ Trouble in the bio bubble[dead link]
32. ^ Biosphere 2
33. ^ Biosphere 2 Members 'aired out'
34. ^ MacNiven 1995
35. ^ Robert Zubrin, The Case for Mars: The Plan to Settle the Red Planet and Why We Must, pp. 248–249, Simon & Schuster/Touchstone, 1996, ISBN 0-684-83550-9
36. ^ Fogg 2000
37. ^ Christopher McKay and Robert Zubrin, "Do Indigenous Martian Bacteria have Precedence over Human Exploration?", pp. 177–182, in On to Mars: Colonizing a New World, Apogee Books Space Series, 2002, ISBN 1-896522-90-4
38. ^ "The Political Economy of Very Large Space Projects". Retrieved 2006-04-28.
39. ^ "China's Moon Quest Has U.S. Lawmakers Seeking New Space Race". Bloomberg. 2006-04-19. Retrieved 2006-04-28.
40. ^ Thompson 2001 p. 108
41. ^ For example, Greg Bear's novel Halo: Cryptum (2011).
External links[edit] | <urn:uuid:85168893-6b02-48d1-a2e1-3086140b8d42> | 3 | 3.375 | 0.1911 | en | 0.911768 | http://en.wikipedia.org/wiki/Terraformed |
I am aware of the usage of "lack thereof", but I was wondering whether it is valid to use "lack of it".
During a conversation with someone I used "lack of it" in a sentence, and she claimed that it is an error and that "lack thereof" should be used instead.
Example sentence:
Do you think that accent (or lack of it) is a critical factor in obtaining a job?
It's not wrong. The two are syntactically identical. – Dmitry Brant Apr 17 '13 at 19:28
'or lack thereof' is the "accepted" (common) way of saying it, but it's not an error to say or write it your way. – Tyler James Young Apr 17 '13 at 19:28
Thank you both! – EyalAr Apr 17 '13 at 19:30
Or lack thereof is indeed the most usual way of saying it; but that's a 'fossil' from legal language, and your version, or lack of it, is much better suited to even the most formal modern discourse. – StoneyB Apr 17 '13 at 21:12
2 Answers

Accepted answer:
"Lack of it" is a more awkward construction than "lack thereof." Though words like "thereof" can seem stuffy or antiquated, they are often the best way to express yourself.
Though one could of course take the view that sometimes it is "better" or more effective to use plain, boring, universally understandable language rather than fancy antiquated verbiage. Both arguments can be made... – Neil Coffey Apr 18 '13 at 19:53
I fail to see how ‘lack of it’ is somehow more awkward than ‘lack thereof’. If anything, it is the opposite to me; and there are contexts in which ‘lack thereof’ is not a possibility at all. – Janus Bahs Jacquet Oct 17 '13 at 18:03
The issue is obscured for me by another issue -- that the sentence itself is a little awkward. Obviously we understand that we're talking about the candidate's accent, but that could be clearer.
Do you think that a candidate's accent is a critical factor in obtaining a job?
If that is present then I think that "(or lack thereof)" becomes a more clear choice. I'd prefer to use commas though:
Do you think that a candidate's accent, or lack thereof, is a critical factor in obtaining a job?
In Italian we can say "Buon lavoro" to someone who is working, and it basically means that we wish him/her the best while working (it can be literally translated as "Good work", but that sounds just wrong). It's like when you say "Good morning" to someone, and it can be roughly translated as "Have a good day at work".
Note: I'm aware of the fact that in English you can say "Good job" but that's usually said after a job is done.
Is there such an idiom in English?
No there's nothing in English. When we want to say something I would agree the most common and natural is "Have a good day at work". Japanese does have a set way to say this though. – hippietrail Dec 1 '13 at 12:28
"Have a good day at work" expresses the concept well enough. I've always used the Italian idiom to encourage someone, especially if they've been having some kind of problem i.e. "I wish you success in your endeavour/task/job/work" – Mari-Lou A Dec 1 '13 at 12:55
Along with "buon appetito" this is a very useful phrase. There is another sense of "buon appetito" that you use to politely acknowledge someone eating, like when you bump into a friend at a restaurant. The English translations would be "as you were" (military) or "carry on" (authoritative) or, more politely, "please, don't let us interrupt [your meal]." Does "buon lavoro" allow the same usage? – Rich Armstrong Jan 13 at 12:04
2 Answers
There is no direct equivalent, just as there is no direct equivalent of bon appétit. In the UK, someone observing someone else working hard might say something like ‘Don’t work too hard, mate’ or even ‘Come on, mate, no slacking’.
you should use subjunctive form "I wish you could do the best"
No. First off, that's not a subjunctive; and secondly, it is not idiomatic English. No one would say that. – Janus Bahs Jacquet Dec 1 '13 at 15:38
And if you did say that to somebody, you might get a smack in the face, because I wish you could is a counter-factual, and clearly implies but you can't, so it seems to be saying that you're not very good at what you're doing. – Colin Fine Dec 1 '13 at 19:57
I think what you meant to say here is "I wish you the best." – Josh Jan 7 '14 at 19:26
Just out of curiosity, how did this double negative come to be?
When I use it, it's often because I want to emphasise the fact that x is not y but is still similar in some way, whereas "like" doesn't necessarily stress the fact that two things aren't the same when stating their... likeness.
The ship's design was not unlike that of a Firefly-class vessel, but it was a lot faster, like the Millennium Falcon.
Captain Reynolds was like Han Solo in a way. A great leader in times of need.
Yes, I'm a Firefly fan.
Sometimes not unlike is used as weasel words. (I want very much to say A is like B, but you know it isn’t; so I say it is not unlike B instead, to avoid a direct contradiction and to invite you to compare them anyway.) Sometimes it is used for no apparent reason; then it just sounds pretentious. Sometimes it’s fine. – Jason Orendorff Apr 14 '11 at 7:07
"Not unlike" is an example of the rhetorical device litotes; they have been a popular subject of discussion and many a question at this StackExchange. Have you read these: "Does not uncommon mean common?" and "Are not uncommon and similar phrases double negatives?..." – Uticensis Apr 14 '11 at 7:08
@Billare: For the thing you highlighted, such expressions should be used only when the writing style requires it, excluding informal or normal conversations. – Alenanno Apr 14 '11 at 9:35
FWIW, George Orwell hated the "not un-" pattern so much that he tried to create a vaccine against it. – Robusto Aug 18 '14 at 19:56
2 Answers

Accepted answer:
"Not unlike" is slightly different from saying "like", much as saying "I love apples" is not the same thing as saying "I don't hate apples." It emphasizes a different degree of likeness. "Not unlike" just means that there exist similarities, while "like" means they are similar.
Two objects can have similarities but not be similar, such as an apple and an orange are both fruits, so I could say "An apple is not unlike an orange in that they are both fruits." However to call an apple similar to an orange is perhaps a bit much.
Also consider that "like" has many meanings and is therefore a little ambiguous. So I notice a tendency in books to prefer "not unlike" in order to make clear that you mean the two things merely have similarities.
+1, though I'm not keen on your apple/orange example. – Charles May 9 '11 at 19:47
how did this double negative come to be?
About 100,000 years ago, during a conversation about antelope, Ug grunted twice to Og, and lo, litotes was invented.
Which is to say that it will have arisen naturally as part of the development of spoken human language at a time before writing had been invented, so no one will really know exactly how it came to be. It just obviously fills a subtle linguistic need to communicate the degree to which a verb applies to a noun.
Disclaimer: my knowledge of these things is not entirely unlike Ug's.
I want to meet this Ug fellow one day. – Nick Bedford Oct 24 '14 at 0:14
When one makes a new acquaintance with somebody in person, you may say "it was nice to meet you", e.g. when you leave. What if you make a new acquaintance over the internet, what do you say when you finish the conversation, business transaction, etc.? You didn't actually meet the person, you only exchanged some emails. One could say "it was nice to talk/do business/etc. with you" but that doesn't confer the fact you made a new acquaintance and it was enjoyable experience for you. Does the word "meet" really work in this case or something else is better?
It could be understood as "nice to virtually meet you." – kiamlaluno Jun 4 '11 at 20:51
Thank you for asking this question; I've often agonized over the correct phrase to use in that first return email. "It was nice to exchange bytes with you?" Ewwwww... – MT_Head Jun 4 '11 at 21:50
2 Answers

Accepted answer:
If you talk with someone on the phone before (or instead of) meeting them in person, it would sound odd to say "nice to meet you." Instead, you would say "It was nice talking with you," and possibly adding that you look forward to meeting them (or hope to meet them, at least) in person.
Similarly, an exchange of written correspondence doesn't constitute a meeting either. It would be laughable to use "Nice to meet you" as the close of a letter.
I personally avoid using "meet" to describe an internet encounter, whether in email or chat, even though my fingers often want to type just that. Such an encounter is a kind of meeting, as @kiamlaluno notes in his comment, but it seems so much less than a real meeting that it feels awkward to call it that. Nevertheless, this is the Internet, and what may hold for one medium may not hold for another. I would say that if you have a lengthy, soul-baring chat session with someone via the Web you might well say it was nice to meet that person, since you did exchange more than just a few words. But that would be a figurative usage, and it could apply to a similarly lengthy phone call.
As a final note, people talk about others they have "met online" all the time. So I think this argues for at least considering online encounters as legitimate "meetings" — and if they can be called meetings, I suppose it would not be too surprising to hear "Nice to meet you" in an email. But I think it still involves more than the exchange of a few words.
I'm surprised I'm glad to make the acquaintance of you wasn't mentioned. Or is it simply wrong to say that? – Philoto Jun 5 '11 at 6:11
@Philoto: You would say "I'm glad to make your acquaintance," not "the acquaintance of you." – Robusto Jan 21 '14 at 1:36
It depends on the formality of the exchange or, more specifically, the necessary level of formality in language that your relationship requires.
If it's an informal relationship, you can be a little more colourful with your use of the English language and perhaps put quotes around "meet" to indicate that you are using the term metaphorically. You are certainly not the first person to worry about using meet in this way and, I believe, the meaning would be understood.
If the relationship is more formal, then the appropriate phrasing would be dictated by the purpose and content of your exchange. In this instance "It was nice chatting/doing business with you" would be more appropriate.
Would it be correct to use "chronologically diverse" to describe a group of people whose ages range from very young to very old? If not, what would be a better phrase to use?
1 Answer

Accepted answer:
The most common phrase to use here is simply:
a group of all ages
Or "of varying ages". – Mechanical snail May 27 '12 at 1:57
@Mechanicalsnail- Yes. Good. – Jim May 27 '12 at 1:58
EPA vs Texas
Texas won a fight against the Environmental Protection Agency (EPA) this past week, as the EPA allowed Texas significantly more flexibility when dealing with state permits concerning pollution sources, giving the state more control over the environmental impacts that occur within its borders. It was a hard-fought win for Texas, which has been working for years to regain the control that the EPA stripped away.
The compromise on Texas's clean air plan can serve as an example for other states hoping to regain some of their rights concerning environmental regulation. By allowing states to implement locally tailored plans and permits, plants will no longer fear being shut down due to overly strict federal regulation. Less regulation is for the better, as Texas currently houses 832 drilling rigs, which accounts for 47% of all US oil rigs and 25% of the entire world's!
[Chart: number of drilling rigs]
This has boosted oil and gas severance tax revenue to an amazing $900 million, which has funded several projects. The surplus will be used to fund the State Water Plan, which was created in 1997 but never received funding until now. In total, the legislature created two funds:
• The State Water Implementation Fund (SWIFT) will contain $2.5 billion to fund projects in the State Water Plan.
• The State Water Implementation Revenue Fund of Texas (SWIRFT) will contain $3.5 billion for road, port and rail infrastructure projects.
In a state that led national job growth in 2013, is a powerhouse in the energy industry, and increased environmental standards without EPA involvement, even less federal involvement can only be for the better.
Comments (4)
1. Jeff M says:
Good news for Texas, a state that has been a key participant in the economic development of the country. This can only mean that the Texas economy will probably keep flourishing and help pull the country out of its economic stagnation. There is a reason why many corporations come to Texas, and the local government keeps performing to entice more to come.
2. Blake P says:
Times of economic prosperity are the perfect scenario for State development. If the income from oil and gas is being wisely invested, that can only translate into more economic growth. If resources are being directed to fund key programs such as the SWIFT, the SWIRFT and the improvement of highways, the quality of life will improve substantially. Great times are ahead of us; let's start building a brighter future.
3. Luigi D. says:
I hope that the new flexibility in environmental standards doesn’t lead to a substantial harmful impact of the ecosystem.
4. Justin C. says:
The federal government has abandoned Texas. Few things have been done in Washington to benefit Texas, one of the economically strongest states in the country. It seems as if Washington sees Austin as a rival. By threatening key projects, such as the Keystone Pipeline, that would substantially benefit the state and the country as a whole, the federal government attempts to stall Texan progress. Having more independence allows Texas to embark on different ventures that benefit the state without facing major roadblocks from Washington.
Top Definition
Small hidden valley west of the Blue Mountains in NSW Australia. Famous for nothing in particular except a mining past. Locals are creatures of habit; they wear trackidacks for days on end and dine at 'Maccas'. Their speech is slow (as is their driving) and consists mainly of old 50's Aussie slang. Bring a high-powered microscope and you may catch someone working. Most locals are found waiting for benefits at Centrelink.
Centrelink Lithgow speak:
* meet ya at the warehouse, we'll finally get that tracksuit pants yis afta!
* shazza, wazza and wazza are comin ova for a barbie, wanna come cobba?
* where is my tracksuit pants?!
* lets go to maccas for dinner, i wear my tracksuit pants!
* i'm wearing two sizes smaller, oneday i'll fit into it
* wheres me concession card? I hope it gets me a discount at maccas!
* six large fries, 2 cheeseburgers, 5 thick shakes.. and for my husband..
* lets drive down the main street doing 20k's, i'll bring my trackies!
By DJS1 April 24, 2006
A cellular automaton is really nothing more than an array or a transitive graph of identical finite state machines (aka DFAs), feeding on themselves! (Or, if you want to add randomness, you also add randomness to your machines.)
The possible states of each cell are the states of its finite state machine. At every time step, each cell receives as input the states of its neighbours (there are a finite number of neighbours, and each can be in one of a finite number of states, so this input comes from a finite alphabet). The combination of the cell's current state and its input (a finite amount of information) yields the cell's state in the next time step.
For instance, Conway's Game of Life consists of a 2D array of cells. Each cell has 2 possible states (dead or alive). There are 8 neighbours for each cell, so the input is from an alphabet of size 28=256, giving the states of all cells. Of course the transition function is particularly simple: A dead cell turns alive if it has exactly 3 live neighbours, i.e. it receives one of the 56 possible inputs with exactly 3 1's. A live cell stays alive if it has exactly 2 or 3 live neighbours, i.e. it receives either one of these 56 inputs or one of the 28 inputs with exactly 2 1's.
Note that a Turing machine can easily be modeled in this fashion, essentially by arraying machines along the number line Z. Each machine either represents a tape cell or a tape cell occupied by the Turing machine's head. The rules of the Turing machine are easily adapted to the rules for the 1D cellular automaton. So it's not too surprising that Conway found a cellular automaton (Life) capable of computing any computable function; the surprise is that it's such a simple automaton!
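The Life transition rule just described is small enough to sketch directly. Here is a minimal Python illustration (my own sketch, not part of the original write-up) in which every cell is an identical two-state machine whose input at each time step is the states of its eight neighbours:

```python
def life_step(grid):
    """Advance a 2D list of 0/1 cells by one time step (dead boundary)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Input to this cell's finite state machine: its 8 neighbours.
            live = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            # Transition function: birth on exactly 3, survival on 2 or 3.
            nxt[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return nxt

# A "blinker": three live cells in a row oscillate with period 2.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Applying `life_step` twice returns the original blinker, the familiar period-2 oscillation.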
Viking name generator
This viking name generator will generate 10 male or 10 female names, depending on your choice. Both the male and female versions have their own first and last names.

Vikings didn't have last names in the same way we do; instead, they would usually refer to people as "son of" or "daughter of" someone, hence why our last names all end in either son or dottir.

Note that viking names sometimes have unusual characters, but for simplicity's sake we only used the standard alphabet, as many games only allow those characters.
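A patronymic generator of this sort could be sketched like so; note that the name lists and functions below are purely my own hypothetical illustration, not the site's actual data or code:

```python
import random

# Hypothetical sample data for illustration only.
MALE = ["Bjorn", "Leif", "Ragnar"]
FEMALE = ["Astrid", "Freya", "Sigrid"]
PARENTS = ["Erik", "Harald", "Sven"]

def viking_name(gender, rng=random):
    """Compose a first name plus a patronymic: 'son of' / 'daughter of'
    collapsed into a -son / -dottir suffix on the parent's name."""
    first = rng.choice(MALE if gender == "male" else FEMALE)
    suffix = "son" if gender == "male" else "dottir"
    return f"{first} {rng.choice(PARENTS)}{suffix}"

def generate(gender, n=10, rng=random):
    """Produce n names of the requested gender, like the generator page."""
    return [viking_name(gender, rng) for _ in range(n)]

print(generate("male", 3))
```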
Washington Post
July 22, 1999
Pg. 23
Why We Need The F-22
By Richard P. Hallion
There was some irony in the House Appropriations Committee's canceling production funding last week for the Air Force's next generation fighter -- the Lockheed-Martin F-22 Raptor. The action came only weeks after America's military forces proved -- for the third time since 1990 -- that exploiting dominant aerospace power is the irreplaceable keystone of our post-Cold War strategy for successful quick-response crisis intervention.
No issue has been more misunderstood than the F-22. The plane links radar-evading stealth with the ability to cruise at supersonic speeds and to exploit and display data from various sources to better inform the pilot about threats and opportunities.
Critics charge that it is unnecessary. They see it as a relic of Cold War thinking, a plane that is too expensive and too complex for the kind of foes America is likely to fight. Instead, they argue, American pilots should make do with a modified airplane still on the drawing boards (the proposed Joint Strike Fighter, intended primarily for ground attack), or upgrade the existing F-15 and F-16, both already more than 25 years old.
Right now a range of advanced fighter designs are flying around the world -- for example, the newer Sukhoi Flankers, the Eurofighter, the Gripen and the Rafale -- that already fly as well or better than the finest contemporary American fighter, the non-stealthy F-15. Complementing these are a slew of advanced surface-to-air and air-to-air missiles that further erode the traditional qualitative advantage the United States has enjoyed over potential foes.
Control of the air is at the heart of the F-22 debate. It reflects a difference between those who believe mere air superiority is sufficient and those who believe one must have air supremacy, even air dominance. The differences are not unimportant. Mere superiority keeps one in the fight but rarely guarantees victory. One who recognized this was Dwight Eisenhower; in 1944, while scanning the vast supplies and troops stretched across the beaches of Normandy, he told his son, "If I didn't have air supremacy, I wouldn't be here."
Over Korea, American fighter pilots shot down 10 MiGs for every friendly plane lost, and so dominated the air war that U.N. ground forces conducted their operations with essentially no fear of enemy air attacks. But after Korea we took air supremacy for granted, and Vietnam showed the sorry results. Over North Vietnam, American airmen barely had air superiority, with a scant 2 to 1 victory-loss ratio. Shocked, America rebuilt its air strength across all the services to reflect the need to dominate, not merely survive. Today the F-22 is intended to ensure that same kind of dominance into the new millennium.
Many of the same arguments made against the F-22 were made in the 1970s against the F-14, F-15, F-16 and F-18: They were too advanced, too complex, too costly, etc. The wisdom of producing them has since been proven repeatedly over the Middle East and the Balkans.
Seeking air superiority should never be what we choose to live with. Rather, air supremacy should be the minimum we seek, and air dominance our desired goal. Control of the air is fragile and can be lost from a variety of causes, including poor doctrine and tactics, deficient training, poor strategy and rules of engagement. But worst of all, it can be lost through poor aircraft.
It takes more than a decade to develop a fighter, and it is imperative we make the right choice. The hallmarks of a dominant fighter are the ability to evade and minimize detection (stealth), transit threat areas quickly (supercruise) and exploit information warfare (sensor fusion) to react more quickly than one's foes. Only one aircraft contemplated for service today can do that: the F-22.
Critics of the F-22 make much of its cost, but that cost -- rigorously managed and within the historical trends of fighter aircraft development -- buys capabilities that ensure the survival of those who have volunteered to put themselves at risk in their nation's service. The F-22 offers the potential for intimidating opponents so that they do not choose to test our resolve in war.
Failure to procure the F-22 would mark the first time since the Second World War that the United States has consciously chosen to send its soldiers, sailors and airmen into harm's way while knowingly conceding the lead in modern fighter development to a variety of foreign nations that may sell their products on the world's arms market. America needs the F-22, and needs it now.
The writer is the Air Force historian. | <urn:uuid:51d3edba-25b7-4cd6-89a4-3df60ea09ab9> | 2 | 1.921875 | 0.102947 | en | 0.953299 | http://fas.org/man/dod-101/sys/ac/docs/e19990722need.htm |
Frequently Asked Questions
What is Forex?
Forex, or the Foreign Exchange, is the international market for buying and selling currencies. Forex is the largest financial market in the world, with over 2 trillion dollars traded every day. Millions of traders have begun trading online, and the market continues to grow. Forex traders buy and sell currency pairs, like the USDEUR (dollar and euro). When traders buy/sell these pairs, they are actually buying one currency while selling the other at the same time.
How do people trade Forex?
Previously this market was only opened to banks, hedge funds, multinational corporations and high net worth individuals. Now, anyone with an internet connection can trade Forex, utilizing online brokerages that cater to the retail market.
Why do people trade Forex?
While banks and multinational corporations utilize the Forex market to conduct international business, retail traders attempt to make money from currency movements.
What is a pip?
Pip literally stands for “percentage in point” and it represents the smallest change in price that a currency pair can make, usually 1/100 of a penny. If the EURUSD currency pair is quoted at 1.4345, the fourth numeral after the decimal point—in this case the five—represents one pip.
What is a pip spread?
The pip spread is the difference in pips between the buy (bid) price, and the sell (ask) price of currency pairs, and is the means by which brokerages make money. The pip spread of a currency is expressed like this: EURUSD 1.4343 / 1.4346.
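As a quick illustration (the function and names here are my own, not part of any broker's API), this is how a pip difference and a bid/ask spread can be computed for a pair quoted to four decimal places:

```python
PIP = 0.0001  # one pip for pairs quoted to four decimal places

def pips_between(price_a, price_b, pip_size=PIP):
    """Distance from price_a to price_b, expressed in whole pips."""
    return round((price_b - price_a) / pip_size)

# The EURUSD quote used in the text: bid 1.4343 / ask 1.4346.
bid, ask = 1.4343, 1.4346
print(pips_between(bid, ask))  # 3, i.e. a 3-pip spread
```

The rounding step absorbs the tiny floating-point error that comes from representing four-decimal prices in binary.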
What is leverage?
Leverage is essentially the use of borrowed money for investment. In the Forex market, your account deposit is considered collateral on borrowed capital. Because there is almost never a shortage of liquidity in the Forex market, brokerages are able to provide high leverage, confident that positions can be closed if necessary so that the broker never has to suffer any losses. This means that your position will be automatically closed if your margin on a trade falls to a certain, predetermined level.
Leverage can be used by traders to significantly increase potential return on their investments. Usually, traders are able to trade with leverages of 50:1, 100:1, 200:1 or higher. If a trader uses a leverage of 100:1, it means that he/she is able to buy $100,000 of currency on the Foreign Exchange for just $1000. While there is the potential for substantial returns using such a high leverage, the risk is also amplified.
For example, let’s say you want to buy USD/EUR at 1.4342 / 1.4345. You have $5000 in your account and you decide to buy 5 lots (each lot is $100,000). You take your $5,000 and buy a 500,000 USD/EUR position at 1.4345.
Two days later, the USD/EUR is trading at 1.4400/1.4403. You decide to close your position at 1.4400 for a gain of 55 pips. On a 500,000 position, 55 pips make you a profit of $2,750. In two days, you made over 50% on your investment.
Now, just as leverage can work for the trader, it can work against the trader. If the dollar had lost 55 pips in the above example, the trader would have lost over half of his initial deposit.
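The arithmetic of that example can be sketched as follows (an illustrative back-of-the-envelope calculation only; real brokers also deduct the spread and may apply rounding):

```python
def position_profit(units, pip_gain, pip_size=0.0001):
    """Profit (in quote currency) when a position of `units` moves `pip_gain` pips."""
    return units * pip_gain * pip_size

deposit = 5000                  # the trader's own capital
leverage = 100                  # 100:1 leverage
position = deposit * leverage   # 500,000 units, i.e. five standard lots

profit = position_profit(position, 55)  # a 55-pip move in the trader's favour
print(profit, profit / deposit)         # 2750.0 0.55  (a 55% return on capital)

# The same 55-pip move against the trader wipes out over half the deposit:
print(position_profit(position, -55))   # -2750.0
```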
FXCASH Affiliate Marketing
What is FXCASH?
FXCASH is a Forex affiliate program providing affiliates easy and simplified access to numerous Forex related products and services.
What does it mean to be an affiliate?
As an affiliate, you will be able to advertise FXCASH clients and gain revenue from the traffic you refer. If you know potential traders, have a finance related website, a blog or a mailing list of people interested in finance related products, you can become an affiliate and monetize your inventory.
Why should I become an FXCASH affiliate?
Unlike other existing affiliate programs, FXCASH offers you access to a selection of different forex-related products and services—from training schools, to Forex signals, to brokerages—designed to appeal to every type of potential trader.
Best of all, cross promotion among FXCASH clients serves to triangulate traffic in order to maximize your return on investment (ROI).
Bottom line: more interesting, quality products for your community, and a continuous, multi-source revenue stream for you.
How is cross promotion good for me?
Traffic sent to any one of our clients will permanently be tracked to your account. This means that you will receive additional revenue for each and every product your traffic eventually utilizes.
Who is FXCASH?
The FXCASH affiliate team is comprised of affiliate marketing professionals from a variety of industries. Together, the FXCASH team has been responsible for paying out millions of dollars in affiliate commissions annually. Our experience in every major international market provides you with the opportunity to enter the Forex marketing business, regardless of language or location. With FXCASH, you work with reputable professionals that have an in-depth understanding of online marketing.
What is a Forex broker?
A Forex brokerage offers investors the opportunity to trade Forex in exchange for a fee or commission.
What is Forex training?
Forex training sites provide traders with Forex trading education. Training programs offer one-on-one mentoring, instructional webinars, and access to immense libraries of Forex-related information and/or live trading sessions.
What are trading signals?
Trading signals are Buy and Sell recommendations, news alerts and market analysis, usually from an independent provider. Providers typically charge a monthly fee for their services.
How can I promote FXCASH products?
There are many ways!
Forex/finance-related websites: Simply incorporate our marketing material onto your site. All traffic generated from our marketing material will be tracked to you.
Search Engine Marketing (SEM): The various search engines provide a great, easy way to attract targeted Forex traffic. We will be happy to supply you with landing pages, keywords, market data or anything else you need to run an SEM campaign.
Offline advertising: Offline marketing is an important aspect of our business. If you are interested in advertising offline, please contact your account manager to determine which type of campaign would work best for you.
Direct Marketing: Send your finance-related mailing list information about FXCASH products, with the ability to offer exclusive promotions!
Sub-affiliates: Have a friend with a website? Refer him to FXCASH, and earn a percentage of what he earns!
What is a sub-affiliate?
Referring affiliates to the FXCASH affiliate program entitles you to a percentage of the net revenue they generate! The affiliates you refer are known as your “sub-affiliates”.
What do I use to promote FXCASH clients?
FXCASH will provide you with extensive marketing tools, located within the FXCASH affiliate program, to help you promote our clients. These marketing tools have been produced and tested by online marketing experts.
Are there any costs related to signing-up?
Signing up is completely free.
Who should I contact at FXCASH with my questions?
After you sign up, you will be assigned a personal account manager who will assist you in determining the most efficient way to turn your traffic into profit. Your account manager acts as your counterpart at FXCASH, responsible for handling all aspects of your account and addressing all your needs. We have account managers who speak English, French, Italian, Hebrew and Russian.
How do I access your support service?
Write to our support team, and a member of our staff will get back to you within one business day. Of course, all support questions should be directed to your account manager after we begin working together.
Do I need to open an account for each one of my websites?
No. Our affiliate program enables you to generate different codes from within your account for each particular website.
Where can I find materials to promote FXCASH clients?
After logging in to the FXCASH website, select “Tools” from the menu bar and follow the prompts to the specific marketing material you are looking for. If there is something you want but can’t find, contact your account manager directly.
What is CPA?
CPA stands for cost per acquisition. When working on a CPA arrangement, you will be compensated for each client you send who makes a deposit or purchases a service.
What is revenue share?
Revenue share offers affiliates a lifetime share of the revenue generated by traffic sent to the different services.
How do signal providers generate revenue?
Signal providers generally charge a monthly fee for their service.
What type of deals can I get working with an FXCASH signal provider?
Affiliates earn a lifetime percentage of the revenue generated by each client sent to our signal providers.
How do Training sites generate revenue?
Training sites charge either a one-time fee for products or a periodical subscription fee.
What type of deals can I get working with an FXCASH broker?
Our affiliates may choose between CPA, CPL, revenue share, or a hybrid of the three.
How do Forex brokers generate revenue?
Broker compensation is included in the pip spread, which is factored in to every trade made by traders. Pip spreads are generally 3/100 of a penny for every dollar traded. While this is a very small amount of money, it adds up when trading large positions. For instance, when buying/selling a position of $100,000 (a position which, as explained above, costs the trader only $500-$1000) the broker compensation on a 3 pip spread is $30. Serious traders often buy and sell multiple lots a day.
Forex traders do not actually invest the entire $100,000 if they buy or sell a lot. Instead, they must put down only a small percentage of the lot—usually 1-2%, and sometimes as low as .5%—and the brokerage lends the trader the rest of the money. For example, if a trader uses a 1% margin, he must put up only $1000 to buy $100,000 on the market—1/100 of the position. This concept is also referred to as leverage. In the above example, the trader is using a leverage of 100:1.
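The margin and spread arithmetic in the two paragraphs above can be sketched the same way; all numbers are the text's illustrative figures, not live market data.

```python
# Margin requirement and broker spread compensation, as described above.
PIP = 0.0001

def margin_required(position_size, margin_rate):
    """Capital the trader must post to open the position."""
    return position_size * margin_rate

def spread_cost(position_size, spread_pips):
    """Broker compensation built into the bid/ask spread."""
    return position_size * spread_pips * PIP

lot = 100_000
print(f"${margin_required(lot, 0.01):,.0f}")   # $1,000 at 1% margin (100:1)
print(f"${margin_required(lot, 0.005):,.0f}")  # $500 at 0.5% margin (200:1)
print(f"${spread_cost(lot, 3):,.0f}")          # $30 on a 3-pip spread
```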
Is there an option to negotiate different types of deals?
Every quarter affiliate accounts are reviewed. Based on the results, affiliate managers are able to offer special revenue schemes to eligible affiliates.
What is net revenue?
Net revenue is the total income generated from clients.
Technical Questions
How are payments processed?
Statements are sent to affiliates via email between the 15th and the 20th of each month. Upon receipt of the statement, affiliates have 72 hours to notify their account manager concerning any payment issues. If no contact is made, FXCASH assumes the affiliate accepts the statement, and processes the payment as shown. Payment is made over the course of the following week.
Is there a minimum amount that I need to earn in order to receive my commission?
$250 is the minimum amount FXCASH will pay affiliates per month. Affiliate balances of less than $250 will be withheld until the balance exceeds $250 at the end of a month.
What are the processing fees associated with my commission payments?
There is a $30 wire transfer fee.
Where can I update my contact information?
You may update all contact information from the “Account info” tab located on the main menu after you log in to the site.
How can I be sure that my traffic is properly tracked?
FXCASH’s affiliate software is licensed from DirectTrack, a subsidiary of Digital River, a publicly traded company (NASDAQ: DRIV). FXCASH chose to work with DirectTrack because of its unimpeachable integrity and the superior quality of its product. DirectTrack has perfected its technology over the last 10 years, based on industry advancement and client recommendation. With DirectTrack, you can be comfortable knowing that the traffic you send will be tracked by the best in the industry.
How do I know how much money I'm making?
We provide online statistics for you 24 hours a day. Log in with your username and password to our secure affiliate stats page to see how much you’ve earned along with other relevant stats.
What information is displayed on my stats page?
FXCASH displays very detailed reporting for affiliates in real time. You will be able to view the exact number of impressions, clicks, downloads, signups for demo and real accounts, and of course, your share of the profit. You can view stats by specific time frame, by service, by campaign or by creative. These stats will enable you to judge whether or not certain marketing strategies and campaigns work for you.
Tuesday, September 10, 2013
The effect of different scanning resolutions for genealogical archive photos
I mentioned in a previous blog post that I would scan the same photograph at various resolutions to determine the quality of the scanner and to see the difference in the image. One concern, in the past, was the size of the resultant file. At high resolutions, the file size tended to be very large. Since memory storage space was at a premium, we were concerned about the size of each file. Today, storage space is no longer an important consideration. We now have huge amounts of storage available at a reasonable price. So, for my own purposes, I am more concerned with quality.
The current and most common archive standard is that an image be scanned as a TIFF file type at a resolution of 300 dpi. I am acutely aware that the commonly used designation of "dots per inch" or dpi is not used by serious archivists. The current standard is based on the number of pixels per line, and high standards for resolution would demand two or more pixels for the narrowest line segment of the image. But, since most of the scanners sold in the consumer marketplace still give their resolution in dpi, I will use that term in this post.
For reference, I am using a Canon Canoscan 8800F flatbed scanner for these examples. I am using a software program called VueScan. Bear in mind that when I upload the images to the Internet, they are uniformly converted into .JPG files so viewing the original TIFF files is not possible in this context. But having done this same experiment many times in the past, I can say that the results you see in the following examples are good indications of the quality of the images you get on the same type of equipment in your own home.
By the way, I always scan in color, even with black and white images. The reason is that the scan preserves, as much as possible, the full range of color in the original. Even though an old photograph may appear to be "black and white" it is seldom simply those two colors.
Here is the initial scan of a high-contrast, very old image, scanned at 300 dpi in 24-bit RGB with no other editing parameters changed, including the color balance, which was set at neutral.
The only parameter I will change in this post is the dpi setting. You may need to click on an image to see any of the detail. I am going to use Photoshop Elements 10 for these examples. I could use either Adobe Lightroom or Photoshop, but I thought I would use the less expensive alternative.
Here is a closeup of the image showing a portion of the face at 100% magnification and 300 dpi:
Just to show the detail I will show an image at a higher magnification. This time the magnification is 400%:
On the screen there is a small amount of pixellation, that is, I can begin to see the individual pixel elements of the image.
Now, I will go back and do the same image at 100 dpi:
Hmm. The image looks exactly the same. But if you were here with me, you would have seen that the image has changed in physical size. But here is the real difference, first the image at 100% magnification:
Now you can see that the physical size of the image changes. If you click on the smaller image you will also see less detail. But the difference becomes apparent at higher magnification. Here is the same 100 dpi image at 400%:
If you look closely, you will see that most of the detail of the beard has been lost to pixellation. Now let's see what happens when you go to a higher resolution. Of course the physical size of the file and the image will increase. Here is the same photo scanned at 600 dpi.
In the smaller size version here in the post, it looks about the same as all the others, but on my computer, the image is much larger. Here is the same area of detail at 100%:
You can tell the difference between the 300 dpi and the 600 dpi photo if you look carefully. But the real difference is shown by the file size. The 100 dpi file is 1.1 MB. The 300 dpi scan is 10.2 MB. The 600 dpi scan is 40.6 MB. The question is whether the increased resolution is worth the much larger file size. Remember that the images of the photos all look about the same on the screen.
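For reference, the reported sizes are consistent with an uncompressed 24-bit scan of a photo roughly 5 x 7.5 inches. Those print dimensions are an assumption (the post does not state them), but they reproduce the quoted figures almost exactly and show that file size grows with the square of the resolution.

```python
# Estimated uncompressed 24-bit (3 bytes/pixel) scan size in megabytes.
# The 5 x 7.5 inch print size is an assumption, not taken from the post.

def scan_megabytes(width_in, height_in, dpi, bytes_per_pixel=3):
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bytes_per_pixel / 1_000_000

for dpi in (100, 300, 600):
    print(f"{dpi} dpi -> {scan_megabytes(5, 7.5, dpi)} MB")
# 100 dpi -> 1.125 MB
# 300 dpi -> 10.125 MB
# 600 dpi -> 40.5 MB
# Size scales with dpi squared: 9x from 100 to 300, 4x from 300 to 600.
```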
Now let's look at the 600 dpi photo at 400% magnification:
If you click on this detailed image, you will see that the image appears fuzzy. There is a limit to the amount of detail any scan can capture from a photograph, and that is the limit of the detail in the original. When you get right down to it, the 600 dpi image really adds nothing to the scan except the size of the file and the size of the resulting photo. If you are going to make a huge poster of this photo, there is some reason to use the higher resolution, but if you are going to reproduce the photo at 100%, there is almost no difference between the 300 dpi scan and the 600 dpi scan.
You may wish to go through this process with your own images using a photo editing program such as Adobe Photoshop Elements. You may end up scanning at the maximum resolution of your scanner every time. This is a value choice you have to make. The file sizes will increase dramatically and eventually, the differences in the resolution of the detail in the images will stay about the same at 100%.
More on this subject later.
1. However, as the resolution of monitors increases, so does the requirement for higher-resolution scans.
When I first started scanning in 1995 there was no benefit in scanning at a higher resolution than 150 DPI. That has changed over the intervening years as equipment has improved.
If you are scanning as an archive for the future, scan at the highest optical resolution (the highest resolution possible without interpolation) available for your scanner.
Choose a lower resolution, and I would suggest you will be revisiting your work in a few years' time.
2. Also, on many scanners, it takes physically longer to scan at higher resolution. | <urn:uuid:4794895c-4055-4db3-94a1-69c7755e4302> | 2 | 2.46875 | 0.029274 | en | 0.950111 | http://genealogysstar.blogspot.com/2013/09/the-effect-of-different-scanning.html?showComment=1378836117811 |
Wednesday, September 3, 2008
What the Violence Against Protesters at the Convention Really Means
A classified FBI intelligence memorandum gives police detailed instructions on how to target and monitor lawful political demonstrations under the rubric of fighting terrorism. And the Joint Terrorism Task Force was involved in infiltrating, tracking and disrupting every-day Americans who disagree with the current administration's policies.
While the ACLU calls such tactics "a return to the days of J. Edgar Hoover's spying tactics", that is not an accurate description.
While Hoover's FBI had its "enemies list", and carried out numerous dirty tricks including Cointelpro, the current governmental actions are a lot worse.
For example, according to a law school professor, the Military Commissions Act of 2006 has the following consequences:
"Anyone who donates money to a charity that turns up on Bush's list of 'terrorist' organizations, or who speaks out against the government's policies could be declared an 'unlawful enemy combatant' and imprisoned indefinitely. That includes American citizens."
According to the New York Times:
And according to a Yale law professor, "The [torture] legislation....authorizes the president to seize American citizens as enemy combatants, even if they have never left the United States. And once thrown into military prison, they cannot expect a trial by their peers or any other of the normal protections of the Bill of Rights."
After they disappear into the black hole of enemy combatant status, they may, of course, also be tortured.
See also this FBI memo showing that peace protesters are being labeled as "terrorists" (and see this).
Is that why mass arrest facilities were set up for the Democratic National Convention, and signs on the walls of the warehouse read "Warning! Electric stun devices used in this facility" (and see this)?
Remember, the Department of Homeland Security - instead of protecting vulnerable targets from alleged terror threats - has instead randomly made up lists which include kangaroo centers, petting zoos and ice cream parlors as high-priority terrorist threats. The fact that they are now targeting reporters, children, little old ladies and skinny vegetarians is just more proof that the "war on terror" has nothing to do with terror, and everything to do with grabbing power and stifling dissent.
Pollution Might Shrink Your Penis, Say Otters
Otter dicks are shrinking thanks to pollution, and you should definitely freak out, because the trend might affect humans, too.
A new study from the Cardiff University Otter Project and the Chemicals, Health, and Environmental Monitoring Trust (CHEM) found that otter penis bones are getting smaller. What researchers haven't yet determined is whether endocrine disrupting chemicals (EDCs) are to blame. Otters have different reproductive physiology, but EDCs can also lead to human babies with undescended testicles, sex organ deformities and low sperm count.
The connection requires further investigation, but the study notes that humans and animals are exposed to many of the same chemicals on a daily basis. According to Dr. Eleanor Kean from the Cardiff University, the otter is a very good indicator of the health of the environment in the UK, and in the 1970s, the presence of pollutants caused a giant crash in the aquatic animal's population. Many of these chemicals have been banned, but others still exist and aren't monitored. Too bad for otters. This is an alarming find, but you can rest a little easier knowing you already have a penis. [Daily Mail]
Image credit: Shutterstock/S.Cooper Digital | <urn:uuid:b625a5b6-3142-4216-a712-534794e43059> | 2 | 2.421875 | 0.127456 | en | 0.942513 | http://gizmodo.com/5986690/pollution-might-shrink-your-penis-say-otters?tag=science |
Can Bread Give You Herpes?
November 5th, 2010 by Carl Lowe
Gluten in bread can wreak havoc on the body. (AP Photo/S Ilic)
If you’re sensitive to gluten, a protein found in wheat, and foods made from wheat, it can make you more susceptible to herpes. Herpes, a virus that forms blisters on the skin, mouth and genitals, causes what are called cold sores or fever blisters. It is highly contagious and may keep coming back, causing repetitive infections.
How To Make Ice Cream in a Blender
Ice cream is very popular and has captured the hearts of many people, especially children. It is simply a favorite dessert for many. Over the years, many flavours of ice cream have appeared on the market, and most people buy ice cream on the basis of flavour and ingredients. Basically, ice cream is made from milk, sugar, cream and flavourings such as fruit puree or vanilla. You can purchase ice cream at the store or make your own using a smoothie blender at home.
Making your own ice cream at home with a blender can be lots of fun, and you have the option to choose whatever flavour you want. Using your favorite fruits and vegetables for a healthy homemade ice cream is one great idea. If you want to know how to make ice cream using your smoothie blender at home, make sure you have all the ingredients needed to achieve the flavour you want. Before making your ice cream, prepare a checklist of all the ingredients you need, such as a pack of your favorite fruits (bananas, strawberries, etc.). Milk, whipping cream, sugar and flavourings should also be on the list.
The best thing about homemade ice cream is that you can change the ingredients depending on your taste. When ingredients are completed and you are now ready to start making your ice cream using your blender, there are simple steps to follow.
Simply put the thick ingredients, such as frozen fruits, milk and sugar, in the blender first. You can use the different speeds on your blender to achieve a creamy mixture. After mixing the ingredients at high speed, switch to a lower speed to control the blending and make the mixture smooth. Ice cream is generally cold and firm; you can achieve this by simply placing the mixture in a freezer for half an hour or more.
Your homemade ice cream is now ready to serve. Enjoy it with your friends and loved ones. Making homemade ice cream with your smoothie blender is exciting and fulfilling, not only for the ice cream itself but also because you can vary the ingredients whenever you would like another flavour.
Friday, November 19, 2010
Natural Factories
Nanotechnology is the manipulation or self-assembly of individual atoms, molecules, or molecular clusters into structures to create materials and devices with new or vastly different properties. This technology can work from the top down (which means reducing the size of the smallest structures to the nanoscale e.g. photonics applications in nanoelectronics) or the bottom up (which involves manipulating individual atoms and molecules into nanostructures and more closely resembles chemistry or biology).
The word “nano” means 10^-9, or one billionth of something. It is generally used when referring to materials 0.1 to 100 nanometres in size; however, it is also inherent that these materials should display properties different from those of bulk (micrometric and larger) materials as a result of their size. These differences include physical strength, chemical reactivity, electrical conductance, magnetism, and optical effects.
Nanoparticles can be synthesized using a variety of methods: physical, chemical and biological. Development of reliable and eco-friendly processes for the synthesis of nanoparticles is an important step in the application of nanotechnology. One option for achieving this objective is to use ‘natural factories’ such as bacteria, fungi, algae, and plants, as biomass provides both the reducing agents and the capping agents required for nanoparticle synthesis.
Three Mile Island Memories
Posted by Soulskill
from the if-it-ain't-broke,-send-it-through-congress dept.
theodp writes "Thirty years after the partial nuclear core meltdown at Three Mile Island, Robert Cringely describes the terrible TMI user interface, blaming a confluence of bad design decisions — some made by Congress — for making the accident vastly worse. While computers could be used to monitor the reactor, US law prohibited using computers to directly control nuclear power plants — men would do that. So, when the (one) computer noticed a problem, it would set off audible and visual alarms, and send a problem description to a line printer. Simple, except the computer noticed 700 things wrong in the first few minutes of the TMI accident, causing the one audible alarm to ring continuously until it was shut off as useless. The one visual alarm blinked for days, indicating nothing useful. And the print queue was quickly flooded with 700 error reports followed by thousands of updates and corrections, making it almost instantly hours behind. The operators had to guess at what the problem was."
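A back-of-the-envelope sketch shows why the printout fell hours behind. The printer throughput and the number of follow-up updates below are assumptions for illustration; only the 700 initial alarms come from the summary.

```python
# Why a single line printer falls hours behind during an alarm flood.
# Printer speed and update count are illustrative assumptions.

def backlog_minutes(queued_lines, lines_per_minute):
    """How far behind real time the printout runs."""
    return queued_lines / lines_per_minute

initial_alarms = 700       # from the summary
follow_ups = 10_000        # assumed updates and corrections
printer_speed = 15         # assumed lines per minute

lag = backlog_minutes(initial_alarms + follow_ups, printer_speed)
print(f"printout runs ~{lag / 60:.1f} hours behind")  # ~11.9 hours
```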
• by marco.antonio.costa (937534) on Saturday April 04, 2009 @12:23PM (#27458605)
Obama's 'new regulatory framework for the 20th century' crowd: Choke on that please.
• by tomhudson (43916) <> on Saturday April 04, 2009 @12:28PM (#27458667) Journal
So the problem with Three Mile Island (TMI) was Too Much Information (TMI). But I didn't read the article, as that would have been TMI.
• Re:Three-Mile Island (Score:3, Interesting)
by King_TJ (85913) on Saturday April 04, 2009 @12:48PM (#27458833) Journal
Yep ... and as I think I posted once before in another Slashdot topic, I actually work with a guy who used to be an engineer at the firm that was ordered to make some piping for the Three Mile Island reactor, on a "rush" basis, when the problems first started there.
He claims he spoke with people at the reactor site, asking them "How could something like this happen in the first place?" and was taken off to the side, and told that it would take a very specific sequence of adjustments to a number of valves to cause what happened. He replied, "Well, that doesn't sound very probable that could happen by accident?" He was then told that, "Yes, although it COULD theoretically happen, it seems HIGHLY improbable. It's also worth considering that the China Syndrome movie was just released in theaters shortly before this happened."
So in short, it seems very possible it was caused by someone wishing to sabotage the project as much as anything.
• Jimmy Carter (Score:3, Interesting)
by bgeer (543504) on Saturday April 04, 2009 @12:48PM (#27458839)
Our President at the time, Jimmy Carter, was also a micro-manager and a former nuclear engineer:
U.S. Navy reactor operators, the sort who served under Jimmy Carter in the 1950s,
Is not and never was a nuclear engineer, much less did he command a nuclear sub. He served as an enlisted man on several diesel-electric subs and started, but did not complete, a Naval class in nuclear engineering. He resigned from the Navy (as a lieutenant) before any nuclear subs were commissioned.
The FEMA guys were just plain stupid.
NO U
• Bleh (Score:5, Interesting)
by NewbieProgrammerMan (558327) on Saturday April 04, 2009 @12:54PM (#27458877)
U.S. Navy reactor operators, the sort who served under Jimmy Carter in the 1950s, were selected primarily for their temperament. ... their Navy job--as at TMI--was to follow the manual. All knowledge was inside the book. So knowing the book was everything. Unfortunately knowing the book isn't the same as knowing the reactor.
No. Just fucking no. There's a significant (and necessary) emphasis on following procedures and getting the books out for any planned change to the plant to make sure you're doing things right. But Cringely makes it sound like nuclear operators are just slightly trained mouth-breathers that only know how to look things up in the book and do what it tells them. I can't speak for the civilian training, but the Navy does NOT do things that way.
When something goes wrong, they depend on you having enough internalized knowledge about the plant, its controls, and its indicator systems to work out what's going on and (if necessary) do something about it. Once you've got stuff at least marginally under control, *then* you get the books out to check the applicable procedures to make sure you haven't forgotten something, and to figure out how to recover from whatever happened without causing any more problems.
The Navy puts a lot of effort put into making sure their operators know how and why things work the way they do. They would never have got to the 21st century with the track record they have if all they did was train people to look at the book.
• Re:Ugh. (Score:3, Interesting)
by Jonner (189691) on Saturday April 04, 2009 @12:57PM (#27458899)
If you read the article, you'd realize it was a very significant wake up call. Death was narrowly avoided because the reactor containment vessel was over-engineered compared to the typical design. The tragedy is that the lesson the public learned was that nuclear power was too dangerous to use at all, when the reality was that it was poorly designed and mismanaged.
• by NewbieProgrammerMan (558327) on Saturday April 04, 2009 @01:20PM (#27459081)
Don't let Cringely convince you that he actually knows anything about nuclear power plants--those guys had a whole room full of alarms, gauges, meters, etc., giving them a lot of info about the whole plant.
Shutting down the reactor could probably have been done by the operator within a couple of seconds by flipping a switch. IIRC, though, the automatic safety system shut it down at the beginning of the incident because it detected a situation that warranted it.
• by marco.antonio.costa (937534) on Saturday April 04, 2009 @03:06PM (#27459819)
As I tire of pointing out and people never tire of not understanding, lack of regulation does not mean free-for-all, might is right or whatever.
An unregulated nuclear industry does not mean plants can pour waste in other people's property. Since governments regulate commons they must either take responsibility to ensure they are not destroyed or privatize them to internalize the externalities.
• by Anonymous Coward on Saturday April 04, 2009 @04:37PM (#27460363)
My sister went to school there, and after two and a half semesters there she was diagnosed with thyroid cancer.
That's nothing. My grandad went to Lourdes and only six months later he got leukemia. I want to know when people are going to take a stand against the unshielded holy radiation that causes such damage to humans.
I'm posting this AC because I just know it's going to be marked troll, and you're going to post something like "she's dead now, you jackass" as though that's relevant to the debate. But if you read the Wikipedia article you linked to, several studies have found no evidence of any increase in death due to TMI, especially compelling with the observations that cancer deaths were highest in the area with lowest fallout, and that the area around TMI has high levels of radon and so high background radiation anyway.
Your sister may have died from cancer (I don't know) and that's certainly a tragedy. Nevertheless, this is not the fault of the nuclear industry, but one of those pieces of shit that happen depressingly regularly in this amoral, godless universe. As someone who is an engineer or in an allied trade (you respect engineers after all, so you must be one) you should know statistics well enough to accept that.
• by Bigjeff5 (1143585) on Saturday April 04, 2009 @06:17PM (#27461049)
That control room is very similar (if a bit larger and whiter) to the control rooms in gas plants, oil rigs, and pump/flow stations in oil fields today. The stuff may seem old as heck, but really a lot of that stuff you can't just replace with a fancy new computer. The best you can do in the control room is upgrade to digital displays and consolidate sections a little bit. But that may not even be ideal, because the analog systems will be able to run for a lot longer during a power failure than a digital will, and that's a BIG deal.
One thing you CAN do is send all the information in that control room to a fancy new computer, and then you only need a couple hands-on operators at the plant in case things go very wrong. The rest can be handled by operators sitting in front of a few monitors back at home-base.
I know you didn't really say it, but I'd wager you were thinking it, and you've got to realize that is not a giant computer. It is a giant control room. It's not like you can replace the steering wheel of your car because you've got a new engine.
• by anorlunda (311253) on Saturday April 04, 2009 @06:38PM (#27461161) Homepage
I used to work in the nuclear power plant operator training industry. Believe me, whatever else those operators were, they were not cheap. The CEO could not skimp on salaries and hire idiots. In fact, in a time when $40K was an excellent salary, the training costs per operator was more than $1 million.
On the other hand, there were cultural obstacles. In Europe (Sweden), they hired engineers with masters degrees to become nuclear plant operators. In the USA, they were mostly high school grads who were union members and promoted from running older coal plants. Union politics, not merit, decided who got promoted. They were not the best and brightest. Of course in Sweden they also attract the best and brightest to be civil servants. Can you imagine that happening here?
There are always plenty of suggestions as to where society should apply its best and brightest. It is much harder to place the worst and dumbest. Consider the bottom 25%. They have to have jobs. No matter where you assign them, the public will in some way be depending on those jobs being done well. So filling jobs becomes less of a question of rational allocation of resources, but more a matter of attractiveness and recruiting.
A plant operator must stand there and do nothing but monitor year after year, yet react swiftly and accurately in those rare seconds of pure terror, and then have the whole world second guess how well they did it. In addition, they have to do shift work for 24x7 operation. Most people think that it is a hell of an unattractive job. I think that the plant owners do a hell of a job trying to find and retain the best people they can get, and to enrich the jobs to make them less boring. It takes much more than deep pockets to succeed.
So you tell me. You play CEO and tell me how would you convince Google engineers to quit Google and become operators, and how many of the lower quartiles you would assign to invent Google. Convince those bright college students that they don't want to be environmental scientists, but nuclear power plant operators instead.
• by The_mad_linguist (1019680) on Saturday April 04, 2009 @07:07PM (#27461375)
Fun fact: cows in a field two miles away from Three Mile Island got more radiation from Chernobyl.
• by NewbieProgrammerMan (558327) on Saturday April 04, 2009 @07:46PM (#27461643)
This. Most of the US civilian nuclear power industry is, to say the least, heavily influenced by the military nuclear power industry and the cult of personality surrounding Admiral Rickover. If nobody is in control, nobody can be held accountable when the fan hits the shit.
Er, in what way is that "nobody is accountable" attitude reminiscent of the nuclear Navy? They're obsessive when it comes to accountability. Every time I saw any fecal matter hit a rotary device, they were pretty damn rigorous about getting to the bottom of it and finding out who did what.
• by hawk (1151) <> on Saturday April 04, 2009 @09:51PM (#27462313) Journal
to adapt a suggestion given by a libertarian acquaintance years ago . . .
Never mind government regulation. Require a half-trillion dollar liability policy. The insurance company will regulate far tighter and more effectively than the government.
hawk, who isn't advocating this, but finds it an interesting proposal
• by Lershac (240419) on Sunday April 05, 2009 @10:18PM (#27471451) Homepage
Look fella, you just cannot have that requirement, that a person with full understanding of how the plant operates be on site at all times! What happens if the day shift all gets killed on the bus ride home from the company outing? Or if there are, say, 10 guys who really have an understanding of the plant, and the plant gets bought out by some crap company and they decide to go pump gas for a living...
You have to design for the worst case.
First year
The first year students on their way to Hogwarts
Beginning the magical career
Travelling to Hogwarts
First years travelling across the Black Lake on boats, to reach Hogwarts
First years are typically eleven to twelve years of age, and begin the year by boarding the Hogwarts Express, which departs King's Cross Station at exactly 11 a.m. on 1 September and carries them to Hogwarts. If they live in Hogsmeade, they do not need to catch the train. Upon arrival, first year students are accompanied by the Keeper of Keys and Grounds (or another suitable teacher if they are absent), along a shady path that leads to a fleet of small boats, which sail themselves across the Black Lake before arriving at a small landing stage near the base of Hogwarts Castle; they then await their turn to be Sorted into their houses. A teacher takes them to a small room where they await the Sorting ceremony. Older students ride up to the castle in carriages pulled by Thestrals.
The Start-of-Term Feast
The Sorting Hat on Harry Potter's head
Just before the Start-of-Term Feast begins, new students are Sorted into one of four houses (Gryffindor, Hufflepuff, Ravenclaw, and Slytherin) by the Sorting Hat. The Hat analyses each student's mind, looking for specific characteristics that it uses to decide where to put each student. After the Sorting, the Headmaster says a few words and the feast begins. After the feast, the Headmaster says a few more words; if he or she is feeling particularly festive, they will direct the students as they sing the school song, as happened the year Harry Potter came to Hogwarts. Dumbledore used his wand as a conductor's baton, conjuring a ribbon that floated in the air, forming the words for the students to sing along with, each singing to the tune of their choice. Not all professors are overfond of the song, but Albus Dumbledore, the wise and odd man that he was, conducted the singing with gusto and even got a bit misty-eyed at the end of it:
Hogwarts, Hogwarts,
Hoggy Warty Hogwarts
Teach us something, please,
Whether we be old and bald
Or young with scabby knees,
Our heads could do with filling
With some interesting stuff,
For now they're bare and full of air,
Dead flies and bits of fluff,
So teach us things worth knowing,
Bring back what we've forgot,
Just do your best, we'll do the rest,
And learn until our brains all rot.
During the first year
Severus Snape teaching Potions to a first-year class
First year students must take Transfiguration, Charms, Potions, History of Magic, Defence Against the Dark Arts, Astronomy and Herbology, classes that are continued throughout their entire magical education. First years, and only first years, are required to take flying.[1]
In Astronomy, students observe the sky with their telescopes, learning the names and movements of the stars, planets and their moon(s).
Charms students learn the Levitation Charm, the Softening Charm, the Fire-Making Spell, and, as the exam requires them to make a pineapple dance across a desk, presumably the dancing charm.
Defence Against the Dark Arts classes learn about the Curse of the Bogies, the Knockback Jinx, and different ways to treat werewolf bites.
First years also learn the basic commands to give to their broomsticks, as well as basic tricks and tips for riding.
Herbology students study various plants and fungi, such as Dittany and Devil's Snare.
In History of Magic, they learn the names and dates of various famous events and people, including Emeric the Evil, Uric the Oddball, the Warlock's Convention of 1709, the inventor of the self-stirring cauldron, various goblin rebellions, and the uprising of Elfric the Eager.
Transfiguration students must take complex notes before learning the spell to turn a match into a needle, the spell to turn a mouse into a snuffbox, and the Switching Spell.
Lastly, Potions students learn the Cure for Boils potion and the Forgetfulness Potion.
A Standard First Year Timetable
First year Gryffindor
Period Monday Tuesday Wednesday Thursday Friday
First Herbology (?) Herbology (?) Herbology (?) Charms Potions
Second Defence Against the Dark Arts (?) Transfiguration Potions
Afternoon Flying (3:30 pm) ———
Midnight ——— ——— Astronomy ——— ———
First year restrictions
"Parents are reminded that first years are not allowed their own broomsticks"
—Extract from the Hogwarts acceptance letter[src]
Harry's Nimbus 2000, a gift from Professor McGonagall
First years cannot go to Hogsmeade with the students in the third year or above, and are not permitted to have their own broomstick inside the school grounds, an exception being Harry Potter, who was given a Nimbus 2000 in his first year. They are further forbidden from taking Divination, Muggle Studies, Study of Ancient Runes, Care of Magical Creatures and Arithmancy until third year, and Apparition and Alchemy until sixth year.
Required textbooks
Harry's first year (1991)
Main article: 1991-1992 school year
Upon arrival, the new students were greeted at the castle door by Professor McGonagall, who explained the four houses of Hogwarts: Gryffindor, Hufflepuff, Ravenclaw, and Slytherin, as well as the rules of the House Cup. McGonagall led the first years into a small room off the Entrance Hall and told them to wait until she returned.
The Sorting Ceremony and Start-of-Term Feast
Main article: Start-of-Term Feast
The Sorting
The first years were led into the Great Hall, where they were greeted by the rest of the students, and, more importantly, a shabby wizard's hat on a small stool. Harry was particularly anxious, as he did not feel that any of the Houses as they were described in the Hat's song were right for him. Harry noted that Draco Malfoy, whom Harry had met in Diagon Alley, was instantly placed in Slytherin, and remembered what Hagrid and Ron had told him about Slytherin's reputation for turning out Dark Wizards, and that Voldemort had been in Slytherin. When Harry put on the Hat, it slipped down past his eyes, and the hat told him that he "would do well in Slytherin". Thinking of Voldemort, Harry desperately begged the Hat to not put him in Slytherin. The Hat instead placed Harry in Gryffindor along with Ron and Hermione Granger.
First year lessons/classes
In his first ever Potions class, Harry discovered that Professor Snape hated him, mocking him as the school's "new celebrity" before teaching the class how to brew a Boil-Cure Potion. Harry and Ron went down to Hagrid’s hut for tea, where they met Hagrid's huge and fierce-looking dog, Fang. Hagrid told Harry that he was overreacting to Snape’s treatment, asserting that Snape would have no reason to hate him. Hagrid and Ron began to talk about Ron's brother Charlie, and Harry picked up a cutting from the Daily Prophet that was lying on the table. The article detailed a break-in that occurred on Harry's birthday at Gringotts bank, in Vault 713, the same vault Hagrid visited with Harry on their trip to Diagon Alley.
One of the things Harry had been looking forward to was learning to fly until he found out that the Gryffindors would be taking flying lessons with the Slytherins. Malfoy had been bragging about his skill to anyone who would listen. Madam Hooch taught the class by starting with basic broom control. After learning the theory, the students were told to hover gently off the ground on Madam Hooch's go-ahead. Neville, terrified of being left behind, panicked and kicked off before anyone else, rising fifty feet in the air before falling off and breaking his wrist. Madam Hooch took Neville to the hospital wing after warning the other students to stay on the ground until she got back. Malfoy, taunting Neville, nicked Neville's Remembrall off the ground. Harry, already enemies with Malfoy, told Malfoy to give him the Remembrall. Malfoy, jeering, claimed that he'd leave it "up a tree" unless Harry stopped him and took off on his broom. Harry, blood pounding in his ears, mounted his broom and kicked off after Malfoy. As much to Harry's surprise as everyone else's, he discovered that he could not only fly, but that it was something he didn't need to be taught. Bending low on the broom handle, Harry shot toward Malfoy, who, realizing that Harry was a better flyer, threw the ball in the air, daring Harry to catch it. Harry raced the ball towards the ground, catching it and coming out of his dive a foot from the ground. He toppled lightly onto the grass amidst the cheers of the Gryffindors, grinning wildly. His euphoria did not last long, however, as Professor McGonagall quickly arrived on the scene. Having seen the dive, she ordered Harry to follow her. Harry, expecting expulsion, was instead introduced to Oliver Wood, whom she pulled out of a Defence Against the Dark Arts class. Wood, captain of the Gryffindor Quidditch team, was at first confused at being introduced to a first year, but his confusion quickly turned to excitement and ecstasy upon hearing McGonagall recount the dive.
He told her that Harry would need a decent broom if they were to compete, and explained to Harry that he would make an excellent Seeker, perhaps rivaling the legendary Charlie Weasley.
The midnight duel and encountering Fluffy
Filch: "Which way did they go, Peeves? Quick, tell me."
Peeves: "Say ‘please.’"
Filch: "Don’t mess with me, Peeves, now where did they go?"
Peeves: "Shan’t say nothing if you don’t say please"
Filch: "All right — please."
Peeves: "NOTHING! Hahaaa! Told you I wouldn’t say nothing if you didn’t say please! Ha ha! Haaaaaa!"
Peeves taunting Filch after pretending to aid him.[src]
Harry told Ron about everything that happened after he left with McGonagall over dinner that night, but warned him that Wood wanted to keep it a secret. Much calmer on the ground, and with his cronies Vincent Crabbe and Gregory Goyle flanking him, Malfoy came over to taunt Harry about getting in trouble earlier. Enraged that Harry not only escaped trouble, but was instead rewarded, Malfoy challenged Harry to a wizard’s duel. In spite of Hermione’s attempt to dissuade them from breaking the school rules, (or perhaps because of it), Harry accepted. As they left the tower, the Trio found Neville, (whose wrist had been fixed by Madam Pomfrey), waiting outside, having forgotten the password. The four arrived at the Trophy Room, the site of the duel, but Malfoy was nowhere to be found. They speculated that he may have chickened out, and were deciding what to do next when they heard Argus Filch and his cat, Mrs Norris, enter the room. Realising that Malfoy tricked them, they attempted to quietly exit the room. Before they could go a dozen paces, however, a nearby doorknob rattled and Peeves burst into the hallway,[13] and threatened to expose them. Growing desperate, Ron took a swipe at Peeves, who began to bellow their whereabouts as loud as he could, attracting Filch. Panicking, the four ran for it, right to the end of the corridor, where they found themselves stopped by a locked door. With Filch’s running footsteps highly audible to them, Hermione seized Harry's wand and unlocked the door, allowing all four to hurry inside. Thinking themselves out of danger, the four turned around to discover a monstrous sight: a giant three-headed dog. Choosing Filch over death, the children ran for it, somehow managing to get back to their dormitory without running into anyone along the way. Though shaken by the night’s events, Harry’s interest in exploring was piqued by Hermione’s pointing out that the dog was standing on a trapdoor.
The Nimbus 2000
The traditional food, served at the (interrupted) 1991 Feast
On Halloween, Professor Flitwick began teaching his students how to make objects fly. Only Hermione succeeded; Ron, offended by her air of superiority, later made a nasty comment that Hermione overheard, causing her to run off in tears, something that made both Ron and Harry feel slightly guilty. When the two went down to the Hallowe'en feast later, they overheard Parvati telling her best friend Lavender that Hermione had locked herself in the girls' bathroom, making Ron feel even more uncomfortable. The instant Harry and Ron entered the Great Hall, however, their guilt was forgotten amidst the splendour of the decorations. Partway into the feast, Professor Quirrell arrived to announce that there was a twelve-foot troll in the dungeons, before fainting where he stood. As the prefects led the students back to their dorms, Harry, realising that Hermione did not know about the troll, convinced Ron that they were responsible, and that they needed to save her. Joining a group of passing Hufflepuffs, they snuck off to the girls' toilet to warn Hermione. Hiding, the two saw the troll enter a room, and realised they could lock the troll inside. As they turned to leave, they heard a terrified scream emanate from the room they had just locked. Horrified, they realised they had locked the troll in the same bathroom that Hermione had been hiding in. Harry and Ron ran back into the room to rescue Hermione; Harry stuck his wand up its nose, and Ron knocked the troll out using its own club. The teachers arrived, attracted by the troll's yells, to find Harry, Ron, and Hermione covered in dust, and the bathroom in disarray. Professor McGonagall, head of Gryffindor, began scolding the boys for not going straight to their dormitories with the rest of their house, but instead putting themselves in grave danger. Much to Harry and Ron's surprise, Hermione lied to McGonagall and told her that she had gone looking for the troll, as she thought she could handle it.
She claimed that Harry and Ron were looking for her, (which was true), and she would most likely be dead if the boys had failed to rescue her (also true). The three bonded over the shared experience, and were friends thereafter.
Main article: Quidditch
As the Quidditch season began, Harry became increasingly nervous. The first match of the season was against Slytherin, and Harry was under increasing pressure to show that he was not just a famous name. In an attempt to calm his nerves, Harry borrowed a book entitled Quidditch Through the Ages, a comprehensive history of the sport, from Hermione. During break the day before the match, Harry, Ron, and Hermione were huddled together around a jar of flames, which had been conjured by Hermione, to keep warm. Professor Snape noticed their guilty faces, and looking for a reason to punish them, confiscated Harry's book on the feeble pretext that library books were not to be taken outside. Harry noticed that Snape was limping, as though his leg was injured, strengthening his suspicions that the Potions Master was after whatever it was that Fluffy was guarding. Nervous about the next day's match, Harry decided to ask Snape for the book back; realising that he would most likely be in the staffroom and that it would be harder for Snape to bully him if there were other teachers around, Harry decided to confront him. Approaching the door, Harry overheard Snape complaining to Filch about Fluffy. Opening the door, Harry saw Filch helping Snape to bandage his leg, which was bitten and bloody. When Harry returned to the Common Room, he told his two friends everything he had seen.
Harry had little time to dwell on Snape's injury, however, as the first Quidditch match began the very next morning. Harry's job, as the Gryffindor Seeker, was to catch the Golden Snitch, a walnut-sized gold ball. Harry's first attempt to catch the Snitch was foiled when the Slytherin Seeker blatched him. Though the Seeker was penalised, the move succeeded in stopping Harry from getting to the Snitch, which was the Seeker's goal. Soon after, Harry's broom began bucking uncontrollably, as if trying to unseat him. Ron and Hermione, watching Harry from the stands, began to wonder if the other was at fault until Hagrid, who had arrived to watch the game, noted that it would take powerful dark magic to make a broomstick so hard to manage, magic well above the level of a second year. Hermione, who had turned her gaze away from Harry, and was scanning the stands, noticed that Snape was staring unblinkingly at Harry and muttering nonstop under his breath. Thinking quickly, Hermione took advantage of the fact that everyone's attention was now focused on Harry (and the Weasley twins' attempts to rescue him) to run around the entire stadium, knocking over Professor Quirrell, and ending up behind Snape. Muttering a few "well-chosen words", Hermione lit Snape's robes on fire; a yell of shock told her she had done her job, and she scooped the fire into a jar. Suddenly, up in the air, the spell on Harry's broom was broken and Harry was once again able to control his broom. The crowd watched in confusion as Harry dove towards the ground, only to clasp his hand to his mouth as if he was being violently sick the instant he landed. To everyone's disbelief, Harry had caught the Snitch in his mouth in a dive he executed after holding onto a jinxed broom fifty feet in the air, ending the game in what may have been the most chaotic manner possible.
The Christmas season
Main article: Christmas
The Great Hall during Christmas
The rest of the school was impressed that Harry had managed to hold onto a bucking broomstick, and Malfoy soon found that no one found his taunts that Harry was to be replaced amusing, so he reverted to teasing Harry about having to stay at Hogwarts for the holidays. Harry, however, was looking forward to spending Christmas away from the Dursleys, especially in light of the fact that Ron was also staying at Hogwarts, but also because it would give them some time to look up Nicolas Flamel; they were certain that the librarian would be able to find a book on Flamel in an instant, but were worried that it might be suspicious, and were thus forced to search for themselves. On Christmas Day, Harry and Ron awoke to a pile of presents each at the foot of their beds. Harry received a flute from Hagrid, a fifty pence coin from the Dursleys, which he gave to Ron (who had never seen Muggle money), assorted sweets from Hermione, and a knitted sweater from Ron's mother. At the bottom of the pile, he found a package containing an Invisibility Cloak and an anonymous note telling him only that the cloak once belonged to his father, and to "use it well". That night, after a satisfying Christmas dinner, Ron fell asleep instantly, but Harry, thoughts on the Cloak that had belonged to his father and the note telling him to "use it well", decided to try it out. Realising he could go anywhere, he snuck back to the library and headed straight for the Restricted Section. Knowing he had to start somewhere, Harry pulled down one of the heavier books, and let it fall open on his knee. To his shock and horror, the silence was rent by a blood-curdling scream that issued from the book in front of him. He stuffed the book back in its place and ran for the door, knocking over the lantern he brought with him in his haste. Ducking under Filch's outstretched arms, Harry ran down the dark corridors, away from the library, and away from Filch.
Thinking he had escaped, Harry was scared to hear Filch's voice approaching, and horrified when he realised who Filch was talking to: Snape. As Snape and Filch rounded the corner, Harry realised that although his father's cloak made him invisible, it did not stop him from being solid, and that he had no chance of sneaking past them, as the corridor was particularly narrow. Thinking quickly (and panicking slightly), Harry noticed a door to his left; slipping inside, he found himself in an abandoned classroom. After Filch and Snape passed his hiding place, Harry relaxed and took in more details about the room he was in. In doing so, he noticed something he had missed the first time: an old, gilded mirror bearing the inscription "Erised stra ehru oyt ube cafru oyt on wohsi". Stepping in front of the mirror, Harry very nearly cried out in shock: inside the mirror he saw a large crowd of people standing behind him. Shocked, Harry turned around to look at the room, but saw no one there. Turning back to the mirror and looking more closely, Harry realised that the man and woman in the front looked oddly like him. The man looked just like him, from his untidy hair to his glasses, and the woman, Harry saw, had the same eyes he had. Understanding, Harry focused on other members of the crowd, and saw others who had his untidy hair, his eyes, and even an old man who had Harry's knees; Harry was looking at his family, for the first time in his life.
Harry with Firenze in the Forbidden Forest
Having realised how much Harry, Ron and Hermione had worked out about the Stone after running into them in the library, Rubeus Hagrid told them to meet him in his hut later. When the Trio arrived later, they noticed that the fire was lit, despite the heat of the day. Although he was reluctant to answer their questions, Hermione managed to charm him into talking about the various protections used to guard it: Fluffy, the three headed dog, was Hagrid's, along with enchantments from Professors Sprout, Flitwick, McGonagall, Quirrell, and Snape. Harry, growing uncomfortable in the heat, asked Hagrid to open a window, something Hagrid refused to do as he had a dragon egg in the fire. Unfortunately, Draco Malfoy discovered the dragon, and decided to use the knowledge to get revenge by getting them into trouble for possessing an illegal dragon. To save everyone involved, Harry, Ron, and Hermione convinced Hagrid to send Norbert off to Ron's brother Charlie, who would take Norbert to a Romanian dragon preserve. While helping Hagrid to prepare Norbert for the journey, the dragon bit Ron's hand, causing it to swell up and forcing Ron to see Madam Pomfrey. On the pre-arranged night, Harry and Hermione managed to smuggle Norbert in a crate up to the Astronomy Tower under Harry's Invisibility cloak. On the way up they witnessed Professor McGonagall hauling Malfoy away for being out of bed at night, who protested that Harry was in possession of a dragon. Harry and Hermione passed the crate off to Charlie's friends and headed back down the stairs, where they were confronted by a gleeful Argus Filch; they had left the Cloak behind.
The next morning, Harry, Hermione, and Neville received notes from Professor McGonagall informing them their detention would begin at eleven that night. Argus Filch took them out to the Forbidden Forest, where Hagrid was waiting for them. Hagrid led them into the Forbidden Forest and showed them a pool of unicorn blood on the ground. They split up, Hagrid taking Harry and Hermione, while Neville and Malfoy went off with Fang. After Malfoy scared Neville into sending up red sparks, Hagrid sent Harry off with Malfoy, deciding that Malfoy would be less likely to scare Harry. As they continued, Harry noticed the pools of unicorn blood they were following seemed to be growing larger and larger, as if the animal had been thrashing around. Eventually, they came to a clearing and found the unicorn lying on the ground, very dead. As they watched, a hooded figure emerged from the bushes and began to drink the unicorn's blood. Malfoy screamed and bolted away with Fang, leaving Harry, half blinded by the pain in his scar, to stumble away from the advancing figure. Harry was saved by Firenze, a palomino centaur, who allowed Harry to ride on his back out of the forest. Firenze told Harry the properties of unicorn blood: it "will keep you alive, even if you are an inch from death, but at a terrible price. You have slain something pure and defenceless to save yourself, and you will have but a half-life, a cursed life, from the moment the blood touches your lips." Harry realised that there would only be one person who would be so desperate as to commit such an act: Lord Voldemort.
The Philosopher's Stone
Main article: Philosopher's stone
Dumbledore: "Harry, do you know why...Prof. Quirrell couldn't bear to have you touch him? It was because of your mother. She sacrificed herself for you. And that kind of act leaves a mark.. No, no. This kind of mark cannot be seen. It lives in your very skin."
Harry: "What is it?"
Dumbledore: "Love, Harry. Love."
Albus Dumbledore to Harry[src]
Behind the scenes
Lesson Monday Tuesday Wednesday Thursday Friday Saturday
First Xylomancy Potions Double Potions Defence Against the Dark Arts Herbology Potions
Second Potions History of Magic Charms Potions History of Magic Potions
Third Defence Against the Dark Arts Herbology Magical Theory Transfiguration Charms Transfiguration
Fourth Charms Transfiguration Magical Theory Flying Magical Theory Transfiguration
Notes and references
1. Pottermore
2. Harry Potter and the Philosopher's Stone - Chapter 8 (The Potions Master): "Three times a week they went out to the greenhouses behind the castle to study Herbology [...]"
3. Harry Potter and the Philosopher's Stone - Chapter 14 (Norbert the Norwegian Ridgeback): "Then, one breakfast time, Hedwig brought Harry another note from Hagrid. [...] Ron wanted to skip Herbology and go straight down to the hut. Hermione wouldn't hear of it. [...] Ron and Hermione argued all the way to Herbology and in the end, Hermione agreed to run down to Hagrid's with the other two during morning break."
4. Harry Potter and the Philosopher's Stone - Chapter 13 (Nicolas Flamel): "The next morning in Defense Against the Dark Arts, while copying down different ways of treating werewolf bites [...]"
5. Harry Potter and the Philosopher's Stone - Chapter 8 (The Potions Master): "They had to study the night skies through their telescopes every Wednesday at midnight and learn the names of different stars and the movements of the planets."
6. Harry Potter and the Philosopher's Stone - Chapter 13 (Nicolas Flamel): "Potions lessons were turning into a sort of weekly torture, Snape was so horrible to Harry."
7. Harry Potter and the Philosopher's Stone - Chapter 8 (The Potions Master): "Friday was an important day for Harry and Ron. [...] "What have we got today?" Harry asked Ron as he poured sugar on his porridge. "Double Potions with the Slytherins," said Ron."
8. Harry Potter and the Philosopher's Stone - Chapter 10 (Hallowe'en): "On Halloween morning [...] Professor Flitwick announced in Charms that he thought they were ready to start making objects fly [...]"
9. Harry Potter and the Philosopher's Stone - Chapter 10 (Hallowe'en): "Hermione didn't turn up for the next class and wasn't seen all afternoon".
10. Harry Potter and the Philosopher's Stone - Chapter 8 (The Potions Master): "Friday was an important day for Harry and Ron. [...] "Wish McGonagall favored us," said Harry. Professor McGonagall was head of Gryffindor House, but it hadn't stopped her from giving them a huge pile of homework the day before."
11. Harry Potter and the Philosopher's Stone - Chapter 9 (The Midnight Duel): "Flying lessons would be starting on Thursday — and Gryffindor and Slytherin would be learning together."
12. Harry Potter and the Philosopher's Stone - Chapter 9 (The Midnight Duel): "At three-thirty that afternoon, Harry, Ron, and the other Gryffindors hurried down the front steps onto the grounds for their first flying lesson."
Big Increase of Background Checks for Minnesotans Seeking Gun Permits
Updated: 01/05/2013 3:30 PM By: Mark Saxenmeyer
To buy a gun from a licensed dealer in the U.S., you first need to pass an FBI background check. The agency says last month in Minnesota, it conducted 20,000 more checks than it did a year ago, in December 2011.
The increase would seem to indicate there's a growing demand for guns. So what's behind it? Analysts attribute the spike to concerns that tighter gun control regulations might take effect this year, in response to the Sandy Hook Elementary shootings. And that some people who want guns fear they need to get them now, or they might not be able to get them later.
Friday evening, gun enthusiasts were making final preparations at the Brooklyn Park Armory for a two-day gun and knife show. They were expecting record crowds--and sales. It's the first major gun show in the Twin Cities this year, and the second since the Connecticut tragedy. A show in Bloomington last month drew 80 percent more attendees than the year before.
"Well, because a lot of people really think the government is going to take some strong actions to limit purchases and possessions. Everybody wants to get their toys," said Keith Kallstrom, a self-proclaimed "lifetime hunter, target shooter and collector." He was planning to sell a rifle and a shotgun at the show.
There were nearly 56,000 FBI background checks done in Minnesota in December, a result of people applying for permits. Compare that to December 2011 when there were a little more than 34,000 checks done. That's a 64 percent increase.
Looking at this trend across the U.S., at the total number of potential gun buyers the FBI investigated in December, the numbers are up as well. In fact, a new national record was set. The FBI conducted nearly 2.8 million background checks. That's a 49 percent increase compared to December 2011.
"Yeah, I see a lot of panic buying," said David Meacham of Coon Rapids. He's not selling guns at the show, only ammunition and reloading supplies. But he made it clear: "If people don't have a permit to carry or a permit to purchase, I will not sell them a firearm."
Yet according to the Coalition to Stop Gun Violence, background checks aren't required in Minnesota--or 32 other states--for people wanting to buy guns at these shows. Critics have long said that's a loophole that needs to be closed--especially now, in light of growing concerns that gun violence is out of control.
"They have the perception that anybody can come in here to a private party and buy anything they want," Kallstrom said. "I'm not going to say that doesn't happen but I'm going to say it's unusual."
Jim Wright, the organizer of the gun show, said, "At least here in a gun show it's a controlled environment, versus the individual making sales out of their house. These sellers don't want to do that so they come here where they have a larger audience to move the guns."
Top 5 Myths About the Fourth of July!
tags: Rick Shenkman, myths, Independence Day, July 4, Fourth of July
Originally published 7-08-2003
Credit: Wiki Commons.
#1 Independence Was Declared on the Fourth of July.
John Adams, writing a letter home to his beloved wife Abigail the day after independence was declared (i.e. July 3), predicted that from then on "the Second of July, 1776, will be the most memorable Epocha, in the History of America. I am apt to believe it will be celebrated, by succeeding Generations, as the great anniversary Festival." A scholar coming across this document in the nineteenth century quietly "corrected" it, so that Adams predicted the festival would take place not on the second but on the fourth.
#2 The Declaration of Independence was signed July 4.
Hanging in the grand Rotunda of the Capitol of the United States is a vast canvas painting by John Trumbull depicting the signing of the Declaration. Both Thomas Jefferson and John Adams wrote, years afterward, that the signing ceremony took place on July 4. When someone challenged Jefferson's memory in the early 1800s, Jefferson insisted he was right. The truth? As David McCullough remarks in his new biography of Adams, "No such scene, with all the delegates present, ever occurred at Philadelphia."
The truth about the signing was not finally established until 1884, when historian Mellen Chamberlain, researching the manuscript minutes of the journal of Congress, came upon the entry for August 2 noting a signing ceremony.
As for Benjamin Franklin's statement, which has inspired patriots for generations, "We must all hang together, or most assuredly we shall hang separately" … well, there's no proof he ever made it.
#3 The Liberty Bell Rang in American Independence.
Well of course you know now that this event did not happen on the fourth. But did it happen at all? It's a famous scene. A young boy with blond hair and blue eyes was supposed to have been posted in the street next to Independence Hall to give a signal to an old man in the bell tower when independence was declared. It never happened. The story was made up out of whole cloth in the middle of the nineteenth century by writer George Lippard in a book intended for children. The book was aptly titled Legends of the American Revolution. There was no pretense that the story was genuine.
If the Liberty Bell rang at all in celebration of independence nobody took note at the time. The bell was not even named in honor of American independence. It received the moniker in the early nineteenth century when abolitionists used it as a symbol of the antislavery movement.
If you visit the Liberty Bell in Philadelphia, encased in a multi-million dollar shrine (soon to be replaced by an even grander building), a tape recording made by the National Park Service leaves the impression that the bell indeed played a role in American independence. (We last heard the recording three years ago. We assume it's still being played.) The guides are more forthcoming, though they do not expressly repudiate the old tradition unless directly asked a question about it. On the day we visited the guide sounded a bit defensive, telling our little group it didn't really matter if the bell rang in American independence or not. Millions have come to visit, she noted, allowing the bell to symbolize liberty for many different causes. In other words, it is our presence at the bell that gives the shrine its meaning. It is important because we think it's important. It's the National Park Service's version of existentialism.
As for the famous crack … it was a badly designed bell and it cracked. End of story.
#4 Betsy Ross Sewed the First Flag.
A few blocks away from the Liberty Bell is the Betsy Ross House. There is no proof Betsy lived here, as the Joint State Government Commission of Pennsylvania concluded in a study in 1949. Oh well. Every year the throngs still come to gawk. As you make your way to the second floor through a dark stairwell the feeling of verisimilitude is overwhelming. History is everywhere. And then you come upon the famous scene. Behind a wall of Plexiglas, as if to protect the sacred from contamination, a Betsy Ross manikin sits in a chair carefully sewing the first flag. Yes, ladies and gentlemen, this is where Betsy sewed that first famous symbol of our freedom, the stars and stripes, Old Glory itself.
Alas, the story is no more authentic than the house itself. It was made up in the nineteenth century by Betsy's descendants.
The guide for our group never let on that the story was bogus, however. Indeed, she provided so many details that we became convinced she really believed it. She told us how General George Washington himself asked Betsy to stitch the first flag. He wanted six point stars; Betsy told him that five point stars were easier to cut and stitch. The general relented.
After the tour was over we approached the guide for an interview. She promptly removed her Betsy Ross hat, turned to us and admitted the story is all just a lot of phooey. Oh, but it is a good story, she insisted, and one worth telling.
Poor Betsy. In her day she was just a simple unheralded seamstress. Now the celebrators won't leave her alone. A few years ago they even dug up her bones where they had lain in a colonial graveyard for 150 years, so she could be buried again beneath a huge sarcophagus located on the grounds of the house she was never fortunate enough to have lived in.
So who sewed the first flag? No one knows. But we do know who designed it. It was Francis Hopkinson. Records show that in May 1780 he sent a bill to the Board of Admiralty for designing the "flag of the United States." A small group of descendants works hard to keep his name alive. Just down the street from Betsy's house one of these descendants, the caretaker for the local cemetery where Benjamin Franklin is buried, entertains school children with stories about Hopkinson, a signer of the Declaration, who is also credited with designing the seal of the United States. We asked him what he made of the fantasies spun at the Betsy Ross house. He confided he did not want to make any disparaging remarks as he was a paid employee of the city of Philadelphia, which now owns the house.
The city seems to be of the opinion that the truth doesn't matter. Down the street from the cemetery is a small plaque posted on a brick building giving Hopkinson the credit he rightly deserves.
As long as the tourists come.
#5 John Adams and Thomas Jefferson Died on the Fourth of July.
Ok, this is true. On July 4, 1826, Adams and Jefferson both died, exactly fifty years after the adoption of Jefferson's Declaration of Independence, which the country took as a sign of American divinity. But there is no proof that Adams, dying, uttered, "Jefferson survives," which was said to be especially poignant, as Jefferson had died just hours before. Mark that up as just another hoary story we wished so hard were true we convinced ourselves it is.
Have a Happy Fourth!
Related Links
• Independence National Historical Park
More Comments:
Dale R Streeter - 6/29/2010
Thank you! Accuracy and preciseness are worth noting. (No sarcasm intended here.)
WILLIAM HYLAND - 7/3/2009
In celebration of the 4th of July, I offer this essay in defense of our greatest founding father, Thomas Jefferson. I feel Mr. Jefferson’s reputation has been unfairly eviscerated by a misrepresentation of the DNA results in the Hemings controversy. The exhumation of discredited, prurient embellishments has not only deluded readers, but impoverished a fair debate. In fact, with the possible exception of the Kennedy assassination, I am unaware of any major historical controversy riddled with so much misinformation and outright inaccuracies as the sex-oriented Sally Hemings libel.
The “Sally” story is pure fiction, possibly politics, but certainly not historical fact or science. It reflects a recycled inaccuracy that has metastasized from book to book over two hundred years. In contrast to the blizzard of recent books spinning the controversy as a mini-series version of history, I found that layer upon layer of direct and circumstantial evidence points to a mosaic distinctly away from Jefferson. My research, evaluation, and personal interviews led me to one inevitable conclusion: the revisionist grip of historians has the wrong Jefferson--the DNA, as well as other historical evidence, matches perfectly to his younger brother, Randolph, and his teen-age sons, as the true candidates for a sexual relationship with Sally.
A monopoly of books (all paternity believers) written since the DNA results have gone far beyond the evidence and transmuted conjecture into apparent fact, and in most instances, engaged in a careless misreading of the record. My new book, IN DEFENSE OF THOMAS JEFFERSON (Thomas Dunne Books, 2009), definitively destroys this myth, separating revisionist ideology from accuracy. It is historical hygiene by pen, an attempt to marshal facts, rationally dissect the evidence and prove beyond reasonable doubt that Jefferson is completely innocent of this sordid charge:
• the virulent rumor was first started by the scandal-mongering journalist James Callender, who burned for political revenge against Jefferson. Callender was described as “an alcoholic thug with a foul mind, obsessed with race and sex,” who intended to defame the public career of Jefferson.
• the one eyewitness to this sexual allegation was Edmund Bacon, Jefferson’s overseer at Monticello, who saw another man (not Jefferson) leaving Sally’s room ‘many a morning.’ Bacon wrote: “…I have seen him come out of her mother’s room many a morning when I went up to Monticello very early.”
• Jefferson’s deteriorating health would have prevented any such sexual relationship. He was 64 at the time of the alleged affair and suffered debilitating migraine headaches which incapacitated him for weeks, as well as severe intestinal infections and rheumatoid arthritis. He complained to John Adams: “My health is entirely broken down within the last eight months.”
• Jefferson owned three different slaves named Sally, adding to the historical confusion. Yet, he never freed his supposed lover and companion of 37 years, ‘Sally Hemings’ from her enslavement, nor mentioned her in his will.
• Randolph Jefferson, his younger brother, would have the identical Jefferson Y chromosome as his older brother, Thomas, that matched the DNA. Randolph had a reputation for socializing with Jefferson's slaves and was expected at Monticello approximately nine months before the birth of Eston Hemings, Sally’s son who was the DNA match for a “male Jefferson.”
• The DNA match was to a male son of Sally’s. Randolph had six male sons. Thomas Jefferson had all female children with his beloved wife, Martha, except for a male who died in infancy.
• Until 1976, the oral history of Eston’s family held that they descended from a Jefferson "uncle." Randolph was known at Monticello as "Uncle Randolph."
• Unlike his brother, by taste and training Jefferson was raised as the perfect Virginia gentleman, a man of refinement and intellect. The personality of the man who figures in the Hemings soap opera cannot be attributed to the known nature of Jefferson, and would be preposterously out of character for him.
William G. Hyland Jr.
Attorney at Law
Tampa, FL.
andy mahan - 9/19/2006
I Corinthians 13 is one of the most beautiful truths of the word of God.
9 For we know in part and we prophesy in part,
10 but when perfection comes, the imperfect disappears.
Harold Robert Hunter Jr - 7/9/2006
Interesting article about the myths of the 4th of July. I wonder what it would've been like if the US had lost the war against the British? But things happen for a reason and the colonists won. July 4th is a great holiday and I'll cherish celebrating it until I die.
Harold Hunter Jr, Esq.
Hunter Law Office, PLLC
464 Eastway Drive
Charlotte, NC 28205
S Anaya - 7/1/2006
Sontag is good…
She claimed that white people were the ‘cancer of human history’.
She said that the 9/11 terrorist attacks were not a cowardly act on civilisation, liberty, or humanity. She justified the attacks while the ruins of the World Trade Center still lay smoldering atop the bodies of 2,900 innocent civilians.
She bestowed the ‘virtue’ of courage upon Islamic fanatics, yet in her twisted logic dismisses the word courage as being a morally neutral virtue when spoken by people who didn’t fit into her small world view. Since when has the word ‘courage’ been exclusive of morality? I guess somewhere in the world of Ms. Sontag she was able to successfully extract and apply morality to whatever suited her. Hey, nothing like being a liberal, it allows for the ‘liberal’ interpretation of all words regardless of their meaning, both literal and symbolic. I can only imagine the conversations Ms. Sontag must have had with Mr. Chomsky… ‘…. Let’s see, how can we change this word to mean something less virtuous so that it suits our agenda better?…’, ‘….oh, but we reserve the right to use the word in its true meaning if it will also suit our cause…’. Pinning down liberal thinking is a bit like trying to pick up spilled mercury.
Ms. Sontag -- A self-proclaimed human rights activist and lover of peace who hails Rachel Corrie as a heroine for using herself as a human shield to protect the bomb caches of Palestinian Terrorists. The very bomb caches used to blow up innocent men, women, and children in Israel. I well imagine that there is an Israeli family who doesn’t consider Rachel Corrie to be quite the model of ‘virtue’ that our Ms. Sontag proclaimed her to be. Yet typical of leftist ideologues, the ends justify the means, so whatever furthers the cause is just alright with them. Never mind that Palestinians blow up innocent civilians, who cares that Saddam murdered hundreds of thousands of his own people… the ‘real enemy’, and listen well all you poor unsuspecting young Students, is the United States of America and the head of the great beast is none other than the evil incarnate, George W. Bush.
Yes, Sontag was good alright… a good little Marxist who proclaimed America to be the great enemy of human justice and civilisation, while she enjoyed a prosperous and celebratory lifestyle living the very dream of Life, Liberty, and Happiness that still remains just a dream for millions throughout the world.
I wonder, Mr. NYGuy… will you be teaching both sides of the two headed snake, or just one?
As far as you’re concerned, Mr. I Have a Different Voice Therefore I’m More Enlightened Than You… I wonder if you were as ashamed of the United Nations when it turned its head away from the slaughter of 800,000 people in Rwanda, or was that just alright with you because George W. Bush wasn’t President of the USA at the time?
Kevin DeVita - 6/26/2006
The tone of this piece disappoints me. The authors seem to be in joyful glee that they are destroying American myths. Where is the objectivity now?
Give me the facts. Just don't sound so happy in the telling please.
Kevin Solomon - 7/4/2005
Oftentimes, fiction can be inserted into truth and, unfortunately, be accepted. When this happens, things get out of hand, and historians just have to remember that this isn't true; that is all we can do to protect the truth.
E. Simon - 7/4/2004
If you haven't done so already, Benjamin, I recommend you check out Walter A. MacDougal's essay on the development of the American civic religion. Betsy Ross, the Pilgrims, etc. all obscure the numerous and more relevant narratives that illustrate the country's political development, through the individuals most closely involved in it - Jefferson, Paine, etc. If you consider these (the latter) stories as fulfilling the equivalent need for an American mythology, then I think no amount of revision will allow the essential sense of purpose within them to be degraded. Your second to last paragraph begins to hit on that point, it seems, and is worth exploring further.
Kenneth T. Tellis - 6/30/2004
When is someone in the US going to get it right? English Citizens? Pray tell me what an English citizen is?

American colonists were British subjects, not English citizens. That status has been so from the Union of Great Britain, meaning Scotland and England. That is why the term Union Jack is so often misused. The Union flag of Great Britain is only a Union Jack when it flies on the jackstaff of a ship, and not otherwise.

I would like to remind Americans that there has been no Queen of England since 1714. That title has been defunct since the time of Queen Anne's death. And Queen Anne held that title because of the two separate kingdoms. Today Queen Elizabeth II is Queen of the United Kingdom of Great Britain and Northern Ireland.
Stephen Vinson - 6/29/2004
"innocent, overmatched Iraqi soldiers"
A weak bully is still a bully. Last March they met a bigger one.
>On this 4th of July, think about the 10,000 families that lost civilians in Iraq and the 1000s of innocent, overmatched Iraqi soldiers that were decimated by the American killing machine and never got a chance to reach adulthood.<
I'll think about it, to the same extent I think of Dresden when I think of World War II. Cyanide gas and plastic shredders take priority.
Benjamin Scott Crawford - 6/29/2004
And I am not arguing that teachers, historians, tour guides, etc., should intentionally pass on misinformation and myths - they should be as objective as possible and search for truth. However, the myths of our past are anything but "pseudoscience." They are real - they shaped an American character to some degree - and as such, they are fair game for historians to study in order to better understand the nation's past and its character/identity.
Michael Meo - 6/29/2004
I'll agree that historical myths are worth study. Pseudoscience is worth study.
Intentionally propagating a misconception has, for me, nothing of nobility about it. Nor, while I admire Plato (and Strauss too), do I accept his teachings as a valid metaphysical statement of the human condition.
You are sophisticated enough not to need specific examples of 'noble lies' that got everyone into trouble. It's a slippery slope; we struggle against ignorance, in general, and myths are not our allies.
Benjamin Scott Crawford - 6/29/2004
Please re-read my post.
I did NOT state so much that the ends justify the means. I was simply first exploring from a philosophical perspective some observations about the noble lie. Second, I did NOT argue that lies should entirely be perpetuated, but rather we as historians need to examine the historical significance of the creation of these national myths - my post is not meant to be a justification for these myths, but rather a suggestion that these myths are a valid topic for historians to study. These myths arguably played an important role in the young republic as they helped unite an extremely diverse group of people.
I do believe that these stories, because they have become so much a part of America's historical consciousness and even identity, do have a place in our society - the lesson associated with George Washington's cherry tree incident, for example, does teach a good lesson - one should not lie (an interesting paradox in that a lie teaches children not to lie).
Of course, as we mature and begin to examine history from a mature perspective, we put away our childish ways - I think my post demonstrates that I am aware of these myths and I have put these childish things away - but not completely, as no mature historian should. As historians, we would be entirely remiss to ignore them completely. Please remember that I did note in my post above that the tour guide SHOULD inform visitors that these stories are not accurate, but again, since these stories are as much a part of the nation's history as the truth, they should still be examined. Do you believe Abrams in my post above to be a child because she wrote an entire book examining some important myths associated with the history of America? I would hope not, because I believe Abrams, among others (e.g., Jill Lepore, James and Patricia Deetz, etc.) are contributing greatly to our understanding of history as they explore the origin of such myths and how those myths shaped the early republic.
This is how I explain it to my students - and, Michael, I have actually, unfortunately, brought at least one student to tears when she learned that many of the stories her parents and former teachers had told her were lies - when in the classroom, I still meet hostility from some students when I explain to them how Disney gets it so wrong in POCAHONTAS.
So, please, do not insinuate that I am holding onto my childish ways because I simply note that these stories cannot be ignored - I have actually done my part in revealing many of these myths.
I hope this is clear - believe me, Michael, I am a strong advocate of finding the truth, not that we ever can - we are, after all, in a cave - and of revealing historical inaccuracies.
The noble lie is an interpretive perspective of that important part of Book III - yes, associated with Strauss, Voegelin, and Bloom, among others. I believe that particular interpretive model to be accurate - in order for Plato's city in speech to survive it must be founded on a lie. This, along with numerous "inconsistencies" within the work as a whole (e.g., Book V and its comedic elements; Plato's condemnation of mimesis, yet the entire work being one of imitation, etc.) also possibly reveals Plato's beliefs about man's ability, or rather inability, to find utopia.
Finally, exactly how does the noble lie "hinder real problem-solving"? It is a lie that is noble in that it serves a truly just cause - it brings justice. How does it affect "problem-solving"? What problems, specifically, did American myths prevent from being solved? I am just curious about your line of thought here.
Michael Meo - 6/29/2004
I do not believe, Benjamin, that the ends justify the means.
Telling noble lies may help to promote unity but may do more to hinder real problem-solving.
Perhaps you have noticed that the phrase "noble lie" is most prominent these days as an accusation against the followers of Leo Strauss; as for me, although I think the accusations against the late classicist are not well documented, I prefer to practice the prescription of Paul's First Letter to the Corinthians, chapter 13, verse 11:
(Quoted, of course, for its eloquence, not for its implicit recommendation that we become 'adult' in the Evangelical sense.)
Benjamin Scott Crawford - 6/28/2004
Plato through the voice of Socrates informs us in Book III of THE REPUBLIC that in order for his city in speech to ever have a chance of existing (it is debatable as to whether or not they believed this utopia could exist - see the comical Book V), a noble or big lie would need to be created to insure that everyone stayed in his or her assigned place in society. Children were to be taught that before they were born certain elements (gold, silver, and bronze) were mixed in with their blood, dictating what occupation they should take on in life - ruler, warrior, or laborer. Only in this manner did Socrates and Plato believe that rulers, warriors, and laborers would not try to become something they were not meant or suited to be and in turn create an unjust society - of course, with the city in speech serving as a macrocosm of the individual's soul, the lesson is clear: each part of the soul must perform its natural duties in order for the individual to have a healthy and just soul.
In a similar manner, the United States has its noble lies; myths, if you will, that serve to teach its citizens values and to give them stories to bring pride, respect, and loyalty. The article above highlights only a few of the numerous myths that have crossed over to the nation's collective memory. If these myths were to be believed, representative democracy emerged in New England (when arguably it actually emerged with the creation of the House of Burgesses in Virginia - 1619), Pilgrims/Puritans wore black all of the time (they really wore colorful outfits), turkey was served at the "first" Thanksgiving (the first Thanksgiving actually occurred in Virginia, not with the Pilgrims - there is no real evidence that turkey was served at either place; it appears that shell fish, fish, "water fowl," and deer were the primary meats), and Pocahontas was a voluptuous, incredibly sexy vixen who had some sort of love affair with John Smith (actually, when, and if, she saved John Smith's life, she was only around 12 years old, most likely would have had her head shaved, and was NEVER romantically involved with Smith - remember, she married John Rolfe - this part is just for any younger students who are reading this post - Disney got it wrong, sorry). Of course, possibly the greatest noble lie the nation retains is that George Washington cut down his father's cherry tree and then later confessed because "he could not tell a lie." This is a great story that reinforces the importance of character and honesty, but a myth, nonetheless.
The essay above does correctly recognize myths about the nation's founding. However, what is lacking in the essay, and what I believe we as historians should focus on, is what factors came about to create so many historical misnomers and fallacies. In her work THE PILGRIMS AND POCAHONTAS, Ann Uhry Abrams does an excellent job uncovering the roots of the many myths surrounding the Pilgrims, Thanksgiving, Pocahontas, and John Smith. Abrams suggests that the ways in which the two competing founding myths were portrayed in art during the antebellum period reflect the sectionalism the nation experienced during that turbulent time. With the North's victory in the Civil War, it was their representations of the Pilgrims and Pocahontas that survived and dominated, and continue to dominate, the national historical consciousness; it also allowed many Americans to believe that the birth of the nation - or rather that the roots of the birth of the nation - emerged in 1620 with the arrival of the Pilgrims on the Mayflower. This, of course, is 13 years after the English arrived and established Jamestown and one year after representative "democracy" came to the New World through the House of Burgesses.
The United States lacks a common religion, race, and ethnicity - three forces that have historically united people and that allowed, or facilitated, the creation of the nation state. So how were Americans to unite? Noah Webster believed that language could unite Americans, so he set out to attempt to create a newer, phonetically more accurate, form of English (see Jill Lepore, A IS FOR AMERICAN). However, in large part, national myths were needed to accomplish what had never been done before: the uniting of an EXTREMELY diverse group of people under one flag - incorrectly attributed to Betsy Ross. The myths that emerged during the early nineteenth century, a time when the fate of the nation was fairly precarious, were attempts to bring these diverse people together (excluding the issues revolving around the founding myths mentioned above, which tended to facilitate sectionalism as North and South fought over which region should lead the nation as they both saw themselves as the true origin of America). These myths also helped to instill in the masses a sense of morality and virtue - something essential to a republic. Were they lies? Many times, yes; but they were noble lies.
Of course the myths surrounding the 4th of July are extremely important because it was those events surrounding the 4th that in many ways shaped an American character. The words expressed in the Declaration gave, and give, the nation a national creed - an ideology to embrace that would then unite Americans. The national creed, of course, is the belief that ALL men (and women) are created equal. It is the belief in this ideal that has allowed Americans to materialize - no matter what an individual's race, ethnicity, or religion may happen to be, one can become American through birth or naturalization, and the belief in that ideal.
Does this mean that the perpetuation of these lies should continue? Well, at one level, of course not. We as historians and scholars have a duty to uncover the "truth," or our understanding of "truth." As such, we should not teach these myths as unchallengeable truths. Rather, we as historians should explore the need for these noble lies and learn the morals that our forefathers believed to be so important. In this sense, the tour guide that told the authors of the above essay that even though she knew the story about Betsy Ross to be "phooey" but was still "a good story . . . and one worth telling" was correct - however she should share with her patrons the roots of that story, the lessons to be learned from that story, and the "facts" in that story that are inaccurate.
NYGuy - 7/6/2003
Hey Man,
Don't stop now, you are on a roll. But you have left out a few more juicy items to commemorate Independence Day.
How about the one that U.S. soldiers stood around while the Iraq Museum was looted. No, how about this one: I spoke to the Lt. but he said he had to go over and protect the Oil Ministry for Bush, Cheney, Rumsfeld and the other oil executives. But I want to share my favorite with you, hope you like it. There were these American soldiers, who were part of the Christian coalition, working with the Muslims, destroying some statues of antique idols while the rest of the soldiers looked on and applauded.
Yes, Sontag is good, but I hate to tell you there are others out there who are even better.
Well, let's hope for a quagmire. That will teach them.
A different Voice, you certainly lived up to your name.
I want to show this to my class. I am now teaching them that they must get to know their enemy.
Hugh Nash - 7/5/2003
I have read that the Liberty Bell was hauled up what is now the Pennsylvania Turnpike to Allentown for safekeeping when British General Howe's troops occupied Philadelphia about Oct 1, 1777. By that time the Bell must have had some prominence. Congress fled the city too.
A Different Voice - 7/3/2003
This is from her New Yorker piece after the Sept. 11 incidents. On this 4th of July, think about the 10,000 families that lost civilians in Iraq and the thousands of innocent, overmatched Iraqi soldiers that were decimated by the American killing machine and never got a chance to reach adulthood. I am ashamed to be an American and even worse is the shame that so many Americans are inured to the butchery that it conducts.
Backsight Forethought - 7/2/2003
Observing the quotation marks, I assume that this is a cut and paste of a Sontag article.
What, pray tell, should a "mature democracy" do in response to the trials in the Middle East? What would Sontag suggest is appropriate for Defense? What should be cut? It is fine to generally suggest cuts, but to do so more directly requires actual thought. Should we cut the P-3 Orions? Certainly debatable. I don't think it is a great idea. What is the alternative view? Sontag deals with "views". "Bromides" too. If the Bush Administration cut back P-3 checks on the coast, and something worse than "views" and "Bromide" occurred due to the lack of P-3 flights, I do not believe the subscriptionist would be charitable to the Administration. Indeed, it would be that Bush and Co. were in bed with (place your favourite bogeyman here), and not paying attention to the American borders.
This outlook is actually getting more tedious the longer it goes on. If the Republicans want to further their hold on Washington, D.C., they can do no better than forward these types of missives. They, and I, don't think they'll play in Peoria.
A Different Voice - 7/1/2003
—Susan Sontag, courageous author, intellectual and patriot
Mark Thornley - 6/30/2003
I read recently that this holiday is correctly called Independence Day, not the 4th of July. That makes sense to me, since we aren't celebrating the fourth day of July - we are celebrating the day English citizens declared themselves independent from their leader, King George of England.
I read the Declaration of Independence every year. Its significance is clear to me when I consider that the signers were not Americans at the time they signed, they were English citizens! Consider the act of bravery it would take today to sign a new declaration of independence from America...
KEVINKAL - 7/4/2001
So the Declaration of Independence wasn't written & signed on July Fourth. However, it is the date listed on the Document. It is the date the fathers chose to leave, so who are you to deem it a "Myth"? Should we have an "Independence Week" instead, or maybe an "Independence Decade" since the Revolutionary War lasted 8 years. With all the historical myths that are taught in public schools today, I'd think you could find a few that actually matter rather than attempting to de-legitimize this special day.
Pillarless: The appeal of an open coupe
With the advent of stricter safety regulations and a growing focus on bold, thick shapes in car design, the idea of the pillarless coupe has drifted into memories of an era better remembered for its muscle cars. But the design has aged much more gracefully than the simple, immediately-recognizable shapes of 1960s and ‘70s American iron, and it even survives today in a few obscure luxury cars.
But what defines these pillarless coupes? The idea at its most basic is simply to delete (or neglect to include) the B-pillar, or at least make it possible to lower that pillar into the body as the windows roll down. The result is a clean look, as if the car has been sculpted from a single sheet of metal.
Photo: BringATrailer.com
I can never choose what car is my personal favorite-looking example of this body style, but one contender (the other is mentioned later) may also be the most recognizable. As a predecessor to every modern BMW 6-series, the BMW E9’s clean profile helped create the smooth, fluid shape we recognize today as a grand tourer. The magic of the 2800CS and 3.0 models is a topic I plan on covering extensively in the future, but if anything can testify to the importance of the pillarless body style alone, it was the E9’s ability to look good no matter what the situation. Big-bumpered North American cars (usually a detriment to classic Bimmers) look excellent, as do the purer European models; racing versions, too, look smooth and clean, even with their form-follows-function widebody kits and accessories. Even as the lineage extended to include the more modern 6-series cars, a multitude of tricks were used to create the illusion of a missing B-pillar, but, to fans of the E9, few compare to the original, genuine car.
Photo: XJC.com.au
Many examples of the body style are even lesser known than the already-rare E9. The Jaguar XJC is a little-known British contribution to the design, and both the V12 and straight-six versions are fine specimens of a luxury-over-driving-involvement coupe. Based on the second-generation XJ sedan, the lengthy two-door is now an often unheard-of rarity, as just over 10,000 were produced during the three-year run. Still, like the E9 of that time period, the pillarless body style looked good on anything from a bumperless, one-color model to a two-tone, updated version.
Photo: Wikipedia
The exquisite XJC was an exception among Jaguar's low-slung coupes; for a longer run of examples, we must return to German manufacturers. Mercedes, with their naturally expansive body styles, seems the most apt marque to take advantage of the pillarless body style - as they ended up doing on many occasions. The two-door version of the stately W111 series lacked a B-pillar, as did the W126 years later. Early SL coupes carried the body style, as well. Even the 600 SEC in the ‘90s (notable for being the basis of one of AMG's craziest road cars) was pillarless. In fact, most of the Mercedes coupes from that decade were.
Today, the tradition continues with the CL-class, based on Mercedes' current largest non-Maybach sedan, the S-Class. Though the body mass of its flanks has narrowed the window line significantly, the clean body lines are just as significant a detail as they were in 1959.
Photo: Wikipedia
To end this snapshot of the genre (bear in mind that pillarless coupes tend to fall under history’s radar, so if you know of a particularly exceptional vehicle, let me know in the comments), we return to BMW - and to the 1990s. The 6-series may have taken a leave of absence between the E24 and the E63, but the spiritual link was the polarizing 8-series. Oddly, the 1989-1999 E31, like the E24 before it, could range from bland to spectacular. But also like that earlier car, it looked absolutely stunning with the right combination of trim pieces. The CSi-style lip (now common on used E31s) lowers the front on 850s and 840s to a height that perfectly complements the car’s flowing, GT shape. With the right wheels and the M73 V12 engine, the car is a striking (though expensive to maintain) sports car.
Strangely, pillarless coupes are often forgotten, residing in the shadows of their respective eras' greats. The early Mercedes and BMWs existed in the era of American muscle cars, and in the ‘90s, the attention of design critics gravitated towards exotic Lamborghinis, Ferraris, and Japanese manufacturers, leaving the GT coupe class in a low-selling pit. Today, the cars the public throws under the spotlight are still generally sports cars and exotics, not the formal Bentley Brooklands, Rolls-Royce Phantom Coupe, or Maybach Xenatec Coupe (these, and the others in that class, lack an obtrusive B-pillar). But whenever we see a car from this rare class, whatever era it might hail from, it remains a welcome addition to the endless flow of traditional car design.
And if you have a suggestion for another great vehicle with this body style, be sure to register and comment below. | <urn:uuid:37c730e2-f553-470d-a7cf-ecd62fa98c1e> | 2 | 1.523438 | 0.024982 | en | 0.954191 | http://hittingredline.com/content/pillarless-appeal-open-coupe |
Linguistically Mistaking Phrases
I've been back from my vacation and the Preserving Software summit at the Library of Congress for more than a week now, but still haven't blogged about anything, and recently I haven't blogged much at all, mostly because I always fear it takes up too much time. In the last few days, I decided I'll do shorter posts but do them more often, so I hopefully get to communicate more of what's going around in my head (thoughts on that summit will follow as well when I get around to them). Here's the first installment of this, let's see how it goes.
I just listened to a "Fireside Chat" (sorry, only available to Mozillians) with Brendan Eich, conducted by Pascal Finette. One thing that did strike me there was the use of two phrases, by each of them, and their chances of being mistaken from the point of view of English/German crossover.
Pascal, a native German (his accent gives that away as well), is using "a couple" (e.g. "of times", etc.) in many questions in this interview. Now, the interesting thing there is that in German, we're using "ein paar" (which literally translates to "a couple") a lot, usually meaning "an undetermined amount larger than one but smaller than 'a lot'". We are very tempted to use this the same way in English, as it comes very naturally to us - but in US English, I notice that "a couple" usually means "(more or less) exactly two", so when we mean "some number of times, probably between 4 and 7", we may end up saying "a couple times" and the US English native speaker understands "twice". Oops. We would have done better to say "a few times". I learned this in detail when I requested to stay "a couple weeks" in the office around a work week and thought there would be later discussion of how many weeks exactly, when the other side was "OK, he wants two weeks, he'll get two weeks". Note that in German there is "ein Paar" (different capitalization) which means the same as "a couple", but in most cases we just say "zwei"/"two" so it can't be mistaken.
On the other side, Brendan starts the reply to some questions with "that's a good question" - which, as I learned over the years, is a usual phrase to compliment the person the question came from and say that this is an important issue to ask and talk about. Now, in German, this literally translates to "das ist eine gute Frage" - which we usually say when we recognize that it's an interesting question but we still need to think about this and don't have any really fitting answer, often coming up with one as we go on this. If you're a native German speaker, be aware that English speakers don't usually have that connotation to this phrase, actually they're often happy someone asked this because it's something they have thought about long and hard and have come up with a really good answer for already. If you're not a German speaker, be aware that those who are might understand it this way and be surprised or take your answer as weaker instead of stronger as you intended.
I'm sure there are tons of other misunderstandings between phrases in different languages. I'm mentioning those two because I heard them in this "chat", because these are the two languages I know quite well, and because they're even in the same language family (in linguistics called "Germanic languages") - and we still run into things like that.
I'm always interested about such nuances, if you have any to share, feel free to comment here or blog about them yourself, here in this global Mozilla community, it's always nice to learn from each other! :)
Entry written by KaiRo and posted on June 12th, 2013 17:41 | Tags: English, German, languages, Mozilla | 8 comments | TrackBack
Perhaps "several" suits?
I'm British and I generally accept "a couple" being an indeterminate number between 2 and 5 usually. Though when someone is talking about units of alcohol I tend to assume a higher number!
I believe the word "several" is more what you are looking for. It is more suited to describing indeterminate numbers ranging from 2 upwards. I personally would consider it to mean between 2 and 10.
2013-06-12 18:31
Stephan Sokolow
from Canada
Something to keep in mind regarding English...
Never forget that English isn't your typical Germanic language by a long shot.
We had a large portion of our lexicon replaced with French words during the Norman conquest of 1066 and, while our core grammar is relatively intact, that did have a big effect on how we use the language.
2013-06-12 19:16
from Glasgow, UK
I'm also British and I'd accept (and say!) a couple to mean more than one but fewer than lots. Maybe a US/UK difference?
2013-06-12 20:09
Mysterious Andy
As a native speaker of en-us (Southern California), unless you are referring to people using the phrase "They are a couple," I take "couple" to mean at least 2, possibly more. Where "a couple of X" becomes "several X" depends on the nature of X and the speaker. "A couple of elephants" is probably fewer than 6, while "a couple of grains of sand" could be a few dozen.
"That is a good question" can also match your expectation. In my experience, modification that expresses surprise ("Wow, that's a good question.") clarifies that the respondent didn't previously consider the question. An immediate launch into an answer usually means the phrase was intended as the equivalent of "Thanks for bringing that up."
2013-06-12 20:34
from US
Agree with others here
My understanding and experience in US culture is that the way we use 'couple' corresponds to the German usage you described.
I googled and found the definition: an indefinite small number
As for "That's a good question."
When someone asks me a question and I think it was good, I'll say so. I may or may not have the answer. There's a different connotation in either case of course ("That's a good question. I don't know." or "I appreciate your understanding/insight in asking.")
It is also used frequently in US politics when answering questions. Start with flattery, hope they miss the fact that you don't actually give an answer. It's very general and can be taken in more than one way.
"That's a good question and I'm glad you asked it. Somebody's got to ask it and there you go being awesome and asking it. That reminds me of <insert unrelated story>..."
2013-06-12 22:21
Anonymous guest
Reminds me of the episode "Big Brother" of the BBC series Yes, Minister:
Jim: You know, I'm glad you asked that question.
Bob: Well Minister could we have the answer?
Jim: Well yes, of course, I was just about to give it to you, if I may. Yes as I said I'm glad you asked me that question because it's a question that a lot of people are asking, and quite so, because a lot of people want to know the answer to it. And let's be quite clear about this without beating about the bush the plain fact of the matter is that it is a very important question indeed and people have a right to know.
Bob: Minister, we haven't yet had the answer.
Jim: I'm sorry, what was the question?
2013-06-12 22:50
Relevant xkcd:
2013-06-12 23:02
Interesting to read those statements on "a couple". My own experience has been that most of the time it's used for "two", as I described in the case of those "couple weeks", or when watching e.g. NFL Network, in terms of "he needs a couple yards" or "he's been with the team for a couple of years", where it always was meant as "two" (plus/minus less than a half) - and the xkcd comic supports that as well. Apparently the usage expands from that up to the German usage. Thanks for the comments!
2013-06-13 00:26
Shuriken - throwing blades of the ninja. Translates as basically Shu (hand), Ri (released), Ken (blade): "hand-released blade." Shuriken don't have *sides* as such, but have points. Certain shuriken were particular to a tradition of Ninpo; e.g. Togakure Ryu used the classic 4-pointed shuriken that appeared to be nearly square. There are only roughly 3 or 4 true designs of flat shuriken; commerciality has created many more. There are also bo shuriken, which are practical iron darts. Each tradition has a specific specification; e.g. one style would have 15 cm bo shuriken, another tradition would have 18 cm. Contrary to popular belief, shuriken were not coated in poison. There was no need - poison was dangerous to carry on sharp objects and difficult to acquire, and usually did not work quickly. Rust was the most dangerous aspect of a shuriken: as throwing objects they were intended to distract and slow down pursuers as a ninja escaped, and would not fatally wound an enemy, but could penetrate. If rust got into the cut, it could be lethal. It is ironic that you can buy shuriken through postal order these days - they used to be among feudal ninja families' most closely guarded secrets.
Hey look, i found a Shuriken!
I got Shuriken and i aint' afraid to use 'em.
Ph34r my l33t shruiken of uNf47|-|omalbal d00m!
We practise Shuriken-jutsu at Honbu dojo every Sunday after lunch.
Submitted by Ninpo-Bugei, July 28, 2006
Top Definition
Any sharp throwing weapon used in the martial art of Ninjutsu. Including things such as kunai (throwing knives, darts, needles) and shaken ("ninja stars", which by the way don't have to have 4 sides; I have one with 6).
In the darkness of the shadows, the assassin unsheathed several shuriken to rain down upon his target.
Submitted by Killa65, December 3, 2003
Ninja Throwing Stars.
"I smuggled sum Shuriken in from Japan."
Submitted by Diego, August 30, 2003
An affirmation of ability.
Lisa: Wendy, you can't fit another person in this car, can you?
Wendy: Shuriken.
Submitted by Anonymous, November 12, 2002
A four-pointed, star-shaped object used by ninja.
"The armoury included guns, knives, and a selection of shuriken."
Submitted by Mad Walrus, August 6, 2002
a small throwing weapon
blah blah blah
Submitted by blah, December 9, 2003
Post Hardcore/Emo/Punk band from Scunthorpe, UK. They are ace.
Person1: You heard the new shuriken EP?
Person2: Yeah, evacuate/disintegrate rocked.
Submitted by Lewis, February 6, 2005
Wednesday, April 02, 2008
Can I Get Meds For This?
Internet addiction is a psychiatric disorder - health - 02 April 2008 - New Scientist
1. I'm sure you could get meds for it. But you'd probably have to order them online, which seems counterproductive.
2. I'm calling Fark on this article. What they're basically telling us is that people who spend most of their time gaming online are social misfits. Does this really come as a surprise to anyone?
3. Agreed.
Not to mention the fact that if one is persistently in touch and interacting with others on-line, that should constitute actual social relationships. Thus, it should come as no surprise that the sudden loss of access to friends and social activities would cause anger and depression.
If you locked a traditionally socially active person in their house and said they couldn't leave or call or maybe even watch television, then I'm sure anger and depression would be noticed.
Of course, for them it would be normal reactions and not mental illness. | <urn:uuid:c7eca62e-a2d1-426f-909a-c873327218ef> | 2 | 2.21875 | 0.270274 | en | 0.975213 | http://infernaldesiremachines.blogspot.com/2008/04/can-i-get-meds-for-this.html |
Saturday, May 26, 2007
A question to consider
What if you didn't live in a hyper-aggressive world focused on the next best thing? What if you didn't spend every second of every day fighting off the constant attack of messages and changes?
If that wasn't your life, then think about how you might answer this question:
"What would be innovative to you without any context or understanding of the word innovation?"
Saturday, May 05, 2007
Defining Innovation
One of the biggest challenges in a world where innovation is used to describe almost everything is to get beyond the clutter to a clear understanding of the word. One of the key publications talking about innovation is Fast Company, and in one of their blogs they are talking about just this topic.
The concept is that innovation must lead to something "good" - that is defined loosely :). The idea is that if it is not good for the environment, if it doesn't inspire and move society forward, then it really isn't innovation, even if it is. Did you get that???
So can we moralize innovation or is it just what it is?
Check out the post and tell us what you think.
Tuesday, May 01, 2007
Podcasting Prayer
A new innovation in sharing prayer time just launched in April. One Way Ministries launched a "prayercast." It is a mix of music and prayer that you can subscribe to and join in times of prayer.
Definitely an innovative application of the podcasting concept.
Win with Lottery ‘Number Gaps’
by GuestAuthor on October 14, 2006
Picking 6 numbers from 49 numbers is HARD. How would you like to reduce that down to just 30 numbers when you play? Well now you can. Here's how. The method is based on the statistical observation that usually 3 or 4 of the 6 numbers drawn will have a gap between them of no more than 1 to 5 digits. In other words, the draw will be 'clustered' in a relatively small area of the spectrum of 49 numbers. For example, look at this actual draw.
The 6 balls drawn from 49 were: 26,31,35,43,48,49.
Lets look at the gap between the numbers.
26 to 31 = gap is 5 numbers
31 to 35 = gap is 4 numbers
35 to 43 = gap is 8 numbers (outside the 1-5 range)
43 to 48 = gap is 5 numbers
48 to 49 = gap is 1 number
If we examine almost any selection of 50 draws we find that about 20 draws had 4 numbers with a gap size between 1 and 5. Maybe 21 draws had 3 numbers of the 1-5 gap size. Perhaps 3 of the draws had 5 numbers with a 1-5 gap size. Only a single draw will have a 1-5 gap size. The next most common gap size encountered is 6-10, which appears about 50% less than the 1-5 group. There are also gap sizes of 11-15, 16-20, 21-25, 26-30, 31-35, but only rarely do we find gaps above this.
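The gap tallies described above are easy to check against a list of past draws yourself. Here is a minimal Python sketch (my own illustration, not from the article; the function names are invented) that computes the gaps within one draw and tallies, across many draws, how many gaps in each draw fall in the 1-5 range:

```python
from collections import Counter

def gap_sizes(draw):
    """Gaps between consecutive numbers of a draw, after sorting."""
    nums = sorted(draw)
    return [b - a for a, b in zip(nums, nums[1:])]

def count_small_gaps(draw, lo=1, hi=5):
    """How many gaps in a draw fall within the lo..hi range."""
    return sum(lo <= g <= hi for g in gap_sizes(draw))

def gap_distribution(draws, lo=1, hi=5):
    """Tally past draws by how many of their gaps fall in lo..hi."""
    return Counter(count_small_gaps(d, lo, hi) for d in draws)

# The article's example draw: 26, 31, 35, 43, 48, 49
example = [26, 31, 35, 43, 48, 49]
print(gap_sizes(example))         # [5, 4, 8, 5, 1]
print(count_small_gaps(example))  # 4 -> four gaps of size 1-5
```

Feeding `gap_distribution` a history of 50 real draws lets you verify whether most of them really have 3 or 4 gaps in the 1-5 range, as the article claims.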
So how do you use this information? Simple. Pick the area you think the ‘cluster’ will occur this week. If it was heavily low numbered last week, for example, you may want to try high numbers this week. Choose your first number using your usual method, and make it fairly low. You then pick a second number that is only a few digits away, as this is statistically where the likely grouping will be. Repeat for the other numbers. What you end up with, is a cluster of numbers, of course. What you now have to do is ’spread em out a bit’ so you have a couple of ‘outliers’. If you think this weeks draw will be clustered at the high end of the numbers, and your picks reflect that, drop one or two of them and replace with numbers from the other end of the spectrum.
As detailed above, you can expect 3 or 4 numbers per draw to have a gap
size of 1-5. If you have clustered in the right area, and have positioned your outliers correctly, you can significantly increase the chances of a 4 ball or higher strike.
About the Author
S Potter plays the UK lottery weekly, and writes articles for the free site aimed at helping you win that lotto!
Winning Gambling Articles | <urn:uuid:2d637aef-7bec-4d79-881e-5b280273e70d> | 2 | 2.203125 | 0.133812 | en | 0.951439 | http://insidersystem.com/free-info/Win-with-Lottery-Number-Gaps.html |
Mona Lisa Recreated
New Mona Lisa
What is it about Leonardo da Vinci’s Mona Lisa that makes people want to create her image over and over again? Why is it so intriguing? Maybe it’s that enigmatic smile she wears or the facial hair she lacks, such as the missing eyebrows.
There is also the unanswered question of who Mona Lisa really is. Some believe Mona Lisa is really a self portrait of Leonardo da Vinci. Then there is the fascinating theft of the painting in 1911 and of course the artistic talent of the painter himself? Along with this fascination comes the spoof that usually accompanies something with fame: Mona Lisa with a mustache, Mona Lisa with moving lips and teeth, Mona Lisa as Medusa, Mona Lisa as Monica Lewinsky, Mona Lisa with southern braids and even Mona Lisa with a spaceship above her head and a giraffe on her chest.
The recreations are endless, but there is one imitation that tops all the rest and it isn't a parody at all. Instead it is innovative and environmentally friendly all at the same time. How? It has to do with motherboards. Asustek, a Taiwanese electronics maker, took an opportunity to recreate the Mona Lisa painting using hundreds of obsolete motherboards. Supposedly, the further away you get from the painting, the clearer the image of Mona Lisa becomes. Does it? You can see it for yourself on display at the 9th China Beijing International High-tech Expo.
What kind of art can you make using old computer parts? See Green Product
Gloria Campos-Hensley
Oct 17, 2006
by Anonymous (not verified)
Funny women this one is I love it
October 2000
John Edward Hasse, editor
Jazz: The First Century
John Edward Hasse assembled capable writers, including Michael Brooks, John Litweiler and Kevin Whitehead, to contribute chapters to this coffee-table book, and numerous others to write sidebars. The principal writers involved come off pretty well, and no overview this general can be very enlightening to those of us who have read many of the same books as the authors, but I do think there’s a lot of room for improvement in Jazz: The First Century.
Two basic problems keep cropping up. One is the modern post-TV, post-Internet format that presents everything in cute little boxes, literally and figuratively. The artsy (I guess) layout of the photos and colored screens on half the pages is merely distracting, but the USA Today-style graphics are obnoxious, and the sidebars are too short to be satisfying, though not too short for some tedious ax-grinding. The main articles are laid out, like old schoolbooks, with bold-faced headings, which the text illuminates. Thus, we are told early on in big, bold letters of "New Orleans' Six Jazz-Creating Conditions", half of which existed in not just the Big Easy but any large African-American community, while the others are linked with the evolution of jazz only by implication (and why six, anyway?). I doubt Hasse would have come up with this sort of thing were he not committed to the clumsy format.
The other underlying problem is more pervasive, and that is the tendency of the writers to confuse what’s been written about jazz with its actual history. They often just rehash what they’ve read elsewhere, and where nothing has been written already, little imagination is shown—a problem especially evident in the first chapter. While the African influences are dealt with fairly accurately and the difficult subject of the minstrels is handled deftly, scant attention is paid to the crucial development of specifically African-American folksong, a process that was plainly influenced by English and Scottish traditional music. There are no African traditional songs that sound like spirituals; these developed, presumably, because early African-Americans digested the basics of the strongly melodic Anglo-Scottish folksong they heard around them (much as early white fiddlers incorporated syncopated phrasing). It would also seem apparent that what W.C. Handy called the “groping racial sense of harmony” was influenced by white traditions like shape-note singing more than by classical music. These subjects have been dealt with more by gospel and folk music writers than jazz historians, who generally continue to copy each other’s weak licks.
Hasse is aware of the article by Joshua Berrett that details Louis Armstrong's surprising debt to opera, but because no jazz writer ever asked Louis anything specific about a much more obvious influence, it doesn't get singled out here. That would be Louis' early membership in vocal quartets, an experience he shared with many other young New Orleans musicians. Wouldn't it seem that learning music by part singing along the lines of early black quartets would have more bearing on the evolution of the free-wheeling New Orleans front line than ideas for coloration Louis learned from overtures, or than the presumed Caribbean influences? Not until someone writes a book about it, evidently.
Generally, this book is good at contemporary conventional wisdom. But where that lags, there are problems. For instance, don’t look here for evidence of Bud Powell’s true stature. We are told that around 1940 Monk was the pianist in the bop vanguard. Yes, he had the gig at Minton’s, but listen to Monk’s anonymous playing on the Joe Gordon tapes and then to the ’44 recordings of both him and Powell. They reveal no basis to assume that one was ahead of the other. Powell is also identified by the usual “man who adopted Parker’s approach to the keyboard” tag, despite testimony from Max Roach, Kenny Clarke and others that that ain’t the way it happened.
Another distraction is the lamentable attempt to apply affirmative action retrospectively. Ann Kuebler describes Mary Lou Williams as a “mentor” to Parker and Powell, and states that there were other talented females who might have made a major impact on modern jazz but stayed at home because they were uncomfortable with the drug habits of the Minton’s/ Monroe’s coterie. The fact that the critics who frequented that scene were all men is mentioned, as if they were somehow complicit in ignoring unknown “genii” who weren’t even there and whose identities we are never told. Female jazz instrumentalists do have an even harder row to hoe than their male counterparts, but what is Kuebler trying to accomplish with this? Can’t we just talk about what actually happened? Along the way, producer Moe Asch is praised for recording Mary Lou. The fact that the gallant Asch almost never paid royalties owed to artists isn’t mentioned—at least he didn’t discriminate, presumably.
Granted, "The 100 Essential Jazz Albums" listing near the end is a good general guide for neophytes, but it still features some weird choices: '50s, not '30s Eldridge? There are five Miles Davis titles comprising 11 discs and none feature the classic quintet? Mary Lou Williams, but not Herbie Nichols, Randy Weston, Andrew Hill or Teddy Wilson? Are the people who are falling all over each other in the rush to glorify Williams actually listening to her records? There's a difference between good and great, after all. The additional "More Recordings" listing is downright embarrassing. Three Quincy Jones titles are given but none by Hank Mobley—need I continue?
The last chapter, “Late Century Tradition and Innovation,” is a depressing exercise. The social underpinnings that nourished jazz have vanished, and while many great performers are still active and there will always be room for adventurous younger spirits in the greater bohemian community, the music of suburbanite music-school grads is never going to be what the music we know as jazz was. The newer names are often the product of corporate hype that drives a market largely ignorant of the music’s history, just a small-scale version of the hype that induces the kids to buy the newest poster-boy or -girl. Hell, even rock writers are hip to that rather important angle, but no mention is made of it here.
Then again, I don’t suppose a book projected as Jazz: The Whole 75 Years would interest publishers. And despite the shortcomings noted above, the present volume might help some younger readers find their way back to the real thing.
Saturday, January 2, 2010
Dennis Bray and Hans von Storch: Projections and Predictions
The survey "CLISCI2008" among climate scientists has been used to examine the terminology concerning two key concepts in climate science, namely, predictions and projections, as used among climate scientists.
Established guidelines (e.g., by the Intergovernmental Panel on Climate Change) define predictions as probable developments, and projections as possible developments. The survey data suggest that this terminology is not adopted, or only loosely adopted, by a significant minority of scientists - approximately 29% of the respondents falsely associate probable developments (i.e., predictions) with projections, and approximately 20% of the respondents falsely associate possible developments (i.e., projections) with predictions.
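The arithmetic behind such shares is simple tabulation. A minimal sketch — the response data below are hypothetical, constructed only to reproduce percentages of the same order as those reported, and are not taken from the CliSci2008 data:

```python
from collections import Counter

def misassociation_rates(responses):
    """Share of respondents attaching the 'wrong' IPCC definition to each term.

    `responses` is a list of (term, chosen_definition) pairs; e.g.
    ("projection", "probable") records a respondent who associated the term
    'projection' with 'probable developments' - the IPCC meaning of a
    prediction, hence a misassociation.
    """
    counts = Counter(responses)
    n_proj = sum(v for (term, _), v in counts.items() if term == "projection")
    n_pred = sum(v for (term, _), v in counts.items() if term == "prediction")
    return {
        "projection_as_probable": counts[("projection", "probable")] / n_proj,
        "prediction_as_possible": counts[("prediction", "possible")] / n_pred,
    }

# Hypothetical responses constructed to mirror the reported shares
data = ([("projection", "probable")] * 29 + [("projection", "possible")] * 71
        + [("prediction", "possible")] * 20 + [("prediction", "probable")] * 80)
rates = misassociation_rates(data)
```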
The CliSci2008 survey has been mentioned before on this weblog.
The full article has been published here:
Bray, D., and H. von Storch, 2009: 'Prediction' or 'Projection'? The nomenclature of climate science. Science Communication 30, 534-543, doi:10.1177/1075547009333698
(If you do not have access to the journal, ask Hans von Storch for a copy)
Hans Erren said...
IMHO it's time to leave the politically based storylines and make some falsifiable predictions for the next decade.
P Gosselin said...
Projection, prediction and scenario are often confused and used interchangeably by the media.
One has to look at the language in order to distinguish between prediction and projection. The use of words like: could, might, may, etc.
to me indicate projections.
Climate predictions in my view are not possible. There exist too many factors that are not understood. Nobody can predict the climate in 50 years time. You can guess - that's all.
Anyone who claims he can is surely a charlatan.
P Gosselin said...
Prof. Latif is an excellent scientist. But he also has had some difficulties with projections and predictions:
Of course a good scientist adjusts his hypothesis as new information is learned. I think this is maybe what Prof. Latif has done. I think a lot of care has to be exercised when assigning certainty. Some scientists have been irresponsibly certain of their "projections".
Hans von Storch said...
Hans Erren, what does "IMHO" mean?
P. Gosselin: "Predictions", in the sense of "probable developments", are possible if the forcing factors are predictable, and/or if the initial state is the most relevant parameter for the future development.
The latter is usually the case only for a limited time, before the effect of external factors kicks in. People nowadays hope that knowledge about the state of the ocean (including its deep part) may do so for one or two decades. (That is what Keenlyside and Latif use.)
The former is mostly the case in the form of persistence or something similarly simple (forcing remains as it is, or the trend of forcing change persists). In the case of greenhouse gas (GHG) forcing, an assumption of that sort can be used also for the next decade or so.
In that case we are talking about real predictions, as opposed to estimates based on "story-lines" about future GHG-emissions (scenarios or projections; "possible developments"). Of course, nobody knows at this time, how good these predictions are; many people study this problem - and quite a few have reservations. The projections take the technical form of conditional predictions, i.e, predictions which depend on the (unknown) validity of the assumed forcing.
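The distinction can be made concrete with a toy model. In the sketch below, all numbers are invented for illustration (this is not a climate model): a 'prediction' runs the model under the one forcing path assumed probable, while 'projections' repeat the same conditional computation over story-line scenarios, none of which carries a probability.

```python
def toy_response(forcing_path, sensitivity=0.5):
    """Cumulative temperature change for a given forcing path (toy linear model)."""
    temp, series = 0.0, []
    for forcing in forcing_path:
        temp += sensitivity * forcing
        series.append(temp)
    return series

# A 'prediction': run the model under the one forcing path believed probable
# for the near term (e.g. persistence of the current trend).
near_term_forcing = [0.04] * 10
prediction = toy_response(near_term_forcing)

# 'Projections': the same conditional computation, once per story-line
# scenario, with no probability attached to any of them.
scenarios = {
    "low":  [0.02] * 10,
    "mid":  [0.04] * 10,
    "high": [0.06] * 10,
}
projections = {name: toy_response(path) for name, path in scenarios.items()}
```

A projection run under the forcing that actually occurs coincides with a prediction; the difference lies entirely in whether the forcing assumption is given a likelihood.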
Thus, technically, the terms are well defined, and make sense. The question is how they are used, and which impact they have, in the public.
I noticed that in the UK the term "predictions" is often used not only by the media and policymakers but also by scientists, when they actually refer to projections. I wonder why that is so. Has it something to do with different uses of the language called "English" in the US and in the UK?
P Gosselin said...
1. A prediction is something you can make if you know the present conditions and the future behaviour of all factors and conditions.
2. A projection is simply an extrapolation of the known present conditions. It ignores possible future impacts from external factors. It assumes that there won't be any, or that they will be negligible.
3. A scenario is only one variation from an infinite number of future possibilities.
4. In the English language, I think the media are not really interested in reporting this topic truthfully. They are purposely being inaccurate with the objective of misleading the reader.
P Gosselin said...
Climate science has had the habit of making projections and scenarios, and selling them to the public as predictions. This is in my view a subtle type of fraud. There is no way they can predict with sufficient certainty what the sun and oceans will do. They don't have anywhere near enough information about them. Those who claim otherwise are either unqualified or dishonest.
Hans von Storch said...
Of course, we can define terms in different ways. But in the IPCC terminology, as described by Baede, it is:
- “A projection is a potential future evolution of a quantity or set of quantities”
- “A climate prediction or climate forecast is the result of an attempt to produce an estimate of the actual evolution of the climate in the future, for example, at seasonal, interannual or long-term time scales.”
In Giorgi (2005) one finds:
"Essentially, a projection of climate change differs from a prediction in that a scenario of future emissions is assumed without giving it any specific likelihood of occurrence. A projection thus tells us what the climate response would be when assuming a future forcing scenario."
Thus, projections and scenarios are the same.
Baede, A. P. M. (Ed.). (n.d.). IPCC Annex I: Glossary. Retrieved February 9, 2009, from
Giorgi, F. (2005). Climate change prediction. Climatic Change, 73, 239-265.
Hans von Storch said...
Agree, I find the usage of "predictions" instead of "projections" confusing and counterproductive. I wonder if it is just sloppy language or done on purpose.
P Gosselin said...
Sloppy language? The people in this business are quite well educated, many are at the top of their fields and are very familiar with such nuances.
It's strange how many include a good dose of emotion when delivering their packaged projections to the public.
plazamoyua said...
Good to know thank you.
Maybe the IPCC should emphasize that projected temperatures are just possible temperatures, and possible only to the extent that climate science has it right.
Or they could just say "possible temperatures", instead of "projected temperatures". But then, the scare could be not so effective ... I wonder why.
Reiner Grundmann said...
A quick search in our US media Corpus (1981-2007) shows that PREDICTION* occurs more than twice as often as PROJECTION* (ca. 9,000 v 4,000 times). Bear in mind also that there are Climate Prediction Centers (NOAA, Hadley). Undoubtedly, prediction is the preferred term suggesting the ability to foresee.
What also appears in early media reports is the hope and the promise that with increased research funding the ability to predict will increase.
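A wildcard corpus count of this kind can be sketched in a few lines. The tiny example corpus below is invented, and a real study would of course control for context and duplicates:

```python
import re

def count_stems(text, stems=("prediction", "projection")):
    """Count wildcard matches like PREDICTION* / PROJECTION*, case-insensitively."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {stem: sum(tok.startswith(stem) for tok in tokens) for stem in stems}

corpus = ("Climate predictions dominate headlines, but each projection "
          "is conditional; the prediction centers issue new projections.")
counts = count_stems(corpus)
```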
plazamoyua said...
I know we are not talking about forecasting, but I also think most of the non-scientific public sees a projection as a sort of forecast. And not without reason:
Projection: The act of scheming or planning; also, that which is planned [Webster, 1913]
Or also, if you are drawing, you are representing something on a perspective plane [same source].
And then, here comes the IPCC, and projection doesn't have its usual link with reality. This paper may be of some help to understand the IPCC's wording:
Global Warming: Forecasts by scientists vs scientific forecast
Kesten C. Green and J. Scott Armstrong
Hans von Storch said...
11, 12 - It is a common problem when dealing with language that it is understood differently in different quarters. Therefore I listed the IPCC definition as given by the IPCC Glossary. Approaching an issue with one's own terminology often leads to the disaster of failed communication.
Reiner - clearly the problem is that in the public, also because of sloppy or otherwise inadequate communication by scientists, the term "prediction" is the one heard. Therefore you find it more often in the media corpus. On the other hand, most climate scientists - see the CLISCI2008 survey - understand the terminology as implied by the IPCC wording.
That NCEP has its name with a P = prediction may originally be related to attempts at predicting the coming year's El Nino conditions, an effort which commanded lots of attention in the 1980s and 1990s. Hadley - hm, that is a problem. I think they use the term systematically not in the IPCC meaning, but as if prediction = projection. That's why I was wondering if it is a problem of British English (Brits have a problem in speaking scientific English).
The issue is getting even more complicated now, when real forecasts / predictions of the statistics of weather (climate) are attempted for the next one or two decades aka Keenlyside & Latif.
The Green & Armstrong paper is not very helpful, as they seem not to have tried to disentangle the different concepts adequately. I would wish that such researchers would communicate with people from the field before embarking on such an effort, just to clarify words and concepts.
Or the reviewers should have helped.
Such linguistic problems - and that's what they are - cause lots of excitement between sceptics and main-streamers, but they could easily be avoided if we could agree on a common language. The IPCC has tried to come up with useful terminology, so let's use it.
Reiner Grundmann said...
Hans - why do you give a charitable interpretation of NOAA's label but not Hadley's? I cannot see the difference. Apart from that, prediction will always win in terms of popularity (also among scientists) because it allows one to promise more certain knowledge in the future. This is what politicians want; they want ABOVE ALL probable scenarios, not just possible scenarios. And scientists collude. It is therefore not a solution to blame the media.
Hans von Storch said...
Reiner, I did not want to blame the media (we have the media we deserve); the word "prediction" arrived in the public arena because some scientists brought it there, sure. But within the scientific community the terminology is mostly used as suggested by the IPCC - as demonstrated by CLISCI2008.
Hans Erren said...
IMHO: In my humble opinion
other forum abbreviations:
We need probabilities attached to the SRES scenarios. E.g., according to Lutz et al., the A2 scenario is unlikely, as:
"There is a 60 per cent probability that the world's population will not exceed 10 billion people before 2100"
Wolfgang Lutz, Warren Sanderson and Sergei Scherbov, The end of world population growth, Nature 412, 543-545(2 August 2001) doi:10.1038/35087589
hans von storch said...
I would not know how to attach probabilities to the scenarios - and what would they mean? A probability is the frequency of outcomes of a certain random experiment. What would the random experiment be here? Or would it be a purely subjective probability? What would be the utility of such a probability?
Or, put differently: what is the utility of a scenario without a probability?
Should we open a new thread to discuss these issues?
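One way to make the question concrete is to read a scenario probability as a frequency over an ensemble of simulated futures, in the spirit of Lutz et al.'s probabilistic population projections. The toy Monte Carlo below is purely illustrative: the growth model and every parameter are invented, not taken from their paper.

```python
import random

def population_2100(rng):
    """One random future: a toy growth model with uncertain parameters.

    Growth starts near 1.2%/year and slows by an uncertain amount each
    year; population decline is capped at 1%/year.
    """
    pop = 6.1e9                               # rough world population, 2001
    rate = rng.gauss(0.012, 0.004)            # uncertain initial growth rate
    slowdown = rng.uniform(0.00005, 0.0002)   # uncertain yearly slowdown
    for _ in range(99):                       # 2001 -> 2100
        pop *= 1.0 + max(rate, -0.01)
        rate -= slowdown
    return pop

# The 'probability' of a scenario is then just a frequency in the ensemble
rng = random.Random(42)
runs = [population_2100(rng) for _ in range(10_000)]
p_below_10bn = sum(p <= 10e9 for p in runs) / len(runs)
```

The random experiment here is the draw of uncertain model parameters; whether such a subjective ensemble answers Hans's question about the meaning and utility of the probability is exactly the point under debate.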
TCO said...
I've actually noodled about this, Hans. I'm sure there is some answer in terms of Bayesian stats (which I have never studied). But I think the probability should represent your INDIFFERENCE odds for betting. IOW, if you are required to give a probability, knowing that your counterparty may decide to take EITHER side of the bet (and assume a lot of money on the line, no hedging or help on the side), at what probability are you indifferent? It's like asking one child to cut the cake in two pieces and allowing the other to pick.
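The indifference-odds idea has a standard algebraic form: the probability at which a bet has zero expected value. A small sketch:

```python
def implied_probability(stake, win_amount):
    """Probability at which a bet is exactly fair (zero expected value).

    Risking `stake` to win `win_amount` is fair when
    p * win_amount == (1 - p) * stake, i.e. p = stake / (stake + win_amount).
    """
    return stake / (stake + win_amount)

# Indifferent to risking 60 to win 40 => you implicitly assign p = 0.6
p = implied_probability(60, 40)
```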
MikeR said...
Off topic: Drs. Bray and von Storch, I'm currently involved in a discussion on wikipedia
concerning including your survey on the page. Are you planning to peer-review and publish it? I think that it would then be the most important survey on the page, whereas now I'm somewhat hard-pressed to get it to show up there at all.
Ryan said...
Here are two questions that I would like to see in some form on a future survey:
1. How important is prediction to climate science?
2. Does climate science need to predict in order to be useful?
My own sense is that the majority would say, respectively, "very important," and "definitely yes."
We could then have a very interesting discussion about the many consequences of this mindset. For climate science itself, for climate science policy, and for those who look to climate science for actual help in dealing with climate change.
Roger Pielke, Jr. said...
Here is Michael Mann on the difference between predictions and projections as related to his book summarizing the IPCC, titled "Dire Predictions":
When science meets politics, correct becomes "correct."
TCO said...
I don't have an issue with Mike using Prediction in his title. For a popular science book, it makes sense. And he does at least show he knows the difference.
Now, he does other bad stuff, sure. But let's not just assume any time he is involved, he must be wrong.
For instance, that letter to the editor was DEAD ON in pinning Michaels down for being a professional op ed writer...not a real publishing scientist...and counterposing Christy as someone who writes real papers.
P Gosselin said...
Climate scientists, IPCC and climate media have redefined many terms. Here is an abridged climate science dictionary for beginners:
1. science (noun) the art of changing, hiding or deleting data in order to confirm or support a desired outcome.
2. trick (noun) a clever scientific solution for hiding an inconvenient reality.
3. crap (noun) any information, data or studies that contradict the AGW hypothesis and are inconvenient to real climate scientists.
4. denialist (noun) a pseudo-scientist, or any person, who has the gall to doubt to any degree the AGW hypothesis.
5. prediction (noun) a forecast of catastrophic events projected by manmade computer models and claimed to be likely or very likely, even if they are not.
6. Medieval Warm Period (noun) a fabled period of warm weather claimed by denialists to have occurred in the northern hemisphere from 1000 to 1200 A.D., but which did not really happen.
7. mitigation (noun) the act by governments of regulating nature or God.
8. real climate scientist (noun) a highly trained person working in the field of climatology who actively practices tricks and science (see above).
9. flat-earther (noun) any person who doubts the hypothesis of manmade catastrophic GW to any degree. Synonym: denialist.
10. democracy (noun) an undesirable system of government that obstructs world leaders from mitigating impending, model-projected manmade climate disasters.
11. climate change (noun) a recent phenomenon that began in the 19th century, which had never existed before and entails changes in longer term weather patterns that are solely caused by human activity.
12. consensus (noun) a system in climate science used for proving a hypothesis. If a certain circle of politically correct scientists agrees on a hypothesis, then it is considered to be settled. Neither data nor new discoveries can nullify science by consensus.
13. settled (adjective) irrevocably agreed on, through the assertion of authority and claim to consensus.
14. temperature decline (noun) a short-term decline in global temperature, unforeseen by models and due to natural variations. Synonym: weather.
15. temperature increase (noun) manmade global warming predicted by models. Synonym: global warming.
16. natural variation (noun) an explanation for unexpected temperature declines not foreseen by models.
17. sun (noun) a star near the planet Earth that has the unique property of a steady, unchanging radiative output, with a fixed solar constant and benign constant behaviour.
18. peer review (noun) the process of rigorously checking over and approving a paper that supports the AGW hypothesis, and done so only by scientists who agree with the paper.
19. rigorous (adjective) having the quality of ignoring faults in AGW papers and putting such papers on the fast track to publication in prestigious scientific journals.
20. prestigious scientific journal (noun) any journal that agrees with the AGW hypothesis and applies rigorous peer-review, yet does not require access to raw data and codes as conditions for publication.
21. FOIA (noun) Freedom of Information Act. A law that greatly inconveniences busy public scientists by requiring them to disclose their data and codes to the public and to denialists.
22. climate criminal (noun) any sceptic, denialist or flat-earther who promotes and supports democracy.
Once you learn these basic terms, understanding climate science becomes a whole lot easier!
P Gosselin said...
By the way,
I wish to congratulate Prof von Storch on his new blog, which I'm sure will be a big success.
I hope he will consider writing some posts in German too, as I believe the German public would be well-served by his views.
eduardo said...
Like all sweeping assertions, this one is neither accurate nor fair. I know many climate researchers who don't fit this description - actually, most of them.
Perhaps the problem is that they are not interested in interacting with the broad public, something that I can also understand very well.
The so-called 'Team' is light-years from being representative of the whole community.
TCO said...
You are going to win VERY few followers to the middle path (and it is not really a "middle" path but an HONEST path). Those who are "sided" essentially have NO interest in actually learning more UNLESS they learn points to help their pre-existing bias. When they interact, this is what they look for. This is what almost all political blogs are like. This is what the community on RC and CA (and WUWT and OM, etc.) are like. You may find a very few places like Volokh Conspiracy that have genuine interest in truth, regardless of which position it supports (does not mean they don't have "sides"...but are willing to look analytically at things that go against their side). The RCers and the CAers are FAR from the curiosity and willingness to disprove hypotheses of a natural scientist. Solid state chemistry and physics are better.
P Gosselin said...
Please don't take my little dictionary too seriously... it is meant to be humorous and taken with irony. I think it's clear which circle of scientists it is aimed at.
Still I feel that it reflects how climate science has operated over the last years. Climate science was virtually hijacked by a certain destructive groupthink.
Do you really believe that the field of climate science deserves a clean reputation after all that has happened?
No, it must go through a period of heavy criticism and be purged of its wrongness. Otherwise things will just continue as before.
P Gosselin said...
Science must not be about suppressing other views, splicing data sets, erasing history and hiding data. It has to be open. Otherwise my dictionary starts to become valid.
Werner Krauss said...
'We' are right and 'they' are wrong? 'We' are honest and 'they' are not? Interesting. Hm. Really.
corinna said...
#27 >Climate science was virtually hijacked by a certain destructive groupthink.
This is really a key problem.
This group thinking, and all the ideology that comes along with it, ruins the reputation of climate science, potentially damaging the reputation of science in general.
Worse still, it has had a significant impact on the education of young scientists. PhD students and young postdocs who have been educated in climate science over the last years are severely affected by this group thinking, and don't even see the problem with the 'Team's' behaviour.
Considering that we have educated really a lot of young scientists in that field during the last decade (look at the age structure from Dennis and Hans's survey), this is something which really worries me.
The earlier climate scientists start to be open to criticism the better; discussions like these are only a starting point.
The very big question is how to deal with the next IPCC report.
Do climate scientists agree again to accept that the modelling strategy is arranged in internal circles (the 'leading model' circles, of course)?
Or will there be a possibility to discuss the validity of the models, the setup and the parameterisations chosen for the purpose?
Will we again face the situation that, with the modelling chapter, the strategy of peer-reviewed science is flipped around? The IPCC here creates results which have not been out for scientific discussion and reflection, and neither has the concept. Will these results again dictate the frame for peer-reviewed papers in the following years and totally dominate research funding in climate science and, specifically, in other related disciplines (climate impact research)?
eduardo said...
@ 27
Perhaps we are seeing things from a different perspective. I guess you are not an active climatologist, so you have a perspective from outside, which is interesting in its own right, and which we would like to learn from.
I wanted to highlight that the 'usual suspects' are a really small, but so far powerful, group within climate science that has also found allies outside.
But please believe that within climate science there is a huge number of honest and silent people. For instance, you would be surprised to know how strongly unpopular some leading figures of the 'team' are within climate science.
I agree with you that our 'reputation' with the broad public has taken a hard blow, and that something has to be done, ideally by ourselves. But I don't really know exactly what. This being said, I also think that criticism from outside should be as objective as possible.
Anonymous said...
31 Eduardo, you highlight the problem - why have so many of the honest climate scientists remained silent? There were very few in the climate science field (you two, of course!) who have spoken out against the hockey stick construction, even though it is now clear from the leaked emails that many people knew it was dubious.
It would be interesting if you could explain more about leading members of the 'team' being unpopular.
Your blog is a step in the right direction.
Rich said...
I'm going to vote for "sloppy thinking".
To quote: "define predictions as probable developments, and projections as possible developments."
"Developments" means "the course of future events". "Predictions" and "projections" are statements about future events and ,ideally, are associated with some probability to represent out uncertainty.
So if predictions are "probable" and projections are merely "possible" then the distinction can only be one of the probability we associate with the event. Though, since in ordinary language what is "probable" must necessarily be "possible", all predictions are also projections.
Does it matter? It appears that the claim has been made that projections can't be falsified because they weren't predictions. But no probabilistic statement about the future can be falsified so predictions associated with a probability can't be falsified either.
My conclusion is that the distinction is not useful. What is useful is to associate a probability with your prediction/projection so the risks can be reasonably assessed.
Note: I am not an expert, just someone with an opinion.
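One standard way to act on that last suggestion — attaching a probability and then assessing it against outcomes — is a scoring rule such as the Brier score. This is not mentioned in the thread; it is added here purely as an illustration:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes.

    Lower is better; always answering 0.5 scores 0.25, so a useful
    probabilistic forecaster must beat that baseline.
    """
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Sharp, well-calibrated probabilities score far better than pure hedging
confident = brier_score([0.9, 0.8, 0.1], [1, 1, 0])
hedged = brier_score([0.5, 0.5, 0.5], [1, 1, 0])
```

A probabilistic statement is never falsified by a single outcome, but a track record of scored forecasts does let risks be assessed, which is the useful part.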
Anna said...
@ 32
I agree; being honest and silent in climate science isn't a winning concept in this situation.
At Climate Audit, Steve McIntyre made a similar observation: the only scientists commenting on Climategate at all were those who had already spoken out against, for example, the hockey stick.
Where are all the other scientists?
MikeR said...
Further update: The reference to the 2003 survey has now been removed from Wikipedia, and the link to the 2008 survey with it. There is some support there for putting it back... In any event, it would help if it were being published.
Dennis Bray said...
To Mike R
the results of the 2008 survey can be found at
eduardo said...
@ 34
Anna, I cannot speak for all scientists. I can only hint at some explanations that I can figure out:
- Many conduct perfectly 'honest science' but are not bothered by seeing a bit of overselling if they feel it helps the good cause. I think they are wrong because in the long-run it is damaging also for the 'good' cause.
- Others find themselves between two evils: the 'team' and the skeptics, and think that the skeptics are the more dangerous evil. This feeling is reinforced because, although some of the skeptics may have a point, these points are misused by politically interested parties, mainly in the USA.
- Some of the bluntest arguments of the skeptics are wrong (yes, sorry), although not all. In an embroiled situation like the present one, a message with certainties and uncertainties is difficult to bring across, and most scientists do not want to be seen by colleagues as related to wrong scientific positions.
- As in the general population, many don't want to comment in public. This entails risks and requires time.
MikeR said...
To Dennis Bray: I know that the survey is available; it was posted here. The discussion at wikipedia currently is whether a non-peer-reviewed survey should be listed. So my question is, Are you planning to publish it in a peer-reviewed journal? If not, I'm just pointing out that it's getting in the way at wikipedia.
Dennis Bray said...
Hi Mike
There are and will be peer-reviewed publications from selections of the survey results, but a complete entity - i.e., the entire survey in one piece - will not be published other than as the PDF already on the web. A hundred or so pages of descriptive statistics are not very attractive as a peer-reviewed publication. Neither the 1996 nor the 2003 surveys were 'published' in their entirety but were nonetheless listed on Wikipedia. Hope that helps.
TCO said...
cross cuts?
MikeR said...
Dennis, I have posted your response over at the wikipedia discussion; we'll see if it helps. In the meantime, even the mention of the 2003 paper has been removed there, though there is support for restoring it. Will there be a peer-reviewed abstract soon of some of the central results of the survey (i.e., questions bearing on what fraction accept the standard AGW consensus)? I know that your survey gives a lot more detail than that, so it's not just a simple number, but still.
Rich said...
I have an alternative view based on apparent usage rather than definition.
It seems to me that implicit in the use of "prediction" is the idea that the events in question will occur, albeit with some uncertainty. And that implicit in "projection" is the idea that the events will not occur unless some conditions, not presently met, are found to be true.
If this is right then predictions might be falsified in practical terms (though never in strictly logical terms) but projections will only be falsified (practically) if their conditions are met in practice. Otherwise they become merely counter-factual arguments.
This seems to me to reflect the way the terms are used better than the way they're defined.
My conclusion this time is that the distinction is useful but that it is frequently missed.
Again, just my 2c.
Dennis Bray said...
Hi Mike
There is a paper concerning consensus now in the review process, and there is also a paper concerning climate models nearing the submission stage. There is enough material for a significant number of papers, but I am not so certain that there is a significant amount of reviewer/editor interest in publishing them.
Anna said...
@ Eduardo no. 37
Thank you for your answer! Your explanations seem plausible to me, but I must however repeat my view that the time for being silent is long gone. This is because the result of Climategate is that the credibility of the whole field of climate research is being questioned.
If I were a climate scientist and were convinced that I had enough knowledge to conclude that the emission of CO2 is likely to cause serious damage, I definitely wouldn't stay silent.
I'd be really mad with the Team for causing damage to the credibility of climate research, and I would speak out, not necessarily by criticising these scientists, but by defending the AGW hypothesis. Some scientists, like you and Hans von Storch, do this, but the majority is completely silent, and I frankly find this odd.
Personally I am convinced that the behaviour of Michael Mann and his Hockey-team have made more people skeptics than anything else, and this even before Climategate...
I really appreciate your view on “overselling”. That the media does this is bad enough, but when scientists let this pass, or worse, encourage it, this damages the credibility of science.
Of course some of the skeptic arguments are wrong, otherwise we wouldn't have any scientists defending the hypothesis (at least I hope so!), but the problem is how to know which are wrong, which are right, and where there isn't enough knowledge to say what is the truth.
An article on this subject would be very welcome (at least if it didn't only contain the most common "straw men").
$20 Digital Copier Is a DIY Book-Scanning Machine
Already bought a book and don't want to buy it again to read on a portable device (like the iPad)? Consider transforming an old digital camera into a DIY digital scanner for less than $20.
Using guidance from DIY site Instructables and a few metal rods, you'll need to create a frame in which to hold the book and/or document. To scan the book or document, the picture needs to be clear while the pages are flat. This contraption uses the rectangular frame to press the open book flat, and the corners are connected by rods to the camera, making sure the images are steady.
After taking the pictures - and you'll most likely want to clean them up in image-editing software - paste them into Word, turn them into a PDF, or run them through an OCR (optical character recognition) program to make them searchable documents. It's a completely different way to scan your documents than previously mentioned document-processing service Qipit, but if you've got some time and want a digital version of your analog books, it might be worth a try. If you do scan to PDF, you may still want to consider converting your book to ePub format to make it more e-reader friendly.
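The OCR step can be scripted. The sketch below builds one command per page for the open-source Tesseract OCR engine; it assumes Tesseract is installed, and the file layout is hypothetical:

```python
from pathlib import Path

def ocr_commands(pages, out_dir):
    """Build one `tesseract` command per scanned page.

    Each image becomes a searchable PDF named after the page; a tool such
    as pdfunite can merge the per-page PDFs afterwards. Assumes the
    open-source Tesseract OCR engine is installed.
    """
    out = Path(out_dir)
    return [["tesseract", str(Path(p)), str(out / Path(p).stem), "pdf"]
            for p in sorted(map(str, pages))]

# Typical (hypothetical) use, once the camera shots are copied into ./scans :
#   import subprocess
#   for cmd in ocr_commands(Path("scans").glob("*.jpg"), "ocr_out"):
#       subprocess.run(cmd, check=True)
```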
Feel Like You're Faking It? That Might Not Be a Bad Thing
For the vast majority of people, confidence and ease come with practice and accomplishment. But even being at the height of your career is no guarantee you'll feel comfortable in your own professional skin—"imposter syndrome" is common even for those at the top, experts say. It turns out not only are there ways to manage your feelings of being a fraud, but the worry about being unmasked actually has its upsides.
"There are high-achieving celebrity impostor syndrome sufferers including Tina Fey, Maya Angelou, and Sheryl Sandberg, who have all openly admitted to feeling like an impostor at some point during their careers," wrote Caroline Dowd-Higgins recently on The Huffington Post. If the likes of Facebook's COO suffers from occasionally feeling like she's faking it, no wonder so many young careerists experience imposter syndrome (women are more likely to suffer than men).
Surf Your Imposter Syndrome
The most immediate question for imposter syndrome sufferers is, how do I make it stop?
That's the wrong question, according to Dowd-Higgins. She suggests instead riding out your feelings of being a fake. In her post, she quotes Dr. Valerie Young, author of The Secret Thoughts of Successful Women: Why Capable People Suffer From the Impostor Syndrome and How to Thrive In Spite of It, who advises that "when you feel yourself sliding into competence extremism, recognize it for what it is. Then make a conscious decision to stop and really savor those exhilarating mental high points and forgive yourself for the inevitable lulls."
"The beauty of the impostor syndrome is you vacillate between extreme egomania and a complete feeling of: ‘I'm a fraud! Oh God, they're on to me! I'm a fraud!' So you just try to ride the egomania when it comes and enjoy it, and then slide through the idea of fraud," writes Young in her book.
A Badge of Honor?
Another way to respond to your imposter syndrome is to be aware that the feeling actually indicates positive things about you as a professional.
Feelings of faking it are usually associated with intelligence, diligence and, paradoxically, competence. Slackers, blusterers, and the genuinely incompetent tend not to stress about feeling like fakers.
Don't believe me? Neuroscientist and former TED speaker Bradley Voytek has written on his blog that:
Anecdotally, [imposter syndrome] appears to be fairly rampant among academics and other "smart" people. At some point during your career, possibly more than once, you will look at your peers and think to yourself, "I'm not as good as they are; I am not cut out for this…"
Listen to that voice. Understand where it's coming from. But be aware that you're failing to recognize your own accomplishments; you're overemphasizing the accomplishments of others and you're vastly underestimating the failures other successful people experience on their way to success.
The New York Times has also covered this phenomenon, rounding up research into imposter syndrome and concluding that, "in mild doses, feeling like a fraud… tempers the natural instinct to define one's own competence in self-serving ways."
The paper explains:
Researchers have shown in careful studies that people tend to be poor judges of their own performance and often to overrate their abilities. Their opinions about how well they've done on a test, or at a job, or in a class are often way off others' evaluations. They're confident that they can detect liars (they can't) and forecast grades (not so well).
This native confidence is likely to be functional: in a world of profound uncertainty, self-serving delusion probably helps people to get out of bed and chase their pet projects.
But it can be poison when the job calls for expertise and accountability, and the expertise is wanting…. At those times feeling like a fraud… reflects a respect for the limits of one's own abilities.
So don't stress if you feel like an impostor sometimes. You're in good company, you're probably wrong in your fears, and, on the contrary, are probably bright and conscientious. All you need to do is ride out that feeling of faking it.
Feel Like You're Faking It? That Might Not Be a Bad Thing | Brazen Life
Jessica Stillman is a freelance writer based in London. She writes a daily column for Inc.com and has blogged for CBS MoneyWatch and GigaOM.
Image remixed from Viorel Sima
Every Way We've Tried to Fix Email (and Why It's Not Working)
Email is broken. Or so we've been told, anyway. Countless essays, apps, extensions, and other methods out there revolve around how broken email is, but we still haven't found the silver bullet that fixes it. If anything, the problem's just getting worse. Here’s why.
A Brief History of How People Have Tried to "Fix" Email
We're all inundated with too much email, pointless messages, and other junk we just don't need. The problem with email is that there's simply too much of it.
Ever since email became a primary mode of communication we've been trying to fix this problem, but nothing has really made the email experience that much better. Email started as a series of one-off conversations. You'd email your grandma a blurry scanned picture of your new house. You'd get the occasional email from your boss about a project. Maybe you'd sign up for a newsletter from your favorite web site. Nowadays, we use email for everything, but the system for handling it isn't all that different from its early days.
As email has scaled up in use, management systems and apps have come in to handle the load. But email overload is still a massive problem. We're still inundated with messages, and now we get email notifications no matter where we are. The fact is, pretty much all of us hate email for one reason or another, so we've all tried to fix it in some way. Let's take a look at a few of the most popular fixes.
Inbox Zero
Merlin Mann's Inbox Zero was one of the first big "fixes" for email. Inbox Zero is an email management system that keeps your inbox empty (or with “zero” unread messages) at all times. You do this by processing your emails with five separate actions: delete, delegate, respond, defer, and do. As the name suggests, the main goal is to delete as many emails as humanly possible.
Inbox Zero comes with a ton of accompanying tips for getting your inbox down to zero, and plenty of apps have sprung up to help you get there. The problem with Inbox Zero isn't that it's a bad management system. It's that it doesn't really solve the problem of email overload; it just treats the symptoms. You still spend your time managing folders, triaging emails, and moving things around, and that requires more discipline and time than most of us have. It works great for organization junkies, but it's about making your inbox tidy, not actually fixing the inherent problems.
The Trusted Trio
As a sort of follow-up to Inbox Zero, our founder, Gina Trapani, came up with The Trusted Trio. This method reduced Mann's five folders for organization to just three pretty self-explanatory folders: Follow Up, Archive, and Hold.
Like Inbox Zero, the Trusted Trio is all about managing your email so it's easier to process. It still takes a bit of willpower to use correctly. You need to keep revisiting the follow up and hold folders because otherwise you'll end up with unattended emails everywhere that need a reply. So, much like Inbox Zero, the Trusted Trio is fantastic for organizing those emails, and worth using—but it still doesn’t solve the real problem. Things are bound to slip between the cracks and you're still spending a lot of time organizing email, which is about as fun as counting apples in the Arctic.
Filters and Labels for Weeding Out the Junk
Gmail has had filters for a long time and the smart label system automatically filters a ton of stuff. Nowadays, you can essentially automate a system that mimics the Trusted Trio (to a point, at least) with labels.
Using these labels and filters, you can get rid of spam, filter out junk, and plenty more. They help ensure that you're only looking at the emails that matter and not wasting your time on the ones that don't. Smart labels and filters are restricted to Gmail, but they helped pave the way for systems that reduce email overload by hiding the stuff you don't really care about. Filters make email easier to read by pushing aside what doesn't matter to you, though it's all still there if you go looking. In some cases, they also turn your inbox into a disjointed mess of folders, labels, and other organization techniques where it's hard to find what you're actually looking for.
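Gmail's filters run server-side, but the idea behind them is easy to sketch: each filter pairs a match test on the message with a label to apply, and every matching filter gets to apply its label. A minimal model of that behavior (addresses and label names invented purely for illustration) might look like:

```python
# Each rule pairs a match test with the label it applies. As in Gmail,
# every rule that matches a message applies its label, so one message
# can pick up several labels.
RULES = [
    (lambda m: m["from"].endswith("@news.example.com"), "Newsletters"),
    (lambda m: "unsubscribe" in m["body"].lower(),      "Bulk"),
    (lambda m: m["from"] == "boss@example.com",         "Follow Up"),
]

def labels_for(message):
    """Return every label whose rule matches this message, in rule order."""
    return [label for test, label in RULES if test(message)]

msg = {"from": "deals@news.example.com",
       "body": "Big spring sale! Click Unsubscribe to opt out."}
print(labels_for(msg))  # -> ['Newsletters', 'Bulk']
```

Real Gmail filters are built from search operators like `from:`, `subject:`, and `has:attachment` rather than arbitrary predicates, but the label-stacking behavior sketched here is the same.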
Priority Inbox for Just the Emails that Matter
In 2010, Gmail introduced Priority Inbox, a system that automatically decides which emails are important to you and filters everything else into another folder. The great promise is that an automated system takes care of the email overload problem for you, so you can concentrate on what matters.
Our own Whitson Gordon is a huge fan of Priority Inbox, and with a little training the service works great. After a few weeks of working with Priority Inbox, you can make it so you only see important messages in your main inbox, and the rest get filtered off to a place that isn't in your face so much. The problem with Priority Inbox is that it's still a machine, so you have to have the patience to train it to do what you want. It's also never going to be perfect and you'll have to accept that some emails will get lost for a couple days every once in a while. The problem is obvious: you're still getting all that junk. Just like the other methods, it simply pushes that junk elsewhere.
Of course, the other big problem is that you have to use Gmail, which isn't always a possibility for everyone.
Rules, Regulations, and Email Agreements
It's not just technology that's trying to solve the problem of email overload. When it boils down to it, the source of email is the problem. That's us. We send too much email. We reply to emails we don't need to. We're long-winded, include too many attachments, and send emails when a text message or phone call would suffice.
The obvious fix, of course, is to overhaul how we think about email. And to a certain extent, this is the best thing we can do (see below). Unfortunately, for all the charters, rules, and countless tips out there, this idea hasn't created any lasting effect on the general populace. We're all pretty annoying with email and unless we somehow collectively agree to stop being annoying with email, rules and tips aren't going to do much good.
Is There a Fix?
The overarching problem with email is pretty simple: we all use it differently. Some people get actionable items in their email that they need to add to to-do lists. Others get pictures of adorable kittens. Some people are inundated with thousands of press releases. Others get hundreds of newsletters and coupons. Some people get unsolicited love letters. Others are worried about missing that email from a potential employer. As such, we'll all have different fixes.
Those email charters and rules may not have had any lasting effect, but the best way to "fix" email is to stop sending so much, and send more useful messages. So what can you do? Lead by example. If you want people to use your rules and system—whatever that may be—then start by following those rules yourself. If you do that, people will tend to mirror you. For example:
• If you use short, bulleted lists, people will usually respond to those bullets individually, making their emails easier to read and scan.
• If you give specific, actionable items at the top, people will usually respond to your email in the same form.
• If you ask a single, quick question, you'll get a quick answer back.
• If you don't use email for certain things (a grocery list, say, or meeting notes), people will usually respond in kind. So, only send out the kind of emails you'd want to receive, and use other media (like IM, SMS, or in-person chats) for other types of communication.
• ...And so on.
We might never see email get "fixed," but accepting that fact and moving on—whether that's finding better modes of communication or better apps to deal with our personal pet peeves—is perhaps the best course to take for now.
Photo by Chad Swaney.
10 Fascinating Stages of Death
Sarah Thompson
Clinical death is defined as cessation or failure of all vital bodily functions. The heart stops beating, lungs cease to function, brain activity no longer exists, and the brain stem dies. Death comes in many forms, whether it’s expected due to a tragic medical diagnosis, an unexpected accident, or maybe planned and carried out by a disturbed person – death happens. Approximately 150,000 people die each day around the world.
Death is a fact of life that everyone will go through one day and the following are the 10 most fascinating stages one’s body experiences immediately following death if the body is exposed to a natural decay and not preserved by processes such as embalming. Most of these stages are known by studying university controlled “body farms” used by forensic anthropologists in an attempt to further knowledge in the forensic field. This can include identifying dead bodies and their circumstances of death. These “body farms” and the studies resulting from them have made a tremendous impact in solving crimes, including cold cases. As all of the regulars on this website know, J. Frater loves morbid and bizarre topics. This is morbid but also natural.
Images within this list are all safe for work.
If you like this list be sure to check out our Morbid Collection for much much more of the same.
Death Occurs
The heart stops, the body convulses, the person starts taking short gasping breaths, and the ears become cold due to the lack of circulation. The blood turns acidic, the larynx loses its cough reflex and a build up of mucous may occur. The passage of breath through this mucous due to spasms will cause a gurgling or rattle-like sound. This specific sound is also known as the “death rattle.” The lungs shut down and the brain also stops functioning. However, if the brain stem is still alive, the body still retains the ability to heal and perform other crucial functions.
0 Minutes
Clinical death arrives as the brain stops getting oxygen. This death of the brain eventually shuts down other vital bodily functions including circulation throughout the body and to the extremities. Pallor mortis, paleness of the body, sets in almost immediately because of lack of blood circulation. The pupils begin to have a glassy appearance and the body temperature begins to slowly drop due to depleting oxygen levels. This may be why many crime dramas use the plotline of a killer turning down the thermostat to keep the body cooler than it should be and give the police a false timeline of death.
1 – 9 Minutes
Blood starts pooling in the body, which starts causing discoloration called livor mortis, usually a reddish-blue color. The muscles relax which results in the bowels and bladder beginning to empty. Brain cells die in droves and liquefaction occurs. Pupils begin to dilate, unresponsive to direct light, and “cloud” over. The cloudy appearance of the pupils results from potassium in the red blood cells breaking down. This process can take longer (approximately 3 hours) but, because many people die with their eyes open, the process often occurs in this time frame. Some forensic scientists believe this clouding of the eye can be a better indication of the time of death than rigor mortis and livor mortis. The eyeballs flatten due to loss of blood pressure. At the end of this time frame, the brain stem dies.
1 – 8 Hours
Rigor mortis begins to set in. In this stage the muscles become stiff and the hair stands up; the rigidity is attributed to lactic acid building up in the muscle tissue. This is why hair can seem longer after death: the stiffening muscles push on the hair follicles. After four to six hours, rigor mortis spreads further through the body. The pooled blood begins to stain the skin a blackish color. At six hours, muscles continue to spasm sporadically, and anaerobic processes, such as the liver's breakdown of alcohol, continue. At eight hours the body starts to cool rapidly. This is called algor mortis, and it differs from the initial cooling in that it proceeds much faster.
1 – 5 Days
Rigor mortis ends at the beginning of this stage and the body again becomes pliable. If the body happens to be discovered, presentable, and in a mortuary, the undertaker takes advantage of this stage to position the body for presentation at a funeral (folding hands and such). At 24 to 72 hours, internal microbes putrefy the intestines and the pancreas begins to digest itself. This process liquefies the insides. In 3 to 5 days, decay starts to produce large blisters all over the body. If a body is not found until this stage, it will most likely not be presentable for viewing at a funeral. Bloody froth begins to trickle from your mouth and nose.
8 – 10 Days
Bacteria in the intestines feed off dead tissue and give off gases, which cause the belly to swell. The body starts to emit an odor during this putrefaction; this is also called "the bloat stage." The tongue protrudes from the mouth due to the swelling of tissues in the neck and face, which makes the body hard to identify if it is found at this stage. The buildup of gases also forces out any remaining feces or liquid in the body. Sounds like the ultimate wet fart. The color of the body changes from red to green as the red blood cells decompose.
2 Weeks
The hairs, nails, and teeth begin to detach very easily. The skin slippage during this stage can make it hard to move a body if discovered in this condition. Hopefully, if the teeth fall out they don’t fall far from the body, because this may be the only way to identify the body at this point. The skin becomes glove-like and can easily slip away from decaying muscle and connective tissue lying just below. Buffalo Bill, a popular character from The Silence of the Lambs, should have just waited for this stage to make his “woman suit.” It may have saved him a lot of energy and sewing materials.
1 Month
The skin either liquefies from internal gases and decay or dries out, depending on environmental circumstances. I wonder if Jeffrey Dahmer ever made any liquefied skin soup. Several types of insects feed on the body and hasten the breakdown of the dying skin cells. Some of the first insects on the scene are blowflies. Another favorite subject of famous crime dramas is the study of insect activity, which lends itself to determining the time and place of death. If conditions are right, the body dries out instead of liquefying, in a process called butyric fermentation or mummification. A body is also considered mummified when all of the organs are gone due to the feeding insects.
Several Months
As the process of mummification finishes, the body's fat breaks down into a crumbly, white, waxy substance called adipocere, also known as "grave wax." At this stage the putrid odor of the decaying flesh starts to fade rapidly. It is believed that in the 17th century some individuals used this adipocere to make candles for use at the deceased's vigil. The "grave wax" can also be important if the body is found at this stage, because adipocere helps retain the body and facial features used for recognition, as well as any wounds or injuries that may have caused the death.
Circa 1 Year
During this time span, and depending on environmental circumstances, carrion eaters such as hyenas, large birds of prey (vultures, bald eagles, etc.), raccoons, and opossums will have reduced the corpse to bones and other hard fragments. Of the remains, teeth are the most resilient substance in the body, so even in the case of bone erosion the remaining teeth can be used for identification. At this stage, most bodies found will have to be identified by dental records. If the technology or a DNA profile for comparison is available, DNA can also be extracted from teeth or bones for identification.
The Origin of Species
Charles Darwin
Chapter 4 - Natural Selection
* Natural Selection * its power compared with man's selection * its power on characters of trifling importance * its power at all ages and on both sexes * Sexual Selection * On the generality of intercrosses between individuals of the same species * Circumstances favourable and unfavourable to Natural Selection, namely, intercrossing, isolation, number of individuals * Slow action * Extinction caused by Natural Selection * Divergence of Character, related to the diversity of inhabitants of any small area, and to naturalisation * Action of Natural Selection, through Divergence of Character and Extinction, on the descendants from a common parent * Explains the Grouping of all organic beings
How will the struggle for existence, discussed too briefly in the last chapter, act in regard to variation? Can the principle of selection, which we have seen is so potent in the hands of man, apply in nature? I think we shall see that it can act most effectually. Let it be borne in mind in what an endless number of strange peculiarities our domestic productions, and, in a lesser degree, those under nature, vary; and how strong the hereditary tendency is. Under domestication, it may be truly said that the whole organisation becomes in some degree plastic. Let it be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life. Can it, then, be thought improbable, seeing that variations useful to man have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battle of life, should sometimes occur in the course of thousands of generations? If such do occur, can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection. Variations neither useful nor injurious would not be affected by natural selection, and would be left a fluctuating element, as perhaps we see in the species called polymorphic.
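Darwin's mechanism can be restated as a simple recursion. If individuals carrying a favourable variation leave (1 + s) offspring for every one left by the rest, the variation's frequency p is re-weighted each generation. The toy numbers below (s = 0.05, starting frequency 1%) are invented purely for illustration, but they show how even a slight, consistent advantage, preserved generation after generation, eventually predominates:

```python
def next_freq(p, s=0.05):
    """One generation of selection: carriers of the favourable variation
    leave (1 + s) offspring for each one left by non-carriers, so the
    variation's share of the next generation is re-weighted accordingly."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.01                   # the variation starts rare: 1% of the population
for generation in range(400):
    p = next_freq(p)
print(round(p, 3))         # -> 1.0: the variation has all but displaced the old form
```

Nothing here depends on the variation being dramatic; halving s roughly doubles the number of generations needed, which echoes Darwin's point below about nature having "incomparably longer time at her disposal."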
We shall best understand the probable course of natural selection by taking the case of a country undergoing some physical change, for instance, of climate. The proportional numbers of its inhabitants would almost immediately undergo a change, and some species might become extinct. We may conclude, from what we have seen of the intimate and complex manner in which the inhabitants of each country are bound together, that any change in the numerical proportions of some of the inhabitants, independently of the change of climate itself, would most seriously affect many of the others. If the country were open on its borders, new forms would certainly immigrate, and this also would seriously disturb the relations of some of the former inhabitants. Let it be remembered how powerful the influence of a single introduced tree or mammal has been shown to be. But in the case of an island, or of a country partly surrounded by barriers, into which new and better adapted forms could not freely enter, we should then have places in the economy of nature which would assuredly be better filled up, if some of the original inhabitants were in some manner modified; for, had the area been open to immigration, these same places would have been seized on by intruders. In such case, every slight modification, which in the course of ages chanced to arise, and which in any way favoured the individuals of any of the species, by better adapting them to their altered conditions, would tend to be preserved; and natural selection would thus have free scope for the work of improvement.
We have reason to believe, as stated in the first chapter, that a change in the conditions of life, by specially acting on the reproductive system, causes or increases variability; and in the foregoing case the conditions of life are supposed to have undergone a change, and this would manifestly be favourable to natural selection, by giving a better chance of profitable variations occurring; and unless profitable variations do occur, natural selection can do nothing. Not that, as I believe, any extreme amount of variability is necessary; as man can certainly produce great results by adding up in any given direction mere individual differences, so could Nature, but far more easily, from having incomparably longer time at her disposal. Nor do I believe that any great physical change, as of climate, or any unusual degree of isolation to check immigration, is actually necessary to produce new and unoccupied places for natural selection to fill up by modifying and improving some of the varying inhabitants. For as all the inhabitants of each country are struggling together with nicely balanced forces, extremely slight modifications in the structure or habits of one inhabitant would often give it an advantage over others; and still further modifications of the same kind would often still further increase the advantage. No country can be named in which all the native inhabitants are now so perfectly adapted to each other and to the physical conditions under which they live, that none of them could anyhow be improved; for in all countries, the natives have been so far conquered by naturalised productions, that they have allowed foreigners to take firm possession of the land. And as foreigners have thus everywhere beaten some of the natives, we may safely conclude that the natives might have been modified with advantage, so as to have better resisted such intruders.
As man can produce and certainly has produced a great result by his methodical and unconscious means of selection, what may not nature effect? Man can act only on external and visible characters: nature cares nothing for appearances, except in so far as they may be useful to any being. She can act on every internal organ, on every shade of constitutional difference, on the whole machinery of life. Man selects only for his own good; Nature only for that of the being which she tends. Every selected character is fully exercised by her; and the being is placed under well-suited conditions of life. Man keeps the natives of many climates in the same country; he seldom exercises each selected character in some peculiar and fitting manner; he feeds a long and a short beaked pigeon on the same food; he does not exercise a long-backed or long-legged quadruped in any peculiar manner; he exposes sheep with long and short wool to the same climate. He does not allow the most vigorous males to struggle for the females. He does not rigidly destroy all inferior animals, but protects during each varying season, as far as lies in his power, all his productions. He often begins his selection by some half-monstrous form; or at least by some modification prominent enough to catch his eye, or to be plainly useful to him. Under nature, the slightest difference of structure or constitution may well turn the nicely-balanced scale in the struggle for life, and so be preserved. How fleeting are the wishes and efforts of man! how short his time! and consequently how poor will his products be, compared with those accumulated by nature during whole geological periods. Can we wonder, then, that nature's productions should be far 'truer' in character than man's productions; that they should be infinitely better adapted to the most complex conditions of life, and should plainly bear the stamp of far higher workmanship?
Although natural selection can act only through and for the good of each being, yet characters and structures, which we are apt to consider as of very trifling importance, may thus be acted on. When we see leaf-eating insects green, and bark-feeders mottled-grey; the alpine ptarmigan white in winter, the red-grouse the colour of heather, and the black-grouse that of peaty earth, we must believe that these tints are of service to these birds and insects in preserving them from danger. Grouse, if not destroyed at some period of their lives, would increase in countless numbers; they are known to suffer largely from birds of prey; and hawks are guided by eyesight to their prey, so much so, that on parts of the Continent persons are warned not to keep white pigeons, as being the most liable to destruction. Hence I can see no reason to doubt that natural selection might be most effective in giving the proper colour to each kind of grouse, and in keeping that colour, when once acquired, true and constant. Nor ought we to think that the occasional destruction of an animal of any particular colour would produce little effect: we should remember how essential it is in a flock of white sheep to destroy every lamb with the faintest trace of black. In plants the down on the fruit and the colour of the flesh are considered by botanists as characters of the most trifling importance: yet we hear from an excellent horticulturist, Downing, that in the United States smooth-skinned fruits suffer far more from a beetle, a curculio, than those with down; that purple plums suffer far more from a certain disease than yellow plums; whereas another disease attacks yellow-fleshed peaches far more than those with other coloured flesh. 
If, with all the aids of art, these slight differences make a great difference in cultivating the several varieties, assuredly, in a state of nature, where the trees would have to struggle with other trees and with a host of enemies, such differences would effectually settle which variety, whether a smooth or downy, a yellow or purple fleshed fruit, should succeed.
In looking at many small points of difference between species, which, as far as our ignorance permits us to judge, seem to be quite unimportant, we must not forget that climate, food, &c., probably produce some slight and direct effect. It is, however, far more necessary to bear in mind that there are many unknown laws of correlation of growth, which, when one part of the organisation is modified through variation, and the modifications are accumulated by natural selection for the good of the being, will cause other modifications, often of the most unexpected nature.
As we see that those variations which under domestication appear at any particular period of life, tend to reappear in the offspring at the same period; for instance, in the seeds of the many varieties of our culinary and agricultural plants; in the caterpillar and cocoon stages of the varieties of the silkworm; in the eggs of poultry, and in the colour of the down of their chickens; in the horns of our sheep and cattle when nearly adult; so in a state of nature, natural selection will be enabled to act on and modify organic beings at any age, by the accumulation of profitable variations at that age, and by their inheritance at a corresponding age. If it profit a plant to have its seeds more and more widely disseminated by the wind, I can see no greater difficulty in this being effected through natural selection, than in the cotton-planter increasing and improving by selection the down in the pods on his cotton-trees. Natural selection may modify and adapt the larva of an insect to a score of contingencies, wholly different from those which concern the mature insect. These modifications will no doubt affect, through the laws of correlation, the structure of the adult; and probably in the case of those insects which live only for a few hours, and which never feed, a large part of their structure is merely the correlated result of successive changes in the structure of their larvae. So, conversely, modifications in the adult will probably often affect the structure of the larva; but in all cases natural selection will ensure that modifications consequent on other modifications at a different period of life, shall not be in the least degree injurious: for if they became so, they would cause the extinction of the species.
Natural selection will modify the structure of the young in relation to the parent, and of the parent in relation to the young. In social animals it will adapt the structure of each individual for the benefit of the community; if each in consequence profits by the selected change. What natural selection cannot do, is to modify the structure of one species, without giving it any advantage, for the good of another species; and though statements to this effect may be found in works of natural history, I cannot find one case which will bear investigation. A structure used only once in an animal's whole life, if of high importance to it, might be modified to any extent by natural selection; for instance, the great jaws possessed by certain insects, and used exclusively for opening the cocoon -- or the hard tip to the beak of nestling birds, used for breaking the egg. It has been asserted, that of the best short-beaked tumbler-pigeons more perish in the egg than are able to get out of it; so that fanciers assist in the act of hatching. Now, if nature had to make the beak of a full-grown pigeon very short for the bird's own advantage, the process of modification would be very slow, and there would be simultaneously the most rigorous selection of the young birds within the egg, which had the most powerful and hardest beaks, for all with weak beaks would inevitably perish: or, more delicate and more easily broken shells might be selected, the thickness of the shell being known to vary like every other structure.
Sexual Selection
Inasmuch as peculiarities often appear under domestication in one sex and become hereditarily attached to that sex, the same fact probably occurs under nature, and if so, natural selection will be able to modify one sex in its functional relations to the other sex, or in relation to wholly different habits of life in the two sexes, as is sometimes the case with insects. And this leads me to say a few words on what I call Sexual Selection. This depends, not on a struggle for existence, but on a struggle between the males for possession of the females; the result is not death to the unsuccessful competitor, but few or no offspring. Sexual selection is, therefore, less rigorous than natural selection. Generally, the most vigorous males, those which are best fitted for their places in nature, will leave most progeny. But in many cases, victory will depend not on general vigour, but on having special weapons, confined to the male sex. A hornless stag or spurless cock would have a poor chance of leaving offspring. Sexual selection by always allowing the victor to breed might surely give indomitable courage, length to the spur, and strength to the wing to strike in the spurred leg, as well as the brutal cock-fighter, who knows well that he can improve his breed by careful selection of the best cocks. How low in the scale of nature this law of battle descends, I know not; male alligators have been described as fighting, bellowing, and whirling round, like Indians in a war-dance, for the possession of the females; male salmons have been seen fighting all day long; male stag-beetles often bear wounds from the huge mandibles of other males. The war is, perhaps, severest between the males of polygamous animals, and these seem oftenest provided with special weapons. 
The males of carnivorous animals are already well armed; though to them and to others, special means of defence may be given through means of sexual selection, as the mane to the lion, the shoulder-pad to the boar, and the hooked jaw to the male salmon; for the shield may be as important for victory, as the sword or spear.
Amongst birds, the contest is often of a more peaceful character. All those who have attended to the subject, believe that there is the severest rivalry between the males of many species to attract by singing the females. The rock-thrush of Guiana, birds of paradise, and some others, congregate; and successive males display their gorgeous plumage and perform strange antics before the females, which standing by as spectators, at last choose the most attractive partner. Those who have closely attended to birds in confinement well know that they often take individual preferences and dislikes: thus Sir R. Heron has described how one pied peacock was eminently attractive to all his hen birds. It may appear childish to attribute any effect to such apparently weak means: I cannot here enter on the details necessary to support this view; but if man can in a short time give elegant carriage and beauty to his bantams, according to his standard of beauty, I can see no good reason to doubt that female birds, by selecting, during thousands of generations, the most melodious or beautiful males, according to their standard of beauty, might produce a marked effect. I strongly suspect that some well-known laws with respect to the plumage of male and female birds, in comparison with the plumage of the young, can be explained on the view of plumage having been chiefly modified by sexual selection, acting when the birds have come to the breeding age or during the breeding season; the modifications thus produced being inherited at corresponding ages or seasons, either by the males alone, or by the males and females; but I have not space here to enter on this subject.
Thus it is, as I believe, that when the males and females of any animal have the same general habits of life, but differ in structure, colour, or ornament, such differences have been mainly caused by sexual selection; that is, individual males have had, in successive generations, some slight advantage over other males, in their weapons, means of defence, or charms; and have transmitted these advantages to their male offspring. Yet, I would not wish to attribute all such sexual differences to this agency: for we see peculiarities arising and becoming attached to the male sex in our domestic animals (as the wattle in male carriers, horn-like protuberances in the cocks of certain fowls, &c.), which we cannot believe to be either useful to the males in battle, or attractive to the females. We see analogous cases under nature, for instance, the tuft of hair on the breast of the turkey-cock, which can hardly be either useful or ornamental to this bird; indeed, had the tuft appeared under domestication, it would have been called a monstrosity.
Illustrations of the action of Natural Selection
In order to make it clear how, as I believe, natural selection acts, I must beg permission to give one or two imaginary illustrations. Let us take the case of a wolf, which preys on various animals, securing some by craft, some by strength, and some by fleetness; and let us suppose that the fleetest prey, a deer for instance, had from any change in the country increased in numbers, or that other prey had decreased in numbers, during that season of the year when the wolf is hardest pressed for food. I can under such circumstances see no reason to doubt that the swiftest and slimmest wolves would have the best chance of surviving, and so be preserved or selected, provided always that they retained strength to master their prey at this or at some other period of the year, when they might be compelled to prey on other animals. I can see no more reason to doubt this, than that man can improve the fleetness of his greyhounds by careful and methodical selection, or by that unconscious selection which results from each man trying to keep the best dogs without any thought of modifying the breed.
Let us now take a more complex case. Certain plants excrete a sweet juice, apparently for the sake of eliminating something injurious from their sap: this is effected by glands at the base of the stipules in some Leguminosae, and at the back of the leaf of the common laurel. This juice, though small in quantity, is greedily sought by insects. Let us now suppose a little sweet juice or nectar to be excreted by the inner bases of the petals of a flower. In this case insects in seeking the nectar would get dusted with pollen, and would certainly often transport the pollen from one flower to the stigma of another flower. The flowers of two distinct individuals of the same species would thus get crossed; and the act of crossing, we have good reason to believe (as will hereafter be more fully alluded to), would produce very vigorous seedlings, which consequently would have the best chance of flourishing and surviving. Some of these seedlings would probably inherit the nectar-excreting power. Those individual flowers which had the largest glands or nectaries, and which excreted most nectar, would be oftenest visited by insects, and would be oftenest crossed; and so in the long-run would gain the upper hand. Those flowers, also, which had their stamens and pistils placed, in relation to the size and habits of the particular insects which visited them, so as to favour in any degree the transportal of their pollen from flower to flower, would likewise be favoured or selected. 
We might have taken the case of insects visiting flowers for the sake of collecting pollen instead of nectar; and as pollen is formed for the sole object of fertilisation, its destruction appears a simple loss to the plant; yet if a little pollen were carried, at first occasionally and then habitually, by the pollen-devouring insects from flower to flower, and a cross thus effected, although nine-tenths of the pollen were destroyed, it might still be a great gain to the plant; and those individuals which produced more and more pollen, and had larger and larger anthers, would be selected.
When our plant, by this process of the continued preservation or natural selection of more and more attractive flowers, had been rendered highly attractive to insects, they would, unintentionally on their part, regularly carry pollen from flower to flower; and that they can most effectually do this, I could easily show by many striking instances. I will give only one, not as a very striking case, but as likewise illustrating one step in the separation of the sexes of plants, presently to be alluded to. Some holly-trees bear only male flowers, which have four stamens producing rather a small quantity of pollen, and a rudimentary pistil; other holly-trees bear only female flowers; these have a full-sized pistil, and four stamens with shrivelled anthers, in which not a grain of pollen can be detected. Having found a female tree exactly sixty yards from a male tree, I put the stigmas of twenty flowers, taken from different branches, under the microscope, and on all, without exception, there were pollen-grains, and on some a profusion of pollen. As the wind had set for several days from the female to the male tree, the pollen could not thus have been carried. The weather had been cold and boisterous, and therefore not favourable to bees, nevertheless every female flower which I examined had been effectually fertilised by the bees, accidentally dusted with pollen, having flown from tree to tree in search of nectar. But to return to our imaginary case: as soon as the plant had been rendered so highly attractive to insects that pollen was regularly carried from flower to flower, another process might commence. No naturalist doubts the advantage of what has been called the 'physiological division of labour;' hence we may believe that it would be advantageous to a plant to produce stamens alone in one flower or on one whole plant, and pistils alone in another flower or on another plant. 
In plants under culture and placed under new conditions of life, sometimes the male organs and sometimes the female organs become more or less impotent; now if we suppose this to occur in ever so slight a degree under nature, then as pollen is already carried regularly from flower to flower, and as a more complete separation of the sexes of our plant would be advantageous on the principle of the division of labour, individuals with this tendency more and more increased, would be continually favoured or selected, until at last a complete separation of the sexes would be effected.
Let us now turn to the nectar-feeding insects in our imaginary case: we may suppose the plant of which we have been slowly increasing the nectar by continued selection, to be a common plant; and that certain insects depended in main part on its nectar for food. I could give many facts, showing how anxious bees are to save time; for instance, their habit of cutting holes and sucking the nectar at the bases of certain flowers, which they can, with a very little more trouble, enter by the mouth. Bearing such facts in mind, I can see no reason to doubt that an accidental deviation in the size and form of the body, or in the curvature and length of the proboscis, &c., far too slight to be appreciated by us, might profit a bee or other insect, so that an individual so characterised would be able to obtain its food more quickly, and so have a better chance of living and leaving descendants. Its descendants would probably inherit a tendency to a similar slight deviation of structure. The tubes of the corollas of the common red and incarnate clovers (Trifolium pratense and incarnatum) do not on a hasty glance appear to differ in length; yet the hive-bee can easily suck the nectar out of the incarnate clover, but not out of the common red clover, which is visited by humble-bees alone; so that whole fields of the red clover offer in vain an abundant supply of precious nectar to the hive-bee. Thus it might be a great advantage to the hive-bee to have a slightly longer or differently constructed proboscis. On the other hand, I have found by experiment that the fertility of clover greatly depends on bees visiting and moving parts of the corolla, so as to push the pollen on to the stigmatic surface. Hence, again, if humble-bees were to become rare in any country, it might be a great advantage to the red clover to have a shorter or more deeply divided tube to its corolla, so that the hive-bee could visit its flowers. 
Thus I can understand how a flower and a bee might slowly become, either simultaneously or one after the other, modified and adapted in the most perfect manner to each other, by the continued preservation of individuals presenting mutual and slightly favourable deviations of structure.
I am well aware that this doctrine of natural selection, exemplified in the above imaginary instances, is open to the same objections which were at first urged against Sir Charles Lyell's noble views on 'the modern changes of the earth, as illustrative of geology;' but we now very seldom hear the action, for instance, of the coast-waves, called a trifling and insignificant cause, when applied to the excavation of gigantic valleys or to the formation of the longest lines of inland cliffs. Natural selection can act only by the preservation and accumulation of infinitesimally small inherited modifications, each profitable to the preserved being; and as modern geology has almost banished such views as the excavation of a great valley by a single diluvial wave, so will natural selection, if it be a true principle, banish the belief of the continued creation of new organic beings, or of any great and sudden modification in their structure.
On the Intercrossing of Individuals
I must here introduce a short digression. In the case of animals and plants with separated sexes, it is of course obvious that two individuals must always unite for each birth; but in the case of hermaphrodites this is far from obvious. Nevertheless I am strongly inclined to believe that with all hermaphrodites two individuals, either occasionally or habitually, concur for the reproduction of their kind. This view, I may add, was first suggested by Andrew Knight. We shall presently see its importance; but I must here treat the subject with extreme brevity, though I have the materials prepared for an ample discussion. All vertebrate animals, all insects, and some other large groups of animals, pair for each birth. Modern research has much diminished the number of supposed hermaphrodites, and of real hermaphrodites a large number pair; that is, two individuals regularly unite for reproduction, which is all that concerns us. But still there are many hermaphrodite animals which certainly do not habitually pair, and a vast majority of plants are hermaphrodites. What reason, it may be asked, is there for supposing in these cases that two individuals ever concur in reproduction? As it is impossible here to enter on details, I must trust to some general considerations alone.
In the first place, I have collected so large a body of facts, showing, in accordance with the almost universal belief of breeders, that with animals and plants a cross between different varieties, or between individuals of the same variety but of another strain, gives vigour and fertility to the offspring; and on the other hand, that close interbreeding diminishes vigour and fertility; that these facts alone incline me to believe that it is a general law of nature (utterly ignorant though we be of the meaning of the law) that no organic being self-fertilises itself for an eternity of generations; but that a cross with another individual is occasionally -- perhaps at very long intervals -- indispensable.
On the belief that this is a law of nature, we can, I think, understand several large classes of facts, such as the following, which on any other view are inexplicable. Every hybridizer knows how unfavourable exposure to wet is to the fertilisation of a flower, yet what a multitude of flowers have their anthers and stigmas fully exposed to the weather! but if an occasional cross be indispensable, the fullest freedom for the entrance of pollen from another individual will explain this state of exposure, more especially as the plant's own anthers and pistil generally stand so close together that self-fertilisation seems almost inevitable. Many flowers, on the other hand, have their organs of fructification closely enclosed, as in the great papilionaceous or pea-family; but in several, perhaps in all, such flowers, there is a very curious adaptation between the structure of the flower and the manner in which bees suck the nectar; for, in doing this, they either push the flower's own pollen on the stigma, or bring pollen from another flower. So necessary are the visits of bees to papilionaceous flowers, that I have found, by experiments published elsewhere, that their fertility is greatly diminished if these visits be prevented. Now, it is scarcely possible that bees should fly from flower to flower, and not carry pollen from one to the other, to the great good, as I believe, of the plant. Bees will act like a camel-hair pencil, and it is quite sufficient just to touch the anthers of one flower and then the stigma of another with the same brush to ensure fertilisation; but it must not be supposed that bees would thus produce a multitude of hybrids between distinct species; for if you bring on the same brush a plant's own pollen and pollen from another species, the former will have such a prepotent effect, that it will invariably and completely destroy, as has been shown by Gärtner, any influence from the foreign pollen.
When the stamens of a flower suddenly spring towards the pistil, or slowly move one after the other towards it, the contrivance seems adapted solely to ensure self-fertilisation; and no doubt it is useful for this end: but, the agency of insects is often required to cause the stamens to spring forward, as Kölreuter has shown to be the case with the barberry; and curiously in this very genus, which seems to have a special contrivance for self-fertilisation, it is well known that if very closely-allied forms or varieties are planted near each other, it is hardly possible to raise pure seedlings, so largely do they naturally cross. In many other cases, far from there being any aids for self-fertilisation, there are special contrivances, as I could show from the writings of C. C. Sprengel and from my own observations, which effectually prevent the stigma receiving pollen from its own flower: for instance, in Lobelia fulgens, there is a really beautiful and elaborate contrivance by which every one of the infinitely numerous pollen-granules are swept out of the conjoined anthers of each flower, before the stigma of that individual flower is ready to receive them; and as this flower is never visited, at least in my garden, by insects, it never sets a seed, though by placing pollen from one flower on the stigma of another, I raised plenty of seedlings; and whilst another species of Lobelia growing close by, which is visited by bees, seeds freely. In very many other cases, though there be no special mechanical contrivance to prevent the stigma of a flower receiving its own pollen, yet, as C. C. Sprengel has shown, and as I can confirm, either the anthers burst before the stigma is ready for fertilisation, or the stigma is ready before the pollen of that flower is ready, so that these plants have in fact separated sexes, and must habitually be crossed. How strange are these facts! 
How strange that the pollen and stigmatic surface of the same flower, though placed so close together, as if for the very purpose of self-fertilisation, should in so many cases be mutually useless to each other! How simply are these facts explained on the view of an occasional cross with a distinct individual being advantageous or indispensable!
If several varieties of the cabbage, radish, onion, and of some other plants, be allowed to seed near each other, a large majority, as I have found, of the seedlings thus raised will turn out mongrels: for instance, I raised 233 seedling cabbages from some plants of different varieties growing near each other, and of these only 78 were true to their kind, and some even of these were not perfectly true. Yet the pistil of each cabbage-flower is surrounded not only by its own six stamens, but by those of the many other flowers on the same plant. How, then, comes it that such a vast number of the seedlings are mongrelised? I suspect that it must arise from the pollen of a distinct variety having a prepotent effect over a flower's own pollen; and that this is part of the general law of good being derived from the intercrossing of distinct individuals of the same species. When distinct species are crossed the case is directly the reverse, for a plant's own pollen is always prepotent over foreign pollen; but to this subject we shall return in a future chapter.
In the case of a gigantic tree covered with innumerable flowers, it may be objected that pollen could seldom be carried from tree to tree, and at most only from flower to flower on the same tree, and that flowers on the same tree can be considered as distinct individuals only in a limited sense. I believe this objection to be valid, but that nature has largely provided against it by giving to trees a strong tendency to bear flowers with separated sexes. When the sexes are separated, although the male and female flowers may be produced on the same tree, we can see that pollen must be regularly carried from flower to flower; and this will give a better chance of pollen being occasionally carried from tree to tree. That trees belonging to all Orders have their sexes more often separated than other plants, I find to be the case in this country; and at my request Dr Hooker tabulated the trees of New Zealand, and Dr Asa Gray those of the United States, and the result was as I anticipated. On the other hand, Dr Hooker has recently informed me that he finds that the rule does not hold in Australia; and I have made these few remarks on the sexes of trees simply to call attention to the subject.
Turning for a very brief space to animals: on the land there are some hermaphrodites, as land-mollusca and earth-worms; but these all pair. As yet I have not found a single case of a terrestrial animal which fertilises itself. We can understand this remarkable fact, which offers so strong a contrast with terrestrial plants, on the view of an occasional cross being indispensable, by considering the medium in which terrestrial animals live, and the nature of the fertilising element; for we know of no means, analogous to the action of insects and of the wind in the case of plants, by which an occasional cross could be effected with terrestrial animals without the concurrence of two individuals. Of aquatic animals, there are many self-fertilising hermaphrodites; but here currents in the water offer an obvious means for an occasional cross. And, as in the case of flowers, I have as yet failed, after consultation with one of the highest authorities, namely, Professor Huxley, to discover a single case of an hermaphrodite animal with the organs of reproduction so perfectly enclosed within the body, that access from without and the occasional influence of a distinct individual can be shown to be physically impossible. Cirripedes long appeared to me to present a case of very great difficulty under this point of view; but I have been enabled, by a fortunate chance, elsewhere to prove that two individuals, though both are self-fertilising hermaphrodites, do sometimes cross.
It must have struck most naturalists as a strange anomaly that, in the case of both animals and plants, species of the same family and even of the same genus, though agreeing closely with each other in almost their whole organisation, yet are not rarely, some of them hermaphrodites, and some of them unisexual. But if, in fact, all hermaphrodites do occasionally intercross with other individuals, the difference between hermaphrodites and unisexual species, as far as function is concerned, becomes very small.
From these several considerations and from the many special facts which I have collected, but which I am not here able to give, I am strongly inclined to suspect that, both in the vegetable and animal kingdoms, an occasional intercross with a distinct individual is a law of nature. I am well aware that there are, on this view, many cases of difficulty, some of which I am trying to investigate. Finally then, we may conclude that in many organic beings, a cross between two individuals is an obvious necessity for each birth; in many others it occurs perhaps only at long intervals; but in none, as I suspect, can self-fertilisation go on for perpetuity.
Circumstances favourable to Natural Selection
This is an extremely intricate subject. A large amount of inheritable and diversified variability is favourable, but I believe mere individual differences suffice for the work. A large number of individuals, by giving a better chance for the appearance within any given period of profitable variations, will compensate for a lesser amount of variability in each individual, and is, I believe, an extremely important element of success. Though nature grants vast periods of time for the work of natural selection, she does not grant an indefinite period; for as all organic beings are striving, it may be said, to seize on each place in the economy of nature, if any one species does not become modified and improved in a corresponding degree with its competitors, it will soon be exterminated.
In man's methodical selection, a breeder selects for some definite object, and free intercrossing will wholly stop his work. But when many men, without intending to alter the breed, have a nearly common standard of perfection, and all try to get and breed from the best animals, much improvement and modification surely but slowly follow from this unconscious process of selection, notwithstanding a large amount of crossing with inferior animals. Thus it will be in nature; for within a confined area, with some place in its polity not so perfectly occupied as might be, natural selection will always tend to preserve all the individuals varying in the right direction, though in different degrees, so as better to fill up the unoccupied place. But if the area be large, its several districts will almost certainly present different conditions of life; and then if natural selection be modifying and improving a species in the several districts, there will be intercrossing with the other individuals of the same species on the confines of each. And in this case the effects of intercrossing can hardly be counterbalanced by natural selection always tending to modify all the individuals in each district in exactly the same manner to the conditions of each; for in a continuous area, the conditions will generally graduate away insensibly from one district to another. The intercrossing will most affect those animals which unite for each birth, which wander much, and which do not breed at a very quick rate. Hence in animals of this nature, for instance in birds, varieties will generally be confined to separated countries; and this I believe to be the case. 
In hermaphrodite organisms which cross only occasionally, and likewise in animals which unite for each birth, but which wander little and which can increase at a very rapid rate, a new and improved variety might be quickly formed on any one spot, and might there maintain itself in a body, so that whatever intercrossing took place would be chiefly between the individuals of the same new variety. A local variety when once thus formed might subsequently slowly spread to other districts. On the above principle, nurserymen always prefer getting seed from a large body of plants of the same variety, as the chance of intercrossing with other varieties is thus lessened.
Even in the case of slow-breeding animals, which unite for each birth, we must not overrate the effects of intercrosses in retarding natural selection; for I can bring a considerable catalogue of facts, showing that within the same area, varieties of the same animal can long remain distinct, from haunting different stations, from breeding at slightly different seasons, or from varieties of the same kind preferring to pair together.
Intercrossing plays a very important part in nature in keeping the individuals of the same species, or of the same variety, true and uniform in character. It will obviously thus act far more efficiently with those animals which unite for each birth; but I have already attempted to show that we have reason to believe that occasional intercrosses take place with all animals and with all plants. Even if these take place only at long intervals, I am convinced that the young thus produced will gain so much in vigour and fertility over the offspring from long-continued self-fertilisation, that they will have a better chance of surviving and propagating their kind; and thus, in the long run, the influence of intercrosses, even at rare intervals, will be great. If there exist organic beings which never intercross, uniformity of character can be retained amongst them, as long as their conditions of life remain the same, only through the principle of inheritance, and through natural selection destroying any which depart from the proper type; but if their conditions of life change and they undergo modification, uniformity of character can be given to their modified offspring, solely by natural selection preserving the same favourable variations.
Isolation, also, is an important element in the process of natural selection. In a confined or isolated area, if not very large, the organic and inorganic conditions of life will generally be in a great degree uniform; so that natural selection will tend to modify all the individuals of a varying species throughout the area in the same manner in relation to the same conditions. Intercrosses, also, with the individuals of the same species, which otherwise would have inhabited the surrounding and differently circumstanced districts, will be prevented. But isolation probably acts more efficiently in checking the immigration of better adapted organisms, after any physical change, such as of climate or elevation of the land, &c.; and thus new places in the natural economy of the country are left open for the old inhabitants to struggle for, and become adapted to, through modifications in their structure and constitution. Lastly, isolation, by checking immigration and consequently competition, will give time for any new variety to be slowly improved; and this may sometimes be of importance in the production of new species. If, however, an isolated area be very small, either from being surrounded by barriers, or from having very peculiar physical conditions, the total number of the individuals supported on it will necessarily be very small; and fewness of individuals will greatly retard the production of new species through natural selection, by decreasing the chance of the appearance of favourable variations.
If we turn to nature to test the truth of these remarks, and look at any small isolated area, such as an oceanic island, although the total number of the species inhabiting it, will be found to be small, as we shall see in our chapter on geographical distribution; yet of these species a very large proportion are endemic, that is, have been produced there, and nowhere else. Hence an oceanic island at first sight seems to have been highly favourable for the production of new species. But we may thus greatly deceive ourselves, for to ascertain whether a small isolated area, or a large open area like a continent, has been most favourable for the production of new organic forms, we ought to make the comparison within equal times; and this we are incapable of doing.
Although I do not doubt that isolation is of considerable importance in the production of new species, on the whole I am inclined to believe that largeness of area is of more importance, more especially in the production of species, which will prove capable of enduring for a long period, and of spreading widely. Throughout a great and open area, not only will there be a better chance of favourable variations arising from the large number of individuals of the same species there supported, but the conditions of life are infinitely complex from the large number of already existing species; and if some of these many species become modified and improved, others will have to be improved in a corresponding degree or they will be exterminated. Each new form, also, as soon as it has been much improved, will be able to spread over the open and continuous area, and will thus come into competition with many others. Hence more new places will be formed, and the competition to fill them will be more severe, on a large than on a small and isolated area. Moreover, great areas, though now continuous, owing to oscillations of level, will often have recently existed in a broken condition, so that the good effects of isolation will generally, to a certain extent, have concurred. Finally, I conclude that, although small isolated areas probably have been in some respects highly favourable for the production of new species, yet that the course of modification will generally have been more rapid on large areas; and what is more important, that the new forms produced on large areas, which already have been victorious over many competitors, will be those that will spread most widely, will give rise to most new varieties and species, and will thus play an important part in the changing history of the organic world.
We can, perhaps, on these views, understand some facts which will be again alluded to in our chapter on geographical distribution; for instance, that the productions of the smaller continent of Australia have formerly yielded, and apparently are now yielding, before those of the larger Europaeo-Asiatic area. Thus, also, it is that continental productions have everywhere become so largely naturalised on islands. On a small island, the race for life will have been less severe, and there will have been less modification and less extermination. Hence, perhaps, it comes that the flora of Madeira, according to Oswald Heer, resembles the extinct tertiary flora of Europe. All fresh-water basins, taken together, make a small area compared with that of the sea or of the land; and, consequently, the competition between fresh-water productions will have been less severe than elsewhere; new forms will have been more slowly formed, and old forms more slowly exterminated. And it is in fresh water that we find seven genera of Ganoid fishes, remnants of a once preponderant order: and in fresh water we find some of the most anomalous forms now known in the world, as the Ornithorhynchus and Lepidosiren, which, like fossils, connect to a certain extent orders now widely separated in the natural scale. These anomalous forms may almost be called living fossils; they have endured to the present day, from having inhabited a confined area, and from having thus been exposed to less severe competition.
To sum up the circumstances favourable and unfavourable to natural selection, as far as the extreme intricacy of the subject permits. I conclude, looking to the future, that for terrestrial productions a large continental area, which will probably undergo many oscillations of level, and which consequently will exist for long periods in a broken condition, will be the most favourable for the production of many new forms of life, likely to endure long and to spread widely. For the area will first have existed as a continent, and the inhabitants, at this period numerous in individuals and kinds, will have been subjected to very severe competition. When converted by subsidence into large separate islands, there will still exist many individuals of the same species on each island: intercrossing on the confines of the range of each species will thus be checked: after physical changes of any kind, immigration will be prevented, so that new places in the polity of each island will have to be filled up by modifications of the old inhabitants; and time will be allowed for the varieties in each to become well modified and perfected. When, by renewed elevation, the islands shall be re-converted into a continental area, there will again be severe competition: the most favoured or improved varieties will be enabled to spread: there will be much extinction of the less improved forms, and the relative proportional numbers of the various inhabitants of the renewed continent will again be changed; and again there will be a fair field for natural selection to improve still further the inhabitants, and thus produce new species.
That natural selection will always act with extreme slowness, I fully admit. Its action depends on there being places in the polity of nature, which can be better occupied by some of the inhabitants of the country undergoing modification of some kind. The existence of such places will often depend on physical changes, which are generally very slow, and on the immigration of better adapted forms having been checked. But the action of natural selection will probably still oftener depend on some of the inhabitants becoming slowly modified; the mutual relations of many of the other inhabitants being thus disturbed. Nothing can be effected, unless favourable variations occur, and variation itself is apparently always a very slow process. The process will often be greatly retarded by free intercrossing. Many will exclaim that these several causes are amply sufficient wholly to stop the action of natural selection. I do not believe so. On the other hand, I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time. I further believe, that this very slow, intermittent action of natural selection accords perfectly well with what geology tells us of the rate and manner at which the inhabitants of this world have changed.
This subject will be more fully discussed in our chapter on Geology; but it must be here alluded to from being intimately connected with natural selection. Natural selection acts solely through the preservation of variations in some way advantageous, which consequently endure. But as from the high geometrical powers of increase of all organic beings, each area is already fully stocked with inhabitants, it follows that as each selected and favoured form increases in number, so will the less favoured forms decrease and become rare. Rarity, as geology tells us, is the precursor to extinction. We can, also, see that any form represented by few individuals will, during fluctuations in the seasons or in the number of its enemies, run a good chance of utter extinction. But we may go further than this; for as new forms are continually and slowly being produced, unless we believe that the number of specific forms goes on perpetually and almost indefinitely increasing, numbers inevitably must become extinct. That the number of specific forms has not indefinitely increased, geology shows us plainly; and indeed we can see reason why they should not have thus increased, for the number of places in the polity of nature is not indefinitely great, not that we have any means of knowing that any one region has as yet got its maximum of species; probably no region is as yet fully stocked, for at the Cape of Good Hope, where more species of plants are crowded together than in any other quarter of the world, some foreign plants have become naturalised, without causing, as far as we know, the extinction of any natives.
Furthermore, the species which are most numerous in individuals will have the best chance of producing within any given period favourable variations. We have evidence of this, in the facts given in the second chapter, showing that it is the common species which afford the greatest number of recorded varieties, or incipient species. Hence, rare species will be less quickly modified or improved within any given period, and they will consequently be beaten in the race for life by the modified descendants of the commoner species.
From these several considerations I think it inevitably follows, that as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct. The forms which stand in closest competition with those undergoing modification and improvement, will naturally suffer most. And we have seen in the chapter on the Struggle for Existence that it is the most closely-allied forms, varieties of the same species, and species of the same genus or of related genera, which, from having nearly the same structure, constitution, and habits, generally come into the severest competition with each other. Consequently, each new variety or species, during the progress of its formation, will generally press hardest on its nearest kindred, and tend to exterminate them. We see the same process of extermination amongst our domesticated productions, through the selection of improved forms by man. Many curious instances could be given showing how quickly new breeds of cattle, sheep, and other animals, and varieties of flowers, take the place of older and inferior kinds. In Yorkshire, it is historically known that the ancient black cattle were displaced by the long-horns, and that these 'were swept away by the short-horns' (I quote the words of an agricultural writer) 'as if by some murderous pestilence.'
Divergence of Character
The principle, which I have designated by this term, is of high importance on my theory, and explains, as I believe, several important facts. In the first place, varieties, even strongly-marked ones, though having somewhat of the character of species, as is shown by the hopeless doubts in many cases how to rank them, yet certainly differ from each other far less than do good and distinct species. Nevertheless, according to my view, varieties are species in the process of formation, or are, as I have called them, incipient species. How, then, does the lesser difference between varieties become augmented into the greater difference between species? That this does habitually happen, we must infer from most of the innumerable species throughout nature presenting well-marked differences; whereas varieties, the supposed prototypes and parents of future well-marked species, present slight and ill-defined differences. Mere chance, as we may call it, might cause one variety to differ in some character from its parents, and the offspring of this variety again to differ from its parent in the very same character and in a greater degree; but this alone would never account for so habitual and large an amount of difference as that between varieties of the same species and species of the same genus.
As has always been my practice, let us seek light on this head from our domestic productions. We shall here find something analogous. A fancier is struck by a pigeon having a slightly shorter beak; another fancier is struck by a pigeon having a rather longer beak; and on the acknowledged principle that 'fanciers do not and will not admire a medium standard, but like extremes,' they both go on (as has actually occurred with tumbler-pigeons) choosing and breeding from birds with longer and longer beaks, or with shorter and shorter beaks. Again, we may suppose that at an early period one man preferred swifter horses; another stronger and more bulky horses. The early differences would be very slight; in the course of time, from the continued selection of swifter horses by some breeders, and of stronger ones by others, the differences would become greater, and would be noted as forming two sub-breeds; finally, after the lapse of centuries, the sub-breeds would become converted into two well-established and distinct breeds. As the differences slowly become greater, the inferior animals with intermediate characters, being neither very swift nor very strong, will have been neglected, and will have tended to disappear. Here, then, we see in man's productions the action of what may be called the principle of divergence, causing differences, at first barely appreciable, steadily to increase, and the breeds to diverge in character both from each other and from their common parent.
We can clearly see this in the case of animals with simple habits. Take the case of a carnivorous quadruped, of which the number that can be supported in any country has long ago arrived at its full average. If its natural powers of increase be allowed to act, it can succeed in increasing (the country not undergoing any change in its conditions) only by its varying descendants seizing on places at present occupied by other animals: some of them, for instance, being enabled to feed on new kinds of prey, either dead or alive; some inhabiting new stations, climbing trees, frequenting water, and some perhaps becoming less carnivorous. The more diversified in habits and structure the descendants of our carnivorous animal became, the more places they would be enabled to occupy. What applies to one animal will apply throughout all time to all animals, that is, if they vary, for otherwise natural selection can do nothing. So it will be with plants. It has been experimentally proved, that if a plot of ground be sown with several distinct genera of grasses, a greater number of plants and a greater weight of dry herbage can thus be raised. The same has been found to hold good when first one variety and then several mixed varieties of wheat have been sown on equal spaces of ground. Hence, if any one species of grass were to go on varying, and those varieties were continually selected which differed from each other in at all the same manner as distinct species and genera of grasses differ from each other, a greater number of individual plants of this species of grass, including its modified descendants, would succeed in living on the same piece of ground. And we well know that each species and each variety of grass is annually sowing almost countless seeds; and thus, as it may be said, is striving its utmost to increase its numbers.
Consequently, I cannot doubt that in the course of many thousands of generations, the most distinct varieties of any one species of grass would always have the best chance of succeeding and of increasing in numbers, and thus of supplanting the less distinct varieties; and varieties, when rendered very distinct from each other, take the rank of species.
The truth of the principle, that the greatest amount of life can be supported by great diversification of structure, is seen under many natural circumstances. In an extremely small area, especially if freely open to immigration, and where the contest between individual and individual must be severe, we always find great diversity in its inhabitants. For instance, I found that a piece of turf, three feet by four in size, which had been exposed for many years to exactly the same conditions, supported twenty species of plants, and these belonged to eighteen genera and to eight orders, which shows how much these plants differed from each other. So it is with the plants and insects on small and uniform islets; and so in small ponds of fresh water. Farmers find that they can raise most food by a rotation of plants belonging to the most different orders: nature follows what may be called a simultaneous rotation. Most of the animals and plants which live close round any small piece of ground, could live on it (supposing it not to be in any way peculiar in its nature), and may be said to be striving to the utmost to live there; but, it is seen, that where they come into the closest competition with each other, the advantages of diversification of structure, with the accompanying differences of habit and constitution, determine that the inhabitants, which thus jostle each other most closely, shall, as a general rule, belong to what we call different genera and orders.
The same principle is seen in the naturalisation of plants through man's agency in foreign lands. It might have been expected that the plants which have succeeded in becoming naturalised in any land would generally have been closely allied to the indigenes; for these are commonly looked at as specially created and adapted for their own country. It might, also, perhaps have been expected that naturalised plants would have belonged to a few groups more especially adapted to certain stations in their new homes. But the case is very different; and Alph. De Candolle has well remarked in his great and admirable work, that floras gain by naturalisation, proportionally with the number of the native genera and species, far more in new genera than in new species. To give a single instance: in the last edition of Dr Asa Gray's 'Manual of the Flora of the Northern United States,' 260 naturalised plants are enumerated, and these belong to 162 genera. We thus see that these naturalised plants are of a highly diversified nature. They differ, moreover, to a large extent from the indigenes, for out of the 162 genera, no less than 100 genera are not there indigenous, and thus a large proportional addition is made to the genera of these States.
By considering the nature of the plants or animals which have struggled successfully with the indigenes of any country, and have there become naturalised, we can gain some crude idea in what manner some of the natives would have had to be modified, in order to have gained an advantage over the other natives; and we may, I think, at least safely infer that diversification of structure, amounting to new generic differences, would have been profitable to them.
After the foregoing discussion, which ought to have been much amplified, we may, I think, assume that the modified descendants of any one species will succeed by so much the better as they become more diversified in structure, and are thus enabled to encroach on places occupied by other beings. Now let us see how this principle of great benefit being derived from divergence of character, combined with the principles of natural selection and of extinction, will tend to act.
In a large genus it is probable that more than one species would vary. In the diagram I have assumed that a second species (I) has produced, by analogous steps, after ten thousand generations, either two well-marked varieties (w10 and z10) or two species, according to the amount of change supposed to be represented between the horizontal lines. After fourteen thousand generations, six new species, marked by the letters n14 to z14, are supposed to have been produced. In each genus, the species, which are already extremely different in character, will generally tend to produce the greatest number of modified descendants; for these will have the best chance of filling new and widely different places in the polity of nature: hence in the diagram I have chosen the extreme species (A), and the nearly extreme species (I), as those which have largely varied, and have given rise to new varieties and species. The other nine species (marked by capital letters) of our original genus, may for a long period continue transmitting unaltered descendants; and this is shown in the diagram by the dotted lines not prolonged far upwards from want of space.
But during the process of modification, represented in the diagram, another of our principles, namely that of extinction, will have played an important part. As in each fully stocked country natural selection necessarily acts by the selected form having some advantage in the struggle for life over other forms, there will be a constant tendency in the improved descendants of any one species to supplant and exterminate in each stage of descent their predecessors and their original parent. For it should be remembered that the competition will generally be most severe between those forms which are most nearly related to each other in habits, constitution, and structure. Hence all the intermediate forms between the earlier and later states, that is between the less and more improved state of a species, as well as the original parent-species itself, will generally tend to become extinct. So it probably will be with many whole collateral lines of descent, which will be conquered by later and improved lines of descent. If, however, the modified offspring of a species get into some distinct country, or become quickly adapted to some quite new station, in which child and parent do not come into competition, both may continue to exist.
If then our diagram be assumed to represent a considerable amount of modification, species (A) and all the earlier varieties will have become extinct, having been replaced by eight new species (a14 to m14); and (I) will have been replaced by six (n14 to z14) new species.
It is worth while to reflect for a moment on the character of the new species F14, which is supposed not to have diverged much in character, but to have retained the form of (F), either unaltered or altered only in a slight degree. In this case, its affinities to the other fourteen new species will be of a curious and circuitous nature. Having descended from a form which stood between the two parent-species (A) and (I), now supposed to be extinct and unknown, it will be in some degree intermediate in character between the two groups descended from these species. But as these two groups have gone on diverging in character from the type of their parents, the new species (F14) will not be directly intermediate between them, but rather between types of the two groups; and every naturalist will be able to bring some such case before his mind.
In the diagram, each horizontal line has hitherto been supposed to represent a thousand generations, but each may represent a million or hundred million generations, and likewise a section of the successive strata of the earth's crust including extinct remains. We shall, when we come to our chapter on Geology, have to refer again to this subject, and I think we shall then see that the diagram throws light on the affinities of extinct beings, which, though generally belonging to the same orders, or families, or genera, with those now living, yet are often, in some degree, intermediate in character between existing groups; and we can understand this fact, for the extinct species lived at very ancient epochs when the branching lines of descent had diverged less.
I see no reason to limit the process of modification, as now explained, to the formation of genera alone. If, in our diagram, we suppose the amount of change represented by each successive group of diverging dotted lines to be very great, the forms marked a14 to p14, those marked b14 and f14, and those marked o14 to m14, will form three very distinct genera. We shall also have two very distinct genera descended from (I) and as these latter two genera, both from continued divergence of character and from inheritance from a different parent, will differ widely from the three genera descended from (A), the two little groups of genera will form two distinct families, or even orders, according to the amount of divergent modification supposed to be represented in the diagram. And the two new families, or orders, will have descended from two species of the original genus; and these two species are supposed to have descended from one species of a still more ancient and unknown genus.
We have seen that in each country it is the species of the larger genera which oftenest present varieties or incipient species. This, indeed, might have been expected; for as natural selection acts through one form having some advantage over other forms in the struggle for existence, it will chiefly act on those which already have some advantage; and the largeness of any group shows that its species have inherited from a common ancestor some advantage in common. Hence, the struggle for the production of new and modified descendants, will mainly lie between the larger groups, which are all trying to increase in number. One large group will slowly conquer another large group, reduce its numbers, and thus lessen its chance of further variation and improvement. Within the same large group, the later and more highly perfected sub-groups, from branching out and seizing on many new places in the polity of Nature, will constantly tend to supplant and destroy the earlier and less improved sub-groups. Small and broken groups and sub-groups will finally tend to disappear. Looking to the future, we can predict that the groups of organic beings which are now large and triumphant, and which are least broken up, that is, which as yet have suffered least extinction, will for a long period continue to increase. But which groups will ultimately prevail, no man can predict; for we well know that many groups, formerly most extensively developed, have now become extinct. Looking still more remotely to the future, we may predict that, owing to the continued and steady increase of the larger groups, a multitude of smaller groups will become utterly extinct, and leave no modified descendants; and consequently that of the species living at any one period, extremely few will transmit descendants to a remote futurity. 
I shall have to return to this subject in the chapter on Classification, but I may add that on this view of extremely few of the more ancient species having transmitted descendants, and on the view of all the descendants of the same species making a class, we can understand how it is that there exist but very few classes in each main division of the animal and vegetable kingdoms. Although extremely few of the most ancient species may now have living and modified descendants, yet at the most remote geological period, the earth may have been as well peopled with many species of many genera, families, orders, and classes, as at the present day.
Summary of Chapter
Whether natural selection has really thus acted in nature, in modifying and adapting the various forms of life to their several conditions and stations, must be judged of by the general tenour and balance of evidence given in the following chapters. But we already see how it entails extinction; and how largely extinction has acted in the world's history, geology plainly declares. Natural selection, also, leads to divergence of character; for more living beings can be supported on the same area the more they diverge in structure, habits, and constitution, of which we see proof by looking at the inhabitants of any small spot or at naturalised productions. Therefore during the modification of the descendants of any one species, and during the incessant struggle of all species to increase in numbers, the more diversified these descendants become, the better will be their chance of succeeding in the battle of life. Thus the small differences distinguishing varieties of the same species, will steadily tend to increase till they come to equal the greater differences between species of the same genus, or even of distinct genera.
We have seen that it is the common, the widely-diffused, and widely-ranging species, belonging to the larger genera, which vary most; and these will tend to transmit to their modified offspring that superiority which now makes them dominant in their own countries. Natural selection, as has just been remarked, leads to divergence of character and to much extinction of the less improved and intermediate forms of life. On these principles, I believe, the nature of the affinities of all organic beings may be explained. It is a truly wonderful fact, the wonder of which we are apt to overlook from familiarity, that all animals and all plants throughout all time and space should be related to each other in group subordinate to group, in the manner which we everywhere behold, namely, varieties of the same species most closely related together, species of the same genus less closely and unequally related together, forming sections and sub-genera, species of distinct genera much less closely related, and genera related in different degrees, forming sub-families, families, orders, sub-classes, and classes. The several subordinate groups in any class cannot be ranked in a single file, but seem rather to be clustered round points, and these round other points, and so on in almost endless cycles. On the view that each species has been independently created, I can see no explanation of this great fact in the classification of all organic beings; but, to the best of my judgment, it is explained through inheritance and the complex action of natural selection, entailing extinction and divergence of character, as we have seen illustrated in the diagram.
The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. The green and budding twigs may represent existing species; and those produced during each former year may represent the long succession of extinct species. At each period of growth all the growing twigs have tried to branch out on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life. The limbs divided into great branches, and these into lesser and lesser branches, were themselves once, when the tree was small, budding twigs; and this connexion of the former and present buds by ramifying branches may well represent the classification of all extinct and living species in groups subordinate to groups. Of the many twigs which flourished when the tree was a mere bush, only two or three, now grown into great branches, yet survive and bear all the other branches; so with the species which lived during long-past geological periods, very few now have living and modified descendants. From the first growth of the tree, many a limb and branch has decayed and dropped off; and these lost branches of various sizes may represent those whole orders, families, and genera which have now no living representatives, and which are known to us only from having been found in a fossil state. As we here and there see a thin straggling branch springing from a fork low down in a tree, and which by some chance has been favoured and is still alive on its summit, so we occasionally see an animal like the Ornithorhynchus or Lepidosiren, which in some small degree connects by its affinities two large branches of life, and which has apparently been saved from fatal competition by having inhabited a protected station. 
As buds give rise by growth to fresh buds, and these, if vigorous, branch out and overtop on all sides many a feebler branch, so by generation I believe it has been with the great Tree of Life, which fills with its dead and broken branches the crust of the earth, and covers the surface with its ever branching and beautiful ramifications. | <urn:uuid:702e28fa-06f4-4820-885b-a5a5ba33e04e> | 3 | 3.3125 | 0.030885 | en | 0.967441 | http://literature.org/authors/darwin-charles/the-origin-of-species/chapter-04.html |
a journal of modern society & culture
The Clogged Capillaries of the Peruvian Amazon
When one decides to take the trip into the jungle city of Iquitos – the largest city in the world inaccessible by road – there are two options. The first is a flight on one of Peru’s many domestic airlines, which run 5 to 10 times per day with a flight time of approximately 2 hours. This will cost between $100 and $200 one way, far out of the range of the average Peruvian worker, who in Lima can make 50 soles per day ($18.50 at today’s exchange rate) and in the provinces a mere 25-30. And certainly out of the range of my budget as a traveling journalist on a mission to send home my stories of peeling through the carefully crafted and commodified layers of culture that are for sale on the international market to anyone willing to pay the steep fees for “experiences”.
The need to travel to and from the Amazon on a budget has given rise to the network of river ships that carry cargo, livestock, and people to and from Tarapoto, Iquitos, and Pucallpa – to name a few Peruvian ports – and sustain the human presence deep within the jungle. As I arrived at the port of Yurimaguas that night, after having been delayed a few hours by the landslides that often plague the roads swerving between mountains in the high jungle, I was presented with a pleasant and comforting illusion, as most things present themselves when you are a foreigner traveling through the jungle. There was an empty ship; its pale blue steel stood as a reminder to all of how the sky and heavens reflect to the iris before being shrouded in a thick pillow of clouds that replenish the Amazon with its life source of sudden and heavy sheets of rain. The three floors were stacked like a Jenga puzzle, with neither windows nor walls, allowing the few gusts of wind that find themselves in this area of the world to break the stranglehold and provide relief to all those suffering from the stifling humidity. The green, murky waters of the Amazon slapped up against the side of the ship as it shuddered up and down with the current; a whole new world of mysterious and magical creatures living below the river’s dirty shell, adorned with drifting pieces of wood and plastic soda bottles.
The ride lasts for 3 days, but nothing is ever on schedule. The ships leave as they fill, and after learning that most of the cargo trucks were jammed in the same landslide that had affected me, I quickly began to understand why the ship was empty for the first night. In order to travel on the Amazon a hammock was required, unless one preferred to spend the nights on the steel floor, as well as some kind of plate or bowl to receive the scoops of rice and boiled plantains that were provided with passage. As the ship filled with passengers the following morning and hammocks were tied up encircling me on all sides, my unexpected quest to learn about the Amazon region through the eyes of my fellow passengers began. Most adults only said a few words to me during mealtimes, still trying to get used to the fact that I was obviously not from Peru yet was using the local mode of transport. The children would always be running by asking me questions, having me point out where I came from on maps that were usually only of Peru, dumbfounded when I tried to explain that my country was far above the limits of the paper they held in front of me. In turn they told me about their lives, their favorite foods, their hopes and thoughts.
As the ship left port every passenger crowded along the edges, hanging onto the steel poles that wrapped the ship, waving goodbye to relatives, staring out onto the port, or examining the herd of cattle encased behind wooden railings hastily latched together at the last minute before departure. The first observation that took my breath away came only a few hours into the voyage: I realized the trees were not getting any larger. Originally it seemed that the short trees – more reminiscent of a local park in the New Jersey suburb where I grew up than of the legendary Amazon rainforest – were due to the close proximity of the port town of Yurimaguas. However, as we progressed onward through the currents of the Amazon, I had the feeling that perhaps I wasn’t in the Amazon at all but rather in a plant nursery where the latest decorative styles were being germinated for corporate offices on Wall St.
“What do you think?” a middle-aged man asked me after lunch, speaking slowly, hesitant to see if I would even respond in Spanish. I told him I was very surprised – what had happened to the jungle? Where were the trees whose roots were so large that they burst out of the ground and grew into caverns large enough to live inside? Where was the height that led to the development of the multiple levels of the rainforest, something I remembered being taught about in elementary school? He chuckled, leaning his head back and closing his eyes, taking a deep breath of the air that was free of the smog of idling ships and sawdust.
“I remember when the trees were tall enough to reach the sky,” he said, looking up. “I couldn’t see above any of them, and under each canopy there was a whole wilderness of animals, flying from branch to branch, swinging, singing. My father taught me the way to navigate through it: always use the machete to cut to one side.” He made the motions with his right hand, cutting through the breeze that rushed along the deck. “That way, when you retrace your steps, you can always follow your path. Things haven’t been that way in a long time. Since the lumber companies came in, whole fields have been cleared out. It started with the forests near the cities and villages and worked its way outwards, to the point where the lumber companies now need to take boats for twenty days through the minor river systems to find trees that are even worth harvesting.”
* *
According to a recent WikiLeaks cable, as much as 90% of Peru’s broad-leaf mahogany – a species considered endangered, prompting Brazil to halt its exports and Bolivia to drastically reduce theirs – is exported illegally, much of it with the government’s knowledge. To top it off, the United States is the purchaser of the majority of this timber; one statistic claims that the US bought 88% of Peru’s 2005 exports. At this rate it is no surprise the Amazon is slowly being converted into a suburban backyard, with its wildlife and peoples taking the hit.
Throughout the journey the ship made frequent stops in small indigenous communities, the brown patched roofs of their houses blending into the dirt and mud on the banks of the river. All of the 15 or 20 stops followed the same routine. Men from the village would approach the ship carrying large bundles of plantains, handing them over to the workers on the ship in exchange for western commodities: soda, beer, and potato chips. The women and children would rush onto the ship, calculating the best strategy to reach the 300 people anxiously waiting for them on the top deck – passengers sometimes dangling their arms over the edges and motioning to have articles thrown up to them – to buy and trade foodstuffs. During these moments exotic fruits, raw vegetables, cooked beef, grilled boar, and barbecued river fish were all sold to the passengers of the ship, at prices ranging between $0.25 and at most $1.50. Most of the time these indigenous salespeople would leave the ship sold out, especially if the stop happened to fall before mealtimes, and would use the little bits of money they had collected to purchase the products that couldn’t be produced within the constraints of their sustainable lifestyle. At the end of the day, even the most isolated communities can enjoy a glass of Coca-Cola during meals.
This image seemed to me the most obvious example of the integration of the Peruvian Amazon into the global economy; the most destructive examples, however, are the hardest to see. The Amazon is called by many here the “lungs of the earth”, a slogan meant to give the locals pride in their communities, attract tourists, and draw attention to the enormous problems we would face globally if the Amazon were altered enough to slow down or even stop doing its job. It’s not necessary to explain here what happens to a living being whose lungs slowly stop working. A recent report titled Rainforest Deforestation and Climate Change by the Environmental Defense Fund (www.edf.org) estimates that deforestation – both the removal of trees and the use of cleared lands for cattle grazing and crop growing – released the equivalent of 15-35% of annual fossil fuel emissions during the 1990s. Of course, with today’s scientific consensus on the issue, it is clear that climate change is not a linear process but a web of feedback loops, and the Amazon is being harmed by these logging practices in a compounding manner – quite apart from the loss of the aesthetic “jungle” mystery whose absence had taken me by surprise and shocked me into attention.
The timber economy has pushed its never-ending lust for profit so deep into the jungle that its arms have reached and ensnared areas that have never even been seen by modern man. Survival International, an organization fighting for the rights of uncontacted peoples all over the world, estimates in its numerous reports and awareness campaigns that the few remaining uncontacted tribes that call the South American Amazon home are being blinded by the bright light of modernity through the actions of logging companies and the “representatives of the new world” they employ and send out, axes sharpened. A radical change from the missionaries, church bureaucrats, and conquistadors that had courted the cousins of these uncontacted tribes centuries earlier, but an ironic reflection of whom our society decides to send out to bring the last patches of “barbarism” into “enlightenment”, whether we consciously choose to or not. The tribes often have to relocate, pushing them into confrontation with other tribes, and some who have chosen to make contact have described in horrific detail the fear that the monstrous stone-skinned animals bring to their people as they eat through trees, leaving desolate wastelands behind them.
The ship had no set time of arrival; in a Kafkaesque way, everyone seemed to know exactly where we were and how long we had traveled, though they all disagreed. I would get my updates from the man who slept to my right. On the fourth day, he was sitting up in his hammock peeling apart a papaya, motioning me over and gently tossing me a slab of its orange flesh, saying, “We’ll be there tomorrow morning, but I won’t see you. I’m getting off before Iquitos, and I’m sure it will be around 3 or 4 in the morning.” I ate the fruit with him and talked about his plans now that the temporary job he had held in the coastal city of Piura had ended. We hurled the papaya skins over the edge of the ship from our hammocks, where they stayed afloat on the river like small toy boats. He told me he was sure to be back, but for now he was looking forward to spending time with his family, whom he hadn’t seen in 5 months. The next morning, as he promised, he was gone; I had arrived in Iquitos.
A city with a dense and rich past but an uncertain future: the Plaza de Armas and surrounding blocks were filled with expensive hotels, tour guide offices touting the latest and greatest in jungle getaways, fancy restaurants serving fusions of the local cuisine, souvenir shops, and the leftover ruins of the famous synagogue – a remnant of the Sephardic Jews of Morocco who arrived here during the rubber boom of the late 19th century. Marcel, the owner of the guesthouse I was staying in and supposedly a descendant of this old Jewish community, told me that at one time Iquitos was such a popular tourist destination that there were non-stop flights from New York City. They had to be cancelled because the large planes were killing off the thousands of vultures that lived off the leftover animal parts rotting around the market after a long day’s work. “They had to clean the dead birds out of the jet engines every time! I guess the bastards got too cheap to continue,” he laughed, with a thick accent and a cigarette dangling between his lips as he wiped down the ashtrays at the sink. He was a man with an extensive knowledge of the city, having lived here all of his life but attended university in Lima, where he learned English and studied politics. Marcel knew everyone in Peru, either through what seemed to be flaky business ties or through the work he did with his beloved political party, Acción Popular. We passed our nights talking about José Carlos Mariátegui, the famous Peruvian communist of the early 20th century, and what he would think of Henry, the infamous man known to everyone in Iquitos only by his first name, who owned all the cargo ships and was now investing in the construction of oil tankers specially designed for river travel. Marcel would often stumble out of his chair laughing, gasping for breath as he described Henry’s ties to the Amazonian organized crime ring, tears pouring out of his dry, bloodshot eyes.
In Iquitos, as in any city that has had to make the switch from a semi-sustainable local economy to a tourist economy, everything is available to the tourist at a price. The indigenous people who still live in the outlying jungle surrounding the city will perform, sing, and dance for anyone who is willing to buy an anaconda-bone bracelet afterwards. The Bora, one of the indigenous groups I had seen perform, do not actually live the life they put up for sale to tourists. After a further investigation – which just involved walking through the jungle with some friends instead of leaving on the small rented motorboat we came on – I discovered, tucked behind the thick jungle foliage, what looked like a small suburban neighborhood from one of the larger cities of the Peruvian coast, with houses, plumbing, and partially paved roads.
The whole polis is organized around making money by selling goods, photo opportunities, and the shades of long-forgotten cultures. Even exotic animals are traded in the marketplace of Belen. Walking up and around the fish-gut-stained concrete I found nets of colorful feathered birds, monkeys clawing and swinging in cages, and scaly prehistoric fish apparently able to live outside of water for three days. A booming economy of child sex work is brought to awareness by the large mural on the side of an old building near the main plaza reading, “No Al Turismo Sexual Infantil”. The stains of the neo-liberal drive to turn everything, and everyone, into an object to be bought or sold, traded or trashed, have penetrated deep into the river systems with lumber, oil, and tourism of all varieties.
It has become a war on two fronts in the Amazon. As if deforestation were not enough, petroleum has become the new high-priced commodity bubbling below the depths, and everyone is waiting on line to get a piece of the action. I remember the conversation I had with an Argentinean couple who met while living in the jungle for months, organizing tribes against the exploitation of the oil companies. They were in their early twenties and had come out into the jungle because they were “tired of hearing about the change, we wanted to make the change,” as they put it. They expected an idealized life in the jungle communities, without the private property, corruption, and political scandal they were all too used to back home in Córdoba. “I remember the first moment reality smacked me,” the man said.
“I woke up and went to have a bite for breakfast before going off to find something to do for the day and the whole tribe was in disarray. ‘He is gone, he went with the men’ people were telling me. I was confused. It was only later that evening that I was able to piece it all together. That the man who was put in charge to lead the community, the tribal chief, had run off with about $1000 that the oil companies had paid him after selling them the community’s land. Everything that they were on, poof, gone, open for exploration, who’s next? The next few days the roads started going up, they were in perfect grids that criss-crossed through the jungle. This wasn’t because they had found anything yet, this was just for exploration, but the time was coming and the whole community was pushed over.”
The abuse is felt by everyone and resonates in all corners of the jungle. While the reaction to it can manifest itself in ways that lead to local empowerment and a general improvement in people’s standards of living, ideology has a way of expressing itself in the most despicable ways. On a few occasions in different jungle cities throughout Peru and Ecuador I noticed storefronts that hung swastikas in their windows. I was always stunned and confused, walking past slowly, staring at the icon, never able to find anyone to ask about it. Finally I had the opportunity to confront what to me was hypocrisy when I found a small storefront in the marketplace with the symbol. The owner of the shop, which specialized in old plastic cell phone casings, explained that this is the symbol of the jungle independence movement. “We’re not racists”, he told me, shocked that I would think that,
“We are exactly what the symbol means: we are Nationalist Socialists. Our nation is the Amazon, and from the discovery of Peru until now it has been exploited while we are left with nothing. First it was the Spanish, then it was the Americans and Europeans, and now it’s our own people, the people on the coast, the limeños. They take all of our resources to fund the country that they call Peru, but where is our place in Peru? You’ve seen it: how many restaurants do you find in Lima serving cuisine from the jungle? Where have you seen our music and dance? How easy is it for an Amazonian to find a job in the city? It’s impossible. They use us for everything we have and in the end we get nothing. That is why we’re standing up and saying enough is enough: we want our independence and our resources for ourselves.”
By this time the man had a few customers in his shop egging him on and throwing in examples. “We don’t get any money from the ministry of tourism!” One man in the crowd yelled, “They don’t even know what juanes or cecina is!” another woman said jokingly to me as I left the store.
The week I spent in Iquitos gave me much to think about during the six-day boat journey to Pucallpa. I was born in the United States, a country that is part of the global monster, consuming resources and causing destruction as it explores new ways to feed the addiction. At the end of the day, the events of the past two weeks are direct expressions, however negative they may be, of the way that I, my family, and my friends live at home. This is the ugly side of what it means to be developed in the 21st century, heading into a crisis of global overdevelopment while the planet lurches and staggers in reaction to our changes. Facing the challenges we have to solve today, I believe it is extremely necessary to redefine what it means to be “developed”.
photo by Magdalena García B.
It did not take long before I began to see the physical realities behind these thoughts. While I remembered passing many indigenous communities on the banks of the Amazon throughout my journey to Iquitos, the picture that passed in front of me as I lay in my hammock was not the same. The river had flooded the area, taking houses, livestock, and crops with it. Every community I passed was in a different stage of shock. Some seemed lifeless, all human activity gone, with only rooftops poking through the running water. Others had adapted, planks and bridges already in place, people running supplies to and fro and going about their daily lives. Whenever the ship stopped to unload and take on goods, its workers took lists from the communities of the types of emergency supplies they would need to build back up again. Of course, these probably wouldn’t reach them for at least two weeks. Life had turned upside down.
A few days of confusion and shock from the people onboard and we were finally able to get a straight answer. Ten or twenty years ago, according to a passenger on the ship who was from one of the villages, the river had receded, giving way for about 50 feet of usable land. At first people were skeptical, thinking this was just a temporary event. “After about 5 years” he said, almost in disbelief, “we thought it was the new normal and our villages adapted, we moved closer to the river and expanded. I’m just as confused as you are, I don’t know why it came back, and after so long.” I saw him peering out over the edge of the ship everyday from then on, observing the destruction with a fixed stare on every village we passed.
On my final day aboard the ship destined for Pucallpa, I was chatting and sharing a cigarette with a 17-year-old boy heading for Lima to find a job. He stopped suddenly and stared at the shore, the land and trees slowly creeping by. I turned my head, trying to get a glimpse of what he was so mesmerized by. Finally I found the courage to break the silence and ask, “What is it? What are you looking at?” Still in a babble of disbelief, he whispered, “This is the first time I have ever seen a mountain.” In the distance, climbing towards the sky, its peak lost in the grey blanket, there was something: not a mountain, at least not like the ones I had seen crossing the Andes, but a hill at least. I laughed with joy and shared in the mystery, sighing as I thought of all the things this boy was going to discover, our collective reflection as humans over 10,000 years of civilization on this planet, in the vast 9-million-person metropolis of Lima.
Riad Azar graduated from William Paterson University in 2011. He is a travelling independent journalist currently making his way from Ushuaia to New York writing about politics, society and social struggles as well as fictions. His website is www.nomadjournalism.com, and he can be reached at [email protected].
Could you pass a US citizenship test?
Question 26 of 96
26. What is one promise you make when you become a United States citizen?
never travel outside the United States
give up loyalty to other countries
disobey the laws of the United States
not defend the Constitution and laws of the United States
Game of Drones
AP Photo/Kirsty Wigglesworth, File
The recent release of White House memos outlining the legal justifications the Obama administration believes it has to use drone strikes—against both foreign nationals and American citizens—reminds us that while the American public was otherwise occupied, a revolution in warfare was beginning. This revolution has some ways to go—we're not quite at the point where our next war is going to be fought by nothing but robots on land, sea, and air. But drones become more important not just to our military but to militaries all over the world with each passing year.
Unmanned aerial vehicles, and their use in war, have a history nearly as long as aviation itself. During a siege of Venice in 1849, Austria launched balloons carrying explosives over the city—the first recorded use of aerial bombing. In 1863, a New York inventor named Charles Perley patented an unmanned aerial bombing balloon for use in the Civil War (it proved less than reliable, so it had no effect on the war's outcome). The United States tested unmanned vehicles during World War I, but the war ended before any could be deployed. During World War II, Germany used the V-1, an unmanned plane that would fly to its target and detonate a bomb (or, as often happened, crash along the way), and in response the U.S. Navy retrofitted planes to be flown by remote control to target V-1 launch sites. Israel, the only country that currently rivals the United States in its use of military drones, has been building its own since the 1970s.
The current wave of drone acquisition and use isn't so much a new development as a long tradition, accelerated.
A German V-1 flying bomb from World War II
One of the privileges of global military hegemony is that we take it as our right to fly unmanned aircraft into other countries, see what's happening on the ground, and if we choose, launch a strike. If another country did that in our territory—let's say if China located a dissident expatriate it considered an enemy of the state living in Hawaii, and took him out with a missile launched from a drone—we'd consider it an unconscionable violation of our sovereignty and an act of war.
While it's unlikely that any country will be launching drone strikes onto U.S. territory any time soon, the time when only the United States sends an unmanned aircraft over a border to execute "kinetic" operations may not last much longer. As Peter Bergen and Jennifer Rowland of the New America Foundation wrote, "Just as the U.S. government justifies its drone strikes with the argument that it is at war with al Qaeda and its affiliates, one could imagine that India in the not too distant future might launch such attacks against suspected terrorists in Kashmir, or China might strike Uighur separatists in western China, or Iran might attack Baluchi nationalists along its border with Pakistan."
And we're helping along the proliferation that could make it more likely. The United States is far and away the world's leading arms merchant, supplying both developed and developing countries with all manner of weaponry, and we're selling drones abroad as well. Last year the Defense Department released policy guidelines listing 66 countries that would be eligible to buy drones from U.S. manufacturers. As yet, the government has allowed armed drones to be sold only to a few close allies, with the rest being allowed to buy surveillance drones.
Those that can't get the drones they want from us buy them from elsewhere (Israel is a big seller) or develop them themselves. According to a 2012 report from the Government Accountability Office, there are now 75 countries that possess unmanned aerial vehicles (UAVs), from large powers like the U.S. and China all the way down to places like Angola and Latvia. But that covers all kinds of UAVs; the International Institute for Strategic Studies has identified 11 countries—the United States, France, Germany, Italy, Turkey, the United Kingdom, Russia, China, India, Iran, and Israel—that have armed military drones.
Other than the United States, no country is moving as aggressively to build and deploy drones as China, which is busily constructing drone bases and unveiling one new drone model after another, many of which appear to be clones of UAVs developed for the U.S. military. Although the Chinese are eager to boast of their progress, they keep the details secret, so we have no way of knowing how many drones they have, what their capabilities are, or even if the models they've shown to the world at airshows can fly.
Before the war in Afghanistan began, Pentagon spending on unmanned aerial vehicles was modest, amounting to a few hundred million dollars a year, pocket change in a budget that runs into the hundreds of billions. But America's new protracted conflicts against dispersed enemies, combined with advancements in drone technology, made drones an increasingly attractive option for the military, and spending rose precipitously to its current level of just under $4 billion a year (that number covers only procurement, the cost of buying new drones; it doesn't count maintenance and operating costs). That's still a relatively small amount of money, but one of the chief attractions of drones is that they cost much less to purchase than many other kinds of equipment (those figures also don't include the CIA's drone program, which is separate and secret). The most expensive drone will run the Pentagon around $30 million; compare that to the $377 million production cost we paid for each F-22 fighter jet.
The biggest defense contractors—Lockheed Martin, Boeing, Raytheon, Northrop Grumman—are now all in the drone business, as are lots of smaller companies. The drones the U.S. military uses range from the massive Global Hawk with its 130-foot wingspan all the way down to small radio-controlled surveillance craft carried by individual soldiers that resemble children's toys more than military tools. British soldiers in Afghanistan are now using the Black Hornet Nano, a 4-inch long drone that looks like a toy helicopter but is equipped with a camera and can stay aloft for half an hour relaying video back to its operator.
At the beginning of last year, the armed forces had 7,500 drone aircraft, meaning that one out of every three flying machines in the military was a drone (though the majority are the small, hand-launched kind). The Pentagon is considering scaling back on procurement, on the theory that it has about enough drones for the near future. But that doesn't mean they're becoming less important; quite the contrary. The military has even created a medal you can win for piloting a drone; it will rank above the Bronze Star, despite the fact that you can earn it without any risk to life or limb.
At the moment, drones can't do everything human-piloted aircraft can do—for instance, they aren't maneuverable and quick enough to engage in dogfights—but if the technology continues to advance, it isn't hard to envision a day when human pilots sitting in the cockpits of planes have all but disappeared from the military.
If President Obama was hoping that the use of drone strikes would enable him to carry out missions targeting suspected terrorists without any of the potential public disapproval that can accompany manned missions, he seems to have been right. According to a recent poll from the Pew Research Center, majorities of Democrats, Republicans, and independents all support drone strikes against suspected terrorists. Other polls have found support reaching as high as 70 percent.
Not too surprisingly, people in other countries are less enthusiastic about U.S. drones conducting bombing raids abroad. When the Pew Global Attitudes Project asked people in 20 countries whether they approved of U.S. drone strikes, the only country where a majority said yes was the United States itself. Even among some of our allies and despite generally positive feelings about Barack Obama, there is widespread condemnation of American drone policy around the world.
Now that drones have become a key part of American military operations, defense contractors are moving to make them fancier, sexier, and more complicated—and naturally, more expensive—than ever before. Witness the Triton, a massive surveillance drone under development, or the sleek Northrop Grumman X-47B, which is designed to take off from and land on aircraft carriers.
Courtesy of Northrop Grumman
At the same time, other drones are also getting smaller and cheaper, putting them within the range of municipal governments, large public institutions, and even corporations and individuals. Dozens of local public entities, from universities to sheriff's departments, have applied to the Federal Aviation Administration for licenses to fly drones in their areas. Realtors have used drones to photograph houses for sale. It isn't hard to imagine a war between rival drug cartels being waged with armed drones dropping bombs on cartel leaders' mansions. So will we one day see fleets of the little spider drones from Minority Report skittering around buildings in search of criminal suspects, performing retinal scans on frightened civilians? We well might—there's now an entire field of inquiry called swarm robotics. And it would be surprising if the law enforcement agencies whose concern for civil liberties has been somewhat limited didn't deploy every jazzy new tool they can get their hands on.
But for now, the debate—or what passes for one—on drones concerns what happens far from our shores. The American people don't know much about what our military's drone war involves, and don't seem too concerned about it. But even those who wage that war don't know all they should. As Dexter Filkins of The New Yorker recently wrote, "Indeed, if there is one overriding factor in America's secret wars—especially in its drone campaign—it's that the United States is operating in an information black hole. Our ignorance is not total, but our information is nowhere near adequate. When an employee of the C.I.A. fires a missile from an unmanned drone into a compound along the Afghan-Pakistani border, he almost certainly doesn't know for sure whom he's shooting at. Most drone strikes in Pakistan, as an American official explained to me during my visit there in 2011, are what are known as 'signature strikes.' That is, the C.I.A. is shooting at a target that matches a pattern of behavior that they've deemed suspicious. Often, they get it right and they kill the bad guys. Sometimes, they get it wrong."
The Cords of Vanity
A Comedy of Shirking
Published: 1909
Language: English
Wordcount: 87,798 / 255 pg
Flesch-Kincaid Reading Ease: 66.9
LoC Category: PS
Downloads: 1,927
Genre: Humor
A study of the artistic temperament, being the history of a hero who degenerates progressively.
Show Excerpt
her people I knew He would say, "What an odd child!" and I liked to have people say that. Still, there was sunlight in the hall, and lots of sunlight, not just long and dusty shreds of sunlight, and I felt more comfortable when I was back in the hall.
2--Reading I lay flat upon my stomach, having found that posture most conformable to the practice of reading, and I considered the cover of this slim, green book; the name of John Charteris, stamped thereon in fat-bellied letters of gold, meant less to me than it was destined to signify thereafter.
A deal of puzzling matter I found in this book, but in my memory, always, one fantastic passage clung as a burr to sheep's wool. That fable, too, meant less to me than it was destined to signify thereafter, when the author of it was used to declare that he had, unwittingly, written it about me. Then I read again this
Fable of the Foolish Prince "As to all earlier happenings I choose in this place to be silent. Anterior adventures he had k
| <urn:uuid:b21279b1-1e55-40ef-bb01-b5aceb55f3f4> | 2 | 1.648438 | 0.021562 | en | 0.955234 | http://manybooks.net/titles/cabelljaetext068cvan10.html |
Marvel Universe
Black Panther (T'Challa)
Real Name: T'Challa
Aliases: Luke Charles, Black Leopard, Nubian Prince, the Client, Coal Tiger; has impersonated Daredevil and others on occasion
Identity: Publicly known
Place of Birth: Wakanda
First Appearance: Fantastic Four Vol. 1 #52 (1966)
T'Challa is heir to the centuries-old ruling dynasty of the African kingdom Wakanda, and ritual leader of its Panther Clan. His mother died in childbirth, earning him the enduring hatred of his adopted elder brother, Hunter, who also resented T'Challa for supplanting him in the royal household. Hunter would become the White Wolf, leader of the Hatut Zeraze (Dogs of War), the Wakandan secret police. Their father T'Chaka remarried, but his second wife, Ramonda, seemingly ran away with another man when T'Challa was eight. When T'Challa was a teenager, T'Chaka was murdered by Klaw, a Dutchman seeking to plunder the rare Vibranium metal unique to Wakanda, but T'Challa used Klaw's own weapon to maim him and drive him off. T'Challa studied in Europe and America, then underwent ritual trials in Wakanda - including defeating his uncle S'yan, the existing Black Panther - to win the heart-shaped herb, enhancing his abilities and linking him spiritually to the Panther God Bast. Now Wakanda's ruler as the Black Panther, he disbanded and exiled the Hatut Zeraze and continued transforming his country into a high-tech wonderland. When tribal war broke out, T'Challa restored peace by condemning the Jabari tribe, and by picking Dora Milaje ("Adored Ones") from rival tribes to serve as his personal guard and ceremonial wives-in-training.
Taught by his father to think two steps ahead of enemies and three steps ahead of friends, T'Challa saw the world's super-beings as potential threats to Wakanda. Inviting the Fantastic Four to visit him, he forced them into a series of tests, then allied with them against a returning Klaw. He also joined the American-based Avengers to spy on them from within, but soon came to regard them as true friends and staunch allies. He adopted the identity of teacher Luke Charles while in America, romancing singer Monica Lynne, later his fiancée. Dividing his time between Wakanda and America for years, he battled foes such as Jabari malcontent M'Baku the Man-Ape, rebel leader Erik Killmonger, the snake-charmer Venomm (later an ally), voodoo charlatan Baron Macabre, the Ku Klux Klan, the ghostly Soul-Strangler, the soaring Wind Eagle, mutated drug czar Solomon Prey, arms dealer Moses Magnum and the Supremacists of Azania. He also fought Kiber the Cruel during a quest for the mystic time-shifting artifacts known as King Solomon's Frogs; these produced an alternate version of T'Challa from a future ten years hence, a merry telepathic Panther with a terminal brain aneurysm. Placing his dying future self in cryogenic storage, T'Challa broke off his engagement with Monica since he feared he had no future to give her. Wakanda and Atlantis subsequently came to the brink of war during the Kiber Island incident, which revealed Wakanda to be a nuclear power. Discovering his stepmother Ramonda had not run away, but instead had been kidnapped by Anton Pretorius, he rescued her from years of captivity in South Africa. T'Challa joined the Knights of Pendragon against their enemies, the Bane, learning in the process that he housed one of the Pendragon spirits himself. 
He was also used as a pawn in the efforts of the munitions company Cardinal Technology to escalate the civil war in the northern nation Mohannda, but exposed Cardinal with the aid of the mercenary Black Axe and the anti-war activist Afrikaa.
T'Challa's restrictions on exports of both Vibranium and Wakandan technology had long annoyed foreign powers. Xcon, an alliance of rogue intelligence agents and the Russian mafia, backed a coup in Wakanda led by Reverend Achebe. Learning Achebe was empowered by the demon Mephisto, T'Challa sold his soul in exchange for Mephisto's, abandoning Achebe and leaving Wakanda in peace; however, T'Challa's unity with the Panther God and its link to the spirits of past Panther Clan leaders forced Mephisto to forfeit the Panther's soul. T'Challa then presented the U.N. with evidence of the Xcon plot and its U.S. links, demanding sanctions against America. When Hunter and the Hatut Zeraze resurfaced during the Xcon incident, a wary T'Challa imprisoned them just prior to regaining his throne.
After T'Challa discharged Nakia from the Dora Milaje for trying to kill Monica Lynne in a fit of jealousy, Nakia was tortured by Achebe and rehabilitated by Killmonger, who shaped her into the mad warrior Malice. She was replaced in the Dora Milaje by Queen Divine Justice, an American-raised Jabari. T'Challa himself returned to the U.S. on a diplomatic mission, leaving his Washington envoy Everett K. Ross in charge as regent of Wakanda, until Killmonger tried to destroy Wakanda's economy; to thwart this, the Panther nationalized all foreign companies in Wakanda, causing a global run on the stock market, which Tony Stark (Iron Man) used to secure a controlling interest in the Wakandan Design Group. Returning home, the Panther fought Killmonger in ritual combat, but was distracted at a critical juncture by Ross and beaten nearly to death. Killmonger only relented when Ross, still regent, yielded on T'Challa's behalf, unwittingly giving the Black Panther title to Killmonger. T'Challa's life was mystically saved by his allies Brother Voodoo and Moon Knight. While T'Challa recovered, Killmonger tried to join the Avengers as the new Black Panther, and Achebe enlisted super-mercenaries such as Deadpool to attack Wakanda again. During the resultant Avengers visit to Wakanda, Ross freed Hunter, whose scheming resulted in Killmonger's seeming demise and the restoration of T'Challa's title.
Wakanda next came into conflict with Deviant Lemuria during a dispute over custody of a Deviant child found in Wakanda. As tensions mounted, warships from Wakanda, the U.S. and Atlantis all entered the area, and Hunter made matters worse when he decided to force T'Challa to "reclaim his dignity" by reviving Klaw, who tried to spark outright war between the nations involved. In the end, Ross's negotiation skills and information supplied by Magneto and Doctor Doom resolved the conflict. While T'Challa faced attacks by Malice, Divine Justice was kidnapped by the Man-Ape, who learned she was the rightful Queen of his tribe. T'Challa defeated M'Baku again, though not before he uncovered the frozen future Panther. Back in New York, the criminal Nightshade resurrected the fabled Chinese monster Chiantang the Black Dragon to use against T'Challa. Black Dragon had the Panther attacked by a mind-controlled Iron Fist, whose assault caused the brain aneurysm the future Panther had foreshadowed. Nightshade, meanwhile, managed to revive the future Panther.
At the same time, T'Challa learned that White Wolf had taken over Xcon and slain most of its leaders, who had used King Solomon's Frogs to replace the U.S. President and Canadian Prime Minister with brainwashed future counterparts, allowing Xcon to secretly take over both countries. Hunter continued their plan and sought revenge on Tony Stark for his buy-out of Wakandan Design Group. Uncertain of how far along Hunter's plan was, T'Challa drew Stark out with a covert message, using financial finagling to seize control of Stark Enterprises and simultaneously annex a small Canadian island in Lake Superior, prompting the U.S. and Canadian leaders to meet to discuss this crisis. The Panther and his allies, including the future Panther, then invaded the White House and foiled Xcon's plot, un-brainwashing the duplicate leaders and returning them to their own times.
Panther and his allies returned to Wakanda, where the future Panther fell into a coma. Hoping to free her tribe, Divine Justice freed the Man-Ape, but he broke his promises of non-violence by slaying the helpless future T'Challa. The original T'Challa, by now unstable and hallucinating, attacked the Jabari tribe with the intent of wiping them out; but after nearly slaying Divine Justice, he came to his senses and stopped the battle. Unable to face what he had done, the Panther handed power to his council and hid in New York. There he mentored policeman Kasper Cole (who had adopted an abandoned Panther costume), an experience which gave T'Challa the strength to face his illness, his nation and the world. His rule has since been challenged by a revived Killmonger, an issue which remains unresolved. At the same time, T'Challa renewed his ties with the Avengers, helping them battle Scorpio, secure special United Nations status and unmask U.S. Defense Secretary Dell Rusk as the evil Red Skull; however, the team disbanded after a series of devastating assaults by an insane Scarlet Witch.
For other people who have used the name Black Panther, impostors, and extradimensional counterparts, see Black Panther (disambiguation). | <urn:uuid:4f66b34b-480b-45a1-9be5-35be04b40177> | 2 | 1.523438 | 0.059658 | en | 0.959266 | http://marvel.com/universe/Black_panther |
Virality: How Does It Work and Why Do We Share?
We share things online every day — a cool DIY project, something that made us laugh, or a picture of a cute animal. We propel our ideas into the vast space known as the Internet, and become one tiny factor of a much larger process known as virality.
Why are we compelled to do this?
In our latest Mashable Explains video, we take a look at virality and memes, and discuss why the web is the perfect environment for ideas to spread.
Check out the video above to learn more about virality, and make sure to subscribe to Mashable on YouTube for more.
BONUS: Mashable Explains the Internet of Things
Robin Hanson's Working Papers with Abstracts
Working papers are not yet published, but are essentially complete papers.
Location Discrimination in Circular City, Torus Town, and Beyond, Oct. '99
I generalize Salop's "Circular City" model of spatial competition to spaces of arbitrary integer dimension, and to "transportation" costs which are an arbitrary positive power of distance. Assuming free entry, mill (i.e., non-discriminatory) pricing is compared to price discrimination based on customer locations. For all dimensions above one, there is some non-negative cost-power below which there is too little entry and above which there is too much. This cutoff cost-power rises with increasing dimension, and is larger under price discrimination. Mill pricing induces more entry for powers of four or less, and less entry for powers of five or more. Overall, too much entry seems a more severe problem than too little entry. For moderate powers and dimensions, this tends to favor price discrimination.
Warning Labels as Cheap Talk: Why Regulators Ban Products, Nov. '96
The most frequently mentioned explanation for product bans is that regulators know more about product quality than consumers. A problem with this explanation, however, is that such regulators should prefer to just communicate the information implicit in their ban, perhaps via a ``would have banned" label. We show, however, that since product labeling is cheap talk, any small market failure, such as a use-externality, will tempt regulators to lie about quality. If consumers suspect such lies, regulators can not communicate their ban information, and so will ban instead. We also show that when regulators expect market failures to lead to underconsumption of a product, and so would not ban it for informed consumers, regulators should want to commit to not banning this product for uninformed consumers.
For Savvy Bayesian Wannabes, Disagreements Are Not About Information, May '97
Consider two agents who want to be Bayesians with a common prior, but who can not due to severe computational limitations. If these agents are aware of certain easy-to-compute implications of these limitations, then they can agree to disagree about their estimate of a random variable only if they agree to disagree (to a similar degree) about both their average errors. Yet average error can in principle be computed independently of any agent's private information. Thus disagreements must be fundamentally about priors or computation, rather than about the actual state of the world.
Patterns of Patronage -- Why Grants Won Over Prizes in Science. first version May 1995.
Prizes were a common way to patronize basic research in the eighteenth century. Science historians say grants then won over prizes because grants are a superior institution. If different patron types tend to use different patronage forms, however, perhaps the patron types who tend to use grants just became more common.
To test this hypothesis, I estimate the use of prize-like vs. grant-like funding among eighteenth century scientific societies. Societies with non-autocratic, non-local government patrons were especially likely to use grant-like funding. As these are today's dominant patrons of basic research, eighteenth century data successfully predicts current patronage forms.
Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization, Mar. '98
Attempts to model interstellar colonization may seem hopelessly compromised by uncertainties regarding the technologies and preferences of advanced civilizations. If light speed limits travel speeds, however, then a selection effect may eventually determine frontier behavior. Making weak assumptions about colonization technology, we use this selection effect to predict colonists' behavior, including which oases they colonize, how long they stay there, how many seeds they then launch, how fast and far those seeds fly, and how behavior changes with increasing congestion. This colonization model explains several astrophysical puzzles, predicting lone oases like ours, amid large quiet regions with vast unused resources.
Must Early Life Be Easy? The Rhythm of Major Evolutionary Transitions. first version Sept. 1996.
If we are not to conclude that most planets like Earth have evolved life as intelligent as we are, we must presume Earth is not random. This selection effect, however, also implies that the origin of life need not be as easy as the early appearance of life on Earth suggests. If a series of major evolutionary transitions were required to produce intelligent life, selection implies that a subset of these were ``critical steps," with durations that are similarly distributed. The time remaining from now until simple life is no longer possible on Earth must also be similarly distributed. I show how these results provide timing tests to constrain models of critical evolutionary transitions.
Economic Growth Given Machine Intelligence, Aug.? '98
A simple exogenous growth model gives conservative estimates of the economic implications of machine intelligence. Machines complement human labor when they become more productive at the jobs they perform, but machines also substitute for human labor by taking over human jobs. At first, expensive hardware and software does only the few jobs where computers have the strongest advantage over humans. Eventually, computers do most jobs. At first, complementary effects dominate, and human wages rise with computer productivity. But eventually substitution can dominate, making wages fall as fast as computer prices now do. An intelligence population explosion makes per-intelligence consumption fall this fast, while economic growth rates rise by an order of magnitude or more. These results are robust to automating incrementally, and to distinguishing hardware, software, and human capital from other forms of capital.
Showing That You Care: The Evolution of Health Altruism. May '99.
Altruism, or directly caring about the outcomes of others, is often suggested as an important explanation for otherwise puzzling phenomena in health policy. There are many possible ``altruists," however, depending on which people and outcomes the altruist cares about. I propose a specific model of health altruism that 1) fits with what we know about our ancestors' behavior and environment, and 2) accounts for several health policy puzzles. It assumes that altruism was directed toward social allies, that allies prevented health-harming crisis events, and that some people knew things that others did not about who would remain an ally. This model then offers a simple unified explanation of: a) regulatory paternalism, especially toward the low status, b) value-driven support for national health insurance, c) the social-status health-gradient, and d) the near-zero marginal health-value of medical care.
Long-Term Growth As A Sequence of Exponential Modes. Sept. '98.
The long-term history of world economic growth seems to be describable as sequence of exponential growth modes. In the current mode, which has lasted about 70 years, the economy doubles every 15 years. If history is a guide, the economy may transition within the next 60 years or so to a faster mode, with a doubling time of one to two years. Scientific progress may drive the current mode, while computer hardware may drive the next.
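The doubling times quoted in this abstract can be translated into annual growth rates via r = 2^(1/T) - 1, where T is the doubling time in years. A minimal sketch (my own illustration, not from the paper; the function name is an invention) showing that moving from a 15-year doubling time to a one- to two-year doubling time does raise the growth rate by roughly an order of magnitude, as the abstract claims:

```python
def annual_growth_rate(doubling_years: float) -> float:
    """Annual growth rate implied by a given doubling time (in years)."""
    return 2 ** (1 / doubling_years) - 1

current = annual_growth_rate(15)   # current mode: economy doubles every 15 years
fast_lo = annual_growth_rate(2)    # hypothesized next mode, 2-year doubling
fast_hi = annual_growth_rate(1)    # ... or 1-year doubling

print(f"15-year doubling: {current:.1%}/yr")
print(f" 2-year doubling: {fast_lo:.1%}/yr")
print(f" 1-year doubling: {fast_hi:.1%}/yr")
```

A 15-year doubling time works out to about 4.7% annual growth, while one- to two-year doublings give 41-100% annual growth.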
Democratic Failure Via Adverse Selection. first version July 1996.
Government intervention and other forms of collective choice (such as collective bargaining) have been widely recommended as cures for the "market-failure" of adverse selection and other forms of excessive-signaling or screening via separating-equilibria. This is because externally-imposed signal restrictions, such as signal-limits, signal taxes, or forced-pooling, can improve efficiency in simple signaling games.
Collective choices regarding restrictions are not, however, the same as restrictions imposed ex ante by a benevolent dictator. Voting on restrictions can, for example, allow informed participants to signal with their votes, resulting in a ``democratic failure'' of lower ex-ante efficiency relative to optimal ex-ante commitment. Insurance companies should be wary, for example, that unions representing riskier employees will vote to ask for more group-insurance.
With independent individual risks, however, asymptotic ex-ante efficiency results from collective choices made by random selfish juries. This is because fractionally-small juries who only know or care about their own types can at most signal only a small fraction of the relevant information. (see also a related dissertation proposal: Does Collective Choice Mitigate Adverse Selection?, March 1996.)
On Voter Incentives To Become Informed. first version June 1994, California Institute of Technology Social Science Working Paper No. 968, May 1996.
Before an election, two candidates choose policies which are lotteries over election-day distributive positions. I find conditions under which there exist mixed-strategy probabilistic-voting equilibria which are independent, treating voter groups independently. When voter efforts determine the quality of their signals regarding candidate positions, voters can have strong incentives regarding their visible efforts made before candidates choose policies. Also, scale economies in group information production can make voters prefer large groups. Even with zero information costs, however, voters can ex ante prefer ignorance to full information. Optimal ignorance emphasizes negative over positive news, and induces candidates to take stable positions.
Non Social Science
The Great Filter - Are We Almost Past It? first version August 1996
Humanity seems to have a bright future, i.e., a non-trivial chance of expanding to fill the universe with lasting life. But the fact that space near us seems dead now tells us that any given piece of dead matter faces an astronomically low chance of begetting such a future. There thus exists a great filter between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?
Combining standard stories of biologists, astronomers, physicists, and social scientists would lead us to expect a much smaller filter than we observe. Thus one of these stories must be wrong. To find out who is wrong, and to inform our choices, we should study and reconsider all these areas. In particular we should seek evidence of extraterrestrials, such as via radio signals, Mars fossils, or dark matter astronomy. But contrary to common expectations, evidence of extraterrestrials is likely bad (though valuable) news. The easier it was for life to evolve to our stage, the bleaker our future chances probably are. | <urn:uuid:ca532a20-e2d5-4779-92d7-006cd00ae901> | 2 | 1.585938 | 0.06021 | en | 0.942503 | http://mason.gmu.edu/~rhanson/workingpapers.html |
The Math Forum
Ask Dr. Math - Questions and Answers from our Archives
Are All Perfect Numbers Even?
Date: 01/16/97 at 10:48:35
From: Anonymous
Subject: Perfect Numbers
Are all perfect numbers even? Has it been proved that perfect numbers
MUST be even? I believe they must be, which I'll discuss below, but I
saw your "Ask Dr. Math" service, and this question immediately came to
About ten years ago, I discovered on my own a formula for generating
perfect numbers, which are natural numbers whose proper factors add up
to the number itself. (Proper factors of a number are factors less
than the number itself.) If I recall properly, the formula is:
P = [2^(n-1)] x [(2^n) - 1], where n is a prime number.
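This formula can be checked directly by summing proper factors. A small Python sketch (my own illustration, not part of the original letter), with the caveat noted later in this exchange that it is 2^n - 1, rather than n, that must be prime:

```python
def is_prime(m: int) -> bool:
    """Trial-division primality test, fine for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(k: int) -> int:
    """Sum of the proper factors of k (factors less than k itself)."""
    return sum(d for d in range(1, k) if k % d == 0)

# P = 2^(n-1) * (2^n - 1) is perfect exactly when 2^n - 1 is prime.
perfect = []
for n in range(2, 8):
    mersenne = 2 ** n - 1
    if is_prime(mersenne):
        p = 2 ** (n - 1) * mersenne
        assert proper_divisor_sum(p) == p   # the defining property of a perfect number
        perfect.append(p)

print(perfect)   # [6, 28, 496, 8128]
```

Note that n = 4 and n = 6 drop out (15 and 63 are composite), so the output skips straight from 28 to 496.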
I was a bit disappointed to read within a few months that Gauss or
Euler had already discovered the same formula several hundred years
earlier, but I was still pleased with my own discovery. It seems to
me that a structural examination of why the proper factors of perfect
numbers add up to the number itself reveals that it has to do with the
perfect number apparently needing to have as one of its factors a
number that is exactly half of itself, which means that the perfect
number must of necessity be an even number.
Anything you can provide on this would be great. I teach junior high
math in St. Paul, MN.
Bob Hazen
Date: 01/27/97 at 10:40:54
From: Doctor Lorenzo
Subject: Re: Perfect Numbers
No one knows whether all perfect numbers are even, which I will talk
more about below. In your formula P = 2^(n-1) x (2^n - 1), you
actually want (2^n)-1 to be prime, not n to be prime. A prime number
of the form M_n = 2^n - 1 is called a Mersenne prime. (M_n can be
prime only when n itself is prime, but the converse fails.) M_2, M_3,
M_5, and M_7 are prime, which led to the conjecture that M_p is prime
for every prime p, but that's not the case. For example,
M_11 = 2047 = (23)(89). As of the late 60's (when my reference was
written), there were only 23 known Mersenne primes, the largest being
M_11213. (I think they've found a couple more since then, but the
list is still short). Let me quote a paragraph from Oystein Ore's
_Invitation to Number Theory_:
"This result shows that each Mersenne prime gives rise to a perfect
number. In Section 2.2 we mentioned that so far 23 Mersenne primes
are known, so we also know 23 perfect numbers. Are there any other
types of perfect numbers? All the perfect numbers of the form 3.4.1
[your formula] are even, and it is possible to prove that if a perfect
number is even it is of the form (3.4.1). This leaves us with the
question: ARE THERE ANY ODD PERFECT NUMBERS? Presently we know of
none and it is one of the outstanding puzzles of number theory to
determine whether an odd perfect number can exist. It would be quite
an achievement to come up with one and you may be tempted to try out
various odd numbers. We should advise against it; according to a
recent announcement by Bryant Tuckerman at IBM (1968), an odd perfect
number must have at least 36 digits."
A result that's NOT hard to prove is that an odd perfect number must
be of the form (odd prime) x (perfect square), but that doesn't narrow
things down much.
-Doctor Lorenzo, The Math Forum
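As a small editorial illustration (not part of Doctor Lorenzo's reply): the 36-digit bound quoted above is far beyond brute force, but a direct search at least confirms that no odd number below 10,000 is perfect:

```python
def proper_divisor_sum(k: int) -> int:
    """Sum proper divisors in O(sqrt(k)) time by pairing d with k // d."""
    if k <= 1:
        return 0
    total, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            total += d
            if d * d != k:       # avoid double-counting a square root divisor
                total += k // d
        d += 1
    return total

# No odd perfect numbers exist below this (tiny) bound.
odd_perfect = [k for k in range(3, 10_000, 2) if proper_divisor_sum(k) == k]
print(odd_perfect)   # []
```

The same helper also shows that odd numbers can still be abundant: the smallest odd abundant number, 945, has proper-divisor sum 975.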
Date: 01/30/97 at 05:37:39
From: Anonymous
Subject: Re: Perfect Numbers
Thank you for your reply about perfect numbers.
Here are some of my ideas about a proof regarding the existence of odd
perfect numbers. A look at the known perfect numbers suggests to me a
global approach to why perfect numbers must be even.
Consider the proper factors of 28 and 496:
28: {1,2,4,7, 14} 496: {1,2,4,8,16,31,62,124,248}
The largest proper factor of 496 is 248, which gets us sum-wise
halfway to our desired sum of 496.
Observation 1: Any perfect number P that is not even has a smallest
factor of n not equal to 2, so n = 3 at least. This means that the
largest factor of P is the number P/n, which (if n = 3) is only 1/3 of
P, which sumwise doesn't "get far" toward the desired sum of P. At
least with even P, the largest proper factor gets us halfway to P.
Another way of saying this is to write
P = P/2 + P/4 + P/8 + ... + P/(2^m) + P/(2^m), where one extra copy of
the last term "P/(2^m)" supplies the final missing part of the desired sum.
496's next proper factor of 124, when added to 248, gets us sumwise
3/4 of the way, leaving 1/4 to add. Then 62 gets us 7/8 of the
way, and the 31 (which is 1/16 of 496) gets us 15/16 of the way. Thus
we are "missing" 31 = 1/16 of the desired sum of 496. But quite
nicely, we can sum all the consecutive powers of 2 that are less than
31 to actually get a sum of the missing 31: 1 + 2 + 4 + 8 + 16 = 31.
A similar process happens with the perfect number 28.  The factors
14 and 7 sum to 21, so we are "missing" a 7, for which we can use all
the consecutive powers of 2 that are less than 7 to obtain the missing
sum: 1 + 2 + 4 = 7.
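The split described above is easy to verify directly: the proper factors of an even perfect number 2^(k-1) x M fall into powers of two (which sum to the Mersenne prime M) and M times powers of two. A quick check (my own sketch, not part of the original letter):

```python
def proper_divisors(n: int):
    """Proper divisors of n, in increasing order."""
    return [d for d in range(1, n) if n % d == 0]

for p, mersenne in [(28, 7), (496, 31)]:
    divs = proper_divisors(p)
    powers_of_two = [d for d in divs if d & (d - 1) == 0]      # 1, 2, 4, ...
    multiples = [d for d in divs if d not in powers_of_two]    # M, 2M, 4M, ...
    # The powers of two sum to the Mersenne prime M = 2^k - 1 ...
    assert sum(powers_of_two) == mersenne
    # ... and together the two halves sum to the number itself.
    assert sum(powers_of_two) + sum(multiples) == p
    print(p, powers_of_two, multiples)
```

For 496 this prints the two halves {1, 2, 4, 8, 16} (summing to 31) and {31, 62, 124, 248} (summing to 465), exactly as in the letter.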
Observation 2: These desired powers of 2, which always add up to 1
less than the next power of 2 (1 + 2 + 4 = 7, which is
8 - 1 = (2^3) - 1, and 1 + 2 + 4 + 8 + 16 = 31, which is
32-1 = (2^5) - 1, will be unavailable to sum up that last missing
chunk of the partial sum equal to P UNLESS that P is an even number in
the first place, or unless P has lots of factors that are consecutive
powers of 2, as made available by the Mersenne prime formula for
perfect numbers.
The part I'll keep brief is that the first factor higher than the
highest power of 2 must be a prime number, in order to keep the
proper factor sum from being so large that P becomes an abundant
number (see definition below).
This is not a rigorous approach, and I'm sure it's somewhat sketchy,
but at a global level, it convinces me that perfect numbers must be
even, and moreover, must be of that Mersenne form, in order to avoid
being either deficient (having a proper factor sum less than P) or
abundant (having a proper factor sum greater than P).
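The deficient/perfect/abundant trichotomy defined in the parentheses above can be turned into a small classifier (an illustration of mine, not from the letter):

```python
def classify(n: int) -> str:
    """Classify n by comparing its proper-factor sum to n itself."""
    s = sum(d for d in range(1, n) if n % d == 0)
    if s < n:
        return "deficient"
    if s == n:
        return "perfect"
    return "abundant"

print(classify(8))    # deficient (1 + 2 + 4 = 7 < 8)
print(classify(12))   # abundant  (1 + 2 + 3 + 4 + 6 = 16 > 12)
print(classify(28))   # perfect
print(classify(945))  # abundant -- the smallest odd abundant number
```

The last line illustrates why "odd implies deficient" cannot be the whole story: odd abundant numbers exist, they just start fairly high.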
Date: 01/30/97 at 08:50:15
From: Doctor Steven
Subject: Re: Perfect Numbers
Your global approach is nice but it doesn't really contain enough.
It seems you're trying to show that odd numbers "can't" (read not as
can't but as improbable) be perfect numbers, because the factors will
sum up to less than the number. Say 3, 5, 7, 9, 11, 13, and 15 are
all factors of our number, Q. So:
3R = Q
5T = Q
7S = Q
9U = Q
11V = Q
13W = Q
15Z = Q.
So R, S, T, U, V, W, and Z are also factors. Solve these equations for
the unknown factors:
R = Q/3
T = Q/5
S = Q/7
U = Q/9
V = Q/11
W = Q/13
Z = Q/15
This means that:
R + T + S + U + V + W + Z
= (Q/3) + (Q/5) + (Q/7) + (Q/9) + (Q/11) + (Q/13) + (Q/15)
= (15015Q/45045) + (9009Q/45045) + (6435Q/45045) + (5005Q/45045) +
(4095Q/45045) + (3465Q/45045) +(3003Q/45045)
Add these fractions up to get:
R + T + S + U + V + W + Z = 46027Q/45045 > Q
So we should have no problem finding an odd number Q whose factors
will add up to at least Q. Likewise we should have no problem finding
odd numbers whose factors won't add up to Q.
-Doctor Steven, The Math Forum
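Doctor Steven's fraction arithmetic can be confirmed exactly with Python's Fraction type (this check is mine, not part of the original exchange). Since the reciprocals of the assumed divisors sum to more than 1, any odd Q divisible by all of 3, 5, 7, 9, 11, 13, and 15 must be abundant; the smallest such Q is lcm(3, 5, 7, 9, 11, 13, 15) = 45045:

```python
from fractions import Fraction

# The divisor reciprocals summed in the answer above, computed exactly.
recip_sum = sum(Fraction(1, k) for k in (3, 5, 7, 9, 11, 13, 15))
print(recip_sum)        # 46027/45045, just over 1
assert recip_sum == Fraction(46027, 45045)
assert recip_sum > 1

# So the smallest odd Q divisible by all of 3..15 is abundant.
Q = 45045
s = sum(d for d in range(1, Q) if Q % d == 0)
print(s, s > Q)         # proper-divisor sum 59787 > 45045
```

Here s = 59787 agrees with the multiplicative formula: sigma(45045) = sigma(9) sigma(5) sigma(7) sigma(11) sigma(13) = 13 * 6 * 8 * 12 * 14 = 104832, and 104832 - 45045 = 59787.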
Date: 01/30/97 at 21:50:24
From: Anonymous
Subject: Re: Perfect Numbers
Thanks for your reply. Your remark that "trying to show that odd
numbers "can't" be perfect numbers, because the factors will sum up to
less than the number" isn't quite accurate. I am aware that some odd
numbers are deficient while others are abundant, so I'm not trying to
show that the proper factors of odd numbers will sum up to less than
the number.
I'm also aware that my global approach indeed doesn't do it at all
because it is not a proof as such. What my remarks and insights were
getting at is a detailed analysis of the factors of these known
perfect numbers - about "half" of which are powers of two, while the
other "half" begin with a prime number that is 1 less than the next
higher power of 2, followed by its doubles (and quadruples and
octuples...). This analysis helped me see why the proper factors of
the known perfect numbers sum to precisely, no more and no less, the
perfect number itself! A perfect number can't be a strict power of 2,
because the sum of
any consecutive powers of 2 is always going to be 1 less than the next
power of 2. However, the powers of 2 get us close to summing to the
original number.
By now, I may be rambling. I checked a few high odd numbers
(3 x 5 x 7 x 11 x 13) and the sum of the proper factors gets close to
the number. I think my insight into the marvelous structure of
perfect numbers, and how their proper factors so neatly sum to the
number, became overgeneralized in my mind. I'll fiddle sometime with
some odd numbers and examine their structure. Fear not - I recall
your comment about how long an odd perfect number must be!
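For the specific "high odd number" mentioned above, 3 x 5 x 7 x 11 x 13 = 15015, a quick check (ours, not part of the letter) shows what the proper factors actually sum to:

```python
def aliquot(n):
    """Sum of the proper divisors of n (all divisors except n itself)."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

n = 3 * 5 * 7 * 11 * 13   # 15015
print(aliquot(n))         # 17241
```

So the sum does not merely "get close": it slightly overshoots, making 15015 a (barely) abundant odd number.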
In the meantime, I have papers to correct, lesson plans to write, and
kids to feed dinner to.
Kind regards
Associated Topics:
College Number Theory
High School Number Theory
Ask Dr. MathTM
© 1994-2015 The Math Forum | <urn:uuid:67db66d9-a033-4dfe-bf9d-905a59b045d9> | 3 | 2.703125 | 0.559007 | en | 0.941534 | http://mathforum.org/library/drmath/view/51526.html |
When trying to see if a number of the form $n^8-n^4+1$ can be divisible by the square of a prime, I found that it can indeed. The first few values for $n$ are
412, 786, 1417, 1818, 2430, 2640, 2809, 2822, 2899 ...
and the first few such primes $p$ (in increasing order) are
73, 97, 193, 241, 313, 337, 409, 433, ...
Interestingly enough, the latter is precisely the beginning of this sequence which lists the primes of the form $x^2+24y^2$. I am quite sure that this cannot be a pure coincidence and that some deep number theory must be involved. The number $24$ is not accidental either, as $n^8-n^4+1=\Phi_{24}(n)$, with $\Phi_k(x)$ being the $k$th cyclotomic polynomial. Maybe there is some relation to the field $\mathbb{Q}(\zeta_{24})$...
So, the question is if the following is true:
Conjecture. A prime $p$ has the form $x^2+24y^2$ if and only if $p^2$ divides $n^8-n^4+1$ for some $n$.
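The conjecture can be probed numerically for small primes with a brute-force sketch (our code, not from the original post). It is a classical fact, stated as Proposition 1 in the answer below, that the primes of the form $x^2+24y^2$ are exactly the primes $\equiv 1 \pmod{24}$:

```python
def divides_phi24_square(p):
    """True if p**2 divides n**8 - n**4 + 1 for some n (checking n < p**2 suffices)."""
    p2 = p * p
    return any((n**8 - n**4 + 1) % p2 == 0 for n in range(p2))

# The first two primes = 1 (mod 24), namely 73 and 97, do admit such an n:
print(divides_phi24_square(73))   # True
print(divides_phi24_square(97))   # True

# while small primes not congruent to 1 mod 24 do not:
print(any(divides_phi24_square(p) for p in (5, 7, 11, 13, 17, 19, 23)))  # False
```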
2 Answers
Your conjecture is true in the light of the following statements.
Proposition 1. A prime $p$ has the form $x^2+24y^2$ if and only if $p\equiv 1\pmod{24}$.
Proposition 2. A prime square $p^2$ divides $\Phi_{24}(n)$ for some $n$ if and only if $p\equiv 1\pmod{24}$.
Proof of Proposition 1. The four equivalence classes of binary quadratic forms of discriminant $-96$ are represented by $x^2+24y^2$, $3x^2+8y^2$, $4x^2+4xy+7y^2$, $5x^2+2xy+5y^2$. Looking at the values in $(\mathbb{Z}/96\mathbb{Z})^\times$ assumed by these four quadratic forms, we see that they are in four different genera. This means that if $Q(x,y)$ is any of these forms and $p\geq 5$ is any prime, then $Q(x,y)$ represents $p$ if and only if it does so modulo $96$. In particular, $x^2+24y^2$ represents $p$ if and only if $p\equiv 1,25,49,73\pmod{96}$, i.e. when $p\equiv 1\pmod{24}$.
Proof of Proposition 2. By Hensel's Lemma, the square of a prime $p\geq 5$ divides $\Phi_{24}(n)$ for some $n$ if and only if $p$ divides $\Phi_{24}(m)$ for some $m$. The latter property holds if and only if $\mathbb{F}_p$ contains a primitive $24$-th root of unity, i.e. when $p\equiv 1\pmod{24}$.
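The Hensel-lifting step in this proof is easy to make concrete. A sketch (our code, assuming Python 3.8+ for the modular inverse `pow(x, -1, p)`; the function name is ours):

```python
def hensel_lift_phi24(p):
    """For a prime p = 1 (mod 24), lift a root of Phi_24 mod p to a root mod p**2."""
    f  = lambda n: n**8 - n**4 + 1        # Phi_24(n)
    fp = lambda n: 8 * n**7 - 4 * n**3    # its derivative
    r = next(n for n in range(p) if f(n) % p == 0)  # a root mod p exists
    u = pow(fp(r) % p, -1, p)  # f'(r) is invertible: Phi_24 is separable mod p
    return (r - f(r) * u) % (p * p)       # one Newton step

for p in (73, 97, 193, 241):
    n = hensel_lift_phi24(p)
    assert (n**8 - n**4 + 1) % (p * p) == 0
print("lifted roots found for p = 73, 97, 193, 241")
```

The same Newton step, iterated, produces roots modulo $p^k$ for every $k$.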
References: Rose - A course in number theory; Cox - Primes of the form $x^2+ny^2$
There is another proof which is somewhat easier.
Since the multiplicative group of $\mathbb{Z}/p\mathbb{Z}$ is cyclic, there exists $x$ such that $\Phi_{24}(x) \equiv 0 \pmod{p}$ iff $24 \mid p-1$. Notice that in this case $\Phi_{24}(x)$ has $8$ different roots in $\mathbb{Z}/p\mathbb{Z}$ and the derivative at each root is not divisible by $p$, which implies that any root of $\Phi_{24}$ in $\mathbb{Z}/p\mathbb{Z}$ can be lifted to a root in $\mathbb{Z}/p^k\mathbb{Z}$ (for all $k$), and even in the $p$-adic integers $\mathbb{Z}_p$.
Here there is nothing special about $24$, and one can replace it with any integer.
Your proof is almost identical to my proof for Proposition 2. Hensel's Lemma is the "lifting" you talk about, and $p\geq 5$ ensures that $\Phi_{24}$ does not have repeated roots modulo $p$ (as $2$ and $3$ are the only primes dividing the discriminant of $\Phi_{24}$). Still, one needs to explain why $24\mid p-1$ is equivalent to the existence of $x,y\in\mathbb{Z}$ such that $p=x^2+24y^2$. This is explained in my proof for Proposition 1. – GH from MO Dec 26 '11 at 21:27
I agree, I have not read completely your comment before posting mine. I suppose I should not have posted this as an answer... – kassabov Dec 26 '11 at 22:51
As long as people find it useful, it is OK :-) BTW you can always edit your answer or even delete it. Let me emphasize that $24$ is special for the quadratic forms part. For if a quadratic form is not alone in its genus (an example is $x^2+14y^2$), then congruence conditions do not suffice for determining which primes are represented. – GH from MO Dec 26 '11 at 23:25
You are here
U.N. Fellows Program
There’s a lot of responsibility that comes with being one of the few colleges in the world to be granted Non-Governmental Organization (NGO) status at the United Nations. There’s also a lot of pride that comes with it. NGOs work with the U.N. to establish dialogue and policy essential to various issues of global importance. From human rights to health, from economic development to education, the work we do with the U.N. embodies our tireless dedication to service and compassion in the Franciscan spirit.
This distinguishing status with the U.N. gives Felician students a unique opportunity to participate in our U.N. Fellowship Program, gaining unprecedented access to attend U.N. conferences and meet politicians, diplomats and ambassadors from around the world. The students who have participated in the U.N. fellowship program are as varied and diverse as the U.N. itself, with past participants representing seven different nations from around the globe, including Poland, Russia, Denmark, Korea, Bosnia, Japan and the United States.
No matter the major or background, Felician students are continually strengthened through our commitment to the U.N and our common goal of making the world a better place. | <urn:uuid:3b7a9737-66e5-4066-8e14-22f3f4052528> | 2 | 1.734375 | 0.152271 | en | 0.951345 | http://[email protected]/global/un-fellows-program |
Byrd, Robert, in full Robert Carlyle Byrd (born Nov. 20, 1917, North Wilkesboro, N.C., U.S.), American Democratic politician who served as a representative from West Virginia in the U.S. House of Representatives (1953–59) and as a U.S. senator from West Virginia (1959– ). In his decades-long Senate career, Byrd held various leadership positions, including Democratic whip (1971–77), majority leader (1977–80, 1987–88), minority leader (1981–86), and president pro tempore (1989–95, 2001–03, 2007– ). In 2006 he became the longest-serving U.S. senator in history, and in 2009 he became the longest-serving member of Congress in history.
The son of working-class parents, Byrd was raised in southern West Virginia. After graduating from high school in a class of fewer than 30 students, he was a part-time student at Beckley College, Concord College, Morris Harvey College, and Marshall College (now Marshall University), all in West Virginia. Although he did not complete his bachelor’s degree from Marshall University until 1994, he earned a law degree (1963) from American University in Washington, D.C., while serving in the Senate. In the early 1940s Byrd organized a local Ku Klux Klan chapter, although years later he had a change of heart and became a strong supporter of civil rights. He worked as a butcher, a coal miner, and a grocery store proprietor before launching his political career by getting elected to the West Virginia House of Delegates in 1946. He served in the state senate (1951–52) before winning election to the U.S. House of Representatives in 1952 and to the U.S. Senate in 1958.
As a senator, Byrd earned a reputation as a strong advocate for the working class as he sought to ensure accessibility to health care and greater educational and employment opportunities for his constituents. As minority and later majority leader during the 1980s, he often found himself at odds with Pres. Ronald Reagan (1981–89); he implored the president to withdraw U.S. marines from Lebanon in 1984 and criticized him sharply during the Iran-Contra Affair in 1986. After Pres. George H.W. Bush (1989–93) signed into law the Clean Air Act (1990), which threatened the livelihood of coal miners in his home state, Byrd worked to bring industry and federal jobs to West Virginia through his position as chairman of the Senate Appropriations Committee (1988–2008). He also provided needed guidance on procedural matters during Senate hearings on the impeachment of Pres. Bill Clinton (1993–2001) in 1998. Byrd opposed the reorganization of federal security agencies undertaken by Pres. George W. Bush (2001–09)—including the creation of the Department of Homeland Security—in the wake of the September 11 attacks in 2001, and he was a vocal critic of the Iraq War (2003).
Byrd distinguished himself as an expert on the Senate’s vast historical record, and he frequently gave impromptu speeches in which he recounted long-forgotten episodes of Senate history. His celebrated four-volume series The Senate, 1789–1989 (1989–94) was followed by The Senate of the Roman Republic (1994), Losing America: Confronting a Reckless and Arrogant Presidency (2004), and Letter to a New President (2008). His memoir—Child of the Appalachian Coalfields (2005)—examined not only his political career but also the embarrassment he still felt over his early ties to the KKK. | <urn:uuid:c8c06f27-477f-4c7b-8c15-c18b7525ba90> | 2 | 1.96875 | 0.025314 | en | 0.974824 | http://media-2.web.britannica.com/eb-diffs/969/1309969-17205-475771.html |
Abstract for blackburn_icslp96
Proc. ICSLP 96, Philadelphia, October 1996
C.S. Blackburn and S.J. Young
October 1996
We describe a self-organising pseudo-articulatory speech production model (SPM) trained on an X-ray microbeam database, and present results when using the SPM within a speech recognition framework. Given a time-aligned phonemic string, the system uses an explicit statistical model of co-articulation to generate pseudo-articulator trajectories. From these, parametrised speech vectors are synthesised using a set of artificial neural networks (ANNs). We present an analysis of the articulatory information in the database used, and demonstrate the improvements in articulatory modelling accuracy obtained using our co-articulation system. Finally, we give results when using the SPM to re-score N-best utterance transcription lists as produced by the CUED HTK Hidden Markov Model (HMM) speech recognition system. Relative reductions of 18% in the phoneme error rate and 15% in the word error rate are achieved.
(ftp:) blackburn_icslp96.ps.gz (http:) blackburn_icslp96.ps.gz
(ftp:) blackburn_icslp96.pdf | (http:) blackburn_icslp96.pdf
| <urn:uuid:e77c0cf9-bb15-47cb-9e61-a366ecf39e8b> | 2 | 1.6875 | 0.149575 | en | 0.853703 | http://mi.eng.cam.ac.uk/reports/abstracts/speech/blackburn_icslp96.html |
Rural Home
Top End fire fighters labelled 'fire lighters'
ABC Rural
Darwin rural burn-off
The stark contrast in vegetation recently burnt-off compared to land untouched in Darwin's rural area.
Carl Curtain
Rural residents in the Top End are worried about the amount of land being burnt-off and being classed as controlled fires.
Each year Bushfires NT and many volunteers undertake controlled burns across the Northern Territory in an attempt to limit the risk posed to people and property by fire.
But there's concern the lack of resources available is forcing Bushfires NT staff to simply burn large tracts of land as opposed to implementing a strategic plan as described in its charter.
For those people who live in Palmerston and Darwin, the heavy smoke which often settles over suburban areas is now just part of the dry season, along with the dragon flies and cooler weather.
However for others, particularly those in the rural areas, there is too much fire as well as a persistent blanket of thick smoke.
Former volunteer fire fighter Diana Rickard, who has lived at Tumbling Waters in Darwin's rural area for 20 years, is one of those most concerned about the approach taken by the firies.
She says a culture shift has allowed fire staff to spend more time lighting fires than actually extinguishing them.
"When the 'fire lighters' came into the brigade, is when I got out.
"That's really why they're not getting [new] volunteers because they scare the pants off you," she said.
"It's like a war against the environment. If people feel afraid of the unknown and they're frightened to have that natural environment around them, then they war against it.
"If we can't look after our forests, we're not going to have any healthy environment."
Fire and its use as a vegetation management tool is also concerning residents further south, near Batchelor.
Alan Peterson runs an organic farm at Rum Jungle and often finds himself worrying about burn-offs in his area.
He says the large number of fires each year is having a significant impact on the local environment.
"Fire is overly used too late in the season. Generally, I've seen a thinning of the trees, particularly when there are annual burn-offs late in the season.
"Most of the fire fighters are volunteers, they're trying to do it expediently. They've got lives, they've got jobs to do as well as protect people's property," he said.
"If they just leave it until later in the year and let it go, it just cleans it all up in one mop-up and you don't have to worry about the fire threat anymore.
"It's just that it's very rough on biodiversity and the landscape," he said.
Bushfires NT, which coordinates the many volunteer fire brigades around the Territory, stands by its strategic burning effort that occurs each year.
Acting assistant director Ken Baulch says controlled burns are a useful tool in managing the grasses, particularly gamba grass.
"I wouldn't say that large tracts of land are being burnt out on any sort of regular basis. We do mosaic burning.
"Sometimes it's difficult to take out a small area because it would need us to get in there and put in control lines through a particular property," he said.
"Sometimes we do take out whole blocks when you might argue that we could take out a slightly smaller area and get the same effect.
"But when we do that, we try and do it with as cool a burn as possible," he said.
He says the amount of resources available does dictate how much effort can be afforded in burn-offs.
"Our volunteer resources may be dwindling, it's harder to get new volunteers and some of our regulars are getting older.
"The more resources you've got then probably the better job you can do, but I think with the resources that we've got, we do a pretty good job," he says.
Mr Baulch accepts there is criticism often made towards the increased use of fire as a management tool.
"I think it comes from a little ignorance of what it is we're trying to achieve and what resources we have available to us to do it.
"It's that really difficult decision that fire managers have to make, a little damage now or a lot of damage later," he said.
Batchelor farmer Alan Peterson believes it's the residents who have the greatest responsibility in managing fire fuel loads.
He says he's also grateful for what support the volunteer firies offer him and his family.
"I can't ask them to come and do stuff unless I accept the way that they do it.
"I've got different expectations of my burning so basically I have to do most of my stuff myself but they make fire fighting gear available if I request it," he said.
"That's very much appreciated."
Welcome moon.coinfaucets.info
Get your free mooncoins now !
What is mooncoin ?
Mooncoin is a cryptocurrency and you know where it's headed!
To begin, you will need a spacesuit wallet, you can download yours or use the web wallet.
What is a faucet ?
A faucet is a website that collects coins as contributions from donors, then redistribute those bit-by-bit to users who ask for a share.
The main purpose is to help people who have no, or few, coins get some to begin. One can request coins every 12 hours.
Top 4 donors
1. jepistons
2. bitverseradio
3. BitcoinRenaFaucet
4. Wraldpyk
Last donors
All these would not be possible without those awesome donors ! So please have a look at their website, spread the word, and why not, become one of them, contribute. | <urn:uuid:6ba56a3b-0afd-40f2-9114-5db1f850aed6> | 2 | 1.710938 | 0.534261 | en | 0.869962 | http://moon.coinfaucets.info/ |
ATM disaster plans for dodging hurricanes, other misfortunes
Published: September 27,2004
When Hurricane Ivan brushed the Mississippi Gulf Coast in mid-September, banks in South Mississippi were prepared for potential automated teller machine (ATM) malfunctions.
“ATMs and telephones are the most important things to keep up because that gives people access to information and cash,” said John M. Hairston, executive vice president and COO of Gulfport-based Hancock Bank. “If they can get cash and information, they’ll be OK.”
Hancock Bank’s data center, where all of the bank’s core computer systems operate, runs on a UPS (uninterrupted power supply) battery system. During power interruptions, the emergency generators crank into gear, with about 3,000 gallons of diesel fuel on hand “to last long enough for us to bring in more fuel,” explained Hairston.
“Those generators power all critical situations ranging from the hurricanes of late to someone running over a telephone pole, which happened last Friday night,” he said.
“In 1999, the eye of Hurricane Georges came right over the top of our building. When the power went out, we expected it to last for six to eight hours. However, our telephone lines never went down so the ATM machines that had power kept running.”
Odean Busby, president of Priority One, said the Magee-based community bank doesn’t have backup generators at individual branches to keep ATMs functional.
“Typically, if an ATM in one part of town gets knocked out, chances are that one close by is working,” he said. “Unless you’re in a path of total destruction, everybody’s usually doesn’t get knocked out. Also, the customer has the option of using any other ATM in the area that may be up and running. I think that’s the case for most people, especially community banks like ours.”
The most common reason an ATM goes down is because it runs out of money or has a mechanical failure, said Hairston.
“In our case, ATMs go down so rarely that our service level on them is 99.9%,” he said. “Occasionally, an ATM misbehaves because it has a mechanical problem, but when one falls below a 97% service level in a month, we consider it a critical failure and have it gutted. The only normal down time on an ATM is the few minutes at two or three o’clock in the morning when we’re backing up the system.”
At Priority One, network and software problems, not weather- or fraud- related issues, usually account for individual ATM failures, said Busby.
“But that’s rare,” he said. “There’s typically little down time on ATMs because the machines in service now are very reliable 24/7.”
Fire or regional disaster, followed by power failures and downed telephone lines, are the most common reasons an entire ATM network goes down.
“For example, when PULSE, one of the largest data networks in the system, took water during the Houston flood three years ago, the whole network was affected,” said Hairston. “Customers in our service area could use our ATMs just fine, but if they were in another city, like Chicago, they couldn’t access our bank using someone else’s network.”
At the data center in Gulfport, one computer operates all the bank’s ATMs. If one side fails, the other side takes over.
Also, the bank has a dedicated phone line from the facility to the BellSouth network, which connects to the ATMs and external networks, such as VisaNet and Banknet, explained Hairston.
“Even though it’s highly unlikely, if the main switch at BellSouth went under water, for example, we’d relocate operations for ATMs in other systems through Sungard (an IBM mainframe Disaster Recovery Service) in Chicago,” he said. “It would take us 24 hours or less to do that. When Sungard updates their equipment in Atlanta, we’ll move our processing from Chicago, and it will take even less time to reconnect.”
During Hurricane Ivan, a staff of 30 was stationed in Hancock Bank’s data center, with backup personnel on call in different counties and parishes “so that if we had a significant issue here, we’d have people who could get to Chicago and bring those systems up,” explained Hairston. “We’ve had computer systems at Hancock since 1963 and we’ve never had to move operations to Chicago, even during Hurricane Camille in 1969.”
Because each ATM costs a bank between $25,000 and $35,000, it’s not a moneymaker.
“You’ll hardly ever make enough on a foreign transaction interchange (a surcharge for non-bank customers) to cover the expenses, including line, network and other related costs,” said Busby. “But they’re an incredible convenience for our customers.”
Marvin Lipschitz in the US
1. #70,054,949 Marvin Lipkind
2. #70,054,950 Marvin Lipofsky
3. #70,054,951 Marvin Lipousky
4. #70,054,952 Marvin Lipp
5. #70,054,953 Marvin Lipschitz
6. #70,054,954 Marvin Lipschultz
7. #70,054,955 Marvin Lipscome
8. #70,054,956 Marvin Lipsett
9. #70,054,957 Marvin Lipshonsky
Meaning & Origins
Medieval variant of Mervyn, resulting from the regular Middle English change of -er- to -ar-. Modern use may represent a transferred use of the surname derived from this in the Middle Ages. It is popular in the United States, where it is associated in particular with the American singer Marvin Gaye (1939–84) and the boxer Marvin Hagler (b. 1954).
316th in the U.S.
Jewish (Ashkenazic): variant of Lipschutz.
53,181st in the U.S.
Sunday, April 10, 2011
Toasted Caterpillars
ALEXANDER pretending to be a bird who is camping and toasting caterpillars over a fire like marshmallows. Since it was a real caterpillar, I jumped up to stop him from actually putting it in his mouth, which caused him to console me: "Don't worry Mama, I not really a bird. I a little boy. Little boyd no eat caterpillard." Funny! He went on to instruct me all about what birds eat and what caterpillars eat and how we shouldn't eat things that other animals eat because we will get tummy ache.
What do caterpillars do? Nothing much but chew and chew.
What do caterpillars know? Nothing much but how to grow.
They just eat what by and by will make them be a butterfly.
And that is more than I can do, however much I chew and chew.
Page last updated at 02:54 GMT, Thursday, 12 March 2009
Does the caste system still linger in the UK?
Is the caste system as much a feature of life in the UK as the Indian sub-continent? And what benefits - and problems - are associated with it? Here, two commentators from within the Hindu community argue about whether caste does permeate the life of British Hindus - and what effect it may have.
Usha Sood, barrister and law lecturer
The term caste that was historically used to denote social divisions in Indian society still lingers amongst Sikh, Hindu and Muslim migrants from South Asia.
Usha Sood
Usha Sood says caste discrimination is a real problem for many British Hindus
It has been said: The Brahmins (teachers) are the head, the Kshatriya (warriors) are the chest, the Shudras (labourers) are the hands. There are also Vaishyas (tradesmen).
It can still be found in marriage networks, with the older generations preferring to have marriages arranged within their caste or community.
The priests in the 200 or so temples in the UK are all likely to be Brahmin.
The Hindu Forum Report in 2008 disputes caste as an extant divide, arguing that it is really freedom of personal interaction and social choice.
This disregards the hierarchical retention of positions within social and religious ceremonial contexts, and provides a barrier in marriages of choice.
In a controversial report, No Escape - Caste Discrimination in the UK, researchers were told how couples who marry outside their own caste face "violence, intimidation and exclusion".
Caste discrimination is not identified or recognised in existing discrimination legislation in the UK.
Unite member Balram Sampla said in 2006 that the caste system was "a severe blight on the potential of 800,000 Dalits, or 'untouchables' in south Asia - and some 50,000 in the UK".
Some foreign employees including servants and maids are exploited and treated harshly, and are reluctant to draw attention to their plight.
Acceptance of the so-called lower castes as priests and community leaders, as well as openly non-caste based marriage advertisements, and affording equality of opportunity, would go a long way to show that these divisions are disappearing.
Kapil Dudakia, The Hindu Forum of Britain
The caste system is not faith based, but is fundamentally a social phenomenon found in many forms around the world.
Its premise is that a father's occupation is passed down through the generations.
Kapil Dudakia
Kapil Dudakia believes caste helps friends and families form strong bonds
As far as the United Kingdom is concerned, caste is well and truly non-existent, unless you happen to be a member of the Royal Family.
In my experience the vast majority of people, regardless of their historical caste, tend not to follow their ancestors' occupations.
Caste is therefore no longer relevant and it follows that caste discrimination in either services or employment is something of a red herring.
Those who campaign for a caste discrimination law are therefore using the issue as a smokescreen in an attempt to promote themselves and their real cause - which is to convert people to their faith.
The residual impact of the caste system has been for family and friends forming a closer bond and fondness for each other.
It is therefore more likely that their interaction is greater in these social circles and as such, it no doubt would also play an important part in their daily life, for example, in birth, death and marriage.
We all have a right to choose our friends and those with whom we wish to associate - what is wrong with that?
Equally we all oppose anything that is done by way of force.
A survey conducted by an anti-caste organisation revealed that more than 70% of those questioned said their children did not know their caste - how can they possibly then discriminate against other castes?
And a major research project conducted by the Hindu Forum of Britain found that more than 92% of the respondents did not believe that caste discrimination was an issue in the UK.
There you have it; two different surveys both coming to similar conclusions.
Is DNA evidence enough? An interview with David Kaye
David H. Kaye is Distinguished Professor of Law and Weiss Family Faculty Scholar in Penn State's Dickinson School of Law, and a member of the graduate faculty of the University's Forensic Science program. He is an internationally recognized legal expert on DNA and other forms of scientific evidence and the author of The Double Helix and the Law of Evidence,released earlier this year by Harvard University Press.
David Kaye sits down with Michael Bezilla
Distinguished Professor of Law and Weiss Family Faculty Scholar David H. Kaye
Why should the ordinary citizen be interested in how DNA is used in court?
The public has a vital interest in the criminal justice system. I've tried to illuminate the extent to which we can find truth in that system, because the subtleties of DNA evidence are not well understood outside of a small group of people. The popular perception is that DNA speaks the truth—you're either guilty or you're innocent, there's no ambiguity. But DNA is only a tool. It gives information depending on the nature of the samples and how well the analysis is done. "Garbage in, garbage out" is one concern, and the risk of overstating the implications of the evidence is another.
Who determines the quality of that analysis? Do lawyers and judges have to be scientists, too?
They don't have to be scientists, but they do have to know enough to understand what's going on and to know whether the statements the experts are making are well founded. Lawyers need to translate lab work into a form that a judge or jury can understand. They need to understand more about statistics and probability because these quantitative aspects of science have become significant in cases with scientific testimony. It's an area that's been neglected in the law school curriculum. That's changing, but those law schools that offer even one course in scientific evidence, let alone statistics, are probably still in the minority.
Hasn't the use of DNA evidence already changed the public's perception of the criminal justice system?
It has forced a lot of people to have second thoughts about the death penalty—it led to a moratorium of the death penalty in Illinois, for example. Nationwide, more than 200 individuals long imprisoned have been exonerated as a result of DNA evidence. Discovering undeniable errors in such cases also has led to improvements in procedures for pretrial investigations and, after trial, reviews of how things went wrong.
Your book has been acclaimed as the definitive history of the use of DNA evidence. Is there a historical turning point that made DNA acceptable to the courts?
Actually there were two watershed events. The first was a case known as People v. Castro in a trial court in New York. The defense, with the aid of an astute molecular biologist, showed that what the DNA labs were testifying to was not always an open and shut matter—there could be mistakes. Once that happened, the defense bar became better able to raise challenges to DNA evidence, and a number of scientists presented criticisms of the reasoning of the experts for the prosecution—particularly on the probabilities of DNA matches. The event that marked the end of this controversy in the mid-1990s was not a case. It was an article entitled "DNA Fingerprinting Dispute Laid to Rest," published in the journal Nature and written by two scientists who had been adversaries in court—one being the chief DNA scientist for the FBI, the other being the main defense scientist in the People v. Castro case. This rapprochement gave the courts more confidence in DNA evidence, and a series of opinions soon reinforced the view that the basic method of calculating probabilities was reasonable.
What lessons does your research about the past use of DNA evidence offer for the future?
Several authors have argued that the scrutiny given DNA evidence should be a model for forensic science generally. I wouldn't go that far—the courtroom battles over DNA continued far longer than the scientific record warranted, and the adversary nature of the legal system magnified and distorted disagreements among scientists. But the issues that came to the fore in litigation over DNA evidence are central to improving forensic science generally. Last year, a committee of the National Academy of Sciences—a committee that included Professor Robert Shaler, director of Penn State's Forensic Science Program—issued a Congressionally mandated report on the state of forensic science in America. Had the institutional reforms that the committee recommended been in place, there might have been fewer casualties in the "DNA Wars."
Read David Kaye's Double Helix Law blog at
Last Updated July 27, 2010
A Habitable Earth
June 17th, 2007
There remain three blockbuster, front-page discoveries to be made in exoplanetary science. The first is the identification of a potentially habitable Earth-mass planet around another star. The second is the detection of a life-bearing planet. The third is contact with extraterrestrial intelligence.
It’s hard to predict when (and in which order) discoveries #2 and #3 will take place. Discovery #1, on the other hand, is imminent. We’re currently 2±1 years away from the detection of the first habitable Earth-mass planet (which implies ~15% chance that the announcement will come within one year).
The breakthrough detection of a habitable Earth will almost certainly stem from high-precision Doppler monitoring of a nearby red dwarf star, and already, both the Swiss team and the California-Carnegie team are coming tantalizingly close. The following table of notable planet detections around red dwarfs gives an interesting indication of how the situation is progressing:
Planet    M star  M sin(i)  date  K     #obs  sig   µ
Gl 876 b  0.32    615       1998  210   13    6.0   247
Gl 876 c  0.32    178       2001  90    50    5.0   127
Gl 436 b  0.44    22.6      2004  18.1  42    4.5   26
Gl 581 b  0.31    15.7      2005  13.2  20    2.5   23
Gl 876 d  0.32    5.7       2005  6.5   155   4.0   20
Gl 674 b  0.35    11.8      2007  8.7   32    0.82  60
Gl 581 d  0.31    7.5       2007  2.7   50    1.23  16
Gl 581 c  0.31    5.0       2007  2.4   50    1.23  14
The masses of the stars and planets are given in Solar and Earth masses respectively. The year of discovery for each planet is listed, along with the half-amplitude, K, of the stellar reflex velocity (in m/s), the number of RV observations on which the detection was based, the average reported instrumental error (sigma) associated with the discovery observations, and a statistic, “µ”, which is K/sigma multiplied by the square root of the number of observations at the time of announcement. The µ-statistic is related to the power in the periodogram, and gives an indication of the strength of the detection signal at the time of discovery. In essence, the lower the µ, the riskier (gutsier) the announcement.
What will it take to get a habitable Earth? Let’s assume that a 0.3 solar mass red dwarf has an Earth-mass planet in a habitable, circular, 14-day orbit. The radial velocity half-amplitude of such a planet would be K=0.62 m/s. Let’s say that you can operate at 1.5 m/s precision and are willing to announce at µ=20. The detection would require N=2,341 radial velocities. This could be accomplished with an all-out effort on a proprietary telescope, but would require a lot of confidence in your parent star. To put things in perspective, the detection would cost ~10 million dollars and would take ~2 years once the telescope was built.
Alternately, if the star and the instrument cooperate to give a HARPS-like precision of 1 m/s, and one is willing to call CNN at µ=14, then the detection comes after 500 radial velocities. The Swiss can do this within 2 years on a small number of favorable stars using HARPS, and California-Carnegie could do it on a handful of the very best candidate stars once APF comes on line. Another strategy would be to talk VLT or Keck into giving several weeks of dedicated time to survey a few top candidates. Keck time is worth ~$100K per night, meaning that we’re talking a several-million dollar gamble. Any retail investor focused hedge funds out there want to make a dramatic marketing impact? Or for that matter, with oil at $68 a barrel, a Texas Oil Man could write a check to commandeer HET for a full season and build another one in return. “A lone star for the Lone Star.”
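Both observation counts follow directly from the µ-statistic defined earlier. A quick sketch (all inputs are figures quoted in this post):

```python
# mu = (K / sigma) * sqrt(N)  =>  N = (mu * sigma / K) ** 2
def n_obs_required(K, sigma, mu):
    """RV observations needed to reach a detection statistic of mu."""
    return (mu * sigma / K) ** 2

# Habitable Earth around a 0.3 Msun red dwarf: K = 0.62 m/s.
n_proprietary = n_obs_required(K=0.62, sigma=1.5, mu=20)   # ~2,341 observations
n_harps = n_obs_required(K=0.62, sigma=1.0, mu=14)         # ~510 (rounded to 500 above)
```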
If I had to bet on one specific headline for one specific star, though, here’s what I’d assign the single highest probability:
The Swiss Find a habitable Earth orbiting Proxima Centauri. Frequent visitors to oklo.org know about our preoccupation with the Alpha-Proxima Centauri triple system. We’ve looked in great detail at the prospects for detecting a habitable planet around Alpha Centauri B, and Debra Fischer and I are working to build a special-purpose telescope in South America to carry out this campaign (stay tuned for more on this fairly soon). Proxima b, on the other hand, might be ready to announce right now on the basis of a HARPS data set, and the case is alarmingly compelling.
Due to its proximity, Proxima is bright enough (V=11) for HARPS to achieve its best radial precision. For comparison, Gl 581 is just slightly brighter at V=10.6. Proxima is effortlessly old, adequately quiet, and metal-rich. If our understanding of planet formation is first-order correct, it has several significant terrestrial-mass planets. The only real questions in my mind are the inclination of the system plane, the exact values of the orbital periods, and whether N_p = 2, 3, 4 or 5.
The habitable zone around Proxima is close-in. With an effective temperature of 2670K, and a radius 15% that of the Sun, one needs to be located at 0.03 AU from the star to receive the same amount of energy that the Earth receives from the Sun. (Feel free to post comments on tidal locking, x-ray flares, photosynthesis under red light conditions, etc. Like it or not, if the likes of Gl 581 c is able to generate habitability headlines and over-the-top artist’s impressions, just think what a 1 Earth-Mass, T=300 K Proxima Centauri b will do…) A best guess for Proxima’s mass is 12% that of the Sun. An Earth in the habitable zone thus produces a respectable K=1.5 m/s radial velocity half-amplitude. It’s likely that HARPS gets 1.2 m/s precision on Proxima. A µ=15 detection thus requires only 144 RV observations. Given that Proxima is observable for 10 months of the year at -30 South Latitude, there are presumably already more than 100 observations in the bag. We could thus get an announcement of Proxima Cen b as early as tomorrow.
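A rough check on those Proxima numbers, using the standard circular-orbit radial-velocity formula (the constants are textbook values; the stellar mass, habitable-zone distance, precision, and µ threshold are the post's):

```python
import math

M_star = 0.12        # solar masses (the best-guess mass for Proxima)
a_hz = 0.03          # AU, the habitable-zone distance quoted above
m_planet = 1.0       # Earth masses

# Kepler's third law in solar units gives the orbital period:
P_years = math.sqrt(a_hz**3 / M_star)            # ~0.015 yr, i.e. ~5.5 days

# RV half-amplitude for a circular orbit (28.4 m/s for 1 Mjup at 1 yr;
# 317.8 Earth masses per Jupiter mass):
K = 28.4 * (m_planet / 317.8) * P_years**(-1/3) * M_star**(-2/3)  # ~1.5 m/s

# Observations needed at 1.2 m/s precision for a mu = 15 detection:
n_obs = (15 * 1.2 / K) ** 2                      # ~145, close to the 144 quoted
```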
1. NIKKI
June 17th, 2007 at 23:13 | #1
Hi Greg!
There are around 470 VLT/UVES spectra of Proxima Centauri at:
2. andy
June 17th, 2007 at 23:39 | #2
Where did Gl 849 go from the list of notable red dwarf planets? After all, it does seem to be the only known system apart from Gl 876 to have a jovian-mass planet around a red dwarf, and it also seems to have the greatest orbital radius, though granted it seems to be less studied than some of the other systems.
3. greg
June 17th, 2007 at 23:48 | #3
Hi Andy, NIKKI,
That’s an interesting point regarding the availability of the spectra…
Gl 849 is indeed a notable red dwarf planet, but I left it out because I was more interested in discussing the progression to lower and lower mass red-dwarf companions.
Gl 849 is interesting because it bucks the trend toward an apparent paucity of true Jovian-mass companions to red dwarf stars (as do Gl 876 b and c).
4. June 19th, 2007 at 13:59 | #4
Hi Greg,
Given the cost that you mentioned above of surveying for several weeks a red dwarf in search of a potential habitable Earth using VLT or Keck, I can see why you would have to be very prudent in your choice of candidates and pretty darn sure that it’s worth the gamble. Thanks for enlightening us.
5. June 23rd, 2007 at 14:28 | #5
In case anyone was wondering what the mass of Earth would be if it were expressed in terms of Jupiter’s mass, 1.0 Earth mass is equal to 0.003144 of Jupiter’s mass.
Just thought you’d like to know in the event that you’re searching for one of those “habitable Earths” around one of those red dwarfs.
6. pvanes
June 25th, 2007 at 00:04 | #6
Interesting stuff. You say that we're currently 2±1 years away from the detection of the first habitable Earth-mass planet. How do you go about calculating that? Also, what is the certainty that we will find one within 3 years? (85%?)
7. greg
June 25th, 2007 at 05:56 | #7
Hey Pieter,
The estimate is based on the (more-or-less) known rate at which the radial velocity observations are being accumulated by the various teams, and on the (theoretically completely reasonable) expectation that 1-Earth mass planets will be just as common as the 5-Earth mass planets that are already turning up around the nearby red dwarf stars. Assigning a one-sigma error bar pins down my sense of the uncertainty, but it was also slightly tongue-in-cheek for the benefit of the blogosphere.
I do think there’s an 85% chance that we’ll have a habitable Earth within three years. The real question is who’s going to get there first…
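For what it's worth, reading the "2±1 years" estimate as a Gaussian reproduces both probabilities quoted in this thread. The Gaussian interpretation is an assumption on my part; the post never states a distribution:

```python
from statistics import NormalDist

# Announcement time in years from now, modeled as N(mu=2, sigma=1).
t = NormalDist(mu=2, sigma=1)
p_within_1yr = t.cdf(1)   # ~0.16, matching the "~15% within one year" above
p_within_3yr = t.cdf(3)   # ~0.84, matching the "85% within three years"
```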
Comments are closed.
Pipeline coordinator discusses project
Posted: Sunday, November 28, 2010
What would have to happen to make the Alaska natural gas pipeline a reality?
That's the $26 billion question Alaskans have to ask if they want the state's long-hoped-for project to get built. Tuesday night at the Alaska Islands and Ocean Visitor Center, Larry Persily, federal coordinator for the Alaska Natural Gas Transportation Projects, spoke at a town meeting to update Homer on the pipeline.
"Alaskans need to understand how natural gas markets work so you can make the decision," Persily said. "What do I want? How much am I willing to subsidize in-state gas distribution?"
Appointed by President Barack Obama in 2009, Persily is the federal coordinator of the Alaska Natural Gas Transportation Projects, an office created by Congress in 2004 under the Alaska Natural Gas Pipeline Act to expedite federal permitting and construction of a pipeline that would deliver natural gas from the Arctic to Lower 48 markets.
"Our job is to make sure no federal agency makes a difficult project any more difficult than it needs to be," Persily said.
Permitting isn't the big problem. For example, the law creating the Office of the Federal Coordinator says that if the gas line goes ahead, agencies have to complete environmental impact statements in 18 months.
"There's nothing in Washington, D.C., stopping this project," Persily said. "What's stopping this project is economics."
Persily said he sees a couple of things that need to happen to build an Alaska gas pipeline:
* A demand for Alaska natural gas in 2020 and beyond at prices high enough to cover shipping costs, and
* Assurance to producers that they won't bear the cost alone of the pipeline.
"Producers are worried that they're the deepest pocket in Alaska. What worries them is they are the only pocket," Persily said.
As Kenai Peninsula Borough Assembly Member Charlie Pierce put it, "How much skin do we want to put in the game? How much do we want to give in order to get?"
Pierce also is the general manager for the Kenai Peninsula region of Enstar natural gas, and was one of several policy makers and industry representatives listening to Persily.
The Alaska Natural Gas Pipeline Act authorizes an $18 billion federal loan guarantee to help a project developer build a gas pipeline from the Arctic to the Lower 48. Estimates for the pipeline range from $26 to $42 billion. Congress is considering raising the loan guarantee to $30 billion. The act won't support an all-Alaska pipeline, but it does allow in-state use of the gas.
Two consortiums have formed to consider building a pipeline: the Alaska Pipeline Project, a joint venture of TransCanada and ExxonMobil, and Denali -- The Alaska Gas Pipeline, a joint venture between BP and ConocoPhillips. Those projects would be a pipeline from the North Slope to Fairbanks and to Alberta, Canada, joining the North American pipeline grid.
Alaska Pipeline and Denali both held open seasons to probe interest in shipping gas. An open season could be compared to a shopping center developer looking for major tenants before committing to build.
Right now, both projects are negotiating financial terms with potential shippers. The next step would be the signing of what are called Precedent Agreements. That could happen this month or early next year. A Precedent Agreement commits shippers to backing development costs.
"It really is significant if sometime in 2011 you start seeing Precedent Agreements," Persily said.
Several factors could make Alaska natural gas attractive to the American gas market. Natural gas production in Western Canada is declining, which means there will be room in the North American gas grid over the next decade. Shale gas production also has increased and is now 20 percent of the U.S. gas supply, creating more stable prices.
Shale gas has environmental issues. To produce shale gas, drillers inject water into underground reservoirs, a process called hydraulic fracturing, or fracking.
"If I were a utility, I'm not sure I'd bet on shale gas meeting 100 percent of the needs," Persily said.
Environmental concerns about coal also have made electric utilities think about natural gas, Persily said. The big market for Alaska natural gas isn't the home heating market, but for making electricity.
"In many ways the EPA is our best friend," Persily said of the U.S. Environmental Protection Agency. "Anything the federal government or states do that makes utilities more nervous about burning coal and drives them to gas, drives up gas demand."
Some Alaskans have advocated a bullet line, a smaller natural gas pipeline that would bring natural gas from the North Slope to Fairbanks, Anchorage and other Railbelt markets. The problem with a bullet line is that those markets won't support the total cost of construction and the state would have to subsidize it at a cost of $4 to $5 billion, Persily said. That also won't spur production on the North Slope. Instead of subsidizing a bullet line, why not use that money to leverage a bigger gas line? Persily asked.
Developing a large gas pipeline to the Lower 48 would lead to more exploration, Persily said.
"Because you're moving so much gas off the Slope in a big project, they're going to have to start exploring for gas immediately," he said.
A change in the tax structure would be needed to make producers and shippers more comfortable about investing in a gas pipeline, Persily said. Maybe it could be tax breaks on property that would be deferred until gas starts to ship. Maybe it could be a change in how natural gas is taxed compared to oil.
Splitting taxes on natural gas and oil can be complicated, warned Rep. Paul Seaton, R-Homer. Sometimes the same equipment is used to explore and drill for natural gas and oil.
"It's a lot more complicated than saying 'split apart oil and gas' because you then have to split apart production," Seaton said.
Key to the success of a natural gas pipeline is support for the project by the public -- and confidence it will happen.
"If the public doesn't believe the gas line is possible, then they don't give permission to the legislators to work on a deal," Persily said.
For updates and information on the Alaska natural gas pipeline, visit the Office of the Federal Coordinator website at
Michael Armstrong can be reached at
I always hear a lot about The Rule of Thirds. I'd like to know more about other 'tried-and-true' composition techniques (not special effects) that can make a photo more interesting.
In particular, I'd especially like to know:
• The name of the technique
• Any particular types of settings where the technique is particularly useful
• Interesting ways to 'break' the rule
I think this is best covered by the composition-basics tag. – mattdm Jul 12 '11 at 3:52
You're right. I feel pretty silly having asked the question now. I did a little more digging and I found a site where I can put in questions like this and get immediate feedback without wasting other people's time or feeding the ego's of elitist technocrats. I just typed this question there just now and was provided a listing of numerous sites with well-developed ideas and thorough explanations. You just gave me a large portion of my life back. Thanks! – Ian Felton Jul 12 '11 at 4:46
The point of directing you to the existing tag isn't to be disparaging or "technocratic" or say that you're wasting time. It's to make it easy to find what are essentially direct, already-there-for-you answers to your question. That, in addition to the answers given below, should be immediately helpful to you or to anyone else who comes along later. I'm not sure that really warrants the tone of your response. – mattdm Jul 12 '11 at 5:11
3 Answers 3
Accepted answer:
While this isn't a duplicate, this can essentially be answered by linking to a few questions we've collected regarding other composition techniques (thanks largely to @JayLancePhotography!):
Searching the composition and composition-basics tag provides a wealth of knowledge.
Apart from rules of thumb like the rule of thirds, there are many general compositional principles which are broadly the same in all art forms: things such as balance, space, pattern, texture, lines and shapes, light and shadow.
Very common compositional techniques in photography that I can think of
• leading lines - leading the viewer's eye through the image
• patterns, and I think even more importantly broken/interrupted patterns
• selective focus or color (attracting attention to the subject by blurring/desaturating the background; I guess vignettes fall into this category)
• negative space
• unusual perspectives - images of objects from a viewpoint not usually seen (ant's eye view of a flower or pet), extreme wide angle or tele shots
• framing - leaving space in front of the subject if moving, or looking out of the picture
• with wide angle images, having strong foreground interest
• use of strong contrast, bright objects or bright colors to draw the viewer's eye
• lines - diagonal lines and curves are more "dynamic", while vertical lines imply strength and horizontal lines are more static and calming
• horizon - generally should not be placed in the center of the image, either the foreground or sky should be given more space - one exception would be water reflections where dead center often works
• in general the main subject should be off centre (rule of thirds or otherwise) but usually needs balancing by other objects
• triangles generally make for strong compositions
I think the best images are ones the attract the eye even when looking at a small thumbnail, and you're not sure what the subject is, but the eye is attracted by a strong pattern, shape or color.
The article below is worth a read. It covers a lot of the above, and more.
Wikipedia article: Composition
Also, you might want to look into Gestalt Theory, very relevant to photographic composition. For example here: PDF
For interesting ways to break a rule, learn why the rule works and break it when you want to achieve the opposite effect. For example, break the rule of odds when you want to stress symmetry and dullness of a scene.
We already know that black-and-white filters can be used to see the geometric layout of a scene without showing or emphasizing the colors (i.e. where the colors are not important).
My question is: is there a specific scenario where we should use the sepia effect? And what is the significance of this effect for a photo?
Note: Wikipedia only gives me this piece of detail: "...give a black-and-white photographic print a warmer tone and to enhance its archival qualities..."
2 Answers
Accepted answer:
Actual sepia toning of silver/gelatin prints works by replacing some or all of the silver with a substance that doesn't react as readily to oxygen (tarnish), so it made prints last longer. It also lends a warmer (browner/yellower) tone to the image, which can be very pleasing for some subjects (particularly people, where a stark black-and-white or a colder/bluer toned print is more "mechanical" and less organic).
As an effect on a digital photograph, it's entirely an aesthetic choice. It makes no difference (usually) to the longevity of the print. If it looks right, it is right. If it looks wrong, it is wrong. It's entirely subjective, but it's usually a better idea to go slightly warm (not necessarily all the way to sepia) with, say, a portrait than to go cold or to make a pure black-and-white. It just seems more "alive" to most people, but, again, that's subjective.
I agree with the idea that if your work is good, sepia will only add a different flavour to it, and if it's not as good as you wanted or expected it to be, sepia tones won't change the final result.
Can you describe in a bit more depth what the different flavor would be, and why and when one might want that? – mattdm Dec 28 '12 at 14:23
I have a '5 Way Reflector' that I want to use in my photography, which includes the following 5 options:
• Gold reflective
• Silver reflective
• Gold/silver alternating pattern reflective
• White
• Scrim/opaque
• Black*
What is the basic effect that each of these surfaces produces, and what are some examples of when I would want to use each of them?
*OK, busted... This would make it a 6-way reflector, but often these products swap out one of the previous 5 options for black, and since we're here anyway, why not shoot for completeness in the answer. :-) – Jay Lance Photography Feb 22 '11 at 6:44
1 Answer
Accepted answer:
The white reflector provides the softest, most natural-appearing light. The reflected light is very diffuse, and matches the colour temperature of the ambient (or main) light. Except when it is used as a main front light on a backlit subject (and similar situations) it doesn't add any noticeable highlights or shadows of its own, apart from a soft catchlight in the eyes (or other elements of the subject that have specular reflection). Used as a main light source, especially as a front light, it may be too "flat". Its main failing, though, is that in order to provide enough reflected light for many shots, it needs to be very close to the subject in relation to its size. It may not be possible to both frame and light the shot the way you want using a white reflector.
The silver reflector has a lot in common with the white reflector, except that it is more reflective and usually much more directional. The light from a silver reflector is "hotter" (it is more likely to cause noticeable highlights and shadows), but it also has a greater reach (the reflector can be farther away from the subject). You can get much lower lighting ratios (the range of light intensity falling on the subject) in single-light setups with a silver reflector than with a white reflector. Because of the directionality of the reflected light, it can reveal texture better than a white reflector -- but that's a two-edged sword, since it also reveals skin problems. Placement and aiming are also more critical than with a white reflector; outdoors it's almost always necessary to have an assistant hold a silver reflector rather than just clamp it to a C-stand.
The gold reflector has all of the lighting characteristics of the silver reflector, except that it radically shifts the colour temperature. In the film days, it would be used to give people a healthy tanned appearance, like using an 81C warming filter, but without affecting the rest of the scene. It can still serve much the same function, but it will often look unnatural unless the overall light is balanced very cool (like "north light" -- indirect sunlight from a blue sky, which is often at 6500 kelvins or more). Note that the gold reflector will shift the colour temperature of everything it hits, so if you're photographing a model wearing white, and that white has to look white in the final picture, you're just going to have to put things back the way they were in post anyway.
The zebra reflector (the one striped silver and gold) is a better compromise for warming the colour temperature of the reflected light in many circumstances. It adds that glow of tan, but not to the same degree as the gold reflector.
The scrim is not a reflector (although a white one can be used as a reflector in close if you're really in a pinch). They come in a variety of materials, each used for a slightly different purpose. A white, tight-weaved scrim (one you can't see through clearly when you hold it close to your eyes) is used most often to soften a harsh light source. Think of it as a shoot-through umbrella (or a softbox) for the sun. This is the type normally included in a multi-reflector kit. It will allow you to take pleasing pictures of people in lighting that would normally be far too harsh. (White scrims used in the movie industry can be the size of a large event canopy/tent, but are usually of a looser weave so the sunlight isn't completely flattened.) It does, though, hold back a considerable amount of light, so you need to be careful balancing your subject and background (no scrim is that big).
That's where the looser-weaved varieties of scrims come in. Often, they are black rather than white, and they are placed behind the subject you are lighting. Think of them as a neutral density filter that affects only the background. (A white one will also brighten and reduce the contrast of the background.) It's not likely you'll find a black scrim in a 5-in-1 kit, but it's worth including for completeness.
The black "reflector" is quite emphatically not the same thing as no reflector at all. Its task is to prevent uncontrolled reflections from falling on your subject (or other elements in the picture). It is often used to increase contrast, but it can also keep that lovely mint-cream green wallpaper from giving your otherwise carefully-lit subject a sort of undead pallor.
As always, I invite edits and suggestions for editing -- the point of the game is to have the best answer on this site, not reputation or self-promotion.
great answer! +1 – JoséNunoFerreira Aug 9 '11 at 11:46
actually, i'd like to see the black reflector section expanded :D – JoséNunoFerreira Aug 9 '11 at 11:46
Thank you for the answer above sir. – user19416 Apr 17 '13 at 9:08
Meta Battle Subway PokeBase - Pokemon Q&A
Regular poison vs. toxic?
Bad poisoning obviously does a lot of damage but takes time, whereas regular poison does considerably less overall, but does more health damage in the short term, while toxic would be "warming up". My question is basically how many turns toxic needs to be active to do more damage than poison would. Thinking up a new toxic spikes strategy, see. Any thoughts?
asked Jun 2, 2013 by Mechanism L
2 Answers
Best answer
A poisoned Pokemon loses 1/8 hp per turn
A badly poisoned Pokemon loses 1/16 hp on the first turn, and the amount lost increases by 1/16 each turn
So let's see:
Turn 1:
Poison: 1/8 hp loss
Badly poisoned: 1/16 loss
Turn 2:
Poison: 1/4 loss
Badly poisoned: 3/16 loss
Turn 3:
Poison: 3/8 loss
Bad poison: 3/8 loss
Turn 4:
Poison: 1/2 loss
Bad poison: 5/8 loss
So, by the third turn, they tie, and bad poison surpasses at the fourth turn.
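Written as closed-form totals, the cumulative damage described above is easy to check (a sketch of the mechanics as laid out in this answer):

```python
from fractions import Fraction

def poison_total(turns):
    """Cumulative HP lost to regular poison: a flat 1/8 per turn."""
    return Fraction(turns, 8)

def toxic_total(turns):
    """Cumulative HP lost to bad poison: 1/16 + 2/16 + ... = n(n+1)/32."""
    return Fraction(turns * (turns + 1), 32)

# They tie on turn 3 (3/8 each); toxic pulls ahead on turn 4 (5/8 vs 1/2).
# From full health, toxic faints a Pokemon in 6 turns, regular poison in 8.
```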
answered Jun 2, 2013 by Dudeicolo
selected Jun 2, 2013 by Mechanism L
Perfect. Thanks.
Standard poisoning deals 1/8 damage every turn; however, toxic (badly poisoned) deals 1/16 on the first turn, 2/16 (1/8) on the second turn, 3/16 on the third turn, and so on.
From full health, toxic will cause a Pokemon to faint in 6 turns, which is useful against walls like Blissey that can hold out that long. Standard poison takes 8 turns to cause fainting on a Pokemon at full health, so toxic seems better.
If you use toxic spikes twice it badly poisons anyway.
So to answer your question: toxic in 3 turns does the same total damage as standard poison, and in 4 turns it does more damage
answered Jun 3, 2013 by natanjames
edited Jun 2, 2013 by natanjames
Sunday, May 23, 2010
What if 25% of the Cars were plug in...How much power is needed?
Last week Toyota announced a partnership with Tesla Motors backed by $50M in investments. Tesla is the manufacturer of the trendy $100K all-electric plug-in sports car and has a model for us all in the works, the Model S. Toyota wants the technology, and I can just imagine a Tesla/Prius in every garage. Gov. Schwarzenegger hailed the joint venture as the future and asked us all to imagine CA with more plug-ins.
“What we are witnessing today is an historic example of California’s transition to a cleaner, greener and more prosperous future. We challenged auto companies to innovate, and both Tesla and Toyota stepped up in a big way, not only creating vehicles that reduce emissions and appeal to consumers but also boosting economic growth,” said Governor Schwarzenegger.
How will all these plug-ins be powered? Everyone seems to think that electricity comes from a plug in the wall. Power has to come from somewhere. How will we make a green lifecycle from source to vehicle? Wind turbines? Coal? Gas? Solar? Nuclear?
Let's break it down.
There were 136,000,000 registered passenger vehicles in 2007. Let's say 25% of the cars suddenly become plug-ins. Therefore: 34,000,000 vehicles.
16.8 kW of charging adds 56 miles of range per hour, per the Tesla website.
Assuming 12,000 miles driven per year, we have 214 hrs of charging at 16.8 kW, or 3,600 kWh per car.
With 34M cars, we have 1.2 x 10^11 kWh.
A new nuclear power plant generates 13 billion kilowatt-hours (kWh) or 1.3 x 10^10 (assuming 1600MWe and 92% availability).
So the final answer is: 9.4 new nuclear plants would be required to keep all those vehicles charged. One plant charges 3.6M vehicles. There were 16,153,952 new vehicles (cars trucks and SUVs) sold in 2007.
Conclusion: we need one new 1600 MWe plant a year if 25% of new cars are all-electric, using the numbers and 2007 sales rates above.
Electric vehicles are great; we just need to remember that the power source is part of the equation, and that conservation and alternative energy will not be enough to account for future energy demands.
3 billion barrels of gasoline were refined in 2006 from 5.5 billion barrels of crude oil. 1.6 x 10^9 gallons, or 3.8 x 10^7 barrels, of gasoline would be displaced per year if 25% of new cars were all-electric (2007 car sales and 30 mpg). Applying the gasoline-to-crude ratio, that equates to about 7 x 10^7 barrels of crude oil saved per year (2006 refining figures).
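The arithmetic above is easy to reproduce. A Python sketch using only the figures assumed in the post:

```python
# All inputs are the post's assumed figures, not authoritative data.
vehicles = 0.25 * 136_000_000            # 25% of 2007 registered passenger vehicles
miles_per_year = 12_000
charge_kw, miles_added_per_hour = 16.8, 56   # Tesla charging figures quoted above

kwh_per_car = miles_per_year / miles_added_per_hour * charge_kw   # 3600 kWh/year
total_kwh = vehicles * kwh_per_car                                # ~1.2e11 kWh

plant_kwh = 1_600_000 * 0.92 * 8760      # 1600 MWe at 92% availability, in kWh
plants_needed = total_kwh / plant_kwh    # ~9.5 (the post rounds plant output
                                         # to 1.3e10 kWh and gets 9.4)

gallons_saved = 0.25 * 16_153_952 * miles_per_year / 30  # 30 mpg -> ~1.6e9 gal
barrels_gasoline = gallons_saved / 42                    # 42 gal/barrel -> ~3.8e7
barrels_crude = barrels_gasoline * 5.5 / 3               # gas-to-crude ratio -> ~7e7

print(round(kwh_per_car), round(plants_needed, 1), f"{barrels_crude:.1e}")
```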
Numbers and calculations are for illustrative purposes. I am hoping for credit for error carried forward--ECF.
Good points raised from readers comments:
1. The number-of-cars calculation I used omits trucks and SUVs, reducing the overall number of cars.
2. What about reduced electricity demand off-peak at night? Good question. I did not take that into account; however, smart-grid technology and off-peak charging will mitigate the effects of EVs. There is also talk of V2G, or vehicle-to-grid, where the electric vehicle could actually supply power during peak hours (the most expensive time of day) and then charge during off-peak (cheaper) times of day.
1 comment:
1. what is the peak off-peak differential of the current generation capacity? | <urn:uuid:248afeb6-66e7-4706-8b57-d9492ee15d0f> | 3 | 2.6875 | 0.311907 | en | 0.939256 | http://powertrends.blogspot.com/2010/05/what-if-25-of-cars-were-plug-inhow-much.html |
I'm writing several JavaScript plugins that are run automatically when the proper HTML markup is detected on the page. For example, when a tabs class is detected, the tabs plugin is loaded dynamically and it automatically applies the tab functionality. Any customization options for the JavaScript plugin are set via HTML5 data attributes, very similar to what Twitter's Bootstrap Framework does.
The appeal to the above system is that, once you have it working, you don't have worry about manually instantiating plugins, you just write your HTML markup. This is especially nice if people who don't know JavaScript well (or at all) want to make use of your plugins, which is one of my goals.
This setup has been working very well, but for some plugins, I'm finding that I need a more robust set of options. My choices seem to be having an element with many data-attributes or allowing for a single data-options attribute with a JSON options object as a value. Having a lot of attributes seems clunky and repetitive, but going the JSON route makes it slightly more complicated for novices and I'd like to avoid full-blown JavaScript in the attributes if I can. I'm not entirely sure which way is best.
1. Is there a third option that I'm not considering?
2. Are there any recommended best practices for this particular use case?
2 Answers
I've been working with a similar pattern over the past several months. My personal opinion is that it is ok to mix these two conventions depending on the needs of the plugin. If you have a small number (i.e. < 5) of well defined parameters or if you want to select elements based on a particular attribute then data attributes for each parameter is ok. If you have a large number of parameters, or if the parameters are highly dynamic (i.e. request parameters to an ajax call) then json within a data attribute may be more beneficial.
In any case, IMO it is very important to clearly document what the parameters are and how they should be used.
+1 for "clearly document"; the rest of the answer is pretty good, too. Documentation is not optional for anything other than a toy, one-developer, no-one-else-will-ever-look-under-the-hood app. And even then its a good idea. – Peter Rowell Sep 23 '12 at 16:18
As you pointed out, both of your approaches have good points and drawbacks. I am currently working with the latter, which I personally find easier to use. However, the first approach seems to be easier to read for others, so instead of looking for another, better solution you could just use both your ideas. How about this?
<div data-foo="bar" data-bar="foo" data-other="example">
<div data-options='{"foo": "bar", "bar": "foo", "other": "example"}'>
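A sketch of how a plugin could accept either convention, in plain JavaScript. The helper name and the plain-object stand-in for the element's attributes are invented for illustration; note that the data-options value must be valid JSON (double-quoted keys and strings) for JSON.parse to accept it:

```javascript
// Merge per-option data-* attributes with a single data-options JSON blob.
// parseOptions and its defaults are illustrative names, not a real API.
function parseOptions(attrs, defaults) {
  var opts = Object.assign({}, defaults || {});
  // Convention 1: one attribute per option, e.g. data-foo="bar"
  Object.keys(attrs).forEach(function (key) {
    if (key.indexOf('data-') === 0 && key !== 'data-options') {
      opts[key.slice(5)] = attrs[key];
    }
  });
  // Convention 2: a single JSON object, e.g. data-options='{"foo": "bar"}'
  if (attrs['data-options']) {
    Object.assign(opts, JSON.parse(attrs['data-options']));
  }
  return opts;
}

console.log(parseOptions({ 'data-foo': 'bar', 'data-bar': 'foo' }));
console.log(parseOptions({ 'data-options': '{"foo": "bar", "bar": "foo"}' }));
// both print { foo: 'bar', bar: 'foo' }
```

In a browser the attribute map would come from element.attributes or element.dataset rather than a plain object.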
I am trying to understand how a radio button is created in a dynamic form by reading from an XML file, using NetBeans 7.0.
I know the radio button is created because of the XML being read from the database, but I cannot see how the radio button is created.
Also, since I don't know where to place the breakpoints, I can't see how I would debug the creation of components in the dynamic editor.
Maybe I am taking the wrong approach, so how do I efficiently debug an application like this?
1 Answer
All you need might be a conditional breakpoint. Both NetBeans and Eclipse allow you to edit the properties of a breakpoint by right-clicking it and adding a condition (a piece of code) that the debugger will evaluate every time it reaches the breakpoint, stopping only if the condition is true. In your case the condition would be some string that identifies the component being created dynamically as the one you're interested in.
We've long had coding standards for our .NET code, and there seem to be several reputable sources for ideas on how to apply them, which evolve over time.
I'd like to be able to put together some standards for the SQL that is written for use by our products, but there don't seem to be any resources out there on the consensus for what constitutes well-written SQL.
Pinal Dave has a list of coding standards on his site. They look like a fair basis for a set of standards. – Will A Jan 17 '11 at 15:05
There is a related question on SO. – Scott Whitlock Jan 18 '11 at 17:54
@Scott that only covers indenting; nothing about naming, use of cursors/stored procedures/choices of datatype or anything that actually affects the quality of the code... – Rowland Shaw Jan 18 '11 at 18:06
exactly, hence why I said it was "related", not a "duplicate". – Scott Whitlock Jan 19 '11 at 14:41
3 Answers
In my experience the main things I'd look for would be:
• Table and column naming - look at whether you use ID, Reference or Number for ID type columns, singular or plurals for names (plurals being common for table names - e.g. THINGS, singular for column names - e.g. THING_ID). For me the most important things here are consistency which avoids people wasting time (for instance you don't run into typos where someone has put THING as a table name because you just know intuitively that table names are never singular).
• All creates should include a drop (conditional on the object existing) as part of their file. You might also want to include grant permissions, up to you.
• Selects, updates, inserts and deletes should be laid out one column name, one table name and one where clause / order by clause per line so they can be easily commented out one at a time during debugging.
• Prefix for object types, particularly where they might be confused (so v for view being the most important). Not sure if it still applies, but it used to be inefficient for stored procedure names other than system procedures to begin with sp_. It's probably best practice to differentiate them anyway; usp_ is what I've used most recently.
• A standard indicating how the name of a trigger should include whether it's for update/insert/delete and the table it applies to. I have no preferred standard but this is critical information and must be easy to find.
• Standard for ownership of objects in earlier versions of SQL Server or the schema it should exist in for 2005 and later. It's your call what it is but you should never be guessing who owns something/where it lives) and where possible the schema/owner should be included in the CREATE scripts to minimise the possibility of it being created wrongly.
• An indicator that anyone using SELECT * will be made to drink a pint of their own urine.
• Unless there is a really, really good reason (which does not include laziness on your part), have, enforce and maintain primary key / foreign key relationships from the start. This is after all a relational database not a flat file and orphaned records are going to make your support life hell at some point. Also please be aware that if you don't do it now I can promise you you'll never manage to get it implemented after the event because it's 10 times the work once you have data (which will be a bit screwed because you never enforced the relationships properly).
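To make the drop-before-create and one-item-per-line conventions above concrete, a T-SQL sketch (all schema, table and constraint names are invented for illustration):

```sql
-- Conditional drop before the create, schema-qualified (ownership explicit)
IF OBJECT_ID('Data.THINGS', 'U') IS NOT NULL
    DROP TABLE Data.THINGS;

CREATE TABLE Data.THINGS (
    THING_ID   int         NOT NULL CONSTRAINT PK_THINGS PRIMARY KEY,
    THING_NAME varchar(50) NOT NULL
);

-- One column name, table name and where/order-by clause per line, so each
-- can be commented out individually while debugging
SELECT
    t.THING_ID,
    t.THING_NAME
FROM
    Data.THINGS t
WHERE
    t.THING_NAME LIKE 'A%'
ORDER BY
    t.THING_NAME;
```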
I'm sure I've missed something but for me they're the ones that actually offer real benefit in a decent number of situations.
But as with all standards, less is more. The longer your coding standards, the less likely people are to read and use them. Once you get past a couple of well spaced pages start looking to drop the stuff that isn't really making a practical difference in the real world because you're just reducing the chance of people doing any of it.
EDIT: two corrections - including schemas in the ownership section, removing an erroneous tip about count(*) - see comments below.
Some strange choices... "SELECT COUNT(*)" is bad? Ever heard of schemas (which is not the same as owner)? Your others are good though – gbn Jan 17 '11 at 19:50
@Jon Hopkins - I know why its bad to use SELECT *. It would be great if you could say why using SELECT COUNT(*) is bad. – k25 Jan 17 '11 at 21:00
@gbn @k25 - A few years back (2002?) I had a DBA who was very hot on count(*) but Googling in response to your questions it seems that this is now outdated (if it was ever true). sqlservercentral.com/articles/Performance+Tuning/adviceoncount/… (Registration required). She was primarily an Oracle DBA so it may have been a genuine issue there which she assumed was also an issue for the SQL optimiser. – Jon Hopkins Jan 17 '11 at 21:32
@gbn - Yes I have, though I've been relatively hands off since they were introduced so my automatic reaction was users. I'll update the answer to cover schemas. – Jon Hopkins Jan 17 '11 at 21:34
@gbn, @k25 - More digging on count(*). Apparently this was an issue in Oracle 7 and earlier, fixed in 8i and beyond. Not clear if it was ever an issue in SQL Server but certainly isn't any more. My DBA was out of date it would seem. – Jon Hopkins Jan 17 '11 at 21:38
That's because there is no consensus. Just as an example, I would have different answers for at least half the items in Jon Hopkins' list, and based on the amount of detail on his list, it's a safe guess that we both work with databases for a living.
That said, a coding standard is still a good thing to have, and a standard that everyone on the team understands and agrees with is a better thing, because that standard will more likely be followed.
share|improve this answer
+1. I think the most important thing is that you've got consistency among your team. – Dean Harding Jan 17 '11 at 23:12
out of interest what would you do differently? Are they largely matters of taste (layout and so on) or are there any "hard" errors? – Jon Hopkins Jan 18 '11 at 9:44
@Jon: no hard errors, just subjective things like singular table names, hatred of triggers, etc.. BTW, "SELECT *" is fine inside an "EXISTS()". – Larry Coleman Jan 18 '11 at 13:19
fair example (and I do use it with EXISTS and don't force myself to drink urine). – Jon Hopkins Jan 18 '11 at 14:02
In addition to Jon Hopkins' answer...
• Separate internal vs external objects
• IX, UQ, TRG, CK etc for constraints and indexes etc
• lower case or CapsCase for client facing eg uspThing_Add
• For internal objects, make them explicit if "non default"
• UQ = unique constraint
• UQC = unique clustered constraint
• PK = primary key
• PKN = nonclustered primary key
• IX = index
• IXU = unique index
• IXC = clustered index
• IXCU or IXUC = unique clustered index
• Use schemas to simplify naming + permissions. Examples:
• Helper.xxx for internal procs
• HelperFn.xxx for udfs
• WebGUI.xxx for some facing code
• Data and/or History and/or Staging for tables
I'm reading Ralph Kimball's books and I'm currently exploring the following data warehouse schema.
AdventureWorks 2008 Data Warehouse Schema
Are both dimension & fact tables populated during the data warehouse creation/update? How often? What about the DimDate table? Do we populate it with all possible dates or only with date used by facts tables?
What is the standard process of generating a data warehouse?
What, you're not going with Inmon's DW 2.0? Or Dan Lindstedt's Data Vault? There's lots of standards... ;) – TMN May 19 '11 at 14:51
@TMN: very interesting. Do you have more? – user2567 May 19 '11 at 14:53
Not handy, those are just the big ones (in addition to Kimball) that I remember off the top of my head. I know Oracle and SAP have their own DW products, and ISTR that there was a coalition of open-source groups (Talend and some others) that were proposing (or going to propose) a standard or guidelines or something. I'll see if I can find my notes and follow up with any substantial links. – TMN May 19 '11 at 15:10
3 Answers
Are both dimension & fact tables are populated during the data warehouse creation/update?
Vague and hard to answer.
How often?
Harder to answer
What about the DimDate table. Do we populate it with all possible dates or only with date used by facts tables.
Really, really hard to answer.
Please keep reading. You need to read up on "Dimension Conformance" and Kimball's idea of the "Dimension Bus".
1. Dimensions must be populated first. They "accrete" information, sometimes from multiple sources. A common dimension (like "product") will often have multiple viewpoints in multiple applications. This leads to attributes which are loaded separately from separate sources.
2. Dimensions are "conformed". Data coming in may agree with the existing dimension. Good. Data may not agree. There are many standard "Slowly Changing Dimension" (SCD) algorithms to manage change in dimensional attributes. This is a deep and complex subject. Keep reading.
3. Facts are matched to conformed dimensions when they're loaded. Fact load schedules depend on the source applications and the warehouse purpose. There's no simple answer to "how often?"
4. Some dimensions can be pre-populated (like Time) because they're either from external sources (like Time) or they're essentially static. In some cases, it's a handy business fiction to simply declare the dimension static and use special almost-manual utilities to tweak the dimension when a business change occurs. Sometimes a dimension is defined by law or other external standards.
Pre-populating time is common because it slightly simplifies dimensional conformance. Also, the Time dimension is never a "Slowly Changing Dimension" because time instances never have modifiable attributes.
Accumulating time as rows are loaded can be annoying because a row in the time dimension often includes rich information like accounting periods and other facts that aren't trivially derivable from the simple Y/M/D date that's available in the input.
I'm going to share my one experience building a data cube. I used SQL Server Analysis Services 2005. The company is in the retail business and has stores in several locations. Each store has its own database server but uses the same database schema.
First I pull data from each site into one central database. This is done periodically, in my case monthly. This central database uses the same schema as the site databases.
Then from this central database, data is massaged to form the 'star schema'. In my case, I wanted to build a sales cube. This sales cube should be able to be sliced by product, date, and location. The sales cube should be able to show sum of item quantity sold, sum of gross sales, and sum of net sales.
In order to create this star schema, I chose to create some views to flatten some table references:
• One view to join sales header and sales detail tables, exposing sales date, product code, location code, quantity sold, unit price, qty * price, and qty * price - discount. This can be called sales fact table.
• One view to join product table with its subtables like product category etc. This is the data source for the product dimension.
• One view to join location table with its subtables. This is the data source for the location dimension.
For date, I created a calendar table containing all dates from 1 Jan 2001 to 31 Dec 2030 that looked like this:
|date       |year|month|dayofweek|
|2001-01-01 |2001|1    |0        |
This calendar table is the data source for date dimension.
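Such a calendar table can be generated rather than typed by hand. A minimal Python sketch (the column set matches the table above; Python's weekday() convention of Monday = 0 happens to match the sample row, since 2001-01-01 was a Monday):

```python
from datetime import date, timedelta

def calendar_rows(start, end):
    """Yield (date, year, month, dayofweek) rows for a date dimension.

    dayofweek uses Python's convention: Monday = 0 ... Sunday = 6.
    Extra columns (accounting period, holiday flag, ...) would be added here.
    """
    d = start
    while d <= end:
        yield (d.isoformat(), d.year, d.month, d.weekday())
        d += timedelta(days=1)

rows = list(calendar_rows(date(2001, 1, 1), date(2030, 12, 31)))
print(rows[0])   # ('2001-01-01', 2001, 1, 0)
print(len(rows)) # 10957 days in 2001-2030 inclusive
```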
Next I created a new 'analysis services' project in visual studio. I set the views and tables above as data sources, linked the product code in the sales view to the product dimension, link the sales date to the date dimension, etc, and build the cube.
Analysis services will then set the cube definitions and populate the cube and dimensions. After this process is done, the cube is ready for use.
So the cube is populated when you process it. It will stay the same if you don't reprocess it.
Daily updates look reasonable to me, but choose the period based on business requirements. I developed a solution for a trading company; we chose daily overnight updates. Today you can see all transactions and inventory up to and including yesterday, which is good enough for analytics. We also avoided performance problems with the transaction systems by reading the data before business users start updating it.
When you get data from the transaction system, first update your dimensions; you cannot insert data into fact tables if you don't have the corresponding dimensions.
Populate DimDate with all dates in the range. For example, if you have no sales on 19 May, you still want that date to appear (showing zero sales).
Standard process? There are different methodologies. If you like Kimball's approach, try http://www.kimballgroup.com/
What are some common algorithmic optimization opportunities that everyone should be aware of? I have recently been revising/reviewing some code from an application, and noticed that it appeared to be running considerably slower than it could. The following loop turned out to be the culprit,
float s1 = 0.0;
for (int j = 0; j < size; ++j) {
    float diff = a[j] - b[j];
    s1 += (diff*diff * c[j]) + log(1.0/c[j]);
}
This is equivalent to,
Σⱼ { (aⱼ − bⱼ)² · cⱼ + log(1/cⱼ) }
Each time the program is run, this loop is called perhaps over 100k times, thus the repeated calls to log and divide result in a very large performance hit. A quick look at the sigma representation makes it pretty clear that there is a trivial fix - assuming you remember your logarithm identities well enough to spot it,
Σⱼ { (aⱼ − bⱼ)² · cⱼ } + Σⱼ { log(1.0/cⱼ) } =
Σⱼ { (aⱼ − bⱼ)² · cⱼ } + log(1.0 / (Πⱼ cⱼ))
and leads to a much more efficient snippet,
float s1 = 0.0;
float s2 = 1.0;
for (int j = 0; j < size; ++j) {
    float diff = a[j] - b[j];
    s2 *= c[j];
    s1 += (diff*diff * c[j]);
}
s1 += log(1.0/s2);
this led to a very large speed-up, and should have made its way into the original implementation. I assume it did not because the original developer(s) either weren't aware, or weren't 'actively aware', of this simple improvement.
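The algebraic equivalence is easy to sanity-check numerically. A Python sketch with invented test data (as the comments on this question note, hoisting changes the floating-point rounding slightly, and the running product can over- or underflow for extreme c values):

```python
import math
import random

def naive(a, b, c):
    # original form: log(1/c[j]) evaluated inside the loop
    return sum((x - y) ** 2 * z + math.log(1.0 / z)
               for x, y, z in zip(a, b, c))

def hoisted(a, b, c):
    # rewritten form: accumulate the product, take one log after the loop
    s1, s2 = 0.0, 1.0
    for x, y, z in zip(a, b, c):
        s1 += (x - y) ** 2 * z
        s2 *= z
    return s1 + math.log(1.0 / s2)

random.seed(1)
a = [random.random() for _ in range(100)]
b = [random.random() for _ in range(100)]
c = [random.uniform(0.5, 2.0) for _ in range(100)]
print(abs(naive(a, b, c) - hoisted(a, b, c)))  # tiny, on the order of 1e-13
```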
This made me wonder, what other, similar, common opportunities and I missing out on or overlooking, and how can I learn to better spot them? I'm not so much interested in complex edge cases for particular algorithms, but rather examples like the one above that involve what you might think of as 'obvious' concepts that crop up frequently, but that others may not.
migrated from stackoverflow.com Jul 9 '11 at 13:18
you've sped it up, but you may have altered the accuracy, depending on the size of the Cj – Mitch Wheat Jul 9 '11 at 2:27
Might be a disasterous optimization. If "size" is a big number, and c[j]>1, you might have introduced the opportunity for an overflow in " s2*=c[j]" where there was none before. If c[j]<1, you might have reduced the value of s2 to zero, causing an overflow at "s1+=log(1.0/s2)". – Ira Baxter Jul 9 '11 at 9:54
Did you write unit tests to compare the results? – Job Jul 9 '11 at 16:17
log(1.0 / s2) equals -log(s2), optimizing away the division – tiwo Jul 22 '12 at 9:03
4 Answers
My 2 cents:
1. If possible, changing the data structure of the program can be very helpful, even if the change is trivial. Once I changed a sparse matrix's representation from an adjacency table to a typical sparse matrix representation, the average running time of my program halved.
2. Get rid of recursion. This is hard to do but can be beneficial. However, if done improperly, this can lead to serious problems, and the non-recursive code is generally not as intuitive as the recursive version.
3. Cache some of the frequently used values. Although this looks like cheating, it can be very beneficial; all the contest programmers should already know this. Also see memoization, mentioned in James Black's comment.
4. Use short-circuit evaluation properly. This won't normally lead to much of a performance boost, and can lead to unreadable code. But if the expression being evaluated has some really heavy work to do, this can help quite a bit.
5. EDIT: If your job is computational-heavy and involves floating point computation (especially when it involves approximation), then sometimes restate your formula (NOT redesign the algorithm, just change the formula to a equivalent one) could speed your program greatly because of the floating point arithmetics the computers use. Many examples could be found in numerical analysis and scientific computing books. For the really interested, What Every Computer Scientist Should Know About Floating-Point Arithmetic is a great paper.
loop unrolling? All of the above, bar 1, are not really algorithmic optimisations (including loop unrolling), more code optimisations – Mitch Wheat Jul 9 '11 at 2:32
+1 - For your (3) memoization could be useful: codebetter.com/matthewpodwysocki/2008/08/01/…, but recursion can be useful, depending on the language. Profile, determine where it is slow, don't just optimize just because you assume that section is slow, it may be slower than optimal but may not overall impact the speed of the program. – James Black Jul 9 '11 at 2:34
Well caching could be viewed as a kind of algorithm if use the Wikipedia definition strictly: an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. The algo here: get the input - find the answer - return. Just minimal:) – Ziyao Wei Jul 9 '11 at 2:35
Your #2 absolutely depends on the used language and is not a general statement. Also depending on the algorithm the iterative implementation may be MUCH more complex (try a cache oblivious matrix multiplication iterative..) and is most certainly harder to parallelize. – Voo Jul 9 '11 at 2:36
@Voo Realized my wording is inaccurate and edited. Thanks! – Ziyao Wei Jul 9 '11 at 2:37
The Implementation May Cheat, but it Must Not Get Caught
After profiling an application to determine where the time is being spent, the next step is determining exactly what the rest of the program expects out of the problem code. Since inner loops are often the culprit the amount of code is often small but the difference between the important work and the extraneous work may be very subtle. Once you know what the callers are relying on you can brainstorm new ways to produce the same result.
For example, your profiling run turns up strlen() at the head of the list. It is being called many, many times. You examine strlen() and you see it finds the length of the string by counting all of the bytes. Now, what part of that is important? How can we cheat? Does the caller actually care that we touch every byte? Probably not. Does it even care if we dereference any part of the string memory? Maybe not. Perhaps we can memoize the results. Will we get caught? Now you have to make sure your cached results are invalidated if the string changes. How else can we cheat? If you examine the callers you may find they are doing strlen(s) > 10. Now stop counting at 11 and you will do less work and not get caught.
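A sketch of the "stop counting" cheat, in Python for brevity; the C analogue would be strnlen(s, 11) > 10 (strnlen is the POSIX bounded strlen):

```python
def longer_than(s, n):
    """Answer len(s) > n while inspecting at most n + 1 characters."""
    count = 0
    for _ in s:              # in C this would walk bytes until the NUL
        count += 1
        if count > n:        # bail out the moment the answer is known
            return True
    return False

big = "x" * 1_000_000
print(longer_than(big, 10))      # True, after looking at only 11 characters
print(longer_than("hello", 10))  # False
```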
The example in the question is a subtle one, as others have pointed out. You cheat by hoisting a math operation out of a loop. Will you get caught? Better think about the precision issues involved and how the intermediate floating point values will affect the results.
In one real-world example I discovered that a mirrored database startup was very slow. The code ensured database integrity by picking one of the copies, ensuring it was the most recent, and then loading all of the remaining N copies to compare against the first. This could not be readily parallelized because there wasn't enough memory to read all N copies in simultaneously. How can we cheat? Well, what is the actual goal? The code doesn't actually care about reading and comparing at all. What it cares about is having every copy be the same as the chosen one. What if instead of reading all the other copies we instead overwrite them all with the known good copy? Now we can do a lot of parallel IO and the operation goes many times faster. How can we get caught? Well, our write can be interrupted partway through. Or only some of our writes can be interrupted. Dealing with these corner cases was the bulk of the work. The fast, parallel write was worth the effort, though.
+1: Nice perspective, well explained. – Ira Baxter Jul 9 '11 at 3:46
+1 I agree with Ira. I would put it as "find out why the time is being spent". What you want to find is time being spent for poor reasons. That's where the money is. – Mike Dunlavey Jul 11 '11 at 16:21
Algorithmic optimization opportunities. Here's the way I think about them generally:
Is the algorithm's complexity O(NlogN) or less? If so, it's probably good enough.
If not, I start looking for other algorithms.
Ultimately with very large data sets, changing the constant of proportionality doesn't do much (which is essentially what your posted example does). Only changing the algorithm's complexity will provide speed-ups in the asymptotic case.
If you have fixed sized data sets, then maybe the improvements are worth it.
Oh almost forgot!: Make sure you measure before optimising.
There's an infinite supply of algebraic equivalences, for all the various algebras with which one might compute. I don't think you can write down a useful specific extended list.
Similarly there are lots of algorithm equivalences. These are generally worthwhile, as they can affect the computation time in strongly nonlinear ways.
You'll also found there are variety of optimizations motivated by the computing hardware structures, such as sum trees instead of linear reductions, and caching reusable computation results when you have lots of cache.
Best to just be aware that such equivalences exist (modulo possible accuracy changes and different resource demands), when coding, and when one discovers where the code bottlenecks really are.
share|improve this answer
I agree it certainly wouldn't make sense to try and lay out everything; that's what a textbook is for. I do think we can make recommendations though; but perhaps we need more information about the problem domain. In my case I'm working on speech and NLP related problems, so situations like the above are quite common. It is of great practical use to be aware of these simple log identities. Maybe a better way of asking would be to ask about problem domains and things one 'ought to have an inkling of'. – xhs7is82wl Jul 9 '11 at 2:55
@blackkettle: If you specify a problem domains (which often implicitly selects a set of solution methods by virtue of the fact that's the only way we know how work those problems), you might be able to focus on some kinds of optimizations. But even problem domains can have many solutions and there many optimizations; you wouldn't do a lot better if you narrowed this discussion to "scientific computation" and its solution methods. You mean diff eqn solvers? Relaxation systems? Computational fluid dynamics? Protein folding? .... Pick a very narrow solution area and you might have a chance. – Ira Baxter Jul 9 '11 at 3:44
Your Answer
| <urn:uuid:b90c929e-4101-46aa-9372-9cdfc0a9333b> | 2 | 1.914063 | 0.665434 | en | 0.9332 | http://programmers.stackexchange.com/questions/91202/what-are-some-common-algorithm-optimization-opportunities-mathematical-or-othe |
June 8, 2010
The unix program diff identifies differences between text files; it is most useful for comparing two versions of a program.
Given the longest common subsequence between two files, which we computed in a previous exercise, it is easy to compute the diff between the two files; the diff is just those lines that aren’t part of the lcs.
Your task is to write a program that finds the differences between two files. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
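To illustrate the idea before attempting the exercise, here is a rough Python sketch (not the suggested solution, and it omits unix diff's hunk headers, printing only the changed lines):

```python
def lcs(xs, ys):
    # classic O(n*m) dynamic-programming longest common subsequence
    m, n = len(xs), len(ys)
    t = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            t[i][j] = (t[i+1][j+1] + 1 if xs[i] == ys[j]
                       else max(t[i+1][j], t[i][j+1]))
    out, i, j = [], 0, 0
    while i < m and j < n:
        if xs[i] == ys[j]:
            out.append(xs[i]); i += 1; j += 1
        elif t[i+1][j] >= t[i][j+1]:
            i += 1
        else:
            j += 1
    return out

def diff(old, new):
    # lines not on the common subsequence are the differences
    common, i, j, out = lcs(old, new), 0, 0, []
    for line in common + [None]:        # sentinel flushes the tail
        while i < len(old) and old[i] != line:
            out.append('< ' + old[i]); i += 1
        while j < len(new) and new[j] != line:
            out.append('> ' + new[j]); j += 1
        i += 1; j += 1
    return out

print(diff(["a", "b", "c"], ["a", "x", "c"]))  # ['< b', '> x']
```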
2 Responses to “Diff”
1. […] Praxis – Diff By Remco Niemeijer In today’s Programming Praxis exercise our task is to write a diff command line tool. Let’s get started, […]
2. Remco Niemeijer said
My Haskell solution (see http://bonsaicode.wordpress.com/2010/06/08/programming-praxis-diff/ for a version with comments):
import Data.List.LCS.HuntSzymanski
data Change = D | A | C
linenum :: (Int, Int) -> String
linenum (s, e) = if s == e then show s else show s ++ "," ++ show e
header :: (Int, Int) -> String -> (Int, Int) -> IO ()
header l op r = putStrLn $ linenum l ++ op ++ linenum r
section :: Char -> [String] -> IO ()
section c = mapM_ (\s -> putStrLn $ c:' ':s)
diff :: String -> String -> IO ()
diff xs ys = f 0 0 (lines xs) (lines ys) where
f n1 n2 = g where
g [] b = change A [] b
g a [] = change D a []
g a b = case lcs a b of
[] -> change C a b
(d:_) -> case (head a == d, head b == d) of
(True, True) -> rec 1 1
(True, _ ) -> change A q1 q2 >> rec len1 len2
(_ , True) -> change D q1 q2 >> rec len1 len2
_ -> change C q1 q2 >> rec len1 len2
where [q1, q2] = map (takeWhile (/= d)) [a, b]
[len1, len2] = map length [q1, q2]
rec l r = f (n1+l) (n2+r) (drop l a) (drop r b)
change D a _ = header (n1+1, n1+length a) "d" (n2, n2) >>
section '<' a
change A _ b = header (n1, n1) "a" (n2+1, n2 + length b) >>
section '>' b
change C a b = header (n1+1, n1+length a) "c" (n2+1, n2+length b) >>
section '<' a >> putStrLn "---" >> section '>' b
Preganglionic autonomic fibres
Main article: Autonomic ganglia
In the autonomic nervous system, fibers from the CNS to the ganglion are known as preganglionic fibers.
All preganglionic fibers, whether they are in the sympathetic division or in the parasympathetic division, are cholinergic (that is, these fibers use acetylcholine as their neurotransmitter).
Sympathetic preganglionic fibers tend to be shorter than parasympathetic preganglionic fibers because sympathetic ganglia are often closer to the spinal cord than are the parasympathetic ganglia.
Nuclear Waste Reduction Using Molecularly Imprinted Polymers
Open Access | Peer Reviewed
Joe Nero1, Jon Bartczak1,
1University of Pittsburgh, USA
Nuclear power accounts for just over twenty percent of America's electrical output and does not contribute to greenhouse gas emissions. Unfortunately, nuclear power does produce a deleterious by-product known as radioactive waste. One of the primary goals of nuclear power proponents is the development of methods that reduce the volume of radioactive waste, such as radioactive cobalt. Radioactive cobalt is usually accompanied by non-radioactive iron, making it more difficult to extract only the harmful cobalt atoms. The application of molecularly imprinted polymers and chitosans increases the effectiveness of the removal of radioactive cobalt from cooling medium, reducing the overall volume of nuclear waste through a high selectivity for radioactive cobalt ions even in the presence of similar particles. This method's efficacy will be analyzed and compared to the current procedures for removing radioactive cobalt from cooling medium. An explanation of a nuclear reactor's inner workings and radioactive waste formation, along with the societal implications of cleaner nuclear power and the benefits of its successful implementation, will also be discussed.
Cite this article:
• Nero, Joe, and Jon Bartczak. "Nuclear Waste Reduction Using Molecularly Imprinted Polymers." Journal of Polymer and Biopolymer Physics Chemistry 2.2 (2014): 29-36.
1. Why Is Energy a Problem?
As the populations of developed countries rise, so do their standards of living. Such standards are only made possible through an abundant supply of energy [1]. Additionally, statistics show that the average life expectancy of people in nations with substantial energy availability is over seventy-five years, while those in underdeveloped countries have a life expectancy averaging a mere forty years [2]. The world's population is currently doubling every thirty-five years, while its energy usage is doubling every fourteen years [2]. Due to the limited supply of fossil fuels, which comprise the majority of the world's energy producing sources, this increase in population and living standards will not be indefinitely sustainable unless other viable energy sources are widely employed. The rate of oil production is expected to peak within the next few years, and although there is still a copious supply of coal to utilize, both types of fuel are major contributors to greenhouse gas emissions and overall climate change [2]. The burning of coal produces over five hundred pounds of airborne pollution per second in America alone, while the burning of oil contributes a similar amount of these harmful carbon pollutants [3]. The atmospheric waste produced by fossil fuels also leads to acid rain, which severely weakens surrounding vegetation and can completely decimate aquatic life. Alternative energy sources, including solar, wind, and hydroelectric power, simply do not produce enough energy to make a significant contribution to the world's overall energy needs, and are also limited by geographic location. It is this composite of increasing need with a diminishing supply that constitutes the energy crisis facing the world today [2]. This crisis can be met with the systematic integration of large scale nuclear power.
Table 1. Annual Waste Produced by a 1000 Mw Plant [3]
2. The Potential of Nuclear Power
Using nuclear power as a major source of electrical production for a country can alleviate many of the issues associated with the use of fossil fuels. Nuclear power uses only a minute amount of material to yield a massive energy output while releasing zero greenhouse gasses. As the fissionable material of choice, uranium is a better source for energy production compared to fossil fuels due to its high energy density. One kilogram of coal can keep a 100 watt light bulb lit for about four days, one kilogram of natural gas can keep it lit for about six days, and one kilogram of uranium can keep the light bulb lit for over 140 years [3].
Table 2. Energy Density for Various Materials [3]
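The light-bulb comparison above can be checked with a few lines of arithmetic. The energy densities below are representative literature values (roughly 33 MJ/kg for coal, 50 MJ/kg for natural gas, and 440 GJ/kg for uranium as used in a light-water reactor); they are assumptions, not figures from the article, but they reproduce the quoted four days, six days, and 140 years:

```python
# Days a 100 W bulb runs on 1 kg of fuel, for assumed energy densities.
# The densities are representative literature values, not from the article.
DENSITIES_J_PER_KG = {
    "coal": 33e6,         # ~33 MJ/kg
    "natural gas": 50e6,  # ~50 MJ/kg
    "uranium": 440e9,     # ~440 GJ/kg in a light-water reactor
}

BULB_WATTS = 100.0
SECONDS_PER_DAY = 86_400

def bulb_days(energy_j: float, watts: float = BULB_WATTS) -> float:
    """Time (in days) a bulb of the given wattage runs on the given energy."""
    return energy_j / watts / SECONDS_PER_DAY

days = {fuel: bulb_days(e) for fuel, e in DENSITIES_J_PER_KG.items()}
for fuel, d in days.items():
    print(f"{fuel}: {d:,.1f} days")
```

Coal comes out near 4 days, natural gas near 6 days, and uranium near 51,000 days, about 140 years, matching the figures in the text.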
The basic mechanism behind the production of electricity is the use of large generators. These generators produce electricity when relative motion is present between the conductors and magnets inside them. This motion is achieved by the use of steam driven, mechanical turbines. What differs between the two types of power plants is the way in which the cooling water is heated to make the steam that drives the turbines. As opposed to continually burning incredibly large quantities of fossil fuels to generate the heat needed to boil the water, the heat produced from the process of nuclear fission is harnessed and used to our benefit.
The process of nuclear fission occurs when an atom's nucleus is bombarded with another particle, normally a neutron, and made to become so unstable that it breaks up into at least two smaller particles, releases some free neutrons, and emits gamma radiation. These free neutrons can go on to create a chain reaction that, with the use of proper materials, can be safe and controlled. After a fission event, these smaller particles move apart at high speeds, colliding with nearby molecules and increasing the overall kinetic energy in the system. The energy released is due to the small loss in mass of the products in the reaction. Einstein's famous equation E = mc² allows us to see just why there is such a large release of energy [4]. Even when the mass is on the atomic scale, when multiplied by the speed of light squared, it will yield an enormous amount of energy. This increase in energy corresponds to a rise in temperature of the system that can be used to boil water, create steam, spin a turbine, and then generate electricity.
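The scale of this energy release follows directly from E = mc². As a minimal sketch, assume roughly 0.1% of a kilogram of fuel mass is converted to energy (an illustrative fraction, on the order of the mass defect in uranium fission, not a figure from the article):

```python
# Energy released when a small fraction of 1 kg of fuel mass is converted,
# via E = m * c**2. The 0.1% conversion fraction is illustrative only.
C = 299_792_458.0                          # speed of light, m/s
fuel_mass_kg = 1.0
mass_converted_kg = fuel_mass_kg * 0.001   # assumed ~0.1% mass defect

energy_j = mass_converted_kg * C ** 2
print(f"{energy_j:.3e} J")                 # ~9e13 J from one gram of mass
```

Even a single gram of converted mass yields about 9 × 10¹³ joules, which is why so little fuel goes so far.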
2.1. Further Comparisons For The Use Of Nuclear Power
A comparison of nuclear power to the use of fossil fuel for energy production should also include examining capacity, cost, reliability, and safety. Since the basic constructions of both types of power plants are similar, their capacities for energy production are also similar at around 1000 megawatts [1]. Over 440 reactors are in operation around the world, supplying more than a fifth of the energy used [1]. Due to the ample supply of uranium and other fissionable materials estimated to be available, there is no danger of exhausting the supply for at least a century at today's rate of consumption.
Cost per kilowatt produced in the different styles of power plant is difficult to truly evaluate. Government funding and tax incentives play a large role in the creation of a nuclear power plant; however, there are many other factors to consider related to cost. A nuclear plant requires fewer personnel to operate and maintain it compared to a coal plant. With higher standards and consistent operation, nuclear power plants experience less down time for unplanned maintenance, operating around 90% of the time over their nearly fifty-year life. They are also unaffected by an increase in the cost of the chosen fissionable fuel after construction is complete, unlike coal plants, which are constantly purchasing and burning more fuel to create electricity. With the decline in supply of fossil fuels, the associated cost is sure to keep rising, thereby keeping the cost of energy produced from coal on the rise. Considering these cost savings for the nuclear production of energy, researchers approximate the cost per unit of energy produced to be equivalent to or less than that of a similar quantity of energy produced by a coal plant.
The final area of significant concern with any nuclear reactor is safety. Safety is always a major consideration in the design of any nuclear power plant. The general public has a limited knowledge of the hazards associated with nuclear power, such as radiation and contamination. The media is able to take advantage of this knowledge deficit by exaggerating incidents and by failing to give a proper quantitative comparison of a nuclear issue against normal levels of radiation and contamination for the area. For instance, from the years 1969-1986 there were one hundred eighty-seven mining disasters, three hundred thirty-four oil well fires, and nine dam bursts. Unless personally affected by these disasters, any given person is unlikely to remember them, but the one nuclear accident that happened, Chernobyl, is remembered by everyone [1]. The risks stemming from reactor accidents can be accurately estimated by probabilistic risk analysis. A fuel meltdown can be expected to occur once in every twenty thousand years of reactor operation, accompanied by an average of 400 deaths per meltdown. Coal burning alone is estimated to cause 10,000 deaths per year, showing that the dangers associated with nuclear power are grossly exaggerated, and that nuclear power is actually much safer when compared to the burning of fossil fuels [5].
All potential energy sources have inherent risks associated with them. The unavoidable risks of nuclear power deal directly with the high energy byproducts produced by the process of fission itself. The high energy gamma rays released during fission can travel long distances through thick layers of material before being attenuated enough to no longer pose a significant risk to our health. The irradiation of many different types of materials within the reactor causes them to become unstable and radioactive. These materials are known as nuclear waste and can be especially detrimental to plant employees, the general public, and the environment if handled and disposed of improperly.
3. Nuclear Waste
The creation of nuclear waste is an unavoidable part of the production of electrical energy via nuclear power. When the materials used to build the reactor, such as iron, nickel, cobalt, and their alloys, absorb a neutron, they may become unstable radioactive isotopes of those elements. Many of these isotopes are of little concern as they have very short half-lives; however, when cobalt-59 absorbs a neutron and becomes the highly radioactive cobalt-60 isotope, there is concern. Cobalt-60 has a half-life of around 5.27 years with a summed, peak gamma emission energy around 2.5 MeV [6]. A half-life is defined as the time that it takes for a given amount of a substance to decay, by the emission of smaller particles and energy, to half of its original amount. Radioactive decay is the release of particles and energy by a substance in its attempt to become stable. The types of particles released are electrons, protons, positrons, neutrons, alpha particles, and high energy electromagnetic waves (gamma radiation) [7]. Each of these particles is a type of radiation with varying degrees of detrimental effects on the human body.
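The half-life definition above translates into the standard decay law N(t) = N₀ · (1/2)^(t/T½). A short sketch using the 5.27 year half-life of cobalt-60 quoted in the text:

```python
# Fraction of cobalt-60 remaining after t years, from the 5.27 y half-life.
HALF_LIFE_Y = 5.27

def fraction_remaining(t_years: float, half_life: float = HALF_LIFE_Y) -> float:
    """Decay law N(t)/N0 = (1/2) ** (t / half_life)."""
    return 0.5 ** (t_years / half_life)

for t in (5.27, 10.54, 50.0):
    print(f"after {t:5.2f} y: {fraction_remaining(t):.4f} of the activity remains")
```

After one half-life exactly half remains, after two a quarter, and after fifty years less than 0.2% of the original cobalt-60 activity is left, which is why even a waste stream that cannot be destroyed can at least be safely stored out.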
The unit of measure that compares the amount of damage done to the human body to that done by one rad of gamma radiation is known as the "rem". The average person receives about 85 mrem worth of exposure a year from natural earthbound and cosmic sources of radiation. To give perspective to the low level of radiation emitted from the Three Mile Island accident, for example, individuals in the surrounding area received around 1.2 mrem above their normal background level of approximately 60 mrem/year [8]. No negative effects have been found to occur at an increase in radiation exposure this small.
Figure 2. Approximate yearly exposure from normal sources [8]
The high temperature cooling water that circulates within the reactor picks up corrosion particles, such as these radioactive isotopes, and carries them throughout the entire system. This causes an increase in the overall radiation levels within the reactor plant. Built-in purification systems, attached to the main cooling water system, function continuously, with the cooling medium flowing through a bed of resin beads that filters out mechanical particulate and removes certain ions from solution. These resin beds lack the selectivity to remove cobalt ions in the presence of ferrous ions due to their similar molecular structure. This leads to a larger amount of waste due to the presence of nonradioactive deposits along with the radioactive ones within the resin. Ion exchanger resin is difficult and very costly to replace. Radioactive waste levels could be significantly reduced if a resin with the ability to select which ions to bind could be made. The utilization of molecularly imprinted polymers that readily accept cobaltous ions is the answer to this problem.
Figure 3. Basic Representation of a Nuclear Power Plant [9]
4. Molecular Imprinting
Molecular imprinting is a technique employed by chemists and chemical engineers in order to endow a chemical substance with certain desired properties. This method is primarily used when attempting to induce selective adsorption characteristics in a chemical filter and has been shown effective with various types of molecules and metal ions. There are generally two standard processes used to facilitate molecular imprinting, one of which is a single step process while the other is a two step process. The single step procedure consists of subjecting a functional monomer (a complex made of a polymer and a substance with a low molar mass), the metal salt (also referred to as the template molecule) whose absorption properties are to be instilled, and a cross link monomer to polymerization within one container. The two step process differs in that the complex used is first isolated before being subjected to the polymerization process [9]. Before the polymerization process can begin, the metal salt template and the functional monomer must be chemically bound to form a complex of their own. The mechanism most commonly used to complete this task is relatively weak non-covalent bonding that typically consists of either hydrogen bonds, ionic interactions, or a combination of both. This bonding is controlled by an equilibrium reaction, necessitating a large amount of functional monomer in order to drive the reaction in the desired direction [10]. Once these bonds are in place the polymerization process is able to begin. The most common way of polymerization used to create molecularly imprinted polymers is the chain reaction method. This technique involves the chaining together of many separate molecules in order to create long molecules of high molecular mass, otherwise known as polymers [11]. Once the polymerization process has been completed, all of the components exist as one highly cross linked polymer in solution.
The original metal salt template is then precisely extracted from the solution, thus leaving gaps in the forged polymer. These gaps closely resemble the size, shape, and general properties of the template molecule that preceded it. It is this similarity that allows the resulting gaps to act as binding sites for future substances that feature the very same characteristics, enabling the newly created chemical to selectively absorb and react to specific ions in a complicated solution.
Figure 4. Diagram of the Molecular Imprinting Process [12]
This interaction mimics the method and efficiency of natural receptor interactions while providing further benefits. The process of creating such molecularly imprinted polymers is relatively cheap, allowing them to be a sustainable alternative to using natural receptor interactions in most filtering processes. Polymers that are conceived in this fashion are also known for their strength and ability to hold together in extreme environments. The functionality of molecularly imprinted polymers has been shown to remain effective over a wide range of temperatures, pH values, and solvent attributes, which is particularly important when considering their application toward nuclear waste management [10].
5. Polymer Applications to Nuclear Waste
The selectivity of a molecularly imprinted polymer when chemically filtering specific ions from a complicated solution is ideal for the reduction of radioactive waste volume through cobalt extraction and disposal. The chemical filters most commonly used in nuclear reactor decontamination lack a high selectivity toward radioactive cobalt ions. This, coupled with the fact that radioactive cobalt ions are nearly always found mixed with non-radioactive ferrous ions in solution, causes a problem when attempting to manage radioactive waste. The lack of selectivity will eventually lead to the generation of considerable quantities of radioactive ion exchange waste that is troublesome and expensive to dispose of. Molecularly imprinted polymers created by the following method were able to show a large selectivity for radioactive cobalt ions, despite the presence of non-radioactive ferrous ions, and displayed adsorption properties efficient enough to reduce the overall volume of radioactive waste by 80-90% [10].
6. Template Complex Synthesis Techniques
A two-step procedure was used in order to create the molecularly imprinted polymer, meaning that the metal template/cross link complex was synthesized prior to the polymerization process. In order to synthesize the polymer, pure crystals of the functional ligand d [N-(4-vinylbenzyl)imino]diacetic acid were used as the functional monomer in the imprinting process. The pure crystals of d [N-(4-vinylbenzyl)imino]diacetic acid were produced by dissolving 3.99 grams of iminodiacetic acid (approximately 30 millimol) and 2.10 grams of sodium hydroxide (approximately 52.5 millimol) into a 60 milliliter mixture consisting of thirty milliliters of methanol and thirty milliliters of water. Then 4.30 grams of 4-vinylbenzyl chloride (approximately 30 millimol) were added slowly to the solution from a dropping funnel over the course of 30 minutes, at a constant temperature of 30 degrees Celsius. A second 660 gram amount of NaOH was then added to the solution and allowed to react for 45 minutes at a constant temperature of 60 degrees Celsius. The solution was then vacuum evaporated to one half of its original volume and underwent diethyl ether extractions a total of four times. The solution was then diluted to twice its current volume with deionized water, and the pH was decreased to 1.0 by the addition of concentrated hydrochloric acid. The solution was then placed in a refrigerator, where the d [N-(4-vinylbenzyl)imino]diacetic acid crystals developed over a twenty-four hour period. Once the crystals were formed they were filtered and then washed with ether. This was all done in an effort to eliminate any potential impurities that might negatively affect the bonding of the monomer to the metal ion template or the polymerization process [10].
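The gram quantities above can be cross-checked against the stated millimole amounts. The molar masses used below are standard values (not given in the article), so this is only a consistency sketch; the 4-vinylbenzyl chloride figure comes out slightly under the nominal 30 mmol:

```python
# Cross-check of reagent masses against the stated mmol amounts.
# Molar masses (g/mol) are standard literature values, not from the article.
MOLAR_MASS = {
    "iminodiacetic acid": 133.10,
    "NaOH": 40.00,
    "4-vinylbenzyl chloride": 152.62,
}

masses_g = {
    "iminodiacetic acid": 3.99,       # stated: ~30 mmol
    "NaOH": 2.10,                     # stated: ~52.5 mmol
    "4-vinylbenzyl chloride": 4.30,   # stated: ~30 mmol
}

mmol = {name: 1000 * m / MOLAR_MASS[name] for name, m in masses_g.items()}
for name, n in mmol.items():
    print(f"{name}: {n:.1f} mmol")
```

The first two reagents match the stated amounts almost exactly; the vinylbenzyl chloride works out to roughly 28 mmol, close to the nominal 30.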
In order to chemically bond the pure d [N-(4-vinylbenzyl)imino]diacetic acid crystals to the radioactive cobalt ion template, two and a half grams of the pure crystals were first suspended in fifty milliliters of water. The pH of the water was increased to 9.0 through the addition of 1 molar NaOH solution, which allowed the pure crystals to dissolve. A mixture of 1.45 grams of cobalt(II) nitrate hexahydrate dissolved in one hundred and fifty milliliters of deionized water was added to the mixture of pure crystals and water from a dropping funnel. As the two solutions were being mixed, they underwent constant stirring to ensure an equal distribution of the solvent. After the mixing was complete the resultant solution was filtered in order to remove any remaining insoluble pieces. The filtered solution was then freeze dried in an effort to remove any excess water. The resultant substance was mixed with methanol, filtered, and evaporated. The solid obtained was sent through the same process once more, thus forming the complex used in the polymerization process. This intricate reaction procedure is necessary to limit the contaminants in the complex. Contaminants present in the metal template/monomer complex will result in unwanted gaps being formed in the end result. These unwanted gaps decrease the effectiveness of the polymer's selectivity and adsorption abilities, thus every measure is taken to ensure that the purest possible product is formed [6].
6.1. Elemental Analysis
Elemental analysis is a technique employed by chemists and chemical engineers in order to analyze the elemental composition of an unknown or synthesized substance. The most common way of performing an elemental analysis is the CHN method, which is most useful when dealing with organic compounds such as the molecularly imprinted polymer. An elemental analysis will tell you the ratio of present elements in your chemical compound in its simplest form, also known as the empirical formula. For example, the empirical formula for octane is C4H9, although octane only exists as C8H18. The ratio with which the elements actually exist is known as the molecular formula. The molecular formula is always a multiple of the empirical formula; thus you can determine the molecular formula based on the substance's empirical formula and molecular/molar mass. However, to find the empirical formula of a substance the mass percentages of the elements in that substance must first be determined. When dealing with organic compounds, such as our molecularly imprinted polymer, a sample of the substance is burned to react all of the present carbon with oxygen, forming carbon dioxide (CO2), and all of the present hydrogen with oxygen, forming water (H2O), by the following general combustion reaction: CxHy + (x + y/4) O2 → x CO2 + (y/2) H2O
The mass percentage of nitrogen is determined by converting it to ammonia. The ammonia can then be titrated with a strongly acidic substance in order to find the amount of ammonia present, and thus the amount of nitrogen present, by the following neutralization reaction: NH3 + HCl → NH4Cl
The elemental analysis of the polymer showed the following mass percentages: C, 30.87%; H, 3.19%; N, 8.05% [6, 13].
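Converting the reported mass percentages to mole ratios illustrates how an empirical formula is recovered. A minimal sketch (atomic masses are standard values; the ratios do not come out as clean integers because cobalt and sodium account for the remaining mass):

```python
# Convert CHN mass percentages into mole ratios normalized to nitrogen.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007}
mass_percent = {"C": 30.87, "H": 3.19, "N": 8.05}  # from the analysis

moles = {el: p / ATOMIC_MASS[el] for el, p in mass_percent.items()}
smallest = min(moles.values())
ratios = {el: n / smallest for el, n in moles.items()}
for el, r in ratios.items():
    print(f"{el}: {r:.2f}")
```

The result is roughly C 4.5 : H 5.5 : N 1, a per-nitrogen composition consistent with the ligand backbone once the metal and sodium content are accounted for.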
6.2. Determination of Cobalt and Present Impurities
The amount of cobalt present can be determined through a procedure known as atomic absorption spectroscopy, or AAS. AAS uses a hollow cathode lamp in order to send a narrow band light source, such as a laser, through molecules or atoms in the gas phase. These atoms or molecules absorb the energy from the light and are thus excited to higher energy levels. The concentration of the element being measured can then be determined from the amount of energy absorbed [14]. The amount of cobalt present in the sample was determined to be 5.66% [6].
In order to determine the amount of sodium impurities present, a technique known as flame atomic emission spectroscopy, or flame photometry, was performed. In flame photometry, a flame is used to sublimate and atomize the metal that you are trying to measure; in this case it is the sodium present in our polymer. The flame is also able to excite some of sodium's valence electrons to a higher energy state. As the sodium atoms cool, their valence electrons will return to their original ground energy states, emitting light in the process. A quantitative analysis of the amount of sodium atoms present is possible by measuring the intensity of this emitted light. The intensity of the light given off is related to the amount of sodium present by the following equation:
I = KC
where I represents the measured intensity of light, K represents a proportionality constant, and C represents the concentration of the metal being analyzed. The amount of sodium impurities present in the sample was determined to be 13.60%. These results show that the metal template/functional monomer complex was successfully formed; however, a significant amount of sodium impurities is present [10, 15].
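In practice the proportionality constant K is obtained from a calibration curve: the intensity I is measured for standards of known concentration, K is fitted, and I = KC is then inverted for the unknown. A sketch with made-up calibration data (all numbers below are illustrative, not from the article), using a through-origin least-squares fit:

```python
# Flame-photometry calibration: fit I = K * C through the origin, then
# estimate an unknown concentration from its measured intensity.
# The standard concentrations and intensities below are illustrative only.
def fit_k(concentrations, intensities):
    """Least-squares slope through the origin: K = sum(I*C) / sum(C^2)."""
    num = sum(i * c for i, c in zip(intensities, concentrations))
    den = sum(c * c for c in concentrations)
    return num / den

standards_c = [1.0, 2.0, 4.0, 8.0]        # assumed standards, ppm Na
standards_i = [10.1, 19.8, 40.3, 79.9]    # arbitrary intensity units

k = fit_k(standards_c, standards_i)
unknown_intensity = 55.0
unknown_c = unknown_intensity / k
print(f"K = {k:.3f}, unknown concentration ≈ {unknown_c:.2f} ppm")
```

The through-origin fit is appropriate here because I = KC implies zero intensity at zero concentration.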
7. Polymer Synthesis Techniques
With the metal template/functional monomer complex created, the polymerization process is able to begin. Polymerization is essentially the chemical reaction by which single molecules, or monomers, are combined to form long chains of linked monomers, otherwise known as a polymer. For the molecularly imprinted polymer, the metal template/cross link complex is linked together using the chain reaction method. The chain reaction method of polymerization requires a certain chemical, referred to as an initiator, to trigger the chemical process. For the molecularly imprinted polymer the initiator used was AIBN. A cross link, in reference to a polymer, is a bond formed between individual chains of the polymer, separate from the bonds holding each polymer chain together. Cross linking bonds are generally not as strong as the bonds linking the individual chains together, but they do play an important role in determining the resulting polymer's properties. Generally, a high degree of cross linking grants the polymer rigidity and strength when faced with stressful conditions. This is particularly important to consider when crafting a polymer for use in nuclear waste reduction. The cross linking reagent used in this polymerization reaction is EDMA [6, 12].
7.1. Polymerization Procedure
180 milligrams of bis(vinylbenzyliminodiacetato)cobaltate(II), which is our metal template/functional monomer complex, is added to a container along with 1.25 grams of EDMA and 15 milligrams of AIBN. The resultant mixture was subjected to three freeze-thaw cycles, and then allowed to polymerize at sixty-two degrees Celsius for 24 hours. The AIBN initiator is highly reactive and will quickly react with the bis(vinylbenzyliminodiacetato)cobaltate(II) to form a new area on the chain that is also reactive. This can be seen by the following general reaction:
AIBN → 2 R• (initiator decomposition)
R• + M → R-M• (chain initiation)
Here M denotes the metal template/functional monomer complex and R• the radical produced by the initiator. The resultant reactive complex will continue to react with neighboring metal template/functional monomer complexes, thus growing in length and forming polymer chains. This can be seen by the following general propagation reaction:
R-Mn• + M → R-Mn+1•
After the 24 hour period was complete, the polymer was cured at seventy-five degrees Celsius for an additional twenty-four hours. The resultant substance was crushed and then washed with methanol in an effort to remove any monomers that had remained unreacted. The resultant substance was then treated with hydrochloric acid in order to extract the cobalt metal ions from the polymer, effectively completing the molecular imprinting process and leaving cobalt shaped gaps in the polymer. These cobalt shaped gaps are then able to recognize and bind to radioactive cobalt ions in solution. The gaps also enable the polymer to display a high selectivity towards radioactive cobalt ions, allowing them to be extracted while the non-radioactive ferrous ions that normally coexist with cobalt are ignored [16, 6].
8. Radioactive Cobalt Ion Retrieval Analysis
As mentioned previously, the flame photometry study of the metal template/functional monomer complex indicated the amount of sodium impurities to be 13.60%. This is a noteworthy amount of impurity that was factored into the polymerization reaction; however, as shown by the following analysis, the impurities did not adversely affect the polymer's capabilities.
In order to test the polymer's ability to filter out cobalt ions from a complex solution in a realistic environment, studies were conducted in the presence of nitrilotriacetic acid (1.4 millimol), ascorbic acid (1.7 millimol), and citric acid (2.4 millimol). These three acids are commonly used complexants during nuclear reactor cleanup, and are thus relevant to the polymer's application. Along with the three acid complexants, the solutions also contained a large excess of ferrous ions (4 millimol) and a smaller amount of radioactive cobalt ions (an activity of .8 mCi/l, bringing the total cobalt activity of the solution to 2 µCi). These conditions accurately simulate a typical solution used during nuclear reactor cleanup, and thus give an accurate assessment of the polymer's capabilities in real world situations. Twenty-five milligrams of the polymer were then added to this solution and allowed to react until chemical equilibrium was reached. The results of the study showed that the molecularly imprinted polymer was able to extract cobalt ions while completely disregarding any circulating ferrous ions. Using AAS, the total active cobalt extracted from the solution was determined to be 44.0 µCi/g, thus reducing the solution's radioactivity by 55%. For the polymer's primary application in nuclear waste disposal, its ability to specifically recognize and absorb cobalt over ferrous iron is more important than its actual capacity for cobalt ion uptake. These results signify that the polymer is capable of selectively extracting radioactive cobalt ions in the presence of a solution common to nuclear reactor cleanup [6].
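The 55% figure follows directly from the quoted numbers: 25 mg of polymer at a measured uptake of 44.0 µCi/g removes 1.1 µCi from the 2 µCi of total cobalt activity in the test solution. As a sketch:

```python
# Reduction in solution activity implied by the quoted uptake figures.
polymer_mass_g = 0.025        # 25 mg of imprinted polymer
uptake_uCi_per_g = 44.0       # measured cobalt uptake, uCi per gram
total_activity_uCi = 2.0      # total Co-60 activity in the test solution

extracted_uCi = polymer_mass_g * uptake_uCi_per_g
reduction = extracted_uCi / total_activity_uCi
print(f"extracted {extracted_uCi:.2f} uCi -> {reduction:.0%} activity reduction")
```

The arithmetic reproduces the article's 55% reduction exactly, confirming the internal consistency of the reported mass, uptake, and activity figures.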
Once the radioactive cobalt was successfully extracted, the cobalt ions were then removed from the polymer using hydrochloric acid. The desorption process was able to fully remove all bound cobalt ions, and also finished quickly with both .1 molar HCl and .5 molar HCl. The resultant polymers then underwent the same test in order to determine their selectivity for cobalt ions, and their uptake capacity for cobalt ions. Despite being reused for five trials, no appreciable reduction in selectivity or extraction capacity was noted. This adds extra sustainability to the polymer, as it is also reusable [6].
8.1. Comparison to Currently Available Filtering Agents
Amberlite IRC-718 is a commercially available resin with a functional monomer similar to that of the molecularly imprinted polymer. In order to assess the Amberlite's capacity to extract radioactive cobalt ions from solution in comparison to the molecularly imprinted polymer's capabilities, both substances were subjected to the same tests. A solution of cobalt ion, ferrous ion, copper ion, nickel ion, and a citrate buffer (used to keep the pH of the solution at 4.8) was prepared. The commercially available resin and the molecularly imprinted polymer were then separately reacted with this solution. The Amberlite was able to extract 485 µmol/g of cobalt, although it extracted 125 µmol/g of ferrous ion as well. The molecularly imprinted polymer, on the other hand, was able to extract roughly 60 µmol/g of cobalt ion while completely excluding ferrous ions in solution. Although the commercially available resin was able to extract a significantly larger amount of cobalt than the molecularly imprinted polymer, the Amberlite's level of specificity toward cobalt was not great enough to be of use in nuclear reactor decontamination. This is further exacerbated by the fact that non-radioactive ferrous ion is present in great excess compared to the amount of radioactive cobalt found during nuclear reactor decontamination. It is this specificity that is absolutely key to making a noteworthy reduction in the volume of radioactive waste [6].
9. Potential of Polymer to Reduce Radioactive Waste
During a common nuclear reactor decontamination campaign about 13 Ci of radioactive cobalt activity is removed. The Ci system is a method of measuring the radioactivity present in a given system [8]. A single Ci is a staggeringly large amount of radioactivity, and decontamination of 13 Ci of radioactive cobalt produces a similarly large amount of radioactive waste. Nearly 3500kg of commercial resin is currently required to successfully remove the radioactive cobalt. The cobalt binds to these resins, thus creating 3500kg of solid radioactive waste that still needs to be disposed of. The molecularly imprinted polymer is able to extract 1.1 µCi of radioactive cobalt for every 25mg used. Simple stoichiometry reveals that the amount of molecularly imprinted polymer needed to successfully extract the 13 Ci of radioactive cobalt would be 325kg. Taking into account that the radioactive cobalt ions are a high level radioactive waste, this is a significant decrease. Using the molecularly imprinted polymer, the material that extracts and thus contains the radioactive cobalt could be reduced by 90%. A 90% reduction would result in a much easier storing process, and an overall easier to manage load of radioactive waste [6].
9.1. Overview of Sustainability
Sustainability is a broadly defined term that often takes on different meanings depending on the context with which it’s used. Sustainability here refers to nuclear powers potential as an alternative energy source and to the molecularly imprinted polymers potential to alleviate some of the problems concerning nuclear energy. As stated previously, The molecularly imprinted polymer is able to cause a 90% reduction in the mass of high level radioactive waste. Not only does this increase the sustainability of nuclear power, as the waste output is considerable reduced, it is also a cheaper alternative to the commercially available resins currently in use. To capture and extract the same amount of radioactive cobalt only 325 kg of the molecularly imprinted polymer must be used, whereas 3500 kg of a commonly used commercial resin must be used [17]. This decrease in the amount of filtering material needed to extract radioactive cobalt can decrease costs dramatically.
The application of the molecularly imprinted polymer also has the potential to eliminate radioactive ion-exchange waste. Radioactive Ion-exchange waste is formed when commercially available resins are used to filter radioactive particles in solution. Since the resins lack any kind of special selectivity toward the radioactive ions, both radioactive ions and normal ions are absorbed into the resin. This necessitates a more costly and elaborate disposal procedure that would otherwise not be needed. The molecularly imprinted polymer’s selectivity toward radioactive cobalt ions could solve this problem, as it would drastically decrease the amount of radioactive ion-exchange waste. Additionally, the polymer is able to be reused up to five times without any significant decrease in its ability to extract cobalt ions. The polymer can also be cheaply manufactured, making it a viable option for industrial applications[17].
10. The Societal Implications of Cleaner Nuclear Power
A survey recently conducted in Europe asked people who live near nuclear reactors if they were either supportive or against the utilization of nuclear power. The study was evenly divided, with 44% of the participants being in favor of nuclear energy, and 45% of them being against it. Of the 45% who were against nuclear energy, 39% of them claimed that if a permanent and safe solution for managing radioactive waste could be found then they would change their minds. In a separate study, people surveyed were found to be more worried by the management of radioactive waste than by the chance of a nuclear reactor accident [18].
The public’s perception is an important thing to factor in when designing and implementing new technology. When a large majority of the public’s opinion is negative, it can be difficult to perpetuate new technology or solutions despite of their efficiency. Such is the case for nuclear power, whose inherent danger is grossly overstated, and is thus not widely implemented despite of its enormous potential for energy.
Many leading scientists and engineers agree that nuclear power is a relatively safe and reliable means of providing energy for the world. The widespread utilization of nuclear power could not only help bring an end to the energy crisis, but it could also alleviate the negative effects that fossil fuel dependence has placed on the environment. Despite these assertions, the public still remains skeptical of the inherent risks in managing radioactive waste, which thus makes it difficult to increase the widespread usage of nuclear reactors [19]. The molecularly imprinted polymer has the capacity to reduce the amount of high level radioactive cobalt waste by 90 percent of its original amount. A decrease in waste levels of this extent may bear an impact on the public’s perception of radioactive waste. If dramatically less waste were to be formed, then the inherent dangers in managing that waste would also decrease, and public perception may become more favorable. The utilization of the polymer to achieve such an outcome would be the first step taken towards mitigating the public perception of nuclear waste management, and possibly the public perception of nuclear power as a whole, thus paving the way for nuclear power to become more widely utilized.
We would like to acknowledge Judith Bring, the head of the Bevier Engineering Library, for guiding us in the process of conducting legitimate research. Furthermore, we would like to thank Jared Helms for allowing us to sample his grammatical expertise. Finally, we would like to thank the Writing Center Staff for being incredibly thorough and helpful in all of our affairs.
[1] P. Hodgeson. (2008, October). “Nuclear Power and the Energy Crisis.” First Principles. [Online.] Available: http://www.firstprinciplesjournal.com/articles.aspx?article=1110&loc=qs.
In article
[2] P. Hodgeson. (2008, October). “The Energy Crisis” First Principles. [Online.] Available: http://www.firstprinciplesjournal.com/articles.aspx?article=1080&loc=fs.
In article
[3] M. Jason. (2012, Feb 10). “Energy Density and Waste Comparison of Energy Production.” Nuclear Fissionary. [Online Article]. Available: http://nuclearfissionary.com/2010/06/09/energy-density-and-waste-comparison-of-energy-production/
In article
[4] J. Wiley. “Nuclear Fission Basics” [Online]. http://www.dummies.com/how-to/content/nuclear-fission-basics.html.
In article
[5] B. Cohen. (2012, March). “Risks of Nuclear Power.”
In article
[6] A. Bhaskarapillai, S. Narashima, B. Sellergren. (2009, April). “Synthesis and Characterization of Imprinted Polymers for Radioactive Waste Reduction.” Industrial and Engineering Chemistry Research. [Online]. Available: http://web.ebscohost.com/ehost/detail?vid=4&hid=17&sid=cee5d4a5-7e9c-451f-aa38-0a613f704e9c%40sessionmgr12&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=37374694
In article
[7] D. D. Ebbing, S. D. Gammon. (2009). General Chemistry: Ninth edition. Belmont, Ca: Brooks/Cole. Page 821-857.
In article
[8] (2011, August). “Radiation Basics.” Health Physics Society. [Online]. Available: http://hps.org/publicinformation/ate/faqs/radiation.html.
In article
[9] (2012, January 22). “Nuclear Dawn.” The Economist [Online]. Available: http://www.economist.com/node/9719029.
In article
[10] A. Martin-Esteban. (2010, September). “Molecular Imprinting.” Sciverse. [Online]. http://www.scitopics.com/Molecular_Imprinting.html
In article
[11] “Polymer Structure” Case Western Reserve University. [Online]. Available: http://plc.cwru.edu/tutorial/enhanced/files/polymers/struct/struct.htm.
In article
[12] N. Lemcoff, S. Zimmerman. (2004, May). “Synthetic Hosts Via Molecular Imprinting—are Universal Synthetic Antibodies Realistically Possible?” Chem. Comm. [Online]. Available: http://www.rsc.org/Publishing/Journals/cc/article.asp?Type=Issue&Journalcode=CC&Issue=1&SubYear=2004&Volume=0&Page=0&GA=on.
In article
[13] D. Blauch. (2009). “Elemental Analysis: Carbon and Hydrogen.” Davidson College. [Online]. Available: http://www.chm.davidson.edu/vce/stoichiometry/ch.html.
In article
[14] B. Tissue. (2012, March). “Atomic Absorption Spectroscopy.” Virginia Technical Institute. [Online]. Available: http://www.files.chem.vt.edu/chem-ed/spec/atomic/aa.html
In article
[15] S. Luca. “Flame Photometry.” Standard Base. [Online]. Available: http://www.standardbase.com/tech/FinalHUTechFlame.pdf.
In article
[16] (2008) “Chain-Growth Polymerization.” Steinwall Inc. [Online]. Available: http://www.steinwall.com/ART-chain-growth-polymerization.html.
In article
[17] N. Abdul, B. Anupkumar, V. Sankaralingam, N. Sevilimedu. (2012, March). “Cobalt Imprinted Chitosan for Selective Removal of Cobalt During Nuclear Reactor Decontamination.” Carbohydrate Polymers. [Online]. Available: http://web.ebscohost.com/ehost/detail?vid=6&hid=107&sid=a55a330c-edfa-4024-a955-9e59f81bc796%40sessionmgr14&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=70039845.
In article
[18] A.Azapagic, C. Greenhalgh. (2009, December). “Review of Drivers and Barriers for Nuclear Power in the UK.” Environmental Science and Policy. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1462901109000987.
In article
[19] L. Warren. (1998, September). “Public Perception of Radioactive Wastes.” Interdisciplinary Science Revies. [Online]. Available: https://sremote.pitt.edu/content/maney/isr/1998/00000023/00000003/,DanaInfo=www.ingentaconnect.com+art00004?token=003f13882d47b76504c48663b252c232b6c533142595e6a333f257666954838.
In article
comments powered by Disqus
• CiteULikeCiteULike
• MendeleyMendeley
• StumbleUponStumbleUpon
• Add to DeliciousDelicious
• FacebookFacebook
• TwitterTwitter
• LinkedInLinkedIn | <urn:uuid:23717ac8-32b3-401e-993b-aa0e0edc676b> | 3 | 2.640625 | 0.534805 | en | 0.922764 | http://pubs.sciepub.com/jpbpc/2/2/1/index.html |
Advertisement Upgrade to remove ads
Refrigeration & Air Conditioning Technoloby
An example of a fossil is hydrogen
The law that states that "energy is neither created or destroyed, but can be converted from one form to another" is called the _____.
Law of conservation of energy
The type of energy stored in fossil fuels is referred to as _____ energy.
Most of the energy we use comes from something we already have on Earth. The only "new" energy we get comes from the _____.
A material that occupies space and has weight is called _____.
_____ ft/lb of work is accomplished when an 800-lb. condensing unit is lifted to the top of a 40 ft. building.
One horsepower of work energy equals the amount of work done when lifting _____ pounds to the height of _____ foot in _____ minute.
33,000; one; one
The unit of measurement of electrical power is the _____.
The three basic states of material are _____, _____, and _____.
solids; liquids; gases
One pound of ice at 20°F exerts its force downward. After absorbing 200 Btus, what direction(s) will the force be exerted? After absorbing 2000 Btus?
*Outward & downward
Define an atom.
*The smallest particle of an element.
Define a molecule.
The smallest particle that a substance can be broken into and still retain its chemical identity.
Define density.
The density of a substance describes its mass-to-volume relationship. The mass contained in a particular volume is the density of that substance.
Define specific gravity.
It is a unit less number because it is the density of a substance divided by the density of water.
Define specific volume.
Compares the volume that each pound of gas occupies.
Define power.
Rate of doing work.
Charles' Law
Dalton's Law
Boyle's law
P1 x V1 = P2 x V2
Specific volume
33,000 ft-lb/min
Specific gravity
no units
1 kW
3413 Btu
Force x distance
Please allow access to your computer’s microphone to use Voice Recording.
Having trouble? Click here for help.
We can’t access your microphone!
Reload the page to try again!
Press Cmd-0 to reset your zoom
Press Ctrl-0 to reset your zoom
Please upgrade Flash or install Chrome
to use Voice Recording.
For more help, see our troubleshooting page.
Your microphone is muted
For help fixing this issue, see this FAQ.
Star this term
You can study starred terms together
NEW! Voice Recording
Create Set | <urn:uuid:153e97e9-a948-43a8-bc9f-553bdbb516ea> | 3 | 3.453125 | 0.103038 | en | 0.847879 | http://quizlet.com/13988291/study-guide-2-flash-cards/ |
Thursday, March 20, 2008
Religious debate
Note: I originally wrote this when there were more candidates in the current U.S. Presidential race so it contains a few references which are slightly out of date.
In general, people often find it socially acceptable to say “I’d like to convince you that your opinion about X” is wrong where X is almost any political topic, e.g. health-care, the War in Iraq, or the minimum wage. People often find it acceptable when X is almost any non-political issue. People can argue over PC v. Mac v. Unix v. Linux. People can argue over how much of evolution is due to neutral drift and how much to selection. But there is one thing that is for many people otherwise open to vigorous debate socially unacceptable to say: “I want to convince you that your religion is wrong. You should convert to this one.”
Consider Ann Coulter’s recent book “If Democrats Had Any Brains, They'd Be Republicans.” No one batted an eye about the title but when Coulter went on CNBC and stated that she wanted all the Jews to convert to Christianity, people were shocked. This is puzzling, Coulter’s book title directly insults about half of the American public. She thinks that they are stupid for making the choices they’ve made. However, Coulter’s comments about Judaism are far more restrained; she thinks that Jews are, mistaken, in her language, not “perfected.”
This social taboo about arguing about religion is unjustified for and unproductive.
I’m aware of five arguments for this taboo. None are convincing.
The first argument runs that people are more emotionally invested in their religious opinions than their opinions about other topics. Opinions about religion are more deeply tied into personal identity. Therefore, attempts to change someone’s opinion about religion can feel like attacks on the person’s core identity. This is not really an argument for why we should have such a taboo but it is a highly plausible explanation for why such a taboo exists.
Second and related to the first argument, since religious beliefs are tied into our respective personal identities, arguments about religious opinions are much less likely to result in a change of opinion than other types of arguments. They are less productive. I doubt this is in fact the case. Consider how infrequently most people change opinions about anything even after multiple arguments and discussions about the topic.
Third, since people take religious opinions very seriously and have historically killed over them, for a liberal democracy to function we need to keep religious arguments to a minimum. This argument is an argument of low expectations. If we have so little allegiance to the ideals of liberal democracy that we need to worry about persecuting each other if we allow people to discussion religion we have a serious problem. Unfortunately, this argument does has some merit. Look at the recent Republican primary. Once religion came onto the table at all the fact that Mitt Romney is a Mormon became an overriding concern to many Christian voters. While this isn’t the same scale as killing people the basic issue is the same.
Fourth, religious beliefs often rest on very little actual evidence. Thus, these conversations are particularly unlikely to get anywhere. “My interpretation of the 2000 year old text is better than your interpretation” isn’t persuasive and “My religion’s record of supposed revelation to a small group of people 2500 years ago is better than your religion’s record of supposed revelation to a small group of people 2000 years ago” isn’t compelling either. This combines with the second argument above. Deep emotional investment on all sides and a lack of evidence is does not make for productive dialogue. This is especially the case when some people cite “faith” for their reason to believe something.
Fifth, many people today acknowledge that their religious beliefs are not determined by being correct but by what works for them, in terms of giving them satisfaction and happiness with life. If so, religion is a completely personal decision. I find this view hard to understand. Whether or not I believe in a God does not alter whether or not there is a God. If Jerry Falwell was correct, regardless of what religion personally works for me, I’m almost certainly going to hell.
However, despite these reasons, this taboo on religious debate is ultimately misguided and unproductive.
First, there’s no inherent logical distinction between questions of religion and questions in other areas that somehow put religion off limits to debate.
Second, in an era when religion is intertwined in politics it is unreasonable to let religion influence politics but not let the religious aspects be open to question. If a politician is anti-abortion for religious reasons then that politician should be able to explain what the theological basis is for his or her belief. If the politician is a Christian then what Biblical verses are justifying the stance? Similarly, if politicians claim that their “faith” is important to them, they should be prepared to defend it. How does Hillary Clinton reconcile her “faith” with abortion? Where they stand on the Documentary hypothesis? How do they handle the notion of a loving God ordering the slaughter of babies. You can’t only talk about religion when it is convenient to you. If religion is fair game, then it is always fair game.
Third, the taboo is not always respected and the religious groups that refuse to respect it gain an advantage over those that do respect it. All sorts of religious groups missionize and spend their time trying to convert people. Ann Coulter does not reside in a vacuum. Obviously, they’ll gain more converts than the religions that aren’t trying. Their memes will be more fruitful. Indeed, the religions that are least like to respect this sort of taboo are also the religions that will most likely be problematic, religions that don’t care about anyone else’s social norms or damage to society and just care about spreading their personal view of the Truth. This sort of taboo is much more common among politically moderate Christians and Jews than almost anyone else. These religious groups are simply hurting themselves while often irrational religious fanatics have free reign.
Fourth, if anyone really believes his religion is correct, why not argue for it? If you believe that you are correct about something you should want to convince others so that they too know what is correct. And if you think religion is simply a personal decision then you should have no objection to discussing for starters whether religion really is just an arbitrary personal decision.
In summary, we should not have a taboo against arguing about religion, be it whether a unicorn would be kosher, or whether Jesus was the son of God, or whether God exists, or any other religious question. If a politician says that religion is important to him or her, then that opens his or her religious opinions to detailed examination. Mike Huckabee should be pressed on what he really thinks about Mormonism and how old he thinks the Earth is. Hillary Clinton should explain what she means when she says her “faith” is important to her. This applies not just to Presidential candidates, and not just to politicians. We should all be willing to discuss and debate our religious viewpoints. If someone is unwilling to defend their religious views, it is likely because they cannot. So let’s call them on it. May the most reasonable views win. | <urn:uuid:fe13becb-074e-4eac-b3a0-38bada369659> | 2 | 1.609375 | 0.497924 | en | 0.965178 | http://religionsetspolitics.blogspot.com/2008_03_01_archive.html |
The Shortfall of Network Load Balancing
Applications running across networks encounter a wide range of performance, security, and availability challenges as IT department strive to deliver fast, secure access from anywhere, at any time, on any device. Read this paper to learn why applications fail, why network load balancing alone offers little protection, and why context-aware application delivery is so critical to making mission-critical applications highly available, fast and secure.
Sponsor: F5 Networks | <urn:uuid:9926e51e-b1d5-451f-b0cd-67157d806f59> | 2 | 1.710938 | 0.137979 | en | 0.89917 | http://resources.infoworld.com/ccd/assets/58946/detail |
Pee Commerce
I got into a discussion about fertilizing with human manure, and defended the practice, mainly because my grandmother did it.
Obsessively, I searched the internet for information about the topic, and was blown away. A few websites mentioned the Edo era in Japan, which ruled from 1600 to 1863.
During the Edo era, Japan closed off trade with other countries, so they lacked access to cheap sources of everything. In some sense, they had a closed, "sustainable" economy. Part of what they did was collect human feces and urine and use them as fertilizer. They also collected ash, and mixed it with the feces to assist in the decomposition and balance out the fertilizer. (I guess that's what my grandmother was doing too.) It was a business, to collect urine, feces and ashes, and produce fertilizer.
This is a picture of what a shit collector carried around.
See page 4 of this PDF for information about the Edo recycling system for crap:
Pages 1 and 2 of the following pdf file state that this business was profitable and led to sanitary conditions better than in contemporary European cities. This is because urban human waste was purchased from residents, converted to fertilizer, and sold to farmers. In European cities, feces went into the gutter and was never treated to create a benign fertilizer, and this spread disease. When trade was opened up, cheaper fertilizers hit the market, and the shit peddlers were pauperized, and the city got dirtier. The government intervened to improve fecal collection and recycling, by giving shit peddlers an economic edge. At one time, all areas of Tokyo were served by these recyclers.
This paper doesn't say it, but, we could reduce our use of water and sewers if we put a high tax on fertilizers and chemicals, and went back to a system of inspired by early 20th century Tokyo (in 1910, the population of Tokyo was 2.1 million people).
Additionally, in the same timeframe of the late 1800s, it appears that English inventors were figuring out how to use human waste as fertilizer. Traditionally, Europeans (and Americans) crapped into deep cesspits, which were then covered over after years of use. The new technologies involved burying waste at shallow depths, where plants could access the nutrients.
One of the big negatives about using feces as fertilizer is the risk of disease. The most mentioned in the news is e. coli, which is present in all feces. According to the NIH, e. coli is present over 120 days after using raw manure as fertilizer. To reduce the e. coli, composting or aging is recommended. E. coli decreases if the fertilizer is incorporated into the soil (rather than being placed on top of the soil), and the fertilization is done far enough ahead of time.
Comment viewing options
without reading the blog, my
without reading the blog, my parents always tell me that when they lived in Germany it was a common practice and the german phrase when translated is "night soil"....a whole new meaning to potty mouth. | <urn:uuid:d6c03c58-3319-4f03-b9f8-e0cfb7af32d5> | 2 | 2.46875 | 0.315966 | en | 0.975522 | http://riceball.com/d/content/pee-commerce |
Wednesday, July 20, 2005
I was only 4 when Armstrong first walked on the moon. I don't remember that specific event, but I do remember getting up early several mornings to watch "the rockets". A dozen or more years later I would get up at 3 am or so to watch the shuttles take off. I grew up idolizing Armstrong, but the astronauts I remember seeing were Crippen and Young, Engle and Truly, Lousma and Fullerton, Mattingly and Hartsfield. And America finally joined the Soviets when Sally Ride went into space, something Valentina Tereshkova did in 1963.
The suburb I grew up in, North Highlands, had a Moonwalk Parade each year to commorate the big event. I remember marching in it myself once, with my cub scout troop. I'm only a few miles from North Highlands now, but I've heard nothing about such a parade. In fact, I've lived here for 8 years now and haven't once heard reference to a Moonwalk Parade. I hope it hasn't gone the way of the dodo but I fear it has. What a fantastic event to celebrate!
Update, 7/21/05 12:16 am: Here's an interesting article that says that the Apollo program never was designed for true exploration, and the shuttle sucks.
No comments: | <urn:uuid:93633033-7fc1-4434-9526-44e4e04f7060> | 2 | 1.59375 | 0.091265 | en | 0.965908 | http://rightontheleftcoast.blogspot.com/2005/07/moonwalk.html |
Wednesday, October 28, 2009
Four Animals One Grinder
I am looking for translators to translate this post into Dutch, Polish and Chinese. Email me if you are interested.
This post has been translated into other languages. Italian version (traduzione in Italiano). French version (en Français). Korean version (한국). Portuguese version (em Português). Spanish version (en Español). German version (in deutscher Sprache). Swedish version (på svenska).
The first animal is a cow, the second one is a pig, the third another cow and the last a horse.
I can't believe this video. It isn't really horrible or evil like most of the others on here. It's kind of gross, but hey, that's life, man. Mostly it's just incredible. It just shows what goes on at a rendering plant. Whole dead farm animals are fed into the rendering machine via a lifter and then ground up by this unbelievable machine, bones, heads, hooves and all.
A lot of posts on the Net are saying that these cows are alive. It's not true. They just appear to be alive since once the grinder starts, they start moving around a lot due to the incredible force of the thing.
Another common misconception is that these animals are being ground up for human food like hot dogs.
That's not true.
These are dead animals that died on farms somewhere, so they are not really fit for consumption. Some say that the result might go into animal feed (especially for chickens) or pet food, and that's not a pleasant thought (feeding rendered cattle back to cattle is how Mad Cow Disease is caused). The thought that this goes into pet food also bothers me. If it's true, that does it. I'm never going to eat dog food again.
I think usually the rendered dead animals will just go to make fertilizer, which is a harmless use of them. They also make yellow (non-vegetable) oil out of this stuff. That's used as grease for machinery. They also make soap out of this ground up Mr. Ed Puree.
People don't realize that animals die all the time on farms, especially on modern factory farms. What people never think about is, how do you get rid of dead horses, cows and pigs? You can't exactly drag them to the curb and leave them there for the garbageman. And it's kind of hard to bury them in a hole. We don't have animal graveyards for cows and horses, and incinerators don't accept them.
This is where the rendering plant comes in. I guess you sell the dead animal to the rendering plant, and they come and pick it up for you. They take it back to the plant and grind it up for Mulch N Grow or whatever. One thing you might want to know about these rendering plants is that the smell emanating from them is truly horrendous, as people who live near them attest.
The guy driving that lift must have one of the country's nastiest jobs. Can you imagine being the guy who has to clean the grinder out? If you look at that thing, it's a horrible mess.
At the end the lift tosses a horse in, and watching that sucker get ground up is pretty incredible. One thing that blew me away was the sound of this crushing machine as it ground up bones and skulls. Wow!
There's a particularly nasty segment at the second cow (2:11 in the video) where the thing lets out this massive spurt as it's being crunched up. That means that that dead cow had been decaying for a while and was getting bloated, as dead animals tend to do. That's another reason why this meat is not really fit for consumption by humans.
This video has been up for a few years, but it is just starting to make the rounds in a big way around mid-August 2009.
Isn't it incredible the stuff that we can see on the Interwebs? Before Al Gore invented the Internets, how many of us ever saw a rendering plant in action?
The company that makes this sucker is out of Denmark. Just think of the tech that went into this machine. This thing is called the PB 30/60 Crusher.
A few thoughts:
Wouldn't this be a great death penalty machine? Screw this lethal injection crap. 1st degree murder? I sentence you to the Grinder! We could sell tickets for large amounts of money for spectators to watch the killers get ground up and use the proceeds to help fund the state so the state can spend the money to help people.
Damn I want one of these machines! Where can I buy one? I'd like to use it on some of my enemies. I would tie them up, throw them in the loader and dump them in the Grinder. Then I would charge like $1,000/head for spectators to watch, get rich and retire on the proceeds.
We should use this thing on dead humans to grind them up. That way we could save lots of graveyard space and use the future would-be graveyard space to build strip malls and Walmarts and other useful things.
Actually, I think when I die, I want to be ground up like this. We could make it like a funeral thing and all of the funeral guests could come watch me get ground up and eat popcorn and stuff. It would be a great end to my life.
After I get ground up, I would like to be canned as Robert Lindsay Chow and fed to my pet cats, assuming that I have any. If I don't have any cats, I would ask to be made into cat food, because I love cats, and this way, cats could feast on someone who really loves them. Cats have given me so much love in my life, this would be my special way of giving back!
They should have had some really brutal death metal music playing in the background of this video, don't you think?
Wouldn't it be cool to see a dead elephant or giraffe get thrown in that thing, just for fun?
In my dream world, there would be like 600 channels on cable. One of them should be the Animal Shredder Channel. That channel would just show this machine grinding animals all day. To make it more interesting, they could vary the types of animals getting ground up. I would just turn it on and leave it on for hours at a time while I do my work and whatnot, just as background, you know. Except I would probably change the channel when I was eating.
There are a lot of possibilities for alternate uses for this machine.
We could take some fat White kid raised by a single Mom on Twinkies and video games and stick him underneath the machine. The meat from the ground up farm animals would fall all around him and all over him. It would land on his face, covering him.
We would have workers with shovels to shovel the meat off of him so he wouldn't get buried. He would keep his mouth open, and some of the meat would fall in. Then he would eat it. We would keep him under there, and he would get fatter and fatter. After about 10 years of that, he would be so fat that he could become the King of Germany.
We could take the ground up animals and give them to Disney. Disney could reconstitute them into humans, especially teen idols like Selena, Miley and Britney. Little would their swooning fans realize that their favorite teen star was really a ground up horse!
We could use the machine to try to solve intractable conflicts. By grinding up pigs and cows both and making movies of it and distributing it to conflict zones, possibly we could make headway in the Hindu-Muslim conflict in Kashmir.
The possibilities are endless!
Anonymous said...
mike12570 said...
the first one was a sheep :/, and if you pay attention, you can see that it also squirts some liquid out. LMAO "fat white kid, king of Germany" too damn random. <3 Germany.
Anonymous said...
yeah, the first one is a sheep. the cow doesn't seem all that decomposed, i think the squirts are most likely from organs popping, such as the bladder, intestines and numerous stomachs.mmmmmmm. chyme.
i've seen the ground up inards of dead farm animals loads of times.
that's why i don't eat it
Anonymous said...
I want one of those.
Eploasyin said...
I am sorry to say, but these animals are alive, basically when the sheep head moved, I paused, went down, read they weren't alive and continued watching, but the cow tongue was moving licking the cows nose, which is what the cows do to keep Flies away. If someone could explain to me that it is infact, please tell me. I hope it is true and they are not alive.
Mina McKay said...
Im really glad I saw THIS version of the video because I saw the horse being ground up in a version where it was just the horse. It was an anti-horse racing video. But the video was a bit grainier and I could have sworn watching it that the horse was still alive. I was so upset at the thought of it. I mean, I still think the meat grinder is nasty...I wouldnt want to eat anything coming out of that or FEED it to my pets either. Ugh!
IndianaJohn said...
I think that the fourth animal was a hornless unicorn.
IndianaJohn said...
This comment has been removed by the author.
Anonymous said...
U need to grow up. The grinder is a tool that is useful but the other comments that u made were un called for. It's dumb fucks like u that want to grind up people and be fed to your fucking cats that are a waste of space on earth. I am a slaughter buyer and appreciate each animal to be used to its fullest extent for the well being of the human race and you blowing your mouth off just fucks peoples heads up more by filling them with bull shit. Get a real job because obviously you are ignorant about this industry and not responsible enough to be talking about it.
Anonymous said...
Wow! Are you related to Hitler?! No respect to human life?
Just so you are aware farmers are not paid for their dead animals. They have to PAY, around here
$50 per cow, $30 per sheep/goat
The guys shoot if first before hauling it onto the trailer to take it away. They use the dead animals for pet food, mink food. I know because I have worked on/currently work on several different farms and also have my own livestock. There is no WAY any farmer I know would let an animal of their be ground live. | <urn:uuid:fe426115-9b57-4c8d-8021-cb99992a112b> | 2 | 1.609375 | 0.720613 | en | 0.978149 | http://robert-lindsay.blogspot.com/2009/10/four-animals-one-grinder.html |
15 September 2009
Are You a Climate Skeptic?!
Um, OK.
1. A colleague of mine who works in renewable energy said that he used to be a climate skeptic, but had changed his mind because of the recent evidence that climate change was occurring far more rapidly than previously predicted.
I asked whether he was thinking of global temperature (which was below long term trend) or arctic ice (which was recovering from the 2007 low) or sea level rise (which was following the long term trend or just possibly flattening off), as none of these major indicators seemed to show any signs of acceleration. He accused me of cherry picking recent years: the fact that he had picked these years as evidencing rapid change didn't seem to concern him.
It's a funny old world in climate science.
2. Yes. It seems perfectly valid to extrapolate from an unverifiable anecdotal account of something said by an un-named person to "the community". I can't see any problems with that at all.
3. -2-RW
You are right to imply that it is not a statistically robust sample. However, this sort of thing occurs enough in my interactions in this community that I find to be troubling. Perhaps your experiences are different.
4. RW,
A number of scientists have said that there is a pervasive groupthink mentality in the scientific community on this issue. Some have even said that they were not able to speak freely until after they retired because so much research money is tied to global warming. In an atmosphere where expressing skepticism is career suicide, Roger's anecdote is pretty mild.
"What are you -- a climate skeptic?" is precisely the kind of ad hom comment that should raise eyebrows in an academic community.
5. Btw, since Roger was present as a witness (in fact, a participant), what do you mean by "unverifiable"? Are you questioning his veracity?
6. "an unverifiable anecdotal account of something said by an un-named person"
Nice way to call the man a liar. I can't see any problem with that at all.
It certainly fits the pattern of ad hominem-first attacks when any true-believer statements are questioned.
7. Roger
Do you think the pressure of the public/political spotlight is responsible for the 'issues' ? Did you perhaps feel that you were being suspected of being a 'traitor' ?
8. Do you mean Black et al? The paper linked seems to be Knight et al.
9. I was charitably taking the view that RW was referring to my anecdote, not Roger's. As nobody knows who I am , it is indeed unverifiable.
10. That's the scientific community's idea of an insult? That's like a Church-goer hearing you saying "I don't know if god exists" and responding "What are, an atheist?"
As an agnostic, this has happened to me...
Although that would actually be a little insulting.
11. Hi Roger. I admire your honest approach to these matters. I'm not sure why you act as if you are allergic to the "skeptic" label. I don't see why anyone would regard it as a pejorative term in relation to most fields of science, and certainly not in a field experiencing as much flux as climate science.
As I read your stuff you come across as what I would call a garden-variety skeptic; you are skeptical of what alarmists pitch, and also of what the so-called "deniers" pitch. I would say that positions you quite well, and you should wear the label quite proudly. It's one of the reasons I give your posts the time of day even though I'm not on board with some of your thoughts on policy and I fall in a different spot on the spectrum regarding what current science has to say about the effects of CO2.
If I'm misunderstanding you (again) feel free to correct me (again).
I'm concerned about your statement that this 10-year trend is not a "statistically robust sample". Not sure what you mean by that. A decade-long trend is not a "sample" of current trends. Surely "current" means no more than what is happening, trend-wise, in the most recent 10 years or so.
In statistics we distinguish between the notion of a "poll" and that of a "census". A poll samples a subset of a population and infers something about the population. A census, in contrast, collects numbers from the entire population and reports them. While a poll estimates population behavior, a census is the population behavior.
If it is an objective fact that the earth has cooled (by a particular measure) over the last decade, and if by "current" we mean a time-fram subsumed by this decade, then it is tautological that the earth is "currently cooling" (by this measure) -- this is census data, not poll data. There is no question of robustness -- a concept that only applies to inference from poll data.
Another meaning for "robustness" concerns the sensitivity of the data to the choice of initial and terminal points. You must know that when one reports a "trend" over a time period in this field it is not the slope of a secant line (which is highly sensitive to choice of initial and terminal points). It is generally a least-squares or similarly obtained regression line. Such regressions are about as robust as possible: change the initial/terminal points by a year or two, and the resulting slope will not change by very much, which is what I take you to mean by "robust". Make it 12 years or make it 8, or anything between, and the conclusion remains the same. That's robustness!
If you are referring to long-term trends, then of course a 10-year census is not a "robust" sample in the loose sense of being non-arbitrary or non-representative. But neither is any contiguous 20 or 30-year census, particularly if we wish to infer 100 year or longer trends. Indeed, the actual time scales we should be dealing with are millennial at minimum, and possibly geologic. On such scales no climate trends from the 20th century look at all out of place. It is precisely the "non-robust" selection of a tiny decade-scale period in late 20th century, with appallingly inaccurate or outright false comparisons to centuries-long "proxy data" that artificially begs an alarming conclusion.
The most recent 10 year data is a perfectly "robust" and appropriate counterpoint, as it demonstrates how seriously flawed the GCMs relied on by the IPCC are, with CO2 continuing to rise while temperature falls, and with the complete failure of the tropical "hot spot", that is the model-inferred signature of the hypothesized CO2-based global warming, to appear.
Scientific theories (or models) cannot be proven by statistical correlation, no matter how robust. But they can be falsified by robust data that contradicts their predictions. 10 years of cooling and failure of the hot spot to appear gives, in my mind, ample "robustness" to justify rejection of these models.
12. 1o years selective? How about 150 years on the scale of 10.000 years of climate changes?
13. After spending some time today trying to find a current working definition of the term “climate skeptic” it seems that it’s a changing, inexact term. The term is mostly applied to people who are skeptical of the science justifying believe in AGW. With that usage Roger is not a climate skeptic.
However, it seems that as the global warming debate gets more rancorous the term “climate skeptic” is being used as a pejorative to be fired at anyone who disagrees with the true believers. By that measure it fits Roger with his contrarian view of cap and trade.
Sometimes it’s just too hard to deal with conflicting viewpoints. For example; since I failed to understand how Roger could be called a climate skeptic I just concluded that the person throwing the pejorative was a naïve, unthinking believer in the church of Global Warming. I could be wrong but now I can stop thinking about it. ‘-)
14. Imagine what this conversation could be like in 2012 if the sun remains quiet...
...and the years 2010 to 2012 are actually cooler than 2009.
15. FWIW, scenario B in Hansen's 1988 testimony has a ~12 year period between 1970 and 1982 where the temperature remains flat and it's pretty damn flat between 2010 and 2020.
16. I actually wrote about this issue in my blog in July. Here is the conclusion part:
If all scientists are skeptics, then why would skeptic be an insult? The obvious reason is that the people who use the term are not skeptics, but "climate believers". These are people who have accepted the meme of AGW without the skeptical science that created the hypothesis. For the believing mind, skepticism is not part of their mental outlook. Once something is incorporated in their belief system, questioning it, testing it, trying alternative explanations are not normal scientific inquiry, but heresy to be punished.
17. Roger,
What is the citation for that BAMS paper? I'm finding nothing in BAMS with Knight in the author list for 2009.
As a climate scientist myself I share the concern over the use of "skeptic" as a pejorative.
18. -18-WillH
It is part of this online supplement
19. Roger - my experience certainly is different. I am an astronomer. I work in a department where a lot of people work on atmospheric physics. One of the atmospheric physicists is giving a seminar here in a few weeks called "A Global Warming Sceptic's Case". I'm sure everyone will be very happy to listen to what he has to say.
On a point of substance, obviously I don't know what your colleague was actually arguing. But is the 1999-2008 trend in global temperatures statistically different from the 1979-2008 trend?
20. Eli
Back-projections before 1988 don't count since that was the reality-based tuning constraint. And if the clear rising trend of scenario B (CO2 linear increase) can be described as "pretty flat" then you'll have no trouble describing reality up to 2009 as very flat, since it follows scenario C (no CO2 increase beyond 2000).
21. Roger,
That’s of course too quickly of a judgement, though I don’t find the reaction entirely surprising either, in the context of the popular debate about climate change. Not offering an excuse here, but merely a possible explanation.
The ‘global warming has stopped since 1998’ canard (Lucia?) is so often used as a pretext for not addressing the long term change in climate, that scientists have understandably grown very wary of even the mentioning of these short term trends. 99 out of a 100 times this is done in order to bash ‘AGW’. I could equally imagine an evolutionary biologist getting defensive when missing fossils are brought up; good chance that it’s a creationist talking. Better not to start accusing someone immediately, but with so many creationists/”skeptics” around, it’s at least understandable (though not excusable).
And a possible analogy:
Over at the Examiner, Thomas Fuller had a post outlining a ‘new generation of skeptical arguments against the theory of anthropogenic global warming’ (AGW), which he felt had a lot of merit. I got to his post via a comment thread at RealClimate, where he asked for input. He did get quite a lot of feedback, but unfortunately, a lot of it was packaged in a rather negative tone. However, he framed his questions as “skeptical arguments advanced against the theory of anthropogenic global warming”. He also acknowledged that scientists are getting frustrated “answering the same ‘primitive’ objections repeatedly, only to see them resurface shortly thereafter, something that I am sure is frustrating.” I think a logical consequence is that his framing of the topic aroused a defensive reaction from supporters of the scientific consensus.
I recount this event here: http://ourchangingclimate.wordpress.com/2009/07/12/next-generation-questions/
22. Bart/ ourchanging --
Talk about carnards; Three in one!
* I have always cautioned anyone from making any comparison starting in 1998.
* I haven't ever said global warming stopped at all.
* I also have never said we should not address long term changes in climate.
I've consistently told my readers that global warming is real both based on the physics and based on statistics etc. I'm for promoting alternate energy for a variety of reasons including limiting carbon emissions. I specifically want to see nuclear including in the mix.
What I have said I've said the AR4 multi-model mean overpredicts warming while always pointing out the data show warming clealry exists and has not "ended". I have said the second many, many times.
However, for some reason, some people (you?) seem to want to translate any suggestion that some specific set of model runs might be a tad above the actual trend as being the same as saying there is no warming. Then you use your mis-representation of what I have said as an excuse to mis-represent or over react to what other people say.
Frankly, I think the tendency of some scientists (you?) to provide examples of precisely the behavior Roger discusses in the blog post tends to make readers believe Roger's reports that these reactions are common enough.
23. "The ‘global warming has stopped since 1998’ canard (Lucia?) is so often used as a pretext for not addressing the long term change in climate,..."
Hypothetically, how long into the future would the world have to go without a year that was, say, 0.1 degree Celsius warmer than 1998, before you would agree that "global warming has stopped"?
To 2013? 2018? 2023? 2028? Longer?
24. Lucia,
Me mentioning your name was purely a tongue in cheek comment regarding me using the word "canard", which you pointed out to me I had used in the wrong way previously. It had nothing whatsoever to do with what I think your views are regarding the (future) state of the warming. No need for the defensive reaction (which happens indeed to be the red line of this blog post as I read it).
There are several papers out that point to the normalcy of 10 year periods that show no warming, without the underlying warming trend actually having changed. I'm not aware of the same conclusion to hold for 15-20 year time periods, so my guess is that in such a time frame, warming will resume. If not, we'll have to closely examine the reasons why it hasn't.
25. Yourclimate: "Several Papers" presumably refers to Easterling and Wehner (ONE paper!) which said that some models in some scenarios don't warm sometimes for ten year periods at a time and the rest of the stuff you said. The problem of course is that models may have something interesting to say reality is much more pertinent. So Easterling and Wehner looked at that to. And they found a couple of periods like the recent period, and concluded that such things can happen in the middle of substantial warming. BUT all the periods in the actual data were associated with volcanic eruptions, so there is no evidence that such periods on the way to warming are "normal" at all in the real world.
Now ask yourself, if there is an extended period of no warming-that is, if the recent situation continues for a few more years, does that not at the very least mean that projections of much more rapid warming are far to pessimistic? No? Yes? I can't seem to get anyone to be clear on this and I figure you think you know quite a bit.
By the way, in the troposphere, the period of no warming is actually in excess of twelve years now, not ten.
26. Bart/Ourchanging--
Ahh.. Ok. Even on re-reading, it still seems to suggest that I advocate the things in that sentence. But, I can now see where that was not your intention.
People accusing me of advocating those things or wishing to block action on warming is not unprecedented on the web. So.. yea.. I read it that way.
I apologize for my misunderstanding that. You were using the word "canard" correctly this time. :)
27. Bart,
You write, "There are several papers out that point to the normalcy of 10 year periods that show no warming, without the underlying warming trend actually having changed. I'm not aware of the same conclusion to hold for 15-20 year time periods..."
Are you counting the Knight et al. BAMS piece among those papers?
It's reasonable that you would, because the Knight et al. BAMS piece says this:
"Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability."
But notice how they say that the simulations rule out (at a 95% level) zero trends for 15 years or more, but they make it seem like 10 years is nothing special.
So what about 11 years? 12 years? 13 years?
I think their "10 years or less...are common" is really just spin. They simply don't wish to acknowledge that even a 10-year interval of non-warming creates a greater-than-50-percent likelihood of a discrepancy with predicted warming rates.
P.S. Notice they write periods of "a decade OR LESS" (emphasis added) are common. Good spin!
28. That BAMS Knight paper... there really not much there there, is there? It's pretty short. But based on what the paper says:
* after the temperature trends were observed, they collected together 10 model runs every one of which had a lower long term trend than the average from IPCC A1B runs, then found that that group of lower warming runs exhibits some number of zero or negative 10 year trends.
Things not revealed to the reader:
1) How that group of 10 models hindcasts.
2) The variability of "weather noise" in that group of 10 model runs relative to the earth's variablity
3) What the comparison would revela if they did not correct for ENSO. (This actually matters-- depending on how well ENSO explains anything in the models. Maybe model ENSO's aren't very explanatory? We don't know unless it's shown or discussed.)
I'm sure I could think of more. But, quite honestly, I don't think Knight is an example of a paper that shows 10 year periods with zero trends were "expected". As it stands, I don't even think they showed readers of the paper that 10 year periods with zero trends would be expected based on any reasonable standard other than, possibly, "EMERGENCY. WE NEED A PAPER THAT SAYS THIS!"
29. jg,
FWIW, Hansen's scenerio B has a linear increase in the CO2 FORCING, not the CO2 mixing ratio (that is exponential, see down at the bottom of the link).
The global temperature rises in Scenerio B rises from about 1980 to 2010 and then goes pretty much flat for ten years (see the first figure in the link). Moreover the 1970-1980 period in Scenerio B, which Hansen considered the most likely, was somewhat flatter than the measured trend.
So yeah.
30. Eli--
Yes. Hansen's projections published in 1988, did not do a very good jop hindcasting the dip due to the eruption of Fuego during the 70s, and consequently, it's hindcast of the trend during that period was not very good. This has nothing to do with Knight or even "Scenario B".
31. The Earth has recently recovered from a period of unusual cold (LIA). The speed of recovery and present level are fairly typical of the last few thousand years, and the present level is near the average. The last decade or so has flat to slightly dropping global temperatures, and presently many (including AGW supporters) think it will continue for a decade or more. There is no tropic hot spot in the predicted level. The Arctic, which melted a lot recently seems to be recovering, and Antarctic is generally cool. Would someone please tell me what predictions have been made and supported that strongly support AGW as opposed to clearly falsifying it.
What is the slope of a linear regression line through the years 2010 to 2019?
"Moreover the 1970-1980 period in Scenerio B, which Hansen considered the most likely,..."
The years 1970 to 1980 had already passed when James Hansen did his analysis. So all three scenarios should have had the same same forcing for the 1970-1980 period, which should have been the actual forcing. As pointed out by jgdes, the fact that a model mimics the past doesn't prove much about the model's ability to predict the future. As he noted, the actual temperatures in 2009 are closest to James Hansen's Model C, in which all greenhouse forcings stopped increasing in 2000 (i.e. climate change was solved in 2000).
33. From accused denier Steven Levitt:
34. In my opinion any scientist who isn't a sceptic isn't doing his job right. Always questions your results/conclusions, until you can no longer find a reason to doubt them.
Then they are 'true' only as long as noone finds any evidence to counter them. | <urn:uuid:b69da2b1-659a-4b49-b876-12fdfd288c88> | 2 | 1.710938 | 0.219186 | en | 0.965903 | http://rogerpielkejr.blogspot.com/2009/09/are-you-climate-skeptic.html?showComment=1253277489660 |
Wet mammals shake at tuned frequencies to dry
Andrew K. Dickerson, Zachary G. Mills, David L. Hu
In cold wet weather, mammals face hypothermia if they cannot dry themselves. By rapidly oscillating their bodies, through a process similar to shivering, furry mammals can dry themselves within seconds. We use high-speed videography and fur particle tracking to characterize the shakes of 33 animals (16 animal species and five dog breeds), ranging over four orders of magnitude in mass from mice to bears. We here report the power law relationship between shaking frequency f and body mass M to be f ∝ M^−0.22, which is close to our prediction of f ∝ M^−0.19 based upon the balance of centrifugal and capillary forces. We also observe a novel role for loose mammalian dermal tissue: by whipping around the body, it increases the speed of drops leaving the animal and the ensuing dryness relative to tight dermal tissue.
1. Introduction
Water repellency has previously been viewed as a static property of surfaces such as plant leaves and insect cuticle [1,2]. An equally important trait is dynamic water repellency, whereby muscular energy is applied to remove water. This paradigm may have use in sensor design. For example, digital cameras already rely upon internal shakers for removing dust from sensors [3]. Such functionality may have improved the capability of the Mars Rover [4,5], which suffered reduced power from the accumulation of dust on its solar panels. In the future, self-cleaning and self-drying may arise as an important capability for cameras and other equipment subject to wet or dusty conditions.
Many animals evolved physical adaptations to minimize infiltration of water into their furs or feathers [6,7]. Semi-aquatic mammals possess a dense underfur that maintains large air pockets to insulate the body during a dive [8]. Fur itself often has specialized geometries, such as the grooved interlocking hairs of otters that mechanically resist infiltration of water [9]. Certain animals, such as sheep, additionally secrete oily substances such as lanolin that act to increase the hydrophobicity of hair and so discourage fluid–fur contact. In order to arrange their hairs regularly and to uniformly coat them with oil, many animals groom [10] by preening, licking and shaking. Such behaviours may also remove particles in addition to water: birds have been observed to remove dust by shaking after dust-bathing [5] and perform aerial shakes to remove water [11].
Shaking water from an animal surface reduces the combined energetic costs of carrying this water and evaporating it. Small animals may trap substantial volumes of water in their fur for their size [12–14]: emerging from a bath, a human carries 1 pound of water, a rat 5 per cent of its mass and an ant three times its mass. Wet fur is a poor insulator because water's conductivity of 0.6 W m^−1 K^−1 is 25 times greater than that of air and 12 times greater than that of dry fur [15], causing a wet animal to lose heat very quickly. Evaporation of the entrapped water from an animal's fur may sap a substantial portion of the animal's energy reserves. The specific energy required [16] is e = 0.6λ, where the heat of vaporization of water λ = 2257 kJ kg^−1. Consequently, a wet 60-pound dog, with one pound of water in its fur, would use 20 per cent of its daily caloric intake simply to air-dry. It is thus a matter of survival that terrestrial animals remain dry in cold weather [17].
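The arithmetic behind this estimate can be sketched as follows; the water mass and the specific energy e = 0.6λ come from the text above, while the unit conversions are standard:

```python
# Energy cost of air-drying one pound of entrapped water, using the
# specific energy e = 0.6*lambda quoted in the text. The calculation is
# illustrative; values are rounded.
LAMBDA = 2257e3              # heat of vaporization of water, J/kg
E_SPECIFIC = 0.6 * LAMBDA    # specific drying energy, J per kg of water

water_mass = 0.454           # kg, roughly 1 pound of water in the fur

energy_j = E_SPECIFIC * water_mass
energy_kcal = energy_j / 4184.0   # 1 kcal = 4184 J

print(f"Energy to air-dry: {energy_j/1e3:.0f} kJ ({energy_kcal:.0f} kcal)")
```

This gives roughly 615 kJ, or about 150 kcal, for one pound of entrapped water.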
In this study, we investigate a mechanism used by mammals to dry quickly, the wet-dog shake shown in figure 1a. We begin in §2 with a description of the novel methods developed in this study, including a robotic wet-dog-shake simulator. We proceed by measuring the masses and frequencies of shakes spanning a wide range of mammals. Next, we characterize the kinematics of the shaking response using high-speed video and fur-tracking. We then present models for both drop ejection and the ensuing dryness of the animal, testing these models using experiments with a spinning tuft of fur. Lastly, we discuss the implications of our work and suggest directions for future research.
Figure 1.
Kinematics of fur during the wet-dog shake. (a) A droplet cloud generated by a Labrador retriever during mid-shake. (b) Time-lapse images of a dog shaking its fur. The thin black line highlights a marker glued between the shoulders of the dog's back. (c) Time course of the angular position of the skin and vertebrae of the dog. Error bars indicate the standard deviation of measurement (N = 3). Blue solid line, skin; black dashed line, backbone; red solid line, best fit.
2. Material and Methods
2.1. Animal measurements
We and the Zoo Atlanta staff measured by hand the masses and radii of 28 of the 33 animals in our study. The masses and radii of the remaining five animals (squirrel, black bear, brown bear, lion and tiger) were inferred using a combination of methods. Tiger and lion masses were provided by zoo staff from recent veterinary procedures in which the animal was anesthetized and weighed. Chest girth measurements for the tiger and lion were not safely measurable by zoo staff, and were thus inferred from the literature, based on the animals' masses [18,19]. Videos of three species (squirrel, black bear and brown bear) were obtained from YouTube and BBC, where their masses and radii were estimated based on previous measurements of adults in the literature [20–25].
2.2. Wet-dog simulator
We built a ‘wet-dog simulator’ apparatus to visualize the motion of drops on a shaking mammal. The apparatus is described further in the electronic supplementary material. ‘Dog’ fur was provided by three 6.3 cm^2 squares of tanned white-tailed deer fur, which were glued (with non-water-soluble glue) to plastic bases clipped to the rotating axis of our device. Prior to experiments, loose hairs were removed and samples immersed in water for 4 h to ensure complete saturation into skin and fur. Samples were spun for 30 s on the wet-dog simulator at a radius of 2 cm at various frequencies. Between trials, samples were weighed, resaturated with water and drip-dried for 30 s.
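As a rough sketch of the forcing these spins apply, the centripetal acceleration at the sample can be computed from the 2 cm radius given above; the frequencies below are illustrative, spanning the animal range reported in §3:

```python
import math

# Centripetal acceleration a = r*(2*pi*f)^2 felt by drops in a fur
# sample spun at radius r. The 2 cm radius is from the text; the
# frequencies are illustrative shake rates, not measured settings.
R_SAMPLE = 0.02   # m

def centripetal_accel(f_hz, radius_m=R_SAMPLE):
    omega = 2 * math.pi * f_hz        # angular frequency, rad/s
    return radius_m * omega ** 2

for f in (4, 10, 30):                 # bear-like to mouse-like, Hz
    a = centripetal_accel(f)
    print(f"{f:2d} Hz -> {a:6.1f} m/s^2 ({a / 9.81:4.1f} g)")
```

Even at the low end of animal shake frequencies, the acceleration at this radius exceeds gravity, which is what allows drops to be flung from the fur.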
2.3. Brush experiments
In order to test Tate's Law, we used 19 brushes with round bases (Loew Cornell Nylon 1812 brushes, Loew Cornell Bristle 1812 brushes and Sterling Studio synthetic brushes SS-100 round set). The brushes were originally tapered at a range of slopes; we shaved them to produce flat tips. We weighed drops dripping from the brushes on an analytical balance. To obtain the data in figure 4d, three brushes were placed on the ‘wet-dog simulator’ and the mass of ejected drops at various rotational speeds was determined through image processing with MATLAB. The cylindrical shell method was used to determine the volume of elliptical drops.
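Tate's Law, which these experiments test, balances a drop's weight against the surface-tension force around the tip perimeter, m g = 2π r γ. A minimal sketch, with an assumed (not measured) brush-tip radius:

```python
import math

# Tate's law: a pendant drop detaches when its weight equals the
# capillary force around the tip perimeter, m*g = 2*pi*r*gamma.
# The 2 mm tip radius is an assumption for illustration only.
GAMMA = 0.072   # N/m, surface tension of clean water
G = 9.81        # m/s^2

def tate_drop_mass(tip_radius_m):
    return 2 * math.pi * tip_radius_m * GAMMA / G   # kg

m_drop = tate_drop_mass(0.002)
print(f"Predicted drop mass: {m_drop * 1e6:.0f} mg")
```

For a 2 mm tip this predicts drops of roughly 90 mg, the order of magnitude one can check against the analytical balance.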
3. Results
3.1. Shaking frequencies across mammals
Using high-speed video at 500–1000 fps, we filmed the shakes of 33 wet mammals, spanning 16 species and five breeds of dogs (figure 2). Shakes were prompted by sprinkling small animals with a spray bottle, and large animals with a hose. We found animals generally shook after the flow of water had ceased.
Figure 2.
Photo sequences of animals filmed in this study: (a) adult mouse, (b) rat, (c) Kunekune pig, (d) Boer goat and (e) Labrador retriever.
We characterized animal sizes using measurements of body mass M and the chest circumference 2πR measured posterior to the shoulder, where R is cross-sectional radius of the chest. In general, one specimen per species or breed was available, but several specimens of mice and rats provided the opportunity to determine variability in frequency and mass within a species. The averages and standard deviations of measurements are presented in table 1 with corresponding error bars in figure 3. Among four juvenile mice, four adult mice and four adult rats, the standard deviations for both mass and frequency were only 5–10% of their respective averages, indicating that there is very low variability in these measurements. This also suggests that each animal has a particular frequency at which it shakes.
Table 1.
Size and shaking speeds of animals studied. The radius and mass of the squirrel, black bear and brown bear were estimated from the sizes of average adults in the literature [20,21,23,25]. The radii of the lion and tiger were unattainable by the Zoo staff and were estimated from similarly sized adults in the literature [18,19].
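The per-animal frequencies were read from oscillation traces like the one in figure 1c. One way such a frequency can be extracted from a tracked marker is sketched below on a synthetic damped sinusoid (the real traces come from 500–1000 fps video and fur particle tracking):

```python
import math

# Recover a shake frequency from a tracked-marker angle trace. The
# damped sinusoid below is synthetic; a 6 Hz "shake" sampled at a
# video-like 500 fps stands in for real tracking data.
FPS = 500.0
F_TRUE = 6.0   # Hz, a dog-like shake frequency (illustrative)

t = [i / FPS for i in range(int(FPS))]   # 1 s of samples
angle = [math.exp(-ti) * math.sin(2 * math.pi * F_TRUE * ti) for ti in t]

# Times of upward zero crossings; each marks the start of a cycle.
cross = [t[i + 1] for i in range(len(angle) - 1)
         if angle[i] < 0 <= angle[i + 1]]
f_est = (len(cross) - 1) / (cross[-1] - cross[0])
print(f"Estimated shake frequency: {f_est:.1f} Hz")
```

Counting cycles between the first and last upward zero crossing recovers the 6 Hz input to within a few per cent, and the same idea applies to noisier tracked data.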
Figure 3.
The relation between shaking frequency f and animal radius R. Dogs are denoted by a circle, other mammals by a square and the semi-aquatic otter by an X. Best fit is given in equation (3.1). Error bars indicate the standard deviation of measurement.
Figure 3 and table 1 show the relation between frequency of shaking f and animal mass M for the animals in our study. To calculate a best fit, we tried to obtain a fair and uniform sample of the animals studied. The mass and frequency changed little within certain sample groups, such as juvenile mice, adult mice, and adult rats. In these groups, only the average of each group was considered to avoid bias towards particular species in our best fit. Specimens of certain canine breeds such as Labradors and huskies were obtainable in a wider range of masses and so were considered individually rather than as an average for each breed. Otherwise, we calculated a best fit using a sample that consisted of one specimen of each canine breed and one specimen of each non-canine species. In all, among the 33 animals measured, we used a sample of 25 data points to determine our best fit.
The best fit using the method of least squares yieldsEmbedded Image 3.1Note that the goodness of fit R2 = 0.95 is high, despite over four orders of magnitude in mass (0.01–260 kg) of the animals considered. Among these animals, we observe a clear dependency of shaking frequency on body size: mice must shake at 30 Hz, dogs at 4.5–8 Hz and bears at 4 Hz.
In figure 3, the vertical distance between the points and the best fit denotes the vigour of the animal's shake with respect to the average. We suspect deviations from the trend are due to modifications in shaking style according to the animal's anatomy, or as in the case of dogs, centuries of domestication. While animals generally shook on four legs, rodents such as mice and rats stood on hind legs to shake (figure 2a,b). Otters and sheep did not shake at frequencies lower than the best fit, as one might expect from the lower adhesion of drops to their waxy fur.
The largest animals such as bears shook at frequencies of 4 Hz, slightly higher than predicted by the best fit (3.5 Hz). Generally, animals in the size range of 4–260 kg exhibited a slightly smaller range in frequency (4–6 Hz) than indicated by the best fit (3.5–9 Hz). This departure from the best fit is likely due to the decreasing importance of shaking with size. The largest animals such as elephants need not shake because of a combination their large thermal mass, thickness of dermal layers and lack of hair. Thus, we expect animals to depart from the observed trends at some critical size, and this departure may in fact begin for the largest animals studied.
3.2. Shaking kinematics
Four Labrador retrievers (M = 32.5 ± 6.5 kg) served as model organisms to characterize the shaking kinematics because they were tame and accessible. A typical shake by a Labrador is shown in figure 1a, where a fluorescent fiducial marker is taped to the dog's fur in the middle of its back (figure 1b). The angular position θ of the marker with respect to the vertical is shown in figure 1c and the electronic supplementary material, video S1. We find the shake is closely approximated by simple harmonic motion, in whichEmbedded Image 3.2where the shake amplitude is A = 90 ± 10° (N = 3) and the frequency (in cycles per second) is f = 4.5 ± 0.25 (N = 3). The peak angular velocity of the shake is ω = dθ/dt ∼ 2πfA. We observed qualitatively that drops are shed continuously throughout the cycle, with small bursts of increased shedding when the fur changes direction.
We observe in figure 1b a surprisingly large amplitude of motion A ≈ 90° despite the dog's four paws remaining in contact with the ground. Rotating the dog's skin by hand, while keeping the vertebrae static, indicates that the dermal tissue alone has a maximum deflection of As ≈ 60°. Loose dermal tissue, which roughly contains all substance between fur and muscles, had been previously hypothesized [6] to reduce the energetic cost of locomotion by facilitating limb movement, and we find here it serves another purpose by increasing the amplitude of the shake.
We infer the vertebral motion during the shake has a smaller amplitude Av = AAs ≈ 30°, as shown by the time course of the dotted line in figure 1c. The vertebral amplitude is three times less than the dermal tissue amplitude during the shake, indicating that loose dermal tissue has an important role in amplifying the shake. We also observed loose dermal tissue in other animals, such as our X-ray videos of rats (see electronic supplementary material, video S2). In our analysis of the forces involved, we will see how this increase in amplitude improves the efficacy of the shake through increasing the centrifugal force on drops within the fur.
3.3. Drops ejection from hair clumps
We now rationalize the observed power law scaling by consideration of the physics of drop release from an animal's furry surface. A wet furry animal will drip water under the influence of gravity. As the animal dries, the falling ligaments of water transform into streams of drops. Because the animal coat is wetting, it is energetically favourable for this departing fluid to follow the animal's hair, from root to tip. Photographs of wet animals such as otters, bears and dogs (see electronic supplementary material and figure 5a) often show wet animal hair forms a fairly uniform series of wet aggregations, resembling wetted paintbrush bristles. These clumps are formed through a complex process that depends on hair spacing, length, curvature, material properties and degree of wetness [2630]. Tabulated properties [6,3134] of animal fur properties, including length, diameter, density, and stiffness, show no dependency on animal size for the range of animal masses we have considered (see electronic supplementary material, figure S1 and S2).
Figure 4.
Drop departure from fibre aggregations. (a,b) Video sequences of drop ejection under gravity and due to spinning, respectively. In the latter, centrifugal forces are Rω2/g = 11 and smaller drops are ejected. (c) The dependence of drop mass m and hair aggregation size R0 for dripping under gravity. The mass of drops dripping from glass capillaries is shown for comparison. (d) The relation between drop mass and dimensionless centrifugal acceleration for three hair aggregations of varying diameters. Best fits in (c,d) are given by equation (3.3) using F(R0/a) = 0.4. (Online version in colour.)
We performed a series of drip tests with variable-sized paintbrushes, ranging in diameter from 1.2 to 11.5 mm meant to simulate the range of hair clump sizes observable in animals. We shaved the tips of the paintbrush bristles flat in order to obtain uniformity in our experiments. The paintbrushes are then suspended in a ‘wet-dog simulator’ (see electronic supplementary material, figure S5), consisting of a high-strength spinning frame that rotates the brush along with a high-speed camera at a given frequency f. This device allowed us to visualize the flow of fluid as if a dog is shaking at the frequency the device is spun.
3.4. Visualization of drop release
The detachment of drops may be clearly visualized using our system. Figure 4a shows a video sequence of drop release from a paintbrush under gravity. The corresponding drop release from a spinning brush (at 2.61 m s−1 with rotation rate of 610 r.p.m.) is shown in figure 4b, and is visually similar to release due to gravity. In both processes, fluid entrained from the brush engorges the drop. This engorgement occurs at a rate that depends on the remaining moisture content of the brush and the applied centrifugal or gravitational forces. During engorgement, the drop remains pinned to the brush. In figure 4a, pinning occurs at the circumference of the hair clump, whereas in figure 4b, at points within the center of clump. Once the drop has grown to a critical size, the pinch-off and release process is quite fast, occurring within 10 ms. In both gravitational- and centrifugal-force-driven dripping, a portion of the drop remains attached to the clump after release.
Figure 5.
Properties of hair clumps measured using our wet-dog simulator. (a) The separation of wet aggregations upon spinning. (b) The relation between RMC and non-dimensionalized centrifugal acceleration. Error bars indicate the standard deviation of measurement. (c) Mass of water held on a wet animal's body versus animal mass. Data for wet and dry masses for Cerradomys. sp. nov, Lindbergh's Oryzomys, Atlantic forest Nectomys and Amazonian Nectomys were collected by Santori et al. [13], while the mink was gathered from Korhonen & Niemela [14]. (Online version in colour.)
This phenomenon of drop release has been well-studied in dripping from capillaries [35,36] in the context of intravenous drug delivery and in spinning disk spray applications [37,38]. In these cases, drop size can be very carefully controlled. The conditions for drop detachment from a capillary are given by Tate's Law [39,40]: to detach, a drop's effective weight mG must overcome the surface tension force σR0 binding the drop to adjacent hairs, where m is the drop's mass, σ = 72 dynes cm−1 is the surface tension of water and R0 is the paintbrush radius. During normal dripping G = g, the acceleration of gravity. During shaking, drops have a larger effective weight owing to centrifugal forces, Fcent = mRω2, which, for the mammals, we have filmed can be 10–70 times gravity (table 1). As shown in figure 4a,b, the high centrifugal forces cause extruded drops to be smaller; we will see later that they result in far more fluid extracted than simply by gravity.
Note that because our device spins at a constant speed, our experiments do not account for the dynamics of oscillating, pendulum-like motion, which may also act to eject drops. The relative magnitudes of centrifugal to acceleration forces are comparable, Faccel/Fcent = |(dω/dt)/ω2| = A−1 ≈ 0.65, suggesting that drops are likely to be shed by a combination of both mechanisms; nevertheless, we consider only centrifugal forces in our analysis.
3.5. Tate's Law applied to hair clumps
Our experiments reveal that drops formed by wet paintbrushes very consistently satisfy Tate's Law. Figure 4c shows the dependency of drop mass on clump size R0 under gravity: drop mass is linearly proportional to clump size, as shown by the red points. Note this behaviour is similar to that shown previously for capillary tubes, as shown by the diamonds in figure 4c. Figure 4d shows the dependency of drop mass on rotational speed for three clump sizes (R0 = 1.1, 1.7, 2.1 mm): drop mass is inversely proportional to centrifugal acceleration Rω2. Together, these findings demonstrate the modified Tate's Law for the mass of drops released,Embedded Image 3.3where σ is surface tension of water, ρ is density and Embedded Image is capillary length. For the best fit trend lines in figure 4c,d, we estimate the correction factor [41] for hair clumps as a constant function, F(R0/a) = 0.4. The correction factor for an analogous system, glass pipettes, was previously determined to be a non-constant function by Harkins & Brown [41]. Measurements of this function in our experiments yielded a small range, from 0.3 to 0.6, indicating the low impact of approximation as a constant function (see electronic supplementary material). As shown by the agreement between the solid lines and the experimental data in figure 4c,d, equation (3.3) well predicts the mass of the drops shed for an animal shaking at a fixed rotational velocity ω with hair clumps of size R0.
We surmise that the drying of animals proceeds as follows. Large drops, whose size are on the order of the capillary length, naturally depart the animal under to gravity as in figure 4a. However, thin films of water on the hairs and the smallest drops remain attached and so can be removed only by shaking, as shown in figure 4b. Equation (3.3) shows that if an animal increases its rotational velocity and so its centrifugal force compared with gravity, it may extend the range of drop masses shed. However, at a given rotational velocity, the residual drop masses left behind after shaking, shown in figure 4a,b, may be too small to be ejected by centrifugal forces, and so may remain attached to the animal.
3.6. Prediction of shaking allometry
We may simplify Tate's Law to formulate a ‘wet-dog shake rule’, an allometric relation between animal mass and shaking frequency. Formulation of such a scaling law requires determining which variables within equation (3.3) are independent of animal mass and so may be fixed as constant. We consider each of the five variables in turn (σ, A, m, R0, f), turning first to variables that are independent of animal mass, as found either in our experiments or in literature. Clearly, material properties of the fluid such as surface tension σ are independent of animal mass. In our experiments, we observe shaking amplitude A varies over a range of 60–110° without clear trends in animal mass. We find hair properties such as hair length and density do not vary systematically across mammal mass (see electronic supplementary material). Thus, we fix wet hair clump radius, which depends primarily on hair length and density.
The remaining variables in equation (3.3) are the radius R, which is an independent variable, and two dependent variables, the chosen shaking frequency f and the shed drop size m. The shed drop mass is a function of both the radius and the frequency of shake. In particular, over the range of Rω2/g = 10–70 for animals studied (table 1), equation (3.3) predicts drop mass will vary by a factor of 7. This amount is low in comparison with the variation in other variables considered. Variation in animal radius R is a factor of 24 (from 1 to 24 cm); moreover, variation in the square of frequency (4–30 Hz) is a factor of 50. Each of these factors are greater than seven. Moreover, their combined variation of Rω2 varies by an even larger factor of 1200 if R and ω were to vary independently. Thus, we assume that drop mass is constant and proceed with our scaling to determine the relation between frequency and radius.
We apply an allometric relation relating animal mass and radius previously found by McMahon & Bonner [42]: animals are nearly isometric according to Kleiber's Law such that MR8/3. Applying this law, the resulting scaling relation between animal mass and shaking frequency isEmbedded Image 3.4By shaking at such frequencies, furry animals act like a high-pass filter, causing drops above a critical size m to eject. This critical drop size is determined by the scaling pre-factor in equation (3.4), which depends on the drop's surface tension and density according to equation (3.3). It is noteworthy that our predicted exponent of −0.19 (R2 = 0.92) is close to the observed value of −0.22. Our exponent is within the 99.8 per cent confidence intervals for our experimental best fit, indicating that there is only a 0.2 per cent chance that our the predicted exponent is different from the measured one. We attribute this small discrepancy, which scales as an infinitesimal M0.03, to simplifications in our model, most likely regarding animal radius.
The increase in shaking speed for smaller animals is important in compensating for their smaller radius. This tuning of shaking frequency with body size is necessary to generate the large centrifugal forces required to shed drops, Rω2/g = 10–70 gravities, for the animals listed in table 1. If, for example, all animals shook at the frequency of a dog, the smallest animals would have insufficient force to remove drops: for example, a mouse shaking at 4 rather than 30 Hz would generate only 1 gravity of centrifugal acceleration, and would remain just as wet.
3.7. Shaking animals achieve similar residual moisture content
In our experiments with paintbrushes, we found that the frequencies required for drop detachment depend on clump size R0. We now use experiments with real animal fur to measure how clump size changes during longer durations (30 s) of shaking. Figure 5a shows the hair clump configurations at various speeds of rotation for a 6.3 cm2 square sample of deer fur. As rotation speed increases so that centrifugal forces increase from 1 to 40g, the clumps separate into a cascade of smaller clumps. By weighing these clumps, we find this separation is accompanied by an exponentially increasing difficulty in drying, which gives further rationale for the frequencies used by the animals.
Figure 5b shows the relation between the centrifugal forces applied and the remaining moisture content RMC within our deer fur sample. We define RMC as the ratio of the post-shake mass to the initial mass of water in the clump, following by textile-drying engineers [43]. In figure 5b, the limiting RMC values of D = 30 per cent show excellent agreement with our measurements of RMC = 0.31 ± 0.12 (N = 10) on live rats, suggesting our experiments with spun deer hair are representative of shaking live animals. From the combination of these results, we conclude that 30 per cent RMC is the lowest level of dryness obtainable using shaking. Moreover, the lowest RMC values of 0.3–0.4 values occur for speeds in which the associated centrifugal force isEmbedded Image 3.5as indicated in the shaded region in figure 5.
As shown in table 1, all shaking mammals in our study have centrifugal forces in the range of 10–70, a relatively small range considering the four orders of magnitudes of mass of the animals. Notably, this range of forces coincides with the region of peak dryness given by equation (3.5), which was found independently with our wet-dog simulator. We conclude that animals shake to achieve nearly equal and maximal levels of dryness.
3.8. Physical basis of residual moisture content
We may rationalize the trends observed in figure 5b, beginning with the initial RMC of deer fur under gravity. The mass of water in the hair is proportional to the corresponding water column height within the fur. When fur is initially wetted, surface tension competes with elasticity to bring the water column between the hairs to an equilibrium height [26] of Embedded Image, where hair length in the deer fur sample L ≈ 40 mm, hair follicle radius b ≈ 200 µm, inter-hair spacing d ≈ 0.028 cm, elastocapillary length is Embedded Image, the Young's modulus is E = 3.7 GPa, I = πb4/4 ≈ 1.26 × 10−3 mm4 is the area moment of inertia and θe = 60° is the equilibrium contact angle of water on hair [34]. We find this model is fairly accurate for our 6.3 cm2 square sample of deer fur. Given its combined water column cross-sectional area Afur of 2.4 cm2 (measured by the area of space between the furs), we predict the hairs will retain minitial = ρHinitialAfur = 4.4 g of water, whereas it held 4.7 g in our experiments. Thus, elastocapillarity theory is sufficient to account for the wet weight of deer fur under gravity, and the discrepancy between experiment and theory suggests water may soak into the hair fibres.
When fur is spinning, the associated centrifugal force competes with surface tension to decrease the height of the water column to a modified Jurin's height H = 2σcos θe/ρRω2d. Using the definition of RMC, we may write RMC = H/Hinitial + D, where D is a free parameter used for fitting the asymptote in figure 5b. This parameter D cannot be predicted with our simple model, as it represents the moisture that soaks into the rough surfaces of the hair and skin, and cannot be removed even under extreme centrifugal forces [4447]. The remaining moisture content simplifies toEmbedded Image 3.6where C = 2σcos θe/ρdHinitial = 14.2 m s−2 and D = 0.3. Our measurements of RMC of deer fur in figure 5b show a power law qualitatively consistent with equation (3.6), having a goodness of fit R2 = 0.93 within the range of Rω2/g = 2–35. For small rotation speeds Rω2/g < 2, RMC is sigmoidal and outside the scope of our model, owing to elastic and gravitational forces becoming comparable with centrifugal forces in this regime.
Using our measured trends in moisture content in figure 5b, we may quantify the benefits of the animal's loose dermal tissue, observed in figure 1c. Previously, we found loose dermal tissue increases the shake amplitude by threefold, and thus the speed of shaking by ninefold. If dermal tissue were tight rather than loose (as on humans), animal shaking frequencies would cause RMC values to remain close to 1, indicating the animal would remain wet. Thus, an important role of the loose dermal tissue is to increase the efficacy of the shake, as shown by the sensitivity of the RMC to changes in skin speed.
3.9. Shaking energy expenditures
We may assess the effectiveness of shaking by comparing the energetic costs of shaking versus evaporating the water. The shaking energy can be estimated as the peak kinetic energy for simple harmonic motion, given by Embedded Image. If animals can shake 70 per cent of their water off, as shown in §3.7, the energy required to evaporate that mass of water is given by Eevap = 0.42λMw, where Mw is the mass of the water on the animal and λ is the heat of vaporization of water. The coefficient 0.42 is a product of 0.6, the fraction of energy needed from the animal to evaporate water [16] and 0.7, the fraction of water an animal can shake off as found in our experiments.
Figure 5c shows the relation between an animal's body mass M and the mass of water in its fur, Mw. Data for seven mammals were found by combining our own measurements of wet animals along with others [13,14]. The trend follows the power law Mw = 0.047M0.97, where M and Mw are both in kilograms, with high accuracy (R2 = 0.95, N = 7). The mass of the water held in the fur is approximately 3–10% of the animal's body mass for the masses considered (0.1–4 kg).
We combine the above energetic estimates with our measurements of animal frequency to calculate an efficacy of shaking. We define this efficacy as the ratio of the energy expended to shake to the energy expended to air-dry as η = Eshake/Eevap. We find that over the four orders of magnitude of mass for animals studied, this efficacy has a small range from 10−4 to 10−3. In this range, all values are far less than one, indicating the great energy savings achieved by shaking.
4. Discussion
4.1. Predictions based on power
In tests with animal hair, we found the observed frequencies were capable of drying the animal appreciably, as defined by removing 70 per cent of its accumulated water. Furthermore, we found that increased speeds imparted to the fur would achieve diminishing return. Thus, the animal obtains a reasonable amount of dryness without expending an excess of energy. The work per shake W of an animal [42] scales as the mass of its muscles (WM). The kinetic energy expended per shake is WMR2ω2MR2f2; thus if animals shook as fast as possible, their frequency would scale as, fR−1M−3/8. We note that our observed scaling exponent of −0.22 falls between the minimum frequencies for drop release (−0.19) and the maximum frequencies based on power output (−0.375). Thus, it may not be possible for animals to dry further because they may be at the upper limits of shaking speed they can generate. In studies of galloping, stride frequencies of small mammals varied from 2 to 9 strides per second, whereas the frequencies of shaking are faster (4–30 shakes per second). Studies regarding drug-induced tremors in mice reported shivering frequencies to be very close to shaking frequencies at 25 shakes per second [48], which is quite possibly at the limits of oscillatory motion of muscle.
4.2. Comparison with other scaling laws
Our measurements of shaking frequency may be related to other frequencies associated with animal movement. Stahl measured the heartbeat frequency [49] of resting animals scaling as fhbM−0.25. Joos et al. [50] found that the wing beat frequency in bees scales with the bee's mass as fwbM−0.35. Heglund [51] found frequency, in the trot–gallop transition, for mice, rats, dogs and horses, scales as ftrotM−0.14. He noticed, for all animals tested, stride frequency became asymptotic, changing less than 10 per cent as the animals increased their speed from the trot–gallop transition to their maximum. These power law scalings are comparable in exponent to the ones we measured for shaking animals, possibly because similar muscles are used to power both locomotion and shaking.
4.3. Scaling frequency with body radius
In predicting a scaling for frequency, care must be taken in choosing an appropriate independent variable. There are two choices to characterize body size, radius and mass. When choosing body mass, our model fits the experimental results quite well. Rewriting our model, equation (3.4), in terms of frequency and body radius, yields fR−1/2. This result is in high variance with respect to experimental data in table 1, which yields fR−0.77 (R2 = 0.96). This discrepancy arises from approximating body proportion with a single circumference measurement around the shoulders. However, this is not the only region in need of drying. In fact, other regions of the animal will have a different characteristic radius. Consequently, an animal's mass, which scales as its volume, more accurately captures the average radius of the animal that the drops encounter during shaking. This reasoning explains why our scaling with respect to animal mass ultimately yielded a closer fit to experimental data.
4.4. Predictions and exceptions
We hypothesize shaking when wet is an ancient survival mechanism, dating back to the emergence of furry mammals. Many Pleistocene mammals were covered with hair. Giant beavers [52], similar in size to a black bear, and short-faced bears [53], similar in size to grizzlies, would have probably shaken similarly to their modern counterparts. Although the ability to shake probably spans generations of furry mammals, it is not a characteristic of all mammals, even of those today. The largest mammals' thermal mass is a likely cause for its inability to shake. In addition, aquatic mammals and those covered with a hard shell, such as an armadillo, have no need to shake dry. Other animals with specialized slow lifestyles such as the giant sloths may not possess the speed to initiate a shake.
Hairless mammals may have no shaking instinct. While filming, we observed hairless guinea pigs did not shake, but only shivered. In personal observations, some species such as the sparsely haired warthog spend their days bathing in muddy water. We expect that nearly hairless species, adapted to hot environments, have not developed the behaviour to shake when wet.
We saw earlier that loose dermal tissue played a role in increasing the speed, force and efficacy of the shake. This constraint may also prevent certain species from shaking. For instance, while humans do not generally have loose dermal tissue, some humans can use their long hair to shed water. This technique involves repeated motion of the head and upper torso in the dorsoventral plane to whip water from their hair. Although the head is oscillated at the low frequencies of only 1–2 Hz, the hair length aids to increase the amplitude and speed at which hair tips are whipped, and consequently the ensuing dryness.
5. Conclusion
In this study, we demonstrated that reciprocal high-speed twisting commonly observed in dogs has a broad generality among mammals. We found that drops remain adhered to a wet animal's hair due to the forces of surface tension. To eject drops and achieve dryness levels of 30 per cent, we found animals generated centrifugal forces equivalent to 10–70. In order for animals of variable size to attain this magnitude of force, we predicted animals must shake at frequencies of fM−3/16, which was similar to experimentally measured frequencies. We conclude animal frequency is tuned to (i) the animal's size and (ii) the properties of water, namely surface tension and density, which set the magnitudes of the centrifugal and capillary forces in our model. Consequently, such mechanisms work poorly when animals are subjected to fluids with properties different from water such as crude oil, whose wetting properties and inability to evaporate prevents the shake from being effective.
Animals were provided by Zoo Atlanta, the local park, and neighbouring laboratories at our institution, and filmed according to IACUC protocols A09036 and A10066.
The authors thank R. DeBernard, P. Foster, L. O'Farrell for laboratory assistance, J. Aristoff for useful discussions, and the NSF (PHY-0848894) for support.
• Received May 25, 2012.
• Accepted July 24, 2012.
View Abstract | <urn:uuid:82a1587e-fced-4c84-b7bc-70bce0ccb3b7> | 3 | 3.40625 | 0.065588 | en | 0.933914 | http://rsif.royalsocietypublishing.org/content/9/77/3208 |
DNA Mutations
Eight types of Spontaneous DNA mutations.
The DNA molecule in every cells of our bodies is a template of life instructions. The DNA is made out of four bases that paired complementary and it form a helical structure. This helical structure of DNA molecule is the sources of many cancers in our life time. When the DNA molecule undergo damages, it is called mutations. DNA mutations may be caused by environmental factors, or may be passed down from parent to off springs. Unlike the DNA mutations that passed down from parent to off springs, DNA mutations that are caused by environmental factors are called spontaneous DNA mutations. There are eights types of spontaneous DNA mutations and all types have severe consequences for our bodies.
1. Tautomeric shifts: Changing of base structure spontaneously.
2. Depurination: Breaking of linkage between purines and deoxyribose spontaneously.
3. Toxic metabolic products: chemical toxins reactive agents that alter the structure of DNA.
4. Aberrant segregation: segregation of chromosomal causing aneuploidy or polyploidy.
5. Aberrant recombination: abnormal crossing over of DNA causing deletions, duplications.
6. Errors in DNA replication: Mistake by DNA polymerase causing point mutation.
7. Transposable elements: insertion of elements into gene sequences.
8. Deamination: Cytosine deaminate to create uracil spontaneously.
A person is at the risk of these eight types of spontaneous DNA mutations. Since spontaneous DNA mutations are caused by environmental factors, older people have higher chances of having their DNA mutated spontaneously because they are more exposed to these environmental factors. Therefore, it is important to monitor risk for DNA mutation that may cause cancer and utilize appropriate remediation to lower the risk of the environmental factors.
Liked it
No Responses to “DNA Mutations”
Post Comment
comments powered by Disqus | <urn:uuid:082edd43-0e27-493b-a94c-47aa68382ec1> | 3 | 3.390625 | 0.352559 | en | 0.928841 | http://scienceray.com/biology/dna-mutations/ |
Scout Culture
Rates of membership retention varied greatly across the troops in the sample, though some patterns emerged. Scouts who had earned badges were more likely to continue in Scouts or to move up to Venturers. Larger troops have greater latitude in choosing programs and can engage in between-patrol competition, which should lead to greater satisfaction with the program. Troops have better retention when they are active, that is, when they run many outdoor activities; having activities in the summer months also helps. Scouts crave autonomy: they are more likely to return in troops where they have opportunities to be on their own or where they have significant responsibility for their food at camps. There is also reason to believe that making full use of the uniform boosts retention.
Putting these findings together, the picture of a troop with high membership retention emerges as one which is relatively large, participates in many outdoor activities year-round, and whose Scouts are actively involved in earning badges, are given significant autonomy, and proudly wear their uniforms as confirmation of their identity as Scouts. In short, it is a picture of a troop that makes the most of the things which differentiate Scouting from other activities. It is a troop which has fully embraced Scout culture.
By Scout culture, I mean the attitudes, values, norms, and behaviours that characterise Scouting. Underlying Scout culture is a radically child-centred approach to education. This child-centredness is the essential characteristic that sets Scouting apart from other approaches to education.
Generally speaking, education is about training young people to meet certain adult-defined standards. Whether teaching sports skills, a musical instrument, or school classes, education is about creating an environment where the kids will move towards the adult understanding of the topic. This is not to denigrate this sort of education. If one is doing math or playing Beethoven, there is a right answer. However, Scouting's subject matter, building character, is one that demands a different approach.
Scouting's radically child-centred approach makes the most of the natural characteristics of young people. BP wrote that the patrol system puts young people ``into fraternity-gangs which is their natural organisation, whether for games, mischief, or loafing...'' (BP 1945:18). The Scouter ``has got to put himself on the level of the older brother'' (BP 1945:3). Scouters guide their Scout patrols away from mischief, not by suppressing it but by proposing Scouting activities instead. The Scouter does not fight the natural gang organisation of the Scouts, but rather attempts to work with it. While in teaching music the students become mini-maestros, in Scouting the Scouter becomes a ``boy-man'' (BP 1945:19).
This is not to say that the idea of adult standards is absent from Scouting. The very idea of education requires that there be some sort of goal which is being pursued. Standards in Scouting can be seen in the form of the Scout badges. But standards or rules are kept at a minimum. Scouting is ``the man's job cut down to boy's size'' (quoted by BP 1945:15). Within this environment, Scouts are given maximum autonomy and responsibility, and they rise to the challenge (BP 1945:23).
One of the reasons why Scouting works is because within this ``child-sized'' environment, something which I refer to as ``necessary ethics'' emerges. The true implications of meanness, a small theft, or other minor misdemeanours are not apparent in a neighbourhood of hundreds. However, when one is in the backwoods with the other six Scouts in one's patrol, it quickly becomes obvious that hogging the Oreos ultimately makes the whole trip less pleasant. Necessary ethics are the ethical rules which become obviously necessary in an isolated small-group situation. Scouts are given the Scout Law as a starting point, then learn what it really means by this natural process. If they bring this with them when they return to the city and into adulthood, Scouting has succeeded.
Liam Morland
Supposing there is private/secret data on a machine (e.g. an x86 box), the most reasonable thing to me would be to encrypt this data in order to protect it.
I understand that the purpose of secured, safe, untampered & secret data is only served when the key/passphrase is kept safe. I also know that, since software is necessary for the de-/encryption, I have to trust this software to some degree (i.e. the Linux kernel, GNU software, the LUKS framework for Linux-based disk encryption, etc.). Being mostly open source, an audit of the software mentioned above seems at least possible.
Still, what about the PC BIOS software? Most commonly it is a proprietary (closed-source) piece of code. Theoretically, and I think practically, there is a good chance that some malware in the PC BIOS could set up a keylogger --> passphrase is captured --> data safety is nil. (Right?)
Also, I have just started hearing (with some shock) about the x86 feature called System Management Mode (SMM), which it seems can be exploited in a blue-pill fashion by a rootkit. Again this could do arbitrary mischief and of course seems to defeat my encryption by logging the keys.
Even worse, I wonder about the option ROM feature of the BIOS. As I understand it, the PC BIOS will load code from a device (like a network card chip) and execute(!!!) it. So at worst a Chinese/American/German (name any other untrusted country/organisation of your choice) manufacturer could ship devices that act as a kind of computer virus (using this option ROM feature).
With all that, I really wonder about the value of disk encryption if there is so much unknown and danger before the (maybe trusted, because open source) Linux kernel even runs.
Honestly, if I were to create a rootkit, I would also seek to put (or anchor) it somewhere in the TPM, PC BIOS, option ROMs, kernel, etc. Only for the kernel is there some trust.
I would be very happy to hear your thoughts.
Good questions like:
give insight mostly into how to keep the PC BIOS free/safe from malware, but my paranoia is also about not knowing what the BIOS does (closed source) in the first place.
Update

Regarding the threat of changing/tampering with the BIOS code: don't the LinuxBIOS/coreboot (www.coreboot.org) projects show both that it is possible to replace the BIOS with one's own (possibly malicious) code, and that one can do so on purpose for security reasons? After all, a self-written or, second best, open-source BIOS seems doable, since what a BIOS does is not "the world" (just some initialization work). So I still fear that, given the hardware documentation or some reverse engineering, it is doable to create a malware BIOS and deploy it (see the flashrom.org tool).
Update: Another source showing the problem of proprietary firmware/hardware is http://arstechnica.com/security/2015/02/how-omnipotent-hackers-tied-to-the-nsa-hid-for-14-years-and-were-found-at-last/
Even if changing/tampering with the BIOS would be difficult, what about the option ROM, which might trigger "untrusted code" to be executed from the BIOS? – humanityANDpeace Oct 1 '12 at 9:46
2 Answers
Your question essentially boils down to risk management. The simplest way to analyse a particular risk is to see where it falls in terms of the following three categories:
• Probability - What's the chance that this risk will become a problem?
• Impact - If a problem occurs due to this risk, how much of a problem is it?
• Cost - How much will it cost to reduce this risk probability or impact to an acceptable level?
Note that cost is not just monetary - it includes time, effort, convenience, etc.
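As a toy illustration of combining these three axes (my own sketch, not part of any formal methodology; the risks listed and their 1-5 ratings are made-up examples, not data from the question):

```python
# Toy risk register: each entry is (probability, impact, mitigation_cost),
# rated 1 (low) to 5 (high). The entries are illustrative assumptions.
RISKS = {
    "malicious BIOS/TPM implant": (1, 5, 5),
    "hardware keylogger on USB/PS2 port": (3, 5, 2),
    "laptop left in a taxi": (4, 4, 1),
}

def exposure(probability, impact):
    # A common simple score: likelihood times consequence.
    return probability * impact

# Prioritise highest exposure first; break ties by cheapest mitigation.
ranked = sorted(
    RISKS.items(),
    key=lambda item: (-exposure(item[1][0], item[1][1]), item[1][2]),
)
for name, (p, i, c) in ranked:
    print(f"{name}: exposure={exposure(p, i)}, mitigation cost={c}")
```

On these made-up numbers, the mundane risks (theft, keyloggers) rank well above the exotic firmware implant, which is exactly the point being made below.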
Most risks cannot ever be fully mitigated. The goal is to find an acceptable balance between probability, impact and cost. Now let's look specifically at your issue:
In reality, the probability of an attacker installing fake BIOS and TPM chips is tiny. Anecdotally, I can't quote a single case in the history of infosec where a real attacker has done this. As far as I'm aware, it's only ever been demonstrated in lab conditions. Your real risk is having a hardware keylogger device installed between the keyboard and USB or PS/2 port. How likely is it that an attacker will have physical access? The attacker has a risk model too - are they likely to risk of being caught for the sake of what you're protecting? Be realistic here - are you protecting a photo sharing web service used by 20 people, or a bank with tens of thousands of customers? I'd argue that the probability of BIOS / TPM attacks is minuscule, but the probability of hardware keyloggers is low to medium, depending on your storage and access circumstances.
What do you stand to lose? What's the absolute maximum fiscal loss, including projected sales losses due to loss of confidence? How much are you likely to have to pay in overtime when your staff are cleaning up after a breach? Might you be sued or fined for breach of data protection laws, such as DPA 1998? From your projected loss model, how much has it cost other companies in similar circumstances? Remember that your loss risk should be relative to business operations as a whole. Think in terms of assets - what do you consider to be important? Is it your source code? Is it your financial records? Your product design documents? What at-risk information are you most concerned about losing? I'd guess you're looking at high impact here, since keyloggers would allow an attacker to gain almost uninhibited access to your network.
How much will it cost you to bring the risk or impact down to an acceptable level? Note that mitigation methods may not have to be implemented across every machine. Can proper access controls and auditing be used? Is your physical security strong enough?
A few potential ways to reduce risk of tampering:
• Don't give staff access to areas they don't need to be! If you use RFID key fobs for keyless entry, don't allow the CEO's PA to use theirs to get into the server room.
• Locks on computer cases, to prevent internal tampering.
• Place computer towers inside locked cabinets, to make it difficult to interfere with keyboard cables.
• Remove or disable USB slots on servers and computers that don't need them.
• Laptops kept in a locked cabinet when not in use.
• Use Kensington locks on laptops and other sensitive equipment.
• Use security tape to seal servers and other high-risk systems.
• Use high quality locks. This one scares me more than anything else - I recently learnt to pick them and the cheap ones are way too easy to break. A decent Yale lock will keep a skilled intruder busy for 3-4 minutes. Cheap non-brand ones can take less than 15 seconds. Maglocks alone aren't sufficient, and can be defeated with a home-made electromagnet, a small circuit, and a laptop battery.
• Have your alarms and CCTV systems tested frequently, and prioritise the placement of security sensors (e.g. PIRs, cameras, magnetic entry sensors, etc) in high risk areas.
All in all, you could probably bring this risk down to an acceptable level by implementing 4 or 5 of those ideas.
Regarding key loggers at banks, the big one was the Sumitomo bank attack, that very nearly succeeded: news.bbc.co.uk/1/hi/uk/7909595.stm – Rory Alsop Oct 1 '12 at 8:27
I really appreciate the elaborate explanations and information regarding the aspects of risk management. This has been done really well, thank you. Still, my doubts and concerns are that encryption is supposed to provide some level of safety. Indeed I fear that all safety measures taken (kernel safety) would mean little when an attacker can act before the kernel does, hence the BIOS. I am not convinced by the "only in lab conditions" remark. Arguably such a BIOS tamper attack seems hard to implement. But once in place it is a remote and not a local attack. Maybe worth the effort for some? – humanityANDpeace Oct 1 '12 at 9:21
@humanityANDpeace I have some experience with hardware tampering and reverse engineering, and can tell you that desoldering modern chips is a nightmare if it's anything but a DIP or SOIC case, especially on multi-layer boards. I have to get my SOIC desoldering done professionally, with expensive desoldering equipment. BIOS chips are usually TQFP cases, and TPMs are often BGA, which are almost impossible to desolder without destroying the pads or board, even with the proper equipment in a lab. – Polynomial Oct 1 '12 at 9:29
@Polynomial I absolutely trust your opinion on this point of changing hardware / dissecting chips. Might there nonetheless be enough "room" to modify the BIOS code by simply flashing the code stored in the BIOS ROM? Additionally, we need not go so far. If the source of the BIOS is not open, how can we be sure there is not some kind of backdoor in the BIOS right from the manufacturer? This is mere conjecture and yes, sorry, paranoia, but conceivable, right? If I were a reputable TLA I would make this my backdoor to IT hardware in some way. Malware in user or kernel space is nothing compared :) – humanityANDpeace Oct 1 '12 at 9:38
@humanityANDpeace The BIOS standard requires them to implement a checksum that allows only authorised vendor code to be flashed to it. This is usually implemented by a custom hash function in the chip's inbuilt ROM, which cannot be altered. It's possible to de-cap a BIOS chip and use an electron microscope to reverse engineer the algorithm and key constant, but the equipment is expensive and the expertise required is uncommon. The more modern chips use public key cryptography to authenticate the update payloads, so even decapping is useless. – Polynomial Oct 1 '12 at 9:45
I'm going to come right out and say that you are right in that your BIOS could be compromised; in fact, your entire computer could be designed with the express purpose of siphoning up every single piece of information on it. Additionally, there's absolutely no way you could ever be sure otherwise. There's no tool set or method that will ever give you peace of mind here.
The fact is that it's extremely unlikely that, even if your hardware could be taken over, someone with that capability would want to do it to you. Subverting a BIOS and making it work alongside software-based threats is difficult, very difficult in fact, and unnecessary, as pure software-based malware that uses unpatched vulnerabilities in the OS or loaded software will do just as well and is much easier.
The chances are that your biggest threat is from data loss due to theft or misplacement of your device, and that's what encryption is there to protect you against. It won't protect your data from being siphoned off by malware while the machine is on and the data unencrypted, but it will if you leave your laptop in a taxi.
I thank you for your opinion. The first paragraph seems to confirm my worries regarding the topic. Anyhow, drawing some insights also from the excellent answer of Polynomial, the thing maybe is: not to be 100% sure, but to be 10% surer. If one had an open-source BIOS, as I understand it, this would help. – humanityANDpeace Oct 1 '12 at 9:42
"Data loss". Sometimes not having the data anymore is bad. Sometimes the fact that another person has the data is even worse. Backups will mitigate the chances of data loss due to misplacement or theft of the device. The SMM, BIOS, rootkit thing is worse to me, as the data is actually not even lost, but given to somebody. – humanityANDpeace Oct 1 '12 at 9:43
Not really @humanityANDpeace, if the hardware is designed to collect and report on information an open-source bios isn't going to help much. – GdD Oct 1 '12 at 9:44
thank you. So you consider hardware a viable avenue for espionage, etc.? It makes me wonder: if hardware is to this extent "IT-security-dangerous", where do TLAs and other organisations buy their fancy equipment? – humanityANDpeace Oct 1 '12 at 9:52
Viable yes, worthwhile no from an espionage perspective, at least on PC hardware. It's a great deal of risk for the manufacturer, for one, as if it was ever found out they'd lose their business, and the logistics are difficult. Network equipment is a different story: high-end routers concentrate a great deal of data and have a better risk-reward proposition for espionage. – GdD Oct 1 '12 at 9:58
The encryption app that we are using seems to generate the same output for the same input. That is bad, right? I'm not smart enough to understand the scheme being used, though.
The header of each encrypted file has a string called "hexIdentifier", which is apparently how the app stores the password (hash?) for the encrypted files it generates. If we use the same password for different files, then the same hexIdentifier string appears. Does that mean that the password is stored in the file in a trivial fashion that would be easy to reverse?
App is here: http://www.koingosw.com/products/dataguardian.php
2 Answers
It sounds like this "hexidentifier" is a simple hash of the password, and it's at least a weakness that you can determine if different files were encrypted with the same password. That's not necessarily fatal. Most encryption schemes include a "salt" which is effectively part of the password, to prevent the same file from being generated. That's also a weakness.
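To make the linkability problem concrete, here is a small sketch (my own illustration using the Python standard library's `hashlib` and `os`, not anything the app actually does) contrasting an unsalted identifier with a salted one:

```python
import hashlib
import os

def naive_identifier(password: bytes) -> str:
    # Unsalted hash: the same password always yields the same identifier,
    # so an observer can tell which files were encrypted with the same password.
    return hashlib.sha256(password).hexdigest()

def salted_identifier(password: bytes):
    # Per-file random salt: identical passwords give unrelated identifiers.
    # The salt is stored alongside the digest; it does not need to be secret.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password).hexdigest()
    return salt.hex(), digest

# Same password used for two "files":
print(naive_identifier(b"hunter2") == naive_identifier(b"hunter2"))    # True: linkable
print(salted_identifier(b"hunter2") == salted_identifier(b"hunter2"))  # False: unlinkable
```

The second comparison is `False` with overwhelming probability, since each call draws a fresh 16-byte random salt.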
In general, ad hoc encryption systems developed by well-meaning amateurs (like me) are plagued by these kinds of systemic weaknesses. Claiming "1zillion bit blowfish" is blowing smoke. On the other hand, unless you're storing national secrets, your encrypted data is never going to be subjected to the kind of professional attack that could exploit these weaknesses.
Just to clarify, if the same password is used we get the same 'HexIdentifier" regardless of what data is being encrypted. – bpqaoozhoohjfpn Oct 2 '12 at 20:23
yes, so if they could trick you into revealing the password for one file, a lot of other files could be decrypted too (that's your fault for using the same password), but they would know which files (that's the encryption system's fault) – ddyer Oct 2 '12 at 20:37
In my limited experience I have never seen an app hash the same password to the same value. Isn't that encryption 101? – bpqaoozhoohjfpn Oct 2 '12 at 20:50
see my previous comment about "well meaning amateurs" – ddyer Oct 2 '12 at 20:52
I don't see how having a salt is a weakness.. – Brendan Long Oct 2 '12 at 21:03
Blowfish... I feel younger. This is an algorithm from more than 15 years ago, which is not bad in itself, but the author of Blowfish itself (Bruce Schneier) proposed an enhanced version called Twofish, back in 1998, for the AES competition. That the product claims to be "the ultimate database solution" but neglects the last 15 years of science and technology, including the AES, is not a good sign.
If using the same password for distinct files yields the same "identifier", but a different password produces a different identifier, then this "identifier" can be used to attack the password with cost sharing -- in other words, a rainbow table. An attacker could go through the expense of building the table once, because he could then apply it to every instance of a file coming from that product. This is not a bad sign, this is a worse sign.
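Here is a sketch of what resisting that cost-sharing looks like (my own illustration, not the product's code): derive the stored identifier from the password with a per-file random salt and an iterated hash such as PBKDF2, so a table precomputed for one file's salt is worthless against every other file.

```python
import hashlib
import os

def stored_identifier(password: bytes, salt: bytes) -> bytes:
    # Salted, iterated derivation: an attacker must redo the full
    # (deliberately slow) computation per candidate password *per salt*,
    # so a rainbow table cannot be amortised across files or users.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

salt_a, salt_b = os.urandom(16), os.urandom(16)
tag_a = stored_identifier(b"correct horse", salt_a)
tag_b = stored_identifier(b"correct horse", salt_b)
print(tag_a != tag_b)  # True: same password, unrelated stored identifiers
```

The iteration count (100,000 here) is an illustrative choice; the point is that each guess costs the attacker the same slow computation it costs the defender once.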
None of this means that the password could be trivially extracted from the files (none of this excludes this possibility either), but it is sufficient to recommend not using this product.