text | id | edu_int_score | edu_score | fasttext_score | language | language_score | url
---|---|---|---|---|---|---|---
stringlengths 116-653k | stringlengths 47-47 | int64 2-5 | float64 1.5-5.03 | float64 0.02-1 | stringclasses 1 (en) | float64 0.65-1 | stringlengths 14-3.22k
Pollard p-1 Factorization Method
A Prime Factorization Algorithm which can be implemented in a single-step or double-step form. In the single-step version, a prime factor $p$ of $n$ is found whenever $p-1$ is a product of small Primes, by computing an $m$ such that
m \equiv c^q \pmod{n},
where $q$ is a large number chosen so that $p-1\vert q$, and $(c,n)=1$. Then, since $p-1\vert q$, Fermat's little theorem gives $m\equiv 1\pmod{p}$, so $p\vert m-1$. There is therefore a good chance that $n\nmid m-1$, in which case $\mathop{\rm GCD}\nolimits(m-1,n)$ (where GCD is the Greatest Common Divisor) will be a nontrivial divisor of $n$.
In the double-step version, a prime factor $p$ can be found even when $p-1$ is a product of small Primes and a single larger Prime.
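As an illustration only (not part of the original entry), the single-step version can be sketched in a few lines of JavaScript with BigInt arithmetic. Here $q = B!$ serves as the exponent divisible by $p-1$, and the bound B = 5, base c = 2, and example n = 299 = 13·23 are arbitrary choices.

function gcd(a, b) {
  while (b) { [a, b] = [b, a % b]; }
  return a;
}

function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Single-step Pollard p-1: compute m = c^(B!) mod n, then take GCD(m-1, n).
// If p-1 divides B! for some prime factor p of n (but not for every factor),
// the GCD is a nontrivial divisor of n.
function pollardPMinus1(n, B = 100n, c = 2n) {
  let m = c % n;
  for (let k = 2n; k <= B; k++) {
    m = modPow(m, k, n);        // after the loop, m = c^(B!) mod n
  }
  const d = gcd((m - 1n + n) % n, n);
  return (d > 1n && d < n) ? d : null;   // null: no factor found for this B and c
}

console.log(pollardPMinus1(299n, 5n));   // 13n, since 13 - 1 = 12 divides 5! = 120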
See also Prime Factorization Algorithms, Williams p+1 Factorization Method
Bressoud, D. M. Factorization and Prime Testing. New York: Springer-Verlag, pp. 67-69, 1989.
Pollard, J. M. ``Theorems on Factorization and Primality Testing.'' Proc. Cambridge Phil. Soc. 76, 521-528, 1974.
© 1996-9 Eric W. Weisstein | <urn:uuid:ab5053ed-55e1-43f5-91a1-592f1888cabc> | 3 | 3.046875 | 0.028506 | en | 0.672152 | http://archive.lib.msu.edu/crcmath/math/math/p/p439.htm |
Mormon persecution
Re "Mormon apostle Packer warns against 'tolerance trap'" (Tribune, April 7):
In the 19th century, Mormons pleaded with the rest of the nation to show them tolerance for their divergent, non-traditional marriages. But self-righteous Victorian America could not "tolerate legalized acts of immorality," what Mormons called celestial marriage, and so the nation mercilessly beat the Mormons into submission.
Today, another persecuted minority is begging for tolerance for their choice of whom to love, but Mormons see it as an "act of immorality" and any tolerance of it as a "vice." Incredibly, the persecuted have become the persecutors.
It's more than ironic, it's hypocritical, that Mormons won't grant the same tolerance that they once begged for.
There was love and goodness in Mormon polygamy, and there's love and goodness in same-sex coupling.
No one's asking Mormons to perform gay marriages in their chapels and temples, only to allow those who want to perform them to do so in theirs. It is highly arrogant for Mormons to not "tolerate legalized acts of immorality," as they see it, in someone else's church. Keep to your own affairs.
Brian Barber
Salt Lake City | <urn:uuid:6d082a35-2b29-4819-bad2-bb7b65196bdf> | 2 | 1.726563 | 0.364507 | en | 0.963667 | http://archive.sltrib.com/printfriendly.php?id=56129987&itype=cmsid |
Muslims demand apology from pope
September 16, 2006|By Anthony Shadid, The Washington Post
The remarks in question came as the pope began his lecture at Germany's University of Regensburg on Tuesday by quoting from a 14th Century dialogue between Byzantine Emperor Manuel II Paleologos and a Persian scholar. In a passage on the concept of holy war, Benedict recited a passage of what he called "startling brusqueness."
The pope neither endorsed nor denounced the emperor's words but rather used them as a preface to a discussion of faith and reason.
But the reaction was quick.
Pakistan's parliament adopted a resolution Friday condemning the pope and seeking an apology. The Foreign Ministry summoned the Vatican's ambassador to express regret over Benedict's remarks.
"He is a poor thing that has not benefited from the spirit of reform in the Christian world," Salih Kapusuz said. "It looks like an effort to revive the mentality of the Crusades."
News agencies reported that a political party led a demonstration outside the largest mosque in the capital, Ankara, and about 50 people placed a black wreath outside the Vatican's diplomatic mission.
About 100 people protested in Egypt, where demonstrators chanted, "Oh, Crusaders, oh, cowards! Down with the pope!"
| <urn:uuid:8c475d8c-1437-4b89-9d56-4ddd945f9fdc> | 2 | 1.984375 | 0.038309 | en | 0.951385 | http://articles.chicagotribune.com/2006-09-16/news/0609160077_1_muslim-brotherhood-shiite-muslim-vatican-radio |
Cosmic rays on the sky – where do they come from?
The Earth is constantly reached by highly energetic nuclei from our Galaxy and beyond that we call "cosmic rays". When these nuclei, mostly protons, interact with our atmosphere, they produce showers of particles that can be detected by balloon experiments or by experiments on the ground. The origin of these cosmic rays is not well understood. They span such a large range of energies (from 10^8 eV to 10^20 eV, roughly), that it is hard to think that they could have a common origin. The lower energy cosmic rays (below ~ 10^17 eV) are thought to arise from the remnants of supernova explosions, while the more energetic ones are suspected to come from active galactic nuclei, gamma-ray bursts and quasars in other galaxies.
In general, it is hard to pinpoint the direction of the sky from which the cosmic ray is coming. The typical distance (or gyroradius) that a cosmic ray can travel before changing its direction due to inhomogeneities in the magnetic field of our Galaxy is 1 light-day. Any source of cosmic rays that we can think of (like supernova remnants) is much farther away. For example, the Vela supernova remnant is 800 light years away. Hence, the initial direction of the cosmic rays should be washed out before they reach us. However, scientists are puzzled: several experiments have reported an excess of TeV (10^12 eV) cosmic rays coming from certain directions in the sky.
The High-Altitude Water Cherenkov Observatory (HAWC) is an experiment under construction near Sierra Negra, Mexico. It was originally designed to detect gamma-rays. However, highly energetic cosmic rays are also detected by the experiment. When a cosmic ray reaches the atmosphere, its shower of secondary particles produces Cherenkov light as the particles traverse HAWC's water tanks, and it is this light that is detected. With this information, the direction of the cosmic ray can be inferred to within 1.2 degrees. On the one hand, Cherenkov light from cosmic ray showers is a nuisance to the gamma-ray observations that are the main aim of HAWC, but it also constitutes an interesting measurement on its own. After roughly one year of gathering data, HAWC has measured variations in the cosmic ray intensity across the sky at the level of 0.0001.
Figure 1. The TeV cosmic ray sky as seen by HAWC. Large scale variations (on scales > 60 degrees), which are sensitive to incomplete sky coverage, have been subtracted from this map. The three excess regions are identified in the map, and these coincide with those found previously by other experiments. Figure 5 of Abeysekara et al.
The HAWC team has found an excess of cosmic rays coming from three different regions of the sky, as shown in Figure 1 above. All of these regions had previously been identified by other experiments (the Milagro experiment and ARGO-YBJ), and one of these regions is now detected more clearly in the HAWC data, confirming the previous results. The colors in the map indicate the significance level: a comparison of the level of detection of each feature to the noise in the measurement. The authors also explore the energy spectrum of the cosmic rays coming from Region A, the most significant region detected, and they find them to be more energetic than those that come from the whole sky, on average.
The team has also computed the power spectrum of the cosmic ray intensity. This is a function that tells us the relative abundance of intensity variations of a given scale in the map (commonly used in cosmology), and it is shown below in Figure 2. The blue points give the power spectrum of the whole map, while the red points correspond to a version of the map where the largest scale variations have been subtracted. The gray bands indicate the expected result if the cosmic rays came from random directions in the sky. Both the detection of the three excess regions and the structure in this plot can help elucidate the structure of the magnetic field in the neighborhood of the Earth, the physics of how cosmic rays propagate throughout the interstellar medium and the locations of Galactic sources of cosmic rays.
Figure 2. Power spectrum of cosmic ray intensity map from the HAWC measurements. The blue points correspond to the power spectrum of the map of the whole sky, while for the red points, the largest scales variations have been subtracted. The authors are most interested in the red power spectrum in this work, which shows variations in the intensity of cosmic rays across the sky on scales smaller than 60 degrees. Figure 8 of the Abeysekara et al.
About Elisa Chisari
1 Comment
1. Hi Elisa…. let me see if I understand correctly. This paper deals with the issue of where rays come from, as the authors find the areas in the sky where there is an excess of cosmic rays. Therefore, as you say, the paper makes a contribution on the locations of Galactic sources of cosmic rays…. However, due to the changes in the direction of rays throughout the space, the concentration in certain areas says very little about where they may be coming from…. Am I correct?? Thanks… your articles are so interesting!!!
| <urn:uuid:6d00c328-c76b-4e34-af3d-b18d5623b2ba> | 4 | 4.0625 | 0.019788 | en | 0.934383 | http://astrobites.org/2014/08/25/cosmic-rays-on-the-sky-where-do-they-come-from/ |
Giving to Diabetes Charities: Where Does the Money Go?
A couple of weeks ago, Bloomberg Magazine ran an article about some truly sketchy fundraising efforts being supported by some of America’s major charities, including the American Diabetes Association. I highly recommend reading the whole piece, which is simultaneously fascinating and disturbing — but here’s the gist: charities such as ADA often hire telemarketing firms, in this case InfoCision, to recruit volunteers by phone to send out fundraising letters to family and friends to raise money for the charity. Not only are InfoCision’s callers often quite aggressive (and inaccurately refer to themselves as volunteers, rather than paid employees), but they actively lie about one important point: very little of the money goes to the actual non-profit. How little? Allow me to quote from the article:
“According to documents obtained through an open records request with the Ohio attorney general, the Diabetes Association approved a script for InfoCision telemarketers in 2010 that includes the following line: ‘Overall, about 75 percent of every dollar received goes directly to serving people with diabetes and their families, through programs and research.’
“Yet that same year, InfoCision’s contract with the association estimated that the charity would keep just 15 percent of the funds the company raised; the rest would go to InfoCision.”
Yes, you read that correctly: fifteen percent. What’s more, the ADA defended itself against the idea that this was sketchy by using the sort of semantic justification that I’m more accustomed to hearing from politicians. As the article reports, “Association Vice President Erb offers no apologies for the script, saying the association runs many fundraising campaigns and, overall, [my italics] about 75 percent of the money goes to its programs. He acknowledges that the contract with InfoCision estimated that the telemarketer would get to keep 85 percent of the funds it raised.”
In other words, as long as the telemarketer included the “overall,” then he or she was technically telling the truth. To his credit, Erb reportedly wasn’t pleased to hear that some of the recruited volunteers weren’t happy when they found out the truth behind the numbers — but he didn’t exactly exonerate himself.
“’Obviously, if people feel betrayed or that we’re not being honest with them, it doesn’t make me feel well,'” he told Bloomberg. “’But the thing is, we’re a business. There has never been a time or a place where we said, ‘Most of this money is coming to us.'”
I’m sorry, what? First, you’re a business? Tell that to the IRS. And second, the ADA and other charities (the American Cancer Society is another one mentioned frequently in the piece) frequently claim that most of the money is going to them. How often have you received a fundraising letter claiming that 70 to 85 percent of your money is going straight to research? And what about all those charity report cards that rank non-profits by how cost-efficient they are?
Well, thankfully, that part is likely true. When charities raise money for themselves, via their own phone calls or fundraising letters, a much higher percentage of the money goes directly to them; the crazy numbers are when telemarketers are involved. The article is not suggesting that you scrap your charitable donations altogether.
So, if there’s such a risk to their reputation (and if it’s so not profitable), why do charities invest in this type of service?
Basically, it’s to get potential donors’ names for future donations — by outsourcing the initial recruitment, non-profits are hoping to identify a pool of willing volunteers and donors that they can call upon down the line. And in one sense, that makes sense — signing people up is really hard work, and requires a lot of time and effort that the charity itself might not have the manpower to handle. Why not hire a separate company to do that initial work for you?
The problem, as I see it, is that these telemarketing companies lie. And, by hiring them and approving the scripts, the charities become liars as well. A senior vice president at the American Cancer Society defended the practice by saying that it makes sense to invest in some outreach efforts that don’t immediately bring in money, since their goal is to “engage people in long-term, meaningful relationships.”
To which I say, has this man ever been in a relationship? How many people would really make a second donation, or volunteer more of their time, to an organization that has lied to them? Much like a marriage, charitable giving relies on trust. Break that trust, and it’s very hard to earn it back.
Here’s how one woman, recruited by the ADA to send out fundraising letters, reacted to the news of how little of the money actually went to the charity:
“‘It’s like a betrayal,’” Patterson [said], sitting in her kitchen in June, after being shown copies of the North Carolina report and the contract the association signed with InfoCision. “’I know I won’t donate again. It’s like they stabbed you in the back. It’s terribly wrong.’”
There are other sketchy things as well. For example, the contract with InfoCision includes a clause, which apparently charities don’t always read or understand, that allows InfoCision to rent out the list of donors to other charities if InfoCision isn’t fully paid for its contract. Translation: your name, phone number and address could be given out to other non-profits in order to punish the original non-profit for not paying its bills. Um, that’s a little f’d up, no?
InfoCision has many other big-name clients, including the American Lung Association, the ASPCA, March of Dimes Foundation and National Multiple Sclerosis Society. According to Bloomberg, InfoCision “brought in a total of $424.5 million for more than 30 nonprofits from 2007 to 2010, keeping $220.6 million, or 52 percent, according to state-filed records.” And even at its 15/85 percent breakdown, the ADA isn’t even the worst offender:
“In fiscal 2010, InfoCision gathered $5.3 million for the [American Cancer Society -- which hired them from 1999 through 2011]. Hundreds of thousands of volunteers took part, but none of that money — not one penny — went to fund cancer research or help patients, according to the society’s filing with the U.S. Internal Revenue Service and the state of Maine.
That last part about filings is apparently key, too, since it's very easy for non-profits to bury their telemarketing expenses in reports to the IRS, if they're reported at all. "The nonprofits have become adept at hiding the money they spend on telemarketing firms," says the Bloomberg piece. "An examination of hundreds of annual filings that nonprofits are required to submit to the IRS shows how charities can bury, and sometimes omit, their expenditures on telemarketing. . . . It's an InfoCision filing with North Carolina that reveals that the Diabetes Association got just 22 percent of the money raised nationally by volunteers recruited by the telemarketer in 2011. That figure isn't found in any public filing with the IRS."
So, there you have it: your daily moment of outrage, brought to you courtesy of one of the main charities that is supposed to be helping raise money to fight your disease. Of course, it would be silly to dismiss the entire ADA by virtue of this one questionable practice. But considering that the largest legal penalty so far was a $75,000, one-time settlement paid by InfoCision (less than one tenth of one percent of its revenue from charity fundraising from 2007-2010, says Bloomberg), it seems that it might make sense for some of the people involved with these charities to complain directly to the charities themselves. For by approving, and even encouraging, lies, these non-profits are risking losing the trust of the American public. Lose that trust, and you lose your donations. Lose your donations, and you end up hurting the future of the very group of people whom you’re supposed to be serving. In the case of the ADA, that means us.
As a side note: you should also beware of “chuggers” — short for charity muggers — those young people with clipboards who accost you as you walk down the street and try to get you to sign up for recurring donations. I had a friend in journalism school who did an expose on the companies who run those campaigns, and the financial breakdown was very similar: much of the initial money goes straight to the marketing company. If you want to donate to a charity, it’s best to do so directly. That way you can be more confident that your money is actually going where you think it should be.
Comments (2)
1. hmbalison at
Hi Catherine,
Great article. I’ve had issues about the ADA for a while regarding their stance about the glycemic index. Even when other diabetes associations in the UK and Australia were adopting the glycemic index, our ADA was still saying that 15g of potato chips was equal to 15g of brown rice in terms of the impact on blood sugar…Since then, I’ve sometimes questioned just exactly *who* the ADA represented. And your article doesn’t exactly give me faith that they are focused on the right things when it comes to fundraising. You’ve given me a lot to think about. I’d like to see the ADA address this honestly. It seems really skivvy….
2. Jennifer Jacobs
Jen at
Great read, Catherine. Another reason to get on the Do Not Call list! It makes me think about the ethics behind these mega charities in general. So much of it is business driven. Diseases are profitable. If they found a cure, how long would it take them to tell us?
| <urn:uuid:deff0f31-4c5a-47c5-bdc7-7f35904103d1> | 2 | 1.890625 | 0.021118 | en | 0.95643 | http://asweetlife.org/catherine/blogs/diabetes-advocacy/giving-to-diabetes-charities-where-does-the-money-go/30630/ |
Björnstjerne Björnson (1832–1910). A Happy Boy.
The Harvard Classics Shelf of Fiction. 1917.
Chapter VII
THE SCHOOLMASTER had gone on the right track when he advised the minister to put Eyvind’s fitness to the test. During the three weeks which elapsed before the confirmation he was with the boy every day. It is one thing for a young and tender soul to receive an impression, and another thing to retain it steadfastly. Many dark hours fell upon the boy before he learnt to take the measure of his future by better standards than those of vanity and display. Every now and then, in the very midst of his work, his pleasure in it would slip away from him. “To what end?” he would think, “what shall I gain?” and then a moment afterwards he would remember the schoolmaster’s words and his kindness; but he needed this human stand-by to help him up again every time he fell away from the sense of his higher duty. 1
During those days preparations were going on at Pladsen not only for the confirmation, but also for Eyvind’s departure to the Agricultural College, which was to take place the day after. The tailor and shoemaker were in the house, his mother was baking in the kitchen, his father was making a chest for him. There was a great deal of talk about how much he would cost them in two years; about his not being able to come home the first Christmas, perhaps not even the second; about the love he must feel for his parents who were willing to make such an effort for their child’s sake. Eyvind sat there like one who had put out to sea on his own account but had capsized and was now taken up by kindly people. 2
Such a feeling conduces to humility, and with that comes much besides. As the great day drew near, he ventured to call himself prepared and to look forward with trustful devotion. Every time the image of Marit tried to mingle in his thoughts he put it resolutely aside, but felt pain in doing so. He tried to practise doing this, but never grew stronger; on the contrary, it was the pain that grew. He was tired, therefore, the last evening when, after a long self-examination, he prayed that Our Lord might not put him to this test. 3
The schoolmaster came in as the evening wore on. They gathered in the sitting-room after they had all washed and tidied themselves, according to custom the evening before one is to go to communion. The mother was agitated, the father silent; parting lay beyond to-morrow’s ceremony, and it was uncertain when they would all sit together again. The schoolmaster took out the psalm-books, they had prayers and sang, and afterwards he said a little prayer just as the words occurred to him. 4
These four persons sat together until the evening grew very late and thought turned inwards upon itself; then they parted with the best wishes for the coming day and the compact it was to seal. Eyvind had to own as he lay down that never had he gone to bed so happy; and by that, as he now interpreted it, he meant: “Never have I lain down so submissive to God’s will and so happy in it.” Marit’s face at once came to haunt him again; and the last thing he was conscious of was lying there saying to himself: “Not quite happy, not quite,” and then answering: “Yes I am, quite,” and then again: “Not quite.”—“Yes, quite.”—“No, not quite.” 5
When he awoke, he immediately remembered the day, said his prayers and felt himself strong, as one does in the morning. 6
Since the summer, he had slept by himself in the loft; he now got up and put on his handsome new clothes carefully, for he had never had the like before. There was, in particular, a short jacket which he had to touch a great many times before he got used to it. He got a little mirror when he had put on his collar, and for the fourth time put on his coat. As he now saw his own delighted face, set in extraordinarily fair hair, smiling out at him from the glass, it struck him that this, again, was doubtless vanity. “Well, but people must be well-dressed and clean,” answered he, while he drew back from the mirror as though it were a sin to look in it. “Certainly, but not quite so happy about it.” “No, but Our Lord must surely be pleased that one should like to look nice.” “That may be, but He would like it better if you did so without being so much taken up about it.” 7
“That’s true, but you see it’s because everything is so new.” 8
“Yes, but then by degrees you must leave it off.” He found himself carrying on such self-examining dialogues in his own mind, now on one subject, now on another, in order that no sin should fall upon the day and stain it, but he knew, too, that more than that was needed. 9
When he came down, his parents were sitting full-dressed, waiting breakfast for him. He went and shook hands with them and thanked them for the clothes. 10
“May you have health to wear them.” 1 11
They seated themselves at table, said a silent grace, and ate. The mother cleared the table and brought in the provision-box in preparation for church. The father put on his coat, the mother pinned her kerchief, they took their hymnbooks, locked up the house and set off. When they got upon the upper road they found it thronged with church-going folk, driving and walking, with confirmation candidates amongst them, and in more than one group white-haired grandparents, determined to make this one last appearance. 12
It was an autumn day without sunshine—such as portends a change of weather. Clouds gathered and parted again, sometimes a great assemblage would break up into twenty smaller ones which rushed away bearing orders for a storm; but down on the earth it was as yet still, the leaves hung lifeless, not even quivering, the air was rather close; the people carried cloaks but did not use them. An unusually large crowd had assembled round the high-lying church, but the young people who were to be confirmed went straight in to be settled in their places before service began. Then it was that the schoolmaster, in blue clothes, tail-coat and knee-breeches, high boots, stiff collar, and his pipe sticking out of his tail-pocket, came down the church, nodded and smiled, slapped one on the shoulder, spoke a few words to another, reminding him to answer loud and clear, and so made his way over to the poor-box, where Eyvind stood answering all his friend Hans's questions with reference to his journey. 13
“Good morning, Eyvind; how fine we are to-day,”—he took him by the coat-collar as if he wanted to speak to him. “Listen; I think all’s well with you. I’ve just been speaking to the minister: you are to take your place, go up to Number One, and answer distinctly!” 14
Eyvind looked up at him astonished; the schoolmaster nodded, the boy moved a few steps, stopped, a few more steps and stopped again. “Yes, it’s really so, he has spoken for me to the minister;” and the boy went up quickly. 15
“You’re Number One after all, then?” someone whispered to him. 16
“Yes,” answered Eyvind, softly, but he still was not quite sure whether he dared take his place. 17
The marshalling was completed, the minister arrived, the bell rang, and the people came streaming in. Then Eyvind saw Marit of the Hill Farms standing just opposite him. She looked at him, too, but both were so impressed by the sacredness of the place that they dared not greet each other. He saw only that she was dazzlingly beautiful and was bareheaded; more than that he did not see. Eyvind who, for more than six months, had been nursing such great designs of standing opposite her, now that it had come to the point forgot both her and the place—forgot that he had ever thought of them. 18
When it was all over, kinsfolk and friends came to offer their congratulations; then his comrades came to bid him good-bye, as they had heard that he was to go away next day; and then came a lot of little ones with whom he had sledged on the hills and whom he had helped at school, and some even shed a tear or two at leave-taking. Last came the schoolmaster and shook hands silently with him and his parents and made a sign to go,—he would come with them. They four were together again, and this evening was to be the last. On the way there were many more who bade him good-bye and wished him luck, but they did not speak amongst themselves until they were sitting indoors at home. 19
The schoolmaster tried to keep up their courage; it was evident that now it had come to the point, they were all three dreading the long two years’ separation, seeing that hitherto they had not been parted for a single day; but none of them would own it. As the hours went on, the more heart-sick did Eyvind become; he had to go out at last to calm himself a little. 20
It was dusk now and there was a strange soughing in the wind; he stood on the doorstep and looked up. Then, from the edge of the rock he heard his own name softly called; it was no delusion, for it was twice repeated. He looked up and made out that a girl was sitting crouched amongst the trees and looking down. 21
“Who’s that?” he asked. 22
“I hear you are going away,” said she, softly, “so I had to come to you and say good-bye, as you would not come to me.” 23
“Why, is that you, Marit? I will come up to you.” 24
“No don’t do that, I have waited such a long time and that would make me have to wait still longer. Nobody knows where I am, and I must hurry home again.” 25
“It was kind of you to come,” said he. 26
“I couldn’t bear that you should go away like that, Eyvind; we have known each other since we were children.” 27
“Yes, we have.” 28
“And now we haven’t spoken to each other for six months.” 29
“No, we haven’t.” 30
“And we parted so strangely the last time.” 31
“Yes—I must really come up to you.” 32
“No, no, don’t do that! But tell me; you’re not angry with me, are you?” 33
“How can you think so, dear?” 34
“Good-bye then, Eyvind, and thank you for all our life together!” 35
“No, Marit——!” 36
“Yes, I must go now, they will miss me.” 37
“Marit, Marit!” 38
“No, I daren’t stop away any longer, Eyvind; good-bye!” 39
“Good-bye!’ 40
After that he moved as if in a dream, and answered at random when they spoke to him. They put it down to his going away and thought it only natural; and indeed that was what was in his mind when the schoolmaster took leave at night, and put something into his hand which he afterwards found to be a five-dollar note. 41
But later on, when he went to bed, it was not of his going away he was thinking, but of the words which had come down from the edge of the rock and of those which had gone up again. As a child she had not been allowed to come to the edge because her grandfather was afraid she might fall over. Perhaps she would one day come over all the same! 42
Note 1. A customary phrase. [back]
| <urn:uuid:044822f1-1518-4bc0-ae21-950e43d7f937> | 2 | 2 | 0.019495 | en | 0.989525 | http://bartleby.com/320/2/7.html |
The True Science of Elections?
For a couple of days, I was inclined to buy the theory that Obama won the election because his campaign was so "metric driven." Metric driven in this case seems to mean the Machiavellian science of manipulation of the voters. It's measurable knowledge of who's going to vote and why in key places. It's knowledge of how to turn out just enough voters in the battleground states to win. It's also, to some extent, knowledge of how to depress the turnout of the key groups supporting your opponent.
The science, this time, had to be pretty precise, because there was very little margin for error. The best the president could conceivably expect is a fairly narrow victory. And so it's not like he had total control of the behavior of voters. He thought he developed just enough of an effective method to be pretty sure he was going to win. The fact of that control was reflected in state polls. That's why those who followed the state polls pretty exactly predicted the election's outcome.
The best evidence for the science is that the president won every one of the battleground states with the exception of NC, which he was perfectly willing to semi-concede in advance.
On election morning, it seemed to me, he was pretty darn certain he was going to win all the battleground states except VA and FL. And he didn't need those two states to get reelected. His people were a little afraid of surprises, but not too much. They knew they had the most expert get-out the-vote effort ever, and the turnouts from key places met or slightly exceeded their expectations.
Romney, we now know, really thought he was going to win the election, because his polls were based on flawed turnout models. His groups—beginning with white males and Republicans—were over-represented in his polls.
Not only that, Romney's mobilization and turnout operation was a lot less elaborate and sophisticated than Obama's. And his commercials and so forth were fewer in number and less clever in pushing key buttons. He relied a lot more on volunteers he couldn't really supervise or control. I told a Romney operative a few weeks before the election that the Romney ground game seemed to stink by comparison to the president's. He said don't worry, we're counting on the enthusiasm of evangelical volunteer efforts. Obama wouldn't have left something so important to chance!
But maybe this story of Obama's "rational control"—which admittedly contains a lot of truth—is overhyped. It turns out that Romney's highly centralized computerized system full of the information on which his election-day GOTV effort depended just didn't work. Because of that crash, his 30K+ volunteers were left clueless. And of course election day was much bigger for Romney than Obama, because he had done a lot less to mobilize his guys as early voters. Here's the conclusion of an amazing account of the collapse of Project ORCA:
Go back and look at the result in OH. It's, I think, closer than Obama thought it would be. And the rural vote for Romney is lower than most polls expected. The similarly close VA and FL also show lower-than-expected turnouts by rural evangelicals and similar groups.
These facts, noticed, of course, by Republicans, have been attributed by some to a small but significant backlash against the Mormon. Others have claimed that it's the result of Obama's clever characterization of Romney as an out-of-touch plutocrat. But another theory is that the Romney people just weren't doing what anyone would normally expect in getting out the key vote. You can't tell me that the Obama people knew THAT was going to happen.
I'm not saying that an effective implementation of Project ORCA would have turned the election. It's pretty lame, after all, compared to the corresponding Obama operation. But you can't convince me it wouldn't have made the close states at least noticeably closer.
My point is not to show Romney didn't deserve to lose. It's merely to show that the outcome of elections is more subject to chance than those bragging about the new science of rational prediction think.
| <urn:uuid:cc49a8b8-0650-43d1-ab44-58c919668e2b> | 2 | 1.78125 | 0.258981 | en | 0.983304 | http://bigthink.com/rightly-understood/the-true-science-of-elections |
Ruby and the Art of Computer Programming
Recently, I started reading Knuth’s Art of Computer Science. To spice up the exercises, I am writing them out in Ruby. Thinking about the basic math of programming and how to implement it in a high level language like Ruby has been fun.
The evolution of programming languages has gone far since the book was written. We can use Ruby syntax to do multiple steps in an algorithm!
Also, this is an exercise in writing clean code. Some of the algorithms in the book are a bit hard to read in there pure math form. So, when I write them out in Ruby, I try to use as much expressiveness as possible without adding any clutter.
This is no easy task, and often times I have to walk away from an algorithm for awhile and come back to it, because I will have my head deep in the math and less in the code. Or vice versa.
I was showing one of my solutions to a colleague of mine, Doug Bradbury, who saw a better way in Ruby to solve the same problem I was with less lines of code and a higher readability.
So, I decided to share one of the problems, and in a few days, I will post my version of the solution. We can see different solutions in different languages and different styles. Go ahead and try it out.
Here is Euclid’s algorithm for the greatest common divisor, as written in the book:
E0 [Ensure m >= n.] If m < n, exchange m and n.
E1 [Find remainder.] Divide m by n and let r be the remainder.
E2 [Is it zero?] If r = 0, the algorithm terminates; n is the answer.
E3 [Reduce.] Set m <- n, n <- r, and go back to step E1.
And here is a solution written in Ruby:
def are_whole_numbers?(*numbers)
  numbers.each { |number| return false if number.to_i.to_f != number }
  return true
end

def euclid(m, n)
  raise "Must be whole numbers." unless are_whole_numbers?(m, n)
  return euclid(n, m) if m < n

  remainder = m % n

  return n if remainder == 0
  return euclid(n, remainder)
end

# Here are some examples
puts euclid(35.0, 40.0)   # should be 5.0
puts euclid(119.0, 544.0) # should be 17.0
puts euclid(555.0, 666.0) # should be 111.0
Paul Pagel, Chief Executive Officer
| <urn:uuid:12b69792-c30d-4e4f-9d8a-f9d8240ef27b> | 3 | 3.078125 | 0.515192 | en | 0.910749 | http://blog.8thlight.com/paul-pagel/2007/11/06/ruby-and-the-art-of-computer-programming.html |
Touch ID Fingerprint Scanner In iPhone 5S: Everything You Need To Know
Apple strengthens user protection in their new flagship smartphone meaning biometric identification might finally go mainstream. Is it good or bad, and what are the potential consequences?
First, we’ll try to calm all conspiracy theorists: it doesn’t seem that Apple introduced biometric ID just to please their NSA friends and collect the fingerprints of taxpayers for the feds. Apple stated that fingerprints are stored in a specially produced derived form (i.e. not photos) and always kept locally, never getting transmitted to the Net. In addition, fingerprints and Touch ID scanners are unavailable to third-party apps; only iOS can use it. So, what can be protected with all these restrictions?
Quite a bit can. Most obviously, it's much easier for legitimate owners to unlock their smartphones. All it takes is a simple Home button press, and an embedded capacitive sensor will instantly recognize the fingerprint, granting access to the person on the "white list." Unauthorized persons or owners in gloves will see a message saying it's impossible to recognize this fingerprint. In this case, they would have to type an alphanumerical backup password. In addition to people wearing gloves, the technology might fail in cold weather, when hands are wet or covered with lotion, scarred or burnt. That's why it's still important to memorize a password since it might come in handy quite often.
Owners will be obliged to pass a Touch ID check when approving iTunes or App Store purchases and in other situations, when iOS normally asks for a password. We suggest enrolling multiple fingers from both hands to increase convenience.
Of course it’s very interesting to wonder if new protection mechanisms are robust and secure enough. As we previously mentioned, biometric sensors are imperfect. To implement Touch ID, Apple bought Authentec, a specialized company with quite interesting biometric technology developments. The scanner reads not only dermal ridges, but sub-epidermal layers of the skin as well, which makes fingerprint forgery much more complicated. The new sensor probably has some vulnerabilities that will be discovered by curious hackers when 5S becomes mainstream. However, we have no information about such vulnerabilities or their mere existence at this point.
Update: Just two days after sales started, hackers from the Germany-based Chaos Computer Club published a blogpost regarding an easy and cheap 5S sensor hack. They claim that the iPhone fingerprint scanner is no different from previous fingerprint sensors; it just has a higher resolution. Thus it's very easy to pick up a fingerprint from any surface and forge it using latex.
It’s not easy to choose between familiar pin lock and novel fingerprint protection. Pin codes are easier to snoop and it takes more time to type them. Fingerprints are harder to forge and easier to use, but someone who desperately needs your data may just force you to touch your smartphone with the right finger. Of course, this scenario is more appropriate for a Hollywood action movie, not real life, but if you’re in possession of really valuable information you have to consider this and possibly avoid storing that information on your smartphone.
When talking about “ordinary people,” it seems they shouldn’t be afraid of Apple’s new technology for now. However, there is a speculation the next step for Apple will be an own payment system with biometrics serving as a primary authentication for purchase approval. In this case, fingerprint transmission over the network seems to be inevitable, and this gives hackers very good reason to develop an attack targeted at a mainstream audience. So if you’re worried about your fingerprints falling into wrong hands, re-consider using Apple biometrics when you hear about payment systems or any other ecosystem development, which might be based on extended fingerprints usage.
1. Elissa says:
Can one use the iPhone 5s without the touch ID feature? In other words can you ‘opt out’, not provide your fingerprint at all to use the phone?
1. Kaspersky Team says:
Hi Elissa,
Yes, you can choose to opt out of this feature within the phone’s settings. Please let us know if we can help you with anything else.
2. I think we need to get used to the idea that “Big Data” will soon have some biometric data on us. It’s not an easy thought, but biometrics are harder to use once hacked, versus traditional plain text passwords. This sensor in the home key might not work as flawless as thought, but its a step in the right direction.
3. Phil says:
So if my wife needs to use my phone while I am in the shower…can two people have their fingerprints in the system?
1. Brian Donohue says:
You sure can. In Apple’s words:
“Touch ID lets you enroll multiple fingerprints, it knows the people you trust, too.”
2. Kaspersky Team says:
Hi Phil,
You will be allowed to store up to 5 fingerprints on your phone. Please let us know if we can help you with anything else.
4. James McQ says:
Can you use the passcode AND biometrics to help secure the phone?
1. Kaspersky Team says:
Hi James,
Yes, you can have both options set up. If you do this, when you wake your phone up you will either be able to enter your password or provide your fingerprint to unlock your device.
5. “… fingerprint transmission over the network seems to be inevitable,….”
I’m not sure that’s a given. iTunes may simply use the fingerprint to authenticate and allow the unlocking of the datastore securing the iTunes password.
6. Megan says:
what if you dont have a fingerprint?
1. Kaspersky Team says:
Hi Megan,
You will still be able to use the traditional passcode security feature if you wish to opt out of the fingerprint scanner.
7. Radix says:
I know this is not a direct security comment. But it’s too bad that, along with the security aspect of the sensor, once logged in you can’t just use the sensor or at least the capacitive ring sensor portion to allow to be interpretation as a single home button press. In this way you do not have to press the home button all the way down, adding additional wear to a home button that seems to invariably fail after the first year.
8. kin says:
can I lock app like whatsapp using the fingerprint scanner?
1. Kaspersky Team says:
Hi Kin,
You can use Touch ID to unlock your phone and to purchase items in iTunes or the App store at this time.
9. madhu mohan says:
can I have only the biometric without the passcode?
1. Kaspersky Team says:
Hi Madhu,
You can lock your phone using just the Touch ID, however, if for some reason your phone is unable to recognize your fingerprint after a few attempts, you will be required to enter your passcode as backup to gain access to your phone.
10. Mike says:
I just bought iPhone 5s and he says it’s fake as he can’t find the fingerprint scanner. Is this possible? Does all iPhone 5s have fingerprint scanner? If yes, how can i enlighten him?
1. Serge Malenkovich says:
All original iPhones 5s have touch id sensor. The home button has its distinctive look without traditional square image on it.
11. Ray says:
Is it possible to completely opt out the fingerprint scanner.? I mean i don’t want to use it for any application( itune, screen unlock or whatever apps which employ security authentication) at all.
P.S: I want to buy iPhone 5 but right now its not available in market that’s why i have to go for iPhone 5s
1. Kaspersky Team says:
Hi Ray,
Yes you can choose not to use the ID Fingerprint security feature, but we do recommend having a passcode set up in that case. | <urn:uuid:9623730a-14c6-43e4-ae37-b2bcbf5d75c0> | 2 | 1.828125 | 0.05819 | en | 0.925708 | http://blog.kaspersky.com/fingerprint-scanner-iphone-5s/ |
By Ed Yong | September 20, 2012 2:00 pm
Christopher Kaelin and Xing Xu focused on the region that Eizirik had identified, using DNA samples taken from Californian feral cats that had been captured for sterilisation. By comparing mackerel and blotched individuals, the team found one gene that was responsible for the different markings. They dubbed it Taqpep. All blotched tabbies have one of two critical mutations in both their copies of Taqpep, while all mackerel cats have one or two unblemished versions.
Taqpep is also responsible for the king cheetah’s unmistakeable coat. Kaelin and Xu sequenced the gene in Kgosi, a captive king cheetah, and found another mutation in Taqpep, one that greatly enlarges the protein encoded by the gene.
Kaelin got in touch with Ann van Dyk, the woman who first identified that king cheetahs were a mutant version of the regular ones. She runs a cheetah conservation centre in South Africa that Kgosi, and all other captive kings, came from. By analysing all of her cheetahs, van Dyk confirmed that Kgosi's Taqpep mutation is found in all the kings, and in none of the 217 wild spotted cheetahs.
Taqpep is activated in cat skin, and not in other organs. But in both cheetahs and domestic tabbies, it’s active at low levels in both dark and light patches. So what is Taqpep actually doing, and if it isn’t directly changing the colour of the felines’ fur, then what is?
Another member of the team, Kelly McGowan, found an important clue. A tabby’s markings start appearing when the foetus is seven weeks old. At that point, the cradle-shaped organs known as follicles are fully developed, and they start to produce hairs. If a follicle produces lots of melanin – a dark pigment – it grows a dark hair. So why do some follicles churn out melanin, while others exercise more restraint?
To find out, Lewis Hong looked for genes whose activity varied between the yellow and black zones of a cheetah’s skin. His search yielded 60 genes that were more active in the black spots, and one of them stood out – Edn3.
Edn3 produces a hormone that fuels the growth and division of melanocytes – the cells that make melanin. It’s active in the follicles, and more so in black regions than yellow ones – something that Hong confirmed in cheetahs, tabbies, and even one leopard. When he artificially increased the activity of Edn3 in the skins of laboratory mice, their coats blackened.
Let’s put this all together. The team thinks that Taqpep somehow creates periodic zones in the skins of foetal cats. These zones determine how active Edn3 will later be, which in turn dictates the number of melanocytes in the follicles. More melanocytes means more melanin, which means dark hair. Note that the “zones” that Taqpep sets up are invisible. They only become apparent through their subsequent influence on Edn3. If Taqpep is disrupted by certain mutations, as in blotched tabbies and king cheetahs, it sets up wider and more erratic zones, leading to chaotic stains rather than even spots.
How can Taqpep create these zones if it’s evenly active throughout a cat’s skin? There are a few possibilities, but here’s the most intriguing one: Taqpep creates a protein called Tabulin that usually sits in the membranes of skin cells, but can detach and drift between them. So Taqpep might be active everywhere, but Tabulin can diffuse through the skin and become concentrated in certain areas.
This is really exciting, because diffusing molecules are central to a longstanding explanation for animal patterns. Back in 1952, Alan Turing, the legendary computer scientist and code-breaker, suggested that animal skins could produce beautiful complex patterns through a lively tango between two molecules – an activator and an inhibitor. Both diffuse throughout the skin, and react with each other. Over short distances, the activator reinforces itself, but over longer distances, the inhibitor blocks it. Depending on how quickly they spread and how strongly they interact, they can produce everything from spots to blotches (play around with this Java applet to see what I mean.)
Scientists have tested Turing’s “reaction-diffusion” ideas in many different animals, but it has been next to impossible to find the actual activators or inhibitors. Barsh thinks that Taqpep (or rather, Tabulin) may be one of them, and the team is now looking into it further.
Hopi Hoekstra, who studies the genetics of animal colour at Harvard University, is very impressed with the study. "It's a tour de force," she says. "We previously knew very little about how patterns are formed and maintained in mammals. This is largely because the workhorse of pigment genetics—the laboratory mouse—doesn't display stripes or spots." Kaelin's team got around that problem by studying the diversity of wild and domestic cats. Thanks to their work, we're getting closer to the real story about how the leopard (or cheetah) got its spots, or the tiger its stripes. Turing drafted the outline, and Barsh and O'Brien are filling in the details.
Reference: Kaelin, Xu, Hong, David, McGowan, Schmidt-Kuntzel, Roelke, Pino, Pontius, Cooper, Manuel, Swanson, Marker, Harper, van Dyk, Yue, Mullikin, Warren, Eizirik, Kos, O'Brien, Barsh & Menotti-Raymond. 2012. Specifying and Sustaining Pigmentation Patterns in Domestic and Wild Cats. Science
Image by Jurvetson
Comments (3)
1. IW
And I thought cheetahs looked that way because of all the speed they do!
2. Stu
Such an interesting topic and clever titling, but I did flinch at it, because, kittehs.
3. If you want more on that, look at Shigeru Kondo’s lab page
If I remember well a talk 2 years ago, his theory is that you have to split patterned animals in two categories:
– small animals like zebrafish where the patterns are dynamic during the animal life. There, the diffusion/inhibition mechanism does take place.
– large animals like mamals where the patterns are static, decided once and for all in the womb. Their patterns are too big to be caused by Turing mechanism.
However, the latter do behave like the former at the foetal stage. The foetus is small enough for Turing patterns to be created on the forming skin. Then the pattern is frozen as it is and the animal grows, dilating the existing pattern but not modifying its shape.
| <urn:uuid:f098d8e5-f075-449d-bf3f-b899dd6885f2> | 3 | 2.96875 | 0.022999 | en | 0.937649 | http://blogs.discovermagazine.com/notrocketscience/2012/09/20/tabby-cat-blotches-king-cheetah-stripes-spots-taqpep/ |
5.2. JavaScript and the DOM
In the application layer of Mozilla, there is little distinction between a web page and the graphical user interface. Mozilla's implementation of the DOM is fundamentally the same for both XUL and HTML. In both cases, state changes and events are propagated through various DOM calls, meaning that the UI itself is content -- not unlike that of a web page. In application development, where the difference between application "chrome" and rendered content is typically big, this uniformity is a significant step forward.
5.2.1. What Is the DOM?
The DOM is an API used to access HTML and XML documents. It does two things for web developers: provides a structural representation of the document and defines the way the structure should be accessed from script. In the Mozilla XPFE framework, this functionality allows you to manipulate the user interface as a structured group of nodes, create new UI and content, and remove elements as needed.
Because it is designed to access arbitrary HTML and XML, the DOM applies not only to XUL, but also to MathML, SVG, and other XML markup. By connecting web pages and XML documents to scripts or programming languages, the DOM is not a particular application, product, or proprietary ordering of web pages. Rather, it is an API -- an interface that vendors must implement if their products are to conform to the W3C DOM standard. Mozilla's commitment to standards ensures that its applications and tools do just that.
When you use JavaScript to create new elements in an HTML file or change the attributes of a XUL button, you access an object model in which these structures are organized. This model is the DOM for that document or data. The DOM provides a context for the scripting language to operate in. The specific context for web and XML documents -- the top-level window object, the elements that make up a web document, and the data stored in those elements as children -- is standardized in several different specifications, the most recent of which is the upcoming DOM Level 3 standard.
5.2.2. The DOM Standards and Mozilla
The DOM specifications are split into different levels overseen by the W3C. Each level provides its own features and Mozilla has varying, but nearly complete, levels of support for each. Currently, Mozilla's support for the DOM can be summarized as follows: essentially complete support for DOM Level 1 and Level 2, and partial support for the still-developing Level 3.
Mozilla strives to be standards-compliant, but typically reaches full support only when those standards have become recommendations rather than working drafts. Currently, Level 1 and Level 2 are recommendations and Level 3 is a working draft.
Standards like the DOM make Mozilla an especially attractive software development kit (SDK) for web developers. The same layout engine that renders web content also draws the GUI and pushes web development out of the web page into the application chrome. The DOM provides a consistent, unified interface for accessing all the documents you develop, making the content and chrome accessible for easy cross-platform development and deployment.
5.2.3. DOM Methods and Properties
Methods in the DOM allow you to access and manipulate any element in the user interface or in the content of a web page. Getting and setting attributes, creating elements, hiding elements, and appending children all involve direct manipulation of the DOM. The DOM mediates all interaction between scripts and the interface itself, so even when you do something as simple as changing an image when the user clicks a button, you use the DOM to register an event handler with the button and DOM attributes on the image element to change its source.
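For instance, a minimal sketch of that image-swapping case might look like the following; the ids, the file names, and the use of a XUL command event are illustrative assumptions, not code taken from the Mozilla source:

<button id="swap-button" label="Swap image" />
<image id="my-image" src="before.png" />

var button = document.getElementById('swap-button');
button.addEventListener('command', function () {
  // change the image by setting a DOM attribute on the image element
  var img = document.getElementById('my-image');
  img.setAttribute('src', 'after.png');
}, false);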
The DOM Level 1 and Level 2 Core specifications contain multiple interfaces, including Node, NodeList, Element, and Document. The following sections describe some interface methods used to manipulate the object model of application chrome, documents, or metadata in Mozilla. The Document and Element interfaces, in particular, contain useful methods for XUL developers.
getAttribute
Attributes are properties that are defined directly on an element. XUL elements have attributes such as disabled, height, style, orient, and label.
<box id="my-id" foo="hello 1" bar="hello 2" />
In the snippet above, the strings "my-id," "hello 1," and "hello 2" are values of the box element attributes. Note that Gecko does not enforce a set of attributes for XUL elements. XUL documents must be well-formed, but they are not validated against any particular XUL DTD or schema. This lack of enforcement means that attributes can be placed on elements ad hoc. Although this placement can be confusing, particularly when you look at the source code for the Mozilla browser itself, it can be very helpful when you create your own applications and want to track the data that interests you.
Once you have an object assigned to a variable, you can use the DOM method getAttribute to get a reference to any attribute in that object. The getAttribute method takes the name of the desired attribute as a string. For example, if you add an attribute called foo to a box element, you can access that attribute's value and assign it to a variable:
<box id="my-id" foo="this is the foo attribute" />
var boxEl = document.getElementById('my-id');
var foo = boxEl.getAttribute('foo');
dump(foo + "\n");
The dump method outputs the string "this is the foo attribute," which is the value of the attribute foo. You can also add or change existing attributes with the setAttribute DOM method.
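As a quick illustration of setAttribute (a sketch continuing the hypothetical box example above, not code from the book):
var boxEl = document.getElementById('my-id');
// setAttribute creates the attribute if it doesn't already exist, or overwrites its value.
boxEl.setAttribute('foo', 'a new value for foo');
dump(boxEl.getAttribute('foo') + "\n");
console output: a new value for foo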
getElementsByTagName
Another very useful method is getElementsByTagName. This method returns an array of elements of the specified type. The argument used is the string element type. "box," for example, could be used to obtain an array of all boxes in a document. The array is zero-based, so the elements start at 0 and end with the last occurrence of the element in the document. If you have three boxes in a document and want to reference each box, you can do it one at a time by id, as in the markup below:
<box id="box-one" />
<box id="box-two" />
<box id="box-three" />
Or you can get the array and index into it like this:
var box = document.getElementsByTagName('box');
box[0], the first object in the returned array, is a XUL box.
To see the number of boxes on a page, you can use the length property of an array:
var len = document.getElementsByTagName('box').length;
dump(len + "\n");
console output: 3
To output the id of the box:
<box id="box-one" />
<box id="box-two" />
<box id="box-three" />
var el = document.getElementsByTagName('box');
var tagId = el[0].id;
dump(tagId + "\n");
console output: box-one
To get to an attribute of the second box:
<box id="box-one" />
<box id="box-two" foo="some attribute for the second box" />
<box id="box-three" />
var el = document.getElementsByTagName('box');
var att = el[1].getAttribute('foo');
dump(att +"\n");
console output: some attribute for the second box
getElementsByTagName is a handy way to obtain DOM elements without using getElementById. Not all elements have id attributes, so other means of getting at the elements must be used occasionally.[1]
Getting an element object and its properties
In addition to a basic set of attributes, an element may have many properties. These properties don't typically appear in the markup for the element, so they can be harder to learn and remember. To see the properties of an element object node, however, you can use a JavaScript for...in loop to iterate through the list, as shown in Example 5-1.
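Example 5-1 itself is not reproduced here, but a minimal sketch of such a loop might look like this (the element id is assumed from the earlier snippets):
var el = document.getElementById('my-id');
for (var prop in el) {
  // Write each property name and its current value to the console.
  dump(prop + ": " + el[prop] + "\n");
}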
Note the implicit functionality in the el object itself: when you iterate over the object reference, you ask for all members of the class of which that object is an instance. This simple example "spells" the object out to the console. Since the DOM recognizes the window as another element (albeit the root element) in the Document Object Model, you can use a similar script in Example 5-2 to get the properties of the window itself.
The output in Example 5-2 is a small subset of all the DOM properties associated with a XUL window and the other XUL elements, but you can see all of them if you run the example. Analyzing output like this can familiarize you with the interfaces available from window and other DOM objects.
Retrieving elements by property
You can also use a DOM method to access elements with specific properties by using getElementsByAttribute. This method takes the name and value of the attribute as arguments and returns an array of nodes that contain these attribute values:
<checkbox id="box-one" />
<checkbox id="box-two" checked="true"/>
<checkbox id="box-three" checked="true"/>
var chcks = document.getElementsByAttribute("checked", "true");
var count = chcks.length;
dump(count + " items checked \n");
One interesting use of this method is to toggle the state of elements in an interface, as when you get all menu items whose disabled attribute is set to true and set them to false. In the xFly sample, you can add this functionality with a few simple updates. In the xfly.js file in the xFly package, add the function defined in Example 5-3.
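Example 5-3 itself is not reproduced here; a hypothetical sketch of such a function, written generically so it can flip any true/false attribute (checked, disabled, and so on), might look like this:
function toggleTrueFalseAttribute(attName) {
  // Snapshot both groups before changing anything, so that flipping
  // attributes doesn't disturb the collections while we loop over them.
  // Elements that don't specify the attribute at all are left alone.
  var wasTrue = [], wasFalse = [], i;
  var trueEls = document.getElementsByAttribute(attName, "true");
  var falseEls = document.getElementsByAttribute(attName, "false");
  for (i = 0; i < trueEls.length; i++)
    wasTrue.push(trueEls[i]);
  for (i = 0; i < falseEls.length; i++)
    wasFalse.push(falseEls[i]);
  for (i = 0; i < wasTrue.length; i++)
    wasTrue[i].setAttribute(attName, "false");
  for (i = 0; i < wasFalse.length; i++)
    wasFalse[i].setAttribute(attName, "true");
}
A menu item's oncommand handler could then call, say, toggleTrueFalseAttribute('checked'); the function name and generic parameter are assumptions for this sketch, not the book's own Example 5-3.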
Although this example doesn't update elements whose disabled attribute is not specified, you can call this function from a new menu item and have it update all menus whose checked state you do monitor, as shown in Example 5-4.
When you add this to the xFly application window (from Example 2-10, for example, above the basic vbox structure), you get an application menu bar with a menu item, Toggle, that reverses the checked state of the three items in the "Fly Types" menu, as seen in Figure 5-2.
The following section explains more about hooking scripts up to the interface. Needless to say, when you use a method like getElementsByAttribute that operates on all elements with a particular attribute value, you must be careful not to grab elements you didn't intend (like a button elsewhere in the application that gets disabled for other purposes).
You can use other DOM methods, but these methods are most commonly used in the XPFE. Mozilla's support for the DOM is so thorough that you can use the W3C specifications as a list of methods and properties available to you in the chrome and in the web content the browser displays. The full W3C activity pages, including links to the specifications implemented by Mozilla, can be found at http://www.w3.org/DOM/. | <urn:uuid:4572013e-219c-4f35-b2cd-37de38ac7709> | 4 | 3.625 | 0.203178 | en | 0.833165 | http://books.mozdev.org/html/mozilla-chp-5-sect-2.html |
Thursday, October 18, 2012
This one works
(With apologies to Tillerman)
It might be crazy but that's what makes it interesting. Pushing the edge of the ol' envelope on foiling carbon fibre wing sail big-cats, that's the bleeding edge technology for sailing in 2012.
And let's not forget it involves the likes of Jimmy Spithill, Ben Ainslie, Grant Dalton, Russell Coutts, and Tom Slingsby.
That's a long list of the world's top sailors.
O Docker said...
Did you notice how flat the water was in this video?
I haven't read any of the buzz on the Oracle crash from the smart people - just saw the video - but my take is this:
They capsized on a broad reach in 'normal' SF Bay sailing conditions - 25 knot winds and a 3-4 foot chop. One of the world's best helmsmen was driving, with lots of experience in big racing cats. The conditions - not structural failure - were enough to pretty much destroy the boat. So, is the AC72 something so ungainly that it can't be sailed?
Or have they built the world's most complex and expensive inland lakes boat?
As Tillerman implied (or did I infer?), even a Laser is designed to survive capsizes.
(Sometimes I think that's all it was designed to do.)
JP said...
What struck me was that just before the crash Oracle had really rolled in the jib and to get balance on a standard rigged boat you'd normally put in a reef on the main.
But of course in this case they couldn't, so the nose dug in.
That's a problem with wing-sails, and the designers need to find a way round it. | <urn:uuid:e7286f2b-7d3e-46b3-b9fc-41246ecad7f3> | 2 | 1.851563 | 0.4684 | en | 0.962911 | http://captainjpslog.blogspot.com/2012/10/this-one-works.html |
They are called cooks. It's like saying there are amateur heart surgeons.
closed as not constructive by Sam Holder, rumtscho, yossarian Nov 29 '11 at 13:44
am·a·teur [am-uh-choor, -cher, -ter, am-uh-tur] noun 1. a person who engages in a study, sport, or other activity for pleasure rather than for financial benefit or professional reasons. – talon8 Nov 29 '11 at 15:56
1 Answer
Chef is a word with multiple definitions:
2. any cook.
In common usage, chef refers to a cook of great skill or accomplishment. Many people each year attend the formal education required to become a chef, at Le Cordon Bleu, etc., merely for the personal joy of cooking for themselves, their family and friends. These people are both 'chefs' and 'amateurs'. Today many 'amateurs' have advanced their culinary skills to the point where they are indeed deserving of the title 'Chef'.
I would deny no one with the passion for the art of cooking the title of 'amateur chef'. This is, after all, how Julia Child started out....
I didn't know whether to berate you for feeding trolls, or upvote you for giving a good answer to a bad question... After all, I went with the upvote, because your calmness and patience is admirable. – rumtscho Nov 29 '11 at 13:24
Just bought an Intellichef by Morphy Richards. The instructions state the boil temperature is 240 degrees C and the slow cook temperature is 120 degrees C; this seems to be too high for slow cooking. My old slow cooker just bubbled away slowly, while this one appears to be boiling almost straight away. I have contacted the company but have not had a reply yet. Does anyone else have one of these new multicookers?
Are you sure you're not confusing the Celsius and Fahrenheit units? – Mien Mar 16 '14 at 15:15
@Mien Those temperatures are still inappropriate in Fahrenheit (240 is too high, and 120 is too low to be safe). But the manufacturer is British, and definitely doing Centigrade. – SAJ14SAJ Mar 16 '14 at 15:19
1 Answer
You are correct that the indicated temperatures are too high. 240 C (480 F) is a very fast oven temperature, and inappropriate for almost all direct contact cooking methods. Even 120 C is well past water's boiling point, and so not achievable in slow cooking.
A more typical and appropriate braising or slow cooking temperature would be 82 C (180 F).
Assuming this is the instruction booklet for the product you have, it appears the instructions for slow cooking are actually trying to get you to do a two part braise (they even seem close to identical to the braising instructions): first searing the meat for flavor development through browning, and then a longer cooking period. The higher temperature is intended for the searing phase; then you have to reduce the temperature to about 100-140 C.
This is still too high for braising, but that may be the temperature the device's thermometer will perceive at the bottom of the cooking insert. You want the temperature in the actual food to be about 82 C—use an instant read thermometer to help learn where to set the dial.
Similarly, the boiling temperature is almost certainly again set too high (as water, by definition, boils at 100 C), with the intention of putting the device at its maximum power to bring the water to boil and keep it boiling rapidly.
money is power definition, money is power meaning | English dictionary
1 a medium of exchange that functions as legal tender
3 a particular denomination or form of currency
silver money
4 property or assets with reference to their realizable value
5 (Law, or, archaic) pl , moneys, monies a pecuniary sum or income
6 an unspecified amount of paper currency or coins
money to lend
7 for one's money in one's opinion
8 in the money
Informal well-off; rich
9 money for old rope
Informal profit obtained by little or no effort
10 money to burn more money than one needs
11 one's money's worth full value for the money one has paid for something
12 put money into to invest money in
13 put money on to place a bet on
14 put one's money where one's mouth is See mouth 19
Related adj
(C13: from Old French moneie, from Latin moneta coinage; see mint2)
appearance money
n money paid by a promoter of an event to a particular celebrity in order to ensure that the celebrity takes part in the event
big money
n large sums of money
there's big money in professional golf
black money
1 that part of a nation's income that relates to its black economy
3 (U.S.) money to fund a government project that is concealed in the cost of some other project
blood money
1 compensation paid to the relatives of a murdered person
2 money paid to a hired murderer
3 a reward for information about a criminal, esp. a murderer
boot money
Informal unofficial bonuses in the form of illegal cash payments made by a professional sports club to its players
call money
n money loaned by banks and recallable on demand
caution money
n (Chiefly Brit) a sum of money deposited as security for good conduct, against possible debts, etc.
cob money
n crude silver coins issued in the Spanish colonies of the New World from about 1600 until 1820
conscience money
n money paid voluntarily to compensate for dishonesty, esp. money paid voluntarily for taxes formerly evaded
danger money
n extra money paid to compensate for the risks involved in certain dangerous jobs
easy money
1 money made with little effort, sometimes dishonestly
2 (Commerce) money that can be borrowed at a low interest rate
fiat money
n (Chiefly U.S) money declared by a government to be legal tender though it is not convertible into standard specie
folding money
Informal paper money
gate money
n the total receipts taken for admission to a sporting event or other entertainment
head money
1 a reward paid for the capture or slaying of a fugitive, outlaw, etc.
2 an archaic term for poll tax
hot money
n capital transferred from one financial centre to another seeking the highest interest rates or the best opportunity for short-term gain, esp. from changes in exchange rates
hush money
key money
n a fee payment required from a new tenant of a house or flat before he moves in
Maundy money
n specially minted coins distributed by the British sovereign on Maundy Thursday
money cowry
1 a tropical marine gastropod, Cypraea moneta
2 the shell of this mollusc, used as money in some parts of Africa and S Asia
money-grubbing
adj Informal seeking greedily to obtain money at every opportunity
money-grubber n
money market
n (Finance) the financial institutions dealing with short-term loans and capital and with foreign exchange
Compare capital market
money of account
money order
n another name (esp. U.S. and Canadian) for postal order
money spider
n any of certain small shiny brownish spiders of the family Linyphiidae
money-spinner
n Informal an enterprise, idea, person, or thing that is a source of wealth
money supply
n the total amount of money in a country's economy at a given time
See also M0 M1 M2 M3 M3c M4 M5
money wages
pl n (Economics) wages evaluated with reference to the money paid rather than the equivalent purchasing power, (Also called) nominal wages Compare real wages
near money
n liquid assets that can be converted to cash very quickly, such as a bank deposit or bill of exchange
option money
n (Commerce) the price paid for buying an option
paper money
n paper currency issued by the government or the central bank as legal tender and which circulates as a substitute for specie
pin money
1 an allowance by a husband to his wife for personal expenditure
2 money saved or earned to be used for incidental expenses
plastic money
n credit cards, used instead of cash
(C20: from the cards being made of plastic)
pocket money
1 (Brit) a small weekly sum of money given to children by parents as an allowance
2 money for day-to-day spending, incidental expenses, etc.
prize money
1 any money offered, paid, or received as a prize
2 (formerly) a part of the money realized from the sale of a captured vessel
push money
n a cash inducement provided by a manufacturer or distributor for a retailer or his staff, to reward successful selling
ready money , cash
n funds for immediate use; cash, (Also called) the ready, the readies
seed money
n money used for the establishment of an enterprise
ship money
n (English history) a tax levied to finance the fitting out of warships: abolished 1640
sit-down money
n (Austral)
informal social security benefits
smart money
a money bet or invested by experienced gamblers or investors, esp. with inside information
b the gamblers or investors themselves
2 money paid in order to extricate oneself from an unpleasant situation or agreement, esp. from military service
3 money paid by an employer to someone injured while working for him
4 (U.S. law) damages awarded to a plaintiff where the wrong was aggravated by fraud, malice, etc.
spending money
n an allowance for small personal expenses; pocket money
table money
n an allowance for official entertaining of visitors, clients, etc., esp. in the army
token money
n coins of the regular issue having greater face value than the value of their metal content
English Collins Dictionary - English Definition & Thesaurus
Collaborative Dictionary English Definition
This is a term rising in popularity
eMoney is electronic money exchangeable electronically via cyber digital device.
easily gained money
be exactly right
means "that's just the way it is"
c'est comme ça, point barre
the decision is yours
a person with more power or authority than others
something is easy to do
It is healthy to laugh
charver is another word for chav
grunt work is hard, uninteresting work
US informal
home is the best place to be no matter where it is
he is a very good seller
to release sth that is tied up
expression used when nothing is going well
canned by Theodore Roosevelt
game of power inside a company's board or management team
Monday, November 21, 2005
Smoked salmon
Since she knows I'm obsessed with food, my wife often turns to me with the obscure culinary questions that pop into her head from time to time. What's the difference between escarole and kale? What are the non-animal sources of gelatin? Are the brussels sprout and the endive from the same family of plants? Usually I delight in being able to come up with a speedy and accurate answer to her questions. But yesterday I was stumped.
What's the difference between Nova and lox? Not being a native New-Yorker, I had no idea. My wife's theory went like this. Nova is just another name for smoked salmon. Lox sounds a bit like the end of the Scandinavian gravlax (at least when both are pronounced with an American accent), so it must be essentially that: cured, not smoked, salmon.
I didn't buy this theory for a second. But I didn't have an alternative to offer my wife, so she spent most of the day chuckling to herself, and I spent most of the day grumbling.
Imagine my surprise when I discovered that my wife was basically right, at least in the sense that Nova is smoked and lox is not. But there's a bit more to it than that. Here's what I've managed to piece together from various sources across the internet. There are some inconsistencies out there, but everyone seems to agree on at least these basic facts:
Nova. Nova is salmon that is cured, usually in a sugar and salt brine, and then lightly smoked. Nova is usually made from Atlantic salmon, which itself is usually from the coast of Nova Scotia, hence the name.
Lox. The word lox comes from the German lachs, which is itself etymologically related to the Scandinavian lax. These words mean "salmon". Lox is heavily cured in a salt brine (and later soaked in water to remove the saltiness), and is not smoked at all.
That's how things should be, at least. But it seems that it's a bit more complicated than that. For one thing, most manufacturers seem to call their Nova "Nova Lox", contrasted with just plain "lox", or sometimes "belly lox" (presumably from the belly of the fish, a significantly more fatty cut).
In any case, whatever people end up calling it, true Nova has a lighter, less salty taste than lox. And Nova is more expensive than lox. Scottish, or Scottish-style, smoked salmon, by the way, is cold-smoked for much longer than Nova, and as a result is smokier-tasting and drier in texture.
So now I know, and by tonight, so will my wife.
Anonymous Anonymous said...
very timely! living around the corner from russ & daughter's, i just was asking myself this same question. thanks.
9:44 AM
Anonymous Anonymous said...
Thank you for clearing that up! I was looking everywhere for the difference between smoked salmon and lox.
11:30 PM
Electronics/History/Chapter 4
From Wikibooks, open books for an open world
Frequency Spectrum
Beam power
Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine possibilities. NASA worked in the 1970s and early 1980s to research the possibilities of using Solar Power Satellite (SPS) systems with large solar arrays that would beam power down to the Earth's surface via microwaves.
Van Allen radiation belt
The presence of a radiation belt had been theorized prior to the Space Age, and it was confirmed by the Explorer I (January 31, 1958) and Explorer III missions, under Dr. James Van Allen. The trapped radiation was first mapped out by Explorer IV and Pioneer III.
New technology was added to FM radio in the early 1960s to allow FM stereo transmissions, where the frequency modulated radio signal is used to carry stereophonic sound, using the pilot-tone multiplex system.
On December 29, 1949 KC2XAK of Bridgeport, Connecticut became the first UHF television station to operate on a regular daily schedule.
In Britain, UHF television began with the launch of BBC TWO in 1964. BBC ONE and ITV soon followed, and colour was introduced on UHF only in 1967 - 1969. Today all British terrestrial television channels (both analog and digital) are on UHF.
The Federal Communications Commission (FCC) is an independent United States government agency, created, directed, and empowered by Congressional statute.
The FCC was established by the Communications Act of 1934 as the successor to the Federal Radio Commission and is charged with regulating all non-Federal Government use of the radio spectrum (including radio and television broadcasting), and all interstate telecommunications (wire, satellite and cable) as well as all international communications that originate or terminate in the United States. The FCC took over wire communication regulation from the Interstate Commerce Commission. The FCC's jurisdiction covers the 50 states, the District of Columbia, and U.S. possessions.
As the chief executive officer of the Commission, the Chairman delegates management and administrative responsibility to the Managing Director. The Commissioners supervise all FCC activities, delegating responsibilities to staff units and Bureaus. The current FCC Chairman is Michael Powell, son of Secretary of State Colin Powell. The other four current Commissioners are Kathleen Abernathy, Michael Copps, Kevin Martin, and Jonathon Adelstein.
History
Report on Chain Broadcasting
In 1940 the Federal Communications Commission issued the "Report on Chain Broadcasting." The major point in the report was the breakup of NBC (See American Broadcasting Company), but there were two other important points. One was network option time, the culprit here being CBS. The report limited the amount of time during the day, and what times the networks may broadcast. Previously a network could demand any time it wanted from an affiliate. The second concerned artist bureaus. The networks served as both agents and employees of artists, which was a conflict of interest the report rectified.
Allocation of television stations
The Federal Communications Commission assigned television the Very High Frequency, VHF, band and gave TV channels 1-13. The 13 channels could only accommodate 400 stations nationwide and could not accommodate color in its state of technology in the early 1940s. So in 1944 CBS proposed to convert all of television to the Ultra High Frequency band, UHF, which would have solved the frequency and color problem. There was only one flaw in the CBS proposal: everyone else disagreed. In 1945 and 1946 the Federal Communications Commission held hearings on the CBS plan. RCA said CBS wouldn't have its color system ready for 5-10 years. CBS claimed it would be ready by the middle of 1947. CBS also gave a demonstration with a very high quality picture. In October of 1946 RCA presented a color system of inferior quality which was partially compatible with the present VHF black and white system. In March 1947 the Federal Communications Commission said CBS would not be ready, and ordered a continuation of the present system. RCA promised its electric color system would be fully compatible within five years; in 1947 an adaptor was required to see color programs in black and white on a black and white set.
In 1945 the Federal Communications Commission moved FM radio to a higher frequency. The Federal Communications Commission also allowed simulcasting of AM programs on FM stations. Regardless of these two disadvantages, CBS placed its bets on FM and gave up some TV applications. CBS had thought TV would be moved according to its plan and thus delayed. Unfortunately for CBS, FM was not a big moneymaker and TV was. That year the Federal Communications Commission set 150 miles as the minimum distance between TV stations on the same channel.
There was interference between TV stations in 1948 so the Federal Communications Commission froze the processing of new applications for TV stations. On September 30, 1948, the day of the freeze, there were thirty-seven stations in twenty-two cities and eighty-six more were approved. Another three hundred and three applications were sent in and not approved. After all the approved stations were constructed, or weren't, the distribution was as follows: New York and Los Angeles, seven each; twenty-four other cities had two or more stations; most cities had only one including Houston, Kansas City, Milwaukee, Pittsburgh, and St. Louis. A total of just sixty-four cities had television during the freeze, and only one-hundred-eight stations were around. The freeze was for six months only, initially, and was just for studying interference problems. Because of the Korean Police Action, the freeze wound up being three and one half years. During the freeze, the interference problem was solved and the Federal Communications Commission made a decision on color TV and UHF. In October of 1950 the Federal Communications Commission made a pro-CBS color decision for the first time. The previous RCA decisions were made while Charles Denny was chairman. He later resigned in 1947 to become an RCA vice president and general consel. The decision approved CBS' mechanical spinning wheel color TV system, now able to be used on VHF, but still not compatible with black-and-white sets.
RCA, with a new compatible system that was of comparable quality to CBS' according to TV critics, appealed all the way to the U.S. Supreme Court and lost in May, 1951, but its legal action did succeed in toppling CBS' color TV system, as during the legal battle, many more black-and-white television sets were sold. When CBS did finally start broadcasting using its color TV system in mid-1951, most American television viewers already had black-and-white receivers that were incompatible with CBS' color system. In October of 1951 CBS was ordered to stop work on color TV by the National Production Authority, supposedly to help the situation in Korea. The Authority was headed by a lieutenant of William Paley, the head of CBS.
The Federal Communications Commission, under chairman Wayne Coy, issued its Sixth Report and Order in early 1952. It established seventy UHF channels (14-83) providing 1400 new potential stations. It also set aside 242 stations for education, most of them in the UHF band. The Commission also added 220 more VHF stations. VHF was reduced to 12 channels with channel 1 being given over to other uses and channels 2-12 being used solely for TV, this to reduced interference. This ended the freeze. In March of 1953 the House Committee on Interstate and Foreign Commerce held hearings on color TV. RCA and the National Television Systems Committee, NTSC, presented the RCA system. The NTSC consisted of all of the major television manufacturers at the time. On March 25, CBS president Frank Stanton conceded it would be "economically foolish" to pursue its color system and in effect CBS lost.
December 17, 1953 the Federal Communications Commission reversed its decision on color and approved the RCA system. Ironically, color didn't sell well. In the first six months of 1954 only 8,000 sets were sold, there were 23,000,000 black and white sets. Westinghouse made a big, national push and sold thirty sets nationwide. The sets were big, expensive and didn't include UHF.
The problem was that UHF stations would not be successful unless people had UHF tuners, and people would not voluntarily pay for UHF tuners unless there were UHF broadcasters. Of the 165 UHF stations that went on the air between 1952 and 1959, 55% went off the air. Of the UHF stations on the air, 75% were losing money. UHF's problems were the following:(1) technical inequality of UHF stations as compared with VHF stations; (2) intermixture of UHF and VHF stations in the same market and the millions of VHF only receivers; (3) the lack of confidence in the capabilities of and the need for UHF television. Suggestions of de-intermixture (making some cities VHF only and other cities UHF only) were not adopted, because most existing sets did not have UHF capability. Ultimately the FCC required all TV sets to have UHF tuners. However over four decades later, UHF is still considered inferior to VHF, despite cable television, and ratings on VHF channels are generally higher than on UHF channels.
The allocation between VHF and UHF in the 1950s, and the lack of UHF tuners is entirely analogous to the dilemma facing digital television of high definition television fifty years later.
Regulatory powers
The Federal Communications Commission has one major regulatory weapon, revoking licenses, but short of that has little leverage over broadcast stations. It is reluctant to do this since it operates in a near vacuum of information on most of the tens of thousands of stations whose licences are renewed every three years. Broadcast licenses are supposed to be renewed if the station has met the "public interest, convenience, or necessity." The Federal Communications Commission rarely checked except for some outstanding reason; the burden of proof would be on the complainant. Fewer than 1% of station renewals are not immediately granted, and only a small fraction of those are actually denied.
Source: from Federal Standard 1037C
See also: concentration of media ownership, Fairness Doctrine, frequency assignment, open spectrum
There was an urgent need during radar development in World War II for a microwave generator that worked in shorter wavelengths - around 10cm rather than 150cm - available from generators of the time. In 1940, at Birmingham University in the UK, John Randall and Harry Boot produced a working prototype of the cavity magnetron, and soon managed to increase its power output 100-fold. In August 1941, the first production model was shipped to the United States.
FM radio is a broadcast technology invented by Edwin Howard Armstrong that uses frequency modulation to provide high-fidelity broadcast radio sound.
W1XOJ was the first FM radio station, granted a construction permit by the FCC in 1937. On January 5, 1940 FM radio was demonstrated to the FCC for the first time. FM radio was assigned the 42 to 50 MHz band of the spectrum in 1940.
After World War II, the FCC moved FM to the frequencies between 88 and 106 MHz on June 27, 1945, making all prewar FM radios worthless. This action severely set back the public confidence in, and hence the development of, FM radio. On March 1, 1945 W47NV began operations in Nashville, Tennessee becoming the first modern commercial FM radio station.
Television
From Wikipedia, the free encyclopedia.
Television is a telecommunication system for broadcasting and receiving moving pictures and sound over a distance. The term has come to refer to all the aspects of television programming and transmission as well. The televisual has become synonymous with postmodern culture. The word television is a hybrid word, coming from both Greek and Latin. "Tele-" is Greek for "far", while "-vision" is from the Latin "visio", meaning "vision" or "sight".
History
Paul Gottlieb Nipkow proposed and patented the first electromechanical television system in 1884.
A. A. Campbell Swinton wrote a letter to Nature on the 18th June 1908 describing his concept of electronic television using the cathode ray tube invented by Karl Ferdinand Braun. He lectured on the subject in 1911 and displayed circuit diagrams.
TV standards
See broadcast television systems.
There are many means of distributing television broadcasts, including both analogue and digital versions of:
* Terrestrial television
* Satellite television
* Cable television
* MMDS (Wireless cable)
TV aspect ratio
All of these early TV systems shared the same aspect ratio of 4:3 which was chosen to match the Academy Ratio used in cinema films at the time. This ratio was also square enough to be conveniently viewed on round Cathode Ray Tubes (CRTs), which were all that could be produced given the manufacturing technology of the time -- today's CRT technology allows the manufacture of much wider tubes. However, due to the negative heavy metal health effects associated with disposal of CRTs in landfills, and the space-saving attributes of flat screen technologies that lack the aspect ratio limitations of CRTs, CRTs are slowly becoming obsolete.
In the 1950s movie studios moved towards wide screen aspect ratios such as Cinerama in an effort to distance their product from television.
* in "letterbox" format, with black stripes at the top and bottom
* with the image horizontally compressed
* with black vertical bars to the left and right
* with upper and lower portions of the image cut off
* with the image horizontally distorted
New developments
* Digital television (DTV)
* High Definition TV (HDTV)
* Pay Per View
* Web TV
* programming on-demand.
For many years different countries used different technical standards. France initially adopted the German 441 line standard but later upgraded to 819 lines, which gave the highest picture definition of any analogue TV system, approximately four times the resolution of the British 405 line system. Eventually the whole of Europe switched to the 625 line standard, once more following Germany's example. Meanwhile in North America the original 525 line standard was retained.
European colour television was developed somewhat later, in the 1960s, and was hindered by a continuing division on technical standards. The German PAL system was eventually adopted by West Germany, the UK, Australia, New Zealand, much of Africa, Asia and South America, and most West European countries except France. France produced its own SECAM standard, which was eventually adopted in much of Eastern Europe. Both systems broadcast on UHF frequencies and adopted a higher-definition 625 line system.
TV sets
* standalone TV sets;
* component systems with separate big screen video monitor, tuner, audio system which the owner connects the pieces together as a high-end home theater system. This approach appeals to videophiles that prefer components that can be upgraded separately.
* Component Video- three separate connectors, with one brightness channel and two color channels (hue and saturation), and is usually referred to as Y, B-Y, R-Y, or Y Pr Pb. This provides for high quality pictures and is usually used inside professional studios. However, it is being used more in home theater for DVDs and high end sources. Audio is not carried on this cable.
* SCART - A large 21 pin connector that may carry Composite video, S-Video or, for better quality, separate red, green and blue (RGB) signals and two-channel sound, along with a number of control signals. This system is standard in Europe but rarely found elsewhere.
* Composite video - The most common form of connecting external devices, putting all the video information into one stream. Most televisions provide this option with a yellow RCA jack. Audio is not carried on this cable.
* Coaxial or RF (coaxial cable) - All audio channels and picture components are transmitted through one wire and modulated on a radio frequency. Most TVs manufactured during the past 15-20 years accept coaxial connection, and the video is typically "tuned" on channel 3 or 4. This is the type of cable usually used for cable television.
Advertising
From the earliest days of the medium, television has been used as a vehicle for advertising. Since their inception in the USA in the late 1940s, TV commercials have become far and away the most effective, most pervasive, and most popular method of selling products of all sorts. US advertising rates are determined primarily by Nielsen Ratings.
European networks
In much of Europe television broadcasting has historically been state dominated, rather than commercially organised, although commercial stations have grown in number recently. In the United Kingdom, the major state broadcaster is the BBC (British Broadcasting Corporation); commercial broadcasters include ITV (Independent Television), Channel 4 and Channel 5, as well as the satellite broadcaster British Sky Broadcasting. Other leading European networks include RAI (Italy), Télévision Française (France), ARD (Germany), RTÉ (Ireland), and satellite broadcaster RTL (Radio Télévision Luxembourg). Euronews is a pan-European news station, broadcasting both by satellite and terrestrially (timesharing on State TV networks) to most of the continent. Broadcast in several languages (English, French, German, Spanish, Russian, etc.), it draws on contributions from State broadcasters and the ITN news network.
Colloquial names
* Telly
* The Tube/Boob Tube
* The Goggle Box
* The Cyclops
* Idiot Box
Related articles
* List of 'years in television'
* Lists of television channels
* List of television programs
* List of television commercials
* List of television personalities
* List of television series
o List of Canadian television series
o List of US television series
o List of UK television series
* Animation and Animated series
* Nielsen Ratings
* Home appliances
* Reality television
* Television network
* Video
* Voyager Golden Record
* V-chip
* Wasteland Speech
* DVB
* Television in the United States
External links
* "Television History"
* Early Television Foundation and Museum
* Television History site from France
* TV Dawn
* British TV History Links
* UK Television Programmes
* aus.tv.history - Australian Television History
* TelevisionAU - Australian Television History
* Federation Without Television
See also: Charles Francis Jenkins Federation Without Television
Further Reading
TV as social pathogen, opiate, mass mind control, etc.
* Jerry Mander Four Arguments for the Elimination of Television
* Marie Winn The Plug-in Drug
* Neil Postman Amusing Ourselves to Death
* Terence McKenna Food of the Gods
* Joyce Nelson The Perfect Machine
* Andrew Bushard Federation Without Television: the Blossoming Movement
Alternate uses of the term: Television (band), television camera
Renewable energy
From Wikipedia, the free encyclopedia.
Renewable energy is energy from a source which can be managed so that it is not subject to depletion in a human timescale. Sources include the sun's rays, wind, waves, rivers, tides, biomass, and geothermal. Renewable energy does not include energy sources which are dependent upon limited resources, such as fossil fuels and nuclear fission power.
General Information
Most renewable forms of energy, other than geothermal, are in fact stored solar energy. Water power and wind power represent very short-term solar storage, while biomass represents slightly longer-term storage, but still on a very human time-scale, and so renewable within that human time-scale. Fossil fuels, on the other hand, while still stored solar energy, have taken millions of years to form, and so do not meet the definition of renewable.
Renewable energy resources may be used directly as energy sources, or used to create other forms of energy for use. Examples of direct use are solar ovens, geothermal heat pumps, and mechanical windmills. Examples of indirect use in creating other energy sources are electricity generation through wind generators or photovoltaic cells, or production of fuels such as ethanol from biomass (see alcohol as a fuel).
Pros and cons of renewable energy
Renewable energy sources are fundamentally different from fossil fuel or nuclear power plants because of their widespread occurrence and abundance - the sun will 'power' these 'powerplants' (meaning sunlight, the wind, flowing water, etc.) for the next 4 billion years. Some renewable sources do not emit any additional carbon dioxide and do not introduce any new risks such as nuclear waste. In fact, one renewable energy source, wood, actively sequesters carbon dioxide while growing.
A visible disadvantage of renewables is their visual impact on local environments. Some people dislike the aesthetics of wind turbines or bring up nature conservation issues when it comes to large solar-electric installations outside of cities. Some people try to utilize these renewable technologies in an efficient and aesthetically pleasing way: fixed solar collectors can double as noise barriers along highways, roof-tops are available already and could even be replaced totally by solar collectors, etc.
Some renewable energy capture systems entail unique environmental problems. For instance, wind turbines can be hazardous to flying birds, while hydroelectric dams can create barriers for migrating fish - a serious problem in the Pacific Northwest that has decimated the numbers of many salmon populations.
Another inherent difficulty with renewables is their variable and diffuse nature (with the exception being geothermal energy, which is however only accessible where the Earth's crust is thin, such as near hot springs and natural geysers). Since renewable energy sources are providing relatively low-intensity energy, the new kinds of "power plants" needed to convert the sources into usable energy need to be distributed over large areas. To make the phrases 'low-intensity' and 'large area' easier to understand, note that in order to produce 1000 kWh of electricity per month (a typical per-month-per-capita consumption of electricity in Western countries), a home owner in cloudy Europe needs to use ten square meters of solar panels. Systematic electrical generation requires reliable overlapping sources or some means of storage on a reasonable scale (pumped-storage hydro systems, batteries, future hydrogen fuel cells, etc.). So, because of currently-expensive energy storage systems, a small stand-alone system is only economic in rare cases.
Renewable energy history
The original energy source for all human activity was the sun via growing plants. Solar energy's main human application throughout most of history has thus been in agriculture and forestry, via photosynthesis.
Wood
Firewood was the earliest manipulated energy source in human history, being used as a thermal energy source through burning, and it is still important in this context today. Burning wood was important for both cooking and providing heat, enabling human presence in cold climates. Special types of wood cooking, such as food dehydration and smoke curing, also enabled human societies to safely store perishable foodstuffs through the year. Eventually, it was discovered that partial combustion in the relative absence of oxygen could produce charcoal, which provided a hotter and more compact and portable energy source. However, this was not a more efficient energy source, because it required a large input in wood to create the charcoal.
Animal Traction
Motive power for vehicles and mechanical devices was originally produced through animal traction. Animals such as horses and oxen not only provided transportation but also powered mills. Animals are still extensively in use in many parts of the world for these purposes.
Water Power
Animal power for mills was eventually supplanted by water power, the power of falling water in rivers, wherever it was exploitable. Direct use of water power for mechanical purposes is today fairly uncommon, but still in use.
Originally, water power (through hydroelectricity) was the most important source of electrical generation throughout society, and it is still an important source today. Throughout most of the history of human technology, hydroelectricity has been the only renewable source significantly tapped for the generation of electricity.
Wind Power
Wind power has been used for several hundred years. It was originally used via large sail-blade windmills with slow-moving blades, such as those seen in the Netherlands and mentioned in Don Quixote. These large mills usually either pumped water or powered small mills. Newer windmills featured smaller, faster-turning, more compact units with more blades, such as those seen throughout the Great Plains. These were mostly used for pumping water from wells. Recent years have seen the rapid development of wind generation farms by mainstream power companies, using a new generation of large, high wind turbines with two or three immense and relatively slow-moving blades.
Solar power
Solar power as a direct energy source was not captured by mechanical systems until recent human history, but was captured as an energy source through architecture in certain societies for many centuries. Not until the twentieth century was direct solar input extensively explored via more carefully planned architecture (passive solar) or via heat capture in mechanical systems (active solar) or electrical conversion (photovoltaic). Increasingly today the sun is harnessed for heat and electricity.
The renewable energy movement
Renewable energy as an issue was virtually unheard-of before the middle of the twentieth century. There were experiments with passive solar energy, including daylighting, in the early part of the twentieth century, but little beyond what had actually been practiced as a matter of course in some locales for hundreds of years. The renewable energy movement gained awareness, credence and strength with the great burgeoning of interest in environmental affairs in the mid-1900s, which in turn was largely due to Rachel Carson's 'Silent Spring'.
The first US politician to focus significantly on solar energy was Jimmy Carter, in response to the long term consequences of the 1973 energy crisis. No president since has paid much attention to renewable energy.
Renewable Energy Today
Around 80% of energy requirements are focused around heating or cooling buildings and powering the vehicles that ensure mobility (cars, trains, airplanes). This is the core of society's energy requirements. However, most uses of renewable power focus on electricity generation.
Geothermal heat pumps (also called ground-source heat pumps) are a means of extracting heat in the winter or cold in the summer from the earth to heat or cool buildings.
Modern sources of renewable energy
There are several types of renewable energy, including the following:
* Solar power.
* Wind power.
* Geothermal energy.
* Electrokinetic energy.
* Hydroelectricity.
* Biomatter, including Biogas Energy.
Smaller-scale sources
Of course there are some smaller-scale applications as well:
* Piezo electric crystals embedded in the sole of a shoe can yield a small amount of energy with each step. Vibration from engines can stimulate piezo electric crystals.
* Some watches are already powered by movement of the arm.
* Special antennae can collect energy from stray radiowaves or even light (EM radiation).
Renewables as solar energy
Most renewable energy sources can trace their roots to solar energy, with the exception of geothermal and tidal power. For example, wind is caused by the sun heating the earth unevenly. Hot air is less dense, so it rises, causing cooler air to move in to replace it. Hydroelectric power can be ultimately traced to the sun too. When the sun evaporates water in the ocean, the vapor forms clouds which later fall on mountains as rain, which is routed through turbines to generate electricity. The transformation goes from solar energy to potential energy to kinetic energy to electric energy.
Solar energy per se
Since most renewable energy is "Solar Energy," this term is slightly confusing and used in two different ways: firstly as a synonym for "renewable energies" as a whole (like in the political slogan "Solar not nuclear") and secondly for the energy that is directly collected from solar radiation. In this section it is used in the latter sense.
There are actually two separate approaches to solar energy, termed active solar and passive solar.
Solar electrical energy
For electricity generation, ground-based solar power has serious limitations because of its diffuse and intermittent nature. First, ground-based solar input is interrupted by night and by cloud cover, which means that solar electric generation inevitably has a low capacity factor, typically less than 20%. Also, there is a low intensity of incoming radiation, and converting this to high grade electricity is still relatively inefficient (14% - 18%), though increased efficiency or lower production costs have been the subject of much research over several decades.
Two methods of converting the Sun's radiant energy to electricity are the focus of attention. The better-known method uses sunlight acting on photovoltaic (PV) cells to produce electricity. This has many applications in satellites, small devices and lights, grid-free applications, earthbound signaling and communication equipment, such as remote area telecommunications equipment. Sales of solar PV modules are increasing strongly as their efficiency increases and price diminishes. But the high cost per unit of electricity still rules out most uses.
Several experimental PV power plants mostly of 300 - 500 kW capacity are connected to electricity grids in Europe and the USA. Japan has 150 MWe installed. A large solar PV plant was planned for Crete. In 2001 the world total for PV electricity was less than 1000 MWe with Japan as the world's leading producer. Research continues into ways to make the actual solar collecting cells less expensive and more efficient. Other major research is investigating economic ways to store the energy which is collected from the Sun's rays during the day.
Alternatively, many individuals have installed small-scale PV arrays for domestic consumption. Some, particularly in isolated areas, are totally disconnected from the main power grid, and rely on a surplus of generation capacity combined with batteries and/or a fossil fuel generator to cover periods when the cells are not operating. Others in more settled areas remain connected to the grid, using the grid to obtain electricity when solar cells are not providing power, and selling their surplus back to the grid. This works reasonably well in many climates, as the peak time for energy consumption is on hot, sunny days where air conditioners are running and solar cells produce their maximum power output. Many U.S. states have passed "net metering" laws, requiring electrical utilities to buy the locally-generated electricity for price comparable to that sold to the household. Photovoltaic generation is still considerably more expensive for the consumer than grid electricity unless the usage site is sufficiently isolated, in which case photovoltaics become the less expensive.
System problems with solar electric
Frequently renewable electricity sources are disadvantaged by regulation of the electricity supply industry which favors 'traditional' large-scale generators over smaller-scale and more distributed generating sources. If renewable and distributed generation were to become widespread, electric power transmission and electricity distribution systems would no longer be the main distributors of electrical energy but would operate to balance the electricity needs of local communities. Those with surplus energy would sell to areas needing "top ups". Some Governments and regulators are moving to address this, though much remains to be done. One potential solution is the increased use of active management of electricity transmission and distribution networks.
Solar thermal electric energy
The second method for utilizing solar energy is solar thermal. A solar thermal power plant has a system of mirrors to concentrate the sunlight on to an absorber, the resulting heat then being used to drive turbines. The concentrator is usually a long parabolic mirror trough oriented north-south, which tilts, tracking the Sun's path through the day. A black absorber tube is located at the focal point and converts the solar radiation to heat (about 400°C) which is transferred into a fluid such as synthetic oil. The oil can be used to heat buildings or water, or it can be used to drive a conventional turbine and generator. Several such installations in modules of 80 MW are now operating. Each module requires about 50 hectares of land and needs very precise engineering and control. These plants are supplemented by a gas-fired boiler which ensures full-time energy output. The gas generates about a quarter of the overall power output and keeps the system warm overnight. Over 800 MWe capacity worldwide has supplied about 80% of the total solar electricity to the mid-1990s.
One proposal for a solar electrical plant is the solar tower, in which a large area of land would be covered by a greenhouse made of something as simple as transparent foil, with a tall lightweight tower in the centre, which could also be composed largely of foil. The heated air would rush to and up the centre tower, spinning a turbine. A system of water pipes placed throughout the greenhouse would allow the capture of excess thermal energy, to be released throughout the night and thus providing 24-hour power production. A 200 MWe tower is proposed near Mildura, Australia.
Solar thermal energy Solar energy need not be converted to electricity for use. Many of the world's energy needs are simply for heat – space heating, water heating, process water heating, oven heating, and so forth. The main role of solar energy in the future may be that of direct heating. Much of society's energy need is for heat below 60°C (140°F) - e.g. in hot water systems. A lot more, particularly in industry, is for heat in the range 60 - 110°C. Together these may account for a significant proportion of primary energy use in industrialized nations. The first need can readily be supplied by solar power much of the time in some places, and the second application commercially is probably not far off. Such uses will diminish to some extent both the demand for electricity and the consumption of fossil fuels, particularly if coupled with energy conservation measures such as insulation.
Solar water heating Domestic solar hot water systems were once common in Florida until they were displaced by highly-advertised natural gas. Such systems are today common in the hotter areas of Australia, and simply consist of a network of dark-colored pipes running beneath a window of heat-trapping glass. They typically have a backup electric or gas heating unit for cloudy days. Such systems can actually be justified purely on economic grounds, particularly in some remoter areas of Australia where electricity is expensive.
Solar heat pumps With adequate insulation, heat pumps utilizing the conventional refrigeration cycle can be used to warm and cool buildings, with very little energy input other than energy needed to run a compressor. Eventually, up to ten percent of the total primary energy need in industrialized countries may be supplied by direct solar thermal techniques, and to some extent this will substitute for base-load electrical energy.
Solar ovens Large scale solar thermal powerplants, as mentioned before, can be used to heat buildings, but on a smaller scale solar ovens can be used on sunny days. Such an oven or solar furnace uses mirrors or a large lens to focus the Sun's rays onto a baking tray or black pot which heats up as it would in a standard oven.
Wind Energy Wind turbines have been used for household electricity generation in conjunction with battery storage over many decades in remote areas. Generator units of more than 1 MWe are now functioning in several countries. The power output is a function of the cube of the wind speed, so such turbines require a wind in the range 3 to 25 m/s (11 - 90 km/h), and in practice relatively few land areas have significant prevailing winds. Like solar, wind power requires alternative power sources to cope with calmer periods.
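The cube-law dependence mentioned above can be made concrete with a small calculation. In the sketch below the rotor diameter, power coefficient and air density are illustrative assumptions, not values from the text:

```python
import math

def wind_power_watts(v_m_per_s, rotor_diameter_m=50.0, cp=0.4, air_density=1.225):
    """P = 0.5 * rho * A * v^3 * Cp  (swept area A, power coefficient Cp)."""
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density * area * v_m_per_s ** 3 * cp

for v in (4, 8, 12):
    print(f"{v:>2} m/s -> {wind_power_watts(v) / 1e3:6.0f} kW")
# Doubling the wind speed from 4 to 8 m/s yields eight times the power,
# which is why site selection matters so much.
```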
There are now many thousands of wind turbines operating in various parts of the world, with utility companies having a total capacity of over 39,000 MWe of which Europe accounts for 75% (as of the end of 2003). Additional wind power is generated by private windmills both on-grid and off-grid. Germany is the leading producer of wind generated electricity with over 14,600 MWe in 2003. In 2003 the U.S.A. produced over 6,300 MWe of wind energy, second only to Germany.
New wind farms and offshore wind parks are being planned and built all over the world. This has been the most rapidly-growing means of electricity generation at the turn of the 21st century and provides a complement to large-scale base-load power stations. Denmark generates over 10% of its electricity with wind turbines, whereas wind turbines account for 0.4% of the total electricity production on a global scale (as of the end of 2002). The most economical and practical size of commercial wind turbines seems to be around 600 kWe to 1 MWe, grouped into large wind farms. Most turbines operate at about 25% load factor over the course of a year, but some reach 35%.
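Using the turbine size and load factor quoted above, the implied annual energy output is straightforward to estimate:

```python
rated_kw = 600        # typical commercial turbine size from the text
load_factor = 0.25    # typical annual load factor from the text
hours_per_year = 24 * 365

annual_kwh = rated_kw * load_factor * hours_per_year
print(f"about {annual_kwh / 1e6:.2f} GWh per year")   # ~1.31 GWh
```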
Geothermal Energy Where hot underground steam or water can be tapped and brought to the surface it may be used to generate electricity. Such geothermal power sources have potential in certain parts of the world such as New Zealand, the United States, the Philippines and Italy. The two most prominent areas for this in the United States are in the Yellowstone basin and in northern California. Iceland produced 170 MWe of geothermal power and heated 86% of all houses in the year 2000. Some 8000 MWe of capacity is operating overall.
There are also prospects in certain other areas for pumping water underground to very hot regions of the Earth's crust and using the steam thus produced for electricity generation. An Australian startup company, Geodynamics, proposes to build a commercial plant in the Cooper Basin region of South Australia using this technology by 2004.
Water power Energy inherent in water can be harnessed and used, in the forms of kinetic energy or temperature differences.
Electrokinetic energy This type of energy harnesses what happens to water when it is pumped through tiny channels. See electrokinetics (water).
Hydroelectric Energy Hydroelectric energy produces essentially no carbon dioxide, in contrast to burning fossil fuels or gas, and so is not a significant contributor to global warming. Hydroelectric power from potential energy of rivers, now supplies about 715,000 MWe or 19% of world electricity. Apart from a few countries with an abundance of it, hydro capacity is normally applied to peak-load demand, because it is so readily stopped and started. It is not a major option for the future in the developed countries because most major sites in these countries having potential for harnessing gravity in this way are either being exploited already or are unavailable for other reasons such as environmental considerations.
The chief advantage of hydrosystems is their capacity to handle seasonal (as well as daily) high peak loads. In practice the utilization of stored water is sometimes complicated by demands for irrigation which may occur out of phase with peak electrical demands.
Tidal power Harnessing the tides in a bay or estuary has been achieved in France (since 1966) and Russia, and could be achieved in certain other areas where there is a large tidal range. The trapped water can be used to turn turbines as it is released through the tidal barrage in either direction. Worldwide this technology appears to have little potential, largely due to environmental constraints.
Tidal stream power A relatively new technology development, tidal stream generators draw energy from underwater currents in much the same way that wind generators are powered by the wind. The much higher density of water means that there is the potential for a single generator to provide significant levels of power. Tidal stream technology is still at a very early stage of development, though, and will require significantly more research before it becomes a significant contributor to electrical generation needs.
Wave power Harnessing power from wave motion is a possibility which might yield much more energy than tides. The feasibility of this has been investigated, particularly in the UK. Generators either coupled to floating devices or turned by air displaced by waves in a hollow concrete structure would produce electricity for delivery to shore. Numerous practical problems have frustrated progress.
OTEC Ocean Thermal Energy Conversion is a relatively unproven technology, though it was first proposed by the French engineer Jacques-Arsène d'Arsonval in 1881. The difference in temperature between water near the surface and deeper water can be as much as 20°C. The warm water is used to make a liquid such as ammonia evaporate, causing it to expand. The expanding gas forces its way through turbines, after which it is condensed using the colder water and the cycle can begin again.
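The small temperature difference is what limits OTEC efficiency. A quick Carnot-limit estimate, using assumed absolute temperatures consistent with the roughly 20°C difference quoted above, makes the point:

```python
t_warm_k = 300.0   # surface water, about 27 degC (assumed)
t_cold_k = 280.0   # deep water, about 7 degC (assumed), giving the ~20 degC difference

carnot_limit = 1 - t_cold_k / t_warm_k
print(f"Maximum (Carnot) efficiency: {carnot_limit:.1%}")   # ~6.7%; real cycles achieve less
```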
Biomass Biomass, also known as biomatter, can be used directly as fuel or to produce liquid biofuel. Agriculturally produced biomass fuels, such as biodiesel, ethanol and bagasse (a byproduct of sugar cane cultivation) are burned in internal combustion engines or boilers.
Liquid biofuel Liquid biofuel is usually bioalcohols -like methanol and ethanol- or biodiesel. Biodiesel can be used in modern diesel vehicles with little or no modification and can be obtained from waste and crude vegetable and animal oil and fats (lipids). In some areas corn, sugarbeets, cane and grasses are grown specifically to produce ethanol (also known as alcohol) a liquid which can be used in internal combustion engines and fuel cells.
Solid biomass Direct use is usually in the form of combustible solids, either firewood or combustible field crops. Field crops may be grown specifically for combustion or may be used for other purposes, and the processed plant waste then used for combustion. Most sorts of biomatter, including dried manure, can actually be burnt to heat water and to drive turbines. Plants partly use photosynthesis to store solar energy, water and CO2. Sugar cane residue, wheat chaff, corn cobs and other plant matter can be, and is, burnt quite successfully. The process releases no net CO2.
Biogas Animal feces (manure) release methane under the influence of anaerobic bacteria; this methane can in turn be used to generate electricity. See biogas.
Renewable energy storage systems One of the great problems with renewable energy, as mentioned above, is transporting it in time or space. Since most renewable energy sources are periodic, storage for off-generation times is important, and storage for powering transportation is also a critical issue.
Hydrogen fuel cells Hydrogen as a fuel has been touted lately as a solution in our energy dilemmas. However, the idea that hydrogen is a renewable energy source is a misunderstanding. Hydrogen is not an energy source, but a portable energy storage method, because it must be manufactured by other energy sources in order to be used. However, as a storage medium, it may be a significant factor in using renewable energies. It is widely seen as a possible fuel for hydrogen cars, if certain problems can be overcome economically. It may be used in conventional internal combustion engines, or in fuel cells which convert chemical energy directly to electricity without flames, in the same way the human body burns fuel. Making hydrogen requires either reforming natural gas (methane) with steam, or, for a renewable and more ecologic source, the electrolysis of water into hydrogen and oxygen. The former process has carbon dioxide as a by-product, which exacerbates (or at least does not improve) greenhouse gas emissions relative to present technology. With electrolysis, the greenhouse burden depends on the source of the power, and both intermittent renewables and nuclear energy are considered here.
Nuclear advocates note that using nuclear power to manufacture hydrogen would help solve plant inefficiencies. Here the plant would be run continuously at full capacity, with perhaps all the output being supplied to the grid in peak periods and any not needed to meet civil demand being used to make hydrogen at other times. This would mean far better efficiency for the nuclear power plants.
About 50 kWh (180 MJ) is required to produce a kilogram of hydrogen by electrolysis, so the cost of the electricity is clearly crucial.
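A short illustration of why the electricity price dominates the cost of electrolytic hydrogen (the prices below are arbitrary examples, not quotes):

```python
kwh_per_kg = 50.0   # electricity needed per kilogram of hydrogen, from the text

for price_per_kwh in (0.03, 0.05, 0.10):   # assumed $/kWh
    cost = kwh_per_kg * price_per_kwh
    print(f"at ${price_per_kwh:.2f}/kWh the electricity alone costs ${cost:.2f} per kg of H2")
```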
Other renewable energy storage systems Sun, wind, tides and waves cannot be controlled to provide directly either reliably continuous base-load power, because of their periodic natures, or peak-load power when it is needed. In practical terms, without proper energy storage methods these sources are therefore limited to some twenty percent of the capacity of an electricity grid, and cannot directly be applied as economic substitutes for fossil fuels or nuclear power, however important they may become in particular areas with favorable conditions. If there were some way that large amounts of electricity from intermittent producers such as solar and wind could be stored efficiently, the contribution of these technologies to supplying base-load energy demand would be much greater.
Pumped water storage Already in some places pumped storage is used to even out the daily generating load by pumping water to a high storage dam during off-peak hours and weekends, using the excess base-load capacity from coal or nuclear sources. During peak hours this water can be used for hydroelectric generation. However, relatively few places have the scope for pumped storage dams close to where the power is needed.
Battery storage Many "off-the-grid" domestic systems rely on battery storage, but means of storing large amounts of electricity as such in giant batteries or by other means have not yet been put to general use. Batteries are generally expensive, have maintenance problems, and have limited lifespans. One possible technology for large-scale storage exists: large-scale flow batteries.
Electrical grid storage One of the most important storage methods advocated by the renewable energy community is to rethink the whole way that we look at power supply, in its 24-hour, 7-day cycle, using peak load equipment simply to meet the daily peaks. Solar electric generation is a daylight process, whereas most homes have their peak energy requirements at night. Domestic solar generation can thus feed electricity into the grid during grid peaking times during the day, and domestic systems can then draw power from the grid during the night when overall grid loads are down. This results in using the power grid as a domestic energy storage system, and relies on 'net metering', where electrical companies can only charge for the amount of electricity used in the home that is in excess of the electricity generated and fed back into the grid. Many states now have net metering laws.
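A minimal sketch of the billing idea, assuming the simplest form of net metering (real tariffs and state rules differ):

```python
def net_metered_kwh(consumed_kwh, generated_kwh):
    """Customer is billed only for net consumption; a negative result is a credit."""
    return consumed_kwh - generated_kwh

print(net_metered_kwh(900, 650))   # billed for 250 kWh
print(net_metered_kwh(400, 520))   # -120, i.e. a 120 kWh credit for surplus fed to the grid
```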
Renewable energy use by nation Iceland is a world leader in renewable energy due to its abundant hydro- and geothermal energy sources. Over 99% of the country's electricity is from renewable sources and most of its urban household heating is geothermal. Israel is also notable as much of its household hot water is heated by solar means. These countries' successes are at least partly based on their geographical advantages.
Leading countries by renewable electricity production (2000)
Rank  Hydro    Geothermal   Wind      PV Solar
1     Canada   U.S.         Germany   Japan
2     U.S.     Philippines  U.S.      Germany
3     Brazil   Italy        Spain     U.S.
4     China    Mexico       Denmark   India
5     Russia   Indonesia    India     Australia
Share of total power consumption in EU countries that is renewable (percent):
Country          1985   1990   1991   1992   1993   1994
EUR-15           5,61   5,13   4,92   5,16   5,28   5,37
Belgium          1,04   1,01   1,01   0,96   0,84   0,80
Denmark          4,48   6,32   6,38   6,80   7,03   6,49
Germany          2,09   2,06   1,61   1,73   1,75   1,79
Greece           8,77   7,14   7,63   7,13   7,33   7,16
Spain            8,83   6,70   6,56   6,49   6,50
France           7,24   6,34   6,75   7,32   7,98
Ireland          1,75   1,65   1,68   1,59   1,59   1,63
Italy            5,60   4,64   5,16   5,34   5,50
Luxembourg       1,28   1,21   1,14   1,26   1,21   1,34
The Netherlands  1,36   1,35   1,35   1,37   1,38   1,43
Austria          24,23  22,81  20,99  23,39  24,23  23,71
Portugal         25,07  17,45  17,03  13,88  15,98  16,61
Finland          18,29  16,71  17,02  18,10  18,48  18,28
Sweden           24,36  24,86  22,98  26,53  27,31  24,04
United Kingdom   0,47   0,49   0,48   0,56   0,54   0,65
Table from [1]
Renewable energy controversies As with anything, even renewable energy generates controversies.
The funding dilemma Research and development in renewable energies has been severely hampered by only receiving a tiny fraction of energy R&D budgets, with conventional energy sources getting the lion's share.
The nuclear "renewable" claim Some nuclear advocates claim that nuclear energy should be regarded as renewable energy. Arguments they put forward include:
* The view that nuclear energy does not contribute to global warming (although evaporative cooling has a minor effect by introducing additional water vapor into the atmosphere, along with the heat production of the process).
* Fast breeder reactors can produce more fuel than they consume.
* The view that uranium and thorium, being radioactive, are not theoretically long-term resources.
* The view that nuclear waste, since it will eventually become less radioactive than the original ore bodies, is not theoretically a long-term problem.
This viewpoint is strongly rejected by most renewable energy advocates. The fact that nuclear power uses a depleting resource (uranium or thorium), that the half-life of uranium 238 is 4.5 billion years, and that the decay of the waste to a safe level may take three thousand years or longer (depending on the technology used) means that it cannot be included in such a classification. Breeder reactors consume uranium or thorium to produce fissile fuel, so this particular argument is a simple misunderstanding of the basic processes involved. Similar arguments can also be applied against proposed nuclear fusion power stations using deuterium and tritium, the latter bred from lithium, as fuel.
* U.S. Energy Information Administration provides lots of statistics and information on the industry.
* Boyle, G. (ed.), Renewable Energy: Power for a Sustainable Future. Open University, UK, 1996.
Solar power
From Wikipedia, the free encyclopedia.
Solar power has become of increasing interest as other finite power sources such as fossil fuels and hydroelectric power become both more scarce and expensive (in both fiscal and environmental terms). As the earth orbits the sun it receives 1,410 W / m2 as measured upon a surface kept normal (at a right angle) to the sun. Of this approximately 19% of the energy is absorbed by the atmosphere, while clouds reflect 35% of the total energy upon average.
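Taking the quoted figures at face value, and treating the absorbed and reflected fractions as simply additive (a simplification), the average power reaching the surface works out as follows:

```python
top_of_atmosphere_w_m2 = 1410   # figure quoted above
absorbed_fraction = 0.19        # absorbed by the atmosphere
reflected_fraction = 0.35       # reflected by clouds, on average

at_surface = top_of_atmosphere_w_m2 * (1 - absorbed_fraction - reflected_fraction)
print(f"roughly {at_surface:.0f} W/m2 reaches the surface on these figures")   # ~650 W/m2
```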
After passing through the Earth's atmosphere most of the sun's energy is in the form of visible and ultraviolet light. Plants use solar energy to create chemical energy through photosynthesis. We use this energy when we burn wood or fossil fuels. There have been experiments to create fuel by absorbing sunlight in a chemical reaction in a way similar to photosynthesis without using living organisms.
Most solar energy used today is converted into heat or electricity.
Types of solar power
Methods of solar energy have been classified using the terms direct, indirect, passive and active.
Direct solar energy involves only one transformation into a usable form. Examples:
* Sunlight hits a photovoltaic cell creating electricity. (Photovoltaics are classified as direct despite the fact that the electricity is usually converted to another form of energy such as light or mechanical energy before becoming useful.)
* Sunlight hits a dark surface and the surface warms when the light is converted to heat by interacting with matter. The heat is used to heat a room or water.
Indirect solar energy involves more than one transformation to reach a usable form, for example sunlight converted first to heat and then used to raise steam for electricity generation.
Passive solar energy refers to systems that collect and distribute solar energy without mechanical or electrical assistance, apart from small energy inputs such as systems to close insulating shutters or move shades. Passive solar systems are considered direct systems although sometimes they involve convective flow which technically is a conversion of heat into mechanical energy.
Active solar energy refers to systems that use electrical, mechanical or chemical mechanisms to increase the effectiveness of the collection system. Indirect collection systems are almost always active systems.
Solar design is the use of architectural features to replace the use of electricity and fossil fuels with the use of solar energy and decrease the energy needed in a home or building with insulation and efficient lighting and appliances.
Architectural features used in solar design:
* South-facing windows with insulated glazing that has high ultraviolet transmittance.
* Thermal masses.
* Insulating shutters for windows to be closed at night and on overcast days.
* Movable awnings to be repositioned seasonally.
* A well insulated and sealed building envelope.
* Exhaust fans in high humidity areas.
* Passive or active warm air solar panels.
* Passive or active Trombe walls.
* Active solar panels using water or antifreeze solutions.
* Passive solar panels for preheating potable water.
* Photovoltaic systems to provide electricity.
* Windmills to provide electricity.
Solar hot water systems are quite common in some countries where a small flat panel collector is mounted on the roof and able to meet most of a household's hot water needs. Cheaper flat panel collectors are also often used to heat swimming pools, thereby extending their swimming seasons.
Solar cooking is helping in many developing countries, both reducing the demands for local firewood and maintaining a cleaner environment for the cooks. The first known record of a western solar oven is attributed to Horace de Saussure, a Swiss naturalist experimenting as early as 1767. A solar box cooker traps the sun's power in an insulated box; these have been successfully used for cooking, pasteurization and fruit canning.
Solar cells (also referred to as photovoltaic cells) are devices or banks of devices that use the photoelectric effect of semiconductors to generate electricity directly from the sunlight. As their manufacturing costs have remained high during the twentieth century their use has been limited to very low power devices such as calculators with LCD displays or to generate electricity for isolated locations which could afford the technology. The most important use to date has been to power orbiting satellites and other spacecraft. As manufacturing costs decreased in the last decade of the twentieth century solar power has become cost effective for many remote low power applications such as roadside emergency telephones, remote sensing, and limited "off grid" home power applications.
Solar power plants generally use reflectors to concentrate sunlight into a heat absorber.
* Heliostat mirror power plants focus the sun's rays upon a collector tower. The vast amount of energy is generally transported from the tower and stored by use of a high temperature fluid. Liquid sodium is often used as the transport and storage fluid. The energy is then extracted as needed by such means as heating water for use in steam turbines.
* Trough concentrators have been used successfully in the State of California (in the U.S.) to generate 350MW of power in the past two decades. The parabolic troughs can increase the amount of solar radiation striking the tubes up to 30 or 60 times, where synthetic oil is heated to 390°C. The oil is then pumped into a generating station and used to power a steam turbine.
* Parabolic reflectors are most often used with a Stirling engine or similar device at the focus. As a single parabolic reflector achieves a greater focusing accuracy than any larger bank of mirrors can achieve, the focus is used to achieve a higher temperature which in turn allows a very efficient conversion of heat into mechanical power to drive an electrical generator. Parabolic reflectors can also be used to generate steam to power turbines to generate electricity.
Applying Solar Power
Deployment of solar power depends largely upon local conditions and requirements; for example, while certain European countries or U.S. states could benefit from a public hot water utility, such systems would be both impractical and counter-productive in countries like Australia or states like New Mexico. As all industrialised nations share a need for electricity, it is clear that solar power will increasingly be used to supply cheap, reliable electricity.
Many other types of power generation are indirectly solar-powered. Plants use photosynthesis to convert solar energy to chemical energy, which can later be burned as fuel to generate electricity; oil and coal originated as plants. Hydroelectric dams and wind turbines are indirectly powered by the sun.
In some areas of the U.S., solar electric systems are already competitive with utility systems. The basic cost advantage is that the home-owner does not pay income tax on electric power that is not purchased. As of 2002, there is a list of technical conditions: There must be many sunny days. The systems must sell power to the grid, avoiding battery costs. The solar systems must be inexpensively mass-purchased, which usually means they must be installed at the time of construction. Finally, the region must have high power prices. For example, Southern California has about 260 sunny days a year, making it an excellent venue. It yields about 9%/yr returns on investment when systems are installed at $9/watt (not cheap, but feasible), and utility prices are at $0.095 per kilowatt-hour (the current base rate). On-grid solar power can be especially feasible when combined with time-of-use net metering, since the time of maximum production is largely coincident with the time of highest pricing.
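A sketch of how such a return estimate is assembled is shown below. Only the $9/watt installed cost and the $0.095/kWh base rate come from the text; the annual yield, the value assigned to displaced peak-time electricity and the marginal tax rate are illustrative assumptions, and different assumptions give very different returns:

```python
cost_per_watt = 9.00       # installed cost, $/W (from the text)
kwh_per_watt_year = 1.7    # assumed annual output per installed watt in a sunny climate
value_per_kwh = 0.30       # assumed value of displaced peak-time electricity, $/kWh
marginal_tax_rate = 0.40   # assumed; avoided spending is effectively pre-tax income

pretax_equivalent_per_watt = kwh_per_watt_year * value_per_kwh / (1 - marginal_tax_rate)
print(f"simple pre-tax-equivalent return: {pretax_equivalent_per_watt / cost_per_watt:.1%} per year")
```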
For a stand-alone system some means must be employed to store the collected energy for use during hours of darkness or cloud cover - either electrochemically in batteries, or in some other form such as hydrogen (produced by electrolysis of water), flywheels in a vacuum, or superconductors. Storage always has an extra stage of energy conversion, with consequent energy losses, greatly increasing capital costs.
Several experimental photovoltaic (PV) power plants of 300 - 500 kW capacity are connected to electricity grids in Europe and the U.S. Japan has 150 MWe installed. A large solar PV plant is planned for the island of Crete. Research continues into ways to make the actual solar collecting cells less expensive and more efficient. Other major research is investigating economic ways to store the energy which is collected from the sun's rays during the day.
See also
Main Renewable resource, Renewable energy, Sustainable design
Solar: Solar box cooker, Solar thermal energy, Sun, Solar power satellite, Current solar income
Energy crisis: 1973 energy crisis, 1979 energy crisis
Electricity: Electricity generation, Electricity retailing, Energy storage, Green electricity, Direct current, Photoelectric effect, Power station, Power supply, Microwave power transmission, Solar cell, Power plant
Lists: List of conservation topics, List of physics topics
People: Leonardo da Vinci, Charles Eames, Charles Kettering, Menachem Mendel Schneerson
Other: Autonomous building, Solar-Club/CERN-Geneva-Switzerland, Electric vehicle, Lightvessel, Mass driver, Clock of the Long Now, Tidal power, Cumulonimbus Smart 1, Science in America, Slope Point, Back to the land, Architectural engineering, Ecology, Geomorphology, List of conservation topics, Nine Nations of North America | <urn:uuid:fbd99d1f-b8ef-4564-b60b-c5b831ca9f4e> | 4 | 3.53125 | 0.265616 | en | 0.948205 | http://en.wikibooks.org/wiki/Electronics/History/Chapter_4 |
Contract year phenomenon
From Wikipedia, the free encyclopedia
Contract year phenomenon is a term used in North American sports to describe the occurrence when athletes perform at a very high level in the season prior to their free agency eligibility. Most often, these athletes have seasons that are statistically better than previous years, but then once they sign their new contract, they return to their previous level of performance.[1][2]
Despite these beliefs offered by the sports media, some research suggests otherwise. Indeed, a social psychological study examining the contract year phenomenon in the NBA and MLB found that only scoring statistics in the NBA increased in the contract year; however, other statistics (e.g. rebounds, batting average) stayed at the baseline. Additionally, contrary to popular belief, performance actually decreased the year after the contract year instead of going back to the baseline. In the NBA, for example, PER went up during the contract year, but fell below baseline the year after signing the deal. Also, while blocks and steals did not increase during the contract year, they still significantly dipped below the baseline the year after. In the MLB, nothing increased during the contract year, but batting metrics still dropped off the year afterwards. This is in line with recent psychological theory that suggests salient external motivators (like a large monetary contract) may work to undermine intrinsic motivation (and thus performance).[3]
The contract year phenomenon is most associated with the NBA due to the league's high salaries and lengthy guaranteed contracts. This occurrence is sometimes seen in MLB,[4] but it is almost never found in the NFL due to the league's relatively low salaries and most importantly, the lack of guaranteed contracts. NFL players who sign contracts with new teams and then don't perform can simply be released from their team, as the team is then only held responsible for the bonuses in the contract.
1. ^ Bill Simmons (2006). "CuriousGuy:Malcolm Gladwell". ESPN. Retrieved 2008-02-12.
2. ^ Eric Williams (2007). "NBA Preview: Contract year phenomenon". Daily Utah Chronicle. University of Utah. Retrieved 2008-02-12.
3. ^ Mark H. White II; Kennon M. Sheldon (2013). "The contract year syndrome in the NBA and MLB: A classic undermining pattern". Motivation and Emotion. doi:10.1007/s11031-013-9389-7.
4. ^ Sergiy Butenko, Panos M. Pardalos, Jaime Gil-Lafuente (2004). Economics, Management and Optimization in Sports. Springer. pp. 163–184. ISBN 3-540-20712-0. | <urn:uuid:24224b21-9ab3-4375-b567-470ae5b1c3fa> | 2 | 1.695313 | 0.475457 | en | 0.928291 | http://en.wikipedia.org/wiki/Contract_year_phenomenon |
History of the Amiga
From Wikipedia, the free encyclopedia
Amiga Corporation
The Amiga's Original Chip Set was designed by a small company called Amiga Corporation toward the end of the first home video game boom. Wary of industrial espionage, the developers codenamed the chipset "Lorraine" during development. Development of the Lorraine project was done using a Sage IV (m68k/8 MHz/1MB) machine, nicknamed "Agony".[1] Amiga Corp. funded the development of the Lorraine by manufacturing game controllers, and later with an initial bridge loan from Atari Inc. while seeking further investors. The chipset was to be used in a video game machine, but following the video game crash of 1983, the Lorraine was repurposed to be a multi-tasking multi-media personal computer.
The company demonstrated a prototype at the January 1984 Consumer Electronics Show in Chicago in an attempt to get investors on board.[2] Reporters saw it perform the Boing Ball demo with stereo sound.[3] The Sage acted as the CPU, and BYTE described "big steel boxes" substituting for the chipset that did not yet exist.[4] The magazine reported in April 1984 that Amiga Corporation "is developing a 68000-based home computer with a custom graphics processor. With 128K bytes of RAM and a floppy-disk drive, the computer will reportedly sell for less than $1000 late this year."[5]
Follow-up presentations were made at the following CES in June 1984 to Sony, Hewlett-Packard, Philips, Apple, Silicon Graphics and others.[2] Steve Jobs of Apple, who had just introduced the Macintosh in January, was shown the original prototype for the first Amiga and stated that there was too much hardware – even though the newly redesigned board consisted of just three silicon chips which had yet to be shrunk down.[3][6] Investors became increasingly wary of new computer companies in an industry that the IBM PC dominated.[4]
Employees mortgaged their homes to keep the company running.[2]
In July 1984, Atari was bought by the recently fired CEO and founder of Commodore, Jack Tramiel, who was taking a substantial number of Commodore's employees with him.[7] He offered $500,000, which had to be paid back within one month or Atari would own all of the Amiga technology. Out of desperation, Amiga agreed.[2]
Then, in a "surprising" development in August 1984, the Amiga group found an interested buyer in Commodore.[8] Amiga was purchased by Commodore for $27 million – including paying off the Atari loan.[3] Atari went on to develop the ST, which launched in June 1985.
1985-87, The early years
When the first Amiga computer was released in July 1985 by Commodore, it was simply called the Amiga, devoid of references to Commodore. Commodore marketed it both as their intended successor to the Commodore 64 and as their competitor against the Apple Macintosh. It was later renamed the Commodore Amiga 1000.
1990-93, Height of popularity
1992-94, Trouble ahead
An Amiga 4000 (1992)
Commodore began 1992 by officially introducing the Amiga 500+, a slightly updated and cost-reduced Amiga 500. This model had actually been introduced the year before in response to strong sales of the Amiga 500. Viewed primarily as a game machine, especially in Europe, this model was criticized for not being able to run popular games such as SWIV, Treasure Island Dizzy, and Lotus Esprit Turbo Challenge, and some people returned them to dealers demanding an original Amiga 500.
By the early 1990s the IBM PC platform dominated the market for computer games. In December 1992 Computer Gaming World reported that MS-DOS accounted for 82% of game sales in 1991, compared to Macintosh's 8% and Amiga's 5%. In response to a reader's challenge to find a DOS game that played better than the Amiga version the magazine cited Wing Commander and Civilization, and added that "The heavy MS-DOS emphasis in CGW merely reflects the realities of the market".[19] Instead of discontinuing the increasingly obsolete Amiga 500 and 500+, Commodore envisioned them taking the place of the Commodore 64 in the low-cost segment. To make that possible Commodore set out to design the Amiga 600, a system intended to be much cheaper than the Amiga 500. The Amiga 500 itself would be replaced by the Amiga 1200, also under development.
Shortly after releasing the Amiga 600 Commodore announced that two new super Amigas would be released at the end of the year. In classic Osborne style, consumers decided to wait for the new Amigas and Commodore had to close their Australian office in the face of plummeting sales.[20] At the same time, Commodore's foray into the highly competitive PC market was unsuccessful. This contributed to Commodore's 1992 profits falling to an unimpressive $28 million,[20] and made the need for a successful new Amiga launch all the more critical.
In October 1992, Commodore released the Amiga 1200 and the Amiga 4000. Each featured the new AGA chipset and the third release of AmigaOS.
Computer Gaming World reported in March 1993 that declining Amiga sales were "causing many U.S. publishers to quit publishing Amiga titles",[21] and in July that at the Spring European Computer Trade Show the computer was, unlike 1992, "hardly mentioned, let alone seen".[22] That year Commodore marketed the CD32, which was one of the earliest CD-based consoles and was also the world's first 32-bit game machine, with specifications similar to the A1200.
Amiga in the United States
The rights to the Amiga platform were successively sold to Escom and later Gateway 2000.[23] Escom had almost immediately gone bankrupt itself (due to non-Amiga related problems), while Gateway decided to keep the patents and sell the remaining assets to a new company later renamed to Amiga, Inc. (no relation to the original Amiga Corporation) in 1999. Amiga also received a license to use Amiga-related patents, which were retained by Gateway until they expired.[24] Amiga Inc. sold the copyrights for works created up to 1993 to Cloanto[25][26] and commissioned development of AmigaOS 4 to Hyperion Entertainment.
New Amigas
Amiga compatibles
Minimig 120x120 mm PCB board (Nano-ITX size)[27]
Natami was a hardware project to build a 68k-based computer to run AmigaOS.[28]
AmigaOS 4 systems
AROS systems
MorphOS systems
2. ^ a b c d "A history of the Amiga, part 3: The first prototype".
3. ^ a b c Gareth Knight. "Amiga History". Amigahistory.co.uk. Retrieved 2013-04-20.
7. ^ "A history of the Amiga, part 4: Enter Commodore".
10. ^ "Info Magazine issue 17".
17. ^ "Info Magazine issue 14".
20. ^ a b Edge, August 1995.
23. ^ "1994-1998: From Commodore-Amiga to ESCOM to Gateway". Amiga Documents. Retrieved 2015-02-20.
24. ^ "1998-1999: Gateway Scraps "Amiga" Brand". Amiga Documents. Retrieved 2015-02-20.
25. ^ "Cloanto". Amiga Documents. Retrieved 2015-02-20.
26. ^ "Cloanto confirms transfers of Commodore/Amiga copyrights". amiga-news.de. Retrieved 2015-02-20.
27. ^ "Minimig rev 1.0 PCB". 2006-06-11 amiga.org
28. ^ http://www.natami.net/
29. ^ http://aros.sourceforge.net/
Further reading[edit] | <urn:uuid:d0073177-bda7-4422-8257-77c1a6181c5a> | 3 | 2.90625 | 0.037846 | en | 0.94769 | http://en.wikipedia.org/wiki/History_of_the_Amiga |
Medical genetics of Jews
From Wikipedia, the free encyclopedia
The medical genetics of Jews is the study, screening, and treatment of genetic disorders more common in particular Jewish populations than in the population as a whole.[1] The genetics of Ashkenazi Jews have been particularly well-studied, resulting in the discovery of many genetic disorders associated with this ethnic group. In contrast, the medical genetics of Sephardic Jews and Mizrahi Jews are more complicated, since they are more genetically diverse and consequently no genetic disorders are more common in these groups as a whole; instead, they tend to have the genetic diseases common in their various countries of origin.[1][2] Several organizations, such as Dor Yeshorim,[3] offer screening for Ashkenazi genetic diseases, and these screening programs have had a significant impact, in particular by reducing the number of cases of Tay–Sachs disease.[4]
Genetics of Jewish populations
Different ethnic groups tend to suffer from different rates of hereditary diseases, with some being more common, and some less common. Hereditary diseases, particularly hemophilia, were recognized early in Jewish history, even being described in the Talmud.[5] However, the scientific study of hereditary disease in Jewish populations was initially hindered by scientific racism, which believed in racial supremacism.[6][7]
Ashkenazi diseases
The most detailed genetic analysis of the Ashkenazim to date was published in September 2014 by Shai Carmi and his team at Columbia University. The results of the study show that today's 10 million Ashkenazi Jews descend from a population of only about 350 individuals who lived about 600-800 years ago. That population derived from both Europe and the Middle East.[14] There is evidence that the population bottleneck may have allowed deleterious alleles to become more prevalent in the population due to genetic drift.[15] As a result, this group has been particularly intensively studied, so many mutations have been identified as common in Ashkenazis.[16] Of these diseases, many also occur in other Jewish groups and in non-Jewish populations, although the specific mutation which causes the disease may vary between populations. For example, two different mutations in the glucocerebrosidase gene cause Gaucher's disease in Ashkenazis, which is their most common genetic disease, but only one of these mutations is found in non-Jewish groups.[4] A few diseases are unique to this group; for example, familial dysautonomia is almost unknown in other populations.[4]
Genetic disorders common in Ashkenazi Jews[1]
Disease Mode of inheritance Gene Carrier frequency
Favism X-linked G6PD
Bloom syndrome Autosomal recessive BLM 1/100
Canavan disease Autosomal recessive ASPA 1/60
Congenital deafness Autosomal recessive GJB2 or GJB6 1/25
Cystic fibrosis Autosomal recessive CFTR 1/25
Haemophilia C Autosomal recessive F11 1/12
Familial dysautonomia Autosomal recessive IKBKAP 1/30
Familial hypercholesterolemia Autosomal dominant LDLR 1/69
Familial hyperinsulinism Autosomal recessive ABCC8 1/125–1/160
Fanconi anemia C Autosomal recessive FACC 1/100
Gaucher disease Autosomal recessive GBA 1/7–1/18
Glycogen Storage Disease type 1a Autosomal recessive G6PC 1/71
Mucolipidosis IV Autosomal recessive MCOLN1 1/110
Nonclassical 21 OHase deficiency Autosomal recessive CPY21 1/6
Parkinson's disease Autosomal dominant LRRK2 1/42[17]
Torsion dystonia Autosomal dominant DYT1 1/4000
Usher syndrome Autosomal recessive PCDH15 1/72
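The carrier frequencies in the table can be related to the expected incidence of an autosomal recessive disorder with a short calculation. The sketch below is not from the source; it assumes random mating and classical recessive inheritance, and uses the roughly 1-in-30 Tay–Sachs carrier rate cited later in the article:

```python
def expected_incidence(carrier_frequency):
    """Both parents must be carriers (carrier_frequency squared), and the child
    then has a 1-in-4 chance of inheriting two mutant copies."""
    return carrier_frequency ** 2 / 4

tay_sachs_carrier_rate = 1 / 30
print(f"about 1 in {1 / expected_incidence(tay_sachs_carrier_rate):,.0f} births affected")  # ~1 in 3,600
```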
Tay–Sachs disease
Tay–Sachs disease, a fatal illness of children that causes mental deterioration prior to death, was historically more prevalent among Ashkenazi Jews,[18] although high levels of the disease are also found in some Pennsylvania Dutch, southern Louisiana Cajun, and eastern Quebec French Canadian populations.[19] Since the 1970s, however, proactive genetic testing has been quite effective in eliminating Tay–Sachs from the Ashkenazi Jewish population.[20]
Lipid transport diseases
Gaucher's disease, in which lipids accumulate in inappropriate locations, occurs most frequently among Ashkenazi Jews;[21] the mutation is carried by roughly one in every 15 Ashkenazi Jews, compared to one in 100 of the general American population.[22] Gaucher's disease can cause brain damage and seizures, but these effects are not usually present in the form manifested among Ashkenazi Jews; while sufferers still bruise easily, and it can still potentially rupture the spleen, it generally has only a minor impact on life expectancy.
Ashkenazi Jews are also highly affected by other lysosomal storage diseases, particularly in the form of lipid storage disorders. Compared to other ethnic groups, they more frequently act as carriers of mucolipidosis[23] and Niemann–Pick disease,[24] the latter of which can prove fatal.
The occurrence of several lysosomal storage disorders in the same population suggests the alleles responsible might have conferred some selective advantage in the past.[25] This would be similar to the hemoglobin allele which is responsible for sickle-cell disease, but solely in people with two copies; those with just one copy of the allele have a sickle cell trait and gain partial immunity to malaria as a result. This effect is called heterozygote advantage.[26]
Some of these disorders may have become common in this population due to selection for high levels of intelligence (see Ashkenazi intelligence).[27][28] However, other research suggests no difference is found between the frequency of this group of diseases and other genetic diseases in Ashkenazis, which is evidence against any specific selectivity towards lysosomal disorders.[29]
Familial dysautonomia
Familial dysautonomia (Riley–Day syndrome), which causes vomiting, speech problems, an inability to cry, and false sensory perception, is almost exclusive to Ashkenazi Jews;[30] Ashkenazi Jews are almost 100 times more likely to carry the disease than anyone else.[31]
Other Ashkenazi diseases and disorders
Non-Ashkenazi disorders
In contrast to the Ashkenazi population, Sephardic and Mizrahi Jews are much more divergent groups, with ancestors from Spain, Portugal, Morocco, Tunisia, Algeria, Italy, Libya, the Balkans, Iran, Iraq, India, and Yemen, with specific genetic disorders found in each regional group, or even in specific subpopulations in these regions.[1]
Genetic disorders common in Sephardic and Mizrahi Jews[1]
Disease Mode of inheritance Gene or enzyme Carrier frequency Populations
Oculocutaneous albinism Autosomal recessive TYR 1/30 Morocco
Ataxia telangiectasia Autosomal recessive ATM 1/80 Morocco, Tunisia
Creutzfeldt–Jakob disease Autosomal dominant PRNP 1/24,000 Libya
Cerebrotendinous xanthomatosis Autosomal recessive CYP27A1 1/70 Morocco
Cystinuria Autosomal recessive SLC7A9 1/25 Libya
Familial Mediterranean fever Autosomal recessive MEFV 1/5–7 Libya, Morocco, Tunisia
Glycogen storage disease III Autosomal recessive AGL 1/35 Morocco
Limb girdle muscular dystrophy Autosomal recessive DYSF 1/10 Libya
Tay–Sachs Autosomal recessive HEXA 1/110 Morocco
Genetic disorders common in Mizrahi Jews[1]
Disease Mode of inheritance Gene or enzyme Carrier frequency Populations
Factor VII deficiency Autosomal recessive F7 1/40 Iran
Familial Mediterranean fever Autosomal recessive MEFV 1/5–1/7 Iraq, Iran, Armenia, North African Jews, Ashkenazi
Inclusion body myopathy Autosomal recessive GNE 1/12 Iran
Metachromatic leukodystrophy Autosomal recessive ARSA 1/50 Yemen
Phenylketonuria Autosomal recessive PAH 1/35 Yemen
Genetic testing in Jewish populations
One of the first genetic testing programs to identify heterozygote carriers of a genetic disorder was a program aimed at eliminating Tay–Sachs disease. This program began in 1970, and over one million people have now been screened for the mutation.[46] Identifying carriers and counseling couples on reproductive options have had a large impact on the incidence of the disease, with a decrease from 40–50 per year worldwide to only four or five per year.[4] Screening programs now test for several genetic disorders in Jews, although these focus on the Ashkenazi Jews, since other Jewish groups cannot be given a single set of tests for a common set of disorders.[2] In the USA, these screening programs have been widely accepted by the Ashkenazi community, and have greatly reduced the frequency of the disorders.[47]
The official recommendations of the American College of Obstetricians and Gynecologists is that Ashkenazi individuals be offered screening for Tay Sachs, Canavan, cystic fibrosis, and familial dysautonomia as part of routine obstetrical care.[48]
In the orthodox community, an organization called Dor Yeshorim carries out anonymous genetic screening of couples before marriage to reduce the risk of children with genetic diseases being born.[49] The program educates young people on medical genetics and screens school-aged children for any disease genes. These results are then entered into an anonymous database, identified only by a unique ID number given to the person who was tested. If two people are considering getting married, they call the organization and tell them their ID numbers. The organization then tells them if they are genetically compatible. It is not divulged if one member is a carrier, so as to protect the carrier and his or her family from stigmatization.[49] However, this program has been criticized for exerting social pressure on people to be tested, and for screening for a broad range of recessive genes, including disorders such as Gaucher's disease.[3]
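The matching logic described above can be illustrated with a small sketch. The IDs and data below are entirely hypothetical and this is not the actual Dor Yeshorim system; the point is only that the answer returned to a couple is a single yes/no, without revealing anyone's carrier status:

```python
# Hypothetical database: anonymous ID -> set of recessive disorders carried
carrier_db = {
    "A17-3342": {"Tay-Sachs"},
    "B08-9121": {"Tay-Sachs", "Canavan"},
    "C55-0007": set(),
}

def genetically_compatible(id1, id2, db):
    """Incompatible only if both people carry a mutation for the same disorder."""
    return not (db[id1] & db[id2])

print(genetically_compatible("A17-3342", "C55-0007", carrier_db))   # True
print(genetically_compatible("A17-3342", "B08-9121", carrier_db))   # False (both carry Tay-Sachs)
```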
See also
6. ^ a b Abel 2001, p. 7
9. ^ Paul DB, Spencer HG (December 2008). Keller, Evelyn Fox, ed. ""It's Ok, We're Not Cousins by Blood": The Cousin Marriage Controversy in Historical Perspective". PLoS Biol. 6 (12): 2627–30. doi:10.1371/journal.pbio.0060320. PMC 2605922. PMID 19108607.
14. ^ "Schuster, Ruth 'Ashkenazi Jews Descend From 350 People, Scientists Say:Geneticists Believe Community Is Only 600-800 Years Old' (Sept 9, 2014) The Jewish Daily Forward"
17. ^ Orr-Urtreger A; Shifrin C; Rozovski U et al. (October 2007). "The LRRK2 G2019S mutation in Ashkenazi Jews with Parkinson disease: is there a gender effect?". Neurology 69 (16): 1595–602. doi:10.1212/01.wnl.0000277637.33328.d8. PMID 17938369.
22. ^ "National Gaucher Foundation". Retrieved May 30, 2007.
31. ^ about one in 30 Ashkenazi Jews carry the disease, compared to 1 in 3000 of the general population
32. ^ Ashkenazi Jews and Colorectal Cancer: from The Chicago Center for Jewish Genetic Disorders
33. ^ Ashkenazi Disorders: Mendelian – Non-Classical Adrenal Hyperplasia
37. ^ Ashkenazi Jewish Diseases: Tufts Medical Center
44. ^ Glycogen Storage Disease Type Ia Mutation Analysis (Ashkenazi Jewish)"
Further reading[edit]
External links[edit] | <urn:uuid:683415de-f71b-4c55-b977-b8169b4c23d2> | 3 | 3.15625 | 0.356924 | en | 0.807346 | http://en.wikipedia.org/wiki/Medical_genetics_of_Jews |
Pressurized water reactor
From Wikipedia, the free encyclopedia
Nuclear Regulatory Commission image of pressurized water reactor vessel heads
An animation of a PWR power station with cooling towers
Pressurized water reactors (PWRs) constitute the large majority of all Western nuclear power plants and are one of three types of light water reactor (LWR), the other types being boiling water reactors (BWRs) and supercritical water reactors (SCWRs). In a PWR, the primary coolant (water) is pumped under high pressure to the reactor core where it is heated by the energy generated by the fission of atoms. The heated water then flows to a steam generator where it transfers its thermal energy to a secondary system where steam is generated and flows to turbines which, in turn, spin an electric generator. In contrast to a boiling water reactor, pressure in the primary coolant loop prevents the water from boiling within the reactor. All LWRs use ordinary water as both coolant and neutron moderator.
PWRs were originally designed to serve as nuclear marine propulsion for nuclear submarines and were used in the original design of the second commercial power plant at Shippingport Atomic Power Station.
PWRs currently operating in the United States are considered Generation II reactors. Russia's VVER reactors are similar to U.S. PWRs. France operates many PWRs to generate the bulk of its electricity.
The United States Army Nuclear Power Program operated pressurized water reactors from 1954 to 1974.
Three Mile Island Nuclear Generating Station initially operated two pressurized water reactor plants, TMI-1 and TMI-2.[1] The partial meltdown of TMI-2 in 1979 essentially ended the growth in new construction of nuclear power plants in the United States for two decades.[2]
The pressurized water reactor has three new evolutionary Generation III designs: the AP-1000, the VVER-1200, and the ACPR1000+.
Pictorial explanation of power transfer in a pressurized water reactor. Primary coolant is in orange and the secondary coolant (steam and later feedwater) is in blue.
Nuclear fuel in the reactor vessel is engaged in a fission chain reaction, which produces heat, heating the water in the primary coolant loop by thermal conduction through the fuel cladding. The hot primary coolant is pumped into a heat exchanger called the steam generator, where it flows through hundreds or thousands of tubes (usually 1.9 cm in diameter). Heat is transferred through the walls of these tubes to the lower pressure secondary coolant located on the shell side of the exchanger where the coolant evaporates to pressurized steam. The transfer of heat is accomplished without mixing the two fluids to prevent the secondary coolant from becoming radioactive. Some common steam generator arrangements are U-tubes or single-pass heat exchangers.[citation needed]
In a nuclear power station, the pressurized steam is fed through a steam turbine which drives an electrical generator connected to the electric grid for distribution. After passing through the turbine the secondary coolant (water-steam mixture) is cooled down and condensed in a condenser. The condenser converts the steam to a liquid so that it can be pumped back into the steam generator, and maintains a vacuum at the turbine outlet so that the pressure drop across the turbine, and hence the energy extracted from the steam, is maximized. Before being fed into the steam generator, the condensed steam (referred to as feedwater) is sometimes preheated in order to minimize thermal shock.[3]
The steam generated has other uses besides power generation. In nuclear ships and submarines, the steam is fed through a steam turbine connected to a set of speed reduction gears to a shaft used for propulsion. Direct mechanical action by expansion of the steam can be used for a steam-powered aircraft catapult or similar applications. District heating by the steam is used in some countries and direct heating is applied to internal plant applications.[citation needed]
Two things are characteristic for the pressurized water reactor (PWR) when compared with other reactor types: coolant loop separation from the steam system and pressure inside the primary coolant loop. In a PWR, there are two separate coolant loops (primary and secondary), which are both filled with demineralized/deionized water. A boiling water reactor, by contrast, has only one coolant loop, while more exotic designs such as breeder reactors use substances other than water for coolant and moderator (e.g. sodium in its liquid state as coolant or graphite as a moderator). The pressure in the primary coolant loop is typically 15–16 megapascals (150–160 bar), which is notably higher than in other nuclear reactors, and nearly twice that of a boiling water reactor (BWR). As an effect of this, only localized boiling occurs and steam will recondense promptly in the bulk fluid. By contrast, in a boiling water reactor the primary coolant is designed to boil.[4]
PWR reactor design
PWR reactor vessel
Light water is used as the primary coolant in a PWR. It enters the bottom of the reactor core at about 548 K (275 °C or 530 °F) and is heated as it flows upwards through the reactor core to a temperature of about 588 K (315 °C or 600 °F). The water remains liquid despite the high temperature due to the high pressure in the primary coolant loop, usually around 155 bar (15.5 MPa 153 atm, 2,250 psig). In water, the critical point occurs at around 647 K (374 °C or 705 °F) and 22.064 MPa (3200 PSIA or 218 atm).[5]
Main article: Pressurizer
Pressure in the primary circuit is maintained by a pressurizer, a separate vessel that is connected to the primary circuit and partially filled with water which is heated to the saturation temperature (boiling point) for the desired pressure by submerged electrical heaters. To achieve a pressure of 155 bar, the pressurizer temperature is maintained at 345 °C (653 °F), which gives a subcooling margin (the difference between the pressurizer temperature and the highest temperature in the reactor core) of 30 °C (54 °F). Thermal transients in the reactor coolant system result in large swings in pressurizer liquid volume, and the total pressurizer volume is designed to absorb these transients without uncovering the heaters or emptying the pressurizer. Pressure transients in the primary coolant system manifest as temperature transients in the pressurizer and are controlled through the use of automatic heaters and water spray, which raise and lower pressurizer temperature, respectively.[6]
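Using the figures above, the subcooling margin is simply the difference between the saturation temperature fixed by the pressurizer and the hottest coolant temperature in the core:

```python
t_saturation_c = 345.0        # pressurizer temperature, ~saturation at 155 bar (from the text)
t_hottest_coolant_c = 315.0   # hottest primary coolant temperature (from the text)

subcooling_margin = t_saturation_c - t_hottest_coolant_c
print(f"subcooling margin: {subcooling_margin:.0f} degC")   # 30 degC, as stated above
```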
The coolant is pumped around the primary circuit by powerful pumps, which can consume up to 6 MW each.[7] After picking up heat as it passes through the reactor core, the primary coolant transfers heat in a steam generator to water in a lower pressure secondary circuit, evaporating the secondary coolant to saturated steam — in most designs 6.2 MPa (60 atm, 900 psia), 275 °C (530 °F) — for use in the steam turbine. The cooled primary coolant is then returned to the reactor vessel to be heated again.
Pressurized water reactors, like all thermal reactor designs, require the fast fission neutrons to be slowed down (a process called moderation or thermalisation) in order to interact with the nuclear fuel and sustain the chain reaction. In PWRs the coolant water is used as a moderator by letting the neutrons undergo multiple collisions with light hydrogen atoms in the water, losing speed in the process. This "moderating" of neutrons will happen more often when the water is denser (more collisions will occur). The use of water as a moderator is an important safety feature of PWRs, as an increase in temperature may cause the water to expand, giving greater 'gaps' between the water molecules and reducing the probability of thermalisation—thereby reducing the extent to which neutrons are slowed down and hence reducing the reactivity in the reactor. Therefore, if reactivity increases beyond normal, the reduced moderation of neutrons will cause the chain reaction to slow down, producing less heat. This property, known as the negative temperature coefficient of reactivity, makes PWR reactors very stable. This process is referred to as 'Self-Regulating', i.e. the hotter the coolant becomes, the less reactive the plant becomes, shutting itself down slightly to compensate and vice versa. Thus the plant controls itself around a given temperature set by the position of the control rods.
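A minimal numerical illustration of the negative temperature coefficient described above; the coefficient value and reference temperature are arbitrary illustrative numbers, not real plant parameters:

```python
alpha_t = -2.0e-4   # assumed reactivity change per degC of coolant temperature (negative)
t_ref_c = 300.0     # assumed reference coolant temperature, degC

def feedback_reactivity(t_coolant_c):
    """Reactivity inserted by coolant temperature feedback alone."""
    return alpha_t * (t_coolant_c - t_ref_c)

for t in (295.0, 300.0, 305.0, 310.0):
    rho = feedback_reactivity(t)
    trend = "power rises" if rho > 0 else ("steady" if rho == 0 else "power falls")
    print(f"{t:6.1f} degC  feedback reactivity = {rho:+.1e}  -> {trend}")
```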
Heavy water has very low neutron absorption, so heavy water reactors tend to have a positive void coefficient, though the CANDU reactor design mitigates this issue by using unenriched, natural uranium; these reactors are also designed with a number of passive safety systems not found in the original RBMK design.
PWRs are designed to be maintained in an undermoderated state, meaning that there is room for increased water volume or density to further increase moderation, because if moderation were near saturation, then a reduction in density of the moderator/coolant could reduce neutron absorption significantly while reducing moderation only slightly, making the void coefficient positive. Also, light water is actually a somewhat stronger moderator of neutrons than heavy water, though heavy water's neutron absorption is much lower. Because of these two facts, light water reactors have a relatively small moderator volume and therefore have compact cores. One next generation design, the supercritical water reactor, is even less moderated. A less moderated neutron energy spectrum does worsen the capture/fission ratio for 235U and especially 239Pu, meaning that more fissile nuclei fail to fission on neutron absorption and instead capture the neutron to become a heavier nonfissile isotope, wasting one or more neutrons and increasing accumulation of heavy transuranic actinides, some of which have long half-lives.
Main article: Nuclear fuel
PWR fuel bundle: this fuel bundle is from a pressurized water reactor of the nuclear passenger and cargo ship NS Savannah, designed and built by the Babcock & Wilcox Company.
After enrichment, the uranium dioxide (UO2) powder is fired in a high-temperature sintering furnace to create hard, ceramic pellets of enriched uranium dioxide. The cylindrical pellets are then clad in a corrosion-resistant zirconium metal alloy (Zircaloy), and the rods are backfilled with helium to aid heat conduction and to detect leakages. Zircaloy is chosen because of its mechanical properties and its low absorption cross section.[9] The finished fuel rods are grouped in fuel assemblies, called fuel bundles, that are then used to build the core of the reactor. A typical PWR has fuel assemblies of 200 to 300 rods each, and a large reactor would have about 150–250 such assemblies with 80–100 tonnes of uranium in all. Generally, the fuel bundles consist of fuel rods bundled 14 × 14 to 17 × 17. A PWR produces on the order of 900 to 1,600 MWe. PWR fuel bundles are about 4 meters in length.[10]
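Putting the figures above together (an illustrative count; exact numbers vary by vendor and design): a 17 × 17 lattice has 289 positions, of which roughly 25 are taken by guide and instrument tubes rather than fuel, leaving about 264 fuel rods per assembly, so a large core of roughly 193 assemblies holds on the order of

$$193\ \text{assemblies} \times 264\ \text{rods/assembly} \approx 5.1\times 10^{4}\ \text{fuel rods}.$$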
Refueling for most commercial PWRs is on an 18–24 month cycle. Approximately one third of the core is replaced at each refueling, though some more modern refueling schemes may reduce refueling time to a few days and allow refueling to occur at shorter intervals.[11]
In PWRs, reactor power can be viewed as following steam (turbine) demand due to the reactivity feedback of the temperature change caused by increased or decreased steam flow. (See: Negative temperature coefficient.) Boron and control rods are used to maintain primary system temperature at the desired point. In order to decrease power, the operator throttles shut the turbine inlet valves, so less steam is drawn from the steam generators and the primary loop rises in temperature. The higher temperature causes the density of the primary reactor coolant water to decrease, which reduces moderation: neutrons stay at higher speeds, less fission occurs, and power output falls. This decrease of power will eventually result in primary system temperature returning to its previous steady-state value. The operator can control the steady-state operating temperature by addition of boric acid and/or movement of control rods.
Reactivity adjustment to maintain 100% power as the fuel is burned up in most commercial PWRs is normally achieved by varying the concentration of boric acid dissolved in the primary reactor coolant. Boron readily absorbs neutrons and increasing or decreasing its concentration in the reactor coolant will therefore affect the neutron activity correspondingly. An entire control system involving high pressure pumps (usually called the charging and letdown system) is required to remove water from the high pressure primary loop and re-inject the water back in with differing concentrations of boric acid. The reactor control rods, inserted through the reactor vessel head directly into the fuel bundles, are moved for the following reasons:
• To start up the reactor.
• To shut down the primary nuclear reactions in the reactor.
• To accommodate short term transients, such as changes to load on the turbine.
The control rods can also be used to compensate for nuclear poison inventory and for fuel depletion; however, these effects are more usually accommodated by altering the primary coolant boric acid concentration.
Advantages[edit]
• PWRs can passively scram the reactor in the event that offsite power is lost to immediately stop the primary nuclear reaction. The control rods are held by electromagnets and fall by gravity when current is lost; full insertion safely shuts down the primary nuclear reaction.
• PWR technology is favoured by nations seeking to develop a nuclear navy; the compact reactors fit well in nuclear submarines and other nuclear ships.
Disadvantages[edit]
• The coolant water must be highly pressurized to remain liquid at high temperatures. This requires high strength piping and a heavy pressure vessel and hence increases construction costs. The higher pressure can increase the consequences of a loss-of-coolant accident.[12] The reactor pressure vessel is manufactured from ductile steel but, as the plant is operated, neutron flux from the reactor causes this steel to become less ductile. Eventually the ductility of the steel will reach limits determined by the applicable boiler and pressure vessel standards, and the pressure vessel must be repaired or replaced. This might not be practical or economic, and so determines the life of the plant.
• Additional high pressure components such as reactor coolant pumps, pressurizer, steam generators, etc. are also needed. This also increases the capital cost and complexity of a PWR power plant.
• The high temperature water coolant with boric acid dissolved in it is corrosive to carbon steel (but not stainless steel); this can cause radioactive corrosion products to circulate in the primary coolant loop. This not only limits the lifetime of the reactor, but the systems that filter out the corrosion products and adjust the boric acid concentration add significantly to the overall cost of the reactor and to radiation exposure. In one instance, this has resulted in severe corrosion to control rod drive mechanisms when the boric acid solution leaked through the seal between the mechanism itself and the primary system.[13][14]
• Natural uranium is only 0.7% uranium-235, too low a concentration of the fissile isotope to sustain a chain reaction in a light-water reactor. This makes it necessary to enrich the uranium fuel, which significantly increases the costs of fuel production. The requirement to enrich fuel for PWRs also presents a serious proliferation risk.
See also[edit]
Next generation designs[edit]
1. ^ Mosey 1990, pp. 69–71
2. ^ "50 Years of Nuclear Energy". IAEA. Retrieved 2008-12-29.
3. ^ Glasstone & Senonske 1994, pp. 769
4. ^ Duderstadt & Hamilton 1976, pp. 91–92
5. ^ International Association for the Properties of Water and Steam, 2007.
6. ^ Glasstone & Senonske 1994, pp. 767
7. ^ Tong 1988, pp. 175
8. ^ Mosey 1990, pp. 92–94
9. ^ Forty, C.B.A.; P.J. Karditsas. "Uses of Zirconium Alloys in Fusion Applications" (PDF). EURATOM/UKAEA Fusion Association, Culham Science Centre. Retrieved 2008-05-21. [dead link]
10. ^ Glasstone & Sesonske 1994, pp. 21
11. ^ Duderstadt & Hamilton 1976, pp. 598
12. ^ Tong 1988, pp. 216–217
13. ^ "Davis-Besse: The Reactor with a Hole in its Head" (PDF). UCS -- Aging Nuclear Plants. Union of Concerned Scientists. Retrieved 2008-07-01.
14. ^ Wald, Matthew (May 1, 2003). "Extraordinary Reactor Leak Gets the Industry's Attention". New York Times. Retrieved 2009-09-10.
15. ^ Duderstadt & Hamilton 1976, pp. 86
External links[edit] | <urn:uuid:41030f1a-cfe8-4d5d-aeef-76ee684650f9> | 4 | 3.734375 | 0.441829 | en | 0.922302 | http://en.wikipedia.org/wiki/Pressurised_water_reactor |
Catholic Encyclopedia (1913)/University of Granada
From Wikisource
Granada, University of.—The origin of this university is to be traced to the Arab school at Cordova, which, when the city was captured by St. Ferdinand in 1236, was removed to Granada and there continued. When Granada in its turn fell into the hands of the Catholic sovereigns one of their earliest and chief cares was to secure the preservation of letters and the art of imparting knowledge, in which the Arabs had been so well-versed, and the school was taken under their protection. However, it did not receive the status of a university until the reign of Charles V, when a Bull of erection, dated 1531, was issued by Clement VII. The institution is endowed with privileges similar to those enjoyed by the Universities of Bologna, Paris, Salamanca, and Alcalá de Henares. The large building which it occupies was erected by the Jesuits and is admirably suited to its purpose. The curriculum covers a wide field, the faculties including those of law, medicine, social science, etc. The university has a seismological station in the observatory of Cartuja. The magnificent library contains 40,000 volumes, and includes a polyglot Bible, several valuable works of theology, and some Arabic MSS.
Blanche M. Kelly. | <urn:uuid:89375e02-45e4-4371-9c11-3152ca1adc1f> | 3 | 2.546875 | 0.022884 | en | 0.973859 | http://en.wikisource.org/wiki/Catholic_Encyclopedia_(1913)/University_of_Granada |
Page:Australian and Other Poems.djvu/29
From Wikisource
Upon the swelling, noisy waves intent,
That with a blustering and an awkward grace
Pay court where ocean comes to steal a glance,
I pictured thee a maiden fair, hard-wooed
By lover grey—a gallant poor in years.
But rich in gold and silver; ships that bear
From every clime their proper fruits and wares;
Spreading domains and stately mansions stored
With all the wealth of art. In the loud roar
The waves sent forth, methought I heard the tale
The lover told to win the blushing fair.
He spoke of bridal train that rich in robes.
Nor less in heartfelt joy, should lead the way.
When to the altar the bright concourse went.
By prancing steeds and glittering chariots borne.
He spoke of waiting train, of pomp, of show.
Of the high festival that frequent comes
Whereof his bride is queen ; and when his speech.
That wearied by its length and haughty sound,
Was done, the pompous lover vainly tried
To smile, and puffed his rosy cheeks that glowed
With tinge imparted by the viny juice.
Anon I gazed upon the placid bay,
That murmuring laves the circling beach that lies | <urn:uuid:f4c2be3c-beda-4119-8994-565bb5cbcf6f> | 2 | 1.882813 | 0.034488 | en | 0.892571 | http://en.wikisource.org/wiki/Page:Australian_and_Other_Poems.djvu/29 |
blood is thicker than water
Definition from Wiktionary, the free dictionary
blood is thicker than water
1. Family relations and loyalties are stronger than relationships with people who are not family members.
• 1866, Anthony Trollope, The Belton Estate, ch. 30,
Blood is thicker than water, is it not? If cousins are not friends, who can be?
• c. 1915, Lucy Fitch Perkins, The Scotch Twins, ch. 5,
The old clans are scattered now, but blood is thicker than water still, and you're welcome to the fireside of your kinsman!
Simile
One of the best ways to make someone understand a concept or have a better idea about the nature of something is to use a comparison. Comparisons are helpful because they can relate meanings by framing certain aspects of the objects being compared in terms with which the reader is familiar. For example, when shopping for a new car, a buyer might ask the salesperson how the car drives. The car salesman can say a lot of things about the car's maximum speed, handling, acceleration, and so forth, but those words and phrases have no meaning if the person buying the car has never driven a car before and has no context for comparison. Similarly, in literature, it is helpful to convey meaning and intent to the reader by use of comparisons. One of the most common figures of speech used to compare objects is the simile.
A simile is a figure of speech that uses the words "like" or "as" to compare two unlike objects. The purpose of the simile is to give information about one object that is unknown by the reader by comparing it to something with which the reader is familiar. For example, the simile, "Debbie is slow as a snail," gives the reader information about Debbie's slowness by comparing her to a snail, which is an animal known to be slow.
Similes can be either explicit or implicit depending on the way the simile is phrased. An explicit simile is a simile in which the characteristic that is being compared between the two objects is stated. The previous example, "Debbie is slow as a snail," is an explicit simile because it indicates what characteristic of Debbie and the snail is shared. An implicit simile is a simile in which the reader must infer what is being compared. For example, if the sentence read, "Debbie is like a snail," it is up to the reader to determine what is meant. Is the writer trying to say that Debbie is slow? Or is the writer saying that Debbie is slimy? Both of these characteristics are common to snails and could possibly provide information that pertains to Debbie, but without any other context, it is impossible to know what meaning the author intended.
Similes can be used in all kinds of writing but are especially effective in poetry and fiction, where they can be used to paint images and form pictures that carry more emotion than mere words can convey. However, a writer should guard against using familiar similes which may be considered cliché due to their overuse.
fat is healthy
crow89 posted:
I have been reading this board weekly for the past 6 years. What happened to the posters who are in favor of a balanced diet or a high-fat diet? Have you considered that it may be sugar, either in its natural form or as a carbohydrate, that causes disease and not fat? In fact, fat may be healthy. I would like to see more information posted from that perspective. Thanks, Crow
jc3737 responded:
I have thought about that possibility very often. But if carbs were a problem, how would you explain the Asian cultures that live almost totally on starches like rice, or the Mexicans who eat tons of beans? Even without medical care (like we have in the US), their longevity is equal to ours. So I agree that pure sugar (processed white sugar) is bad, but as for the sugar in fruits and vegetables or other carbs... I would need to see stronger evidence.
DoloresTeresa replied to jc3737's response:
How strong do you need the evidence to be? McDougall,s patients on a very low (almost no) fat diet reverse diabetes, other medical problems and lose weight. Esselstyn's patients who were at death's door not only got better but are still alive after 15 or 20 years, Fuhrman's patients on at least an ounce of nuts, seeds or avocados get better. Atkins in his second book, after claiming in his first book that his diet was THE diet for heart disease and diabetes, admits that his diabetic patients "adjust" (quotes his) to his diet and he adds STARCHES to his diet, and he himself was known to have heart disease.
Heretic, among others, eats a high fat diet but, from the ages of his children, he must be a younger man. Even people on the horrible SAD usually last through their fifties before the SAD catches up to them.
jc3737 replied to DoloresTeresa's response:
I was NOT asking for evidence from McDougall, Fuhrman or other such plant-based diets.
I don't know how old Heretic is but I think he is a good bit younger than we are... I'm 62. When I said I need to see stronger evidence, I meant stronger evidence from the POSTER that the sugar in fruit and carbs does any damage. I can see no evidence that the sugar in carbs causes damage. I was asking HIM for more evidence for HIS stance (or HER).
But Dr Davis says carbs cause glycation. I'm asking for evidence that carbs cause glycation. If they cause glycation, then I ask why the rural Chinese don't drop dead at early ages, since they eat almost 100% carbs.
It's too bad Heretic is not here to give us an opposing point of view. It's never good to get too self-satisfied that we have found all the answers. I'm still a strong believer in the Einstein/Popper method.
DoloresTeresa replied to jc3737's response:
Big data used for weather forecasting
The power of government big data: It's all in the use cases
As more people join the discussion about big data, and as they continue to evaluate what the concept means for government IT, chances are you will see the phrase "big data use case."
Understanding what is meant by a use case, and how use cases have a broader impact on government systems, is key when it comes to understanding why big data is different than traditional government data collection and processing.
Let's start with the phrase “use case” itself. The phrase has earned a place in software-engineering vernacular over the past 20 years. Originally, use cases served as a way to examine whether an IT system design met the real-world conditions for business steps and information flow.
A good example of a big data use case is the influx of sensor data collected by smart city applications, or by Defense Department perimeter sensors, surveillance videos and more. What do you do with all these new types of data? Think of a use case as a thought experiment detailing how data can be used, what business need can be met through that use and what needs to happen in order to make that use case a reality.
One major difference with big data use cases is that big data is often held in a central repository where it is made available to multiple applications. And in today’s third platform era, where big data, cloud mobile and social merge, a business case may extend across multiple applications.
diagram of big data use cases
As the illustration shows, a government office might have several concurrent big data use cases operating at once. Such use cases could include anything from running a large financial report to evaluating the efficiency of ongoing procurement efforts. The business processes associated with each use case can interact with one of more applications, and the individual applications could call to the same universal big data set that is available across the enterprise.
Some typical big data use cases are listed below.
Sensors: The National Weather Service collects terabytes of data from monitoring systems around the globe through the Joint Polar Satellite System (JPSS), which monitors environmental conditions, and the associated JPSS Common Ground System (JPSS CGS), which draws data from sensors and satellites.
This data is available through a central repository. Ongoing weather predictions could be one use case, and another could be long-term crop forecasts.
Entity analytics: Entity analytics looks for connections between entities, which can be a person, place, thing, location, transaction or one of many other data points. By sifting through billions of data points, analysts can tell if a house where suspicious activity is taking place is also the mailing address for a credit card that has been used to purchase suspicious items. Many connections can be made through applications that have access to the central collection of data entities. Different applications can support police reports, connection maps and fraud analytics.
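As a minimal sketch of the entity-analytics idea just described (one shared store of entity records queried by more than one application), consider the toy Python snippet below. Every name and record in it, from the records list to the sample address and the flagging threshold, is invented for illustration and does not refer to any real system or dataset.

```python
# Toy shared repository: each record links an entity to an attribute value.
records = [
    ("person:J. Doe",       "mailing_address", "12 Elm St"),
    ("credit_card:4411...", "billing_address", "12 Elm St"),
    ("incident:2014-0457",  "location",        "12 Elm St"),
]

def entities_sharing(attribute_value):
    """Return every entity whose record points at the same value
    (the core 'connection' step in entity analytics)."""
    return sorted({entity for entity, _, value in records if value == attribute_value})

# Use case 1: an investigator's connection map for one address.
print(entities_sharing("12 Elm St"))

# Use case 2: a fraud-analytics batch job can reuse the same store,
# e.g. flagging values linked to more than two distinct entity types.
def flagged_values(threshold=2):
    by_value = {}
    for entity, _, value in records:
        by_value.setdefault(value, set()).add(entity.split(":")[0])
    return [v for v, kinds in by_value.items() if len(kinds) > threshold]

print(flagged_values())
```

Both "applications" here call into the same central collection of records, which is the point of the diagram described above: the use cases differ, the repository does not.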
Compliance records and national resource controls: Managing national resources means measuring and evaluating large volumes of data related to land, water, soil, plants and animals. One good big data use case is the EPA's ECHO website, which provides integrated compliance and enforcement information for about 800,000 regulated facilities nationwide. With the data in ECHO, environmental managers can get a snapshot of a facility’s compliance record. Each of these types of reports may come from a different business application, but they can pull data from a central repository.
Health services: Data from hospitals, accident reports, disease control efforts and social service case files can quickly show which geographic areas or socioeconomic levels are over served or under served by current health efforts. The real power of big data comes when this information can be cross tabbed with the environmental data. Health-related cause and effect correlations are easier to hypothesize when relevant data is accessible.
These are a few examples of big data use cases that extend beyond the realm of stand alone applications. Big data can mean big changes for the way government conducts its research and its ongoing business. But in order to make that leap, understanding use cases can help IT managers look beyond their silos and their dedicated applications. Working groups are dedicated to this effort. NIST is even collecting examples of government big data use cases.
It's a big, increasingly interactive world out there for government IT. (And we have the big data to prove it.)
Abuse or exploration?
Originally Published: May 18, 2012
Dear Alice,
I am an 18-year-old girl and it is my first year in college. I am also involved in my first physically intimate relationship. At the beginning of the relationship, I was having a lot of problems with feeling sexually aroused and being physical with my boyfriend. It made me cry almost every time. Then, I remembered some experiences involving adult sexual behavior (both physical and conversations over the phone) with my best friend in first and second grade. I had not thought of these experiences in years, and the memories, even now, make me feel sad, scared, and sometimes guilty.
I remember being afraid to see her and being very upset as a child about what we did. I don't remember much, only very small snippets of what happened here and there. I went into therapy at school, and I can be intimate now without crying, but these memories still bother me and I just want to know what to call it. Is this child-on-child sexual abuse, or just little kids exploring?
Dear Reader,
It is brave of you to seek out information about these experiences and try to uncover how they may be affecting your sexuality currently. Whether or not you would define these experiences as abuse, they are clearly having an impact on you and they are worth exploring. Kudos to you for seeking out support.
The issue of children exploring sexuality is one that can be uncomfortable and confusing to talk about for many people. Children do have a natural curiosity about their own bodies and about others’ bodies. They do not experience sexual desire the way adults do, but many (if not most) enjoy touching their own bodies and being touched by others. Even newborn babies enjoy touching their genitals in pleasurable ways and show evidence of experiencing physiological arousal (vaginal lubrication and erection, for example). All throughout childhood, masturbation and sexual exploration with others of the same or different gender is quite common. Behaviors may include playing “doctor,” curiosity about “where babies come from,” “show me yours and I’ll show you mine” games, telling sexual jokes, and role playing relationships.
The fact that this is common and healthy does not mean, of course, that all children want to engage in sexual exploration with others or that all want to have the same type of experiences. It seems like in your situation, this may not have been something you wanted to do with your friend. Your question about whether or not it was abuse really depends on your definition. In legal terms, it probably would not be considered such without there being an age difference between you two. Others might argue that regardless of age, if your friend knew you were not interested but continued to engage with you in this way, then it was indeed abuse. Others would argue that if she didn't know, the fact that you did not want to do these things with her is enough to call it abuse. Whether or not you call it abuse, it clearly has affected you and you are right to seek out help in working through it.
There may be related explanations for your intimacy woes. For example, many people look back on childhood exploration and feel shame, guilt, or regret. Children who are “caught” by adults who are shocked or upset by the discovery of child sexuality especially may experience some strong messages about these activities being dirty, wrong, and shameful. If these messages are delivered in a harsh or judgmental manner, it may have a long-term impact on a person’s sexuality.
Has your boyfriend been understanding about your tears? Does he respect your boundaries and provide support? It may be difficult for those who do not have past experiences of abuse or trauma to understand these reactions. Remember you have a right to move as slowly into sexual intimacy as feels right for you. Avoid putting pressure on yourself to go further than feels right in the moment. Moving forward is completely up to you.
Props to you for working through these feelings of sadness, fear, and guilt. With time and support you should experience more joy from your relationships than anything else.
Take care, | <urn:uuid:893bedd7-e415-47ac-a0cb-29592d677219> | 2 | 1.78125 | 0.056996 | en | 0.974559 | http://goaskalice.columbia.edu/abuse-or-exploration |
Harry Potter Wiki
Papua New Guinea
Papua New Guinea
Location information
Southern Hemisphere
Papua New Guinea is a tropical island country located in Oceania. Its capital is Port Moresby. [1]
Magical creatures
Because of its tropical climate, Papua New Guinea is an ideal habitat for Lethifolds, carnivorous beasts similar to Dementors.
In 1782, wizard Flavius Belby became the first and only wizard to survive a Lethifold attack. This occurred in Papua New Guinea. [2]
The country acquired its name in the 19th century. The word "Papua" derives from Malay papuah describing the frizzy hair of Melanesians. "New Guinea" comes from the Spanish explorer Íñigo Ortiz de Retes, who noted the resemblance of the local people to those he had earlier seen along the Guinea coast of Africa. [3]
Notes and references
A Guide for the Perplexed
The medieval Jewish scholar Maimonides (the Rambam) wrote The Guide for the Perplexed, a philosophical guide to understanding the universe and religion. Using the original text as a backdrop, Dara Horn introduces the reader to Josie Ashkenazi, a brilliant Jewish computer guru who has invented a program called “Geniza” that records virtually everything you do, even your memories.
Josie has an enviable life. Not only is she wealthy, she has a good marriage to Israeli-born Itamar and a daughter – everything her sister, Judith, does not have. The sisters have a precarious relationship: Josie feels obligated to employ Judith at the company she founded, and Judith envies Josie’s life, so much so that Judith convinces Josie to accept an invitation to Egypt to show her software, knowing that Josie will be in danger if she goes because of the political unrest in that country.
On her business trip, Josie is kidnapped and believed dead, while, on the other side of the world, Judith inserts herself into Josie’s old life. When a mysterious text appears on Judith’s phone purporting to be from Josie, Judith must make a life-altering decision.
Woven within the novel is the backstory of Solomon Schechter, a real-life historical rabbi and scholar known for being the founder of Conservative Judaism and for uncovering thousands of pages of Hebrew manuscripts that were hidden in an Egyptian synagogue, known as the Cairo Geniza. Somehow Horn also manages to fit in a side story about the Rambam.
Horn brilliantly intertwines fact and fiction in this wholly engrossing, though complex, novel that blends such themes as the bonds of sisterhood, the risks of technology and the value of preserving ancient texts. Sometimes the book gets too bogged down in the intricacies of Josie's fictional computer program, but ultimately that does not detract from the overall reading experience.
(US) $25.95
(US) 9780393064896 | <urn:uuid:6a22aa82-71a7-4def-b378-d90dd19896d4> | 2 | 2.078125 | 0.19399 | en | 0.926002 | http://historicalnovelsociety.org/reviews/a-guide-for-the-perplexed/ |
Apparently another technology, Kinemacolor, was invented first in the UK, but it never caught on in Hollywood. Kinemacolor was expensive, and had some other glitches involved, but the fact that it was first on the color movie scene doesn't seem to have saved it. What are some of the differences between the two that might have contributed to Technicolor prvailing over Kinemacolor? Was Technicolor that much better, and easier to use?
The two technologies never directly competed (as the question implies). The last Kinemacolor film was made in 1914, while the first Technicolor film wasn't made until 1917, and it doesn't appear to have entered serious usage until the 1920's.
So the real question we are left with is why Technicolor succeeded, where the earlier technology failed.
The main clue I see in the Kinemacolor wiki page was that the company never made money (despite installing projectors in several hundred theaters), and some mention of the projectors being expensive:
However, the company was never a success, partly due to the expense of installing special Kinemacolor projectors in cinemas. Also, the process suffered from "fringing" and "haloing" of the images, an insoluble problem as long as Kinemacolor remained a successive frame process.
The Technicolor folks, coming along a decade later as they did, would have been in a position to learn from the financial mistakes of their predecessors. In particular, it appears they put a priority in coming up with a system that didn't require all those expensive projectors:
The difference was that the two-component negative was now used to produce a subtractive color print. Because the colors were physically present in the print, no special projection equipment was required and the correct registration of the two images did not depend on the skill of the projectionist.
How to write website content
Generally, website owners are overly focused on getting the best graphical layout for their site at all costs, but few go further to work on SEO and usability, and even fewer focus on creating website content. How can you create content that will convert? Read on.
There are a couple of things to consider before writing your site content:
• your customers' interests and needs - why do they buy your product/service?
• targeted keywords - which keywords drive the most sales/profit/sign-ups?
• the intent of the content: is it to inform your visitors or to sell?
Let's see why those aspects are important.
Focus on the customers' needs
Obviously, to make your site content effective you'll need to provide value to your visitors. To provide value to your customers, you need to know their needs so you can tell them how your product or service can make their lives better. This means that you'll need to focus on the benefits of your product or service, not on the features. The more benefits you describe, the more you connect your content to the needs, interests and reasons people buy from you, the more efficient your website content will be.
Target the right keywords
Another aspect of talking the same language with your customers is the words you use. To know that, you'll need to do some keyword research to understand how your customers think and how they relate to your product. When writing website content, you'll be aware what keywords to use.
Pick the right tone
And yet another important factor to remember is the tone of your content. The content tone depends on how well you know your customers, how you want to position yourself and the purpose the content serves: whether you need to prompt your customers to do something or you simply need to inform them of something.
Developing a fitting tone will make your content more compelling and more pleasant to read.
How to write?
Now that we know what to think about before writing the content, let's actually see how to do it.
First of all, you need to focus on the benefits of your product or service that make your customer buy from you. To do this, you'll need to empathise with your clients and think like them. Write how they can improve their lives with your help.
Secondly, you need to write naturally, just as you would talk to your friend who suddenly started considering your product or service. Then, after the copy (content) is written, you may add relevant keywords in it by replacing pronouns, such as 'it, he, she', with a more descriptive word, as well as using synonyms. The point here is to still sound natural (with readable content) and use keywords as well.
Of course, you'd rather stick to two or three (at most) synonyms throughout the piece, because you need to target the least possible amount of keyword phrases in it (that's another SEO story, though).
Thirdly, the tone of your content can make a huge difference. If your aim is to inform your site visitors, you may as well pick a less formal tone, add sparks of humour here and there, etc. If your site presents formal information, you may as well be more reserved.
Usually, an informative tone with a bit of personality and humour may be best for most sites, as it represents the tone people are familiar with in real life, and they can relate to the content author more closely.
In essence, the most important thing to keep in mind when writing your content is the value you provide to your site visitors. Your content is the main channel through which you can deliver that value, so it is in your best interest to create the most pleasant experience possible for your site visitors.
Read more about writing compelling copy and titles at CopyBlogger.
Canada’s Maximum 10-year Prison Term for Wearing a Mask at a Riot
Wearing a mask at a riot is now a crime
Mask ban: Canada’s veiled protesters face 10 years’ jail
A new Canadian law forbids people from wearing a mask or covering their face during a riot or so-called “unlawful assembly” in the country. The law carries a maximum ten-year sentence for anyone convicted of physically concealing their identity.
Current Canadian law already forbids covering the face during a criminal act, although CBC reported that the statute, which criminalizes “disguise with intent,” generally applies to robberies. Police departments across the nation have called on lawmakers to lower the burden of proof for investigators trying to prove a mask was used for the sole purpose of hiding a demonstrator's identity. Municipal authorities have also sought to stiffen penalties in the wake of recent violent riots in Toronto, Vancouver, Montreal and other cities.
You Can Protest, But Don’t Wear a Mask
In a move that comes with an unsettling brand of legislative style, tongue-in-cheek humor, the Canadian government passed a bill on Halloween that aims to outlaw masked protestors during riots or “unlawful assemblies.” In a way, it’s not surprising. Canada has had a tricky few years for riots and protest related carnage. As you may have seen in our documentary, the streets of Montreal were torn up this year by students, anarchists, and anarchist students who were protesting tuition hikes.
When the G20 summit came to Toronto in 2010, cop cars were burned, the Black Bloc smashed up retail windows, uninvolved civilians were held by police blockades in the rain, and many protestors were detained in a make-shift detention center on the east end of the city. People in Vancouver also got super upset when the Canucks blew it in the Stanley Cup finals. However, the move to ban masks entirely appears to be an unrealistic measure that will do more to prevent the freedom of protestors, than limit the amount of violence and anarchy on Canadian streets during trying political times. | <urn:uuid:92cd083b-b1ac-4d59-8e94-98ff0a4172bc> | 2 | 2.140625 | 0.372838 | en | 0.971582 | http://investmentwatchblog.com/canadas-maximum-10-year-prison-term-for-wearing-a-mask-at-a-riot/ |
I'd always assumed that if a cow died of any cause other than proper shechita (kosher slaughter), the meat is neveila. If it died by kosher slaughter but had already been seriously injured or diseased, it's treifa.
There's an "old manuscript Rashi" printed in some editions in the margins of Zevachim 70a that speaks of "a treifa that became a neveila", for which one could be punished for both.
Has anyone ever heard of this? Do other rishonim agree? It was news to me!
@WAF, can we do a different tag than just "yoreh-deah"? Maybe "kashrut-theory-yoreh-deah"? I understand why this question should be tagged somewhat differently than a practical kashrut question like "is hechsher ABC recommended". Thoughts? – Shalom Jan 21 '11 at 19:42
It looks like this is in fact the subject of a machlokes between R' Yochanan and Reish Lakish (Yerushalmi Nedarim 6:1 (26a)). R' Yochanan says that one who eats "a treifah that became neveilah" is indeed punishable for both prohibitions. (Although Korban Ha'eidah actually reverses the two opinions and attributes this view to Reish Lakish, since he says it depends on what verses these two prohibitions are derived from.)
Pnei Moshe there spells out that indeed according to this view, the fact that it became neveilah doesn't take away the animal's designation as treifah.
Imrei Baruch (at the foot of the page there) adds that this seems to depend on the question of whether the prohibition of treifah applies while the animal is still alive, and references Tosafos to Chullin 32a ד"ה ורמינהו (where indeed they say the same as Pnei Moshe) and a related sugya in Chullin 103a, where indeed Reish Lakish says that this prohibition applies only after the animal is slaughtered.
Any indication how we pasken? – Shalom Jan 21 '11 at 19:45
To get two according to Reish Lakish, must it have died of a separate issue than its original, treifa-inducing injury, or even if it died of that injury? – Shalom Jan 21 '11 at 19:46
Usually the halachah is like R' Yochanan against Reish Lakish, but I don't know whether that's true here. According to Korban Ha'eidah, that it depends on the respective pesukim - well, Rambam follows Reish Lakish in saying that these are two different verses (Hil. Maachalos Asuros 4:1,6), so maybe then the halachah would follow him. But according to everyone else, that the question is whether the animal's death removes the treifah designation - then Rambam (ibid. 5:5) describes a case where שני איסורין באין כאחת and therefore both apply, but that doesn't seem to be the case here. – Alex Jan 23 '11 at 18:34
So far I haven't found anything to suggest that it would make a difference whether it died of the treifah injury or something else. – Alex Jan 23 '11 at 18:34
How Is The Root File System Found?
One of the important kernel boot parameters is "root=", which tells the kernel where to find the root filesystem. It is commonly given as what looks like a standard Unix device pathname, for instance root=/dev/hda1. But standard Unix pathnames are interpreted according to currently-mounted filesystems. So how do you interpret such a root pathname, before you've even mounted any filesystems?
It took me a few hours to decipher the answer to this (the following applies at least as of the 2.6.11 kernel sources). First of all, at kernel initialization time, there is an absolutely minimal filesystem registered, called "rootfs". The code that implements this filesystem can be found in fs/ramfs/inode.c, which also happens to contain the code for the "ramfs" filesystem. rootfs is basically identical to ramfs, except for the specification of the MS_NOUSER flag; this is interpreted by the routine graft_tree in fs/namespace.c, and I think it prevents userland processes doing their own mounts of rootfs.
The routine init_mount_tree (found in fs/namespace.c) is called at system startup time to mount an instance of rootfs, and make it the root namespace of the current process (remember that, under Linux, different processes can have different filesystem namespaces). This routine is called at the end of mnt_init (also in fs/namespace.c), as part of the following sequence:
sysfs_init(); /* causes sysfs to register itself--this is needed later for actually finding the root device */
init_rootfs(); /* causes rootfs to register itself */
init_mount_tree(); /* actually creates the initial filesystem namespace, with rootfs mounted at "/" */
mnt_init is called from vfs_caches_init in fs/dcache.c, which in turn is called from start_kernel in init/main.c.
The actual interpretation of the root=path parameter is done in a routine called name_to_dev_t, found in init/do_mounts.c. This tries all the various syntaxes that are supported, one of which is the form "/dev/name", where name is interpreted by doing a temporary mount of the sysfs filesystem (at its usual place, /sys), and then looking for an entry under /sys/block/name (done in the subsidiary routine try_name in the same source file). name_to_dev_t is called from prepare_namespace, which in turn is called from init in init/main.c. This routine is spawned as the first process on the system (pid 1) by a call to kernel_thread in rest_init, which comes at the end of the abovementioned start_kernel.
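To make the lookup concrete, here is a rough userspace sketch in Python of the same idea that try_name implements: strip the /dev/ prefix from a root=-style argument and look for a matching entry, with its major:minor "dev" file, under /sys/block. This is deliberately simplified, handles only the plain /dev/name form, and is an illustration of the lookup rather than a rendering of the actual kernel code.

```python
import os
import re

def resolve_root_like(root_arg, sysfs="/sys"):
    """Rough userspace sketch (not kernel code) of the /dev/<name> branch of
    the kernel's root= handling: look the name up under /sys/block and read
    its 'dev' file, which holds the major:minor pair."""
    if not root_arg.startswith("/dev/"):
        raise ValueError("this sketch only handles the /dev/name form")
    name = root_arg[len("/dev/"):].replace("/", "!")   # sysfs encodes '/' as '!'

    candidates = [os.path.join(sysfs, "block", name, "dev")]
    m = re.match(r"(.*\D)(\d+)$", name)                # e.g. 'sda1' -> disk 'sda'
    if m:
        candidates.append(os.path.join(sysfs, "block", m.group(1), name, "dev"))

    for dev_file in candidates:
        if os.path.exists(dev_file):
            with open(dev_file) as f:
                major, minor = f.read().strip().split(":")
            return int(major), int(minor)
    return None   # the kernel would fall back to its other root= syntaxes here

# Example: on a typical Linux system this prints something like (8, 1).
print(resolve_root_like("/dev/sda1"))
```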
start_kernel is the very last routine called in the boot sequence after the kernel gets control from the bootloader (in arch/i386/kernel/head.S for the i386 architecture). It never returns, because the very last thing it does after all the initialization is call cpu_idle, which runs an endless loop for soaking up CPU time as long as the CPU doesn't have anything else to do (like run a process or service an interrupt).
The Kidney Diseases Dictionary: R - W
renal (REE-nuhl):
renal agenesis (REE-nuhl) (ay-JENuh-siss):
the absence or severe malformation of one or both kidneys.
renal artery stenosis (REE-nuhl) (AR-tur-ee) (steh-NOH-siss):
narrowing of the artery that supplies blood to the kidney, often resulting in hypertension and kidney damage.
renal cell carcinoma (REE-nuhl) (sel) (KAR-sih-NOH-muh):
a type of kidney cancer.
renal cysts (REE-nuhl) (sists):
abnormal fluid-filled sacs in the kidney that range in size from microscopic to much larger. Many simple cysts are harmless, while other types can seriously damage the kidneys.
renal tubular acidosis (REE-nuhl) (TOO-byoo-lur) (ASS-ih-DOHsiss):
a defect in the kidneys that hinders their normal excretion of acids. Failure to excrete acids can lead to weak bones, kidney stones, and poor growth in children.
renal vein thrombosis (REE-nuhl) (vayn) (throm-BOH-siss):
blood clots in the vessel that carries blood away from one of the kidneys. This condition can occur in people with nephrotic syndrome.
renin (REE-nin):
previous page Q R S T U V W X Y Z
semipermeable membrane (SEM-ee-PUR-mee-uh-buhl) (MEM-brayn):
[Figure: cross-section of a semipermeable membrane in a dialyzer. Waste products move from the blood compartment through holes in the membrane into the dialysis solution compartment, while blood cells bounce off the membrane and remain in the blood compartment. Caption: Semipermeable membrane.]
struvite stone (STROO-vyt) (stohn):
a type of kidney stone caused by infection.
transplant (TRANZ-plant):
placement of a healthy organ into the body to take over the work of a damaged organ. A kidney transplant may come from a living donor, often a relative, or from someone who has just died.
tubule (TOO-byool):
one of millions of tiny structures within the kidneys that collect urine from the glomeruli.
UACR:
see urine albumin-to-creatinine ratio.
ultrasound (UHL-truh-sound):
urea (yoo-REE-uh):
urea reduction ratio (URR) (yoo-REE-uh) (ree-DUHKshuhn) (RAY-shee-oh):
uremia (yoo-REE-mee-uh):
ureteroscope (yoo-REE-tur-ohskohp):
ureters (YOOR-uh-turz):
tubes that carry urine from the kidneys to the bladder.
urethra (yoo-REE-thruh):
urinary tract infection (UTI):
an illness caused by harmful bacteria growing in the urinary tract.
urinate (YOOR-ih-nayt):
urine (YOOR-in):
urine albumin-to-creatinine ratio (UACR) (YOOR-in) (al-BYOOmin) (too) (kree-AT-ih-neen)(RAY-shee-oh):
urolithiasis:
the condition of having stones in the urinary tract.
URR:
see urea reduction ratio.
UTI:
see urinary tract infection.
vascular access (VASS-kyoo-lur) (AK-sess):
vasculitis (VAS-kyoo-LY-tiss):
inflammation of the blood vessel walls. This swelling can cause rash and disease in multiple organs of the body, including the kidneys.
vasopressin (VAY-soh-PRESS-in):
see antidiuretic hormone.
vein (vayn):
a blood vessel that carries blood to the heart.
vesicoureteral reflux (VESS-ih-kohyoo-REE-tur-uhl) (REE-fluhks):
an abnormal condition in which urine backs up into the ureters, and occasionally into the kidneys, raising the risk of infection.
void:
to urinate; to empty the bladder.
Wegener's granulomatosis (VUHG-uh-nurz) (GRANyoo-loh-muh-TOH-siss):
an autoimmune disease that damages the blood vessels and causes disease in the lungs, upper respiratory tract, and kidneys.
Linux API headers
Dejan Čabrilo dcabrilo at
Mon Jul 9 11:17:04 PDT 2007
Hello all,
I have tried my best to research this, but came up with no answer
myself, so let me turn to the mailing group instead. The question is
lengthy, but I'm puzzled and there seems to be a lack of documentation,
so I can't put it concisely. Please bear with me:
I installed LFS (Version SVN-20070706) and everything is good. However,
I would now like to use a package manager to pack all the packages to
make my life easier. So, I essentially need to recompile every package
from Chapter 6 of the book, as I already have the system on my machine.
If I try to do:
make mrproper
make headers_check
make INSTALL_HDR_PATH=/usr headers_install
the system goes to hell, glibc becomes unusable, compilers fail sanity
checks, etc, and I can't compile anything (thanks god for jhalfs).
The warning in the book reads in Chapter 8 section 3 (installing the
"The headers in the system's include directory should always be the
from this Linux kernel tarball. Therefore, they should never be
replaced by either the raw kernel headers or any other kernel
sanitized headers."
So, my questions are:
1a) What does the warning in the SVN book mean by "the ones against
which Glibc was compiled ... the sanitized headers from this Linux
kernel tarball"?
1b) Why can't I reuse what is installed by make headers_install target?
I.e. why did "make headers_install" work in chapter 6, and it doesn't
work after everything was installed (I can't compile anything anymore)?
2) Do those headers have to match the version of the kernel in use? What
happens if they don't? I suppose I would have to rebuild glibc, but
would other software be affected by the headers change if glibc remained the same?
Whoever got this far gets a beer next time they are in Belgrade :) I
hope this all makes some sense...
Looking forward to responses!
From: Justin James <[email protected]>
Date: Tue, 5 Aug 2008 10:24:42 -0400
Message-ID: <06a601c8f707$00710de0$015329a0$@com>
> Behalf Of Julian Reschke
> Sent: Tuesday, August 05, 2008 6:53 AM
> To: Ian Hickson
> Cc: 'HTML WG'
> SVGWG SVG-in-HTML proposal)
> >>> prefixes inordinately confusing, they add a level of indirection
> where
> >>> none is
> >> People also find class names for CSS confusing.
> >
> > Indeed. Let's learn from our mistakes instead of adding more.
> So, out of curiosity, what would be a better design for CSS?
People who don't understand CSS classes use the much maligned inline styles.
And frankly, I really don't know why people on this list have such a problem
with inline styles. My view of HTML is that it is where everything comes
together and gets late-bound. Inline styles is just "later-bound" than an
external stylesheet, since it can override it. But I digress...
> > How do you propose to have a distributed extension system with URNs,
> if
> > you're not using the domain name system to guarantee uniqueness?
> Aren't
> > you just trading one central repository (the HTML WG) for another
> (the URN
> > registry)? Could you elaborate on how you see this working?
If the URI/URN (or whatever UR*) class names are not being de-referenced,
then who *cares* if there could be a clash somewhere? It is irrelevant, so
long as the CSS tree for the current document does not have any clashes. And
if it were, who cares? Because CSS handles multiple definitions just fine,
the one "closer" to the tag (externally defined, then internally defined,
then inline style) overrides identical attributes while allowing
non-identical attributes to inherit up.
So I really am not sure why you guys are so worried about clashing class
names, it seems like a non-problem to me. Am I missing something?
This concept of mandating that the HTML author have "authority" over a
URI/URN that they are using as a class name is not working for me. This is
the second time that you've mentioned it, but I really do not understand:
* How do you want to define "having authority over"?
* How do you handle someone importing a CSS stylesheet from a URI that they
do *not* "have authority over*, such as is the case when using a public
widget library?
Sorry to just jump into the middle of this conversation like this...
Received on Tuesday, 5 August 2008 14:25:45 UTC
| <urn:uuid:e4bc01f3-8a94-4802-a29e-b0a8208637f6> | 2 | 1.523438 | 0.088842 | en | 0.881667 | http://lists.w3.org/Archives/Public/public-html/2008Aug/0166.html |
Subject: CVS commit: src/sys/dev/pci
To: None <>
From: Frank van der Linden <>
List: source-changes
Date: 11/11/2003 22:28:58
Module Name: src
Committed By: fvdl
Date: Tue Nov 11 22:28:58 UTC 2003
Modified Files:
src/sys/dev/pci: if_bge.c
Log Message:
From FreeBSD:
* erratum: disable the nocrc RX bit, as it may cause problems on the 570{1-4}.
adjust the length of the incoming packet accordingly to trim it.
* the 5704 has a smaller MBUF_POOL, so set a smaller value
Local change:
* Pass the autoneg force flag to mii_attach. Some PHYs need to be kicked
out of their falsely autoneged 10baseT state with this.
To generate a diff of this commit:
cvs rdiff -r1.53 -r1.54 src/sys/dev/pci/if_bge.c
Please note that diffs are not public domain; they are subject to the copyright notices on the relevant files.
Subject: gcc3 patches
To: None <>
From: Manuel Bouyer <>
List: tech-pkg
Date: 02/23/2004 18:57:06
here are various patches against the current gcc3 packages and dependancy,
which do various things:
- upgrade to gcc 3.3.3 (the most important, 3.3.2 can't build qt3-tools
on solaris because of what looks like a codegen bug)
- remove /usr/local/{include,lib} from the default search path, because this
breaks a lot of things when LOCALBASE=/usr/local
- handle non-empty ${GCC_VERSION} better (but in its current form breaks with
an empty ${GCC_VERSION}).
Comments ?
Manuel Bouyer <>
NetBSD: 26 years of experience will always make the difference
Feature photo: REUTERS/Mike Hutchings / Photo above: kanaka
With super advanced equipment, tow-in access, and internet swell tracking, a growing number of surfers are getting rides on incredibly powerful waves.
What makes a wave dangerous? Is sheer size an accurate indicator for how hazardous a surf spot is? Read on for our roundup of the top ten most dangerous waves in the world.
1. Cyclops (remote south coast Western Australia)
This ultra square-shaped, below sea level, one-eyed monster tops the list for good reasons. It’s impossible to paddle into on a surfboard and almost unrideable towing behind a jet ski.
If you blow a wave here you’ll be washed straight onto the dry rocks, which is a bummer because the nearest medical help is hours away.
Photo: REUTERS/Mike Hutchings
2. Teahupoo (Tahiti)
The scary thing about Teahupoo (pronounced Cho-poo) is that as the swell gets beyond 10 feet the wave doesn’t so much get taller, it just gets more enormous, often looking like the entire ocean is peeling over with the lip.
Falling off here is almost a guarantee of hitting the razor sharp coral reef below, which wouldn’t be so bad if the locals didn’t insist on using fresh Tahitian lime juice to sterilise the reef cuts. Ouch.
3. Shipsterns (Tasmania, Australia)
Set along a remote length of pristine Tasmanian coastline, you could almost call this area picturesque if the wave itself wasn’t so ugly.
Raw Antarctic swells come out of deep ocean and jack up into a roaring righthander in front of the cliff which gives the spot its name. The uneven reef causes weird steps and bubbles in the wave, which are always an unpleasant surprise when you’re still trying to navigate the drop down the face.
Photo: jurvetson
4. Dungeons (Cape Town, South Africa)
It’s not that shallow and it doesn’t break in front of any rocks, but it is located off the tip of South Africa in the freezing Southern Ocean in shark-infested waters. Dungeons regularly holds waves up to 70 feet, which is why organisers have chosen to put on the annual Big Wave Africa contest here since 1999.
5. Pipeline (Oahu, Hawaii)
The shallow lava reef that shapes Pipe’s famous round tube is actually full of trenches and bumps — meaning a nasty old time for anyone falling out of the lip from 12 feet above. Which happens with surprisingly regularity, even to the experienced locals.
Perhaps almost as dangerous are the insane crowds that flock to Pipe any time it gets good, with fearless Hawaiians competing with pros, wannabes, and tourists for the set waves.
| <urn:uuid:3634d324-fba0-4ba5-89b6-403357c7eab5> | 2 | 2.359375 | 0.122913 | en | 0.914379 | http://matadornetwork.com/trips/top-10-most-dangerous-waves-in-the-world/ |
Q&A #820
Teachers' Lounge Discussion: Explaining the relationship between the degree of a function and its shape
From: David Richards <[email protected]>
To: Teacher2Teacher Public Discussion
Date: 2007/12/17 12:48:57
Subject: The shape of a function and its degree
All even degree polynomials have the overall shape of a parabola. They all enter in one direction and leave in the opposite direction. All odd degree polynomials have the overall shape of a line. They enter and exit in the same direction. The leading term determines the shape and the constant determines the y-intercept. All the junk in between determines when and how many times the polynomial will bounce in the middle.

Every polynomial has a reflection point. In an odd polynomial you can draw a line through the reflection point parallel to the legs of the graph. The graph will be above or below the line to the left of the reflection point and will flip to the other side of the line on the right. In an even polynomial the reflection point is the central vertex of the parabola. The graph will be increasing or decreasing on the left of the reflection point and will flip directions on the right. I'm not sure how to find the reflection points. In a second degree polynomial the x-coordinate is -b/(2a). I don't know how to find it for higher degree polynomials, but I'm sure someone has figured it out.

This is the elementary stuff I always point out to my Algebra students when we start graphing polynomials. You know what the general shape of the polynomial will be just from looking at the degree. The sign on the leading term tells you if the polynomial is increasing or decreasing. If it's positive for an even degree polynomial you get a U; if it's negative you get an upside down U. If the polynomial is odd and the leading term is positive you get an increasing line; if it's negative you get a decreasing line. You can always plot the y-intercept just from looking at the constant or lack thereof (if there is no constant the polynomial goes through the origin). You can tell at most how many times it will bounce by counting the changes in sign as you read the polynomial from left to right.
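One way to see the end-behavior rules concretely is to evaluate a polynomial far to the left and far to the right and check which way each leg points. A rough Python sketch (the coefficients below are arbitrary examples, not taken from the discussion above):

    import numpy as np

    def end_behavior(coeffs):
        # coeffs are listed highest degree first, e.g. [2, 0, -3, 1] = 2x^3 - 3x + 1
        p = np.poly1d(coeffs)
        left, right = p(-1e6), p(1e6)
        return ("up" if left > 0 else "down", "up" if right > 0 else "down")

    # Even degree, positive leading term: both legs point up (the U shape).
    print(end_behavior([1, 0, -4, 0, 2]))   # degree 4 -> ('up', 'up')

    # Odd degree, positive leading term: enters low on the left, exits high on the right.
    print(end_behavior([2, 0, -3, 1]))      # degree 3 -> ('down', 'up')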
© 1994- Drexel University. All rights reserved. | <urn:uuid:47009b92-7292-4512-bdd3-066b2b632404> | 4 | 3.5625 | 0.070411 | en | 0.863769 | http://mathforum.org/t2t/discuss/message.taco?thread=820&n=1 |
What is this thing that we see in films, where to break into a car you stick in a long blade with a kind of hook on the end and pull up to open the door? Was it ever possible to open a car door like that? It seems like a huge design flaw that would have been quite easy to correct. Are our cars that vulnerable?
2 Answers
That's generally called a "slim jim". They certainly work on most older cars. Newer cars are more difficult, but it just takes different kinds of tools.
Slim jims are illegal in some jurisdictions (in the USA, at least), although in some states you can own them as long as you don't use them :)
If you ever lock your keys in your car, and call AAA or a locksmith, they'll likely use modified slim jims to easily get into your car. It only takes a minute or two.
Now that automobile ignition systems are so complex (with electronic keys, etc.), it's difficult for a thief to steal a car even if he gets the door open. But, if the thief just wants to take something from your car, he can always just break a window. Vulnerability is in the eye of the beholder!
For older cars, securing them really wasn't considered that much of a big deal (similarly, safety wasn't that important either!)
Until relatively recently, the residents of the town I grew up in wouldn't lock their cars, and sometimes would leave the keys in the ignition, so that they could be moved easily if someone needed them to get past.
So the mechanism that locked the door was not protected physically or electronically - and a slim instrument (the Slim Jim) was all you needed to get down past the rubber window seal to catch one of the mechanism rods and pull.
share|improve this answer
Your Answer
| <urn:uuid:16d3878b-75eb-4d43-97a1-75dacd07d885> | 2 | 1.914063 | 0.980496 | en | 0.966283 | http://mechanics.stackexchange.com/questions/9622/sticking-a-blade-between-door-and-window-to-open-the-door |
Ladd Irvine
Senior Faculty Research Assistant
Phone: (541) 867-0394
Lead field expeditions lasting up to 6 weeks to conduct satellite-monitored radio tagging of large whales. Field duties include tag deployment, small boat handling, planning search areas/strategy, coordinating pre-and post-cruise logistics.
Assist with the development and testing of new satellite tag designs.
Analyze tracking and dive behavior data from tagged whales.
Present the results of tracking studies in both written (peer-reviewed journal or reports) and oral (presentations at scientific meetings) forms.
Educational Background:
M.Sc. Biological Oceanography, Oregon State University, 2008
B.S. Biology, University of Puget Sound
Professional Preparation
Graduate Research Assistant, Marine Mammal Institute, 2004–2007
Intern and research technician, Marine Mammal Program, 1999–2004
Research Interests/Area of Expertise:
My current research interests include characterizing the distribution and seasonal occurrence of satellite-tagged whales and identifying heavily used areas. This type of information can then be combined with remotely sensed environmental data like sea surface temperature and chlorophyll concentration to model habitat preferences of the whales. This type of information can be used to help predict where whales might occur, which would be a valuable resource for people trying to help endangered populations recover.
I am also interested in studying the diving behavior of whales using time depth recorders. The fact that whales spend so much time underwater prevents us from using one of the most basic research methods available to scientists: observation. Time depth recorders allow us to monitor the whales as they dive, allowing us to answer basic questions like how long/deep/frequently do whales dive and also see how their diving behavior changes over time, and as they move. This information allows us to identify different behaviors like foraging or searching/traveling and use that information to identify important habitat for the whales. | <urn:uuid:12592d06-353b-477c-beef-1b66fbda37f6> | 2 | 1.789063 | 0.037155 | en | 0.870632 | http://mmi.oregonstate.edu/ladd-irvine |
Monday, February 18, 2008
Bigfoot in West Virginia?
Bigfoot makes the news again.
AWT said...
Excellent post... considering that it's from the "mainstream" media, the article seemed unusually objective. I was particularly struck by the sighting made by the "bat crew" near the mine. (I'm betting that they were environmental consultants contracted by a natural resources management agency.) I too study bats and when out collecting data at night w/my colleagues have heard them joke about possibly seeing Bigfoot (UFO's too). I guess that it's really like playing the odds... you spend enough time in the backcountry at night, your probability of encountering such fauna (that normally don't make themselves conspicuous by day) increases. Found this out personally last summer when returning from such a trip I saw a black bear ambling across the road... it was my first one to see despite having spent many daylight hours in bear country beforehand.
I'm surprised that the crew reported seeing the Bigfoot, that's not something that increases your chances of getting hired to do scientific studies of wildlife. Considering this, I wonder how many such sightings go unreported.
Nick Redfern said...
Yeah, the article was strikingly balanced - and very welcome too for a change! | <urn:uuid:e915bfd7-408f-4fdd-bce9-e4b01c5d494c> | 2 | 1.75 | 0.041232 | en | 0.972147 | http://monsterusa.blogspot.com/2008/02/bigfoot-in-west-virginia.html |
Monday, July 13, 2009
Why most reviews online are biased and curved (and therefore a SCAM)
Did you ever notice that most online reviews are curved to the positive? Here's why.
Most reviews let you rate a product, service, or company with a rating from 1 to 5. Sounds fair, right? Wrong!!!!
Take this example:
We'll talk about reviews for this widget.
Mister Aye thinks it is the best thing to come out since toilet paper. He quickly gives it a rating of 5.
Miss Bee also bought this product and it broke before she got home. She thinks it's useless and a total waste of money. She gives it a rating of 1.
Mrs. Sea also buys the product and it simply won't do what widgets are supposed to do. What a piece of crap. She gives it a rating of 1.
The average rating is (5 + 1 + 1) / 3 = 7 / 3 ≈ 2.33. The site so nicely shows a rating of 2.5 stars (rounded to the nearest half).
When Mr. Dee comes to the site and checks out the rating, it's 2.5 stars out of 5 stars. Okay, he says, half the people like it half don't, an average product. He then decides to buy it, or not.
Mr. Dee was SCAMMED by the site. For every one person that likes it, there are two people that think it's junk. The true rating should be about 1.67 ((5 + 0 + 0) / 3 = 5 / 3 ≈ 1.67). I think Mr. Dee may have decided against the purchase had he seen 1.5 stars instead of the 2.5 shown.
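Here is the same arithmetic as a small Python sketch (the three ratings are the hypothetical reviews above; the "adjusted" figure simply counts a 1-star "this is junk" review as zero, which is where the lower number comes from):

    ratings = [5, 1, 1]   # Mister Aye, Miss Bee, Mrs. Sea

    site_average = sum(ratings) / len(ratings)   # (5 + 1 + 1) / 3 = 2.33, displayed as 2.5 stars

    # Count each 1-star "total junk" review as zero instead.
    adjusted = sum(0 if r == 1 else r for r in ratings) / len(ratings)   # (5 + 0 + 0) / 3 = 1.67

    print(round(site_average, 2), round(adjusted, 2))   # 2.33 1.67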
In other words, stars are partly a SCAM!!!!!!!!!
Sunday, July 12, 2009
Earn 4 year Bachelor Degree quickly, easily, and for CHEAP - about $5000 or even as low as $3615 (for NJians)
Oh, and did I mention without sitting through a single class.
Yeah, you're figuring it's another one of those hoaxes. Well, it's not. I am not selling anything here or offering anything here. All I'm doing is giving you the information that you need to get that degree you need/want. And, believe it or not, I have no gain in this. I just want to help out. I know it's a rarity these days, but here I am.
Here goes:
Many people are having a hard time getting a job because they have no Bachelor's degree. Some people are getting passed over for promotions because there's a degree requirement. What are you to do? You don't want to spend $15,000 getting your degree. Also, you don't have time to sit in class or the patience to do the same. Hmm..... is there another alternative? Yes there is and I'll show you how to do it.
First of all, the breakdown that will follow below is for someone that has no previous college credits. Zero. If you have some, you will have an easier , shorter and cheaper time getting your degree.
To get this degree, you have to be ready to learn enough on many topics to get a passing mark on an exam on that topic. We're talking CLEP exams. You may have heard of it before. All it is is a multiple choice exam (usually 100 questions) that covers one topic. The cost is $72 per exam plus you'll pay a proctor fee, which ranges from $20 to $35 depending where you take it. There's a bunch of info available online, if you just use the good old google.
Here's a tip. There's a site called InstantCert. For a minimal price you become a member. This gives you access to practice multiple choice questions on a huge variety of CLEP/Dantes exams. From past experience, I have found that using InstantCert will sufficiently prepare you to pass the CLEP exams. I would strongly advise checking it out if you're gonna go this route.
Another part of the trick here is to get free electives through FEMA. They offer small courses online for free. Each short course has an open-book final made up of 25-50 multiple choice questions. For each course you pass, subject to limitations, you'll get one college credit. These courses take very little time and will give you 24 of the 120 credits you'll need for the degree.
Here's the whole story. Thomas Edison State College, which is a fully accredited college in New Jersey, will let you earn a degree even if you don't take a single course with them. If you fulfill all the requirements for a degree wherever however, they'll let you transfer in the credits and earn the degree. You must, however, enroll in the college, which is half the cost of this degree.
The breakdown of the costs are as follows:
Application fee $75 - this pays for your evaluation which evaluated your previous studies/transcripts
Annual enrollment fee (which your only gonna do for one year obviously) - $2520
Annual Technology fee - $103
Graduation fee (which is gonna be pretty much right away) - $247
So far that's a total of $2945
Now comes the fun. You are going to take 21 exams, each giving you 3 or 6 credits, for a total of 96 credits. The other 24 credits will be through FEMA. If you estimate the cost of each exam at $100 (it may be a bit less for you), that's another $2100.
That brings you to a total of $5045 (See the end of the article to see how it may be way cheaper for you.)
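The same arithmetic as a quick Python sketch (the fees are the figures quoted above; the $100 per exam is a rough estimate that bundles the $72 CLEP fee with a $20-$35 proctor fee):

    application_fee = 75
    annual_enrollment = 2520    # out-of-state rate; New Jersey residents pay 1390
    technology_fee = 103
    graduation_fee = 247

    exams = 21
    cost_per_exam = 100         # rough estimate per CLEP/Dantes exam

    college_fees = application_fee + annual_enrollment + technology_fee + graduation_fee
    total = college_fees + exams * cost_per_exam
    print(college_fees, total)  # 2945 5045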
PLEASE NOTE: DISCLAIMER: I have not verified all this information with Thomas Edison State College. If you go this route, you may want to apply, get an evaluation, and make sure this is going to work. I just know that I tested out of 15 courses for my degree, and don't see why this shouldn't work for you.
Below, I have broken down the requirements for the Bachelor's Degree in Liberal Studies at Thomas Edison State College, and how you'll fulfill those requirements by exam.
A. English Composition (6 credits)
• CLEP English Composition with Essay (6 credits)
B. Humanities (12 credits)
• CLEP American Literature (6 credits)
• CLEP Humanities - General (6 credits)
C. Social Sciences (12 credits)
• CLEP Social Sciences and History General (6 credits)
• CLEP American Government (3 credits)
• CLEP Human Growth and Development (3 credits)
D. Natural Sciences and Mathematics (12 credits)
• CLEP College Mathematics General (6 credits)
• CLEP Natural Sciences - General (6 credits)
E. General Education Electives (18 credits)
• CLEP Chemistry (6 credits)
• CLEP Biology (6 credits)
• CLEP English Literature (6 credits)
F. Liberal Studies (33 credits)
(Only 2 courses can be at the 100 level)
(Must be at least two areas)
• CLEP Analyzing and Interpreting Literature (6 credits)
• CLEP Introduction to Educational Psychology (3 credits)
• Dantes Introduction to Computing (3 credits)
• Dantes Ethics in America (3 credits)
• Dantes Environment and Humanity: The Race to Save the Planet (3 credits)
• Dantes Lifespan Developmental Psychology (3 credits)
• Dantes Organizational Behavior (3 credits)
• Dantes Drug and Alcohol Abuse
• Dantes Technical Writing
• Dantes Human/Cultural Geography
G. Free Electives (27 credits)
• Dantes Foundations of Education (3 credits)
• Free FEMA online courses - 24 courses - 1 credit each course (24 credits)
Total credits: 120 credits
Total number of CLEP/Dantes exams: 21 exams
Please note: There are many more CLEP and Dantes exams available that will do the job as well. You should check out all the exams, see which would be easier for you etc. I have, however, used all the 6 credit exams in the calculations (see below for an exception).
Now in the title I wrote "or for as low as $3615". How do I explain that? It's actually quite simple. If you live in New Jersey, as I do, then instead of paying $2520 for enrollment, you pay only $1390. That takes $1130 off the costs, bringing it down to $3915.
Now if you know Spanish well, or French or German, there is a CLEP exam worth 12 credits that will replace one of the 3 credit exams I counted. This will take $300 off the costs (3 exams less), bringing the cost down to $3615
Please leave comments and suggestions if you have any.
Wednesday, July 8, 2009
I want to try an experiment and I'm willing to spend $50 on it. I am running a "contest", if you can call it that. I am trying to see how quickly I can get 30,000 unique visitors to this 'basically empty site' by offering a small amount of money to one lucky 'winner'.
Here how it works.
All you have to do is enter your email address and click submit. It can be an address that goes to your garbage mail or wherever. I am really not interested in the address. I will be counting unique IP addresses to this site, or however google analytics categorizes unique visitors. Once the 30,000 mark is hit, I will randomly choose one of the people and email them that they won. They'll let me know how they want me to pay them. Paypal, check in the mail, or whatever other method they so choose.
Here's the catch. How can I prove that I'll actually send the money to the 'winner'. I can't. I give my word, but that's it. If you don't want to enter an email address, no problem, You can click away from this 'site' the same way you came. This is an experiment and if it fails, so be it. I also give my word that I will not use your email address for any other purpose other than to let you know you've won.
Also, you should know that I don't expect to ever reach the 30,000 mark. If we reach it, I'll pay out, but I doubt it'll ever come to that. That's why it's an experiment. Experiments usually fail.
If you decide to trust me, or you have an address where all the garbage email goes to anyway, enter it here. Please tell others about this page, so that we can reach the 30,000 mark quickly. If you have any ideas how I can let people know about this page, you can enter it in the comment box.
Tuesday, February 5, 2008
SQL Basics - Creating and Changing Tables (Last Lesson)
Up until now, we dealt with retrieving, inserting, and manipulating data. Today, we'll discuss manipulating the table itself.
To create a new table, we use the CREATE TABLE clause. When we create the table, we must specify the name of the new table, and the name and datatypes of the columns, with each column separated by a comma. For example, to create the table 'OrderItems' that we've been working with, we do the following:

CREATE TABLE OrderItems
(
order_item INTEGER NOT NULL,
prod_id CHAR(10) NOT NULL,
quantity INTEGER NOT NULL DEFAULT 1,
item_price DECIMAL(8,2) NOT NULL
);
As you can see, we specified the datatype, and if it allows a NULL value or requires you to givwe a value. You can also define a default value, as we've done with the quantity column. This creates the table we have used in this tutorial.
To make changes to an existing table we use the ALTER TABLE clause. Different applications have very different rules as to when you're allowed to alter a table, and with what information. Refer to your application documentation for your specific case. If and when you do add a column to the table, here's how it's done:

ALTER TABLE OrderItems
ADD payment_method CHAR(20);
and when your boss is ready to kill you for what you did, you can remove a column:

ALTER TABLE OrderItems
DROP COLUMN payment_method;
Now your boss is ready to fire you for wasting time, so you can really mess him up by deleting the whole table:

DROP TABLE OrderItems;

That's by far the easiest thing to do so far. I wonder why that is. The guy that wrote this language must've had loads of scores to settle.
Contact Form | <urn:uuid:5a180921-75a4-48c5-b669-9a5ded453503> | 2 | 1.820313 | 0.037359 | en | 0.941382 | http://n00bhacker.blogspot.com/ |
Tuesday, February 12, 2013
Alcohol Based Markers Vs. Water Based Markers
What are Alcohol Based Markers?
Photo and art courtesy of Heidi Black, markers are mine.
A lineup of alcohol based markers showcasing the diverse selection in nibs and brushes.
Having soldiered my way through several reviews of these things, I realized that many in my audience may not know what I mean by the term 'alcohol based marker'. Alcohol based markers differ from water based markers in that the color (dye or pigment) is suspended in an alcohol, rather than water. This means that alcohol based markers are not water soluble, but may be alcohol soluble. Alcohol based markers tend to be permanent, and you can use them to mark on just about anything. I know several cosplayers who use Sharpies or Copics to add color to their wigs, in fact.
Alcohol based markers tend to perform much better than water based markers, though I must admit, my experience with water based markers is pretty much limited to Crayola and its ilk, or to watercolor markers.
For many artists, 'alcohol based marker' is synonymous with Copic marker, although Copic is just one brand among many. Prismacolor, Chartpak, Letraset, and many other companies make alcohol based markers. Even the ubiquitous Sharpie is an alcohol based marker, although for artistic purposes, it's not archival and will eventually destroy the paper it's on.
There's a variety of uses for alcohol based markers, and for each use, there seems to be a marker that suits that need. From stamping to fine illustration, graffiti to card makining, there's plenty of options to choose from.
Alcohol based markers tend to cost more than waterbased markers, particularly if the comparison is between school-grade markers and illustration grade markers. School grade markers, which are often priced below a dollar per marker, are rarely sold open stock, are not designed to be archival, and are not really intended for professional artist use. They are not refillable, and the nibs are not replaceable (nor very sturdy), and the inks not color fast. Alcohol based markers, originally designed to facilitate graphic and concept artists in generating mock ups, are intended to be archival. You probably can achieve some very impressive effects with school grade markers, but it would take a lot of artistic experience, trial, and effort.
I've been using alcohol based markers (Copic Sketch primarily, before that, Prismacolor) for a little under six years. I use them primarily for commission and illustration work, and my technique leans toward my penchant for unsaturated color and watercolor-esque effects. Of course, they're not limited to that. Alcohol based inks come in a wide variety of colors, hues, and saturations, and one is not limited to watercolor mimicry. You can achieve some very bold effects with alcohol based markers.
What originally drew me to alcohol based markers was the fact that they could be blended, unlike water based markers. Both marker types are capable of overlapping color, but with alcohol based markers, you can blend two dissimilar colors utilizing either a blender marker, rubbing alcohol, or a color between the two. With alcohol based markers, it's easier to avoid the streaky color fields that all grade-school marker enthusiasts are familiar with. To do so, you can 1. saturate the paper with blender before applying your color, 2. saturate your paper with the color you intend to use, or 3. blend out the streaks. If you were to attempt similar effects with a water based marker, you'd have to give each application time to dry, or the water saturation would make the paper weak.
Because alcohol based markers can be blended, you actually need fewer colors than you would with water based markers, which cannot be blended. A well planned set of alcohol based markers can go a long way if strategically used. | <urn:uuid:3ef23d45-8d0e-4c59-ab8d-c96921dcb514> | 2 | 2 | 0.311557 | en | 0.96831 | http://nattosoup.blogspot.com/2013/02/alcohol-based-markers-vs-water-based.html |
The redshift z of an object is the fractional doppler shift of its emitted light resulting from radial motion
z = \frac{\nu_e - \nu_0}{\nu_0} = \frac{\lambda_0 - \lambda_e}{\lambda_e} \qquad (8)
where nu0 and lambda0 are the observed frequency and wavelength, and nue and lambdae are the emitted. Redshift is related to radial velocity v by
1 + z = \sqrt{\frac{1 + v/c}{1 - v/c}} \qquad (9)
where c is the speed of light. Many feel that it is wrong to view relativistic redshifts as velocities (eg, Harrison 1993), but I simply do not agree. The difference between an object's measured redshift and its cosmological redshift is due to its (radial) peculiar velocity; ie, we define the cosmological redshift as that part of the redshift due solely to the expansion of the Universe, or Hubble flow. For small v / c, or small distance d, in the expanding Universe, the velocity is linearly proportional to the distance (and all the distance measures, eg, angular diameter distance, luminosity distance, etc, converge)
z \approx \frac{v}{c} = \frac{d}{D_H} \qquad (10)
where DH is the Hubble distance (see above). But this is only true for small redshifts!
(It is very important to note that galaxy redshift surveys, when presenting redshifts as radial velocities, always use the non-relativistic approximation v = c z, even when it is not appropriate physically; eg., Fairall 1992.)
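For a rough sense of the difference, a small Python sketch comparing the non-relativistic approximation v = cz with the special-relativistic relation of equation (9):

    c = 299792.458  # speed of light in km/s

    def v_nonrelativistic(z):
        return c * z

    def v_relativistic(z):
        # invert 1 + z = sqrt((1 + v/c) / (1 - v/c))
        s = (1.0 + z) ** 2
        return c * (s - 1.0) / (s + 1.0)

    for z in (0.01, 0.1, 1.0):
        print(z, round(v_nonrelativistic(z)), round(v_relativistic(z)))

    # At z = 0.01 the two agree to better than a percent; by z = 1 the naive
    # v = cz gives the speed of light, while the relativistic value is 0.6c.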
In terms of cosmography, the cosmological redshift is directly related to the scale factor a (t), or the ``size'' of the Universe. For an object at redshift z
1 + z = \frac{a(t_0)}{a(t_e)} \qquad (11)
where a (t0) is the size of the Universe at the time the light from the object is observed, and a (te) is the size at the time it was emitted.
Redshift is almost always determined with respect to us (or the frame centered on us but stationary with respect to the microwave background), but it is possible to define the redshift z12 between objects 1 and 2, both of which are cosmologically redshifted relative to us: the redshift z12 of an object at redshift z2 relative to a hypothetical observer at redshift z1 < z2 is given by
1 + z_{12} = \frac{a(t_1)}{a(t_2)} = \frac{1 + z_2}{1 + z_1} \qquad (12)
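In code, the same relation might look like this minimal sketch:

    def relative_redshift(z1, z2):
        # redshift of an object at z2 as seen by an observer at z1 < z2
        return (1.0 + z2) / (1.0 + z1) - 1.0

    print(relative_redshift(0.5, 2.0))   # an observer at z = 0.5 sees the z = 2 object at z12 = 1.0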
| <urn:uuid:50c879d9-9672-4074-ae6a-76a254660c04> | 4 | 3.734375 | 0.115063 | en | 0.911817 | http://ned.ipac.caltech.edu/level5/Hogg/Hogg3.html |
Polish CERT acts against Virut malware with domain takedowns
CERT Polska, a computer emergency response team in Poland that is run under the aegis of the country's Research and Academic Network (NASK), has announced takedown action against a raft of web servers associated with the Virut family of malware.
Most zombies rely on connecting to so-called C&C (command-and-control) servers to find out what to do next.
So taking over some or all of those servers can make a big difference, at least temporarily, to the crooks' ability to operate their botnets. Every infected PC that crooks can no longer send on a criminal mission represents lost opportunity and lost revenue, and that hits them where it hurts: the pocket. | <urn:uuid:6d56eb13-b967-4465-9766-f27800d043bc> | 2 | 1.75 | 0.090085 | en | 0.917856 | http://news.hitb.org/content/polish-cert-acts-against-virut-malware-domain-takedowns |
National Geographic News
Pebble toad in Tepui Mountains.
South America's tepuis are populated with unique animals like the pebble toad.
Photograph by Joe Riis, National Geographic
Brian Clark Howard
National Geographic
Published September 28, 2013
Tepuis—high sandstone mesas that erupt from surrounding rain forest in southern Venezuela and part of adjacent Guyana and Brazil—have captivated scientists for centuries. Remote, ancient outcrops that soar up to 10,000 feet (3,000 meters) high, tepuis have long been thought of as crucibles of evolution. Recent research confirms their biological importance, but also shatters the most romantic notions about them.
Tepuis are widely cited as the inspiration for Sir Arthur Conan Doyle's The Lost World, and it's true that Victorian-era scientists had hoped to find intact prehistoric ecosystems on their summits, possibly complete with dinosaurs. That idea was even revisited in the 2009 Disney/Pixar film Up, in which a young boy and an elderly man meet a prehistoric-looking, flightless bird after their balloon-lofted house crashes on a tepui.
The filmmakers of Avatar "may have gotten the idea for those islands from tepuis, except they made them float," Bruce Means, a herpetologist and ecologist at Florida State University in Tallahassee, who studies the tabletop mountains, told National Geographic.
Tepuis are the real "islands in the sky," Means said, because their height—typically 5,000 to 10,000 feet (1,500 to 3,000 meters)—effectively cuts them off from the surrounding rain forest. That means they host unique habitats and unique creatures.
The rock that makes up tepuis is thought to be 1.6 to 1.8 billion years old, said Means, who has visited the mountains with support from National Geographic. Part of a vast region called the Guiana Shield, the mesas were uplifted around 40 to 50 million years ago, and then the surrounding rock eroded away. Scientists had assumed that erosion would have left plants and animals stranded on the summits, cut off from the surrounding landscape for tens of millions of years.
"So you would expect tepuis to have very ancient creatures," said Means. "It turns out that many of them do." The story turns out to have more twists than once thought, however, a realization revealed by Means and other researchers.
On Top of the Tepuis
The summits of tepuis are barren, rocky places, said Means. There is very little soil, because heavy tropical rains wash away everything that isn't solid rock. Trees and other vegetation can take hold only in protected cracks and gullies.
Lowland tropical forest surrounds the base of each tepui. Farther up, the vegetation that clings to their rocky sides changes, becoming cloud forest.
Even higher, at about 1,000 feet (300 meters), the mix of tropical animals and plants that can survive changes dramatically, said Means.
On a tepui summit, a scientist might find one or two species of frogs. In the cloud forest zone on the sides of the mountain, there might be 30 to 40 species. Those areas are poorly studied, because the only way to reach them is to climb, said Means. "Many have genera on them that may not be found on any other tepui, let alone anywhere else in the world," said Means, who added that tepui summits—at least in Venezuela—have been better studied than the slopes, because scientists have landed on several in helicopters.
The Mystery of Pebble Toads
To try and gauge exactly how cut off tepui dwellers are from their surroundings below, Means and colleagues homed in on pebble toads—small, colorful amphibians that are covered with bumps. Part of the genus Oreophrynella, pebble toads are an ancient group that split off from other toads many millions of years ago, during the time of the supercontinent of Gondwana, said Means.
Some of their closest relatives now live in cliffs in Africa, he added, an echo of the time when that continent was joined to South America. "Pebble toads have an extraordinary hind foot that has been modified for climbing," said Means. "They are like hands. Nothing else in the frog world is like it."
It is that climbing ability that has allowed pebble toads to be arboreal. "You don't expect to see a toad in the trees," said Means.
Over the past few years, Means has been visiting tepuis in Guyana to collect pebble toads. "We looked at their DNA and found that the differences between [frogs on] tepuis [are] very small, not the differences you would expect from animals that have been isolated from each other for a long time," Means said.
He concluded that there must have been more gene flow up and down tepuis than scientists had previously thought. Pebble toads on the summit had diverged from those at lower elevations "only a few tens of thousands of years ago, if that," said Means.
Means said it's possible that the small toads could climb along cracks in the mesas, even for thousands of feet. Climate change over the past few tens of thousands of years could have spurred them along, he suggested.
"I suspect that even if one of these frogs fell off a tepui, and fell a few thousand feet, it still might survive if it lands in the bushes, because they are very light," said Means.
In addition to pebble toads, Means has discovered an entirely new family of frogs on tepuis, with DNA unlike anything seen before. "It is a living fossil connecting frog families that have tadpoles with those that bypass the tadpole stage in the egg," said Means. He has also identified what he believes will turn out to be a new species of lizard.
Rich Plant Life
Erin Tripp, a botany professor at the University of Colorado, Boulder, who is studying the biodiversity of tepuis, added that the mountains tend to foster plants and animals with extreme adaptations. "Tepuis are some of the wettest places on Earth, with some getting up to 27 feet [9 meters] of rain per year," she said, noting that the wettest parts of continental North America get about 10 feet (3 meters) of rain a year.
"Their nearly pure-sand soils are extremely nutrient deficient," Tripp added about tepuis. "So what can grow in that environment? Your average petunia would be dead in 24 hours."
With support from National Geographic and others, Tripp has collected museum specimens and genetic samples from thousands of plants from northern South America, in an effort to help scientists understand the role that tepuis play in plant evolution.
Kenneth Wurdack, a botanist at the Smithsonian's National Museum of Natural History, has studied tepuis in Guyana and Venezuela, most recently in summer 2012 on a National Geographic/Waitt grant to visit Kamakusa Mountain with Tripp and others. Wurdack's scientist father had been part of the "high tepui-exploration period" of the 1950s, so he grew up fascinated with them.
"No helicopters could be hired in the 1950s. They had to climb up, so those trips would go on for months," Wurdack said.
Scientists started to map tepuis from the air in the late 1940s, said Wurdack. At that point they had been largely unexplored. Native people avoided them because of their harsh climate and rugged terrain, and because of local superstitions. But through the 1950s and 1960s, expeditions were launched from the U.S. to survey tepui biodiversity. In the 1970s, Venezuelan scientists started leading trips up the mesas. By the 1980s, scientists had started using helicopters to access the summits, although some tepuis have yet to be visited by any human beings.
Wurdack said that recent science suggests a complex picture of biodiversity on tepuis. Many of the mountains do have their own unique species, some of which seem to be most closely related to species on nearby tepuis. That's similar to the way species are distributed across island chains, he noted.
Tepuis also have species that are related to flora and fauna in the lowlands. "On tepuis the climate is harsher, so the plants evolved thick leaves or hairs to protect them," said Wurdack. He pointed to types of Podocarpus, an evergreen conifer that is sometimes used as a houseplant, as one common fixture on tepui heights.
The largest tepuis, such as Auyan-tepui in Venezuela—which hosts the spectacular Angel Falls (the world's tallest waterfall)—tend to have the highest number of unique species. (See: "Venezuela Photos.") Such endemic species are most likely to be living things that are less mobile, especially plants, but also small animals like reptiles and amphibians, Wurdack noted. There are very few, if any, birds unique to individual tepuis, and the small mammals thought to live on them are not well known.
Challenges and Threats
Studying tepuis has often been challenging politically, as well as logistically, Wurdack added. In the 1990s, South American governments restricted access to the region after the movie Arachnophobia was released "because they thought it portrayed the area in a negative light," he said.
And after Patrick Tierney's controversial 2000 book Darkness in El Dorado, Venezuela imposed further strict restrictions on scientists trying to study the tepuis. The book, though now largely discredited, had accused anthropologists of performing unethical experiments on and spreading measles to indigenous people in the tepui region in the 1960s.
But the effort required to study tepuis is worth it, said Wurdack, who noted, "Island-type systems like tepuis are very interesting as crucibles of evolution."
Tripp added that part of the scientific appeal of tepuis is they are relatively pristine, so researchers can study an ecosystem that is largely undisturbed by human activity.
Despite their remoteness and hardiness, plants and animals on tepuis do face challenges, especially from climate change, warned Wurdack. Each living thing is adapted to a particular zone. Plants and animals may be able to move up or down the mountain to some extent when the climate changes, but eventually they will run out of places to go.
Means added that some tepuis also face development pressure. Many are well protected in Venezuela, as part of a national park system. But in Guyana, there is less oversight.
"Tepuis can contain gold and diamonds, and people are tearing up the landscape to get at them," said Means. "They use hydraulic hoses, which can blast away rain forest, and they use mercury to amalgamate the gold, and it gets in the water."
He added that there is a lot of land that remains untouched, but that might not always be the case.
Wurdack said the film Up may inspire a new generation of scientists to explore tepuis, the way he had been inspired by The Lost World as a child. "Up did a good job of depicting the landscape, with weird sandstone outcrops and limited vegetation," he said.
Follow Brian Clark Howard on Twitter and Google+.
Jared Parrish
I remember the movie Arachnophobia, I didn't see it as a negative movie. I thought it would be fascinating to find a new species of spider. However, I can see how this type of movie was used to scare people. Perhaps, I was more focused on the opening scene. But in reality, spiders are important for our ecosystem. I believe these types of movies, especially the Lost World, Up, and others can inspire people and enhance public interest in our wonderfully amazing world. Let us continue to preserve our world from those that seek to destroy in the name of greed.
Superstitions are a good thing!! As long as people feared who knows what evil spirit, they left all these marvels in peace!! Thanks to the SCIENTISTS, they understood there was nothing to fear, and that there was PERHAPS something to be gained; they rolled up their sleeves, seized their weapons of biodiversity destruction and charged head first into a natural world that had until then been LEFT ALONE. Poor tepuis.
Samantha Rutherford
Why must a beautiful place be destroyed because of gold or diamonds? Can't it stay the way it is?
Michael Lentz
It would be interesting to have autonomous mini-rover cameras that could negotiate cliff faces and trees, to remotely capture images. Of course, they should not appear too appetizing to the local fauna.
Noel Johnson
I, too, have been fascinated by the Tepui country since I was a little kid. I read the book "Green Mansions" by William Henry Hudson; saw the movie "The Lost World" (and the recent version).. on and on. This region is important on SO many levels, and in SO many belief systems. What about the Indigenous legend of "Cuyaquiaré".... the Man-Lizard People... said to live in the MANY cavern systems in the area.. especially near Cerro Autana.. a tall, narrow tepui with an open cavern that completely perforates the tepui, about 2/3rds of the way up. The cavern is big enough to fly a small plane or helicopter all the way through Autana !
Danton Shepherd
I wonder if any budding scientists from the University of Guyana are studying these fascinating 'islands'.
| <urn:uuid:3040dff7-f846-4263-a249-67829da2b324> | 4 | 3.53125 | 0.057425 | en | 0.94549 | http://news.nationalgeographic.com/news/2013/09/130928-tepuis-pebble-toads-biodiversity-evolution-science/ |
Mayo Clinic News Network
Posted by Shawn Bishop (@Shawngbishop) · Mar 4, 2011
Self-care Steps Can Help Keep Blood Pressure in Normal Range
March 4, 2011
Dear Mayo Clinic:
Lately when I have my blood pressure checked it is slightly higher than it has been over the years. How can I keep my blood pressure in a healthy range? What are ideal levels?
Your situation is normal. Quite commonly, blood pressure rises with age. For example, even if your blood pressure is within the normal range when you're 40 years old, there's a 50 percent chance it will be higher than normal when you reach 65.
Blood pressure is a measure of the pressure in your arteries as your heart pumps. Testing your blood pressure is an important way for your doctor to monitor your general health. A high blood pressure reading may signal that you're at increased risk for a heart attack or stroke.
Generally, normal blood pressure is less than 140/90 when it's taken in a doctor's office. If you monitor your blood pressure at home, normal is a little lower at 135/85. Visiting a doctor may be a bit stressful for some people, and stress can sometimes raise blood pressure. The higher normal level at the doctor's office takes that into account. People at the lowest risk of stroke and heart attack have blood pressure readings less than 120/80. These are the normal levels used for healthy people. The target for patients younger than 80 years old who are taking medication to regulate their blood pressure is less than 140/90.
For someone in your situation who has noticed a slight rise in blood pressure, you can take self-care steps that may help keep your blood pressure within the normal range.
First, watch what you eat and drink. Limit the amount of salt in your diet. Shoot for no more than 2,000 milligrams of sodium per day. You should carefully read food labels and recognize that "high salt foods" are those with more than 250 milligrams of sodium per serving. Try and choose foods you like with less sodium per serving than that. Focus on eating healthy foods, including lots of fruits, vegetables and whole grains. Drink no more than one alcoholic beverage a day, and keep your daily caffeine intake to less than four units. (One cup of coffee or one can of caffeinated soda is one unit.)
Second, maintain a healthy weight. If you're overweight, losing just 5 to 10 pounds can have a positive effect on your blood pressure. Regular physical activity can also help lower your blood pressure, as well as keep your weight under control. Strive for 45 to 60 minutes of daily aerobic exercise, such as biking, swimming or brisk walking. The time you spend exercising is more important than the intensity.
Third, if you smoke, quit. Avoid secondhand smoke as much as possible. The nicotine in cigarette smoke makes your heart work harder by narrowing your blood vessels and increasing your heart rate and blood pressure. Carbon monoxide in cigarette smoke replaces some of the oxygen in your blood. This increases your blood pressure by forcing your heart to work harder to supply your body with the oxygen it needs.
Finally, if you're concerned about your blood pressure, avoid taking nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen and naproxen sodium. NSAIDs may cause you to retain sodium, creating kidney problems and raising your blood pressure.
Continue to have your blood pressure monitored regularly, at least once a year. If your blood pressure is persistently elevated despite making lifestyle changes, talk to your doctor. Additional measures, which may include medications that lower blood pressure, may be necessary to keep your blood pressure at a healthy level.
— John Graves, M.D., Nephrology and Hypertension, Mayo Clinic, Rochester, Minn.
Blood Pressure
| <urn:uuid:012cce6a-914d-4fbb-aecd-acd0d8d8e929> | 3 | 2.875 | 0.299534 | en | 0.938189 | http://newsnetwork.mayoclinic.org/discussion/self-care-steps-can-help-keep-blood-pressure-in-normal-range/ |
Novus Scientia Journals are devoted to the publication of original research papers in all areas of science and technology. We invite scientifically based research articles, which will be peer reviewed. Novus Scientia Journals provide researchers with continuing education in basic, advanced and innovative scientific research. Novus Scientia Journals are dedicated to all branches of science, with the following objectives:
1. To maintain the utmost standards of editorial integrity and to use technology to drive innovation and improve the communication of journal content
2. To publish original, important, valid, peer-reviewed articles on a diverse range of scientific topics
3. To enable researchers to remain informed in multiple areas of technology, including developments in fields other than their own
4. To inform readers about the different aspects of research.
5. To achieve the highest level of ethical science journalism and to produce a publication that is timely, realistic, and pleasurable to read.
6. To provide skilled publishing services to scientists (authors, editors, professionals, teachers and students), assisting them in improving their knowledge.
Novus Scientia Journals publish high quality peer-reviewed review articles, research papers and short communications in all aspects of biology, pharmaceutical and engineering technology, health and life science, biotechnology, chemistry, applied science, etc.
INDEXING: Chemical Abstracts Service (CAS), Google Scholar, Science Central
| <urn:uuid:77254cda-4a80-40f1-ac8c-15aa8eb7f62d> | 2 | 1.914063 | 0.039136 | en | 0.873555 | http://novusscientia.org/?p=46 |
For the week ending 4 December 2004 / 21 Kislev 5765
Artists of the Soul
by Rabbi Yaakov Asher Sinclair -
The Color of Heaven (Artscroll)
All of us experience moments of poetry.
They may come from events in our personal lives - the re-uniting of long-lost family, a birth, a death. Or these moments of inspiration may spring from this world of teeming splendor, from our sense of joy and wonder at the creation. Some of us, however, are not content to leave those moments of inspiration in the realm of the intangible. We feel the need to give them a physical existence, to immortalize them, or better, to "mortalize" them in words, in song, in paint, or as a photograph.
And once we have made this commitment to clothe our inspiration with earthly garb, there comes the difficult and frustrating process of wrestling with stubborn charcoal and canvas, obdurate gouache, obstinate film and chemicals, to say nothing of the intractable depths of Photoshop[1].
View from Lifta towards the tomb of Samuel the prophet 2004
Art is inspiration wrestling with constriction, the constriction of the physical doing battle with the idea. For in whichever medium the artist chooses to clothe his muse, he must struggle with the characteristics and the limitations of that medium. After all, he is trying to coax that which is beyond the physical to reside within the physical. Its no wonder then that good art is rare.
However, without this struggle of vision-constricted-through-media, there is no art; the mind can dance, but there is no dancing partner. Art exists as a function of constriction, not in spite of it. That dance of the mind and spirit with paper and paint, that exquisite tension between the material and the ephemeral, is where art lives and breathes. Just as a flute only produces music by the constriction of breath through a metal pipe, and without that constriction, that limitation, there is no music, so all the plastic arts rely on the celebration of limits.
"In the image of G-d, He created him.[3]" This verse in the Torah is often misunderstood as meaning that Judaism believes in an anthropomorphic G-d; that G-d has arms, feet, a head and a back. Obviously this cannot be a correct understanding. G-d is a non-physical, non-spiritual Entity of whose essence we can ultimately know nothing. However, whatever ends up in this world as a hand is but the lowest incarnation of something that starts off at the highest level as an aspect of G-ds interface with His creation. Thus, to the extent that it is possible, G-d gives us the ability to know Him from knowing ourselves. As it says in the book of Job, "From my flesh, I will see G-d.[4]" Not only does this mean that by reflecting on the miraculous nature of the body a person can arrive at a belief in a Creator (for the human body is such a complex and brilliant feat of engineering that Darwin himself despaired of his "Origin of Species" when confronted with the human eye), but the fact that G-d created us in His image means that by introspecting on the nature of who we are, we can understand something about G-d.
What is that aspect?
Jewish mystical sources teach that when G-d created the universe, He "constricted Himself" to allow the appearance of something other than Himself. This concept is called tzimtzum - literally "constriction."[5] In other words, this world and everything in it is G-d's Work of Art. It is the place where He constricted His Inspiration by tzimtzum to produce a physical incarnation of His Will - the universe. The ultimate Artist is G-d. However, when an artist of flesh and blood paints a picture on a wall, he cannot infuse his creation with a living spirit, with a soul, innards and intestines. An earthly artist can only create a static world. Show me an artist whose paintings can multiply and proliferate or a playwright whose characters have free choice to make decisions that will influence the course of the play![6]
The Talmud says that "if you never saw the Second Beit HaMikdash (Holy Temple), you never saw a beautiful building in your life.[8]" The Beit HaMikdash was called the "eye of the world." The eye is a physical organ but it receives something that is about as non-physical as you can get - light. The eye is the gateway to a non-physical existence called light. The Beit HaMikdash was called "the eye of the world" because it was the portal for the Light. The Beit HaMikdash was the most beautiful building not because of its dimensions and proportions or its finishes, but because it represented the tzimtzum of Hashem in this world: "What house could you build Me and what place could be My resting place?[9]"
G-d constricted Himself to allow the existence of the universe. This act of tzimtzum was the first and greatest artwork. As we are created "in the image of G-d", it must be then that we possess a parallel ability in earthly terms. One aspect we have already noted - the universe is G-d's work of art. However, there is more.
Chanuka is the festival that contrasts the artists of the body with the artists of the soul. If the Greeks "wrote the book" on the art of the physical, the Jews are still learning the Book of the Soul.
The Greek view of Judaism goes like this: "How restrictive! You can't eat scampi. You have to pray at certain prescribed times. You must eat at certain times and fast at others. You can't gossip. You can't enjoy the pleasure of looking at the human body. You can't even pick up a telephone on Saturday." (Thank G-d!) The life of a Jew is brim full of constrictions and restrictions. It is these very restrictions, however, that allow our souls to sing. G-d put into this world a mystical song. It is called the Torah. The Torah is the score, the notes and semibreves of existence. The Torah allows us to turn this world into art. The mitzvot are the raw material of the artist of the soul. They restrict us but they are the paint and canvas that give us the power to make the physical world speak in the language of the spirit. They are the media through which we create the ultimate art that can exist, because they allow us to form a partnership with the Ultimate Artist in His ultimate artwork.
They are the tools of the artist of the soul.
1. And for the photographer, apart from these basic material constrictions, he has another level of constraint to deal with: As Edward Steichen said, "Every other artist begins with a blank canvas, a piece of paper...the photographer begins with the finished product." He must coax from this world a spirit that is reluctant to epitomize for his lens. The photographer-as-artist tries to make visible the invisible - "to make seen what without you might never have been seen." (Robert Bresson) How many pairs of boots are worn out, how many hundreds of boring negatives are exposed until we are blessed with that decisive moment!
2. This could be one reason that photographs are themselves considered less artistically worthy than painting, because they are closer to reality and less restricted by the medium.
3. Bereshet (Genesis) 1:27
4. Iyov (Job) 19:26
5. Needless to say, a true understanding of this concept is far beyond our grasp. It can only be understood properly by the greatest and holiest of each generation.
6. Luigi Pirandellos "Six Characters In Search Of An Author" toys with this idea. However, in reality we are still watching Six Actors in Search Of A Job.
7. Berachot 10a
8. Bava Batra 4a.
9. Yishayahu 66:1
The View From Lifta
Why does the light playing
on this particular patch of grass
touch my heart?
What makes it
the essential patch of grass,
the patch of grass?
Why do I need to make a photograph of it?
Or the clouds.
What are the unspoken messages
of those great candyfloss giants
tiptoeing across the night sky?
What of the dust rising
from the distantmost turn of the road
on its way
to the tomb of Samuel the Prophet?
© 1995-2015 Ohr Somayach International - All rights reserved.
| <urn:uuid:cf5327a6-914d-4fbb-aecd-acd0d8d8e929> | 2 | 1.914063 | 0.068215 | en | 0.955164 | http://ohr.edu/holidays/chanukah/greek_philosophy/1956 |
I want to know if smudges (fingerprints, etc.) and grease marks in particular have any effect in increasing lens flare. Those who wear glasses may already know about the problem of flares caused by grease or smudge marks on the lens.
I understand that there are some questions here, such as how to control lens flare, but none have discussed these factors. This is specific to smudges and grease.
Now, I am more confused with some answers claiming that smudges and grease marks do add to the problem of lens flares and other claiming that they don't. – Nitin Kumar Jul 28 '12 at 1:18
Grease is the material, a smudge is the result of the application. Keep in mind that definitions cause subtle distinction to be made between similar terms for clarification and for precision and accuracy. Otherwise, y'know, like, man. Cuz, well, y' know whadim sayin'? – Stan Sep 2 '13 at 23:03
5 Answers
Yes grease and smudges can cause flare, but instead of well defined circles or lines you are more likely to get an overall clouding effect with a visible glow around highlights and lightsources.
In fact it used to be a common technique with glamour and some portrait photographers to smear vaseline on a lens in order to get flattering (if cheesy) soft focus look. The same technique was used to simulate motion blur when shooting stop motion animation in films such as The Terminator.
Nose grease worked better than vaseline. No pro was crass enough to apply it directly in front of the client, anyway. We were instructed to use a clean cotton ball to transfer from the source and apply to the surface of the glass filter for the effect. We kept a few different ones around the studio. – Stan Sep 3 '13 at 2:51
Think of smudges, grease, and finger prints (usually caused by oil mixed with other things on your hands) as a semi-transparent mirror.
It causes light to refract and reflect at the points where the smudges and fingerprints are. As Matt pointed out, sometimes this is the desired effect for aesthetic appeal.
Smudges can't create "lens flare" per se. Smudges will definitely affect the result (the effect might even be interesting) but it is unlikely to cause what we traditionally call "lens flare". Neither can any similar problem such as dust, fog, or mold that can grow internally.
Lens flare will show up as a chromatic bright spot, often times several spots. It is caused by the source of light, the sun for instance, being reflected internally against the lens elements until those reflections are picked up by the film/sensor. The chances of them occurring go up and the focal length goes down. Wide angle lenses have a higher incidence of lens flare than telephoto lenses. They also go up the narrower the angle between your subject and the light source.
To reduce lens flare, shade the primary element (the outer piece of glass) of the lens. A lens hood is designed for this purpose, but your hand or anything that can block the light will do.
Technically speaking, the shapes caused by internal reflections off the lens elements and sensor is called ghosting. The technical term flare refers to the rays of light that extend out from bright light sources, particularly when they are in the corner of the frame. The third related problem would be a loss of contrast or large areas of glare that reduce contrast...which is most likely what grease on the lens would introduce. Grease is also likely to introduce glare even with incident light (flare and ghosting tend to be created by non-incident light)...a hood might not fix it. – jrista Jul 26 '12 at 3:19
Your definition of "glare" is spot on. But lens flare is called that precisely because it looks like a flare, and is the effect of the light source being reflected internally such as in this example: cameron-photo.com/files/gimgs/20_lensflare0703.jpg. – IAmNaN Jul 26 '12 at 3:32
@jrista I've generally heard the term flare refer to both the bright lines and contrast reducing haze (the wikipedia article reflects this issue). – Matt Grum Jul 26 '12 at 18:05
Yeah, flare usually encompasses the bright lines and contrast reduction...however contrast reduction from glare could occur independent of any actual flaring if there are greasy spots on the lens. As for "flare" including reflections...its often used to refer to the whole entire effect in casual speak, but ghosting is the official term for reflections. Canon, Nikon, and most other brands use the term ghosting in technical documentation as well as informative pages on lenses most of the time. – jrista Jul 26 '12 at 19:37
I would say no, well, not by the definition of lens flare. Lens flare is better described as coloured circles; grease etc. will cause blurred images and "haze".
Unless of course it's a huge chunk of clear grease (or indeed rain), which could itself act as a small lens, in which case... yes!
I might add: for god's sake keep your lenses clean! There is nothing worse for a lens than the act of cleaning it (well, perhaps dropping it!). You will inevitably mildly (invisibly) scratch it and slowly erode the coating.
Anything irregular on the polished surface of the lens will generate non-imaging light; aka flare.
You can make a very long list of specifics, if you want, but there it is. It doesn't matter how thin, or thick it is. If it is non imaging illumination, it is flare.
Dust is individual particles, moisture is individual droplets, grease is a thin non-planar layer of translucent oil, scratches are individual furrows, etc.
The coating is a controlled application of a thin (usually metal) surface treatment applied evenly to the polished surface of the lens for image enhancement or surface protection.
| <urn:uuid:ad91316e-c096-4242-8cef-f221b2464006> | 2 | 1.8125 | 0.432186 | en | 0.941083 | http://photo.stackexchange.com/questions/25644/can-smudges-and-grease-marks-add-to-the-problem-of-lens-flare/25645 |
EPA Honors AMD With ENERGY STAR Certificate For Innovative Cool'n'Quiet Technology
Mar 17, 2005
AMD today announced that the U.S. Environmental Protection Agency (EPA) awarded AMD’s Cool‘n’Quiet technology with an ENERGY STAR Certificate of Recognition for advancing computer energy efficiency. All AMD Athlon 64 desktop processors have the innovative Cool’n’Quiet technology, a system-level feature that lowers the power consumption of a computer whenever maximum performance is not needed. AMD received the certificate on March 15 in conjunction with the 2005 ENERGY STAR Awards Ceremony in Washington, D.C.
The EPA recognized AMD for significantly advancing energy efficiency in desktop PCs. AMD demonstrated to the EPA power savings of up to 35 watts per computer, depending on the application in use, in comparison tests with PCs not supporting AMD’s Cool’n’Quiet technology. By optimizing power consumption, Cool’n’Quiet technology not only works to benefit the environment, but consumers’ energy bills as well.
“The ENERGY STAR award for Cool’n’Quiet technology once again demonstrates the value of AMD’s commitment to developing customer-centric innovations that truly make a difference in people’s lives,” said Marty Seyer, corporate vice president and general manager of the Microprocessor Business Unit, Computation Products Group, AMD. “While demand for faster processors may increase power consumption, heat and noise, power management solutions such as AMD’s Cool’n’Quiet technology allow consumers to make smarter choices, saving money and energy, while contributing to an improved global environment.”
Energy efficiency means delivering the same service or operations with less energy. Its benefits extend to the environment, the consumer’s pocketbook and the value proposition offered to business customers who use many PCs. Energy-efficient computers consume less power, give off less heat, exert less strain on cooling systems and can result in a quieter work environment.
“Innovative processor advancements such as AMD’s Cool’n’Quiet technology significantly improve power management features, making them more reliable, dependable and user-friendly than even just a few years ago,” said Craig Hershberg, product manager for Office Equipment and Consumer Electronics, U.S. EPA. “In AMD’s tests, Cool’n’Quiet showed a significant decrease in power consumption resulting in energy efficiency improvements up to 28 percent, making it an ideal candidate for the ENERGY STAR Certificate of Recognition.”
AMD’s Cool’n’Quiet technology effectively lowers power consumption, enabling a quieter-running system while delivering performance on demand for the ultimate computing experience. Cool’n’Quiet technology improves a computer’s energy efficiency by matching processor utilization to the performance actually required. Because common PC programs such as word processing and reading email only require minimal processor utilization, while cinematic games, complex calculations or data encoding require higher utilization, Cool’n’Quiet technology adjusts accordingly to leverage system energy efficiency. By reducing the frequency and voltage of the microprocessor, Cool’n’Quiet technology enables overall lower system and processor power consumption.
AMD has been a leader in bringing power-management features to businesses and consumers, first with AMD PowerNow!™ technology in our mobile processors and following with Cool’n’Quiet technology in AMD Athlon 64 desktop processors. In addition, AMD recently announced that the enterprise-class AMD Opteron™ processor family will include AMD PowerNow! technology with Optimized Power Management (OPM) in the first half of 2005.
The U.S. Environmental Protection Agency established ENERGY STAR in 1992 as a voluntary, market-based partnership to reduce air pollution by giving consumers simple energy-efficient choices. Today, with assistance from the U.S. Department of Energy, the ENERGY STAR program offers businesses and consumers energy-efficient solutions to save energy and money, and help protect the environment for future generations. More than 8,000 organizations have become ENERGY STAR partners and are committed to improving the energy efficiency of products, homes and businesses. ENERGY STAR continues to build awareness internationally through its partnerships with the European Community (EU), Japan, Taiwan, Canada, New Zealand and Australia.
Trend: Turbulent convection
Guenter Ahlers, Department of Physics, University of California, Santa Barbara, CA 93106, USA
Published September 14, 2009 | Physics 2, 74 (2009) | DOI: 10.1103/Physics.2.74
Figure 1: Granules and a sunspot in the sun's photosphere, observed on 8 August 2003 by Göran Scharmer and Kai Langhans with the Swedish 1-m Solar Telescope operated by the Royal Swedish Academy of Sciences. (Illustration: Royal Swedish Academy of Sciences)
Figure 2: (Top) Shadowgraph visualization of rising and falling plumes at Ra = 6.8×10^8, Pr = 596 (dipropylene glycol) in a Γ = 1 cell (from Ref. [11]). (Bottom) Small thermochromic liquid-crystal spheres are seeded in the convecting fluid. Their Bragg-scattered light changes color from red to blue in a narrow temperature range. Streak pictures of the spheres with a long exposure time show the temperature and velocity fields simultaneously. Cooler regions appear brown and warmer regions appear green and blue. This image was taken near the top surface at Ra = 2.6×10^9 and Pr = 5.4 (water). The view shows an area of 6.5 cm by 4 cm. Near the middle top one sees a brownish (cold) plume detaching from the boundary layer, extending down and to the left into the fluid, and forming a mushroom head consisting of two swirls (from Ref. [12]).
Figure 3: Visualization for Ra = 10^8 of two temperature isosurfaces in a cylindrical sample with Γ = 1 for Pr = 6.4 and at a modest rotation rate (from Ref. [30]).
Figure 4: The High-Pressure Convection Facility, weighing approximately 2000 kg, is being inserted into the turret of the "U-boat."
Turbulent convection in a fluid heated from below and cooled from above, called Rayleigh-Bénard convection [1, 2], plays a major role in numerous natural and industrial processes. Beyond a particular temperature difference, the heated fluid rises and the cooled fluid falls, thereby forming one or more convection cells. Increasing the difference causes the well-defined cells to become turbulent. Turbulent convection occurs in earth's outer core [3, 4], atmosphere [5, 6], and oceans [7, 8], and is found in the outer layer of the sun [9] and in giant planets [10]. A beautiful example is seen in the photosphere of the sun (see Fig. 1), where a dominant feature is an irregular and continuously changing polygonal pattern of bright areas surrounded by darker boundaries. These granules are convection cells with a width of typically 10^3 km and a lifetime of only about 10 to 20 minutes.
The processes mentioned in the previous paragraph are exceptionally complex. It is true that buoyancy due to the density variation associated with the temperature variation and in the presence of gravity is the central driving force that produces the fluid flow. However, in astrophysics this flow often is modified by the influence of a Coriolis force, for instance due to the rotation of a star or planet. Further complications arise from the fact that the fluids involved sometimes are plasmas or liquid metals. In those cases the flow can interact with or even generate magnetic fields. The equations of fluid mechanics, i.e., the Navier-Stokes equations, then have to be supplemented by and are coupled to Maxwell’s equations. Additional problems may be added by the shape of the convecting system, which can introduce complicated boundary conditions.
What then is a physicist to do in these situations of apparently hopeless complexity? The astrophysicist or engineer, for instance, will have to come to grips with the entire problem by making whatever approximations may be necessary to render it tractable, while not losing any of the main physical aspects. The physicist, on the other hand, has the luxury of extracting a particular manageable aspect from the whole and idealizing it in a carefully constructed laboratory apparatus or computer program where boundary conditions and other external conditions are precisely defined. In this idealized system quantitative studies of particular fundamental aspects of the complex system then become feasible.
The idealization I want to consider is a sample of fluid in a cylindrical container with a circular cross section, a vertical axis, and an aspect ratio Γ ≡ D/L (where D is the diameter and L the height) that is heated uniformly over its bottom surface while it is cooled uniformly from above. In addition to its relevance (or some may say irrelevance because it is a major approximation) to astrophysics and geophysics (as well as numerous industrial applications), this system turns out to be of remarkable interest for its own sake. From the fluid mechanics viewpoint, it is fascinating because it is dominated over wide parameter ranges by the physics of boundary layers. Equally interesting is that it provides a tractable example of interactions between large and small scales, which are broadly important in fluid-flow problems. More generally from the viewpoint of statistical mechanics, it offers the opportunity to study the statistical properties of a driven (i.e., nonequilibrium) system in which the small turbulent scales are the noise source that drives the large-scale flow structures.
Below the onset of turbulence
To easily compare different systems, we express the strength of the thermal driving as a quantity called the Rayleigh number,

Ra = α g ΔT L^3 / (κν),     (1)

which is a dimensionless form of the temperature difference. Here α is the isobaric thermal expansion coefficient, g the local acceleration of gravity, ΔT the applied temperature difference, κ the thermal diffusivity, and ν = η/ρ the kinematic viscosity (η is the shear viscosity and ρ the density). For sufficiently small ΔT, the motionless (pure conduction) state of the fluid is stable, and convection will set in only when Ra is greater than some critical value Rac(Γ). For a sample of infinite width and finite height, i.e., for Γ = ∞, it has long been known that Rac(∞) = 1708, but for a cylinder of finite Γ, Rac is larger and depends on the conductivity of the side walls [13].
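For orientation, consider typical handbook values for water near room temperature, α ≈ 2×10^-4 K^-1, g ≈ 9.8 m/s^2, κ ≈ 1.4×10^-7 m^2/s, and ν ≈ 1.0×10^-6 m^2/s (these property values are quoted here only for illustration and are not taken from the work reviewed in this article). A temperature difference ΔT = 10 K across a cell of height L = 0.1 m then gives

Ra = α g ΔT L^3 / (κν) ≈ (2×10^-4)(9.8)(10)(0.1)^3 / [(1.4×10^-7)(1.0×10^-6)] ≈ 1.4×10^8,

with Pr = ν/κ ≈ 7, so even a modest benchtop cell of water sits well inside the turbulent range discussed below.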
In what follows, I will consider the case Γ = 1. Historically this is the case that was studied most extensively because it allows the use of a fairly large height L [and thus large Ra, see Eq. (1)] without becoming too wide to fit conveniently into a laboratory. For nonconducting walls, one then has Rac(1) ≈ 4000. Above onset, the azimuthal symmetry of the fluid flow can be described well by the eigenfunctions of the Laplace operator in cylindrical coordinates, i.e., it has the form exp(imθ). For our Γ = 1 case, and close to Rac, the flow consists of a single convection roll with upflow along the wall at an azimuthal orientation θ0 and downflow at the opposite side θ0+π, corresponding to m=1. As Ra increases, the pattern becomes more complex, corresponding to larger values of m and possibly also to more complicated vertical structures. When Ra is sufficiently large, the flow goes from steady to time-varying. Precisely what happens then will depend on another dimensionless quantity, the Prandtl number Pr ≡ ν/κ (which tells us about the relative importance of viscous and thermal dissipation). Typically, the time dependence at first is periodic or chaotic, with remnants of the cellular flow structure with m>1 still recognizable and the fluid flow remaining laminar; but as Ra is increased further beyond some Rat, all internal structure disappears except for a single roll (m=1). In that Ra range, vigorous small-scale fluctuations become important and we regard the sample as being turbulent. The precise sequence of events leading to turbulence and the value of Rat depend both on Γ and on Pr (see Ref. [14]). For Pr ≈ 30 and Γ = 1, for instance, we found that Rat ≈ 10^7. The transition from laminar to turbulent flow was not sharp, with the turbulence evolving gradually from chaos as Ra was increased near Rat.
The turbulent range
In the turbulent regime much experimental and numerical work has been done for Γ ≈ 1 (for details see Ref. [1]). We find that this system indeed contains a single convection roll, known as a "large-scale circulation," just as it did close to Rac, albeit in the presence of vigorous fluctuations on smaller length scales. The upper part of Fig. 2 is a shadowgraph visualization, looking sideways through the sample. This method is based on the bending of light rays by refractive-index gradients and thus provides an image closely related to the temperature field. One sees plumes of relatively warm fluid rising on the left and plumes of relatively cold fluid falling on the right. These plumes originate at thermal boundary layers [15] of thickness λb << L just below the top and just above the bottom plate.
An example of plume emission is shown in the bottom of Fig. 2. As a very crude approximation, the boundary layers can be viewed as quiescent fluid, with each layer supporting a temperature difference roughly equal to ΔT/2. This then would leave the entire sample interior at a nearly constant temperature. In reality the situation is a great deal more complicated because the temperature and velocity fields are fluctuating vigorously, both in the interior and in much of the boundary layers. Again roughly speaking, the boundary layers will adjust their thicknesses so that, according to Eq. (1), the Rayleigh number based on the boundary layer thickness λb (rather than L) approximately reaches its critical value. The plume emission can then be viewed as a manifestation of the near-marginal stability of the boundary layers.
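The classic marginal-stability estimate behind this picture can be written down in a few lines (it is a standard order-of-magnitude argument, not a quantitative result of the work reviewed here). If each boundary layer of thickness λb supports roughly half of ΔT and adjusts itself so that its own Rayleigh number stays near the critical value, then

α g (ΔT/2) λb^3 / (κν) ≈ Rac,  or equivalently  λb/L ≈ (2 Rac/Ra)^(1/3).

Because essentially the whole temperature drop then occurs across the two boundary layers, the convective heat transport scales as Nu ≈ L/(2λb) ∝ Ra^(1/3), which is close to the effective exponents near 0.3 quoted later in this article.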
Recent experimental work for Γ ≈ 1 revealed that the large-scale circulation carrying the plumes, and in turn being driven by their buoyancy, displays very interesting dynamics. In samples of circular cross section, the orientation of the near-vertical circulation plane undergoes azimuthal diffusion, as revealed by the observation that its mean-square azimuthal displacement is proportional to the elapsed time [16, 17, 18, 19]. A further fascinating feature of the large-scale circulation is a torsional oscillation, with azimuthal displacements that are out-of-phase by π in the top and bottom parts of the sample [20, 21]. An important question was whether this mode is a characteristic of the underlying deterministic dynamics. Such a deterministic oscillator mode would have a probability distribution p(θ-θ0) of the azimuthal displacement θ away from the mean value θ0 with two maxima, one each near the two displacement extrema. However, it turned out that p(θ-θ0) was Gaussian distributed [21] with a maximum at θ-θ0=0. Such a distribution is indicative of a stochastically driven damped harmonic oscillator [22]. Thus both the azimuthal diffusion and the nature of the torsional mode suggest to us that we are dealing with a large-scale circulation of the system that is driven by the noise consisting of the small-scale turbulent background fluctuations.
Another experimentally observed property of the large-scale circulation is that it occasionally slows down and virtually comes to a halt, only to start up again, albeit usually at a different orientation [18, 23]. These "cessations" are events reminiscent of the cessations observed in the geo-dynamo that are associated with reversals of earth's magnetic field [3, 4]. Much earlier it had been realized already that there are also rare occasions when the large-scale circulation orientation undergoes rotations at exceptionally high rates without completely losing its circulation [24]. Both the "rotations" and the cessations occupy only a small fraction of the total time, and are superimposed upon the otherwise diffusive azimuthal dynamics. Yet another unexpected experimental observation was that the probability distribution of θ0 had a broad peak rather than being uniform as would be expected based on the rotational invariance of the sample.
Stimulated by some of these experimental findings and hopeful for an explanation of others, Eric Brown and I derived a simple model for the large-scale circulation [25, 26]. The idea was to identify the smallest number of necessary components of the large-scale circulation, to retain the terms of the Navier-Stokes equations that are physically relevant to these components, to perform a volume average so as to reduce the field equations to ordinary differential equations, and to add phenomenological stochastic driving terms (with intensities derived from the measured diffusivities) to represent the action of the small-scale fluctuations on the large-scale excitation.
There turn out to be at least two necessary components, namely, the circulation strength U and the azimuthal orientation θ0 of the circulation plane. The strength U is driven by the buoyancy term and damped by viscous velocity boundary layers near the walls. The equation for U is coupled to that for θ0 by a term that arises from the nonlinear term in the Navier-Stokes equation; this term represents the angular momentum of the large-scale circulation and is proportional to U. We assumed further that U is proportional to the amplitude δ of the measurable sinusoidal temperature variation around the circumference at the horizontal midplane of the cylinder. This procedure yielded two stochastic ordinary differential equations, one for the first time derivative of δ, the other for the second derivative of θ0 [1, 25, 26].
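Schematically, the pair of stochastic equations has the form

dδ/dt = δ/τδ − δ^(3/2)/(δ0^(1/2) τδ) + fδ(t),
d^2θ0/dt^2 = −(δ/δ0)(dθ0/dt)/τθ + fθ(t),

where fδ and fθ are the stochastic driving terms that represent the small-scale turbulent fluctuations. (The form shown here is a paraphrase of the structure described in the text; the precise coefficients and their physical derivation are given in Refs. [25, 26].) The first equation has an unstable fixed point at δ = 0 and a stable one at δ = δ0, as discussed in the next paragraph; in the second, the damping of the azimuthal motion is proportional to δ, which is why unusually rapid reorientations become possible whenever the circulation momentarily weakens.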
We find that there is an unstable fixed point at δ=0 and a stable one at the mean value of δ=δ0. Normally δ will undergo diffusion in the depth of the potential well surrounding δ0, but on rare occasions δ will be driven by the noise to the unstable fixed point. Such an event corresponds to a cessation. The equation for the second derivative of θ0 is equally interesting: reflecting the rotational invariance of the system, it has no potential extrema. It will yield diffusion, but at a typical rate controlled by an effective damping term, proportional to δ, that represents the angular momentum of the large-scale circulation. Thus rapid and large changes of θ0 can, but do not have to occur when δ (and thus the angular momentum) is small. This feature explains the observed occasional rapid rotations.
Recently this model was extended by including terms that break the azimuthal invariance of the system [27]. An example of such a term is a noncircular cross section of the cylinder. The model then predicts that the circulation plane will tend to align along the largest diameter, with fluctuations about this alignment. Another example is a system with a tilt of the vertical axis relative to gravity. Both of these cases will, for appropriate parameter values, lead to oscillations of θ0 corresponding to a damped stochastically driven harmonic oscillator. For the tilted case, these oscillations have actually been observed and their properties have been measured [27]. Note that they are unrelated to the torsional oscillations mentioned earlier.
A particularly interesting symmetry-breaking term is the Coriolis force due to the rotation of the earth, which couples to the circulation [17]. In the northern hemisphere it turns out that up- or downflow more or less parallel to the cylinder axis yields a preferred westerly orientation of θ0, whereas flow more or less horizontal, and thus parallel to the cylinder diameter, applies a torque that tends to rotate the circulation plane in the clockwise direction when seen from above. These two competing effects yield a periodically varying potential (with period 2π) with a sloping background. Such a potential is sometimes known as a “washboard potential” and arises in many condensed-matter physics problems, including charge-density waves in semiconductors and constant-current-biased Josephson junctions. Knowing the azimuthal diffusivity and the potential of the system, one can calculate the probability distribution p(θ0) using a Fokker-Planck equation. The result, obtained without any adjustable parameters, agrees extremely well with the measured broad peak in p(θ0) that had been so surprising in view of the perceived rotational invariance of the system. Here we have a wonderful application of the methods of statistical mechanics to a fluid-mechanical problem.
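Schematically, such a washboard potential can be written as

V(θ0) ≈ −F θ0 + V1 cos(θ0 − θw),

where F represents the constant torque applied by the horizontal flow component, V1 the strength of the 2π-periodic preference set by the vertical component, and θw the preferred (westerly) orientation. With azimuthal diffusivity Dθ, the stationary probability distribution then follows from the corresponding Fokker-Planck (Smoluchowski) equation,

∂p/∂t = ∂/∂θ0 [ p ∂V/∂θ0 ] + Dθ ∂^2 p/∂θ0^2,

solved with periodic boundary conditions. (This generic form is shown only for illustration, with the mobility absorbed into the units of V; the quantitative potential and diffusivity used in the comparison with experiment are given in Ref. [17].)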
Extensive measurements were made also for cylinders with Γ=0.5 and Pr=5 (see Refs. [16, 28, 29]). Among other interesting results, this work showed that cessations are more frequent by an order of magnitude than they are for Γ=1. It remains to be seen whether this difference can be explained in terms of the model equations discussed earlier, with appropriate parameter choices.
What are the unresolved issues?
Several variations of the basic Rayleigh-Bénard convection problem are of current interest. One of them is the influence of deliberately imposed rotation about an axis parallel to the cylinder axis and at angular speeds Ω much larger than that of earth’s rotation. For not too large values of Ω, the Coriolis force will twist the plumes emitted from the boundary layers into vertically aligned tubes known as Ekman vortices. This is illustrated by the direct numerical simulation results shown in Fig. 3. These vortices, by virtue of the reduced pressure along their axes, will extract extra fluid from the boundary layers and significantly increase the heat transport. Enhancements in the ratio of convective to conductive heat transport (the Nusselt number, Nu) of over 30% have been observed [30]. However, at larger Ω the Nusselt number is suppressed because globally the rotation suppresses flow parallel to the rotation axis. Understanding these phenomena has significant industrial consequences, for instance, in the growth of crystals from the melt. It is relevant as well to the elucidation of convection in astrophysical objects where rotation can have a much larger influence than it does on earth. Much more is to be learned about the physics that is involved.
Another interesting problem arises when the applied temperature difference straddles a first-order phase transition [31]. The heat transport can then be enhanced by an order of magnitude or more. This problem is important, for instance, in understanding the formation of rain in clouds and for the understanding of convection in earth’s mantle. And of course it has numerous industrial applications ranging from miniaturized heat exchangers for cooling of computer components to large-scale power plants.
Other issues that are beginning to be investigated are the turbulent state in liquid crystals, where the rodlike molecules can be given a preferred orientation by the application of a magnetic field and where the fluid properties are then anisotropic. In this system the instabilities of the boundary layers are expected to differ from those of the isotropic fluid, and it will be interesting to see how this affects the turbulent state. Other variations of current interest include the influence of suspended particles and the effect of polymers on the heat transport and flow structure.
Returning to pure Rayleigh-Bénard convection without the above variations or complications, there also remain major open issues. Let us consider just two of them. First, it is obvious that convection in a cylinder with Γ ≈ 1 does not correspond very closely to many of the problems of interest, for instance, to the granules seen in the photosphere of the sun (see Fig. 1). We would love to know whether an irregular polygonal pattern of vigorously fluctuating convection cells such as seen in Fig. 1 would also be the pattern of large-scale circulation in a system of very large Γ. To answer this simple question is difficult. In experiments there generally is a limit to the lateral extent of an apparatus. Thus large Γ is often achieved only at the expense of the height L. However, according to Eq. (1), small L will lead to small Ra, and yet large Ra is desired as well. Nonetheless, no doubt this will be one of the directions of future research. If an irregular polygonal pattern does indeed exist, then an interesting question will be how this pattern is influenced by a prevailing lateral current imposed upon the system. This issue is relevant, for instance, to the formation of cloud streets (lines of cumulus clouds) in the atmosphere. It has been studied at some length near the onset of convection [32], but, to my knowledge, not for the turbulent system. The common view is that the irregular convection cells will be organized into more or less ordered rolls by the prevailing wind.
A second question of great importance is how Rayleigh-Bénard convection, even in a cylinder with Γ of order unity, will behave at very large Ra. With a few exceptions to be mentioned below, laboratory experiments have been limited to Ra ≈ 10^12, and direct numerical simulations have not yet been able to reach such high values. Reliable calculations, taking many days of CPU time on modern computers, have reached only Ra ≈ 10^10 for a cylindrical sample with Γ=1/2 [33]. In the explored Ra range, measurement and numerical simulations indicate that Nu is proportional to Ra^γ, with γ changing gradually from about 0.28 to about 0.31 as Ra changes from 10^7 to 10^12 [34, 35, 36, 37]. This behavior is explained very well by a model of Grossmann and Lohse [38, 39], which is based on a decomposition of the kinetic and thermal dissipations into boundary and bulk contributions. As Ra increases, bulk contributions generally become more important and for that reason the effective exponent changes.
One might be quite satisfied with the understanding of Rayleigh-Bénard convection developed on the basis of the existing measurements for Ra up to 10^12, except for the fact that theoretically the physics of this system is expected to change dramatically as Ra grows further. With increasing Ra the large-scale circulation is expected to become more vigorous. Its maximum speed is near the boundary layer at the top and bottom, but directly at the plates the velocity has to vanish for a viscous fluid. Thus the large-scale circulation applies a shear to the boundary layers. When the shear becomes large enough, the heretofore laminar (albeit fluctuating) boundary layers will themselves become turbulent and in a sense be swept away. An estimate [40] suggests that this will occur for Ra = Ra* ≈ 3×10^14 when Pr=1, and that Ra* is proportional to Pr^0.7. The nature of Nu(Ra) for Ra>Ra* was investigated theoretically long ago by Kraichnan [41], and his predictions have stimulated the community ever since to search for ways to explore this high-Ra regime. His prediction for a system without boundary layers is that Nu ∝ Ra^(1/2), i.e., that Nu should increase much more rapidly with Ra than it does below Ra*. Of course our actual laboratory system does have top and bottom boundaries, and even though the laminar boundary layers may be gone, there remains the restriction that the velocity must vanish at the solid-liquid interface. This condition leads to so-called "viscous sublayers," which are thinner than the laminar boundary layers; in Kraichnan's theory they lead to logarithmic corrections to the relation between Ra and Nu, yielding Nu ∝ Ra^(1/2)/[ln(Ra)]^(3/2).
There are at least two reasons why the Kraichnan transition is so important. First, it is associated with a fundamental change in the heat transport mechanism. Below Ra* the heat transport was limited primarily by laminar boundary layers. Above Ra* the limiting factor presumably is a thermal gradient in the bulk fluid. We certainly would like to understand this basic change in the physics of the system. Second, we know that what we learned below Ra* cannot be extrapolated to Ra>Ra* because of this change in the mechanism. It turns out that many of the astrophysical applications involve Ra > 10^20, i.e., values above Ra* by several orders of magnitude. So we really cannot extrapolate existing measurements to the Ra ranges of these natural phenomena.
Achieving large Rayleigh numbers and strong turbulent convection
How then can we reach very large values of Ra? From Eq. (1) one sees that either a fluid can be chosen for which the combination α/κν is particularly large, or an apparatus with very large L can be built. The former choice was pursued by Castaing and co-workers in Chicago, US, followed by Chavanne et al. in Grenoble, France, who used fluid helium at about 5 K near its critical point and reached Ra ≈ 10^15 [42]. Another group, Niemela et al. in Oregon, US, went further by using low-temperature helium as well, and at the same time also constructing a large apparatus with D ≈ 0.5 m and L ≈ 1 m [43]. Unfortunately the two sets of measurements do not agree. The Grenoble results found a transition in Nu at Ra ≈ 10^11 from a low-Ra regime with γ ≈ 0.31 to a high-Ra regime with γ ≈ 0.39, which they interpreted as the Kraichnan transition even though it occurred at an unexpectedly low value of Ra*. The Oregon group reached unprecedented values of Ra as large as 10^17 corresponding to Nu ≈ 20000; their data were consistent with γ ≈ 0.31 over their entire Ra range and did not reveal any transition.
Researchers needed to address this discrepancy with a different type of experiment that was not dependent on cryogenic techniques and instead used classical fluids at ambient temperatures. To that end, Denis Funfschilling, Eberhard Bodenschatz, and I used a very large pressure vessel at the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany. It is a cylinder of diameter 2.5m and length 5.5m, with its axis horizontal, and with a turret above it that extends the height to 4m over a diameter of 1.5m. Because of its shape, this vessel has become known as the “U-boat of Göttingen .” It can be filled with various gases at pressures up to 19 bars. In the section containing the turret we placed a Rayleigh-Bénard sample cell with L=2.24m and D=1.12m (the “High Pressure Convection Facility” or HPCF), yielding Γ=0.500. Figure 4 shows the insertion of the HPCF into the turret of the U-boat. After insertion, a dome is bolted to the top of the turret section to complete the pressure enclosure.
Using sulfur hexafluoride at 19 bars, we reached Ra ≈ 2×10^15 [44]. Up to Ra = 4×10^13 our results were consistent with the Oregon experiment, but differed from the Grenoble measurements: we did not find the Kraichnan transition in Nu. At Ra = 4×10^13 we observed a sharp transition in Nu(Ra) to a new state, but the dependence of Nu(Ra) for this state was not as predicted by Kraichnan; we found an effective exponent that was less than 0.3 rather than the predicted 0.39 or so. Work with the HPCF is still under way, and we look forward to what the future will bring. We expect to learn quite a bit more about how the large-scale circulation evolves as Ra becomes large. However, at this point it is not clear whether the ultimate, or asymptotic, regime predicted by Kraichnan can ever be reached in a system with rigid top and bottom plates. But then the granules in the sun's photosphere for instance do not have any such confining plates.
1. G. Ahlers, S. Grossmann, and D. Lohse, Rev. Mod. Phys. 81, 503 (2009).
2. D. Lohse and K.-Q. Xia, Annu. Rev. Fluid Mech. (to be published).
3. P. Cardin and P. Olson, Phys. Earth Planet. In. 82, 235 (1994).
4. G. Glatzmaier, R. Coe, L. Hongre, and P. Roberts, Nature 401, 885 (1999).
5. E. van Doorn, B. Dhruva, K. R. Sreenivasan, and V. Cassella, Phys. Fluids 12, 1529 (2000).
6. D. L. Hartmann, L. A. Moy, and Q. Fu, J. Climate 14, 4495 (2001).
7. J. Marshall and F. Schott, Rev. Geophys. 37, 1 (1999).
8. S. Rahmstorf, Climatic Change 46, 247 (2000).
9. F. Cattaneo, T. Emonet, and N. Weiss, Astrophys. J. 588, 1183 (2003).
10. F. H. Busse, Chaos 4, 123 (1994).
13. J. C. Buell and I. Catton, J. Heat Transfer 105, 255 (1983).
14. R. Krishnamurti, J. Fluid Mech. 42, 309 (1970).
15. S. L. Lui and K.-Q. Xia, Phys. Rev. E 57, 5494 (1998).
16. C. Sun, H. D. Xi, and K. Q. Xia, Phys. Rev. Lett. 95, 074502 (2005).
17. E. Brown and G. Ahlers, Phys. Fluids 18, 125108 (2006).
18. E. Brown and G. Ahlers, J. Fluid Mech. 568, 351 (2006).
19. H. D. Xi, Q. Zhou, and K. Q. Xia, Phys. Rev. E 73, 056312 (2006).
20. D. Funfschilling and G. Ahlers, Phys. Rev. Lett. 92, 194502 (2004).
21. D. Funfschilling, E. Brown, and G. Ahlers, J. Fluid Mech 607, 119 (2008).
22. M. Gitterman, The Noisy Oscillator: The First Hundred Years From Einstein Until Now (World Scientific, Singapore, 2005).
23. E. Brown, A. Nikolaenko, and G. Ahlers, Phys. Rev. Lett. 95, 084503 (2005).
24. S. Cioni, S. Ciliberto, and J. Sommeria, J. Fluid Mech. 335, 111 (1997).
25. E. Brown and G. Ahlers, Phys. Rev. Lett. 98, 134501 (2007).
26. E. Brown and G. Ahlers, Phys. Fluids 20, 075101 (2008).
27. E. Brown and G. Ahlers, Phys. Fluids 20, 105105 (2008).
28. H.-D. Xi and K.-Q. Xia, Phys. Rev. E 78, 036326 (2008).
29. H.-D. Xi and K.-Q. Xia, Phys. Fluids 20, 055104 (2008).
30. J.-Q. Zhong, R. Stevens, H. Clercx, R. Verzicco, D. Lohse, and G. Ahlers, Phys. Rev. Lett. 102, 044502 (2009).
31. J.-Q. Zhong, D. Funfschilling, and G. Ahlers, Phys. Rev. Lett. 102, 124501 (2009).
32. R. E. Kelly, Adv. Appl. Mech. 31, 35 (1994).
33. R. J. A. M. Stevens, R. Verzicco, and D. Lohse (to be published).
34. X. Xu, K. M. S. Bajaj, and G. Ahlers, Phys. Rev. Lett. 84, 4357 (2000).
35. D. Funfschilling, E. Brown, A. Nikolaenko, and G. Ahlers, J. Fluid Mech. 536, 145 (2005).
36. A. Nikolaenko, E. Brown, D. Funfschilling, and G. Ahlers, J. Fluid Mech. 523, 251 (2005).
37. C. Sun, L.-Y. Ren, H. Song, and K.-Q. Xia, J. Fluid Mech. 542, 165 (2005).
38. S. Grossmann and D. Lohse, J. Fluid. Mech. 407, 27 (2000).
39. S. Grossmann and D. Lohse, Phys. Rev. Lett. 86, 3316 (2001).
40. S. Grossmann and D. Lohse, Phys. Rev. E 66, 016305 (2002).
41. R. H. Kraichnan, Phys. Fluids 5, 1374 (1962).
42. B. Castaing, G, Gunaratne, F. Heslot, L. Kadanoff, A. Libchaber, S. Thomae, X.-Z. Wu, S. Zaleski, and G. Zanetti, J. Fluid Mech. 204, 1 (1989); X. Chavanne, F. Chilla, B. Castaing, B. Hebral, B. Chabaud, and J. Chaussy, Phys. Rev. Lett. 79, 3648 (1997).
43. J. J. Niemela, L. Skrebek, K. R. Sreenivasan, and R. Donnelly, Nature 404, 837 (2000).
44. D. Funfschilling, E. Bodenschatz, and G. Ahlers, Phys. Rev. Lett. 103, 014503 (2009).
About the Author: Guenter Ahlers
Guenter Ahlers received his B.A. degree in chemistry from the University of California at Riverside in 1958 and a Ph.D. in physical chemistry from the University of California at Berkeley in 1963. In 1963 he became a Member of Technical Staff at Bell Laboratories in Murray Hill, N.J. There he worked on critical phenomena near the lambda point in liquid helium and near magnetic phase transitions, and on superfluid hydrodynamics. In 1970 he began research on Rayleigh-Bénard convection in liquid helium that led to the experimental observation of chaos in a fluid-mechanical system. In 1979 Ahlers moved to the University of California, Santa Barbara, where he studied pattern formation in convection and Taylor-vortex flow, and turbulent Rayleigh-Bénard convection. He and his co-workers published about 270 papers in the Journal of Fluid Mechanics, Physics of Fluids, Physical Review A, B, and E, Physical Review Letters, and elsewhere. Ahlers became a Fellow of the APS in 1971 and of the AAAS in 1990. He received the IUPAP Fritz London Memorial Award in low-temperature physics in 1978, the Alexander von Humboldt Senior US Scientist Award in 1989, and the APS fluid-dynamics prize in 2007. In 1998 he was a Guggenheim Fellow. He was elected to the National Academy of Sciences in 1982 and became a Fellow of the American Academy of Arts and Sciences in 2004.
Will Congress Pass Obama's Gun-Control Legislation Proposals?
President Obama unveiled his package of proposals to reduce gun violence today, a mix of executive actions he can undertake unilaterally (23 of them) and ideas that will require new laws passed through Congress. I'll tell you what I think about the package as a whole in a moment, but here are the major provisions:
• Universal background checks. Right now, about 40 percent of gun sales—those at gun shows, or between two private citizens—require no background check. This is a significant change.
• A new assault weapons ban. The ban in place between 1994 and 2004 was riddled with loopholes. This one is likely to be much stricter, making it harder to get new military-style weapons. But it won't affect the millions of such guns already in circulation.
• A ban on high-capacity magazines. Magazines would be limited to 10 rounds.
• A renewal of effective data-gathering and research into gun violence. Today, not only is the FBI required to destroy all background check information within 24 hours, the CDC is effectively banned from researching the causes and consequences of gun violence. Obama is directing the CDC to begin such research again.
There are a bunch of other proposals, particularly in those 23 executive actions, many of which are rather minor and involve clarifying existing policies. Immediately after he finished his statement, he signed the executive orders, but those were the easy things. The more difficult and consequential parts—the assault-weapons and high-capacity magazine bans, the universal background checks—will require Congress. It's going to be extremely hard to get such laws passed, though the background-check provision is the one most likely to succeed.
So how could you overcome that opposition? The fact is that a majority of the House of Representatives isn't inclined to go along. So there are two things you can do: boot enough of them out of office in the next election or two to change that calculation, or you can raise the cost of opposing these measures, making at least some of them decide that even if they don't like it, the politics make it difficult to oppose.
That takes a national, sustained campaign, one that drives the debate and mobilizes voters. Do the White House and its allies have it in them? I'd like to be optimistic about that. Maybe things really have changed. Maybe.
As long as the govt has to abide by these same restrictions in terms of access to weapons I'm fine with the legislation.
Yeah, cstoney04, I really want my American soldiers to be limited to 10-round magazines.
Yeah, steveh, where are your American soldiers? Oh yeah, that's right... they're not in America, are they? Shouldn't they be here helping defend this great country and its children? I guess not? They're all overseas keeping the peace, huh? Did you know that only 3% of homicides in America were rifles and assault rifles... 3%. The rest, the 97%, were handguns and shotguns holding 10 rounds or less. Look it up!! Did you also know that homicide crime dropped 56% after the Clinton ban was released in 2004? Look it up!! Did you know that more American soldiers die from suicide than are killed in action? Look it up!! SANDY HOOK EXPOSED - YOUTUBE. Look at some facts; some things just seem a little fishy.
, after login or registration your account will be connected. | <urn:uuid:72aa5b5b-4f21-47a7-9cbc-9d64448238b5> | 2 | 1.539063 | 0.072777 | en | 0.969998 | http://prospect.org/comment/18707 |
All over the UK this morning, news organisations are talking about a cloud of sulfurous gas emanating from a factory operated by specialist lubricants and paints firm Lubrizol in Rouen, France. The gas is spreading northwards on the wind, covering vast swathes of southern England, and southwards to the French capital, Paris.
While it has not yet reached the secret Chemistry World bunker, ‘Le pong’ – as some newspaper editors have dubbed it – has already caused considerable disruption and discomfort. A French football match was postponed and lots of people are complaining about the smell.
However, the biggest disruption is caused by the specific nature of the gas and an unfortunate coincidence. Lubrizol has said that the gas is ‘mercaptan’. Chemically speaking, mercaptans are a class of compounds containing an S-H group. They are the sulfur analogues of alcohols, also known as thiols. These compounds, along with related thioethers like dimethylsulfide, are also characterised by their extremely noxious odours (reminiscent of rotting eggs, overcooked cabbage, sweat, diesel fumes and a host of other foul aromas) at anything above the lowest of concentrations.
In this case, it appears that the specific compound involved is methylmercaptan, or methanethiol. Unfortunately, this is also a significant component of the mix of compounds added to the natural gas (methane) supplied to homes all over the UK, to enable us to detect gas leaks more easily. When people say ‘I can smell gas’, they usually mean ‘I can smell thiols’, since methane itself is odourless.
This deep-seated association of the smell of thiols with potentially explosive methane leaks has meant the emergency phonelines at the National Grid, which maintains Britain’s gas infrastructure, have been ringing off the hook. There is also a risk that the smell will mask any actual gas leaks in the affected areas.
On the plus side, one of the reasons methanethiol was chosen as a gas additive in the first place is its low toxicity, coupled with our ability to detect its odour at vanishingly low concentrations. While it may be unpleasant, authorities (including the French minister for ecology, Delphine Batho) have been quick to reassure the public that there is no threat to public health. That said, our noses can pick up the smell of thiols at minuscule concentrations, so ‘Le pong’ is likely to hang around like, well, a bad smell…
But what caused the smell in the first place? Lubrizol has said that the leak was caused by ‘instability with a batch of one of [its] products’. That batch of product is decomposing, releasing the thiol. Operations at the plant have been suspended, but since the problem appears to be with already manufactured product rather than the process or a leaky pipe, the company will need to find a way to either stabilise or safely destroy the offending batch to prevent more gas being produced.
In the lab, waste thiols are often oxidised to eliminate their odours, but clearing up industrial quantities of a degrading speciality chemical product may need to be a bit more subtle than just dumping it in a big bucket of bleach. However, that does appear to be exactly what is happening in the first phase of the clean-up operation, according to the French ministry of the interior.
Meanwhile, if you’re in an affected area, be thankful that your brain has an in-built mechanism for ‘turning off’ its response to bad smells after a certain amount of time. Something I was very thankful for when I worked at a bench next to a big (thankfully ventilated) cupboard full of a variety of stinky, sulfurous reagents.
Phillip Broadwith
Google Buzz (aka. Google Reader) | <urn:uuid:8c024cbe-7c19-44db-8030-4ad86c93eaf9> | 2 | 2.375 | 0.047924 | en | 0.957392 | http://prospect.rsc.org/blogs/cw/2013/01/23/thiols-mercaptans-and-the-stench-from-the-french/ |
Sunday, November 6, 2011
Occupy Cal newsletter
From the text, composed and laid out by a comrade in the encampment working group:
And that is how the university system is governed: through the dictatorship of the Regents. There is little, perhaps zero, democracy involved in the administration of our universities. The Regents have the power to set policies throughout all UC campuses and they also determine the UC budgets. Basically, as stated before, they have total authoritative control of the UCs. Most interestingly, though, I don't remember voting these people into positions of power, and neither should you because they are not elected public officials. Instead, the 18 voting members are handpicked by the Governor of California and approved by the State Senate. Since the Regents control all the money and property under the UCs, which is valued at roughly around $53 billion, the position of Regent is one of the most prestigious appointments the Governor can give. As a result, those that tend to give the Governor hefty campaign donations tend to also become Regents. | <urn:uuid:3b3d2092-e241-42cd-b62c-bb45c7ec7dee> | 2 | 1.851563 | 0.038245 | en | 0.972413 | http://reclaimuc.blogspot.com/2011/11/occupy-cal-newsletter.html |
2007 News Story
Grand Mufti warns youths against traveling abroad for jihad
Grand Mufti of Saudi Arabia and Chairman of the Senior Ulema (Religious Scholars) Sheikh Abdulaziz Al-Ashaikh has warned Saudi youths against traveling abroad with the intention of engaging in jihad.
Situations abroad are ambiguous, and youths do not have the knowledge necessary to distinguish between right and wrong, he said in a speech yesterday. The youths are also putting themselves at risk of being misused by deviant elements to achieve their own political and military gains, he added.
Moreover, by going abroad to engage in jihad they are violating a number of Islamic teachings, including loyalty to a ruler, he said. In April 2007, the Grand Mufti clarified that Islam prohibits swearing an oath of allegiance to another leader while the ruler of the nation remains in office.
In the past, the Grand Mufti has also condemned suicide bombings as well as criminal acts perpetrated by militants, and described Al-Qaeda as enemies of Islam, the nation and its economy. | <urn:uuid:ef29db22-4fe9-4b27-b529-8855108d5012> | 2 | 1.609375 | 0.018883 | en | 0.967511 | http://saudiembassy.net/archive/2007/news/page158.aspx |
Everyday Science: Diamond Quiz
• A famous slogan claims that a diamond is forever. While these precious stones symbolize love and glamour for millions of couples, there's a lot about nature's most famous jewel you might not know. How do diamonds form in nature? How do they get from the Earth's mantle to your local jeweler? And why should you be wary of a diamond's background when you go shopping?
Don't Miss | <urn:uuid:dddb33bb-0e21-4b89-a12b-90e3d360a363> | 2 | 1.6875 | 0.989614 | en | 0.878562 | http://science.howstuffworks.com/environmental/earth/geology/diamond-quiz.htm |
Wavy Google Logo Honors Heinrich Hertz
The wavy blue, red, yellow, and green logo on Google’s homepage today is in honor of Heinrich Rudolph Hertz. The German physicist, who was instrumental in the discovery of electromagnetic and radio waves, was born on this date 155 years ago.
The Google Doodle is somewhat unique, as Google’s name doesn’t appear in the logo. Rather, you only see the multi-colored wave scrolling in a simple animated GIF until you click on the logo and are taken to Google’s search results.
Hertz is credited as being the first person to prove the existence of electromagnetic waves. By building a basic device, Hertz became the first person to broadcast and receive radio waves, which inspired the invention of the wireless telegraph, radio, and eventually television and radar, and paved the way for many of the communication and wireless devices we now use every day. The hertz (Hz) unit of frequency is named after him.
Google regularly pays tribute to the scientists and inventors who have made big contributions to our modern society. In the past, Google has honored the likes of geology pioneer Nicolas Steno, microchip inventor Robert Noyce, photography inventor Louis Daguerre, vitamin C discoverer Albert Szent-Gyorgyi, and the father of genetics Gregor Mendel, to name just a few from the past 12 months. | <urn:uuid:8559be83-4c71-4178-ad20-4a3896714c6a> | 3 | 2.8125 | 0.194766 | en | 0.928522 | http://searchenginewatch.com/sew/news/2154110/wavy-google-logo-honors-heinrich-hertz |
First, the situation: I've got a Linux computer with two eSATA drive bays that accept removable SSD drives. I'm trying to write a little GUI application that makes it easier for the user to mount/unmount/format/backup/etc the drives that he puts into these bays.
It all mostly works. One small problem, however, is that I don't know how to find out any information about what's on the inserted drive(s) until after the drives have been successfully mounted.
So, for example, if the user inserts a drive that I can't mount (e.g. because it is unformatted, or formatted with an unexpected filesystem), all my app can say about it is "Drive failed to mount".
This isn't very satisfactory, because if the drive is unformatted, the user will probably want to format it... but if the drive contains data from an unrecognized filesystem, the user will probably NOT want to format it.... or at least, I want to be able to warn him that by doing so he'll be erasing potentially valuable data.
So my question is: is there any method for querying some basic information (especially filesystem-type) from a drive that doesn't require that the drive already be mounted? Or do I just have to try to mount it with various known filesystems until one of the mount attempts succeeds, and give a vague "be careful" message if none of them do?
In case it matters, the paths I use to mount the drives in the drive bays are:
3 Answers
Accepted answer (score 21)
If the drives are unmounted there are several things you can do.
You can use a command like fdisk -l or sfdisk -l to list the partitions. Just the partition type may give you some useful information if the partitions were set up correctly.
# sfdisk -l
Disk /dev/sda: 4177 cylinders, 255 heads, 63 sectors/track
Device Boot Start End #cyls #blocks Id System
/dev/sda1 * 0+ 30 31- 248976 83 Linux
/dev/sda2 31 4176 4146 33302745 8e Linux LVM
/dev/sda3 0 - 0 0 0 Empty
/dev/sda4 0 - 0 0 0 Empty
If it is present on your system you can use the command vol_id against a partition to return some useful details (part of the udev package on Debian). This will generally tell you what filesystem is actually being used.
# vol_id /dev/sda1
The command lshw -class disk will give you some details about the type of drive. You might want to use this if you are curious about the actual serial number of the drive.
# lshw -class disk
description: ATA Disk
product: VBOX HARDDISK
physical id: 0.0.0
bus info: scsi@0:0.0.0
logical name: /dev/sda
version: 1.0
serial: VB169e93fb-d1e0fd97
size: 32GiB (34GB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 signature=000d39f8
If you are sure the there is a particular filesystem like ext2/3 on it then you can use the filesystem specific tune2fs tool to examine more details.
# tune2fs -l /dev/sda1
tune2fs 1.41.3 (12-Oct-2008)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 8cbdf102-05c7-4ae4-96ea-681cf9b11914
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: filetype sparse_super
Default mount options: (none)
Filesystem state: not clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 124496
Block count: 248976
Reserved block count: 12448
Free blocks: 212961
Free inodes: 124467
First block: 1
Block size: 1024
Fragment size: 1024
Blocks per group: 8192
Fragments per group: 8192
Inodes per group: 4016
Inode blocks per group: 502
Last mount time: Thu Oct 7 15:34:42 2010
Last write time: Thu Oct 7 15:34:42 2010
Mount count: 4
Maximum mount count: 30
Last checked: Wed Sep 15 09:29:03 2010
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Another useful tool is lsblk.
# lsblk
sda 8:0 0 30G 0 disk
└─sda1 8:1 0 30G 0 part
├─vg1-root (dm-0) 254:0 0 23.3G 0 lvm /
└─vg1-swap (dm-1) 254:1 0 1.9G 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom
If you have parted installed you can run a command like this
parted /dev/sda print all
Disk /dev/sda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 255MB 255MB primary ext2 boot
2 255MB 34.4GB 34.1GB primary lvm
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg1root: 32.6GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 32.6GB 32.6GB ext3
Anyway past that I suggest you take a look at the udev or parted source.
'vol_id' has since been renamed 'blkid', for anyone who happens to stumble upon this great answer. – Dave S. Feb 22 '13 at 14:29
lsblk is rad, thanks! – Travis R Aug 14 '14 at 19:43
Another useful command along the lines of vol_id as mentioned by Zoredache is blkid - it returns similar information but can also scan all devices in the system, rather than requiring a device to be passed in.
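For the GUI application described in the question, here is a rough sketch (untested, shown only as an illustration) of how blkid could be wrapped from Python to decide whether an inserted drive already carries a recognizable filesystem before any mount or format is offered. The device paths in the loop are placeholders; substitute whatever nodes your eSATA bays actually appear as.

import subprocess

def probe_filesystem(device):
    """Return the filesystem type blkid reports for `device` (e.g. 'ext4',
    'ntfs'), or None if blkid finds nothing it recognizes -- which usually
    means the drive is unformatted (or formatted with something exotic)."""
    try:
        out = subprocess.check_output(
            ['blkid', '-o', 'value', '-s', 'TYPE', device])
    except subprocess.CalledProcessError:
        return None  # blkid exits non-zero when no filesystem is detected
    fstype = out.strip().decode('utf-8', 'replace')
    return fstype or None

# Hypothetical device nodes for the two drive bays:
for dev in ['/dev/sdb1', '/dev/sdc1']:
    fs = probe_filesystem(dev)
    if fs is None:
        print('%s: no recognizable filesystem; offering to format is safe' % dev)
    else:
        print('%s: contains a %s filesystem; warn the user before formatting' % (dev, fs))

Reading the raw device usually requires root (or membership in the disk group), so the GUI may need to run this through a privileged helper.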
An anonymous user wanted to add: This is all useful but 'vol-id' has now been replaced entirely by 'blkid'; neither SuSE nor Debian have 'vol-id' in their repositories any more. Run whereis blkid from the command line (or man blkid) to find out whether it is installed. Run as root: blkid /dev/sdb1 gives (for instance): /dev/sdb1: SEC_TYPE="msdos" LABEL="DR-05" UUID="8031-5963" TYPE="vfat" The man page is worth looking at too. – Chris S Dec 3 '12 at 13:58
Here's one suggestion from IBM: SCSI - Hot add, remove, rescan of SCSI devices: Rescan of a SCSI Device. This will rescan that SCSI address for new devices, and then you'll be able to read the information in /var/log/messages. Some other disk tools will also work, without you mounting the drive.
echo 1 > /sys/bus/scsi/drivers/sd/<SCSI-ID>/block/device/rescan
I actually tried something slightly different yesterday, and it worked (RHEL4 system):
cd /sys/bus/scsi/devices
echo > 0\:0\:0\:0/rescan
On my network I have an Ubuntu 12.04 server running Samba4, my domain is fully configured and functional.
Now, I would like to enable VPN access over the internet, and have another box to do so. I have been searching on the internet for guides and information etc, but have not been successful.
I have however found this guide http://www.howtogeek.com/51237/setting-up-a-vpn-pptp-server-on-debian/ but was wondering if I could adapt it somehow to enable access to my DC services.
EDIT: I would need to authenticate my VPN server with my DC, if that is possible of course.
Any insight would be wonderful.
Regards, Jack Hunt
Your question is missing some vital information, because there are 3 different situations:
1. Your Ubuntu/Samba server is on the Internet.
2. Your Ubuntu/Samba server is in the local network, and you now want to set up a new computer that should act as VPN server and internet gateway (and firewall and the like).
3. You already have a sort of simple NAT router (e.g. a common one-box DSL router) on your network, and need to set up a new computer BEHIND this router as a VPN server.
There are basically 3 common VPN solutions: PPTP (forget it for now), L2TP and OpenVPN. A good comparison is at http://www.ivpn.net/knowledgebase/62/PPTP-vs-L2TP-vs-OpenVPN.html.
Variant 1.) Personally I'm not recommending this variant, but (maybe) that is based only on my paranoia, and I don't want to give you purely subjective answers.
Variant 2.) Set up the Gateway/Firewall/VPN server.
The L2TP (on Ubuntu) solution is (notwithstanding the downvote of the inappropriate "one external link answer" from @slafat01) described in detail in the link provided there. IMO, configuring L2TP and IPsec is too hard; my recommendation (when you don't need to talk to a Cisco router or the like) is to use OpenVPN instead.
OpenVPN (as @Anders already said above) is a nice cross-platform, easy-to-configure-and-use VPN solution. You can use Linux or FreeBSD as the OS for it.
One solution (already suggested by @nedm) is installing pfSense. This is an excellent recommendation.
Another solution (and the one I'm recommending) is installing "full-blown" FreeBSD-9.0-RELEASE as FW/GW/OpenVPN. It is a bit more complicated than pfSense (not much), but you will get a full-featured server with a zillion ports (packages).
Installation is easy, and updates and upgrades (thanks to the freebsd-update command) are easy too. You will need to install and configure FreeRADIUS on your Samba server to act as the IAS (authentication server), and OpenVPN on the FreeBSD box (installing is easy - one command; configuration is similar to the link above).
Variant 3.) Like above, FreeBSD with OpenVPN, FreeRADIUS and PF, but you will need to open and forward connections to port 1194 (the default port) to the server on your NAT router.
Some comments:
• you want to use TAP/bridged mode for your OpenVPN, because it is easier to set up and manage, and the second benefit is the ability to use broadcast and all network protocols.
• you want to use a RADIUS server (FreeRADIUS), because that way it is possible to authenticate users from your DC and you don't need to manage a separate user database on the FreeBSD server. Configure your FreeBSD PAM for openvpn to authenticate via RADIUS.
• you do not want to forget to add the push "dhcp-option DOMAIN ...... line to your openvpn.conf. Omitting it is a common mistake; it is what allows your remote users (road warriors) to use your DC. (A sketch of such a config follows below.)
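To make those three bullet points concrete, a minimal illustrative server.conf fragment might look like the following. This is only a sketch: the bridge addresses, domain, DNS server and PAM plugin path are placeholders, not values taken from the question.
# bridged (TAP) mode; the addresses are placeholders for your own LAN
dev tap0
server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220
# authenticate through PAM (which in turn can point at FreeRADIUS / your DC); the plugin path is a placeholder
plugin /usr/local/lib/openvpn/openvpn-auth-pam.so openvpn
# hand the AD DNS domain and DNS server to the road warriors
push "dhcp-option DOMAIN internal.example.lan"
push "dhcp-option DNS 192.168.1.10"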
I'm surely forgetting something; others can extend my 1st post here, because unfortunately I can post only 2 external links yet.
Thanks for the response, very detailed :) I am thinking I shall go with Option 2 and OpenVPN. Thanks again. – VisionIncision Oct 3 '12 at 11:58
Use the other box to set up a pfSense firewall/gateway. pfSense is a fantastic FreeBSD-based firewall distro and has configurable options for PPTP, IPSec, and OpenVPN VPNs built in.
I would install pfSense, enable OpenVPN, and configure it to authenticate using either FreeRadius or LDAP against your internal AD. Lots of guidance for doing this is available on the forums.
How about setting up a (separate? Ubuntu) OpenVPN server? To get started, look here: http://ideasnet.wordpress.com/2012/02/19/ides-networking-creating-a-vpn-site-to-site-using-openvpn-between-2-server-with-ubuntu-10-04lts-server-edition/
OpenVPN.net also provides an OpenVPN Access Server Virtual Appliance that you could take a closer look at if you'd like to do a quick test setup.
There is plenty of information to find here at SF or via Google on how to set up an OpenVPN server on Ubuntu. An OpenVPN server can also handle client connections from Windows, Linux, Android, etc., not only site-to-site connections.
I tried pfSense + PPTP RADIUS Authentication. Works like a charm and was a piece of cake to set up.
To make this answer useful for the OP, I'd suggest providing some detail about how it worked for you. – Magellan Oct 1 '12 at 5:47
I'm currently developing on a system with distributed computers around the country. All of these are sitting behind a NAT and are self-controlled. As a backup plan, in case Puppet fails or for other maintenance, I thought about a small VPN network to access the clients by SSH if I need to.
I've already connected the test-client successfully to the server, but I'm not able to ping or ssh to the client or vice versa (which is currently not needed).
The server is also secured by an iptables setup. I tried a couple of iptables entries which are known for OpenVPN, but none of them is working.
How do I have to set up the server to get SSH to 10.8.0.* or similar working?
Greets, Moritz
UPDATE: I found a misconfiguration on the client side. It was configured as tls-client instead of client. Fixing this made ping and SSH work both ways.
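(For completeness: the client misconfiguration above was the real culprit here, but if the server firewall were what was blocking SSH over the tunnel, rules along these lines are what would typically be needed. The tun0 interface name and the 10.8.0.0/24 subnet are simply the OpenVPN defaults; adjust them to your configuration.)
iptables -A INPUT -i tun0 -s 10.8.0.0/24 -p tcp --dport 22 -j ACCEPT   # allow SSH in from the VPN subnet
iptables -A INPUT -i tun0 -j ACCEPT                                    # or, more permissively, trust the whole tunnel interface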
I have found that OpenVPN works really well with puppet. Since you can re-use the puppet keys and certificates (pki) issued to the server and clients as your keys within OpenVPN. – Zoredache Oct 16 '13 at 16:31
When you say you cannot ping or ssh, are you saying you can't doing it from your client or from the server? – CIA Oct 16 '13 at 18:48
I was working on something similar today (connecting to IPMI private LAN IPs). I found that a good way of doing it was to use a PPTP VPN as I couldn't get OpenVPN to play ball.
Presuming you use Linux, here is a script I made to do it quickly (modify it to your needs, and make sure the eth interface in the iptables commands is right):
Centos 5:
rpm -i http://poptop.sourceforge.net/yum/stable/rhel5/pptp-release-current.noarch.rpm
Centos 6:
# use the corresponding rhel6 pptp-release RPM from the same poptop repository here
yum -y update
yum -y install pptpd            # the PPTP daemon itself, provided by the poptop repository
sysctl -p /etc/sysctl.conf      # reloads /etc/sysctl.conf; IP forwarding must already be enabled there (see below)
iptables -F
# allow the PPTP control channel (1723/tcp) and GRE in, and forward traffic between the ppp clients and eth1
iptables -A INPUT -i eth1 -p tcp --dport 1723 -j ACCEPT
iptables -A INPUT -i eth1 -p gre -j ACCEPT
iptables -A FORWARD -i ppp+ -o eth1 -j ACCEPT
iptables -A FORWARD -i eth1 -o ppp+ -j ACCEPT
service iptables save
chkconfig pptpd on
echo "localip" >> /etc/pptpd.conf         # the server-side VPN address belongs after the keyword
echo "remoteip" >> /etc/pptpd.conf        # the client address range belongs after the keyword
echo "ms-dns" >> /etc/ppp/options.pptpd   # the DNS server to hand out belongs after the keyword
echo "USERNAME * PASSWORD *" >> /etc/ppp/chap-secrets
service pptpd start
Make sure to enable IP forwarding in /etc/sysctl.conf
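Concretely, that means the forwarding flag has to be present in the file before the script's sysctl -p call, for example:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf   # the kernel setting PPTP forwarding depends on
sysctl -p /etc/sysctl.conf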
Further PPTP Help: https://www.digitalocean.com/community/articles/how-to-setup-your-own-vpn-with-pptp
Reliably getting PPTP through firewalls you don't control is going to be a lot more difficult then getting OpenVPN through firewalls. Getting GRE through some firewalls is not trivial. – Zoredache Oct 16 '13 at 16:19
Derek Sivers
Idea: Musician's own website as definitive source of all info
Some outside sites say, “We'll manage all your data!” - but I don't want to go to yet-another-website to enter all of my data, and trust them not to go out of business. In fact, I don't want to enter my info anywhere but my own website!
Then it's the web hosting company's job to spread that info to other sites.
How it works for the musician:
1. Log in to your own website.
3. While it's uploading, enter the song info: name, copyright, credits, lyrics, sample-start-time, etc.
6. That's it. You'll never need to enter that info or upload that song ever again.
Then your web-host can do the boring copying:
Your web-hosting company gives you some simple options:
Do you want us to send this to...
[x] MySpace
[x] Facebook
[x] iTunes
[x] Amazon
[ ] Napster
[ ] Pandora
[ ] Spotify
[ ] ReverbNation
How it works on the back-end:
For some websites, the distribution can be automated. The web host sends a server-to-server message to the remote company's servers, adding the necessary info and files. This is how digital distribution of music already works.
Because the company does it for dozens of clients per day, they can do it incredibly fast and cheap, so they don't need to charge extra for this hands-on service.
Who's doing it?
ArtistData is awesome, and the closest I've seen to this idea, but they don't host websites (yet). I heard of them after I came up with this idea two years ago.
I was still at HostBaby then, and everything I described above was my plan for “HostBaby 3.0”. Maybe they'll still do it. Since I left the company and signed a non-compete agreement, I'm not allowed to. But I hope someone does. | <urn:uuid:b00a20bb-f858-457d-a4d3-f342c7a698bd> | 2 | 1.554688 | 0.575424 | en | 0.946512 | http://sivers.org/mhost |
Definition of Sliding Commission Wage
by Karen Farnen, Demand Media
A sliding scale commission rewards top salespeople.
A sliding scale commission rewards top salespeople.
Basic Definition
A sliding commission is a payment to sales staff at varying percentages, depending on the amount of sales. “Amount” can mean either the number of units or the dollar amount. For example, the percent paid for a sliding commission in the auto industry can depend on the number of cars a person sells, regardless of price. In other industries, the commission often varies with the total dollar amount of a person’s sales.
Comparison with Other Types
A sliding commission contrasts with a fixed commission, which always pays the same percentage. For example, a company paying a fixed commission pays 20 percent on the dollar amount of all sales. A company paying a sliding commission pays a salesperson 20 percent if her sales fall below $100,000 in a certain time frame. It pays 25 percent on her total sales if they exceed $100,000. It pays 30 percent on her total sales if they exceed $200,000. A tiered commission is a common variation that pays the base rate of 20 percent on the first $100,000. It then pays the higher percentages only on the amounts exceeding $100,000 or $200,000.
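To make the difference concrete, take a hypothetical salesperson whose sales total $250,000 under the example rates above. The pure sliding scale pays 30 percent of the entire amount, or $75,000. The tiered variation pays 20 percent of the first $100,000 ($20,000), 25 percent of the next $100,000 ($25,000) and 30 percent of the final $50,000 ($15,000), for a total of $60,000.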
Increasing or Decreasing Sliding Scale
Sliding scale commissions move up or down, depending on the situation. Usually the percentage increases with increased sales, but sometimes it decreases. For example, an auto salesperson receives a 20 percent commission for selling five cars in a certain time frame. She receives 25 percent for selling 10 cars and 30 percent for 15 cars. This represents an increasing sliding scale. In the securities business, however, sales staff frequently receive commissions on a decreasing sliding scale. In this case, large investors pay a smaller percentage commission on stock purchases than do smaller investors.
Negotiable Price
Often sales people must sell at a fixed price. In this case, companies often pay a sliding commission as a percentage of total sales revenue. However, sometimes sales staff negotiate prices. In this case, a sliding scale based on gross margin maintains company profits. Sales expert Alan Rigg, author of "How to Beat the 80/20 Rule in Selling," gives an example of a company that targets 10 percent sales commissions and a 30 percent gross margin. If salespeople negotiate prices, the company pays a higher commission at a 40 percent margin, perhaps 15 percent. At the target margin of 30 percent, it pays a 10 percent commission. At a lower margin of 20 percent, it pays a commission of only 5 percent.
Other Considerations
Paying a higher commission for large sales helps energize your top producers, according to Suzanne Paling of "Entrepreneur." Market conditions change over time, however. Evaluate the success of your compensation methods, including commissions, on a regular basis. Ian Rheeder of "Strategic Marketing Magazine" recommends making your compensation method transparent and easy-to-understand. Sales people should be able to compute their expected income at any particular time. He also recommends a combination of commissions with bonuses and awards to keep sales staff motivated.
When I call hreq.getSession().invalidate(); app engine slows down tremendously. I looked at appstats and saw that on a page where no database calls are made, it was calling memcache.get and datastore.get 23 times each. The stack trace of these calls showed that it was being called from getSession(). This only happens on the production server. Every time I make a request to a page, it makes a bunch of memcache and datastore calls. This slow down goes away though when i restart my browser.
When I changed the code to simply set the isLoggedIn property of the session to false, rather than calling hreq.getSession().invalidate();, everything was fine.
As a test, I didn't invalidate my session, but I changed the value of my browser's session cookie, and the app engine exhibited the same behavior.
Is this a bug with the app engine?
It's not surprising that getSession() interacts with memcache and the datastore. Take a look at the _ah_SESSION entity with the datastore viewer. You will notice this is a Blob, and the Blob is the session information. Take a look at this.
App Engine includes an implementation of sessions, using the servlet session interface. The implementation stores session data in the App Engine datastore for persistence, and also uses memcache for speed. As with most other servlet containers, the session attributes that are set with session.setAttribute() during the request are saved to the datastore at the end of the request.
If you are invalidating a session then a new session would need to be created and this would require interacting with both memcache and the datastore.
Yes, but it makes about 20 database calls, and it does it on every single page I visit until i restart my browser. – Kyle May 30 '10 at 2:15
What communication is going on between Eclipse and my application server (JBoss) when I run the server from within Eclipse in debugging mode? How does this work?
When you start the server in debug mode, it listens on a specified TCP port. Eclipse connects to that port, and they talk using the Java Debug Wire Protocol (JDWP). Read the details here: http://java.sun.com/j2se/1.5.0/docs/guide/jpda/
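For illustration, "debug mode" here normally just means the JBoss JVM was launched with a JDWP agent argument; the port number below is only an example and must match the port Eclipse connects to:
# older JVMs
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8787 ...
# Java 5 and later equivalent
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ...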
I think it is called JDWP (Java Debugging Wire Protocol) - read more here
As is well known, in XHR (aka AJAX) web applications no history for your app is built, and clicking the refresh button often moves the user out of his/her current activity. I stumbled upon location.hash (e.g. http://anywhere/index.html#somehashvalue) to circumvent the refresh problem (use location.hash to inform your app of its current state and use a page load handler to reset that state). It's really nice and simple.
This brought me to thinking about using location.hash to track the history of my app. I don't want to use existing libraries, because they use iframes etc. So here's my nickel and dime: when the application page loads I start this:
window.setInterval(function () {
    if (location.hash !== appCache.currentHash) {
        appCache.currentHash = location.hash;
        /* ... [load state using the hash value] ... */
        return true;
    }
    return false;
}, 250);
(appCache is a predefined object containing application variables) The idea is to trigger every action in the application from the hash value. In decent browsers a hash value change adds an entry to the history, in IE (<= 7) it doesn't. In all browsers, navigating back or forward to a page with another hash value doesn't trigger a page refresh. That's where the intervalled function takes over. With the function everytime the hash value change is detected (programmatically, or by clicking back or forward) the app can take appropriate action. The application can keep track of it's own history and I should be able to present history buttons in the application (especially for IE users).
As far as I can tell this works cross browser and there's no cost in terms of memory or processor resources. So my question is: would this be a viable solution to manage the history in XHR-apps? What are the pros and cons?
Update: because I use my homebrew framework, I didn't want to use one of the existing frameworks. To be able to use location.hash in IE and have it in its history too, I created a simple script (yes, it needs an iframe) which may be of use to you. I published it on my site; feel free to use/modify/criticize it.
I think you'll have a tricky time knowing if a user went forward or back. Say the url starts at /myapp#page1, so you start tracking states. Then the user does something to make the url /myapp#page2. Then the user does something to make the url /myapp#page1 again. Now their history is ambiguous and you won't know what to remove or not.
The history frameworks use iframes to get around the browser inconsistencies you mentioned. You only need to use iframes in the browsers that need them.
Another con is that users will always go for their browser's back button before they will go for your custom back button. I have a feeling the delay on reading the history every 250ms will be noticeable too. Maybe you can do the interval even tighter, but then I don't know if that'll make things perform badly.
I've used yui's history manager, and although it doesn't work perfectly all the time in all browsers (especially ie6), it's been used by a lot of users and developers. The pattern they use is pretty flexible too.
I thought about that. Perhaps cluttering history is also a matter of application control - making sure the user ends up where the app leads him/her, so his/her position is allways clear? – KooiInc Feb 20 '09 at 20:26
There are 3 issues that tend to get munged together by most solutions:
1. back button
2. bookmarkability
3. refresh button
The window.location.hash based solutions can solve all three for most cases: the value in the hash maps to a state of the application/webpage, so a user can press one of "back"/"forward"/"refresh" and jump to the state now in the hash. They can also bookmark because the value in the address bar has changed. (Note that a hidden iframe is needed for IE related to the hash not affecting the browser's history).
I just wanted to note however that an iframe only solution can be used without monitoring window.location.hash for a very effective solution too.
Google maps is a great example of this. The state captured for each user action is way too large to be placed into window.location.hash (map centroid, search results, satellite vs map view, info windows, etc). So they save state into a form embedded in a hidden iframe. Incidentally this solves the [soft] "refresh" issue too. They solve bookmarkability separately via a "Link to this page" button.
I just thought it's worthing knowing/separating the problem domains you are thinking about.
4 issues actually. You missed "state". – T9b May 30 '11 at 18:16
All that stuff is important for supporting the full range of browsers, but hopefully the need for it will go away. IE8 and FF3.6 both introduced support for onhashchange. I imagine that others will follow suit. It would be a good idea to check for the availability of this functionality before using timeouts or iframes, as it is really the nicest solution currently out there - and it even works in IE!
I have several directories within a rar file. I am looking to extract only a certain directory out of the rar from the command line.
ex: in a rar /tmp /home /etc
I only want to extract the /home directory
Assuming you're using the unrar program most linuxes have, it's
unrar x foo.rar home
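If that does not pull in what you expect, it can help to list the stored paths first and then pass an explicit path mask (the archive name is just the example from above):
unrar l foo.rar            # list the paths stored in the archive
unrar x foo.rar 'home/*'   # extract only the entries under the home directory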
June 02, 2010
notes for talk at the panel "Why Is Gertrude Stein So Important?" at the ALA.
We get from avant garde modernism, by which I mean we get from Stein, the contemporary literature that we deserve. That in short is why Stein matters.
But what is Stein? I want to tell several stories about the Stein that we deserve.
One is the Stein that appears in the publication context of her time. And as example here I want to talk some about the 150 or so pages of Stein were published in the various issues of the journal Transition. Transition was edited mainly by Eugene Jolas while he lived in Paris and it came out monthly from 1927 until 1932 (after 1932 it came out less regularly, first from the Hague and then from New York, until 1938). So it interestingly, although not uniquely, charts the development of avant garde modernism before World War II. When Jolas reprints Tender Buttons in his journal Transition, he doesn’t just put it in the journal on its own. He puts it in a larger context, in a section titled “America,” which includes not only work by other American avant gardists such as A. Lincoln Gillespie but also work from a diverse range of genres and cultures such as fairy tales of the Aztec and Inca periods, a Mexican statue, a Columbian figure, and a Peruvian bowl.
The Stein that appears in Transition is one that is in dialogue with the wide variety of arts that are new to Europe as a result of imperialism. The editorials and reviews and essays of Transition make an argument that avant garde modernism’s forms are reflective of a Europe changed by imperialism, a Europe suddenly very much aware of how different cultures and their arts and their languages are entering and shaping European centers, over and over. Jolas does not really use the words “imperialism” or “colonialism,” he does again and again relate the avant garde to economic and political changes. Again and again he argues that this writing, a writing that at moments he calls a declaration of linguistic independence and at other moments the revolution of the word, was indebted to the disruption of the center.
But also, I’m interested in how some of the things that get said in accusation or dismissal about this moment, which with the arts other than writing gets called primitivism, are actually nuanced and complicated in the pages of Transition. Jolas not only juxtaposes work that is geographically and generically diverse, but also influence is often presented as a two or more way street. His juxtapositions of various arts point to disparate connections between the art of empire and the art of the colonies. There is little in Transition that suggests authenticity or any singularity of origin. The elsewhere becomes heterodox as the journal as a whole includes not just Aztec sculptures from the past but also contemporary paintings by Hopi artist Polelonema; not just Cuban sound poems but also poems from colonial Guadeloupe poet St John Perse. Modernism, the pages of Transition suggest, is contingent and full of uncertain rhythms and unexpected connections. And while Jolas’s juxtapositions do not manage to sidestep the provisional and asymmetrical ways that forms from the colonies enter into empire, they do manage to avoid primitivist assumptions of the colonies as pure and collective.
I am not saying anything new here.
Another story. The story that I was taught about avant garde modernism in school was that around 1913 Stein published Tender Buttons (or around 1908 Italian poet Tommaso Marinetti proclaimed the beginnings of futurism or around 1910 Roger Fry organized his exhibition “Manet and the Post-Impressionists” or . . . ) thus beginning an aesthetic revolution that breaks the constraints and conventions of nineteenth century national European literary traditions. I was taught that this was primarily an overturning of one western literary practice with another western literary practice. I was taught in other words that the conventions of nineteenth century literature began to be seen as restrictive by a certain small group of writers and they thus reacted by indulging in a sort of formal convulsion that used “new” or “strange” forms of writing to break from these restrictions. Basically, I was taught that the west made up avant garde modernism all on its own.
I should have known better; it was the late 80s and early 90s after all and postcolonial theory was unavoidable. But I didn’t and so I carried this story with me to a job teaching literature at a state university in the middle of the Pacific. Because I was a new teacher I had to teach a lot of introduction to literature courses. In one course, an introduction to poetry and drama, I assigned Antigone, Shakespeare’s The Tempest, his sonnets, Stein’s Tender Buttons, and Hawaiian playwright Alani Apio’s Kāmau. The works that I chose were somewhat accidental, a combination of meeting the requirements (a Greek and a Shakespeare), works I had taught before (Tender Buttons), and one that I wanted to think more about (Kāmau). I was not intending to make a point about avant garde modernism.
Yet in this island of competing and complicated identity claims in the middle of the Pacific, students forced me to read Stein in new and exciting ways. One argued that Tender Buttons illustrated the Hawaiian concept of hakalau, of looking astray. Another argued that it was written in a form of Pidgin, a European pidgin. Out of this I realized something that Peter Quartermain argues, that Stein’s multilingual childhood and her adult life in voluntary exile had more to do with her writing than I had previously realized. But I also realized that the forms of avant garde modernism that I had been seeing as “new” or “strange”—the polyvocality, the disjunction, the repetition, the unconventional syntax, the lack of a narrative arc—are actually, as my students kept patiently pointing out to me, the exact same techniques used in oral literary traditions. It was through this moment that I learned to listen to claims such as Fanon’s that “Europe is literally the creation of the Third World.” The Tender Buttons that is in Transition is the same Tender Buttons that those students in Hawai’i were reading.
But I really don’t need all this round about-ness, all this talking about how others pointed out something obvious to me about Stein because I could also just as easily quote Stein. When Stein writes in “What Is English Literature?,” “As the time went on to the end of the nineteenth century and Victoria was over and the Boer war it began to be a little different in England. The daily island life was less daily and the owning everything outside was less owning, and, this should be remembered, there were a great many writing but the writing was not so good.” She says something similar in her round about way about how those nineteenth century national literary traditions were feeling a little less than useful in a changing world.
Stein is writing in a time when cultures and their languages and their literatures are uncomfortably hitting up against one another. While avant garde modernism was certainly a reaction to nineteenth century national literary conventions, it was a resistance that most likely felt crucial to its writers not because nineteenth century national literary conventions suddenly felt merely boring or so-nineteenth-century at the beginnings of the twentieth century but because imperialism had dramatically changed so many things. When Stein was in Paris writing in Tender Buttons “act as if there is no use in a centre,” 600,000 troops and 200,000 workers were brought into France from the colonies. European immigration to urban areas such as Paris was also very high at the time.
Still… it is not that Stein is writing an imitative oral poetry in a time of high literacy, but she is drawing from something closer to what Kamau Brathwaite calls (speaking about contemporary poetries) “the notion of oral literature,” something approximate but not directly imitative, something oral and yet literate. And just as critics such as Brent Hayes Edwards, Nathaniel Mackey, Fred Moten and others have complicated the idea that the primary influence for western black traditions is orality, it also makes sense to complicate the other side of this argument: that Euro-American avant garde modernism draws primarily, even if in resistance, from the literate European American tradition.
If we accept Edward Said’s and other’s claims that European nineteenth century national literatures are tied to the rise of the bourgeoisie and thus also tied to the rise of colonialism, then avant garde modernism’s move away from European national literary traditions could cautiously be read as reformatory. Much about this work severs the one on one relationship between national literatures and national languages. This story of cultural exchange that comes out of Stein’s work is built more around uneven attempts at universalisms (are there any other sort?) than contained multiculturalisms or respectful diversities. I have struggled as I wrote this with finding the proper term for this unequal exchange that it is not hybridity nor syncretism nor fusion. But that also does not damn with charges of appropriation. There is undeniably no mutuality here but instead there is some sort of creeping, undercover international formal migration. At moments then, how imperialism shows up in modernism looks naïve: Eliot talking about the primitive and his drum. But it is not only naïve.
Yet at the same time, do I need to remind this?, avant garde modernism is very much a colonial literary tradition. It is not an anti-colonial one. As is obvious, again and again even as avant garde modernism critiques those 19th century colonial literary traditions and at the same time it colludes with the politics of colonialism that figure the colonies as primitive and the empire as civilized. And yet I still think that to really begin to understand literature’s possibilities as not only representing but also being attentive to the contemporary moment, a moment of globalization, requires not only acknowledging avant garde modernism’s culturally inflected formalism but seeing it as a complicated thing, one that is neither all critique nor all collusion. It would be absurd to suggest that avant garde modernism is innocent--that it is as we say at the turn of this century, “multicultural”--but it also might be missing something to avoid discussion of how it might have felt to certain turn of the century intellectuals impossible to not use the forms of oral traditions, or how it might have felt as if doing this was something that required a certain blindness or a certain allegiance to nineteenth century national literatures.
As oral traditions are social, networked traditions, avant garde modernism at its best attempts a version of a social, networked writing. It is at moments attuned to finding different connections amid the frequencies of language, amid the noisy way that words and literary forms are public business. The usually stable configurations between languages and national identity are frequently questioned. The linguistically atypical works in English of Stein force readers of English to no longer be comfortable in their English language skin. They point to how no language is a native language, how every language is formed out of other languages. Avant garde modernist works also often attentively expand the aesthetic into the social without neglecting relations, entanglements, implications. They often point out that when it comes to the improvisations of literature, connection never really obeys the rules. It is not linear. It is webby and tentacled and intrusive and disordered.
But back to that contemporary literature that we deserve. I am again stating something obvious, that like it or not it is modernism that has shaped the contemporary whether in reaction or in imitation. So it matters how we read it. Those who tend to see a formalist avant garde modernism tend to read the poetries of the last half of the twentieth century as formalist. Those who see a western avant garde modernism tend to assume that the west created its experimentalism in isolation. But a different avant garde modernism, one about that webby dialogue between cultures, points to a webby, tentacled late twentieth century poetry. And this is the contemporary poetry that we deserve.
NARRATOR: Imogene Smith Washington
DATE OF INTERVIEW: February 19, 1985
TRANSCRIBER: Jackie Kinney (9/1985)
This is an interview with Imogene Smith Washington for the Teaneck Oral History Project of the library on February 19, 1985 and this is June Kapell.
(I) Imogene, what first brought you to Teaneck? Well, first of all, how long have you been here?
(N) Since 1963 so what would that be, twenty two years.
(I) And what brought you to Teaneck specifically?
(N) Well, we had been looking in Westchester and I didn't really see anything that appealed to me and a friend of ours who was a broker said, let me take you to another area. Let's go over to Teaneck, NJ. My husband's aunt lived in Englewood so we were a little bit familiar with the area and we were aware of the fact that Teaneck was sort of like the ideal town and people were integrating and my husband's aunt used to take us sometimes on a Sunday afternoon through different areas and say a black person lives there and a white person lives there and they seem to have been scattered pretty much throughout Teaneck and the houses were new, modern and as I say, we were influenced by our friend the broker because the prices were more reasonable than the homes in Westchester too.
(I) And you had how many children?
(N) I had two. I had a daughter ten and a very active son who was three at that time. And my daughter was in private school in New York. We lived down on the lower east side of Manhattan and we felt that we wanted to get into a school situation where we could take her out of private school because we realized we'd soon have to face having another one in school and also, as I say, my son was active and we felt that the suburbs had more to offer as far as freedom of action and so forth. I guess that's how we came to Teaneck.
(I) And you stayed. Your daughter went right in to . .
(N) To junior high school. She went into Benjamin Franklin. She went right into junior high school.
(I) And your son, which school. .
(N) Well he was three at the time so he didn't start school but when he did start school, he went to Eugene Field.
(I) And there were buses at the time. Was it the open enrollment or . .
(N) I have to think back now. Golly. We were involved with the open enrollment, I don't remember if my son was already in school or not.
(I) Well it was only one year when they had the open enrollment and I think the parents had to provide their own transportation.
(N) I can't really remember.
(I) Did you become active in the school at that time, right away?
(N) Yeah. I became very active at Eugene Field. I served as vice president of P.T.A. the first, second or third and in some capacity as vice president. I never wanted the job as president but I did serve as vice president the entire time that my son was there and we had a lot of I guess you'd call them pilot projects. We were working with integration and. .
(I) Are you referring specifically to Eugene Field now?
(N) Eugene Field, right. The parents in working with the P.T.A. one time we started a program where the black children would go home for lunch with the white children and then the white children would come home for lunch with the black children and so on. It was the idea that the black children were always eating in school, you see, and the white kids went home for lunch. It was just different ideas of people in the P.T.A. trying to come up with some ways
(I) Was this lunchroom project a successful one?
(N) Yes, but it didn't last very long. As I say, I think most of them were pilot programs. There was one thing that, in fact I initiated it as one of our little projects of having international dinners and I think that is still going on in several schools to this day. As a way of people mixing. Everyone was to bring an ethnic dish or their favorite dish, whatever, and it was family style and we all tasted it, the Greek food and the Spanish food and what have you, and the idea caught on and as far as I know, it still continues to this day. It even spread to other schools. My youngest son by my second marriage, my stepson I should say, goes to St. Cecelia's and we got a call just last night that they are having an international dinner. That seems to be a good mixer.
(I) Then you still have one child in school now.
(N) Yes. Jay Washington is my stepson and he's at St. Cecilia's. He's a senior there.
(I) Are you still active in the parents group?
(N) Yeah, over there. Right.
(I) Some colleges and universities have parents groups now too. But you had other activities in addition to P.T.A. You didn't mention Ben Franklin, did you do things there too?
(N) I wasn't quite as active as Ben Franklin as I was at Eugene Field. I guess because of the personalities of my children, you know, my daughter was a good student. She was quiet, a bookworm and so on and whereas with my son, I felt I'd better be more involved.
(I) Well, you had a number of other activities. Do you want to start with B.E.A.T. or N.E.C.O.? Which one do you want?
(N) Well, which came first? N.E.C.O. was first. N.E.C.O. was a group of us got together and we were down Archie Lacey's basement and we started talking about just problems in general. At that time, you know, the northeast section was becoming predominantly black. White families were moving out and so we started having complaints like, you know, the Public Service guy or the garbage men walk all over the lawn or they don't go to the back door. They ring the front doorbell. And some of the things were perhaps not as important as others but out of these discussions and problems, N.E.C.O. was formed and it dealt with education, educational problems. It dealt with the home improvements and maintenance of our homes. I just can't remember all, but it was very diversified in that it tried to cover, we tried to cover the problems that we were either experiencing or anticipating because of the fact that the area was becoming almost completely black.
(I) Besides Archie and Theodora, who were some of the other people who. .
(N) Mr. and Mrs. Smith. I think her husband's name was George Smith and her name was Mary Louise Smith. I'm not sure; it has been a long time. And there was Leon Gilchrist and Nelson.
(I) You were all neighbors. Is that how this, how these groups started?
(N) Yeah, we were all neighbors. Right. We were all in the same northeast section. Well I guess everyone will remember the old N.E.C.O. dances. We used to even swing Matty Feldman around at them. But they were integrated. We always had a theme. And we used to have them out at that place in Paramus, the Bergen Mall or something. It was downstairs and we always had the dances there because it was very reasonable, everybody at those times, they just brought their own bottles or we brought snacks and we had Wally Richardson, you know. He used to play for us.
(I) Yes, he was the entertainer.
(N) And it was really integrated.
(I) Did N.E.C.O. ever achieve a political force? Or was it, wasn't it designed to be a political force?
(N) There was a political portion but I don't think that was utmost. I mean it didn't have priority. But yes, we were involved in the politics, particularly local politics as far as Teaneck was concerned.
(I) School Board elections or council elections?
(N) Yeah, I'd say. Local type politics we were involved in. Council, school board. I can't really pinpoint our involvement but there was discussions, there was work and most of us worked on the polls and we used to meet over at Gladys and Ike McNatt's house on Saturdays when there was an election going on and we just flooded the town with pamphlets and rang doorbells and that kind of local comradeship, you know.
(I) So N.E.C.O. was community-oriented including education but B.E.A.T. was entirely different.
(N) B.E.A.T. was an entirely different organization but it was a community organization. The only difference I'd say was that B.E.A.T.'s only interest was education. Whereas N.E.C.O. had its fingers in many pies and we were concerned with the general livelihood of the people in the northeast section whereas B.E.A.T. which was started in my home was concerned with the fact that there were very few black teachers in proportion to the number of black students in the Teaneck system. We were concerned with the fact, with the textbooks. We had a textbook review committee and made up a list of textbooks that we felt either had stereotyped the black person or had derogatory remarks and so on and we were successful in getting a Teaneck Textbook Committee.
(I) Let me just go back to the beginning for just a moment. Was this primarily the same group that organized B.E.A.T. as was active in N.E.C.O.?
(N) No, it was not the same group at all. Some of us overlapped in that my husband, my late husband and myself and Archie and Theodora Lacey and well there was Byron Whitter and so on. They were people who had been involved in both N.E.C.O. and B.E.A.T. but then with B.E.A.T. we got into Rev. Dixon who was new to all of us. We sort of just met. And then there was Tommy Scarborough who was new. Not as a friend but new as involved in an organization of this type.
(I) Then B.E.A.T. expanded its membership to other parts of town too.
(N) That's right. We went out and we sent letters out and we talked to groups and we beat the bushes more or less because we said, hey, this thing is effecting all of us and then we started having our meetings at the Town House and they were opened to the public and then we attended Board meetings and if you were at any of those board meetings, you'd know that we were quite verbal, quite outspoken and we met with the superintendent of schools, Dr. Killory. We met with him and outlined some of our grievances and asked that we work together to primarily to get more black teachers into the school system and I think that it was due to our efforts that some teachers who are there now got in because of the fact that B.E.A.T. actually went out and looked for candidates and made recommendations and screamed and so on. B.E.A.T. I think too was more, N.E.C.O. was quieter. N.E.C.O. was more on a social level, we had a lot of whites who belonged to N.E.C.O. too who lived in the area and as I say, we had the annual N.E.C.O. dance which was about 50/50. N.E.C.O. there was not the atmosphere I think was friendlier. There was more of a mixing and of oneness. By the time B.E.A.T. came along, we were beginning to feel I'd say more militant. More black. More aware of our blackness and it seems as if the problems were, especially where education was concerned, so many of us felt that our children were being shortchanged. So many of us felt that there was an attitude of not expecting our kids to succeed or do well, lack of interest and there was just a whole new atmosphere when B.E.A.T. came along.
(I) Yes,I have talked to some of the other young people as well and they too felt this militancy that came along. And I don't know whether B.E.A.T. stirred it up or just reacted to the stirring. Just the times, do you think?
(N) I don't know whether you'd say stir it up because you have to realize that there was a time span there where N.E.C.O. was during the time of coalescing, of getting together, everybody was thinking of Teaneck was just being the ideal town where blacks and whites lived together and it was just a different time. Now when you get into B.E.A.T., you are getting into the time when we are talking about black power, we are talking about, well there was already a change in attitudes and we were beginning to think more in terms of blacks have to do for themselves, we've got to stop leaning on other people or expecting other people to support us. We've got to make our own demands, stand up for ourselves and blah, blah, blah. This was going on throughout the country in the colleges and everywhere and we were, you know all of a sudden the schools were opening up and they were beckoning to our black students and then we were finding that our black students weren't prepared and they were having problems and they were having to have special tutoring from someone. So then we started to turn our view to why aren't they prepared. The doors are opening but they are not going out on the same level and so then you start looking back at your elementary school, your preparation, and. .
(I) I'd just like to ask now, when you are talking about this preparation of the students, are you talking about nationally or specifically here in Teaneck?
(N) B.E.A.T. addressed itself to Teaneck. What I am talking about now, in retrospect, is that it was a national problem but B.E.A.T. did not deal with it on a national level.
(I) In Teaneck itself, did your find a double standard?
(N) We felt that there was, yes. Right. We did. And we felt that certain, there were certain parents in Teaneck who were well known, who were aggressive, who had the educational background, professional backgrounds, that they got, they demanded and they got attention from the system. But there were a lot of people, blacks who were moving to Teaneck, who were blue collar workers and they did not feel that comfortable with expressing themselves so we set about to set up workshops to teach them the questions to ask, how to mind out what's going on. I would say that based on the turnout to the B.E.A.T. meetings and so on, most people were beginning to feel that there was a double standard. And certainly the proportion of teachers compared to the students was way out of balance so our youngsters had no role models, they had no one that they felt was taking a particular interest in them. The only way you were going to get it was to get out here and make this known and demand that some of these situations be alleviated. And they were to a certain extent. What can I tell you? At that time, I guess every time you managed to get another black teacher in, you'd felt you'd accomplished something, you know.
(I) Or stimulated some of the kids to go on to do extraordinary things. There were other things. In your spare time, there was also fair housing.
(N) Yes, well I had been involved in New York and so I guess it was just natural that when I came to Teaneck, I would get involved with Fair Housing and McNatt, who was a very close friend of ours, was I believe president of Fair Housing Council at that time and so we started going to meetings and we went to discussion groups and of course then too everything was social. You know, Fair Housing used to have its annual dance and everybody who was anybody was there.
(I) And you brought your own bottle to those too.
(N) Right. You brought your own bottle. There was a feeling of comradeship, of getting it together and it was very nice. They were good days. They were enjoyable.
(I) Well you say they were good days. Is all of that finished?
(N) Well Fair Housing is still going strong and I think that the problems are probably just as great today as they were then. I know Lee Porter, the director, is a very close friend of mine. I've known her through the years. And they still have problems with the rental of apartments and those kinds of things but I guess it was in its infancy when I was involved.
Innovation under Attack
September 22, 2005
Publishers have sued to stop Google Print, a search engine for books, on the theory that it’s an infringement of copyright to make digital copies of copyrighted books, even if they never show those copies in their entirety to anyone.
The publishers’ position is anti-innovation in a very fundamental way. In the analog world, there’s a clear distinction between “using” a copyrighted work (say, reading a book) and “copying” it (say, using a photocopier). Copyright law says that you’re allowed to use a book you legally own, but generally speaking, you can’t make copies, at least not in a commercial product.
But the “physics” of the digital world are different. Every “use” of content involves the creation of a copy of that data. When you read this web page, dozens of copies of the document were created as it was passed across the Internet. If making a digital copy is a copyright infringement, that means that no one can use their copyrighted content on digital systems without the explicit permission of the copyright owner.
Fortunately, that’s not how the courts have ruled in the past. In 1984, the Supreme Court held that it was a fair use to make personal copies of TV shows for the purpose of “time shifting.” In the 1999 Diamond decision, the Ninth Circuit held that “space shifting”–making copies of music for listening on an MP3 player is a fair use. And in 2002, the Ninth Circuit held that displaying thumbnails of copyrighted images is a fair use. In each case, the court appreciated that new technological realities made the copying involved in these uses fundamentally different than the copying prohibited by traditional copyright law.
Unfortunately, judges have not always been so clear-sighted. In the 2000 case, a stubbornly literalist district judge held that storing copies of CDs on’s servers for future transmission to customers (all of whom had shown they were legal owners of the CDs) was not a fair use. Unfortunately, that case was settled before it could be appealed.
The courts need to clearly say that the mere act of making a digital copy is not a violation of copyright. What matters is how those copies are used. Fortunately, I think the folks at Google understand what’s at stake, and they know that the future of their business may depend on this issue. They are in the business of organizing the world’s information, most of which is owned by other people. If they have to get permission from each individual copyright holder, many of the innovative things they’d like to do with that information will become logistically impossible. So I’m crossing my fingers and hoping that Larry and Sergei fight this thing all the way to the Supreme Court.
I've covered a lot of research on how to make your life better but many people struggle with implementing changes because it seems like a major undertaking. It doesn't have to be.
You can make strides in five fundamental areas by just sending five emails.
Every morning send a friend, family member, or co-worker an email to say thanks for something. Might sound silly but it's actually excellent advice on how to make your life better. There's tons and tons and tons of research showing that over time, this alone — one silly email a day — can make you happier. Harvard professor Shawn Achor writes in The Happiness:
This is why I often ask managers to write an e-mail of praise or thanks to a friend, family member, or colleague each morning before they start their day's work — not just because it contributes to their own happiness, but because it very literally cements a relationship. [The Happiness
(More on increasing happiness here.)
2. JOB
At the end of the week, send your boss an email and sum up what you've accomplished. They probably have no idea what you're doing with your time. They're busy. They have their own problems. For your boss, this let's them know what you've been up to without having to ask and saves them from wondering and worrying. They'll appreciate it and probably come to rely on it. For you, it's proactive and shows off your efforts, which Stanford professor Jeffrey Pfeffer says is the key to success in any organization:
…you should make sure that your performance is visible to your boss and your accomplishments are visible. Your superiors in the organization have their own jobs, are managing their own careers, are busy human beings. And you should not assume that they're spending all their time thinking about you and worrying about you and your career. [Power: Why Some People Have It and Others Don’t]
More on improving your work life here.
Once a week, email a potential mentor. Doesn't have to be related to your job. Who do you admire that you could learn from? As I've blogged about before, mentors are key to success.
Any person lucky enough to have had one great teacher who inspired, advised, critiqued, and had endless faith in her student's ability will tell you what a difference that person has made in her life. "Most students who become interested in an academic subject do so because they have met a teacher who was able to pique their interest," write Csikszentmihályi, Rathunde, and Whalen. It is yet another great irony of the giftedness myth: In the final analysis, the true road to success lies not in a person's molecular structure, but in his developing the most productive attitudes and identifying magnificent external resources. [The Genius in All of Us: New Insights into Genetics, Talent, and IQ]
It's the age of the internet, folks. If you have Google and half an ounce of resourcefulness it's not that hard to find almost anyone's email address. If they have a website, their email is probably listed on it.
What do you write? Try Adam's method or Tim's method or Ramit's method.
(More on the power of mentors here.)
Email a good friend and make plans. What does research say keeps friendships alive? Staying in touch every two weeks. Got 14 friends? Then you need to be emailing somebody every day. And what should you email them about? Make plans to get together. Research shows the best use of electronic communication is to facilitate face-to-face interaction. As Stephen Marche writes in The Atlantic:
The results were unequivocal. "The greater the proportion of face-to-face interactions, the less lonely you are," he says. "The greater the proportion of online interactions, the lonelier you are." Surely, I suggest to Cacioppo, this means that Facebook and the like inevitably make people lonelier. He disagrees. Facebook is merely a tool, he says, and like any tool, its effectiveness will depend on its user. "If you use Facebook to increase face-to-face contact," he says, "it increases social capital." So if social media let you organize a game of football among your friends, that's healthy. If you turn to social media instead of playing football, however, that's unhealthy. [The Atlantic]
(More on improving friendships here.)
These "weak ties" are the primary source of future career opportunities. As
"But I don't know what to say." Do any little thing that benefits them, not you. Try Adam Rifkin's 5 minute favor.
Or just send them a link they might find useful.
Still stuck? Okay, send them the link to the post you're reading right now. If this has helped you make your life better it can probably help them too.
(More on how to network effectively here.)
More from Barking Up The Wrong Tree... | <urn:uuid:c38fe30b-30d2-42ba-8624-4d29d31d5e51> | 2 | 1.789063 | 0.065612 | en | 0.969687 | http://theweek.com/articles/453541/improve-life-by-sending-five-simple-emails |
DiscardedYKTTW All Scotsmen Are Cheapskates YKTTW Discussion
All Scotsmen Are Cheapskates
Caledonian means mean.
(permanent link) added: 2014-01-06 06:54:33 sponsor: SonofRojBlake (last reply: 2014-01-07 05:17:55)
Seen It a Million Times. We have All Jews Are Cheapskates, but that's rather US-specific.
In media originating in the USA, if you want to portray a character who's mean with their money, the laws of stereotyping along with the fact that You Have to Have Jews means they'll usually be Jewish.
In the UK the usual preferred racial stereotype for meanness is the Scotsman.
Live Action Television: The Young Ones features a bit with Arnold Brown, who introduces himself as a "Scottish Jew - two racial stereotypes for the price of one".
Radio: In I'm Sorry I Haven't a Clue, Graeme Garden and Barry Cryer's recurring characters Hamish and Dougal greet each other by name, followed invariably by the phrase "You'll have had your tea..." (i.e. the character is implying "...so I won't be offering you anything to eat or drink.").
Replies: 4 | <urn:uuid:da044faa-9c37-4358-946a-ac9a0e6dcc57> | 2 | 2.0625 | 0.038475 | en | 0.916884 | http://tvtropes.org/pmwiki/discussion.php?id=vio2yrfap9durg6gphbinp3v&trope=DiscardedYKTTW |
From Uncyclopedia, the content-free encyclopedia
Whoops! Maybe you were looking for Logic?
“Who's gonna make me a sammich, then?!”
~ Oscar Wilde on Antifeminism
Antifeminists are a kind of feminists that are against everything, especially other feminists.
Who they do think they are
Antifeminists are men that think that they are better off not listening to any of that self assertive gibberish intellectual college girls are spreading around these days.
Antifeminists claim themselves to be less sexist and more gender neutral than any other feminists. This is because they are not interested in gender social constructive theory at all, they are more into the very biological pussy itself - or at least - so they wish they were.
A minor fraction of antifeminists consists of women who have taken the feminist propaganda about women's rights to make their own choices a little too seriously and therefore, to the feminists' rage, have consciously chosen to either become a porn model or marry some filthy rich lawyer that can support them as housewives at home while the children are not yet teenagers.
What they do think
Antifeminism is a very straight forward ideology. Most antifeminist ideological statements can be made by the teenage inversal technique, taking an ordinary feminist statement and add the word - not.
An example:
Cquote1 The female sex is subordinated by the patriarchacy - not! Cquote2
As you see this form of argumentation is kind of waterproof and can only be beaten by rhetoric experts using the russian reversal technique.
The above is one of the gayest lines ever written by a woman or man, women suck - not!
What they do
Antifeminist movements most frequently attract angry young men who hate any women they can't get, which most of the time means any women except those without clothes in a porn magazine. Most anti-feminists suffer from a deep heterosexual urge which ought to make them more interested in getting a haircut and shaping up their act before entering the ordinary pub in order to get some real women. Unfortunately the only urge that goes deeper than their sexual desire is hunting up a real fat furry feminist dyke to scorn so as to keep their ideological preferences alive and healthy. *Static* The feminists are everywhere! *Chhhhshk* Fight the female supremacists!!! *Chhhshk* Ahhh!!! *Biting noise*
What ?
Why, don't ask me! Do better your self, Dammit!
I can easily use Netcat (or Socat) to capture traffic between my browser and a specific host:port.
But for Linux, does there exist any command-line counterpart of a Squid-like HTTP proxy that I can use to capture traffic between my HTTP client (either browser or command-line program) and any arbitrary host:port?
2 Answers
Both Perl and Python (and probably Ruby as well) have simple kits that you can use to quickly build simple HTTP proxies.
In Perl, use HTTP::Proxy. Here's the 3-line example from the documentation. Add filters to filter, log or rewrite requests or responses; see the documentation for examples.
use HTTP::Proxy;
my $proxy = HTTP::Proxy->new( port => 3128 );
$proxy->start;
In Python, use SimpleHTTPServer. Here's some sample code lightly adapted from effbot. Adapt the do_GET method (or others) to filter, log or rewrite requests or responses.
import SocketServer
import SimpleHTTPServer
import urllib

class Proxy(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Fetch the requested URL and stream the response straight back to the client.
        self.copyfile(urllib.urlopen(self.path), self.wfile)

httpd = SocketServer.ForkingTCPServer(('', 3128), Proxy)
httpd.serve_forever()
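A quick way to exercise the proxy (Python 2, to match the snippet above; the host and port are assumptions — use whatever you started the proxy on, here localhost:3128):

import urllib

# Request a page through the local proxy rather than connecting directly.
proxies = {'http': 'http://localhost:3128'}
page = urllib.urlopen('http://example.com/', proxies=proxies).read()
print(page[:200])

Any URL works; the point is simply that the response now passes through (and can be logged, filtered, or rewritten by) the Proxy handler.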
I believe this is exactly the kind of solution I was looking for. Thanks. – Harry May 18 '12 at 4:05
The Python one worked for me for HTTP requests but it doesn't seem to support HTTPS requests. – Russell Silva Dec 12 '14 at 0:00
This may not be the best solution, but if you use any proxy then it will have a specific host:port so the netcat solution with still work, albeit you'll have to pick apart the proxy meta-data to make sense of it.
The easiest way to do this might be to use any random anonymization proxy out there and just channel all the traffic through netcat. (I.e., set your browser proxy to localhost:port and then forward the data to the real proxy.)
If you want to have a local proxy then a SOCKS5 proxy with ssh -D <port> localhost is probably your easiest option. Obviously, you need to tell your browser to use a "socks" proxy rather than an "http" proxy.
So, something like this (assuming your local machine accepts incoming ssh connections):
ssh -fN -D 8000 localhost
nc -l 8080 | tee capturefile | nc localhost 8000
Naturally, that'll only work for one browser connection attempt, and then exit, and I have not attempted to forward the return data to the browser, so you'll need your full netcat solution.
Thanks. Do you have any name suggestions for anonymization proxies for Linux? I would prefer ones that are not as heavyweight as Squid (in terms of size, memory, startup speed), are startable by a non-root user, etc. – Harry May 17 '12 at 14:52
Also, I didn't follow your SOCKS5 proxy example. Could you please explain what is going on with the 2 invocations above? I, e.g. know that ssh -fN -D ... starts a SOCKS5 proxy on localhost port 8000, but what does nc -l ... do? And, to which port should I make my browser point to: 8000 or 8080? – Harry May 17 '12 at 15:37
@harry for an anomyzation proxy have a look at privoxy – Ulrich Dangel May 17 '12 at 17:17
When I say "anonymization proxy" I don't mean something you set up yourself: I mean an existing proxy out there on the net. @UlrichDangel suggests privoxy and I'm sure there are others too. – ams May 18 '12 at 8:04
The nc commands accept all traffic from port 8080 and forward it to 8000. I did not provide a full netcat solution because your post suggested you already knew how to do that part. – ams May 18 '12 at 8:06
Jazz &
* Julian "Cannonball" Adderley
Profile & discography.
* Chet Baker: Lost & Found
A bit messy with a text-based browser, but navigable.
* Art Blakey
* Ornette Coleman
A New Jazz Archive page.
* John Coltrane
Maintained by Lenny Dyadel. Messy with a text-based browser; offers discography, biography, etc.
* Miles Davis
* Buddy DeFranco
Biography, itinerary, discography, fan contributions, etc.
* Duke Ellington
* Dizzy Gillespie Page
Short bio and a set of links.
* Dexter Gordon
* John McLaughlin
* Taj Mahal
Huge site, somewhat obscurely organised at times, but everything's there somewhere.
* Wes Montgomery
* Charlie Parker
* The Art of Pepper
Art Pepper. A bit messy with a text-based browser, but navigable.
* Sonny Rollins
* John Surman
* Fats Waller
* WestbrookJazz
A site devoted to Mike and Kate Westbrook, including news about their projects, recordings and performances.
* Aziza Mustafa Zadeh
* Joe Zawinul
Italian Fan-Site (offering English & Italian versions). Takes in Weather Report too.
Reviews & Lists
* 52nd Street Jazz
* The Jazzlist
Maintained by Stan Anson. Very useful resource, offering collations from printed sources of recommended jazz recordings, plus information about buying jazz on line.
* David Reitzes
Two main offerings:
1. Jazz Best of the Best -- a selective list from David Reitzes, giving suggested best recordings (and brief reviews) of suggested best composers/performers. Messy with a text-based browser, but interesting.
2. Music essays and reviews -- jazz and pop.
Record labels
* Blue Note Records
One of the best-known labels; the site offers a good deal of information, though it's a bit messy with a text-based browser (why do Web-page programs insist on telling you that there's a graphic? I'm using a text-based browser because I don't care).
* BMG Classics: Jazz
* Impulse!
Miscellaneous Sites
* African Music Encyclopedia
A bit difficult to use at the beginning with a text-based browser, but worth sticking at.
* American Jazz Symposium
Huge site, with masses of information and links.
* The Blue Highway
The blues: the music, U.S.-based radio listings, the people, the Web.
* Blues World
E-magazine: articles, photo gallery, CD reviews, 78rpm record auction, plus the homepages for Blues & Rhythm: The Gospel Truth, Europe's leading English language blues mag, and Vintage Jazz & Blues Mart, record trading since 1954.
* Cheap or What! CDs
* Europe Jazz Network
* InterJazz
"The Internet's Jazz Plaza, where you can find all the resources on Jazz world-wide. From Festivals, Clubs, Record Labels, Industry Yellow Pages, to live IRC sessions with top artists that perform in venues participating in our programme."
* Japanese Music
Includes details of Japanese jazz musicians.
* Scott Joplin Rags
Four rags by Scott Joplin in zipped Finale (r) 3.5 *.MUS file format (ed. Randy D. Ralph).
National Geographic Voices: Ideas and Insight From Explorers
As a lifelong storyteller turned manned submersible pilot Erika Bergman is a passionate ocean explorer. She studied chemical oceanography at the University of Washington while working as a diesel engineer aboard the tall ship S/V Lady Washington and a steam ship engineer aboard the S/S Virginia V. Since then she has worked as a submersible pilot and engineer completing sub dives for exploration, research and filmmaking. Erika hopes to inspire enthusiasm for ocean awareness using these exciting inner space vehicles.
Crashing Into Ice: The Impact of Climate Change, on My Head
Ruby, Françoise, and I are barefoot and wearing t-shirts as we conduct sea bird surveys from the prow of the M/V Cape Race. Between shifts we close our eyes, the sun warms our faces and it feels downright tropical. Opening our eyes again, we are reminded of where we are. Looming in the distance are massive, glassy ice bergs, which we will soon be swimming by.
The View From Cuba: Photo Updates
Experience Erika’s travels through her latest photos, as she travels to Cuba to lead a cross-cultural ocean education program.
2,100 Feet and Holding: Inside the Mind of a Submarine Pilot
There’s nothing between me and complete corporeal implosion but a 3 inch thick dome of plexiglass. What goes through the mind of a submarine pilot two thousand feet below the surface of the ocean? And better yet, what goes past the window?
Going Live in 3…2…1: A Google Hangout Inside a Manned Submersible
Sunlight reflects off pale blue walls and brightens the interior of the small space in which I am perched. A brightly colored umbrella rests above me to block the intense Caribbean sun from melting my little laptop. Under the open hatch, I begin an Internet broadcast from inside a submersible.
Classrooms Under the Sea: Descending into the Deep Reefs of Curaçao
One thousand feet below the surface of the ocean, below the reach of the sun’s warmth and light, deep water corals thrive. Standing tall in front of the black backdrop of their environment, colorful corals and deep sea organisms are part of my inspiration to bring deep sea exploration live to classrooms.
Classrooms Under the Sea: Manned Submersibles as Ocean Teaching Tools
A manned submersible expedition is underway using ‘Google+ Hangouts’ to bring exploration into the classroom. | <urn:uuid:ae3560ca-2a48-431c-a9e7-8df2e592a88a> | 3 | 2.671875 | 0.023516 | en | 0.885003 | http://voices.nationalgeographic.com/author/ebergman/ |
Nonasymptotic lossy compression
Victoria Kostina
Graduate Student, Princeton University
Given on: April 1st, 2013
The fundamental problem of lossy compression is to represent an object with the best compression ratio (rate) possible while satisfying a constraint on the fidelity of reproduction. Traditional (asymptotic) information theory describes the optimum tradeoff between rate and fidelity that is achievable in the limit of infinite length of the source block to be compressed. Since limited delay is a key design requirement in many modern compression and transmission applications, we drop the assumption of unbounded blocklength and analyze the optimum tradeoff among rate, fidelity and blocklength achievable, regardless of the complexity of the code. We have found that the key random variable that governs the nonasymptotic fundamental limit of lossy compression is the so-called d-tilted information in x, which quantifies the number of bits required to represent source outcome x within distortion d. If we let the blocklength increase indefinitely, by the law of large numbers the d-tilted information in most source outcomes would be near its mean, which is equal to Shannon's rate-distortion function. At finite blocklength, however, the whole distribution of d-tilted information matters. Interestingly, in a seemingly unrelated problem of channel coding under input cost constraints, the maximum nonasymptotically achievable channel coding rate can be bounded in terms of the distribution of the random variable termed the b-tilted information in channel input x, which parallels the notion of the d-tilted information in lossy compression. Finally, we have analyzed the nonasymptotic fundamental limits of lossy joint source-channel coding, showing that separate design of source and channel codes, known to be optimal asymptotically, fails to achieve the non-asymptotic fundamental limit. Nonasymptotic information theory thus forces us to unlearn the lessons instilled by traditional asymptotic thinking.
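As a pointer for readers unfamiliar with the term (this is a sketch of the standard definition from Kostina and Verdú's finite-blocklength papers, not wording from the talk itself), the d-tilted information can be written in LaTeX as

\jmath_X(x,d) = \log \frac{1}{\mathbb{E}\left[\exp\left(\lambda^* d - \lambda^* \mathsf{d}(x, Y^*)\right)\right]}, \qquad \lambda^* = -R'(d),

where Y^* is distributed according to the output distribution that achieves the rate-distortion function R(d), \mathsf{d}(\cdot,\cdot) is the distortion measure, and the expectation is over Y^* alone. Its mean recovers the asymptotic limit mentioned above: \mathbb{E}[\jmath_X(X,d)] = R(d).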
Victoria is finishing up her PhD at Princeton University. She is working with Prof. Sergio Verdú on nonasymptotic information-theoretic limits. Previously, she received a Bachelor's degree from Moscow Institute of Physics and Technology and a Master's degree from University of Ottawa. | <urn:uuid:5cecc801-2127-447c-8f59-15a5c280cc0a> | 2 | 2.078125 | 0.077638 | en | 0.899749 | http://web.stanford.edu/group/it-forum/colloquium/colloquium_kostina.html |
Shortcut to Lock Computer on Win2k
Tim posted a list of handy Windows keyboard shortcuts. Someone pointed out in his comments that some of these don't work on all versions of Windows, such as Win + L which will lock your machine on XP & 2003, but not 2000.
Fortunately, this particular shortcut is easy enough to fix! Create a new shortcut on your desktop, hard drive, or elsewhere. Set this shortcut to run “rundll32 user32,LockWorkStation”. Once the shortcut is created, right click it, go to properties, click in the Shortcut Key text box, and press Win + L. Win + L will now lock your machine.
EDIT: Raymond Chen tells me I shouldn't be doing this rundll32 trick. I've never had it fail on me, and I don't exactly understand what would cause it to, so all I can say is that while this trick works, one of the gurus behind Windows says don't use it. So use the LockWorkStation entry point at your own discretion.
• You are a bad man! Thanks for the tip.
• Please don't do this. The LockWorkStation function was not designed to be run via Rundll32. To be run via Rundll32 a function needs to match a very specific function signature, which LockWorkStation doesn't. As a result, the stack is misaligned on return and what happens next is anybody's guess.
I'm severely tempted to fix Longhorn so it enforces the function call signature strictly for Rundll32 - people are abusing it pretty badly.
• Perhaps Raymond or someone could make a constructive suggestion as to what to use instead of rundll32 then.
If something is broken (win2k), and a workaround is found (rundll32), either fix the original problem or get an acceptable workaround. Don't just say "don't do it", and then try to stop people.
Raymond is not addressing the real problem.
• // Use this instead:
#define _WIN32_WINNT 0x0500
#include <windows.h>
int main()
{
    LockWorkStation(); // call the documented entry point with its real signature
    return 0;
}
• Also pushing CTRL+ALT+DEL then pushing enter locks your computer in win2k. Which is much easier and faster to type than WINKEY+L in my opinion.
• I tried to create the shortcut and it wouldn't allow me to add the keyboard shortcut "Win + L", but it worked fine. Also, if you are not supposed to do this but it worked, could anything serious go wrong with the system by using the rundll32 shortcut.
• Maybe a valid command line workaround would be a better idea?
As funky as a little bit of code is, that isn't practical for your average end user.
Personally, all I want is to have one of my keyboard keys programmed to lock the workstation. Faster then winkey+L, faster then ctrl-alt-del L, faster then anything else.
• Clearly Raymond Chen doesn't know what the heck he's talking about since this above link to the microsoft support site essentially recommends rundll32 user32,LockWorkStation.
Then again, MS Support could be wrong too... It wouldn't be the first time.
In any case, nice trick! I like it.
• I can't speak for anyone else, but I would tend to believe Raymond Chen over Windows Support. Who was it that actually works on Windows again? ;)
I want to know how to enable autolock in Win2000 Professional.
Here it is years later and the non-MS-recommended way of doing it is still all that's available? Someone wrote a little hack to do it, but I tend to be suspicious of things like that.
Comments have been disabled for this content. | <urn:uuid:f54551e4-59ca-4294-82f8-4589f8576be4> | 2 | 1.828125 | 0.230923 | en | 0.93812 | http://weblogs.asp.net/bdesmond/43016 |
MySQL ships with four predefined accounts: root@localhost, root@%, @%, and @localhost. The MySQL administrative user uses the root@localhost and root@% accounts to create new users, databases, and so forth. The @% and @localhost accounts are used for what MySQL terms anonymous connections. When users don't supply credentials, MySQL uses the anonymous connections to grant access. While you're learning MySQL, you might want to keep these connections active. However, they represent a security risk, so be sure you delete the anonymous connections in a production network. To delete the connections, double-click User Administration in MySQL Control Center (MySQLCC), right-click the @% and @localhost accounts, and click Delete User.
While you're in the User Administration menu, you might also want to change the root user's password, which is blank by default. To do so, right-click the root@localhost and root@% accounts, click Edit User, and update the password.
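If you would rather script the same cleanup than click through the MySQLCC GUI, the equivalent SQL can be issued from any client. Here is a sketch using the Python MySQLdb driver; the connection parameters and 'new-password' are placeholders, and the SET PASSWORD form shown is the old 4.x/5.x syntax that matches the MySQLCC era:

import MySQLdb

# Connect as root; the password is blank by default until you change it.
conn = MySQLdb.connect(host='localhost', user='root', passwd='')
cur = conn.cursor()

# Drop the anonymous accounts (they are the rows with an empty User value).
cur.execute("DELETE FROM mysql.user WHERE User = ''")

# Give both root accounts a real password.
cur.execute("SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new-password')")
cur.execute("SET PASSWORD FOR 'root'@'%' = PASSWORD('new-password')")

# Make the privilege changes take effect immediately.
cur.execute("FLUSH PRIVILEGES")
conn.close()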
Rudolf Steiner Archive
From the Contents of Esoteric Classes
Esoteric Lessons Part II: Stuttgart, 2-12
102. EL, Stuttgart, 2-'12
In our last few lectures we learned that our whole existence is guided by high beings who each in their own way work at world becoming and at our special human features. If we want to connect ourselves with them through concentration and meditation we must fill ourselves with a feeling of humility that can't be compared with the humility that we have in daily life, for this feeling of humility stands too high above every human comprehension, when we connect ourselves with these sublime beings who are also our teachers in the spiritual world. Later on, a man is able to distinguish between real beings and forces that radiate from within him. One can feel in one's heart whether what's seen comes from higher worlds or from within one; it goes through the heart with a warmth and excitement that radiate into it from the cosmos. For the heart is connected with Leo and the sun, and the warmth of these forces participates in spiritual vision.
Now what does it mean to be an esoteric? A man is placed in his karma through all phases of his earth existence. It's impossible for him to escape it, for the consequences of his feeling, thinking and especially of his deeds follow him irrevocably through all of his incarnations, be it sooner or later. He must eradicate the wrongs that he did here on earth, depending on the circumstances into which he's put through his incarnation. Divine guidance sees to this. Before a man takes his own development in hand, everything goes according to regulated laws that nothing can accelerate. But if he begins an esoteric training something quite different happens to him. He frees himself from guidance, takes his development in hand and becomes a different man qualitatively. Through what? Things that he previously thought were desirable mostly lose their value for him, his views and attitudes change, and he sees that he often acted unsympathetically in the past. His feeling of responsibility now becomes much more subtle, and he tries to make his wrongs good in every direction, no matter how many outer and inner sacrifices it may cost him. The meditation and other exercises that are given to an esoteric transform his etheric body through daily repetition, assuming that he experiences them in the right way, that is, with the right feelings and through pictures that arise within. Thereby the etheric body gradually separates itself. After these exercises have been done patiently and by giving up one's whole existence for a short time each day, something wonderful will be faintly noticeable to the man on awakening which he can't express in words, for it's a very delicate feeling of an experience in the spiritual world from which he's just returned. After a while, he sees colors rising before him in which forms take shape, and something quite unlike what he's used to seeing confronts him. At the beginning of spiritual development the things that appear are similar to things in our daily environment, and they often radiate out of our soul as the latter's qualities — so we shouldn't take them to be spiritual experiences right away. One should emphasize that esoteric training doesn't just make a man better. A man may have moral virtues and be ever so intellectually developed, and yet have disharmonious, bad qualities hidden in his soul that are usually varnished over by conventional morals. A man is really worse than one usually thinks. When a man takes his esoteric development in hand, his vices inevitably appear, and here an esoteric must use his whole strength to master them; he brings up his karma and accelerates it through his development. Let's understand this well for we've entered on another life's path; we've now become companions of our sublime spiritual guides who previously directed us, for now we direct ourselves and also take full responsibility for this.
People often say that it's nothing but egoism if a man wants to develop faster than his fellows. But that's not so. As soon as we realize that we have a divine origin and that we must develop ourselves up again to the primal source of our existence, to divinity, then it's even a sin of omission if we say: I don't want to participate in the Godhead, it'll lead me to the goal someday.
There's a lot of intellectual arrogance in a statement like that, for the Gods have laid the germs of our spiritual capacities in us, and when we're aware of this it must be our duty not to let these forces lie fallow or to leave their germination to the general stream of development. We must take the unfolding of our spiritual organs in hand ourselves, we must no longer let ourselves be led — we must become companions of our leaders. It's a difficult path. There can be no question of egoism here for we have duties with respect to the leaders who've previously shown us the path.
Tuesday, September 28, 2010
Parents insuring their kids till they are 26? A bad message to send.
Last week, the first aspects of Obamacare went into effect. Most of America, including myself, is dreading the day, a couple years from now, when the bulk of the bill actually goes into effect (assuming it passes the constitutionality test it faces in the Supreme Court), but the "patients' bill of rights", as it is called, aspect of the legislation, which is what went into effect last Thursday, is the only part of the legislation I actually liked - except for children being allowed to stay on their parents' health insurance until the age of 26...
I am a Gen X'er, and, as I was taught, one of the early steps that I had to take in becoming an adult, was in getting a job which provided me with my own health insurance. It was at this point that "mommy and daddy" no longer had to "take care of me" and that I could not only provide money for myself, but was self sufficient enough that I could also provide for my own health when needed. What concerns me is the message we are sending to our future generations by saying they don't have to worry about health insurance until well into adulthood.
Becoming an adult is all about personal responsibility and achieving independence, and a big part of that is being able to provide for your own health. Human nature tends towards laziness (unless nurtured otherwise at an early age); by giving the option to not need health insurance until 26 years of age, we are nurturing that part of humanity that is detrimental to a healthy, vibrant and successful society; and, in a small way, we are telling children that adulthood can wait. This in turn could breed a much broader lack of responsibility regarding personal choices like drug use, sex, and money management.
It may also cause social conflict within children once they reach the age of 18 - when the law considers them an adult: we have seen for decades the conflict that 18-20 year olds have when they consider they are old enough to vote and die for our country in combat, but not old enough to enjoy a beer and burger with their friends and family. What are they to think about their place in life when the need to be independent at the "legal" age of adulthood is removed? Are we to expect them to be ready for the responsibility of raising a family, owning and taking care of a home, and managing a household and career, when all the little steps at being prepared for such things are removed or pushed later and later into adulthood?...
We can hope that this legislation, 20 years from now, will not lead to such social and personal strife and inevitable economic stresses; then again, we were told in the mid 20th century that Welfare wouldn't produce an entire segment of society dependent on government hand outs for their existence.
If the oceans get warm enough, the largest species of Pacific salmon -- commonly known as kings or chinooks -- could face a "catastrophic" population loss by the century's end, according to a new study published in the journal Nature Climate Change and reported in Toronto's Globe and Mail.
The study, explains the Vancouver Sun, examined juvenile king salmon’s ability to adapt to increasingly warmer water temperatures and found that heart rate increased with temperature until, at 24.5 degrees Celsius (76.1 degrees Fahrenheit), the fishes’ hearts could no longer beat faster and slowed or became arrhythmic.
Based on those findings, the study concluded that under “average” scenarios projected for warming there was a 17 percent chance of catastrophic population loss (though that figure rises to 98 percent in worst-case warming scenarios).
Alaska Dispatch News | <urn:uuid:0dcd9198-c885-448c-9211-1f0fd24be5f3> | 3 | 3.3125 | 0.045912 | en | 0.910242 | http://www.adn.com/section/alaska-beat?page=1 |
Not long after religious nationalists held a rally in Bat Yam under the banner of "Jewish girls for the Jewish people," a group of rabbis' wives published a letter urging Jewish women not to date Arab men.
While her parents know and have met Rona's boyfriend, Rona says that she is at a point where she is "actively lying" to the rest of her family.
"I don't know how to articulate how they'd react, "Rona says. "I think that my aunt and uncle know that there is someone ... and they definitely know that he's Arab. But it's more about my grandmother and her sisters and the older generation. It's like if [I] were to bring home a mass murderer."
She laughs nervously and continues.
"It just doesn't happen. It's like: 'Bring home somebody who is a total loser, but don't bring home an Arab.'"
Rona describes her parents' political views as "moving more left but kind of traditional," adding, "my mum always says that she thinks that the occupation of Gaza and the West Bank in 1967 was a mistake and that [Israel] should have returned the territories."
"There was a period of time I was hiding it for convenience's sake. I just wanted to enjoy my life and not be harassed."
When she did talk to her parents about her boyfriend, who is a non-practicing Muslim, they sidestepped the issue of his race, focusing instead on "cultural differences".
"I was like, 'What are you saying? That he's going to come home one day and want me to put on a hijab? Do you know what the cultural differences are?'" Rona recalls. "So I took immediate offense to this concept. I thought it was racist from the get go."
Her parents also objected to the relationship because "it would be so difficult for us to live here together," Rona says, due to the widespread discrimination they would face.
She describes the first time her parents met her boyfriend as "awkward".
Rona says that she has not felt any racism coming from her boyfriend's family. But, because of the political situation, there are moments when she feels a divide between them.
She was living with her boyfriend when Operation Cast Lead began in December, 2008. Her boyfriend's mother, whose sister lives in the Gaza Strip, happened to be visiting when the war began.
"We were watching the news and they were showing the first strikes, the air attack," Rona recalls. "His mum was screaming and crying and cursing the army and the Israelis and the Jews and everyone and I was standing there like 'I don't know what to do.' On the one hand, I wanted to show her that I care. On the other, does she now want an Israeli Jew to put her arm around her? But I did."
History of mixed marriages
Iris Agmon, a professor in Ben Gurion University's department of Middle East studies, says: "In the Ottoman sharia court records one can find women whose nicknames hint to the fact that they are converted Muslims." And some of these women were probably Jewish.
After Ottoman rule ended, the British mandate also saw such couples. Deborah Bernstein, a professor in the University of Haifa's department of sociology and anthropology, says that although there is no "systematic documentation or even discussion of the subject ... it is clear that such a phenomena did exist". She found family stories of these couples while researching her Hebrew-language book about women in mandatory Tel Aviv.
Bernstein also discovered "archival welfare documents," pointing to such relationships. "For example, [one referred to] a [Jewish] woman leaving her husband and children and going to live with an Arab man."
Bernstein adds that the Jewish community was "very strongly opposed" to "mixed marriages".
"This was the case in [Jewish immigrants'] countries of origin," Bernstein says, explaining that the opposition to mixed marriages took on an "additional national element" in Israel.
But, sometimes, protests against such relationships ran the other way - leaving a lasting impact on generations to come.
The Palestinian grandson of such a marriage lives in a neighbouring Arab country. According to Jewish religious law, he is not Jewish. While, technically, many of his cousins are Jewish, they do not know it - their grandmother's conversion is a strictly-guarded secret, shared with only a few members of the family.
"The first song I learned to sing was shir l'shalom [song for peace]. We've gone to demonstrations since I was a toddler. So I was always on the left," he explains, "but I never knew any Palestinians."
"[Society] is built in a way that doesn't help relationships," Salma says. "Everything is segregated. The educational systems are separated ... People don't meet. And if they do meet, they meet under unusual circumstances, like at a demonstration."
Even though both Alex and Salma grew up in liberal homes, the two were no exception - it was activism that brought them together.
"You know, we sort of chose our lives," Salma says. "I can't be friends with racist people so it's easy to avoid. But I think if we would have gone out to more parties we would have faced more problems."
Still, things are only "relatively simple".
Alex recalls running into a friend from school who made a racist and obscene remark about his relationship with Salma. And one of Salma's closest childhood friends stopped speaking to her when she joined a Jewish-Arab group that advocates for a bi-national solution to the conflict.
"I think it comes out more than that," Alex adds.
Salma nods and begins to explain: "I have one sister who got married last summer. She knows Alex and his family very well, so she wanted to invite [them] ..."
She pauses and, a bit like an old married couple, Alex picks up the thread and continues: "And the oldest sister says, 'What are you going to invite all of your Zionist friends?'"
There is a flicker of hurt on Alex's face as he remembers. "Now, this comes out of nowhere. I refused [mandatory military service]," Alex says. "I'm definitely not a Zionist. I refused and my parents aren't Zionists."
Alex emphasises that he maintains a warm relationship with Salma's oldest sister and that her remark came during an emotional argument. But, Alex says, the incident pointed to something that "can't be completely erased ... that the relationship can't be normalised. It always has to be politically justified."
What do such tensions say about Israeli society?
"Nothing good," Alex answers.
"I think the hatred is becoming more and more explicit," Salma says, pointing to the rally in Bat Yam and the rabbis' wives' letter as two examples. "It's 'don't take our girls' ...."
Source: Al Jazeera | <urn:uuid:9e906826-6702-4c2e-b938-9adf00f7078f> | 2 | 1.65625 | 0.023153 | en | 0.982114 | http://www.aljazeera.com/indepth/features/2011/01/201112912322207901.html |
Word! Sclera
(En español: Esclerótica)
Say: sklair-uh
The sclera is the white part of your eye. It's a tough, protective covering and the muscles that control eye movement are connected to it.
Hernias: Umbilical - Inguinal & Diaphragmatic Surgeries performed at American Animal Hospital
Hernias are very common in human medicine, especially in males. Hernias in general are a weakness or opening within a muscle mass that allows other tissues to pass through. In men, they are usually inguinal hernias, which are found in the groin area where there is a tiny natural opening within a band of muscle. When a hernia occurs here, the opening enlarges and the intestines from the abdominal area pass through it, producing a swelling immediately under the skin.
Types of hernias in dogs and cats
In pets, there are also hernias involving the muscles that surround the abdomen and they are commonly found at two locations. The first site would be in the groin area on the inner surface of the rear leg - an inguinal hernia. The second site would be the 'belly button' where the umbilical cord had connected the puppy to his mother. A hernia at this location is called an umbilical hernia. In both cases, abdominal organs such as the intestines or fat pass through the opening and lie just beneath the skin.
Another common hernia site in pets involves the internal muscle that separates the abdomen and chest. That muscle is called the diaphragm, so the hernia is therefore referred to as a diaphragmatic hernia. The intestines and other abdominal organs (such as liver and stomach) are able to pass through the opening within the diaphragm into the chest cavity. There they take up a portion of the space normally occupied by the lungs.
A hernia is, therefore, usually nothing more than an abnormal opening in a muscle through which other tissues of the body pass.
Consequences of hernias
The idea that a section of intestine or other structure might slip through one of these openings and move under the skin or into a different body cavity (such as the chest) does not seem like a big problem. However, in many cases, a hernia that goes untreated can have a fatal outcome. Usually, the problems that occur are not caused by the intestines or other organs being in an abnormal position or from the displacement of other tissues that are supposed to be there. Rather, in most instances, a problem arises when the blood supply of the herniated tissues is affected.
Figure #1 shows a hernia that involves the abdominal wall and a section of the small intestine. A portion of the intestine has slipped through a small hole in the muscular wall. This is exactly how an umbilical hernia appears. Notice the stricture (abnormal narrowing) of the intestine itself. This could easily prevent the passage of food through this section of the intestine, effectively causing an obstruction or blockage. This would certainly lead to the death of the animal, if it were not treated. More importantly, however, please look at Figure #2. This shows a close-up of the intestinal wall as it passes through the hernia. Notice how the blood vessels are twisted and constricted. Blood will not be able to flow back and forth from between this portion of the intestine and the rest of the body. It means that the section of intestine that has passed through the hole in the abdominal wall will lose its blood supply. It will be deprived of oxygen and nutrients and when this occurs, it dies.
The symptoms associated with a hernia, like the one pictured in Figure 1 and 2 may initially relate to the inability of food to pass through this constricted section of intestine. Muscles within the wall of the intestine are responsible for moving food and water through the organ. Waves of contractions called peristalsis propel the contents along the length of the intestine. When an obstruction is encountered, like the one described, the peristaltic waves reverse direction and move the food backward through the entire digestive tract. This results in food and water being vomited. After this portion of the tract has emptied, the animal usually goes off food and refuses to eat. They may still drink water because liquids might be able to pass through the restricted section of the intestine or be absorbed prior to that point.
Once the blood vessels are affected, however, the clinical signs change drastically. The area will become swollen and painful. Without adequate oxygen and nutrients, the intestinal tissues initially develop cramps just like your leg does when you cross it and it 'goes to sleep.' And if the flow of blood is completely lost, cell death occurs. The pain then becomes severe. The animal will probably develop a fever, become lethargic, and go completely off food and water. As these tissues break down, the toxins from bacteria that normally live in the intestine make their way into the rest of the animal’s body. As the tissue dies, the affected area turns into an abscess and many different harmful metabolic waste products are flushed throughout the animal’s body. All of these substances (bacterial toxins and metabolic waste products) seriously affect the various organ systems of the body. Liver and/or kidney failure are quite common in these situations. Without treatment, the animal will usually die within 24 to 48 hours.
As an owner, do not take a hernia in your pet lightly. In many cases, they are disasters just waiting to happen. Do not buy a puppy that has a hernia unless you have a veterinarian examine it so you will understand what treatment is necessary and the potential cost. Some hernias found in young dogs can wait for repair until the time they are spayed or neutered. In older dogs, we generally repair a hernia as soon as possible once they are discovered.
Hernias are surgically repaired by replacing the herniated (displaced) structures back into their correct position and then suturing closed the abnormal openings. This often requires the use of specialized techniques and long-lasting suture material. We frequently perform this surgery and most pets recover without complications.
Hereditary potential
As a note, umbilical hernias in puppies are a genetic or congenital defect in over 90% of the cases. The disorder is passed from generation to generation just like the color of the coat or the animal’s overall size. Very, very rarely are they caused by trauma or excessive pressures during whelping. Animals that have a hernia or had a surgical repair of a hernia should never be used for breeding. Additionally, those adults that produce puppies with this condition should not be bred again.
Bicycle Generator Overview
This page provides an overview of Bicycle Generators, including the common types and their relative advantages and disadvantages.
Common Features of Bicycle Generators
All bike generators that I have ever seen have had several common traits. They all put out AC current, and all can produce 3 - 6 watts of power. Most bike generators are designed to produce a 6V output voltage, and a few are designed for 12V. All bike generators 'saturate' at a certain output current, in order to avoid frying whatever they are powering. This is done by limiting the amount of ferrous metal surrounding the windings in the generator. (Don't ask for any more details, please). Most bike generators, when used on an average sized bicycle wheel, will reach their full output power when going around 12 - 15 miles per hour. The saturation trait gives them the slightly odd feature that you can (theoretically) make them put out any output voltage you want, at up to whatever the saturation current is. The down side to this is that you have to spin them really fast get higher voltages, when you have any kind of load hooked up. (Faster than you could easily get going on a bicycle). For a hypothetical 3 watt, 6 volt generator and a load that somehow limits the output voltage to 3 volts (say, a battery that is being charged) then the maximum power that you can get out of the generator, due to the saturation current, is (3 volts) x (1/2 amp) = 1.5 watts. If your battery is 6V, then the generator is limited to 6 volts, but the saturation current is the same, so you can get 3 watts out of the generator. With higher voltage batteries, you can get even more power from the generator. (Charging a battery will not provide any load to the generator until the generator voltage surpasses the battery voltage, and after that the load increases quickly for small increases in generator voltage) If the same hypothetical generator has a resistive load (say, a light bulb with a fixed resistance of 12 ohms) Then when the light bulb has 1/2 amp flowing through it, it will have 6 volts across it, and so the generator, which cannot supply more than 1/2 amp, will be limited to 6 volts.
The Unregulated Battery Charging Circuits I describe take advantage of the fact that you can get some power out of a bike generator when it is going slowly, but its maximum power output goes up if you increase the limiting voltage. That is why the third circuit (the one on my bike) is designed to switch from a 3.6V limited circuit to a 7.2V limited circuit as soon as the generator is going fast enough to put out more than 7.2V. The maximum power output of the generator is higher with the higher limiting voltage, so the overall circuit efficiency goes up. There is no point in trying to switch to a yet higher limiting voltage, because I could not sustain the speed on my bike to make it useful.
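To make the saturation arithmetic concrete, here is a small Python sketch; the 0.5 amp saturation current and the voltage steps are the hypothetical figures used above, not measurements of any particular generator:

# Maximum power from a saturation-limited generator: P = V_limit * I_sat.
I_SAT = 0.5  # amps, the hypothetical saturation current from the example above

def max_power(limiting_voltage):
    # Power available when the load clamps the generator at limiting_voltage.
    return limiting_voltage * I_SAT

for volts in (3.0, 6.0, 3.6, 7.2):
    print("%.1f V limit -> %.1f W maximum" % (volts, max_power(volts)))

This reproduces the 1.5 watt and 3 watt figures above, and shows why switching the limit from 3.6V to 7.2V roughly doubles the power the circuit can harvest.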
There are three main kinds of bicycle generators. I will describe pros and cons (as I see them) of each below. The three kinds are: Hub Generators, Sidewall Generators, and Drum Generators.
Hub Generators
This is, in my opinion, the best kind of bicycle generator. This is the kind that I have on my bicycle. This type of generator is built into the hub of a bicycle wheel (usually the front). These are by far the most elegant, and the most efficient (no friction to overcome between a roller and tire, like the other two kinds of generators). These are also impervious to dirt, unlike the other generator types, since they are away from all the muck the tire goes through. This kind is by far the most reliable. This type also will not slip when the tire is wet, and cannot damage the tire. The disadvantages of hub generators are that they are heavy, harder to install (you must build a custom wheel) and they tend to be a little less powerful than the other types. Hub generators are by far the most expensive. (A used dynohub will cost around $40 - $50 and you still need to pay to build the wheel.) I do not know if Sturmey-Archer still builds dynohubs. I have seen dynohubs with manufacture dates from the early '50s up to the middle '70s. I would not recommend heavy use of a dynohub built before the '60's due to poor metallurgy. My dynohub was in new condition when I got it, and now has tens of thousands of miles on it. It was built in 1965. Today, modern hub generators are made by Shimano and Sachs, but these are quite expensive and still relatively hard to find. There are other hub generators, (bendix made one back in the 50's) but I have never seen a serviceable example.
Sidewall Generators
This is by far the most common type. They are shaped like a little barrel with a small roller that runs against the side of your bike tire. These are small, but powerful for their size. They are reasonably reliable, but you have to work a bit to keep them clean and oiled, especially if you ride when it is wet. They will wear out after a few hundred or a thousand miles of use, but they are easy to find and replace. The problems with this kind of generator are that they can damage your tire, and they will only work if your tire has a surface that the wheel can run against, and they can slip, and they are noisy. They are also kind of ugly. They do have another small advantage over Hub generators, in that they can be disengaged from the wheel. (Though the extra drag from a hub generator is totally unnoticeable.)
Drum Generators
These are basically like sidewall generators, except that their roller is designed to contact the top of the tire instead of the side. They are less likely to damage the tire. They are also a little less noisy, since the roller is larger diameter. They are a little more discreet than sidewall generators are. Unfortunately, they will get completely clogged with dirt and die, if run under anything other than perfect weather and paved streets. I have never had much luck with drum generators for that reason. These are a lot less common than sidewall generators, but still easier to find than a hub generator.
Vs. Batteries
Most bike lights these days run off of batteries. Fancy battery systems can be good for up to 40 watts light output, but they will run down. (And the batteries for these systems are usually big and heavy, too.) Many 'fancy' battery systems have a couple of hours of run time, or less. In my experience, a well-focused 6 watt headlight is sufficient for most circumstances, and with the proper generator, you can run after dark indefinitely (albeit with a noticeable drag penalty). With a 1.5 amp-hour (the capacity of average Ni-Cad C cells), 6V battery, you can run the same headlight for 1.5 hours. (Multiply the amp-hour capacity of the battery by the voltage, and then divide by the headlight wattage to figure how long it should run.)
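As a sketch of that rule of thumb in Python (same numbers as the example above):

def run_time_hours(amp_hours, battery_volts, lamp_watts):
    # Runtime = stored energy (Ah * V, i.e. watt-hours) divided by lamp power (W).
    return amp_hours * battery_volts / lamp_watts

# 1.5 Ah of Ni-Cad C cells at 6 V driving a 6 W headlight -> 1.5 hours.
print(run_time_hours(1.5, 6.0, 6.0))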
Finding a Generator
There are new bike generators available, but most are fairly poor quality (that statement does not apply to new hub generators). It is not hard to find older, but better generators in good condition by looking around secondhand bike shops (any big urban area will have some) or thrift stores. Usually they have barely been used, and just need oil and a good cleaning. You can tell by how worn the roller is, or just by how much dirt is on them. For a hub generator, if the axle turns freely, but is not sloppy, and it does not make any bad noises, it should be OK. Make sure to thorougly clean and repack its bearings before using it, if it is used. | <urn:uuid:6f5af595-eb01-4042-b28a-3eaa203ea4e9> | 3 | 2.8125 | 0.158063 | en | 0.958195 | http://www.amphibike.org/index.cgi?page=pages/3_bikes/monocog/genOverview |
Kino is a young girl, visiting different lands alongside her trusty talking motorcycle, learning various things... both about the lands, and about herself.
Kino is a teen on a talking motorcycle called Hermes in a picaresque set of new places-of-the-week, drawing on stories from Pilgrim's Progress to The Littlest Hobo. Kino's encounters are surreal fables, loaded with repetition and allegory. Encounters often involve experiments with utopia that have gone strangely awry, such as a society where telepathy has been enforced upon the populace to foster harmony, but instead causes the opposite effect. Other meetings are framed as fables, such as three men, each unaware of the others' existence-one polishing railway tracks, the other ripping them up, and a third laying new tracks down, in a satire of corporate waste. A state adopts total democracy, only to collapse into mob rule; another gets robots to do all the work, leaving its populace idle. A nation declares itself to be the repository of all the world's books, but then censors its publications so mercilessly that there is little in the library but technical manuals and children's books. Two rival countries sublimate their warring impulses into sporting matches instead of war, but sporting matches with deathly consequences.
Some storylines stretch across more than one episode, such as the "Coliseum" arc, in which Kino must escape a gladiatorial arena where citizens fight for the right to make laws. In any other anime this would be an excuse for prolonged combat, but while Kino does eventually start shooting, the plot remains thoughtful. In any other anime, this would also be pretentious, obfuscating nonsense, but Kino's symbolism has an ultimate purpose. Based on a series of novels by Keiichi Sigsawa, with art by Kuroboshi, originally serialized in Dengeki magazine, it is difficult to discuss the story of KJ without giving away one of its secrets: Kino is actually a girl. Although not a plot point of major importance, not referenced in many episodes, and less of an issue in non-gender-specific Japanese dialogue, this fact had a palpably damaging effect on the way the show could be sold abroad. On DVD in English, KJ has struggled to reach new audiences with carefully non-committal press releases and box blurbs.
The surprise is deadened somewhat by Kino's androgynous look, and by the fact that the voice is provided, like so many anime boys, by a female actress. She is traveling in imitation of Hermes' previous owner, the original Kino, a male traveler who stopped briefly in her homeland. Like the many utopias through which she passes, it was a flawed paradise, a country where children receive a neural modification before puberty that turns them into contented, compliant adults. Our Kino takes up the questing mantle of the original after he dies protecting her from her parents, who want her to undergo the same operation.
Screened on late-night television in Japan, KJ seems designed to provoke thought and debate, its surreal encounters often scripted by Perfect Blue's Murai, its sparse direction often by Serial Experiments Lain's Nakamura. The pale colors are so painterly that the screen frequently gains canvas textures, the images so superfluous at times it's practically radio rather than animation. Its wandering protagonist is an everyman for the teenage audience, a living symbol of their own search for meaning and belonging in an inner world whose rules are eternally shifting. Many, if not most, anime are about the trauma of growing up and finding one's place, but KJ breaks new ground in its use of magic realism to convey the idea. That's not to say it does not have its inspirations, but it seems rooted in the "soft" SF of the New Wave, such as J.G. Ballard, or the poetic allegories of Ray Bradbury, rather than the "hard" SF that informs so many other anime storylines. The result is beautiful and remarkably restful after the frantic attention grabbing of some contemporary shows with their excess of flash and bounce, but it's more like meditation than entertainment.
A short prequel disc, Kino's Journey-Totteoki no Hanashi, was issued with a booklet in Japan in 2003, which included a "visual version of the novel" entitled To no Kuni-Freelance, and trailers for the then-upcoming series. The 2005 movie, Kino's Journey-Life Goes On (Kino no Tabi-Nanika o Suru Tame ni) is a prequel which shows the protagonist, wracked with guilt about the death of the real Kino, being trained by her teacher. After being directed to seek out Kino's mother, she sets off on her journey, framing the rest of the season of KJ as an homage to From the Apennines to the Andes-"3000 leagues in search of someone else's mother."
Series Credits
Sadayuki Murai
Ryutaro Nakamura
Ryo Sakai
Shigeyuki Suga
Original US Poster Art
General Information
Name: Kino's Journey
Name: キノの旅
Romaji: Kino no Tabi
Publisher: ADV
Start Year: 2003
Aliases: The Beautiful World
Kino no Tabi
| <urn:uuid:b4a2ca74-5299-48de-8fcd-3d5e402c137c> | 2 | 1.664063 | 0.089279 | en | 0.94904 | http://www.animevice.com/kinos-journey/11-4475/ |
TITLE: A TUTOR FOR TEACHING ENGLISH AS A SECOND LANGUAGE FOR DEAF USERS OF AMERICAN SIGN LANGUAGE AUTHORS: Kathleen F. McCoy and Lisa N. Masterman COMMENTS: In Proceedings of Natural Language Processing for Communication Aids, an ACL/EACL '97 Workshop ABSTRACT: In this paper we introduce a computer-assisted writing tool for deaf users of American Sign Language (ASL). The novel aspect of this system (under development) is that it views the task faced by these writers as one of second language acquisition. We indicate how this affects the system design and the system's correction and explanation strategies, and present our methodology for modeling the second language acquisition process. | <urn:uuid:1e03d72b-a333-4c7d-b41f-3dacfb3d1c66> | 2 | 1.921875 | 0.06772 | en | 0.868922 | http://www.asel.udel.edu/nli/pubs/1997/McCoMast97.abs |
Get a head-start on tax
Investors should consider these eight tax issues in the lead-up to June 30.
By Ali Suleyman, Pitcher Partners
Albert Einstein once said "the hardest thing in the world to understand is the income tax." Although this might provide some cold comfort for investors worldwide, it remains prudent to conduct a regular review of your tax affairs, particularly as June 30 approaches.
This article provides information on eight key tax issues that individual investors should consider.
1. Investor or trader definitions
Many shareholders routinely assume that transactions of their listed securities result in a capital gain or loss. However, shares can be held for either investment or trading purposes, and the treatment of gains and losses under either circumstance can differ markedly.
A capital gain or loss typically arises on the sale of a share held as an investment, and this treatment applies to a person who invests in shares (a share investor) with the intention of earning income from dividends and similar receipts. A share investor is someone who is not carrying on a business of buying and selling shares and also does not have the requisite profit-making intention in acquiring shares. Such investors will generally have a long-term outlook in holding shares.
In contrast, a share trader is generally someone who carries out business activities for the purpose of earning income from buying and selling shares. They will generally derive income (not capital gains) from the sale of shares, and their purchased shares would be regarded as trading stock, or part of a profit-making enterprise.
There are many different pointers for identifying a share trader compared to a share investor, including the nature and scale of the investment, the investment style, the length of time the assets are held, and the repetition and regularity of buying and selling. It is important that shareholders review this classification of activities annually to ensure it remains appropriate.
The classification makes a number of key distinctions. For example, a share trader will not qualify for the 50 per cent capital gains tax discount, explained below. For share investors, capital losses can only be used to reduce capital gains, whereas revenue losses by share traders can be applied against any income or gain.
2. Capital gains tax (CGT)
Where an individual is a share investor, it is important to consider CGT implications before the end of the tax year. The tax system generally imposes CGT on those profits that have been realised. Similarly, capital losses will not arise until the losses have crystallised by realisation (or a declaration by a liquidator or administrator that the shares are worthless).
The timing of a capital gain (or loss) can mean significant after-tax cash differences. An individual qualifies for the 50 per cent CGT discount (reducing the capital gain subject to tax by half) when they have owned the relevant asset for more than a year. Of course, the decision to hold a share also requires consideration of the commercial risks.
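To make the arithmetic concrete, here is a minimal worked sketch of how the discount and the holding period interact. It is illustrative only: the dollar figures, the flat marginal rate and the cgt_on_sale helper are hypothetical assumptions rather than an official calculation, and real outcomes depend on your full tax position.

# Hypothetical sketch of the 50 per cent CGT discount for an individual investor.
# Assumes a flat marginal tax rate and ignores levies, offsets and carried-forward losses.
def cgt_on_sale(cost_base, sale_price, days_held, marginal_rate=0.37):
    gain = sale_price - cost_base
    if gain <= 0:
        return 0.0  # a capital loss is not taxed; it can only offset capital gains
    taxable_gain = gain * 0.5 if days_held > 365 else gain  # discount after more than a year
    return taxable_gain * marginal_rate

Under these assumptions, cgt_on_sale(20000, 30000, days_held=200) gives 3700.0 of tax (no discount), while cgt_on_sale(20000, 30000, days_held=400) gives 1850.0: the same $10,000 gain, roughly half the tax.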
A capital gain on the sale of an investment asset typically arises at the date of contract for the sale. Accordingly, when the sale occurs after June 30, tax may not be payable until the following year.
Finally, an investor can only use a realised capital loss against a capital gain; otherwise such losses are carried forward.
3. 'Wash sales'
Investors must be careful not to undertake "wash sale" arrangements under which they bring forward a capital loss by selling a listed security and then immediately repurchasing the same, or substantially the same, asset. Under this arrangement there is effectively no change in the economic exposure of the owner to the asset.
The Australian Taxation Office (ATO) seeks to apply tax-avoidance rules to "wash sale" arrangements and will deny the capital loss (or trading loss) to the investor. There are only limited circumstances in which such arrangements will be acceptable. For example, it may be possible for an investor to dispose of shares in one company and purchase shares in a competitor company that carries on a similar business, without attracting the ire of the ATO.
4. Dividends and franking credits
Investors often seek to use the franking credits distributed by companies paying dividends. However, where a company has declared a franked dividend, an investor may not be able to use any of the franking credits associated with the dividend if they sell the share parcel on which the dividends are paid within 45 days of acquisition. Individuals may qualify for the small-shareholder exemption from this holding rule if their total franking credits are less than $5000 in a tax year, or if they acquired the shares before May 13, 1997.
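As a rough illustration of how the 45-day holding rule and the small-shareholder exemption fit together, here is a hypothetical sketch. The function name and the simplified logic are assumptions for illustration only; the actual rules count "at risk" days in a particular way and carry further conditions not shown here.

# Hypothetical, simplified eligibility check for using franking credits on a parcel.
def can_use_franking_credits(days_held, total_franking_credits_for_year):
    if total_franking_credits_for_year < 5000:
        return True  # small-shareholder exemption applies
    return days_held > 45  # otherwise the holding-period rule applies

For example, can_use_franking_credits(30, 2000) returns True under the exemption, while can_use_franking_credits(30, 8000) returns False because the parcel was held for fewer than 45 days.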
5. Prepayment of expenses
For individuals not carrying on a business, the prepayment of expenditure for a period of up to 12 months may be deductible. This can include interest, internet fees, subscriptions to investment journals and other publications, seminars and training courses.
An investor who prepays interest is often induced to do so, on the basis of favourable lending terms, such as reduced interest rates or increased principal amounts. It is important to note that if prepayments are solely motivated by the tax advantages that can be obtained, tax-avoidance rules may apply.
6. Capital acquisitions
If you use a computer to maintain investment records relating to your investment or income-earning activities, the costs are potentially deductible. A capital acquisition used for income-producing activities costing more than $300 can be depreciated. Capital items costing less than $300 (such as calculators, printers or software) may be deductible in full when incurred by individual investors not also carrying on a business.
7. Trusts as investors
The information above focuses on an individual investor. Many of the tax principles mentioned also apply equally to trusts, but there are some key differences. There are different rules relating to prepayments, family trust election requirements for franked dividends, and distribution issues relating to capital gains and dividends (including streaming and ensuring eligibility for the 50 per cent CGT discount). Trustees of trusts must also conform with the trust deed, especially in any year-end income or capital distribution resolutions or determinations.
8. Other considerations
Although obtaining tax benefits or outcomes should never be the sole driver of an investment decision, they can be an important consideration. The above outline is not an exhaustive list of different tax issues that should be considered by individual investors before June 30.
Other tax matters for investors in listed securities include the deductibility of costs incurred in relation to capital-protected products, and the implications of company restructures including scrip-for-scrip rollovers, share buybacks and capital reductions.
Given the ever-growing and complex taxation landscape, it is wise to consult an accountant or tax professional for advice specific to your circumstances.
About the author
Ali Suleyman is Director, Tax Consulting, at Pitcher Partners.
From ASX
• What is a share?
• Why and how to invest
• Risks and benefits of shares
• What to consider in an investment
• How to buy and sell shares.
| <urn:uuid:acaba574-a816-46bc-9203-ae0a6bde9b10> | 2 | 1.726563 | 0.029107 | en | 0.94086 | http://www.asx.com.au/education/investor-update-newsletter/get-a-head-start-on-tax.htm |
Baggy's knot box
Want to learn a new knot? or learn a quick way of tying an old knot, well Baggy's Knot Box is the place to look.
The Angler's Loop
This is a slick and impressive way of tying a simple loop in the end of a line, which due to its nature is not too difficult to untie. Originally it was designed to be easily tied in mono-filament fishing line; in mono-filament the technique of throwing the end described here is not possible, so just pass the short end round instead. This method was taught to me by a member of the International Guild of Knot Tyers.
[anglers1.png] To start, hold the rope in both hands (the end in the right hand should be the shorter). Throw the right-hand end round the other rope, as illustrated.
[anglers2.png] Now twist your left hand through 180 degrees (unless you are double jointed it will only go one way! - towards you). You can combine these first two moves together.
[anglers3.png] Finally tuck the loop in your right hand (down) through the loop in your left hand.
The Fisherman's Knot
You may already have seen the fisherman's knot, which is a way of tying two ropes together securely. Unfortunately untying the knot usually requires the Swiss Army technique if load has been placed on the ropes.
Here is a novel way of tying the knot, which once learned leads to a fast and accurate way of tying the knot, and something to impress your SL! This method was taught to me by my current Group quartermaster!
[fish1.png] Start with the cord held between the first two fingers of each hand (see diagram), then hook (towards you) the left side of the bight with your right thumb, and then the right side with your left thumb.
[fish2.png] You should now be in the position illustrated in the diagram. Now tuck the two thumbs into the two triangles made by your first finger and the cord.
[fish3.png] Now here comes the tricky bit to explain: Twist your thumbs through 360 degrees, coming towards yourself at first.
[fish4.png] This should leave you with a half hitch around each of your thumbs and the cord. This bit requires a little bit of dexterity - pass the end of each cord through the half hitch, and then ease the two thumb knots together, and there you have a fisherman's knot.
Once you have mastered this, you should be able to get this down to about five seconds without great difficulty. It is both a faster and more reliable way (you can never get the thumb knots the wrong way round) of tying the Fisherman's Knot.
The Friendship Knot
This knot is often used by Scouts to secure their neckerchiefs instead of a woggle - either because they've lost it, or because they're worried they might. It can also be tied in the points of a necker (very small) if a woggle is being worn.
The linking of the two sides of the necker represents the bond of friendship - both sides of the knot collapse to nothing if they are disentangled.
First cross the two ends of the neckers
Then make a bight with the end that went over, and tuck it back under the other end
Now pass the other end behind the bight
Finally tuck the end down the bight
Finished friendship knot - looks smarter in a Halved necker, and if you fold the bights
©1995-2014 James Smith | <urn:uuid:ab522b1d-450b-4518-a327-a5ed6efcb0bf> | 2 | 2.15625 | 0.033103 | en | 0.942035 | http://www.baggy.me.uk/knots/ |
Distracted Driving…Why Risk It?
texting & driving
According to the National Highway Traffic Safety Administration (NHTSA), an estimated 421,000 people were injured in auto accidents involving a distracted driver in 2012, a jump of about 9% from the 387,000 people injured in 2011. Distracted driving is driving while another activity takes your attention away from the road, and it has been shown to increase the chance of motor vehicle collisions.
Here are 3 main types of driver distractions:
1. Manual: taking your hands off the wheel.
2. Visual: taking your eyes off the road.
3. Cognitive: taking your mind off of driving.
Activities like texting, cell phone use, applying makeup, and eating are just some examples of distractions that typically occur in the vehicle. Using in-vehicle technologies (e.g., navigation systems), changing radio stations, and inserting CDs into the dash are other examples of distraction.
We urge every driver to avoid cell phone use while driving.
Here are some tips for drivers with regard to cell phone use in vehicles.
1. Unless you have a hands-free device in your vehicle, wait until a trip is complete before placing a call. Allow your calls to go to voicemail while you are driving.
2. Keep your cell phone out of reach when driving to avoid the temptation of using it. "Out of sight, out of mind."
3. If an urgent call needs to be answered, pull into a parking lot to take it. It is not wise to pull over on the side of the road, where a rear-end collision is possible; accidents of this nature are occurring more and more frequently. If you have to make or take a call, take advantage of speed-dialing capabilities.
4. Never drive and talk on the phone during a stressful or emotional time. Do not have in-depth discussions as the risk of an accident is increased during such times.
5. Never text while driving. While any of these other distractions can endanger the driver and others, we emphasize that texting while driving is especially dangerous because it combines all three types of distractions!
Here is an info-graphic provided by Inthenation.com on cell phone use & distracted driver statistics:
We hope the information provided here is not only a useful resource but is actually put to use. If we're able to affect one person at a time, the submission of this article was well worth it. | <urn:uuid:f0f1c182-7d1c-4838-bc2a-6a4d4f4f4f08> | 3 | 2.953125 | 0.099564 | en | 0.89809 | http://www.bddinsurance.com/blog/ |
Ben Nadel
The School Of Practical Philosophy: Philosophy Works - Week Eight
By Ben Nadel on
Tags: Life
This class is so darn fun! If I had any complaint at all, it would simply be that the class isn't long enough. Sometimes, by 9:30PM, I feel like the conversation is only just starting to get juicy. Today, in fact, we definitely had to end the conversation in the middle of a thought - the conversation being an exploration of the question, "What am I?"
In the past, we've talked about emotions being nothing more than tools - informational markers that can be considered when making decisions. As such, it is easy to look at emotions and thoughts as being somehow separate from "me" - they are things that I do or have, but they are not "me." Who I am is the "observer."
While I like this dichotomy, I think that it has some very practical limitations. Viewing your thoughts as being separate constructs is definitely a powerful mindset because it provides you with a perspective in which you have a greater ability to work towards self-improvement and the attainment of wisdom. At a practical level, however, I think that trying to understand this divergence in any greater depth is a fruitless journey.
At the end of the day, we are nothing more than the sum of our actions. These actions (or inactions) are the physical incarnation of our conscious decisions. And so, while our being and our body might be separate, one only has meaning in the context of the other. While the physical underpinnings of the human body are endlessly fascinating, they lose (some) meaning without consciousness; and, while the mystery of consciousness continues to evade science, without the physical body, it lacks expression - and "wisdom," as we well know, is necessarily the expression and application of understanding.
As such, thinking about your observable self as being separate from "who you are" is certainly a great way to gather information; it is only through the tight coupling of mind and body, however, that one can have wisdom.
On a slightly different topic, someone in class raised a question about the difference between positive and negative emotions. When we think about emotions as being separate from "us," it is typically in the context of negative emotions. Positive emotions, though not subject to the same level of scrutiny, serve the exact same purpose. In response to the question, I talked about how movies often state that "love isn't enough" - Yes, movies are an amazing source of Truth.
Kevin Kline in Life as a House.
When I got home from class, I wanted to look up one such occurrence and found a lovely one in a great Kevin Kline movie, Life as a House:
You need to know what? Do I still love you? Absolutely. There's not a doubt in my mind that through all my anger... my ego, I was faithful in my love for you. From seventh grade on. That I made you doubt it, that I withheld it... that's the greatest mistake of a life full of mistakes. But the truth doesn't set us free, Robin. I can say it as many times as you can stand to hear it. And all that does, the only thing, is remind us that love isn't enough. Not even close.
It is not the love that defined this character - it was his actions. Positive or negative, emotions are only tools - bits of information that hopefully provide us with the means to make the right decisions. And those decisions only constitute wisdom when expressed in the physical world though physical actions.
Reader Comments
A few things came to mind as I read through this. As far as your comment that "'wisdom,' as we well know, is necessarily the expression and application of understanding": That sounds a bit like what Aristotle calls "phronesis", which is usually translated "practical wisdom" or sometimes "prudence". You can read a pretty good article about it in Wikipedia, but essentially phronesis is the ability to know what to do in order to achieve a desired purpose. But there's another kind of wisdom that Aristotle talks about, which he calls "sophia" (which is usually translated "wisdom"): the ability to understand why the world is what it is. This is not so much dependent on your actions, but only on observation and reflection.
As far as whether love is enough - depends how you define "love", doesn't it? I firmly believe that love is enough, and more: but my marriage would be in serious difficulty if I defined love as simply an emotion. It's really more like an attitude, or an approach to the world. As such, it can (and must) include emotions, but equally it can and must include actions of various sorts.
Just a couple scattered thoughts :-)
Wisdom is an interesting word. In class, we try to draw a distinction between information and wisdom. Specifically that one can have a load of information but still not be wise. As such, we've been looking at wisdom as the application of information and understanding (not simply the collection of it).
Of course, I could also be way off base as far as what I think the class is saying as well :D We definitely argue a bit of the definition of things all the time.
I agree that you can have a load of information and not be wise; but I'm not sure I'd say that you can have a load of understanding and not be wise - does that make sense? To have understanding, I believe, is to have a knowledge not only of something in itself but of how and why it fits into some larger scheme. That implies some level of understanding of the larger scheme; and this, I think, requires wisdom - even if not applied in any particular way.
I suppose what you are saying is true :) I think I just have a mental gap between having understanding and demonstrating understanding.
Speaking of movies, as I was reading this, I couldn't help but think about The Genius Club.
Wisdom is very interesting. You can be extremely intelligent, yet have little wisdom. You could have an I.Q. that is off the charts, yet lack wisdom. One of my mother's favorite things that she said to me as I was growing up and she was lecturing me on something I had obviously done wrong was, "You may be a lot smarter than me, but I have more wisdom..." lol
Just a humorous observation:
The first picture is a quote from scripture that says, "the Truth will set you free." The second picture is Kevin Kline saying to Robin, "But the truth doesn't set us free..."
Which one is it? If the truth will set us free, and there is a lot of truth in movies but Kevin says it doesn't set us free...INFINITE LOOP!!! :)
No, the truth doesn't set you free in movies because you still have to pay to see them.
I was going to come up with a good quote about truth too, but the only one that came to mind is Peter Schickele's "Truth is just truth; you can't have opinions about truth."
I have not heard of the Genius Club - I'll have to look it up.
Ha ha - I was hoping to cause a "too much recursion" error ;)
I have to admit, I have a "bad" habit of paying for women at the movies, regardless of whether or not I am in a relationship. Perhaps it's just the caveman in me.
For a more cynical view - seek the Truth and it will set you free. It will also get you killed by those who don't want it known.
@Ben I absolutely loved The Genius Club, though I didn't think it got much press. It's this movie which (without giving much of it away) has pretty much just good discussion throughout the whole movie, and an ending which is what you would think most people would want while watching the movie the whole time, but also that has an element of sadness to it as well, and it has you questioning yourself. (or at least it had me questioning myself). I love those types of movies. Of course, I may just have very unique tastes compared to other people, too.
And I have had many guys pay for my movies, even if we weren't in an actual relationship. I am of the opinion that the BEST relationships are the ones that start out as friends, because you have more of a foundation that way. And one of the best ways for someone like me to get on my good side and set the motion in the right direction is when a guy pays for my movie. I am not by any means poor, nor is it the case that I do not make my own money and can not pay for my own stuff. But I just happen to be very traditional, and although I CAN technically pay for my own stuff, there's something romantic about a guy paying for my stuff. I love it when a guy steps up to pay without even expecting anything in return. It's something about the whole feeling that the man is trying to take care of the woman, and that makes me feel very comfortable. On the same note, I could absolutely in most cases most likely take care of myself (including in a physical way), and sometimes even better than the guy I am with, because I have taken 5 different disciplines of Martial Arts and have years of practice, and know practical application of what I have learned. Nevertheless, if it ever came up, and a guy stepped in and took care of me, I would love that, and I love the idea of being with a guy who can, because no matter how capable a woman is of taking care of herself, there's something downright comfortable about being with a guy who can and will take care of you. :-)
So, maybe you're on the right track there, Ben, with some of those girls, and maybe one day one of them will turn into a long-term fulfilling relationship. And there are women who love the caveman type, so once you are ready and have time to more actively pursue a relationship, gravitate towards those. :-)
I don't think I would ever really call myself the caveman type; I'm definitely not any kind of alpha male or anything to that effect. But, I suppose there are just parts, deep down inside, that can't be stopped. I like to think of myself as a gentleman and conscious of my actions (or at least trying to get better at being conscious); but, I wouldn't even come close to calling myself a "stereotypical" male.
But, I do love the idea of taking care of the people that mean something to me. I wish I could do it more often.
What kind of martial arts did you study? That sounds wicked cool :)
Thanks! It was so interesting, and actually an intellectual pursuit as well as a physical one, because when you study martial arts, a HUGE part of it is the mind and body connection. Another huge part of it is respect and control. I have studied Kempo (sometimes spelled Kenpo -- Shorinji Rhu), Goshin Do, Tae Kwon Do, Hapkiddo, Brazilian Ju Jitsu, and MMA. I started out with Kempo, and that's the one I studied the most of. I love it.
That's cool...personally, I can't stand certain things about men who can be classified as 'type a' personality, and they have a lot in common with alpha males. I wouldn't want a guy who was an alpha male in every way, but I do like being taken care of, but not in the way you would take care of someone who was not capable of taking care of herself. There's something about someone doing something for you because they want to, and not because they have to.
And it's not like I would just expect a man to take care of me without taking care of him somewhat in return, it just wouldn't be in the same way necessarily, and not in the way a lot of people talk about a woman taking care of a man either, necessarily. | <urn:uuid:349aed83-a834-45dc-b23b-fad30596b3b8> | 2 | 2.015625 | 0.057931 | en | 0.978212 | http://www.bennadel.com/blog/2139-the-school-of-practical-philosophy-philosophy-works-week-eight.htm |
1) Decode the Causes of Diseases
When the first draft of the human genome was completed in 2000, it was hailed as a revolution in health care. A complete understanding of the 30,000 genes in the body, the estimated 50,000 proteins they encode, and the tiny chemical variations that make each of us different suddenly seemed within reach. Medical prognosticators declared that within a few years, we would unlock the secrets of the world's worst ills and figure out how to eradicate them. Medicine would become personalized: We would go to the doctor, have our genes screened, and get customized drug regimens that would keep us healthy throughout our long lives.
This vision wasn't flawed so much as drastically premature. The fact is, we still don't understand how most genes operate. We know they encode proteins involved in complex biochemical pathways that underlie most diseases. But so little is known about those proteins that the drugs on the market today target just 10% of them. "The genome has given us a wonderful parts list, but it's only the beginning," says Dr. Leroy Hood, president of the Institute for Systems Biology, a Seattle research group.
Hood and other scientists are championing a research approach designed to close the knowledge gap. They believe that instead of examining one gene at a time, scientists should strive to discover how the body's many different biological systems interact in an illness and affect our individual responses to drugs. Only that knowledge will lead to the goal of switching off diseases while avoiding toxic side effects.
Decoding complete disease pathways could have tremendous implications for drug research and marketing. Roger M. Perlmutter, Amgen's executive vice-president for R&D, points to the company's rheumatoid arthritis drug, Kineret, as an example. There is a small subset of patients with the disease who don't respond to any of the commonly prescribed remedies, but they do get better on Kineret. "We don't know who those people are," Perlmutter says. If Amgen could figure out which genes trigger the positive response to Kineret, it might be able to develop a test to identify those patients. That would allow Amgen to market Kineret more precisely, thus lowering costs and boosting sales and profit margins.
Genentech has experience with this model. Its breast-cancer drug, Herceptin, helps 25% of patients with the illness -- those who have too much of a protein expressed by a gene called Her2. Doctors can use one of two tests to identify women with the problem, and administer Genentech's drug. Last year, sales of Herceptin jumped 11% to $385 million.
The ability to examine all the body's systems in concert is still far off. But it is already possible to extract tips from the behavior of groups of genes. Psychiatric Genomics Inc. in Gaithersburg, Md., is studying manic depression, schizophrenia, and autism -- disorders in which multiple genes are switched on or off by a variety of factors that aren't yet understood. The company is building biochemical models of mental illnesses using diseased brain tissue, and is also looking for patterns in drug effectiveness by studying medical records of deceased patients. The goal is a new model of drug development that involves finding all the genes that change in the course of the disease, then identifying a drug that can restore the most critical genes to a normal pattern.
Companies pursuing this new, systems-based approach to research are finding that it requires a paradigm shift. To build complete models of diseases, companies must foster constant collaboration among chemists, biologists, physicists, mathematicians, and computer engineers. "Everyone used to be in their own silos," says Psychiatric Genomics CEO Richard E. Chipkin. "That doesn't work anymore. Teamwork is critical." Adds J. Craig Venter, chairman of the Institute for Genomic Research and one of the pioneers of mapping the human genome: "We need to take a far more sophisticated approach and pool our resources to gain a full understanding of disease."
| <urn:uuid:bb70683d-2492-411d-a9cc-e0add29a1b10> | 2 | 2.265625 | 0.026247 | en | 0.946542 | http://www.bloomberg.com/bw/stories/2003-06-01/1-decode-the-causes-of-diseases |
War and Peace Short Essay Assignments
1. Prince Vasili is described in the beginning at Anna Pavlovna's soiree. What is Prince Vasili's most notable characteristic at this point in the story.
2. What are the obvious differences between Prince Andrew and Pierre that is illustrated at the start of the story?
3. Why do the three princesses living at Count Bezukhov's treat Pierre so badly?
This section contains 4,593 words
(approx. 16 pages at 300 words per page)
| <urn:uuid:b1108a95-66f6-4a1a-8d4c-2d2d2d2d2d2d> | 4 | 3.578125 | 0.096868 | en | 0.908547 | http://www.bookrags.com/lessonplan/warpeacevoinaimir/shortessay.html |
Frogs Background Information for Teachers and Parents
Grade Levels: K-3
This page contains information to support educators and families in teaching K-3 students about amphibians, tadpoles, and frogs. The information is designed to complement the BrainPOP Jr. movie Frogs. It explains the type of content covered in the movie, provides ideas for how teachers and parents can develop related understandings, and suggests how other BrainPOP Jr. resources can be used to scaffold and extend student learning.
Frogs can be found practically everywhere, from urban parks to woodland forests. They can even be found in some deserts! This movie will explain the life cycle of a frog and explore some reasons why frogs’ populations are changing. We encourage you to learn about the frogs in your community as a way to extend and apply the material introduced in the topic. Before exploring frogs, you may wish to screen part of the Classifying Animals movie, which introduces vertebrates and shares some information about amphibians. You might also be interested in the movie Camouflage, because frogs use both camouflage and mimicry to avoid predators.
Remind children that frogs are vertebrates, which means they have a spine, or backbone. Frogs are also amphibians, which means they have adapted to live both in water and on land. Remind children that amphibians are cold-blooded, which means they rely on their environment to control their body temperatures. They warm up on land and cool down in the water or mud. Frogs, toads, salamanders, newts, and caecilians are all types of amphibians.
Children may ask about the differences between frogs and toads. Both are in the same family, so technically toads are frogs. In general, people say that toads are fatter and squatter than frogs and have shorter legs. Frogs have bulging eyes, while the eyes in toads are more deeply set. Toads have rougher, drier skin with “warts,” while frogs have smooth moist skin. (It may be important to note that toads’ “warts” cannot transfer to people, despite what urban legends might say.) These general differences between frogs and toads may be applied to those found in North America, but not necessarily to species that live in habitats closer to the Equator. There the physical distinctions between frogs and toads are far more subtle, or may even be reversed, with toads being damp-skinned and frogs having warts.
Remind children that adult frogs breathe through lungs, just like people. However, many frogs can breathe and drink water through their skin as well. Frogs have strong legs that help them jump and hop great distances relative to their body lengths. Many species of tree frogs have sticky feet and grasping toes to help them cling to leaves and branches. Some species have webbed feet, which help them swim. Flying or gliding frogs have large webbed feet to help them maneuver between trees. Most frogs have long, sticky tongues that help them snatch up insects and other prey. Remind children that prey is an animal that is eaten by other animals.
Where do frogs live? Ask children where they may have seen frogs. Frogs can only live in freshwater habitats, so they are not found in saltwater oceans and seas. Many frogs live near rivers, lakes, streams, and ponds, but they can also be found in jungles, rainforests, woodland forests, and even deserts. The Trilling frog lives in the deserts of Australia and burrows deep underground awaiting rain. It can spend months of its life underground. Frogs in colder, snowy areas may hibernate underground through the winter.
Remind children that living things have ways to survive in their environments. Review that a predator is an animal that eats another animal. Frogs have many predators, including snakes, birds, raccoons, fox, and even other frogs. Frogs therefore have many adaptations to stay safe and ward off predators. They can croak loudly to communicate with each other and scare off enemies. The sacs in their throat act like a drum to help magnify the sound. Many frogs, such as the laughing tree frog, use camouflage to blend in with their surroundings. Other frogs, like the fire-bellied toad, flash bright colors to scare off predators. Some of the most poisonous animals on the planet are the brightly colored dart frogs. Some frogs have special glands that emit bitter toxins that make them unpalatable, or urinate on themselves to ward off would-be predators. Many frogs, such as the tomato frog, can puff their bodies up to appear larger—too large to swallow whole.
Review with children that a life cycle shows how a living thing grows and changes. Female frogs lay tiny, soft eggs in the water. Tadpoles hatch from the eggs, and they look very different from adult frogs. Tadpoles have long bodies and tails and live exclusively underwater, breathing through gills. As they develop, they grow back legs and then their front legs. They develop lungs and internal organs and eventually lose their tails. At this point, they are frogs and can live on land. When they become adult frogs, they can mate and start the life cycle again.
Today, many species of frogs are being threatened due to human activity. Some frogs are losing their habitats due to deforestation and wetland destruction. Pollution and pesticides are also a threat to many frog species. Since many frogs breathe through their skin, they are incredibly sensitive to toxins in the environment. In addition, invasive species are threatening frogs around the planet. For example, people have introduced trout to many freshwater rivers and streams. The trout have been eating eggs and tadpoles, harming frog populations. Bullfrogs are common in North America, but have been introduced to other countries as far away as Italy and Venezuela. The large bullfrogs often compete for food with smaller, native species of frog. As a result, bullfrog populations are exploding around the globe, taking over entire habitats. Climate change is affecting frogs as well. Rising temperatures are contributing to the spread of Chytrid fungus, a deadly fungus that infects frogs through their skins and is decimating frog populations.
Help children understand that frogs play an important role in many food chains and food webs. Many animals rely on frogs for food and if frog species are threatened or become endangered or extinct, the entire food chain can be affected. Help children understand that they should protect the environment and think about how their actions can affect living things around them. Encourage them to protect habitats in their community and be good global citizens. | <urn:uuid:91e00814-aa67-40aa-a728-4e24e12f31b8> | 4 | 3.984375 | 0.497677 | en | 0.961147 | http://www.brainpop.com/educators/community/lesson-plan/frogs-background-information-for-teachers-and-parents/ |
Blogs review: Bold ideas for the eurozone from economic history
by Jérémie Cohen-Setton on 26th April 2013
What’s at stake: In this review, I present an eclectic set of proposals and analyses that have been put forward by economic historians to reform the functioning the eurozone in a big way. The first category of proposals discuss ways though which monetary policy could be differentiated across different countries within the monetary union. The second category of analyses challenges the now conventional view that a monetary union necessarily requires some form of fiscal, banking and/or political union.
Making monetary policy more flexible
Markus Brunnermeier writes that the ECB could optimize its currency area by using “regional tools” that affect the regional credit and term spreads. Unconventional monetary policy allows central banks to influence term and credit spreads directly by buying or selling long-term risky assets. But the ECB could also use its haircut policy to lean against regional imbalances. Using haircuts to lean against regional imbalances is in sharp contrast to the ECB’s current policy. Currently, the ECB uses collateral and haircut policy purely as a risk management tool, i.e., to minimize potential losses from lending against certain assets. Furthermore, there is a tendency to treat all member countries the same and avoid any differentiation. This makes all spreads more uniform across the membership countries – the opposite effect of what a targeted active policy that leans against regional imbalances would prescribe.
Harold James also argues that different interest rates in different countries might open the door to a more stable eurozone, but notes that different policy rates might even be possible. When the EC Committee of Central Bank Governors began to draft the ECB statute, it took the indivisibility and centralization of monetary policy as given. But it was not really justified either historically or in terms of economic fundamentals. The history of the gold standard, and of other large common-currency areas show that despite the theoretical possibility of capital being sent over vast distances to other parts of the world, much capital remained local, making the differentiation of interest rates possible. In the early history of the Federal Reserve System, individual Reserve Banks set their own discount rates. In smooth or normal times, the rates tended to converge. But in times of shocks, they could move apart. The Eurozone is now moving to a modern equivalent as bank collateral requirements are being differentiated in different areas. This represents a remarkable incipient innovation.
Federal reserve discount rates 1914-1939
Source: Harold James
A common currency does not mean a single currency
Harold James notes that one of the possibilities raised in the discussions on monetary union in the early 1990s was that there might be a common currency but not necessarily a single currency. Keeping the Euro for all members of the Eurozone but also allowing some of them (in principle all of them) to issue national currencies would be the modern equivalent to the band widening of 1993. The countries that do so would find their new currencies immediately trading at what would probably be a heavy discount. California recently adopted a similar approach, issuing IOUs when faced with the impossibility of access to funding. Such a course would not require the redenomination of bank assets or liabilities, and hence would not be subject to the multiple legal challenges that a more radical alternative would encounter.
Harold James writes that such a state of affairs is not just a theoretical construct in fringe debates in the early 1990s, but a real historical alternative. There is in fact a rather surprising parallel for such a stable coexistence of two currencies over a surprisingly long period of time. Before the victory of the gold standard in the 1870s, Europe operated with a bimetallic standard for centuries, not only gold but also silver. One trick that made this regime so successful was that the coins were used for different purposes. High value gold coins were used as a reference for large value transactions and for international business. Low value silver coins were used for small day to day transactions, for the payment of modest wages and rents. A depreciation of silver relative to gold in this system would bring down real wages and improve competitiveness. In the modern setting, the equivalent of the adjustment mechanism in the early modern world of bimetallism would be a fall in Greek (or other crisis country) wage costs as the wages were paid in the national currency, as long as it was traded at a discount. These would be the equivalent of silver currencies. Meanwhile, the Euro would be the equivalent of the gold standard. It would be kept stable by the institutions which already exist today, the ECB and the ECSB of those national central banks who have no new alternative.
Hugh Rockoff argues that the case of the American West in the 19th century suggests that dividing the Eurozone into two currency zones is possible. From the outbreak of the Civil War until 1879 the West remained on the gold standard while the East was on the greenback standard. National banks in the West issued "goldbacks" redeemable in gold. The exchange rate between greenbacks and goldbacks fluctuated. A curiosum, but perhaps one that suggests that dividing the Eurozone into two currency zones is possible.
A monetary union without a fiscal/political union
Simon Wren-Lewis writes that the view that the Eurozone will have to move to fiscal union, which implies some form of political union, seems to be a very common view at the moment. Those working in the political unions that are the US or the UK know that combined monetary and fiscal unions can work. From this perspective, the monetary-only union of the Eurozone was a largely untried experiment, and it appears to be failing. Within the Eurozone itself, there has always been a powerful lobby for further integration. It is therefore not surprising that actors like the Commission see further integration as the longer-term solution to the Eurozone's problems.
Harold James writes that the idea that Europeans simply need a country because they happen to have a currency reflects a misunderstanding about the reasons politicians embarked on the economic and monetary union of Europe.
Simon Wren-Lewis writes that we should be very cautious about making generalizations from a single observation. The Eurozone has not been a fair test of monetary union without fiscal union since poor policies were also put in place at the same time. 1) No attempt was made to use fiscal policy to offset overheating in periphery countries. 2) Instead of recognizing the need for default early on, the union made a futile attempt to avoid it by replacing private debt with intergovernmental lending. 3) The fiscal position of Eurozone economies became critical because the ECB refused to act as a lender of last resort. 4) The current double dip recession in the Eurozone is largely about a collective failure of fiscal and monetary policy.
Benjamin Cohen writes that history suggests that political union is not necessary for the longevity of the euro area. In the modern era (19th century onward), there are at least seven notable examples – other than EMU – of formal monetary unions without political union: The Latin Monetary Union, the Scandinavian Monetary Union, the Belgium-Luxembourg Economic Union, the CFA Franc Zone, the East African Community, the East Caribbean Currency Area, and the West African Monetary Area. Two of the seven (CFA, ECCA) remain in existence to the present day; a third (BLEU) existed for three-quarters of a century until incorporated into the larger EMU; and two others (LMU, SMU) managed to survive for more than a half century until brought to an end by World War I.
Randall Henning writes that a couple of observations emerge from the history of the US fiscal rulemaking that seem especially relevant to the EU now.
1. Even though the debt brakes of the fiscal compact are introduced into national constitutions and framework laws, the process has been initiated by the center and in some cases under duress. In the United States, rules were adopted autonomously by the states. This has implications for domestic political “ownership”.
2. Community institutions play a leading role in enforcing the rules, whereas the U.S. federal government has no such role. In fact, the U.S. federal government cannot legislate fiscal rules for the states; this would be an unconstitutional infringement on "state sovereignty." The U.S. model of fiscal rectitude for the states rests on multiple layers of rules combined with the no-bailout norm.
Germany: the missing hegemon
Benjamin Cohen writes that experience suggests that in the absence of political union, a local hegemony or solidarity are necessary to keep a monetary union functioning reasonably well; where both conditions are present, they are sufficient. The importance of a local hegemon was well demonstrated by BLEU (Belgium, twenty times the size of Luxembourg, called the shots). The importance of solidarity is evident in the longevity of SMU, BLEU, ECCA. All three involved groups of partners with a strong sense of common identity, grounded in a shared cultural and political background and institutionalized in a broad network of related economic and political agreements.
Brad DeLong writes that the Kindlebergian perspective would lead one to think that the problem of Europe today is that Germany does not want to assume the burden, or assume the role, or is not wanted by the rest of Europe to assume the explicit role on terms that Germany wishes to exercise it. Brad DeLong and Barry Eichengreen write the German Federal government has room for countercyclical fiscal policy. It could encourage the European Central Bank to make more active use of monetary policy. It could fund a Marshall Plan for Greece and signal a willingness to assume joint responsibility, along with its EU partners, for some fraction of their collective debt. But Germany still thinks of itself as the steward in a small open economy.
| <urn:uuid:de357865-81ad-420c-b323-bdc79c4c4742> | 2 | 1.6875 | 0.12569 | en | 0.958692 | http://www.bruegel.org/nc/blog/detail/article/1077-blogs-review-bold-ideas-for-the-eurozone-from-economic-history/ |